Nov 22 10:37:57 crc systemd[1]: Starting Kubernetes Kubelet...
Nov 22 10:37:57 crc restorecon[4741]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 22 10:37:57 crc restorecon[4741]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 22 10:37:58 crc restorecon[4741]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 22 10:37:58 crc restorecon[4741]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 22 10:37:58 crc restorecon[4741]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 22 10:37:58 crc restorecon[4741]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 22 10:37:58 crc restorecon[4741]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 22 10:37:58 crc restorecon[4741]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 22 10:37:58 crc restorecon[4741]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 22 10:37:58 crc restorecon[4741]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 10:37:58 crc restorecon[4741]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 10:37:58 crc restorecon[4741]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 22 10:37:58 crc restorecon[4741]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:58 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 
10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 10:37:59 crc 
restorecon[4741]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 10:37:59 crc restorecon[4741]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Nov 22 10:37:59 crc restorecon[4741]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 22 10:37:59 crc restorecon[4741]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 22 10:37:59 crc restorecon[4741]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 
10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 
10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc 
restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 10:37:59 crc restorecon[4741]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 22 10:37:59 crc restorecon[4741]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 22 10:37:59 crc restorecon[4741]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Nov 22 10:38:00 crc kubenswrapper[4772]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 22 10:38:00 crc kubenswrapper[4772]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Nov 22 10:38:00 crc kubenswrapper[4772]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 22 10:38:00 crc kubenswrapper[4772]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 22 10:38:00 crc kubenswrapper[4772]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 22 10:38:00 crc kubenswrapper[4772]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.900136 4772 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.904393 4772 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.904409 4772 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.904415 4772 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.904419 4772 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.904422 4772 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.904426 4772 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.904431 4772 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.904435 4772 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.904440 4772 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.904445 4772 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.904449 4772 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.904453 4772 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.904458 4772 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.904462 4772 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.904465 4772 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.904469 4772 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.904473 4772 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.904476 4772 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.904480 4772 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.904483 4772 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.904487 4772 feature_gate.go:330] unrecognized 
feature gate: MultiArchInstallAWS Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.904490 4772 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.904495 4772 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.904499 4772 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.904503 4772 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.904506 4772 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.904510 4772 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.904513 4772 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.904517 4772 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.904520 4772 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.904524 4772 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.904528 4772 feature_gate.go:330] unrecognized feature gate: Example Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.904531 4772 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.904535 4772 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.904539 4772 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.904542 4772 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.904546 4772 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.904549 4772 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.904553 4772 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.904558 4772 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.904562 4772 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.904567 4772 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.904572 4772 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.904576 4772 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.904580 4772 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.904584 4772 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.904589 4772 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.904592 4772 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.904596 4772 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.904601 4772 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.904605 4772 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.904609 4772 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.904612 4772 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.904616 4772 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.904620 4772 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.904623 4772 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.904627 4772 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.904630 4772 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.904633 4772 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.904637 4772 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.904640 4772 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.904644 4772 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.904647 4772 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.904650 4772 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.904655 4772 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.904658 4772 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.904662 4772 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 22 10:38:00 
crc kubenswrapper[4772]: W1122 10:38:00.904666 4772 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.904670 4772 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.904674 4772 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.904678 4772 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.904750 4772 flags.go:64] FLAG: --address="0.0.0.0" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.904758 4772 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.904764 4772 flags.go:64] FLAG: --anonymous-auth="true" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.904771 4772 flags.go:64] FLAG: --application-metrics-count-limit="100" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.904777 4772 flags.go:64] FLAG: --authentication-token-webhook="false" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.904781 4772 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.904789 4772 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.904794 4772 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.904798 4772 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.904803 4772 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.904807 4772 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.904811 4772 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.904815 4772 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.904819 4772 flags.go:64] FLAG: --cgroup-root="" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.904824 4772 flags.go:64] FLAG: --cgroups-per-qos="true" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.904828 4772 flags.go:64] FLAG: --client-ca-file="" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.904832 4772 flags.go:64] FLAG: --cloud-config="" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.904836 4772 flags.go:64] FLAG: --cloud-provider="" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.904840 4772 flags.go:64] FLAG: --cluster-dns="[]" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.904845 4772 flags.go:64] FLAG: --cluster-domain="" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.904849 4772 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.904853 4772 flags.go:64] FLAG: --config-dir="" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.904857 4772 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.904862 4772 flags.go:64] FLAG: --container-log-max-files="5" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.904867 4772 flags.go:64] FLAG: 
--container-log-max-size="10Mi" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.904871 4772 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.904875 4772 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.904880 4772 flags.go:64] FLAG: --containerd-namespace="k8s.io" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.904884 4772 flags.go:64] FLAG: --contention-profiling="false" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.904888 4772 flags.go:64] FLAG: --cpu-cfs-quota="true" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.904892 4772 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.904897 4772 flags.go:64] FLAG: --cpu-manager-policy="none" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.904901 4772 flags.go:64] FLAG: --cpu-manager-policy-options="" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.904906 4772 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.904911 4772 flags.go:64] FLAG: --enable-controller-attach-detach="true" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.904915 4772 flags.go:64] FLAG: --enable-debugging-handlers="true" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.904919 4772 flags.go:64] FLAG: --enable-load-reader="false" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.904923 4772 flags.go:64] FLAG: --enable-server="true" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.904927 4772 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.904932 4772 flags.go:64] FLAG: --event-burst="100" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.904936 4772 flags.go:64] FLAG: --event-qps="50" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.904941 4772 flags.go:64] FLAG: --event-storage-age-limit="default=0" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.904945 4772 flags.go:64] FLAG: --event-storage-event-limit="default=0" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.904949 4772 flags.go:64] FLAG: --eviction-hard="" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.904954 4772 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.904958 4772 flags.go:64] FLAG: --eviction-minimum-reclaim="" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.904962 4772 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.904966 4772 flags.go:64] FLAG: --eviction-soft="" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.904970 4772 flags.go:64] FLAG: --eviction-soft-grace-period="" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.904974 4772 flags.go:64] FLAG: --exit-on-lock-contention="false" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.904978 4772 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.904982 4772 flags.go:64] FLAG: --experimental-mounter-path="" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.904986 4772 flags.go:64] FLAG: --fail-cgroupv1="false" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.904990 4772 flags.go:64] FLAG: --fail-swap-on="true" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 
10:38:00.904994 4772 flags.go:64] FLAG: --feature-gates="" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905032 4772 flags.go:64] FLAG: --file-check-frequency="20s" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905037 4772 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905041 4772 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905061 4772 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905067 4772 flags.go:64] FLAG: --healthz-port="10248" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905071 4772 flags.go:64] FLAG: --help="false" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905075 4772 flags.go:64] FLAG: --hostname-override="" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905079 4772 flags.go:64] FLAG: --housekeeping-interval="10s" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905083 4772 flags.go:64] FLAG: --http-check-frequency="20s" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905088 4772 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905093 4772 flags.go:64] FLAG: --image-credential-provider-config="" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905097 4772 flags.go:64] FLAG: --image-gc-high-threshold="85" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905103 4772 flags.go:64] FLAG: --image-gc-low-threshold="80" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905107 4772 flags.go:64] FLAG: --image-service-endpoint="" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905111 4772 flags.go:64] FLAG: --kernel-memcg-notification="false" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905116 4772 flags.go:64] FLAG: --kube-api-burst="100" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905121 4772 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905126 4772 flags.go:64] FLAG: --kube-api-qps="50" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905130 4772 flags.go:64] FLAG: --kube-reserved="" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905134 4772 flags.go:64] FLAG: --kube-reserved-cgroup="" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905138 4772 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905142 4772 flags.go:64] FLAG: --kubelet-cgroups="" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905147 4772 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905151 4772 flags.go:64] FLAG: --lock-file="" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905154 4772 flags.go:64] FLAG: --log-cadvisor-usage="false" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905158 4772 flags.go:64] FLAG: --log-flush-frequency="5s" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905162 4772 flags.go:64] FLAG: --log-json-info-buffer-size="0" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905168 4772 flags.go:64] FLAG: --log-json-split-stream="false" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905172 4772 flags.go:64] FLAG: --log-text-info-buffer-size="0" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905176 4772 flags.go:64] FLAG: 
--log-text-split-stream="false" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905180 4772 flags.go:64] FLAG: --logging-format="text" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905184 4772 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905189 4772 flags.go:64] FLAG: --make-iptables-util-chains="true" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905193 4772 flags.go:64] FLAG: --manifest-url="" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905197 4772 flags.go:64] FLAG: --manifest-url-header="" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905202 4772 flags.go:64] FLAG: --max-housekeeping-interval="15s" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905206 4772 flags.go:64] FLAG: --max-open-files="1000000" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905218 4772 flags.go:64] FLAG: --max-pods="110" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905223 4772 flags.go:64] FLAG: --maximum-dead-containers="-1" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905227 4772 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905231 4772 flags.go:64] FLAG: --memory-manager-policy="None" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905235 4772 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905239 4772 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905243 4772 flags.go:64] FLAG: --node-ip="192.168.126.11" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905248 4772 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905256 4772 flags.go:64] FLAG: --node-status-max-images="50" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905260 4772 flags.go:64] FLAG: --node-status-update-frequency="10s" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905265 4772 flags.go:64] FLAG: --oom-score-adj="-999" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905270 4772 flags.go:64] FLAG: --pod-cidr="" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905274 4772 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905281 4772 flags.go:64] FLAG: --pod-manifest-path="" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905285 4772 flags.go:64] FLAG: --pod-max-pids="-1" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905289 4772 flags.go:64] FLAG: --pods-per-core="0" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905293 4772 flags.go:64] FLAG: --port="10250" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905297 4772 flags.go:64] FLAG: --protect-kernel-defaults="false" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905301 4772 flags.go:64] FLAG: --provider-id="" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905305 4772 flags.go:64] FLAG: --qos-reserved="" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905309 4772 flags.go:64] FLAG: --read-only-port="10255" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905313 4772 flags.go:64] FLAG: 
--register-node="true" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905317 4772 flags.go:64] FLAG: --register-schedulable="true" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905321 4772 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905328 4772 flags.go:64] FLAG: --registry-burst="10" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905331 4772 flags.go:64] FLAG: --registry-qps="5" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905336 4772 flags.go:64] FLAG: --reserved-cpus="" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905340 4772 flags.go:64] FLAG: --reserved-memory="" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905345 4772 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905349 4772 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905353 4772 flags.go:64] FLAG: --rotate-certificates="false" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905357 4772 flags.go:64] FLAG: --rotate-server-certificates="false" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905361 4772 flags.go:64] FLAG: --runonce="false" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905365 4772 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905369 4772 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905374 4772 flags.go:64] FLAG: --seccomp-default="false" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905377 4772 flags.go:64] FLAG: --serialize-image-pulls="true" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905382 4772 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905386 4772 flags.go:64] FLAG: --storage-driver-db="cadvisor" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905390 4772 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905394 4772 flags.go:64] FLAG: --storage-driver-password="root" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905398 4772 flags.go:64] FLAG: --storage-driver-secure="false" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905403 4772 flags.go:64] FLAG: --storage-driver-table="stats" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905407 4772 flags.go:64] FLAG: --storage-driver-user="root" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905411 4772 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905420 4772 flags.go:64] FLAG: --sync-frequency="1m0s" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905424 4772 flags.go:64] FLAG: --system-cgroups="" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905429 4772 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905435 4772 flags.go:64] FLAG: --system-reserved-cgroup="" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905439 4772 flags.go:64] FLAG: --tls-cert-file="" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905443 4772 flags.go:64] FLAG: --tls-cipher-suites="[]" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905448 4772 flags.go:64] 
FLAG: --tls-min-version="" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905452 4772 flags.go:64] FLAG: --tls-private-key-file="" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905456 4772 flags.go:64] FLAG: --topology-manager-policy="none" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905460 4772 flags.go:64] FLAG: --topology-manager-policy-options="" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905464 4772 flags.go:64] FLAG: --topology-manager-scope="container" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905468 4772 flags.go:64] FLAG: --v="2" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905473 4772 flags.go:64] FLAG: --version="false" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905478 4772 flags.go:64] FLAG: --vmodule="" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905483 4772 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905487 4772 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.905579 4772 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.905583 4772 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.905588 4772 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.905592 4772 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.905596 4772 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.905599 4772 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.905603 4772 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.905607 4772 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.905610 4772 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.905614 4772 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.905617 4772 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.905620 4772 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.905624 4772 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.905627 4772 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.905636 4772 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.905640 4772 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.905646 4772 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.905649 4772 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.905653 4772 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.905656 4772 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.905660 4772 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.905663 4772 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.905667 4772 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.905725 4772 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.905729 4772 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.905733 4772 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.905737 4772 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.905740 4772 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.905744 4772 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.905747 4772 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.905751 4772 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.905754 4772 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.905758 4772 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.905763 4772 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.905767 4772 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.905771 4772 feature_gate.go:330] unrecognized feature gate: Example Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.905774 4772 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.905778 4772 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.905782 4772 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.905786 4772 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.905790 4772 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.905794 4772 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.905798 4772 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.905802 4772 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.905805 4772 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.905809 4772 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.905817 4772 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.905822 4772 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.905827 4772 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.905831 4772 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.905835 4772 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.905838 4772 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.905842 4772 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.905845 4772 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.905849 4772 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.905852 4772 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.905856 4772 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.905859 4772 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.905863 4772 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.905866 4772 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.905870 4772 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.905874 4772 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.905877 4772 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.905880 4772 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.905884 4772 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.905888 4772 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.905893 4772 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.905896 4772 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.905900 4772 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.905904 4772 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.905908 4772 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.905919 4772 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.932130 4772 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.932187 4772 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.932318 4772 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.932332 4772 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.932341 4772 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.932351 4772 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.932360 4772 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.932368 4772 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.932376 4772 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.932384 4772 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.932393 4772 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.932401 4772 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.932409 4772 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.932417 4772 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.932425 4772 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.932434 4772 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.932442 4772 feature_gate.go:330] unrecognized feature 
gate: VSphereMultiVCenters Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.932450 4772 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.932458 4772 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.932465 4772 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.932473 4772 feature_gate.go:330] unrecognized feature gate: Example Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.932481 4772 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.932489 4772 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.932497 4772 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.932505 4772 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.932512 4772 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.932520 4772 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.932528 4772 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.932535 4772 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.932543 4772 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.932551 4772 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.932560 4772 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.932568 4772 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.932575 4772 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.932583 4772 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.932591 4772 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.932599 4772 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.932609 4772 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.932622 4772 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.932631 4772 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.932641 4772 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.932651 4772 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.932665 4772 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.932679 4772 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.932689 4772 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.932700 4772 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.932709 4772 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.932720 4772 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.932729 4772 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.932739 4772 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.932749 4772 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.932759 4772 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.932768 4772 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.932776 4772 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.932784 4772 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.932792 4772 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.932800 4772 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.932809 4772 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.932817 4772 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.932825 4772 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.932839 4772 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.932849 4772 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.932857 4772 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.932865 4772 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.932873 4772 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.932880 4772 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.932888 4772 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.932895 4772 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.932904 4772 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.932911 4772 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.932918 4772 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.932927 4772 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.932934 4772 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.932947 4772 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.933271 4772 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.933289 4772 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.933299 4772 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.933309 4772 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.933317 4772 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.933325 4772 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.933333 4772 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.933342 4772 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.933349 4772 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.933357 4772 feature_gate.go:330] 
unrecognized feature gate: OnClusterBuild Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.933365 4772 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.933373 4772 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.933384 4772 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.933393 4772 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.933402 4772 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.933412 4772 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.933421 4772 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.933429 4772 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.933437 4772 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.933446 4772 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.933454 4772 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.933462 4772 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.933470 4772 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.933478 4772 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.933485 4772 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.933494 4772 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.933502 4772 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.933510 4772 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.933517 4772 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.933525 4772 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.933533 4772 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.933541 4772 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.933551 4772 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.933560 4772 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.933569 4772 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.933577 4772 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.933585 4772 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.933597 4772 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.933607 4772 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.933616 4772 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.933625 4772 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.933634 4772 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.933642 4772 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.933650 4772 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.933658 4772 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.933666 4772 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.933675 4772 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.933683 4772 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.933691 4772 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.933699 4772 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.933706 4772 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.933714 4772 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.933722 4772 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.933729 4772 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.933737 4772 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.933745 4772 feature_gate.go:330] unrecognized feature gate: Example Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.933753 4772 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.933761 4772 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.933769 4772 feature_gate.go:330] unrecognized feature 
gate: InsightsRuntimeExtractor Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.933777 4772 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.933784 4772 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.933792 4772 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.933800 4772 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.933807 4772 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.933815 4772 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.933823 4772 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.933833 4772 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.933843 4772 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.933852 4772 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.933860 4772 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 22 10:38:00 crc kubenswrapper[4772]: W1122 10:38:00.933868 4772 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.933880 4772 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.934987 4772 server.go:940] "Client rotation is on, will bootstrap in background" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.941437 4772 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.941568 4772 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.943575 4772 server.go:997] "Starting client certificate rotation" Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.943628 4772 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.943806 4772 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-14 18:02:47.388370844 +0000 UTC Nov 22 10:38:00 crc kubenswrapper[4772]: I1122 10:38:00.943900 4772 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.071998 4772 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 22 10:38:01 crc kubenswrapper[4772]: E1122 10:38:01.076299 4772 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.080712 4772 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.183330 4772 log.go:25] "Validated CRI v1 runtime API" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.268560 4772 log.go:25] "Validated CRI v1 image API" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.279084 4772 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.286154 4772 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-11-22-10-33-07-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.286212 4772 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.314837 4772 manager.go:217] Machine: {Timestamp:2025-11-22 10:38:01.311245257 +0000 UTC m=+1.550689781 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:856a4d77-e4e0-4420-9e80-7c5223144311 BootID:902c1d4e-ffb4-4b1b-add7-8f7170ab4bc7 Filesystems:[{Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 
Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:1c:41:ca Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:1c:41:ca Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:be:0d:55 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:a5:6e:a3 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:ba:96:d1 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:57:cd:e6 Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:be:9a:24 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:6e:2e:9b:cd:d5:4f Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:06:6e:db:9d:79:46 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified 
Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.315140 4772 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.315318 4772 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.315703 4772 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.315889 4772 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.315925 4772 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.316219 4772 topology_manager.go:138] "Creating topology manager with none policy" Nov 22 
10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.316233 4772 container_manager_linux.go:303] "Creating device plugin manager" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.317513 4772 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.317549 4772 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.317733 4772 state_mem.go:36] "Initialized new in-memory state store" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.317833 4772 server.go:1245] "Using root directory" path="/var/lib/kubelet" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.325693 4772 kubelet.go:418] "Attempting to sync node with API server" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.325728 4772 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.325758 4772 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.325775 4772 kubelet.go:324] "Adding apiserver pod source" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.325788 4772 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.335853 4772 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.337281 4772 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.338750 4772 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 22 10:38:01 crc kubenswrapper[4772]: W1122 10:38:01.350427 4772 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Nov 22 10:38:01 crc kubenswrapper[4772]: E1122 10:38:01.350530 4772 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" Nov 22 10:38:01 crc kubenswrapper[4772]: W1122 10:38:01.350436 4772 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Nov 22 10:38:01 crc kubenswrapper[4772]: E1122 10:38:01.350597 4772 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.350932 4772 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.350967 4772 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.350978 4772 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.350988 4772 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.351003 4772 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.351015 4772 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.351030 4772 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.351083 4772 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.351098 4772 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.351109 4772 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.351124 4772 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.351133 4772 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.351160 4772 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 
10:38:01.351712 4772 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.351793 4772 server.go:1280] "Started kubelet" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.351865 4772 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.353343 4772 server.go:460] "Adding debug handlers to kubelet server" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.353452 4772 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 22 10:38:01 crc systemd[1]: Started Kubernetes Kubelet. Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.354122 4772 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.355715 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.355759 4772 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.355953 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 14:51:40.170003542 +0000 UTC Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.356000 4772 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 676h13m38.814006093s for next certificate rotation Nov 22 10:38:01 crc kubenswrapper[4772]: E1122 10:38:01.356138 4772 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.356151 4772 volume_manager.go:287] "The desired_state_of_world populator starts" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.356187 4772 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.356189 4772 volume_manager.go:289] "Starting Kubelet Volume Manager" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.356772 4772 factory.go:55] Registering systemd factory Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.356796 4772 factory.go:221] Registration of the systemd container factory successfully Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.358534 4772 factory.go:153] Registering CRI-O factory Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.358774 4772 factory.go:221] Registration of the crio container factory successfully Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.359015 4772 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.359235 4772 factory.go:103] Registering Raw factory Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.359993 4772 manager.go:1196] Started watching for new ooms in manager Nov 22 10:38:01 crc kubenswrapper[4772]: W1122 10:38:01.359241 4772 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Nov 22 10:38:01 crc kubenswrapper[4772]: E1122 10:38:01.360973 4772 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" Nov 22 10:38:01 crc kubenswrapper[4772]: E1122 10:38:01.359147 4772 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" interval="200ms" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.370782 4772 manager.go:319] Starting recovery of all containers Nov 22 10:38:01 crc kubenswrapper[4772]: E1122 10:38:01.372090 4772 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.150:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187a4dec4c015332 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-22 10:38:01.351746354 +0000 UTC m=+1.591190858,LastTimestamp:2025-11-22 10:38:01.351746354 +0000 UTC m=+1.591190858,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.378374 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.378430 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.378445 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.378462 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.378475 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.378487 4772 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.378499 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.378511 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.378526 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.378537 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.378547 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.378560 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.378575 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.378589 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.378602 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.378617 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.378630 4772 reconstruct.go:130] "Volume is marked 
as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.378641 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.378653 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.378665 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.378677 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.378692 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.378705 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.378729 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.378741 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.378755 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.378771 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.378785 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.378796 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.378807 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.378821 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.378832 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.378843 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.378855 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.378868 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.378879 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.378891 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.378902 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.378914 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.378926 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.378939 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.378952 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.378989 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.379001 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.379013 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.379025 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.379037 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.379071 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.379084 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.379098 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.379111 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.379123 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.379140 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.379153 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.379167 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.379179 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.379193 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.379204 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.379216 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.379230 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.379241 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.379252 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.379264 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.379277 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.379288 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.379305 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.379317 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.379330 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.379342 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.379355 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.379367 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.379380 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.379391 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.379403 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.379417 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.379428 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.379444 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.379457 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.379470 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.379483 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.379496 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.379510 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.379521 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.379533 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.379547 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.379560 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.379572 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.379584 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.379597 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.379609 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.379621 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.379634 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.379647 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.379659 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.379670 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.379684 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.379703 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.379719 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.379731 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.379743 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.379755 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.379768 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.383295 4772 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.383325 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.383341 4772 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.383362 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.383377 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.383392 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.383407 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.383420 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.383433 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.383454 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.383468 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.383482 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.383497 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.383509 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.383522 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.383535 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.383548 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.383562 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.383576 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.383586 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.383598 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.383611 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.383623 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.383636 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.383648 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.383662 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.383674 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.383687 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.383706 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.383719 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.383732 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.383744 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.383756 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.383769 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.383783 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.383795 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.383807 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.383820 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.383832 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.383843 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.383860 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.383873 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.383885 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.383897 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.383908 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.383920 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.383932 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" 
volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.383944 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.383956 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.383969 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.383982 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.383994 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.384006 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.384018 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.384031 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.384043 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.384085 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.384101 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" 
volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.384297 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.384310 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.384324 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.384347 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.384358 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.384370 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.384381 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.384392 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.384406 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.384420 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.384433 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" 
volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.384447 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.384459 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.384473 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.384489 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.384501 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.384513 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.384526 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.384538 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.384551 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.384563 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.384577 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.384589 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.384601 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.384615 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.384626 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.384638 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.384650 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.384663 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.384675 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.384688 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.384700 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.384712 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" 
volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.384724 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.384736 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.384747 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.384759 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.384772 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.384785 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.384798 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.384810 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.384822 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.384835 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.384848 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.384860 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.384873 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.384891 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.384905 4772 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.384917 4772 reconstruct.go:97] "Volume reconstruction finished" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.384925 4772 reconciler.go:26] "Reconciler: start to sync state" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.388354 4772 manager.go:324] Recovery completed Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.405474 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.407552 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.407594 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.407604 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.408686 4772 cpu_manager.go:225] "Starting CPU manager" policy="none" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.408707 4772 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.408744 4772 state_mem.go:36] "Initialized new in-memory state store" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.410332 4772 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.412158 4772 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.412211 4772 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.412238 4772 kubelet.go:2335] "Starting kubelet main sync loop" Nov 22 10:38:01 crc kubenswrapper[4772]: E1122 10:38:01.412349 4772 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 22 10:38:01 crc kubenswrapper[4772]: W1122 10:38:01.413472 4772 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Nov 22 10:38:01 crc kubenswrapper[4772]: E1122 10:38:01.413573 4772 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" Nov 22 10:38:01 crc kubenswrapper[4772]: E1122 10:38:01.456905 4772 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.477728 4772 policy_none.go:49] "None policy: Start" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.478708 4772 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.478752 4772 state_mem.go:35] "Initializing new in-memory state store" Nov 22 10:38:01 crc kubenswrapper[4772]: E1122 10:38:01.512483 4772 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.545555 4772 manager.go:334] "Starting Device Plugin manager" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.545599 4772 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.545610 4772 server.go:79] "Starting device plugin registration server" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.546144 4772 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.546204 4772 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.546362 4772 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.546514 4772 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.546522 4772 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 22 10:38:01 crc kubenswrapper[4772]: E1122 10:38:01.556193 4772 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 22 10:38:01 crc kubenswrapper[4772]: E1122 10:38:01.562409 4772 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" interval="400ms" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.646551 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.648339 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.648405 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.648424 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.648467 4772 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 22 10:38:01 crc kubenswrapper[4772]: E1122 10:38:01.649223 4772 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.150:6443: connect: connection refused" node="crc" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.713325 4772 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.713416 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.714476 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.714504 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.714512 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.714613 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.715207 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.715231 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.715239 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.715818 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.715849 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.715877 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.715984 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.716028 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.716636 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.716664 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.716677 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.716767 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.716776 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.716794 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.716807 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.716958 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.717012 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.716657 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.717108 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.717120 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.717375 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.717391 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.717399 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.717479 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.717835 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.717861 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.718829 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.718848 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.718857 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.718958 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.718968 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.718975 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.719101 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.719154 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.719155 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.719192 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.719205 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.719623 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.719641 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.719652 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.790255 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.790320 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.790340 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.790363 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.790380 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.790395 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 
22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.790411 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.790426 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.790440 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.790613 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.790717 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.790756 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.790780 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.790802 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.790818 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 
10:38:01.850331 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.852277 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.852382 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.852410 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.852484 4772 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 22 10:38:01 crc kubenswrapper[4772]: E1122 10:38:01.853634 4772 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.150:6443: connect: connection refused" node="crc" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.892601 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.892667 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.892689 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.892710 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.892731 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.892757 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.892780 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.892804 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.892803 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.892838 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.892877 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.892901 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.892898 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.892922 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.892957 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.892921 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.892942 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" 
(UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.894363 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.894454 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.894535 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.894595 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.894644 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.895243 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.895297 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.895390 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.895522 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.895700 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.895564 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.895592 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 22 10:38:01 crc kubenswrapper[4772]: I1122 10:38:01.895566 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 22 10:38:01 crc kubenswrapper[4772]: E1122 10:38:01.963789 4772 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" interval="800ms" Nov 22 10:38:02 crc kubenswrapper[4772]: I1122 10:38:02.044016 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 22 10:38:02 crc kubenswrapper[4772]: I1122 10:38:02.066834 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 22 10:38:02 crc kubenswrapper[4772]: I1122 10:38:02.097476 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 22 10:38:02 crc kubenswrapper[4772]: I1122 10:38:02.125590 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 10:38:02 crc kubenswrapper[4772]: I1122 10:38:02.131872 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 22 10:38:02 crc kubenswrapper[4772]: W1122 10:38:02.188215 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-07413cae231db45f83b31caf75575b20e01e977400da038f2e3dea3dcefddc0a WatchSource:0}: Error finding container 07413cae231db45f83b31caf75575b20e01e977400da038f2e3dea3dcefddc0a: Status 404 returned error can't find the container with id 07413cae231db45f83b31caf75575b20e01e977400da038f2e3dea3dcefddc0a Nov 22 10:38:02 crc kubenswrapper[4772]: W1122 10:38:02.189448 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-b6c7ed25985ec907f596d2d0c294fb6d6b2d5d36e2f6c47c7eb54d37d92fb341 WatchSource:0}: Error finding container b6c7ed25985ec907f596d2d0c294fb6d6b2d5d36e2f6c47c7eb54d37d92fb341: Status 404 returned error can't find the container with id b6c7ed25985ec907f596d2d0c294fb6d6b2d5d36e2f6c47c7eb54d37d92fb341 Nov 22 10:38:02 crc kubenswrapper[4772]: W1122 10:38:02.199939 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-1149ff9cbf71356308fd9c1d30de679e4d09732104d67d3d3293a2d2e788cf5e WatchSource:0}: Error finding container 1149ff9cbf71356308fd9c1d30de679e4d09732104d67d3d3293a2d2e788cf5e: Status 404 returned error can't find the container with id 1149ff9cbf71356308fd9c1d30de679e4d09732104d67d3d3293a2d2e788cf5e Nov 22 10:38:02 crc kubenswrapper[4772]: I1122 10:38:02.253871 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 10:38:02 crc kubenswrapper[4772]: I1122 10:38:02.255546 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:02 crc kubenswrapper[4772]: I1122 10:38:02.255597 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:02 crc kubenswrapper[4772]: I1122 10:38:02.255632 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:02 crc kubenswrapper[4772]: I1122 10:38:02.255665 4772 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 22 10:38:02 crc kubenswrapper[4772]: E1122 10:38:02.256539 4772 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.150:6443: connect: connection refused" node="crc" Nov 22 10:38:02 crc kubenswrapper[4772]: W1122 10:38:02.259858 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-fc25daf8de3ac23864aa099bca3983f4be668bae8cebada03f6f2518cb36b889 WatchSource:0}: Error finding container fc25daf8de3ac23864aa099bca3983f4be668bae8cebada03f6f2518cb36b889: Status 404 returned error can't find the container with id fc25daf8de3ac23864aa099bca3983f4be668bae8cebada03f6f2518cb36b889 Nov 22 10:38:02 crc kubenswrapper[4772]: W1122 10:38:02.260866 4772 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-8a0e7e203abb6fa918753b3b5f222192d0739a17895d0670993d4fbfab42dbe7 WatchSource:0}: Error finding container 8a0e7e203abb6fa918753b3b5f222192d0739a17895d0670993d4fbfab42dbe7: Status 404 returned error can't find the container with id 8a0e7e203abb6fa918753b3b5f222192d0739a17895d0670993d4fbfab42dbe7 Nov 22 10:38:02 crc kubenswrapper[4772]: I1122 10:38:02.352568 4772 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Nov 22 10:38:02 crc kubenswrapper[4772]: I1122 10:38:02.417066 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1149ff9cbf71356308fd9c1d30de679e4d09732104d67d3d3293a2d2e788cf5e"} Nov 22 10:38:02 crc kubenswrapper[4772]: I1122 10:38:02.417903 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"b6c7ed25985ec907f596d2d0c294fb6d6b2d5d36e2f6c47c7eb54d37d92fb341"} Nov 22 10:38:02 crc kubenswrapper[4772]: I1122 10:38:02.418571 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"07413cae231db45f83b31caf75575b20e01e977400da038f2e3dea3dcefddc0a"} Nov 22 10:38:02 crc kubenswrapper[4772]: I1122 10:38:02.419764 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8a0e7e203abb6fa918753b3b5f222192d0739a17895d0670993d4fbfab42dbe7"} Nov 22 10:38:02 crc kubenswrapper[4772]: I1122 10:38:02.421020 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"fc25daf8de3ac23864aa099bca3983f4be668bae8cebada03f6f2518cb36b889"} Nov 22 10:38:02 crc kubenswrapper[4772]: W1122 10:38:02.477983 4772 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Nov 22 10:38:02 crc kubenswrapper[4772]: E1122 10:38:02.478159 4772 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" Nov 22 10:38:02 crc kubenswrapper[4772]: E1122 10:38:02.764756 4772 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" interval="1.6s" Nov 22 10:38:02 crc kubenswrapper[4772]: W1122 10:38:02.793447 4772 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Nov 22 10:38:02 crc kubenswrapper[4772]: E1122 10:38:02.793586 4772 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" Nov 22 10:38:02 crc kubenswrapper[4772]: W1122 10:38:02.813415 4772 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Nov 22 10:38:02 crc kubenswrapper[4772]: E1122 10:38:02.813505 4772 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" Nov 22 10:38:02 crc kubenswrapper[4772]: W1122 10:38:02.921825 4772 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Nov 22 10:38:02 crc kubenswrapper[4772]: E1122 10:38:02.921932 4772 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" Nov 22 10:38:03 crc kubenswrapper[4772]: I1122 10:38:03.057172 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 10:38:03 crc kubenswrapper[4772]: I1122 10:38:03.058605 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:03 crc kubenswrapper[4772]: I1122 10:38:03.058668 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:03 crc kubenswrapper[4772]: I1122 10:38:03.058691 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:03 crc kubenswrapper[4772]: I1122 10:38:03.058734 4772 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 22 10:38:03 crc kubenswrapper[4772]: E1122 10:38:03.059521 4772 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.150:6443: connect: connection refused" node="crc" Nov 22 10:38:03 crc kubenswrapper[4772]: I1122 10:38:03.198503 4772 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Nov 22 10:38:03 crc kubenswrapper[4772]: E1122 10:38:03.199429 4772 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing 
request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" Nov 22 10:38:03 crc kubenswrapper[4772]: I1122 10:38:03.353095 4772 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Nov 22 10:38:04 crc kubenswrapper[4772]: I1122 10:38:04.353644 4772 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Nov 22 10:38:04 crc kubenswrapper[4772]: E1122 10:38:04.366273 4772 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" interval="3.2s" Nov 22 10:38:04 crc kubenswrapper[4772]: I1122 10:38:04.660413 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 10:38:04 crc kubenswrapper[4772]: I1122 10:38:04.661952 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:04 crc kubenswrapper[4772]: I1122 10:38:04.661985 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:04 crc kubenswrapper[4772]: I1122 10:38:04.661995 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:04 crc kubenswrapper[4772]: I1122 10:38:04.662018 4772 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 22 10:38:04 crc kubenswrapper[4772]: E1122 10:38:04.662604 4772 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.150:6443: connect: connection refused" node="crc" Nov 22 10:38:04 crc kubenswrapper[4772]: W1122 10:38:04.764518 4772 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Nov 22 10:38:04 crc kubenswrapper[4772]: E1122 10:38:04.764624 4772 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" Nov 22 10:38:04 crc kubenswrapper[4772]: W1122 10:38:04.810801 4772 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Nov 22 10:38:04 crc kubenswrapper[4772]: E1122 10:38:04.810915 4772 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" Nov 22 10:38:04 crc kubenswrapper[4772]: E1122 10:38:04.990605 4772 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.150:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187a4dec4c015332 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-22 10:38:01.351746354 +0000 UTC m=+1.591190858,LastTimestamp:2025-11-22 10:38:01.351746354 +0000 UTC m=+1.591190858,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 22 10:38:05 crc kubenswrapper[4772]: W1122 10:38:05.001924 4772 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Nov 22 10:38:05 crc kubenswrapper[4772]: E1122 10:38:05.002140 4772 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" Nov 22 10:38:05 crc kubenswrapper[4772]: I1122 10:38:05.352416 4772 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Nov 22 10:38:05 crc kubenswrapper[4772]: I1122 10:38:05.429990 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ade6ffb5c4bd3c19a2d85f21de1e0f198d6729b45df79233c8db3c73aff066f3"} Nov 22 10:38:05 crc kubenswrapper[4772]: I1122 10:38:05.430083 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b7c687d1a3e98c09917692169294f8549b0f1ddeddcc97c073da4d8e5c17e012"} Nov 22 10:38:05 crc kubenswrapper[4772]: I1122 10:38:05.431592 4772 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="178c7dfd1ee6d37ff147a01b6b673a6d30d4604bd9ec4484437888bf3ef5e689" exitCode=0 Nov 22 10:38:05 crc kubenswrapper[4772]: I1122 10:38:05.431662 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"178c7dfd1ee6d37ff147a01b6b673a6d30d4604bd9ec4484437888bf3ef5e689"} Nov 22 10:38:05 crc kubenswrapper[4772]: I1122 10:38:05.431699 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 10:38:05 crc kubenswrapper[4772]: I1122 10:38:05.432767 
4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:05 crc kubenswrapper[4772]: I1122 10:38:05.432796 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:05 crc kubenswrapper[4772]: I1122 10:38:05.432807 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:05 crc kubenswrapper[4772]: I1122 10:38:05.433679 4772 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="dd3ff2ca46da5da5de5605b38081b6b04f2104135e99bf5261f42492280d96fa" exitCode=0 Nov 22 10:38:05 crc kubenswrapper[4772]: I1122 10:38:05.433771 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"dd3ff2ca46da5da5de5605b38081b6b04f2104135e99bf5261f42492280d96fa"} Nov 22 10:38:05 crc kubenswrapper[4772]: I1122 10:38:05.433798 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 10:38:05 crc kubenswrapper[4772]: I1122 10:38:05.434777 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:05 crc kubenswrapper[4772]: I1122 10:38:05.434808 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:05 crc kubenswrapper[4772]: I1122 10:38:05.434832 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:05 crc kubenswrapper[4772]: I1122 10:38:05.436334 4772 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf" exitCode=0 Nov 22 10:38:05 crc kubenswrapper[4772]: I1122 10:38:05.436424 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf"} Nov 22 10:38:05 crc kubenswrapper[4772]: I1122 10:38:05.436491 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 10:38:05 crc kubenswrapper[4772]: I1122 10:38:05.438199 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:05 crc kubenswrapper[4772]: I1122 10:38:05.438247 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:05 crc kubenswrapper[4772]: I1122 10:38:05.438266 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:05 crc kubenswrapper[4772]: I1122 10:38:05.440678 4772 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="92229718d5b39cbd9473102e7e569d8370d38e945387ff3f48ee9f4077d8d1fd" exitCode=0 Nov 22 10:38:05 crc kubenswrapper[4772]: I1122 10:38:05.440724 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"92229718d5b39cbd9473102e7e569d8370d38e945387ff3f48ee9f4077d8d1fd"} Nov 22 10:38:05 crc kubenswrapper[4772]: I1122 
10:38:05.440805 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 10:38:05 crc kubenswrapper[4772]: I1122 10:38:05.442098 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:05 crc kubenswrapper[4772]: I1122 10:38:05.442145 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:05 crc kubenswrapper[4772]: I1122 10:38:05.442158 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:05 crc kubenswrapper[4772]: I1122 10:38:05.443090 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 10:38:05 crc kubenswrapper[4772]: I1122 10:38:05.444937 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:05 crc kubenswrapper[4772]: I1122 10:38:05.444976 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:05 crc kubenswrapper[4772]: I1122 10:38:05.444992 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:05 crc kubenswrapper[4772]: W1122 10:38:05.960653 4772 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Nov 22 10:38:05 crc kubenswrapper[4772]: E1122 10:38:05.960837 4772 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" Nov 22 10:38:06 crc kubenswrapper[4772]: I1122 10:38:06.353658 4772 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Nov 22 10:38:06 crc kubenswrapper[4772]: I1122 10:38:06.445213 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"b77dd244ddad5f9ae9f977382b910123d1cfd80687600c51b036a8090eb14551"} Nov 22 10:38:06 crc kubenswrapper[4772]: I1122 10:38:06.445268 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 10:38:06 crc kubenswrapper[4772]: I1122 10:38:06.446203 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:06 crc kubenswrapper[4772]: I1122 10:38:06.446236 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:06 crc kubenswrapper[4772]: I1122 10:38:06.446248 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:06 crc kubenswrapper[4772]: I1122 10:38:06.447075 4772 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" 
containerID="82af55d2dab893ddab5c953bf4ee9c4491d79b9a683a7e6e983eed714972ff07" exitCode=0 Nov 22 10:38:06 crc kubenswrapper[4772]: I1122 10:38:06.447123 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"82af55d2dab893ddab5c953bf4ee9c4491d79b9a683a7e6e983eed714972ff07"} Nov 22 10:38:06 crc kubenswrapper[4772]: I1122 10:38:06.447207 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 10:38:06 crc kubenswrapper[4772]: I1122 10:38:06.448428 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:06 crc kubenswrapper[4772]: I1122 10:38:06.448470 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:06 crc kubenswrapper[4772]: I1122 10:38:06.448482 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:06 crc kubenswrapper[4772]: I1122 10:38:06.449560 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4a77d132d925d70472ff5456bf4521c0612c319296dd1f3fa31c68d3c84fb347"} Nov 22 10:38:06 crc kubenswrapper[4772]: I1122 10:38:06.451598 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"ab209a1a1db8448083e4994bbd6c236d67ddfd2ba6eff2bf3c05150f600ad698"} Nov 22 10:38:06 crc kubenswrapper[4772]: I1122 10:38:06.454301 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3b331f41571928038bb597f1e94a67d24e726471c1a22082607dd26c11e8ea33"} Nov 22 10:38:06 crc kubenswrapper[4772]: I1122 10:38:06.454339 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1452c164ce569dfa4665a70113fb965905d1974744637904d6bfba2e35446f11"} Nov 22 10:38:06 crc kubenswrapper[4772]: I1122 10:38:06.454463 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 10:38:06 crc kubenswrapper[4772]: I1122 10:38:06.455324 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:06 crc kubenswrapper[4772]: I1122 10:38:06.455361 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:06 crc kubenswrapper[4772]: I1122 10:38:06.455377 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:06 crc kubenswrapper[4772]: I1122 10:38:06.579681 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 22 10:38:06 crc kubenswrapper[4772]: I1122 10:38:06.580361 4772 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get 
\"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Nov 22 10:38:06 crc kubenswrapper[4772]: I1122 10:38:06.580458 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Nov 22 10:38:07 crc kubenswrapper[4772]: I1122 10:38:07.138185 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 22 10:38:07 crc kubenswrapper[4772]: I1122 10:38:07.342336 4772 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Nov 22 10:38:07 crc kubenswrapper[4772]: E1122 10:38:07.344183 4772 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" Nov 22 10:38:07 crc kubenswrapper[4772]: I1122 10:38:07.352933 4772 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Nov 22 10:38:07 crc kubenswrapper[4772]: I1122 10:38:07.460292 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ddce747c72c887bb7dff7503755d81d0e65305d2545f93dcb0d20bd6bf32b495"} Nov 22 10:38:07 crc kubenswrapper[4772]: I1122 10:38:07.460365 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"187f5633fd42f380d4976965809b538530acef067adc7e02203f95df532dbbfa"} Nov 22 10:38:07 crc kubenswrapper[4772]: I1122 10:38:07.462404 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"f447b1a4882759c19fd69e9acddae280d63009c5fc7d21b368b470160c53250d"} Nov 22 10:38:07 crc kubenswrapper[4772]: I1122 10:38:07.465081 4772 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="ad8c93bdef8b86bcb77687cd0ef089a953e6814981a7268418200198d5df941c" exitCode=0 Nov 22 10:38:07 crc kubenswrapper[4772]: I1122 10:38:07.465190 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"ad8c93bdef8b86bcb77687cd0ef089a953e6814981a7268418200198d5df941c"} Nov 22 10:38:07 crc kubenswrapper[4772]: I1122 10:38:07.465294 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 10:38:07 crc kubenswrapper[4772]: I1122 10:38:07.465295 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 10:38:07 crc kubenswrapper[4772]: I1122 10:38:07.465300 4772 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 10:38:07 crc kubenswrapper[4772]: I1122 10:38:07.466812 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:07 crc kubenswrapper[4772]: I1122 10:38:07.466862 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:07 crc kubenswrapper[4772]: I1122 10:38:07.466887 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:07 crc kubenswrapper[4772]: I1122 10:38:07.466827 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:07 crc kubenswrapper[4772]: I1122 10:38:07.466975 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:07 crc kubenswrapper[4772]: I1122 10:38:07.467003 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:07 crc kubenswrapper[4772]: I1122 10:38:07.467420 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:07 crc kubenswrapper[4772]: I1122 10:38:07.467457 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:07 crc kubenswrapper[4772]: I1122 10:38:07.467471 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:07 crc kubenswrapper[4772]: E1122 10:38:07.567669 4772 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" interval="6.4s" Nov 22 10:38:07 crc kubenswrapper[4772]: I1122 10:38:07.863137 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 10:38:07 crc kubenswrapper[4772]: I1122 10:38:07.865021 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:07 crc kubenswrapper[4772]: I1122 10:38:07.865089 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:07 crc kubenswrapper[4772]: I1122 10:38:07.865106 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:07 crc kubenswrapper[4772]: I1122 10:38:07.865138 4772 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 22 10:38:07 crc kubenswrapper[4772]: E1122 10:38:07.865784 4772 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.150:6443: connect: connection refused" node="crc" Nov 22 10:38:08 crc kubenswrapper[4772]: I1122 10:38:08.353773 4772 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Nov 22 10:38:08 crc kubenswrapper[4772]: I1122 10:38:08.473623 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"ecffb7596767673ba91dbbee96f3fa4b32109c5d5145176e86618860187077de"} Nov 22 10:38:08 crc kubenswrapper[4772]: I1122 10:38:08.473881 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 10:38:08 crc kubenswrapper[4772]: I1122 10:38:08.475539 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:08 crc kubenswrapper[4772]: I1122 10:38:08.475585 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:08 crc kubenswrapper[4772]: I1122 10:38:08.475603 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:09 crc kubenswrapper[4772]: I1122 10:38:09.352761 4772 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Nov 22 10:38:09 crc kubenswrapper[4772]: I1122 10:38:09.480577 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"cb1b92ae002effbfc025d22c10082116ba7137912cd2ba09c0753defd5c50343"} Nov 22 10:38:09 crc kubenswrapper[4772]: I1122 10:38:09.480634 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"91944314de5fb702efd0ac83da330d5a2a494afc70ae98e4a0b45bd095d22281"} Nov 22 10:38:09 crc kubenswrapper[4772]: I1122 10:38:09.485474 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"42fc0afe421b18b7212836931c95487807ac0b32f6e87d92e847bd7826aedcd7"} Nov 22 10:38:09 crc kubenswrapper[4772]: I1122 10:38:09.485611 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 10:38:09 crc kubenswrapper[4772]: I1122 10:38:09.488595 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:09 crc kubenswrapper[4772]: I1122 10:38:09.488638 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:09 crc kubenswrapper[4772]: I1122 10:38:09.488648 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:09 crc kubenswrapper[4772]: W1122 10:38:09.673273 4772 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Nov 22 10:38:09 crc kubenswrapper[4772]: E1122 10:38:09.673446 4772 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" Nov 22 10:38:10 crc kubenswrapper[4772]: I1122 10:38:10.352608 4772 
csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Nov 22 10:38:10 crc kubenswrapper[4772]: I1122 10:38:10.490613 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"62316e945792f6352882cba066715b7939d71a05090dd96b2285983b929c0b59"} Nov 22 10:38:10 crc kubenswrapper[4772]: I1122 10:38:10.490711 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 10:38:10 crc kubenswrapper[4772]: I1122 10:38:10.492099 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:10 crc kubenswrapper[4772]: I1122 10:38:10.492144 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:10 crc kubenswrapper[4772]: I1122 10:38:10.492160 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:10 crc kubenswrapper[4772]: I1122 10:38:10.496315 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0b438ca04c6fabe1485816b42146cb3187c37a93a06ce564cb7a50100e44288b"} Nov 22 10:38:10 crc kubenswrapper[4772]: I1122 10:38:10.496780 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"950d9a7b382476d6b5d6e463560635a3871de005dbf974cba3094a3a758d042b"} Nov 22 10:38:10 crc kubenswrapper[4772]: W1122 10:38:10.588446 4772 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Nov 22 10:38:10 crc kubenswrapper[4772]: E1122 10:38:10.588810 4772 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" Nov 22 10:38:10 crc kubenswrapper[4772]: I1122 10:38:10.648170 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 22 10:38:10 crc kubenswrapper[4772]: I1122 10:38:10.648486 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 10:38:10 crc kubenswrapper[4772]: I1122 10:38:10.649616 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:10 crc kubenswrapper[4772]: I1122 10:38:10.649696 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:10 crc kubenswrapper[4772]: I1122 10:38:10.649767 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:10 crc kubenswrapper[4772]: I1122 10:38:10.780985 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 10:38:11 crc kubenswrapper[4772]: W1122 10:38:11.086925 4772 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Nov 22 10:38:11 crc kubenswrapper[4772]: E1122 10:38:11.087354 4772 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" Nov 22 10:38:11 crc kubenswrapper[4772]: I1122 10:38:11.353143 4772 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Nov 22 10:38:11 crc kubenswrapper[4772]: I1122 10:38:11.502889 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b4f8870ad28ab8562db15a8048e96cd5ec5b3eef59c69b5dcfadb89210723afb"} Nov 22 10:38:11 crc kubenswrapper[4772]: I1122 10:38:11.503549 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 10:38:11 crc kubenswrapper[4772]: I1122 10:38:11.502980 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 10:38:11 crc kubenswrapper[4772]: I1122 10:38:11.502973 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 10:38:11 crc kubenswrapper[4772]: I1122 10:38:11.504894 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:11 crc kubenswrapper[4772]: I1122 10:38:11.504943 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:11 crc kubenswrapper[4772]: I1122 10:38:11.504954 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:11 crc kubenswrapper[4772]: I1122 10:38:11.505124 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:11 crc kubenswrapper[4772]: I1122 10:38:11.505238 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:11 crc kubenswrapper[4772]: I1122 10:38:11.505355 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:11 crc kubenswrapper[4772]: E1122 10:38:11.557008 4772 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 22 10:38:12 crc kubenswrapper[4772]: W1122 10:38:12.016865 4772 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Nov 22 10:38:12 crc kubenswrapper[4772]: E1122 10:38:12.016946 4772 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" Nov 22 10:38:12 crc kubenswrapper[4772]: I1122 10:38:12.073036 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Nov 22 10:38:12 crc kubenswrapper[4772]: I1122 10:38:12.353239 4772 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Nov 22 10:38:12 crc kubenswrapper[4772]: I1122 10:38:12.507839 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 22 10:38:12 crc kubenswrapper[4772]: I1122 10:38:12.509447 4772 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="62316e945792f6352882cba066715b7939d71a05090dd96b2285983b929c0b59" exitCode=255 Nov 22 10:38:12 crc kubenswrapper[4772]: I1122 10:38:12.509673 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 10:38:12 crc kubenswrapper[4772]: I1122 10:38:12.510537 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 10:38:12 crc kubenswrapper[4772]: I1122 10:38:12.511121 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"62316e945792f6352882cba066715b7939d71a05090dd96b2285983b929c0b59"} Nov 22 10:38:12 crc kubenswrapper[4772]: I1122 10:38:12.511538 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:12 crc kubenswrapper[4772]: I1122 10:38:12.511580 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:12 crc kubenswrapper[4772]: I1122 10:38:12.511599 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:12 crc kubenswrapper[4772]: I1122 10:38:12.512747 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:12 crc kubenswrapper[4772]: I1122 10:38:12.512785 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:12 crc kubenswrapper[4772]: I1122 10:38:12.512802 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:12 crc kubenswrapper[4772]: I1122 10:38:12.513520 4772 scope.go:117] "RemoveContainer" containerID="62316e945792f6352882cba066715b7939d71a05090dd96b2285983b929c0b59" Nov 22 10:38:13 crc kubenswrapper[4772]: I1122 10:38:13.514737 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 22 10:38:13 crc kubenswrapper[4772]: I1122 10:38:13.516310 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 10:38:13 crc 
kubenswrapper[4772]: I1122 10:38:13.516939 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 10:38:13 crc kubenswrapper[4772]: I1122 10:38:13.517310 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216"} Nov 22 10:38:13 crc kubenswrapper[4772]: I1122 10:38:13.517693 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:13 crc kubenswrapper[4772]: I1122 10:38:13.517725 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:13 crc kubenswrapper[4772]: I1122 10:38:13.517736 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:13 crc kubenswrapper[4772]: I1122 10:38:13.518475 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:13 crc kubenswrapper[4772]: I1122 10:38:13.518504 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:13 crc kubenswrapper[4772]: I1122 10:38:13.518513 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:13 crc kubenswrapper[4772]: I1122 10:38:13.706530 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Nov 22 10:38:14 crc kubenswrapper[4772]: I1122 10:38:14.266630 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 10:38:14 crc kubenswrapper[4772]: I1122 10:38:14.268657 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:14 crc kubenswrapper[4772]: I1122 10:38:14.268710 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:14 crc kubenswrapper[4772]: I1122 10:38:14.268725 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:14 crc kubenswrapper[4772]: I1122 10:38:14.268759 4772 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 22 10:38:14 crc kubenswrapper[4772]: I1122 10:38:14.519439 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 10:38:14 crc kubenswrapper[4772]: I1122 10:38:14.519634 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 10:38:14 crc kubenswrapper[4772]: I1122 10:38:14.520152 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 10:38:14 crc kubenswrapper[4772]: I1122 10:38:14.520525 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:14 crc kubenswrapper[4772]: I1122 10:38:14.520544 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:14 crc kubenswrapper[4772]: I1122 10:38:14.520553 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:14 crc kubenswrapper[4772]: I1122 
10:38:14.521747 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:14 crc kubenswrapper[4772]: I1122 10:38:14.521847 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:14 crc kubenswrapper[4772]: I1122 10:38:14.521856 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:14 crc kubenswrapper[4772]: I1122 10:38:14.635836 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 10:38:14 crc kubenswrapper[4772]: I1122 10:38:14.973382 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 22 10:38:14 crc kubenswrapper[4772]: I1122 10:38:14.973672 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 10:38:14 crc kubenswrapper[4772]: I1122 10:38:14.975327 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:14 crc kubenswrapper[4772]: I1122 10:38:14.975371 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:14 crc kubenswrapper[4772]: I1122 10:38:14.975384 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:15 crc kubenswrapper[4772]: I1122 10:38:15.093704 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 22 10:38:15 crc kubenswrapper[4772]: I1122 10:38:15.521384 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 10:38:15 crc kubenswrapper[4772]: I1122 10:38:15.522119 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 10:38:15 crc kubenswrapper[4772]: I1122 10:38:15.522328 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:15 crc kubenswrapper[4772]: I1122 10:38:15.522366 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:15 crc kubenswrapper[4772]: I1122 10:38:15.522382 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:15 crc kubenswrapper[4772]: I1122 10:38:15.522864 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:15 crc kubenswrapper[4772]: I1122 10:38:15.522911 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:15 crc kubenswrapper[4772]: I1122 10:38:15.522940 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:15 crc kubenswrapper[4772]: I1122 10:38:15.696781 4772 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Nov 22 10:38:16 crc kubenswrapper[4772]: I1122 10:38:16.523537 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 10:38:16 crc kubenswrapper[4772]: I1122 10:38:16.527702 4772 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:16 crc kubenswrapper[4772]: I1122 10:38:16.527876 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:16 crc kubenswrapper[4772]: I1122 10:38:16.528032 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:17 crc kubenswrapper[4772]: I1122 10:38:17.283997 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 22 10:38:17 crc kubenswrapper[4772]: I1122 10:38:17.284792 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 10:38:17 crc kubenswrapper[4772]: I1122 10:38:17.286354 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:17 crc kubenswrapper[4772]: I1122 10:38:17.286425 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:17 crc kubenswrapper[4772]: I1122 10:38:17.286444 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:17 crc kubenswrapper[4772]: I1122 10:38:17.289470 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 22 10:38:17 crc kubenswrapper[4772]: I1122 10:38:17.525707 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 10:38:17 crc kubenswrapper[4772]: I1122 10:38:17.527770 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:17 crc kubenswrapper[4772]: I1122 10:38:17.527944 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:17 crc kubenswrapper[4772]: I1122 10:38:17.528116 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:18 crc kubenswrapper[4772]: I1122 10:38:18.094414 4772 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 22 10:38:18 crc kubenswrapper[4772]: I1122 10:38:18.094527 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 22 10:38:21 crc kubenswrapper[4772]: E1122 10:38:21.558078 4772 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 22 10:38:22 crc kubenswrapper[4772]: I1122 10:38:22.780423 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Nov 22 10:38:22 crc kubenswrapper[4772]: I1122 10:38:22.780727 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume 
controller attach/detach" Nov 22 10:38:22 crc kubenswrapper[4772]: I1122 10:38:22.800014 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:22 crc kubenswrapper[4772]: I1122 10:38:22.800104 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:22 crc kubenswrapper[4772]: I1122 10:38:22.800115 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:22 crc kubenswrapper[4772]: I1122 10:38:22.817280 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Nov 22 10:38:23 crc kubenswrapper[4772]: I1122 10:38:23.354347 4772 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Nov 22 10:38:23 crc kubenswrapper[4772]: I1122 10:38:23.542204 4772 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 10:38:23 crc kubenswrapper[4772]: I1122 10:38:23.543755 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:23 crc kubenswrapper[4772]: I1122 10:38:23.543847 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:23 crc kubenswrapper[4772]: I1122 10:38:23.543880 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:23 crc kubenswrapper[4772]: I1122 10:38:23.740470 4772 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 22 10:38:23 crc kubenswrapper[4772]: I1122 10:38:23.740577 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 22 10:38:23 crc kubenswrapper[4772]: I1122 10:38:23.745630 4772 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 22 10:38:23 crc kubenswrapper[4772]: I1122 10:38:23.745740 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 22 10:38:24 crc kubenswrapper[4772]: I1122 10:38:24.646664 4772 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Nov 22 10:38:24 crc kubenswrapper[4772]: 
[+]log ok Nov 22 10:38:24 crc kubenswrapper[4772]: [+]etcd ok Nov 22 10:38:24 crc kubenswrapper[4772]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Nov 22 10:38:24 crc kubenswrapper[4772]: [+]poststarthook/openshift.io-api-request-count-filter ok Nov 22 10:38:24 crc kubenswrapper[4772]: [+]poststarthook/openshift.io-startkubeinformers ok Nov 22 10:38:24 crc kubenswrapper[4772]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Nov 22 10:38:24 crc kubenswrapper[4772]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Nov 22 10:38:24 crc kubenswrapper[4772]: [+]poststarthook/start-apiserver-admission-initializer ok Nov 22 10:38:24 crc kubenswrapper[4772]: [+]poststarthook/generic-apiserver-start-informers ok Nov 22 10:38:24 crc kubenswrapper[4772]: [+]poststarthook/priority-and-fairness-config-consumer ok Nov 22 10:38:24 crc kubenswrapper[4772]: [+]poststarthook/priority-and-fairness-filter ok Nov 22 10:38:24 crc kubenswrapper[4772]: [+]poststarthook/storage-object-count-tracker-hook ok Nov 22 10:38:24 crc kubenswrapper[4772]: [+]poststarthook/start-apiextensions-informers ok Nov 22 10:38:24 crc kubenswrapper[4772]: [+]poststarthook/start-apiextensions-controllers ok Nov 22 10:38:24 crc kubenswrapper[4772]: [+]poststarthook/crd-informer-synced ok Nov 22 10:38:24 crc kubenswrapper[4772]: [+]poststarthook/start-system-namespaces-controller ok Nov 22 10:38:24 crc kubenswrapper[4772]: [+]poststarthook/start-cluster-authentication-info-controller ok Nov 22 10:38:24 crc kubenswrapper[4772]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Nov 22 10:38:24 crc kubenswrapper[4772]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Nov 22 10:38:24 crc kubenswrapper[4772]: [+]poststarthook/start-legacy-token-tracking-controller ok Nov 22 10:38:24 crc kubenswrapper[4772]: [+]poststarthook/start-service-ip-repair-controllers ok Nov 22 10:38:24 crc kubenswrapper[4772]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Nov 22 10:38:24 crc kubenswrapper[4772]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Nov 22 10:38:24 crc kubenswrapper[4772]: [+]poststarthook/priority-and-fairness-config-producer ok Nov 22 10:38:24 crc kubenswrapper[4772]: [+]poststarthook/bootstrap-controller ok Nov 22 10:38:24 crc kubenswrapper[4772]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Nov 22 10:38:24 crc kubenswrapper[4772]: [+]poststarthook/start-kube-aggregator-informers ok Nov 22 10:38:24 crc kubenswrapper[4772]: [+]poststarthook/apiservice-status-local-available-controller ok Nov 22 10:38:24 crc kubenswrapper[4772]: [+]poststarthook/apiservice-status-remote-available-controller ok Nov 22 10:38:24 crc kubenswrapper[4772]: [+]poststarthook/apiservice-registration-controller ok Nov 22 10:38:24 crc kubenswrapper[4772]: [+]poststarthook/apiservice-wait-for-first-sync ok Nov 22 10:38:24 crc kubenswrapper[4772]: [+]poststarthook/apiservice-discovery-controller ok Nov 22 10:38:24 crc kubenswrapper[4772]: [+]poststarthook/kube-apiserver-autoregistration ok Nov 22 10:38:24 crc kubenswrapper[4772]: [+]autoregister-completion ok Nov 22 10:38:24 crc kubenswrapper[4772]: [+]poststarthook/apiservice-openapi-controller ok Nov 22 10:38:24 crc kubenswrapper[4772]: [+]poststarthook/apiservice-openapiv3-controller ok Nov 22 10:38:24 crc kubenswrapper[4772]: livez check failed Nov 22 10:38:24 crc kubenswrapper[4772]: I1122 10:38:24.646727 4772 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 10:38:25 crc kubenswrapper[4772]: I1122 10:38:25.235519 4772 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Nov 22 10:38:25 crc kubenswrapper[4772]: I1122 10:38:25.235621 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Nov 22 10:38:28 crc kubenswrapper[4772]: I1122 10:38:28.094912 4772 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 22 10:38:28 crc kubenswrapper[4772]: I1122 10:38:28.095011 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 22 10:38:28 crc kubenswrapper[4772]: I1122 10:38:28.288307 4772 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Nov 22 10:38:28 crc kubenswrapper[4772]: I1122 10:38:28.288412 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Nov 22 10:38:28 crc kubenswrapper[4772]: E1122 10:38:28.736459 4772 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="7s" Nov 22 10:38:28 crc kubenswrapper[4772]: I1122 10:38:28.737942 4772 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 22 10:38:28 crc kubenswrapper[4772]: I1122 10:38:28.754241 4772 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Nov 22 10:38:28 crc kubenswrapper[4772]: I1122 10:38:28.754505 4772 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Nov 22 10:38:28 crc kubenswrapper[4772]: I1122 10:38:28.754730 4772 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Nov 22 10:38:28 crc kubenswrapper[4772]: I1122 
10:38:28.755931 4772 kubelet_node_status.go:115] "Node was previously registered" node="crc" Nov 22 10:38:28 crc kubenswrapper[4772]: I1122 10:38:28.756454 4772 kubelet_node_status.go:79] "Successfully registered node" node="crc" Nov 22 10:38:28 crc kubenswrapper[4772]: I1122 10:38:28.758108 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:28 crc kubenswrapper[4772]: I1122 10:38:28.758239 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:28 crc kubenswrapper[4772]: I1122 10:38:28.758315 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:28 crc kubenswrapper[4772]: I1122 10:38:28.758395 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:28 crc kubenswrapper[4772]: I1122 10:38:28.758457 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:28Z","lastTransitionTime":"2025-11-22T10:38:28Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Nov 22 10:38:28 crc kubenswrapper[4772]: I1122 10:38:28.772026 4772 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Nov 22 10:38:28 crc kubenswrapper[4772]: I1122 10:38:28.772171 4772 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Nov 22 10:38:28 crc kubenswrapper[4772]: I1122 10:38:28.787156 4772 csr.go:261] certificate signing request csr-xg4wb is approved, waiting to be issued Nov 22 10:38:28 crc kubenswrapper[4772]: I1122 10:38:28.795892 4772 csr.go:257] certificate signing request csr-xg4wb is issued Nov 22 10:38:28 crc kubenswrapper[4772]: E1122 10:38:28.844374 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:28Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf8
6\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"902c1d4e-ffb4-4b1b-add7-8f7170ab4bc7\\\",\\\"systemUUID\\\":\\\"856a4d77-e4e0-4420-9e80-7c5223144311\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Nov 22 10:38:28 crc kubenswrapper[4772]: I1122 10:38:28.850287 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:28 crc kubenswrapper[4772]: I1122 10:38:28.850330 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:28 crc kubenswrapper[4772]: I1122 10:38:28.850340 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:28 crc kubenswrapper[4772]: I1122 10:38:28.850361 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:28 crc kubenswrapper[4772]: I1122 10:38:28.850371 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:28Z","lastTransitionTime":"2025-11-22T10:38:28Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Nov 22 10:38:28 crc kubenswrapper[4772]: E1122 10:38:28.865746 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:28Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"si
zeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"902c1d4e-ffb4-4b1b-add7-8f7170ab4bc7\\\",\\\"systemUUID\\\":\\\"856a4d77-e4e0-4420-9e80-7c5223144311\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:28 crc kubenswrapper[4772]: I1122 10:38:28.871446 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:28 crc kubenswrapper[4772]: I1122 10:38:28.871512 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:28 crc kubenswrapper[4772]: I1122 
10:38:28.871527 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:28 crc kubenswrapper[4772]: I1122 10:38:28.871557 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:28 crc kubenswrapper[4772]: I1122 10:38:28.871573 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:28Z","lastTransitionTime":"2025-11-22T10:38:28Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Nov 22 10:38:28 crc kubenswrapper[4772]: E1122 10:38:28.922600 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:28Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"si
zeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"902c1d4e-ffb4-4b1b-add7-8f7170ab4bc7\\\",\\\"systemUUID\\\":\\\"856a4d77-e4e0-4420-9e80-7c5223144311\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:28 crc kubenswrapper[4772]: I1122 10:38:28.926108 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:28 crc kubenswrapper[4772]: I1122 10:38:28.926163 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:28 crc kubenswrapper[4772]: I1122 
10:38:28.926185 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:28 crc kubenswrapper[4772]: I1122 10:38:28.926208 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:28 crc kubenswrapper[4772]: I1122 10:38:28.926224 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:28Z","lastTransitionTime":"2025-11-22T10:38:28Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Nov 22 10:38:28 crc kubenswrapper[4772]: E1122 10:38:28.937738 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:28Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"si
zeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"902c1d4e-ffb4-4b1b-add7-8f7170ab4bc7\\\",\\\"systemUUID\\\":\\\"856a4d77-e4e0-4420-9e80-7c5223144311\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:28 crc kubenswrapper[4772]: I1122 10:38:28.940951 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:28 crc kubenswrapper[4772]: I1122 10:38:28.940988 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:28 crc kubenswrapper[4772]: I1122 
10:38:28.940999 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:28 crc kubenswrapper[4772]: I1122 10:38:28.941020 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:28 crc kubenswrapper[4772]: I1122 10:38:28.941032 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:28Z","lastTransitionTime":"2025-11-22T10:38:28Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Nov 22 10:38:28 crc kubenswrapper[4772]: E1122 10:38:28.950546 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:28Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"si
zeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"902c1d4e-ffb4-4b1b-add7-8f7170ab4bc7\\\",\\\"systemUUID\\\":\\\"856a4d77-e4e0-4420-9e80-7c5223144311\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:28 crc kubenswrapper[4772]: E1122 10:38:28.950876 4772 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 22 10:38:28 crc kubenswrapper[4772]: I1122 10:38:28.952619 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:28 crc kubenswrapper[4772]: I1122 
10:38:28.952650 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:28 crc kubenswrapper[4772]: I1122 10:38:28.952660 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:28 crc kubenswrapper[4772]: I1122 10:38:28.952686 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:28 crc kubenswrapper[4772]: I1122 10:38:28.952696 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:28Z","lastTransitionTime":"2025-11-22T10:38:28Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.055367 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.055403 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.055412 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.055428 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.055438 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:29Z","lastTransitionTime":"2025-11-22T10:38:29Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.157980 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.158029 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.158063 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.158092 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.158102 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:29Z","lastTransitionTime":"2025-11-22T10:38:29Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]"} Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.260076 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.260111 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.260119 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.260138 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.260148 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:29Z","lastTransitionTime":"2025-11-22T10:38:29Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.341804 4772 apiserver.go:52] "Watching apiserver" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.359744 4772 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.359948 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.360261 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.360331 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.360361 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.360382 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 22 10:38:29 crc kubenswrapper[4772]: E1122 10:38:29.360391 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 10:38:29 crc kubenswrapper[4772]: E1122 10:38:29.360752 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.360836 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.360835 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:38:29 crc kubenswrapper[4772]: E1122 10:38:29.360965 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.362014 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.362069 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.362077 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.362095 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.362105 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:29Z","lastTransitionTime":"2025-11-22T10:38:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.363746 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.364332 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.364421 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.364481 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.364492 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.364493 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.364546 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.364898 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.365063 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.456646 4772 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.458622 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.458668 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.458693 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.458715 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.458733 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: 
\"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.458752 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.458769 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.458786 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.458804 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.458822 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.458840 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.458860 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.458877 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.458898 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.458924 4772 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.458943 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.458960 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.458980 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.458994 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.459024 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.459081 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.459152 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.459167 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). 
InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.459183 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.459167 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.459216 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.459349 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.459378 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.459391 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.459410 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.459446 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.459538 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.459529 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.459573 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.459597 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.460405 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.459807 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.459823 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.459839 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.459991 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.460122 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.460355 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.460500 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.460522 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.460539 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.460792 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.460809 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.461075 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: 
\"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.461102 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.461117 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.461132 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.461146 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.461160 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.461175 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.461191 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.461206 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.461220 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.461235 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod 
\"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.461250 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.461263 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.461277 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.461292 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.461307 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.461322 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.461337 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.461351 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.460749 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.461366 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.460974 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.461016 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.461150 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.461255 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.461404 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.461348 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.461385 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.461506 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.461535 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.461560 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.461586 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.461609 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.461535 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.461633 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.461653 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.461676 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.461697 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.461719 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.461741 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.461763 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.461784 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.461809 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.461833 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: 
\"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.461863 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.461887 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.461911 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.461935 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.461956 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.461977 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.462007 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.462029 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.462064 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.462087 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" 
(UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.462106 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.462129 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.462152 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.462175 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.462195 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.462215 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.462243 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.462265 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.462285 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.462308 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.462329 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.462351 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.462373 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.462395 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.462416 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.462436 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.462458 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.462483 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.462504 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.462525 4772 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.462546 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.462568 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.462588 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.462608 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.462632 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.462654 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.462675 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.462770 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.462793 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 22 10:38:29 crc 
kubenswrapper[4772]: I1122 10:38:29.462813 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.462833 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.462856 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.462882 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.462903 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.462926 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.462945 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.462965 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.462987 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.463008 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 22 10:38:29 crc 
kubenswrapper[4772]: I1122 10:38:29.463029 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.463066 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.463090 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.463110 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.463133 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.463154 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.463198 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.463218 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.463238 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.463258 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.463278 4772 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.463299 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.463320 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.463341 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.463362 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.463382 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.463406 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.463429 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.463452 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.463471 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.463492 4772 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.463515 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.463537 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.463557 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.463578 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.463600 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.463621 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.463641 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.463663 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.463685 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " 
Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.463705 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.463726 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.463746 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.463767 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.463790 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.463811 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.463832 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.463851 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.463873 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.463894 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.463915 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.463937 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.463967 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.463988 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.464008 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.464030 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.464068 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.464093 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.464114 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.464134 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.464159 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.464181 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.464203 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.464229 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.464250 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.464271 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.464293 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.464315 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.464337 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.464357 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.464379 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.464400 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.464421 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.464443 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.464467 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.464489 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.464513 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.464538 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.464559 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.464579 4772 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.464600 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.464623 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.464645 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.464667 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.464690 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.464713 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.464737 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.464758 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.464781 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.464812 4772 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.464866 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.464889 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.464913 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.464937 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.464980 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.465008 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.465067 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.465094 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.465119 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.465140 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.465164 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.465188 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.465213 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.465239 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.465262 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.465285 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.465313 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: 
\"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.465334 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.465397 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.465411 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.465424 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.465438 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.465451 4772 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.465464 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.465476 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.465489 4772 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.465502 4772 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.465515 4772 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.465528 4772 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") 
on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.465541 4772 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.465555 4772 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.465569 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.465581 4772 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.465595 4772 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.465607 4772 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.465621 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.465633 4772 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.465646 4772 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.465660 4772 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.465671 4772 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.465684 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.465702 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: 
\"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.465714 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.465727 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.465740 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.465753 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.466027 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.461555 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.466235 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.461591 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.461608 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.461736 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.461766 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.461784 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.461818 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.461997 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.462105 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.462115 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.462227 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.462258 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.462328 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.462433 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.462528 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.462592 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.462677 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.462713 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.462788 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.462861 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.463015 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.463107 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.463256 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.463283 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.463383 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.466797 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.463464 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.463560 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.463576 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.463613 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.463759 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.464030 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.464460 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.464789 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.465057 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.465487 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.466892 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.465650 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.465899 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.465946 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.466117 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.466214 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.466641 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.466653 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.467123 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.467156 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.467247 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.467454 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). 
InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.467540 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.467364 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.467669 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.467693 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.467726 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.467816 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.468113 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.470457 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.470483 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.470490 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.470540 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.470609 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.470862 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.470994 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.471142 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). 
InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.471180 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.471239 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.471298 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.471320 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.471328 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.471341 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:29 crc kubenswrapper[4772]: E1122 10:38:29.471344 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:38:29.971323625 +0000 UTC m=+30.210768119 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.471352 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:29Z","lastTransitionTime":"2025-11-22T10:38:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.471623 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.471994 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.472040 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.472346 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.472614 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.474371 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.474384 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.474603 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.474706 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.474810 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.475073 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.477547 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.477562 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.477753 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.477837 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.477924 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.478113 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). 
InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.478129 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.478278 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.478503 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.478525 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.478561 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.478762 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.478786 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.478937 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). 
InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.478965 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.478989 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.479138 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.479212 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.479299 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.479387 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.479476 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: E1122 10:38:29.479550 4772 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 10:38:29 crc kubenswrapper[4772]: E1122 10:38:29.479597 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 10:38:29.979587735 +0000 UTC m=+30.219032229 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.479720 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.480014 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.480091 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.480211 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.480403 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.480808 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.480934 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.481170 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.481744 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.482001 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.482401 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.482596 4772 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.482644 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.483128 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.483144 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.518696 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.518683 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.518685 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.519114 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.519165 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: E1122 10:38:29.519279 4772 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 10:38:29 crc kubenswrapper[4772]: E1122 10:38:29.519363 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 10:38:30.019343502 +0000 UTC m=+30.258787986 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.519452 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.519542 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.519680 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.519702 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.519763 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.519860 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.519977 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.520002 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.520085 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.520097 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.520380 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.520387 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.520437 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.520641 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.520690 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.520966 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.521062 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.521315 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.521472 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.521813 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.521991 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.524872 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.525104 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.525438 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.525498 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.526014 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.526223 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.526248 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.526403 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.526768 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.527024 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.527451 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.527913 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.528875 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.529378 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.529396 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.529658 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.529762 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.530089 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.530123 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.531845 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.533386 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.534371 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.534964 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.536901 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.538016 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.538242 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.538354 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.538417 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: E1122 10:38:29.538540 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 10:38:29 crc kubenswrapper[4772]: E1122 10:38:29.538563 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 10:38:29 crc kubenswrapper[4772]: E1122 10:38:29.538577 4772 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.538604 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: E1122 10:38:29.538650 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-22 10:38:30.038617205 +0000 UTC m=+30.278061699 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.538734 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.538852 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.539369 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.539557 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.539975 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: E1122 10:38:29.544887 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 10:38:29 crc kubenswrapper[4772]: E1122 10:38:29.544909 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 10:38:29 crc kubenswrapper[4772]: E1122 10:38:29.544920 4772 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 10:38:29 crc kubenswrapper[4772]: E1122 10:38:29.545005 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-22 10:38:30.044983534 +0000 UTC m=+30.284428028 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.550475 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.550809 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.558121 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.561526 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.564749 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.566368 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.566399 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.566458 4772 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.566470 4772 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.566496 4772 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.566514 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.566529 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.566538 4772 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 
crc kubenswrapper[4772]: I1122 10:38:29.566548 4772 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.566556 4772 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.566564 4772 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.566572 4772 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.566580 4772 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.566589 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.566597 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.566605 4772 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.566613 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.566621 4772 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.566629 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.566655 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.566668 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.566736 4772 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.566749 4772 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.566765 4772 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.566781 4772 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.566796 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.566813 4772 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.566826 4772 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.566836 4772 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.566847 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.566857 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.566865 4772 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.566875 4772 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.567020 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.567033 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.567069 4772 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.567082 4772 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.567093 4772 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.567104 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.567116 4772 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.567127 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.567139 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.567150 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.567164 4772 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.567176 4772 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.567192 4772 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.567204 4772 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" 
(UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.567216 4772 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.567227 4772 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.567239 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.567249 4772 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.567261 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.567273 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.567284 4772 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.567298 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.567310 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.567325 4772 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.567336 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.567348 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.567360 4772 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.567370 4772 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.567383 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.567393 4772 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.567404 4772 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.567416 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.567428 4772 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.567437 4772 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.567447 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.567457 4772 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.567499 4772 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.567512 4772 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.567524 4772 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.567546 4772 reconciler_common.go:293] "Volume detached for volume \"images\" 
(UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.567557 4772 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.567569 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.567580 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.567593 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.567607 4772 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.567618 4772 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.567629 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.567641 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.567653 4772 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.567666 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.567678 4772 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.567689 4772 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.567702 4772 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.567719 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.567732 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.567743 4772 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.567755 4772 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.567796 4772 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.567809 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.567821 4772 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.567833 4772 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.567845 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.567857 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.567869 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.567882 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.567893 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node 
\"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.567907 4772 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.567919 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.567930 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.567942 4772 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.567953 4772 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.567964 4772 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.567978 4772 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.567990 4772 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.568002 4772 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.568016 4772 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.568027 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.568039 4772 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.568075 4772 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" 
DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.568088 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.568100 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.568111 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.568123 4772 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.568134 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.568146 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.568157 4772 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.568165 4772 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.568176 4772 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.568187 4772 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.568198 4772 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.568210 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.568222 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc 
kubenswrapper[4772]: I1122 10:38:29.568233 4772 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.568245 4772 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.568254 4772 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.568264 4772 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.568272 4772 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.568284 4772 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.568296 4772 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.568307 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.568319 4772 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.568331 4772 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.568345 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.568357 4772 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.568370 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc 
kubenswrapper[4772]: I1122 10:38:29.568381 4772 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.568392 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.568402 4772 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.568411 4772 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.568420 4772 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.568428 4772 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.568436 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.568445 4772 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.568454 4772 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.568462 4772 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.568471 4772 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.568487 4772 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.568496 4772 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.568504 4772 
reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.568513 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.568522 4772 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.568532 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.568541 4772 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.568550 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.568558 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.568566 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.568575 4772 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.568583 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.568591 4772 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.568599 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.568607 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 
10:38:29.568616 4772 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.568624 4772 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.568633 4772 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.568641 4772 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.568649 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.568657 4772 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.575321 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.575371 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.575385 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.575403 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.575413 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:29Z","lastTransitionTime":"2025-11-22T10:38:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.575460 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.585209 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.596196 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.639315 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.640558 4772 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.640628 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.643716 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.650973 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.660838 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.674889 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.677422 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.677493 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.677526 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.677538 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.677555 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.677568 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:29Z","lastTransitionTime":"2025-11-22T10:38:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.683689 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.685932 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.690715 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 22 10:38:29 crc kubenswrapper[4772]: W1122 10:38:29.693652 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-9dcd23729410e25089583ac7e14b524dc9e104ef2e8c48fa0f79a016fbebe370 WatchSource:0}: Error finding container 9dcd23729410e25089583ac7e14b524dc9e104ef2e8c48fa0f79a016fbebe370: Status 404 returned error can't find the container with id 9dcd23729410e25089583ac7e14b524dc9e104ef2e8c48fa0f79a016fbebe370 Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.697086 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.707324 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.716455 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.726821 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.736506 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.748746 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.758625 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.769752 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.779399 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.779429 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.779439 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.779455 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.779466 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:29Z","lastTransitionTime":"2025-11-22T10:38:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.797593 4772 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-11-22 10:33:28 +0000 UTC, rotation deadline is 2026-10-05 03:39:50.750849329 +0000 UTC Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.797675 4772 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7601h1m20.953176643s for next certificate rotation Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.830791 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-9qv88"] Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.831119 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-9qv88" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.832640 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.833736 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.834088 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.836526 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.847646 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.858846 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.870367 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.870719 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/751566de-1913-4dd6-9054-21febc661c27-hosts-file\") pod \"node-resolver-9qv88\" (UID: \"751566de-1913-4dd6-9054-21febc661c27\") " pod="openshift-dns/node-resolver-9qv88" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.870791 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jckg9\" (UniqueName: \"kubernetes.io/projected/751566de-1913-4dd6-9054-21febc661c27-kube-api-access-jckg9\") pod \"node-resolver-9qv88\" (UID: \"751566de-1913-4dd6-9054-21febc661c27\") " pod="openshift-dns/node-resolver-9qv88" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.884774 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.884819 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.884829 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.884851 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.884865 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:29Z","lastTransitionTime":"2025-11-22T10:38:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.887475 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.898953 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.914323 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.925889 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9qv88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"751566de-1913-4dd6-9054-21febc661c27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jckg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9qv88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:29 crc 
kubenswrapper[4772]: I1122 10:38:29.972138 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.972259 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/751566de-1913-4dd6-9054-21febc661c27-hosts-file\") pod \"node-resolver-9qv88\" (UID: \"751566de-1913-4dd6-9054-21febc661c27\") " pod="openshift-dns/node-resolver-9qv88" Nov 22 10:38:29 crc kubenswrapper[4772]: E1122 10:38:29.972282 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:38:30.972259311 +0000 UTC m=+31.211703795 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.972305 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jckg9\" (UniqueName: \"kubernetes.io/projected/751566de-1913-4dd6-9054-21febc661c27-kube-api-access-jckg9\") pod \"node-resolver-9qv88\" (UID: \"751566de-1913-4dd6-9054-21febc661c27\") " pod="openshift-dns/node-resolver-9qv88" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.972328 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/751566de-1913-4dd6-9054-21febc661c27-hosts-file\") pod \"node-resolver-9qv88\" (UID: \"751566de-1913-4dd6-9054-21febc661c27\") " pod="openshift-dns/node-resolver-9qv88" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.990968 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jckg9\" (UniqueName: \"kubernetes.io/projected/751566de-1913-4dd6-9054-21febc661c27-kube-api-access-jckg9\") pod \"node-resolver-9qv88\" (UID: \"751566de-1913-4dd6-9054-21febc661c27\") " pod="openshift-dns/node-resolver-9qv88" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.995322 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.995367 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.995379 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.995396 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:29 crc kubenswrapper[4772]: I1122 10:38:29.995408 4772 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:29Z","lastTransitionTime":"2025-11-22T10:38:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:30 crc kubenswrapper[4772]: I1122 10:38:30.073637 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:38:30 crc kubenswrapper[4772]: I1122 10:38:30.073689 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 10:38:30 crc kubenswrapper[4772]: I1122 10:38:30.073736 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 10:38:30 crc kubenswrapper[4772]: E1122 10:38:30.073780 4772 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 10:38:30 crc kubenswrapper[4772]: I1122 10:38:30.073792 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:38:30 crc kubenswrapper[4772]: E1122 10:38:30.073903 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 10:38:31.073831573 +0000 UTC m=+31.313276057 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 10:38:30 crc kubenswrapper[4772]: E1122 10:38:30.073906 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 10:38:30 crc kubenswrapper[4772]: E1122 10:38:30.073939 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 10:38:30 crc kubenswrapper[4772]: E1122 10:38:30.073945 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 10:38:30 crc kubenswrapper[4772]: E1122 10:38:30.073959 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 10:38:30 crc kubenswrapper[4772]: E1122 10:38:30.073965 4772 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 10:38:30 crc kubenswrapper[4772]: E1122 10:38:30.073973 4772 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 10:38:30 crc kubenswrapper[4772]: E1122 10:38:30.074031 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-22 10:38:31.074010978 +0000 UTC m=+31.313455472 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 10:38:30 crc kubenswrapper[4772]: E1122 10:38:30.074072 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-22 10:38:31.074043039 +0000 UTC m=+31.313487533 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 10:38:30 crc kubenswrapper[4772]: E1122 10:38:30.074101 4772 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 10:38:30 crc kubenswrapper[4772]: E1122 10:38:30.074147 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 10:38:31.074129121 +0000 UTC m=+31.313573685 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 10:38:30 crc kubenswrapper[4772]: I1122 10:38:30.098004 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:30 crc kubenswrapper[4772]: I1122 10:38:30.098038 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:30 crc kubenswrapper[4772]: I1122 10:38:30.098059 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:30 crc kubenswrapper[4772]: I1122 10:38:30.098074 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:30 crc kubenswrapper[4772]: I1122 10:38:30.098083 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:30Z","lastTransitionTime":"2025-11-22T10:38:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:30 crc kubenswrapper[4772]: I1122 10:38:30.147791 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-9qv88" Nov 22 10:38:30 crc kubenswrapper[4772]: W1122 10:38:30.158699 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod751566de_1913_4dd6_9054_21febc661c27.slice/crio-4fed5fb1750562ac117b0469d08eae139b223608a631676ee4670f25ac453ff6 WatchSource:0}: Error finding container 4fed5fb1750562ac117b0469d08eae139b223608a631676ee4670f25ac453ff6: Status 404 returned error can't find the container with id 4fed5fb1750562ac117b0469d08eae139b223608a631676ee4670f25ac453ff6 Nov 22 10:38:30 crc kubenswrapper[4772]: I1122 10:38:30.199961 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:30 crc kubenswrapper[4772]: I1122 10:38:30.199992 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:30 crc kubenswrapper[4772]: I1122 10:38:30.200016 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:30 crc kubenswrapper[4772]: I1122 10:38:30.200029 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:30 crc kubenswrapper[4772]: I1122 10:38:30.200038 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:30Z","lastTransitionTime":"2025-11-22T10:38:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:30 crc kubenswrapper[4772]: I1122 10:38:30.303003 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:30 crc kubenswrapper[4772]: I1122 10:38:30.303071 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:30 crc kubenswrapper[4772]: I1122 10:38:30.303090 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:30 crc kubenswrapper[4772]: I1122 10:38:30.303112 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:30 crc kubenswrapper[4772]: I1122 10:38:30.303129 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:30Z","lastTransitionTime":"2025-11-22T10:38:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:30 crc kubenswrapper[4772]: I1122 10:38:30.404822 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:30 crc kubenswrapper[4772]: I1122 10:38:30.404881 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:30 crc kubenswrapper[4772]: I1122 10:38:30.404891 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:30 crc kubenswrapper[4772]: I1122 10:38:30.404907 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:30 crc kubenswrapper[4772]: I1122 10:38:30.404934 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:30Z","lastTransitionTime":"2025-11-22T10:38:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:30 crc kubenswrapper[4772]: I1122 10:38:30.413136 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 10:38:30 crc kubenswrapper[4772]: E1122 10:38:30.413253 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 10:38:30 crc kubenswrapper[4772]: I1122 10:38:30.508059 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:30 crc kubenswrapper[4772]: I1122 10:38:30.508100 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:30 crc kubenswrapper[4772]: I1122 10:38:30.508107 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:30 crc kubenswrapper[4772]: I1122 10:38:30.508122 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:30 crc kubenswrapper[4772]: I1122 10:38:30.508131 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:30Z","lastTransitionTime":"2025-11-22T10:38:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:30 crc kubenswrapper[4772]: I1122 10:38:30.560719 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"2e05750665701a1973e974ba759b2a0ef54a89b24d63d33a9fce59e9aa7a21ff"} Nov 22 10:38:30 crc kubenswrapper[4772]: I1122 10:38:30.560772 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"4236c1eaba79b32642752f74995e95180f47342dcf1c9bda36cf56078e81187b"} Nov 22 10:38:30 crc kubenswrapper[4772]: I1122 10:38:30.562788 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Nov 22 10:38:30 crc kubenswrapper[4772]: I1122 10:38:30.563422 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 22 10:38:30 crc kubenswrapper[4772]: I1122 10:38:30.564995 4772 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216" exitCode=255 Nov 22 10:38:30 crc kubenswrapper[4772]: I1122 10:38:30.565026 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216"} Nov 22 10:38:30 crc kubenswrapper[4772]: I1122 10:38:30.565098 4772 scope.go:117] "RemoveContainer" containerID="62316e945792f6352882cba066715b7939d71a05090dd96b2285983b929c0b59" Nov 22 10:38:30 crc kubenswrapper[4772]: I1122 10:38:30.565931 4772 scope.go:117] "RemoveContainer" containerID="da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216" Nov 22 10:38:30 crc kubenswrapper[4772]: E1122 10:38:30.566445 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Nov 22 10:38:30 crc kubenswrapper[4772]: I1122 10:38:30.566614 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"19c772778285f78e2a258d4224421c5fa17ebb596050ddf1e3a0c215c2d618f0"} Nov 22 10:38:30 crc kubenswrapper[4772]: I1122 10:38:30.567712 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"9dcd23729410e25089583ac7e14b524dc9e104ef2e8c48fa0f79a016fbebe370"} Nov 22 10:38:30 crc kubenswrapper[4772]: I1122 10:38:30.568676 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-9qv88" 
event={"ID":"751566de-1913-4dd6-9054-21febc661c27","Type":"ContainerStarted","Data":"4fed5fb1750562ac117b0469d08eae139b223608a631676ee4670f25ac453ff6"} Nov 22 10:38:30 crc kubenswrapper[4772]: I1122 10:38:30.575319 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:30 crc kubenswrapper[4772]: I1122 10:38:30.585143 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:30 crc kubenswrapper[4772]: I1122 10:38:30.595520 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:30 crc kubenswrapper[4772]: I1122 10:38:30.605987 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:30 crc kubenswrapper[4772]: I1122 10:38:30.610525 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:30 crc kubenswrapper[4772]: I1122 10:38:30.610552 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:30 crc kubenswrapper[4772]: I1122 10:38:30.610578 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:30 crc kubenswrapper[4772]: I1122 10:38:30.610591 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:30 crc kubenswrapper[4772]: I1122 10:38:30.610600 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:30Z","lastTransitionTime":"2025-11-22T10:38:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:30 crc kubenswrapper[4772]: I1122 10:38:30.613920 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:30 crc kubenswrapper[4772]: I1122 10:38:30.628330 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9557fb28-d21d-45a3-b7b9-170936e83f56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a77d132d925d70472ff5456bf4521c0612c319296dd1f3fa31c68d3c84fb347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddce747c72c887bb7dff7503755d81d0e65305d2545f93dcb0d20bd6bf32b495\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187f5633fd42f380d4976965809b538530acef067adc7e02203f95df532dbbfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62316e945792f6352882cba066715b7939d71a05090dd96b2285983b929c0b59\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T10:38:11Z\\\",\\\"message\\\":\\\"W1122 10:38:10.392307 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1122 
10:38:10.392680 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763807890 cert, and key in /tmp/serving-cert-1429776251/serving-signer.crt, /tmp/serving-cert-1429776251/serving-signer.key\\\\nI1122 10:38:10.801564 1 observer_polling.go:159] Starting file observer\\\\nW1122 10:38:10.804471 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1122 10:38:10.804741 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 10:38:10.933179 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1429776251/tls.crt::/tmp/serving-cert-1429776251/tls.key\\\\\\\"\\\\nF1122 10:38:11.299563 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 10:38:29.060158 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 10:38:29.060179 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 10:38:29.060386 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\"\\\\nI1122 10:38:29.060380 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763807893\\\\\\\\\\\\\\\" (2025-11-22 10:38:12 +0000 UTC to 2025-12-22 10:38:13 +0000 UTC (now=2025-11-22 10:38:29.060343851 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060557 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763807909\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763807908\\\\\\\\\\\\\\\" (2025-11-22 09:38:28 +0000 UTC to 2026-11-22 09:38:28 +0000 UTC (now=2025-11-22 10:38:29.060522446 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060582 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 10:38:29.060604 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 10:38:29.060636 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 10:38:29.062066 1 reflector.go:368] Caches populated 
for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062334 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062385 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 10:38:29.063702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42fc0afe421b18b7212836931c95487807ac0b32f6e87d92e847bd7826aedcd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:30 crc kubenswrapper[4772]: I1122 10:38:30.637159 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:30 crc kubenswrapper[4772]: I1122 10:38:30.643761 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9qv88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"751566de-1913-4dd6-9054-21febc661c27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jckg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9qv88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:30 crc kubenswrapper[4772]: I1122 10:38:30.713445 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:30 crc kubenswrapper[4772]: I1122 10:38:30.713476 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:30 crc kubenswrapper[4772]: I1122 10:38:30.713486 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:30 crc kubenswrapper[4772]: I1122 10:38:30.713501 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:30 crc kubenswrapper[4772]: I1122 10:38:30.713512 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:30Z","lastTransitionTime":"2025-11-22T10:38:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:30 crc kubenswrapper[4772]: I1122 10:38:30.816218 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:30 crc kubenswrapper[4772]: I1122 10:38:30.816251 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:30 crc kubenswrapper[4772]: I1122 10:38:30.816259 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:30 crc kubenswrapper[4772]: I1122 10:38:30.816274 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:30 crc kubenswrapper[4772]: I1122 10:38:30.816283 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:30Z","lastTransitionTime":"2025-11-22T10:38:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:30 crc kubenswrapper[4772]: I1122 10:38:30.919001 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:30 crc kubenswrapper[4772]: I1122 10:38:30.919039 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:30 crc kubenswrapper[4772]: I1122 10:38:30.919067 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:30 crc kubenswrapper[4772]: I1122 10:38:30.919086 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:30 crc kubenswrapper[4772]: I1122 10:38:30.919095 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:30Z","lastTransitionTime":"2025-11-22T10:38:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:30 crc kubenswrapper[4772]: I1122 10:38:30.944364 4772 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Nov 22 10:38:30 crc kubenswrapper[4772]: I1122 10:38:30.980791 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:38:30 crc kubenswrapper[4772]: E1122 10:38:30.981066 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:38:32.981010918 +0000 UTC m=+33.220455432 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.024156 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.024196 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.024210 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.024228 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.024239 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:31Z","lastTransitionTime":"2025-11-22T10:38:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.081494 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.081561 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.081586 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.081605 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 10:38:31 crc kubenswrapper[4772]: E1122 10:38:31.081773 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 10:38:31 crc kubenswrapper[4772]: E1122 10:38:31.081783 4772 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 10:38:31 crc kubenswrapper[4772]: E1122 10:38:31.081810 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 10:38:31 crc kubenswrapper[4772]: E1122 10:38:31.081813 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 10:38:31 crc kubenswrapper[4772]: E1122 10:38:31.081855 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 10:38:31 crc kubenswrapper[4772]: E1122 10:38:31.081872 4772 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 10:38:31 crc kubenswrapper[4772]: E1122 10:38:31.081879 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 10:38:33.081853801 +0000 UTC m=+33.321298295 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 10:38:31 crc kubenswrapper[4772]: E1122 10:38:31.081836 4772 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 10:38:31 crc kubenswrapper[4772]: E1122 10:38:31.081950 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-22 10:38:33.081927923 +0000 UTC m=+33.321372417 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 10:38:31 crc kubenswrapper[4772]: E1122 10:38:31.081783 4772 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 10:38:31 crc kubenswrapper[4772]: E1122 10:38:31.082055 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-22 10:38:33.082012135 +0000 UTC m=+33.321456829 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 10:38:31 crc kubenswrapper[4772]: E1122 10:38:31.082102 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 10:38:33.082085567 +0000 UTC m=+33.321530061 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.126606 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.126672 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.126690 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.126716 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.126737 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:31Z","lastTransitionTime":"2025-11-22T10:38:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.206753 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-s4mvm"] Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.207182 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-wwshd"] Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.207346 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-s4mvm" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.207461 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.209241 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-z6xtb"] Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.209727 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-z6xtb" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.210541 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.210753 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.211385 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.212434 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.213394 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.213829 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.214282 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.214451 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.214519 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.214640 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.214754 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.219388 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.228791 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 
10:38:31.228826 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.228834 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.228850 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.228860 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:31Z","lastTransitionTime":"2025-11-22T10:38:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.232150 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.244596 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.254176 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.266084 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.275073 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9qv88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"751566de-1913-4dd6-9054-21febc661c27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jckg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9qv88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:31 crc 
kubenswrapper[4772]: I1122 10:38:31.282902 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e11c7f86-73db-4015-9fe5-c0b5047c19a0-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-z6xtb\" (UID: \"e11c7f86-73db-4015-9fe5-c0b5047c19a0\") " pod="openshift-multus/multus-additional-cni-plugins-z6xtb" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.282981 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d73fd58d-561a-4b16-9f9d-49ae966edb24-etc-kubernetes\") pod \"multus-s4mvm\" (UID: \"d73fd58d-561a-4b16-9f9d-49ae966edb24\") " pod="openshift-multus/multus-s4mvm" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.283013 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/d73fd58d-561a-4b16-9f9d-49ae966edb24-host-run-multus-certs\") pod \"multus-s4mvm\" (UID: \"d73fd58d-561a-4b16-9f9d-49ae966edb24\") " pod="openshift-multus/multus-s4mvm" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.283103 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/d73fd58d-561a-4b16-9f9d-49ae966edb24-host-run-k8s-cni-cncf-io\") pod \"multus-s4mvm\" (UID: \"d73fd58d-561a-4b16-9f9d-49ae966edb24\") " pod="openshift-multus/multus-s4mvm" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.283131 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/d73fd58d-561a-4b16-9f9d-49ae966edb24-host-var-lib-cni-multus\") pod \"multus-s4mvm\" (UID: \"d73fd58d-561a-4b16-9f9d-49ae966edb24\") " pod="openshift-multus/multus-s4mvm" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.283158 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d73fd58d-561a-4b16-9f9d-49ae966edb24-host-var-lib-kubelet\") pod \"multus-s4mvm\" (UID: \"d73fd58d-561a-4b16-9f9d-49ae966edb24\") " pod="openshift-multus/multus-s4mvm" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.283180 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvzd4\" (UniqueName: \"kubernetes.io/projected/2386c238-461f-4956-940f-ac3c26eb052e-kube-api-access-zvzd4\") pod \"machine-config-daemon-wwshd\" (UID: \"2386c238-461f-4956-940f-ac3c26eb052e\") " pod="openshift-machine-config-operator/machine-config-daemon-wwshd" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.283202 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d73fd58d-561a-4b16-9f9d-49ae966edb24-system-cni-dir\") pod \"multus-s4mvm\" (UID: \"d73fd58d-561a-4b16-9f9d-49ae966edb24\") " pod="openshift-multus/multus-s4mvm" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.283223 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/d73fd58d-561a-4b16-9f9d-49ae966edb24-multus-daemon-config\") pod \"multus-s4mvm\" (UID: 
\"d73fd58d-561a-4b16-9f9d-49ae966edb24\") " pod="openshift-multus/multus-s4mvm" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.283302 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e11c7f86-73db-4015-9fe5-c0b5047c19a0-system-cni-dir\") pod \"multus-additional-cni-plugins-z6xtb\" (UID: \"e11c7f86-73db-4015-9fe5-c0b5047c19a0\") " pod="openshift-multus/multus-additional-cni-plugins-z6xtb" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.283353 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e11c7f86-73db-4015-9fe5-c0b5047c19a0-cnibin\") pod \"multus-additional-cni-plugins-z6xtb\" (UID: \"e11c7f86-73db-4015-9fe5-c0b5047c19a0\") " pod="openshift-multus/multus-additional-cni-plugins-z6xtb" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.283393 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e11c7f86-73db-4015-9fe5-c0b5047c19a0-os-release\") pod \"multus-additional-cni-plugins-z6xtb\" (UID: \"e11c7f86-73db-4015-9fe5-c0b5047c19a0\") " pod="openshift-multus/multus-additional-cni-plugins-z6xtb" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.283443 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/d73fd58d-561a-4b16-9f9d-49ae966edb24-hostroot\") pod \"multus-s4mvm\" (UID: \"d73fd58d-561a-4b16-9f9d-49ae966edb24\") " pod="openshift-multus/multus-s4mvm" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.283476 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/d73fd58d-561a-4b16-9f9d-49ae966edb24-multus-socket-dir-parent\") pod \"multus-s4mvm\" (UID: \"d73fd58d-561a-4b16-9f9d-49ae966edb24\") " pod="openshift-multus/multus-s4mvm" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.283494 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2386c238-461f-4956-940f-ac3c26eb052e-mcd-auth-proxy-config\") pod \"machine-config-daemon-wwshd\" (UID: \"2386c238-461f-4956-940f-ac3c26eb052e\") " pod="openshift-machine-config-operator/machine-config-daemon-wwshd" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.283509 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d73fd58d-561a-4b16-9f9d-49ae966edb24-multus-conf-dir\") pod \"multus-s4mvm\" (UID: \"d73fd58d-561a-4b16-9f9d-49ae966edb24\") " pod="openshift-multus/multus-s4mvm" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.283526 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d73fd58d-561a-4b16-9f9d-49ae966edb24-os-release\") pod \"multus-s4mvm\" (UID: \"d73fd58d-561a-4b16-9f9d-49ae966edb24\") " pod="openshift-multus/multus-s4mvm" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.283542 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/d73fd58d-561a-4b16-9f9d-49ae966edb24-cni-binary-copy\") pod \"multus-s4mvm\" (UID: \"d73fd58d-561a-4b16-9f9d-49ae966edb24\") " pod="openshift-multus/multus-s4mvm" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.283555 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d73fd58d-561a-4b16-9f9d-49ae966edb24-multus-cni-dir\") pod \"multus-s4mvm\" (UID: \"d73fd58d-561a-4b16-9f9d-49ae966edb24\") " pod="openshift-multus/multus-s4mvm" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.283574 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9vq5\" (UniqueName: \"kubernetes.io/projected/e11c7f86-73db-4015-9fe5-c0b5047c19a0-kube-api-access-v9vq5\") pod \"multus-additional-cni-plugins-z6xtb\" (UID: \"e11c7f86-73db-4015-9fe5-c0b5047c19a0\") " pod="openshift-multus/multus-additional-cni-plugins-z6xtb" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.283616 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d73fd58d-561a-4b16-9f9d-49ae966edb24-host-var-lib-cni-bin\") pod \"multus-s4mvm\" (UID: \"d73fd58d-561a-4b16-9f9d-49ae966edb24\") " pod="openshift-multus/multus-s4mvm" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.283645 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vk9g\" (UniqueName: \"kubernetes.io/projected/d73fd58d-561a-4b16-9f9d-49ae966edb24-kube-api-access-7vk9g\") pod \"multus-s4mvm\" (UID: \"d73fd58d-561a-4b16-9f9d-49ae966edb24\") " pod="openshift-multus/multus-s4mvm" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.283678 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e11c7f86-73db-4015-9fe5-c0b5047c19a0-tuning-conf-dir\") pod \"multus-additional-cni-plugins-z6xtb\" (UID: \"e11c7f86-73db-4015-9fe5-c0b5047c19a0\") " pod="openshift-multus/multus-additional-cni-plugins-z6xtb" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.283703 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/2386c238-461f-4956-940f-ac3c26eb052e-rootfs\") pod \"machine-config-daemon-wwshd\" (UID: \"2386c238-461f-4956-940f-ac3c26eb052e\") " pod="openshift-machine-config-operator/machine-config-daemon-wwshd" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.283724 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2386c238-461f-4956-940f-ac3c26eb052e-proxy-tls\") pod \"machine-config-daemon-wwshd\" (UID: \"2386c238-461f-4956-940f-ac3c26eb052e\") " pod="openshift-machine-config-operator/machine-config-daemon-wwshd" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.283745 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e11c7f86-73db-4015-9fe5-c0b5047c19a0-cni-binary-copy\") pod \"multus-additional-cni-plugins-z6xtb\" (UID: \"e11c7f86-73db-4015-9fe5-c0b5047c19a0\") " pod="openshift-multus/multus-additional-cni-plugins-z6xtb" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 
10:38:31.283768 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d73fd58d-561a-4b16-9f9d-49ae966edb24-cnibin\") pod \"multus-s4mvm\" (UID: \"d73fd58d-561a-4b16-9f9d-49ae966edb24\") " pod="openshift-multus/multus-s4mvm" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.283788 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d73fd58d-561a-4b16-9f9d-49ae966edb24-host-run-netns\") pod \"multus-s4mvm\" (UID: \"d73fd58d-561a-4b16-9f9d-49ae966edb24\") " pod="openshift-multus/multus-s4mvm" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.285872 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s4mvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d73fd58d-561a-4b16-9f9d-49ae966edb24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s4mvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.298241 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9557fb28-d21d-45a3-b7b9-170936e83f56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a77d132d925d70472ff5456bf4521c0612c319296dd1f3fa31c68d3c84fb347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddce747c72c887bb7dff7503755d81d0e65305d2545f93dcb0d20bd6bf32b495\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187f5633fd42f380d4976965809b538530acef067adc7e02203f95df532dbbfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://62316e945792f6352882cba066715b7939d71a05090dd96b2285983b929c0b59\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T10:38:11Z\\\",\\\"message\\\":\\\"W1122 10:38:10.392307 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1122 10:38:10.392680 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763807890 cert, and key in /tmp/serving-cert-1429776251/serving-signer.crt, /tmp/serving-cert-1429776251/serving-signer.key\\\\nI1122 10:38:10.801564 1 observer_polling.go:159] Starting file observer\\\\nW1122 10:38:10.804471 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1122 10:38:10.804741 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 10:38:10.933179 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1429776251/tls.crt::/tmp/serving-cert-1429776251/tls.key\\\\\\\"\\\\nF1122 10:38:11.299563 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 10:38:29.060158 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 10:38:29.060179 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 10:38:29.060386 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\"\\\\nI1122 10:38:29.060380 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763807893\\\\\\\\\\\\\\\" (2025-11-22 10:38:12 +0000 UTC to 2025-12-22 10:38:13 +0000 UTC (now=2025-11-22 10:38:29.060343851 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060557 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763807909\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763807908\\\\\\\\\\\\\\\" (2025-11-22 09:38:28 +0000 UTC to 2026-11-22 09:38:28 +0000 UTC (now=2025-11-22 10:38:29.060522446 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060582 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 
10:38:29.060604 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 10:38:29.060636 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 10:38:29.062066 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062334 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062385 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 10:38:29.063702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42fc0afe421b18b7212836931c95487807ac0b32f6e87d92e847bd7826aedcd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.309327 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2386c238-461f-4956-940f-ac3c26eb052e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwshd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.320900 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.330367 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.335145 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.335205 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.335217 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.335234 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.335245 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:31Z","lastTransitionTime":"2025-11-22T10:38:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.342409 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.352571 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.364068 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.372975 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.384512 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d73fd58d-561a-4b16-9f9d-49ae966edb24-os-release\") pod \"multus-s4mvm\" (UID: \"d73fd58d-561a-4b16-9f9d-49ae966edb24\") " pod="openshift-multus/multus-s4mvm" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.384664 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d73fd58d-561a-4b16-9f9d-49ae966edb24-cni-binary-copy\") pod \"multus-s4mvm\" (UID: \"d73fd58d-561a-4b16-9f9d-49ae966edb24\") " pod="openshift-multus/multus-s4mvm" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.384698 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d73fd58d-561a-4b16-9f9d-49ae966edb24-os-release\") pod \"multus-s4mvm\" (UID: \"d73fd58d-561a-4b16-9f9d-49ae966edb24\") " pod="openshift-multus/multus-s4mvm" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.384748 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d73fd58d-561a-4b16-9f9d-49ae966edb24-multus-conf-dir\") pod \"multus-s4mvm\" (UID: \"d73fd58d-561a-4b16-9f9d-49ae966edb24\") " pod="openshift-multus/multus-s4mvm" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.384847 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d73fd58d-561a-4b16-9f9d-49ae966edb24-multus-cni-dir\") pod \"multus-s4mvm\" (UID: \"d73fd58d-561a-4b16-9f9d-49ae966edb24\") " pod="openshift-multus/multus-s4mvm" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.384876 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.384932 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9vq5\" (UniqueName: \"kubernetes.io/projected/e11c7f86-73db-4015-9fe5-c0b5047c19a0-kube-api-access-v9vq5\") pod \"multus-additional-cni-plugins-z6xtb\" (UID: \"e11c7f86-73db-4015-9fe5-c0b5047c19a0\") " pod="openshift-multus/multus-additional-cni-plugins-z6xtb" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.385041 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d73fd58d-561a-4b16-9f9d-49ae966edb24-multus-conf-dir\") pod \"multus-s4mvm\" (UID: \"d73fd58d-561a-4b16-9f9d-49ae966edb24\") " pod="openshift-multus/multus-s4mvm" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.385162 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d73fd58d-561a-4b16-9f9d-49ae966edb24-host-var-lib-cni-bin\") pod \"multus-s4mvm\" (UID: \"d73fd58d-561a-4b16-9f9d-49ae966edb24\") " pod="openshift-multus/multus-s4mvm" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.385205 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d73fd58d-561a-4b16-9f9d-49ae966edb24-multus-cni-dir\") pod \"multus-s4mvm\" (UID: \"d73fd58d-561a-4b16-9f9d-49ae966edb24\") " pod="openshift-multus/multus-s4mvm" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.384958 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d73fd58d-561a-4b16-9f9d-49ae966edb24-host-var-lib-cni-bin\") pod \"multus-s4mvm\" (UID: \"d73fd58d-561a-4b16-9f9d-49ae966edb24\") " 
pod="openshift-multus/multus-s4mvm" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.385317 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vk9g\" (UniqueName: \"kubernetes.io/projected/d73fd58d-561a-4b16-9f9d-49ae966edb24-kube-api-access-7vk9g\") pod \"multus-s4mvm\" (UID: \"d73fd58d-561a-4b16-9f9d-49ae966edb24\") " pod="openshift-multus/multus-s4mvm" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.385383 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e11c7f86-73db-4015-9fe5-c0b5047c19a0-tuning-conf-dir\") pod \"multus-additional-cni-plugins-z6xtb\" (UID: \"e11c7f86-73db-4015-9fe5-c0b5047c19a0\") " pod="openshift-multus/multus-additional-cni-plugins-z6xtb" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.385421 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2386c238-461f-4956-940f-ac3c26eb052e-proxy-tls\") pod \"machine-config-daemon-wwshd\" (UID: \"2386c238-461f-4956-940f-ac3c26eb052e\") " pod="openshift-machine-config-operator/machine-config-daemon-wwshd" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.385446 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e11c7f86-73db-4015-9fe5-c0b5047c19a0-cni-binary-copy\") pod \"multus-additional-cni-plugins-z6xtb\" (UID: \"e11c7f86-73db-4015-9fe5-c0b5047c19a0\") " pod="openshift-multus/multus-additional-cni-plugins-z6xtb" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.385470 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d73fd58d-561a-4b16-9f9d-49ae966edb24-cnibin\") pod \"multus-s4mvm\" (UID: \"d73fd58d-561a-4b16-9f9d-49ae966edb24\") " pod="openshift-multus/multus-s4mvm" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.385493 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d73fd58d-561a-4b16-9f9d-49ae966edb24-host-run-netns\") pod \"multus-s4mvm\" (UID: \"d73fd58d-561a-4b16-9f9d-49ae966edb24\") " pod="openshift-multus/multus-s4mvm" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.385494 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d73fd58d-561a-4b16-9f9d-49ae966edb24-cni-binary-copy\") pod \"multus-s4mvm\" (UID: \"d73fd58d-561a-4b16-9f9d-49ae966edb24\") " pod="openshift-multus/multus-s4mvm" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.385543 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/2386c238-461f-4956-940f-ac3c26eb052e-rootfs\") pod \"machine-config-daemon-wwshd\" (UID: \"2386c238-461f-4956-940f-ac3c26eb052e\") " pod="openshift-machine-config-operator/machine-config-daemon-wwshd" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.385515 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/2386c238-461f-4956-940f-ac3c26eb052e-rootfs\") pod \"machine-config-daemon-wwshd\" (UID: \"2386c238-461f-4956-940f-ac3c26eb052e\") " pod="openshift-machine-config-operator/machine-config-daemon-wwshd" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.385588 
4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e11c7f86-73db-4015-9fe5-c0b5047c19a0-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-z6xtb\" (UID: \"e11c7f86-73db-4015-9fe5-c0b5047c19a0\") " pod="openshift-multus/multus-additional-cni-plugins-z6xtb" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.385616 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d73fd58d-561a-4b16-9f9d-49ae966edb24-etc-kubernetes\") pod \"multus-s4mvm\" (UID: \"d73fd58d-561a-4b16-9f9d-49ae966edb24\") " pod="openshift-multus/multus-s4mvm" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.385651 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/d73fd58d-561a-4b16-9f9d-49ae966edb24-host-run-multus-certs\") pod \"multus-s4mvm\" (UID: \"d73fd58d-561a-4b16-9f9d-49ae966edb24\") " pod="openshift-multus/multus-s4mvm" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.385699 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/d73fd58d-561a-4b16-9f9d-49ae966edb24-host-run-k8s-cni-cncf-io\") pod \"multus-s4mvm\" (UID: \"d73fd58d-561a-4b16-9f9d-49ae966edb24\") " pod="openshift-multus/multus-s4mvm" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.385726 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/d73fd58d-561a-4b16-9f9d-49ae966edb24-host-var-lib-cni-multus\") pod \"multus-s4mvm\" (UID: \"d73fd58d-561a-4b16-9f9d-49ae966edb24\") " pod="openshift-multus/multus-s4mvm" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.385762 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d73fd58d-561a-4b16-9f9d-49ae966edb24-host-var-lib-kubelet\") pod \"multus-s4mvm\" (UID: \"d73fd58d-561a-4b16-9f9d-49ae966edb24\") " pod="openshift-multus/multus-s4mvm" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.385789 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvzd4\" (UniqueName: \"kubernetes.io/projected/2386c238-461f-4956-940f-ac3c26eb052e-kube-api-access-zvzd4\") pod \"machine-config-daemon-wwshd\" (UID: \"2386c238-461f-4956-940f-ac3c26eb052e\") " pod="openshift-machine-config-operator/machine-config-daemon-wwshd" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.385817 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d73fd58d-561a-4b16-9f9d-49ae966edb24-system-cni-dir\") pod \"multus-s4mvm\" (UID: \"d73fd58d-561a-4b16-9f9d-49ae966edb24\") " pod="openshift-multus/multus-s4mvm" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.385843 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/d73fd58d-561a-4b16-9f9d-49ae966edb24-multus-daemon-config\") pod \"multus-s4mvm\" (UID: \"d73fd58d-561a-4b16-9f9d-49ae966edb24\") " pod="openshift-multus/multus-s4mvm" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.385871 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d73fd58d-561a-4b16-9f9d-49ae966edb24-etc-kubernetes\") pod \"multus-s4mvm\" (UID: \"d73fd58d-561a-4b16-9f9d-49ae966edb24\") " pod="openshift-multus/multus-s4mvm" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.385881 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e11c7f86-73db-4015-9fe5-c0b5047c19a0-cnibin\") pod \"multus-additional-cni-plugins-z6xtb\" (UID: \"e11c7f86-73db-4015-9fe5-c0b5047c19a0\") " pod="openshift-multus/multus-additional-cni-plugins-z6xtb" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.385908 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e11c7f86-73db-4015-9fe5-c0b5047c19a0-cnibin\") pod \"multus-additional-cni-plugins-z6xtb\" (UID: \"e11c7f86-73db-4015-9fe5-c0b5047c19a0\") " pod="openshift-multus/multus-additional-cni-plugins-z6xtb" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.385934 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e11c7f86-73db-4015-9fe5-c0b5047c19a0-os-release\") pod \"multus-additional-cni-plugins-z6xtb\" (UID: \"e11c7f86-73db-4015-9fe5-c0b5047c19a0\") " pod="openshift-multus/multus-additional-cni-plugins-z6xtb" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.385946 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/d73fd58d-561a-4b16-9f9d-49ae966edb24-host-run-multus-certs\") pod \"multus-s4mvm\" (UID: \"d73fd58d-561a-4b16-9f9d-49ae966edb24\") " pod="openshift-multus/multus-s4mvm" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.385983 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/d73fd58d-561a-4b16-9f9d-49ae966edb24-host-run-k8s-cni-cncf-io\") pod \"multus-s4mvm\" (UID: \"d73fd58d-561a-4b16-9f9d-49ae966edb24\") " pod="openshift-multus/multus-s4mvm" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.385986 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/d73fd58d-561a-4b16-9f9d-49ae966edb24-hostroot\") pod \"multus-s4mvm\" (UID: \"d73fd58d-561a-4b16-9f9d-49ae966edb24\") " pod="openshift-multus/multus-s4mvm" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.386013 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/d73fd58d-561a-4b16-9f9d-49ae966edb24-hostroot\") pod \"multus-s4mvm\" (UID: \"d73fd58d-561a-4b16-9f9d-49ae966edb24\") " pod="openshift-multus/multus-s4mvm" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.386021 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e11c7f86-73db-4015-9fe5-c0b5047c19a0-system-cni-dir\") pod \"multus-additional-cni-plugins-z6xtb\" (UID: \"e11c7f86-73db-4015-9fe5-c0b5047c19a0\") " pod="openshift-multus/multus-additional-cni-plugins-z6xtb" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.386070 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/d73fd58d-561a-4b16-9f9d-49ae966edb24-multus-socket-dir-parent\") pod \"multus-s4mvm\" 
(UID: \"d73fd58d-561a-4b16-9f9d-49ae966edb24\") " pod="openshift-multus/multus-s4mvm" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.386098 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2386c238-461f-4956-940f-ac3c26eb052e-mcd-auth-proxy-config\") pod \"machine-config-daemon-wwshd\" (UID: \"2386c238-461f-4956-940f-ac3c26eb052e\") " pod="openshift-machine-config-operator/machine-config-daemon-wwshd" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.386393 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d73fd58d-561a-4b16-9f9d-49ae966edb24-cnibin\") pod \"multus-s4mvm\" (UID: \"d73fd58d-561a-4b16-9f9d-49ae966edb24\") " pod="openshift-multus/multus-s4mvm" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.386391 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d73fd58d-561a-4b16-9f9d-49ae966edb24-host-run-netns\") pod \"multus-s4mvm\" (UID: \"d73fd58d-561a-4b16-9f9d-49ae966edb24\") " pod="openshift-multus/multus-s4mvm" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.386593 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d73fd58d-561a-4b16-9f9d-49ae966edb24-system-cni-dir\") pod \"multus-s4mvm\" (UID: \"d73fd58d-561a-4b16-9f9d-49ae966edb24\") " pod="openshift-multus/multus-s4mvm" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.386617 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/d73fd58d-561a-4b16-9f9d-49ae966edb24-host-var-lib-cni-multus\") pod \"multus-s4mvm\" (UID: \"d73fd58d-561a-4b16-9f9d-49ae966edb24\") " pod="openshift-multus/multus-s4mvm" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.386688 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d73fd58d-561a-4b16-9f9d-49ae966edb24-host-var-lib-kubelet\") pod \"multus-s4mvm\" (UID: \"d73fd58d-561a-4b16-9f9d-49ae966edb24\") " pod="openshift-multus/multus-s4mvm" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.386725 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e11c7f86-73db-4015-9fe5-c0b5047c19a0-system-cni-dir\") pod \"multus-additional-cni-plugins-z6xtb\" (UID: \"e11c7f86-73db-4015-9fe5-c0b5047c19a0\") " pod="openshift-multus/multus-additional-cni-plugins-z6xtb" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.386777 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e11c7f86-73db-4015-9fe5-c0b5047c19a0-cni-binary-copy\") pod \"multus-additional-cni-plugins-z6xtb\" (UID: \"e11c7f86-73db-4015-9fe5-c0b5047c19a0\") " pod="openshift-multus/multus-additional-cni-plugins-z6xtb" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.386824 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/d73fd58d-561a-4b16-9f9d-49ae966edb24-multus-socket-dir-parent\") pod \"multus-s4mvm\" (UID: \"d73fd58d-561a-4b16-9f9d-49ae966edb24\") " pod="openshift-multus/multus-s4mvm" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.386863 4772 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e11c7f86-73db-4015-9fe5-c0b5047c19a0-os-release\") pod \"multus-additional-cni-plugins-z6xtb\" (UID: \"e11c7f86-73db-4015-9fe5-c0b5047c19a0\") " pod="openshift-multus/multus-additional-cni-plugins-z6xtb" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.387004 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e11c7f86-73db-4015-9fe5-c0b5047c19a0-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-z6xtb\" (UID: \"e11c7f86-73db-4015-9fe5-c0b5047c19a0\") " pod="openshift-multus/multus-additional-cni-plugins-z6xtb" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.387265 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e11c7f86-73db-4015-9fe5-c0b5047c19a0-tuning-conf-dir\") pod \"multus-additional-cni-plugins-z6xtb\" (UID: \"e11c7f86-73db-4015-9fe5-c0b5047c19a0\") " pod="openshift-multus/multus-additional-cni-plugins-z6xtb" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.387700 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2386c238-461f-4956-940f-ac3c26eb052e-mcd-auth-proxy-config\") pod \"machine-config-daemon-wwshd\" (UID: \"2386c238-461f-4956-940f-ac3c26eb052e\") " pod="openshift-machine-config-operator/machine-config-daemon-wwshd" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.387785 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/d73fd58d-561a-4b16-9f9d-49ae966edb24-multus-daemon-config\") pod \"multus-s4mvm\" (UID: \"d73fd58d-561a-4b16-9f9d-49ae966edb24\") " pod="openshift-multus/multus-s4mvm" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.396765 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2386c238-461f-4956-940f-ac3c26eb052e-proxy-tls\") pod \"machine-config-daemon-wwshd\" (UID: \"2386c238-461f-4956-940f-ac3c26eb052e\") " pod="openshift-machine-config-operator/machine-config-daemon-wwshd" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.398661 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9qv88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"751566de-1913-4dd6-9054-21febc661c27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jckg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9qv88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.408836 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vk9g\" (UniqueName: \"kubernetes.io/projected/d73fd58d-561a-4b16-9f9d-49ae966edb24-kube-api-access-7vk9g\") pod \"multus-s4mvm\" (UID: \"d73fd58d-561a-4b16-9f9d-49ae966edb24\") " pod="openshift-multus/multus-s4mvm" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.409791 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvzd4\" (UniqueName: \"kubernetes.io/projected/2386c238-461f-4956-940f-ac3c26eb052e-kube-api-access-zvzd4\") pod \"machine-config-daemon-wwshd\" (UID: \"2386c238-461f-4956-940f-ac3c26eb052e\") " pod="openshift-machine-config-operator/machine-config-daemon-wwshd" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.413023 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 10:38:31 crc kubenswrapper[4772]: E1122 10:38:31.413187 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.413593 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:38:31 crc kubenswrapper[4772]: E1122 10:38:31.413667 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.415389 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9vq5\" (UniqueName: \"kubernetes.io/projected/e11c7f86-73db-4015-9fe5-c0b5047c19a0-kube-api-access-v9vq5\") pod \"multus-additional-cni-plugins-z6xtb\" (UID: \"e11c7f86-73db-4015-9fe5-c0b5047c19a0\") " pod="openshift-multus/multus-additional-cni-plugins-z6xtb" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.415742 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s4mvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d73fd58d-561a-4b16-9f9d-49ae966edb24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"
mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s4mvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.417928 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.418668 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.420021 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.420873 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.421905 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.422520 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.423266 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.424240 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.424937 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.425852 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.426368 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" 
path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.427847 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.428369 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.428926 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9557fb28-d21d-45a3-b7b9-170936e83f56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a77d132d925d70472ff5456bf4521c0612c319296dd1f3fa31c68d3c84fb347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddce747c72c887bb7dff7503755d81d0e65305d2545f93dcb0d20bd6bf32b495\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187f5633fd42f380d4976965809b538530acef067adc7e02203f95df532dbbfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62316e945792f6352882cba066715b7939d71a05090dd96b2285983b929c0b59\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T10:38:11Z\\\",\\\"message\\\":\\\"W1122 10:38:10.392307 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1122 10:38:10.392680 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763807890 cert, and key in /tmp/serving-cert-1429776251/serving-signer.crt, /tmp/serving-cert-1429776251/serving-signer.key\\\\nI1122 10:38:10.801564 1 observer_polling.go:159] Starting file observer\\\\nW1122 10:38:10.804471 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1122 10:38:10.804741 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 10:38:10.933179 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1429776251/tls.crt::/tmp/serving-cert-1429776251/tls.key\\\\\\\"\\\\nF1122 10:38:11.299563 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 10:38:29.060158 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 10:38:29.060179 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" 
feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 10:38:29.060386 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\"\\\\nI1122 10:38:29.060380 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763807893\\\\\\\\\\\\\\\" (2025-11-22 10:38:12 +0000 UTC to 2025-12-22 10:38:13 +0000 UTC (now=2025-11-22 10:38:29.060343851 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060557 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763807909\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763807908\\\\\\\\\\\\\\\" (2025-11-22 09:38:28 +0000 UTC to 2026-11-22 09:38:28 +0000 UTC (now=2025-11-22 10:38:29.060522446 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060582 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 10:38:29.060604 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 10:38:29.060636 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 10:38:29.062066 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062334 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062385 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 10:38:29.063702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42fc0afe421b18b7212836931c95487807ac0b32f6e87d92e847bd7826aedcd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.429491 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.430388 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.430874 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.432716 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.433504 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.434238 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.434947 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.436843 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.437752 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.437973 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.438010 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.438023 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.438038 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.438064 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:31Z","lastTransitionTime":"2025-11-22T10:38:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.439111 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.439929 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.440959 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.441243 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2386c238-461f-4956-940f-ac3c26eb052e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwshd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.441824 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.442980 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.443628 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.444373 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.445479 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.446112 4772 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" 
path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.446301 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.448710 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.449336 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.449921 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.452115 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.452895 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.454087 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.454908 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.456076 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.456271 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z6xtb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e11c7f86-73db-4015-9fe5-c0b5047c19a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plu
gin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z6xtb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 
22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.456798 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.457568 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.458703 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.459803 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.460447 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.461520 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.462202 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.463390 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.464016 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.465021 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.465323 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.465857 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.466531 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.467872 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.468527 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 
10:38:31.474326 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.483252 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.492374 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.501617 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9qv88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"751566de-1913-4dd6-9054-21febc661c27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jckg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9qv88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.510816 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s4mvm" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d73fd58d-561a-4b16-9f9d-49ae966edb24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s4mvm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.520235 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-s4mvm" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.522949 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9557fb28-d21d-45a3-b7b9-170936e83f56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a77d132d925d70472ff5456bf4521c0612c319296dd1f3fa31c68d3c84fb347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddce747c72c887bb7dff7503755d81d0e65305d2545f93dcb0d20bd6bf32b495\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187f5633fd42f380d4976965809b538530acef067adc7e02203f95df532dbbfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc
478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62316e945792f6352882cba066715b7939d71a05090dd96b2285983b929c0b59\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T10:38:11Z\\\",\\\"message\\\":\\\"W1122 10:38:10.392307 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1122 10:38:10.392680 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763807890 cert, and key in /tmp/serving-cert-1429776251/serving-signer.crt, /tmp/serving-cert-1429776251/serving-signer.key\\\\nI1122 10:38:10.801564 1 observer_polling.go:159] Starting file observer\\\\nW1122 10:38:10.804471 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1122 10:38:10.804741 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 10:38:10.933179 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1429776251/tls.crt::/tmp/serving-cert-1429776251/tls.key\\\\\\\"\\\\nF1122 10:38:11.299563 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 10:38:29.060158 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 10:38:29.060179 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 10:38:29.060386 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\"\\\\nI1122 10:38:29.060380 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763807893\\\\\\\\\\\\\\\" (2025-11-22 10:38:12 +0000 UTC to 2025-12-22 10:38:13 +0000 UTC (now=2025-11-22 10:38:29.060343851 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060557 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763807909\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763807908\\\\\\\\\\\\\\\" (2025-11-22 09:38:28 +0000 UTC to 2026-11-22 09:38:28 +0000 UTC (now=2025-11-22 10:38:29.060522446 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060582 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 10:38:29.060604 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 10:38:29.060636 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 10:38:29.062066 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062334 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062385 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 10:38:29.063702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42fc0afe421b18b7212836931c95487807ac0b32f6e87d92e847bd7826aedcd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.532158 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.539222 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-z6xtb" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.542013 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.542452 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.542468 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.542485 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.542498 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:31Z","lastTransitionTime":"2025-11-22T10:38:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.543147 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:31 crc kubenswrapper[4772]: W1122 10:38:31.546948 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2386c238_461f_4956_940f_ac3c26eb052e.slice/crio-f3499df0ccf4b184c83b9c71a7cb92927da2f132ba69ea87cdc71c864c22acbd WatchSource:0}: Error finding container f3499df0ccf4b184c83b9c71a7cb92927da2f132ba69ea87cdc71c864c22acbd: Status 404 returned error can't find the container with id f3499df0ccf4b184c83b9c71a7cb92927da2f132ba69ea87cdc71c864c22acbd Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.557539 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.572729 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" event={"ID":"2386c238-461f-4956-940f-ac3c26eb052e","Type":"ContainerStarted","Data":"f3499df0ccf4b184c83b9c71a7cb92927da2f132ba69ea87cdc71c864c22acbd"} Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.574531 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.578354 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"af3d235e676f9a66e16a76f30d940c652287b80507d2335b94aaad54d2aae071"} Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.578410 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"e2b9e216df73a0c6e89da9a01b70ca7393dea61d59aa4237584add3411d3362d"} Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.584063 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-s4mvm" event={"ID":"d73fd58d-561a-4b16-9f9d-49ae966edb24","Type":"ContainerStarted","Data":"12e9db5bf0667d4429dad2075d25d480f1c952efbbfd26dba49fca86a2165194"} Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.585999 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2386c238-461f-4956-940f-ac3c26eb052e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwshd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.591834 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.595931 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-mfm49"] Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.596977 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.601733 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.601743 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.602301 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.602489 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.602583 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.602739 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.602956 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.605867 4772 scope.go:117] "RemoveContainer" containerID="da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216" Nov 22 10:38:31 crc kubenswrapper[4772]: E1122 10:38:31.606032 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.607154 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z6xtb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e11c7f86-73db-4015-9fe5-c0b5047c19a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z6xtb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.607522 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-9qv88" event={"ID":"751566de-1913-4dd6-9054-21febc661c27","Type":"ContainerStarted","Data":"902be5c4bb0f9aaca4a21ca0af13b2dc29d682c30e1e2d281beb6e4e15b6aa95"} Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.609932 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z6xtb" 
event={"ID":"e11c7f86-73db-4015-9fe5-c0b5047c19a0","Type":"ContainerStarted","Data":"733a6592d9f62818feaf6a414541a10b6feceb2a06f0d426ed86340e54d6444b"} Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.621453 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af3d235e676f9a66e16a76f30d940c652287b80507d2335b94aaad54d2aae071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b9e216df73a0c6e89da9a01b70ca7393dea61d59aa4237584add3411d3362d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.634621 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.657154 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.657322 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.657332 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.657348 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.657357 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:31Z","lastTransitionTime":"2025-11-22T10:38:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.664166 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.689486 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fd84e05e-cfd6-46d5-bd23-30689addcd8b-env-overrides\") pod \"ovnkube-node-mfm49\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.689555 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-systemd-units\") pod \"ovnkube-node-mfm49\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.689571 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-var-lib-openvswitch\") pod \"ovnkube-node-mfm49\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" Nov 22 10:38:31 
crc kubenswrapper[4772]: I1122 10:38:31.689588 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-run-openvswitch\") pod \"ovnkube-node-mfm49\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.689629 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-log-socket\") pod \"ovnkube-node-mfm49\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.689642 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-run-systemd\") pod \"ovnkube-node-mfm49\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.689702 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mfm49\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.689719 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fd84e05e-cfd6-46d5-bd23-30689addcd8b-ovn-node-metrics-cert\") pod \"ovnkube-node-mfm49\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.689746 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-host-slash\") pod \"ovnkube-node-mfm49\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.689760 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-host-kubelet\") pod \"ovnkube-node-mfm49\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.689793 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-host-cni-netd\") pod \"ovnkube-node-mfm49\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.689844 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-etc-openvswitch\") pod \"ovnkube-node-mfm49\" 
(UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.689864 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fd84e05e-cfd6-46d5-bd23-30689addcd8b-ovnkube-config\") pod \"ovnkube-node-mfm49\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.689879 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-run-ovn\") pod \"ovnkube-node-mfm49\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.689893 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-host-run-ovn-kubernetes\") pod \"ovnkube-node-mfm49\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.689913 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkcfw\" (UniqueName: \"kubernetes.io/projected/fd84e05e-cfd6-46d5-bd23-30689addcd8b-kube-api-access-nkcfw\") pod \"ovnkube-node-mfm49\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.689941 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-host-cni-bin\") pod \"ovnkube-node-mfm49\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.689955 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/fd84e05e-cfd6-46d5-bd23-30689addcd8b-ovnkube-script-lib\") pod \"ovnkube-node-mfm49\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.689974 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-host-run-netns\") pod \"ovnkube-node-mfm49\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.690013 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-node-log\") pod \"ovnkube-node-mfm49\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.717471 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd84e05e-cfd6-46d5-bd23-30689addcd8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mfm49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.748164 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.758964 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.759531 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.759560 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.759572 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.759589 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.759599 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:31Z","lastTransitionTime":"2025-11-22T10:38:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.769085 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.777540 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9qv88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"751566de-1913-4dd6-9054-21febc661c27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jckg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9qv88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:31 crc 
kubenswrapper[4772]: I1122 10:38:31.787261 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s4mvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d73fd58d-561a-4b16-9f9d-49ae966edb24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"
2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s4mvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.790787 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-host-cni-netd\") pod \"ovnkube-node-mfm49\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.790831 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-etc-openvswitch\") pod \"ovnkube-node-mfm49\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.790847 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fd84e05e-cfd6-46d5-bd23-30689addcd8b-ovnkube-config\") pod \"ovnkube-node-mfm49\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.790869 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-run-ovn\") pod \"ovnkube-node-mfm49\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.790890 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-host-run-ovn-kubernetes\") pod \"ovnkube-node-mfm49\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.790906 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkcfw\" (UniqueName: \"kubernetes.io/projected/fd84e05e-cfd6-46d5-bd23-30689addcd8b-kube-api-access-nkcfw\") pod \"ovnkube-node-mfm49\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.790921 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-host-cni-bin\") pod \"ovnkube-node-mfm49\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.790942 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/fd84e05e-cfd6-46d5-bd23-30689addcd8b-ovnkube-script-lib\") pod \"ovnkube-node-mfm49\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.790982 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-run-ovn\") pod \"ovnkube-node-mfm49\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.790997 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-host-run-ovn-kubernetes\") pod \"ovnkube-node-mfm49\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.791032 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-host-run-netns\") pod \"ovnkube-node-mfm49\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.790988 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-host-run-netns\") pod \"ovnkube-node-mfm49\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.790997 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-host-cni-netd\") pod \"ovnkube-node-mfm49\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.791091 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-node-log\") pod \"ovnkube-node-mfm49\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.791143 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-host-cni-bin\") pod \"ovnkube-node-mfm49\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.791166 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-node-log\") pod \"ovnkube-node-mfm49\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.791243 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-systemd-units\") pod \"ovnkube-node-mfm49\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.791280 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-systemd-units\") pod \"ovnkube-node-mfm49\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.791283 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fd84e05e-cfd6-46d5-bd23-30689addcd8b-env-overrides\") pod \"ovnkube-node-mfm49\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.791404 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-var-lib-openvswitch\") pod \"ovnkube-node-mfm49\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.791439 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-run-openvswitch\") pod \"ovnkube-node-mfm49\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.794308 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-run-openvswitch\") pod \"ovnkube-node-mfm49\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.794157 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-var-lib-openvswitch\") pod \"ovnkube-node-mfm49\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.794351 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-log-socket\") pod \"ovnkube-node-mfm49\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.794413 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-run-systemd\") pod \"ovnkube-node-mfm49\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.794435 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mfm49\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.794480 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fd84e05e-cfd6-46d5-bd23-30689addcd8b-ovn-node-metrics-cert\") pod \"ovnkube-node-mfm49\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.794513 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-host-slash\") pod \"ovnkube-node-mfm49\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.794556 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-host-kubelet\") pod \"ovnkube-node-mfm49\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.794636 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-host-kubelet\") pod \"ovnkube-node-mfm49\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.794660 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fd84e05e-cfd6-46d5-bd23-30689addcd8b-ovnkube-config\") pod \"ovnkube-node-mfm49\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.794793 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fd84e05e-cfd6-46d5-bd23-30689addcd8b-env-overrides\") pod \"ovnkube-node-mfm49\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.794842 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-etc-openvswitch\") pod \"ovnkube-node-mfm49\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.794896 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mfm49\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.794947 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-log-socket\") pod \"ovnkube-node-mfm49\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.794980 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-run-systemd\") pod \"ovnkube-node-mfm49\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.795097 4772 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/fd84e05e-cfd6-46d5-bd23-30689addcd8b-ovnkube-script-lib\") pod \"ovnkube-node-mfm49\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.795179 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-host-slash\") pod \"ovnkube-node-mfm49\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.800102 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fd84e05e-cfd6-46d5-bd23-30689addcd8b-ovn-node-metrics-cert\") pod \"ovnkube-node-mfm49\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.800381 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9557fb28-d21d-45a3-b7b9-170936e83f56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a77d132d925d70472ff5456bf4521c0612c319296dd1f3fa31c68d3c84fb347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddce747c72c887bb7dff7503755d81d0e65305d2545f93dcb0d20bd6bf32b495\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187f5633fd42f380d4976965809b538530acef067adc7e02203f95df532dbbfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 10:38:29.060158 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 10:38:29.060179 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 10:38:29.060386 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\"\\\\nI1122 10:38:29.060380 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763807893\\\\\\\\\\\\\\\" (2025-11-22 10:38:12 +0000 UTC to 2025-12-22 10:38:13 +0000 UTC (now=2025-11-22 10:38:29.060343851 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060557 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763807909\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763807908\\\\\\\\\\\\\\\" (2025-11-22 09:38:28 +0000 UTC to 2026-11-22 09:38:28 +0000 UTC (now=2025-11-22 10:38:29.060522446 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060582 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 10:38:29.060604 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 10:38:29.060636 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 10:38:29.062066 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062334 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062385 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 10:38:29.063702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42fc0afe421b18b7212836931c95487807ac0b32f6e87d92e847bd7826aedcd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.809032 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2386c238-461f-4956-940f-ac3c26eb052e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwshd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.811778 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkcfw\" (UniqueName: \"kubernetes.io/projected/fd84e05e-cfd6-46d5-bd23-30689addcd8b-kube-api-access-nkcfw\") pod \"ovnkube-node-mfm49\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.821519 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z6xtb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e11c7f86-73db-4015-9fe5-c0b5047c19a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z6xtb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.830311 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9qv88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"751566de-1913-4dd6-9054-21febc661c27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902be5c4bb0f9aaca4a21ca0af13b2dc29d682c30e1e2d281beb6e4e15b6aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jckg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9qv88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.842308 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s4mvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d73fd58d-561a-4b16-9f9d-49ae966edb24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s4mvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.855958 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9557fb28-d21d-45a3-b7b9-170936e83f56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a77d132d925d70472ff5456bf4521c0612c319296dd1f3fa31c68d3c84fb347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddce747c72c887bb7dff7503755d81d0e65305d2545f93dcb0d20bd6bf32b495\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187f5633fd42f380d4976965809b538530acef067adc7e02203f95df532dbbfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 10:38:29.060158 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 10:38:29.060179 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 10:38:29.060386 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\"\\\\nI1122 10:38:29.060380 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763807893\\\\\\\\\\\\\\\" (2025-11-22 10:38:12 +0000 UTC to 2025-12-22 10:38:13 +0000 UTC (now=2025-11-22 10:38:29.060343851 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060557 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763807909\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763807908\\\\\\\\\\\\\\\" (2025-11-22 09:38:28 +0000 UTC to 2026-11-22 09:38:28 +0000 UTC (now=2025-11-22 10:38:29.060522446 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060582 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 10:38:29.060604 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 10:38:29.060636 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 10:38:29.062066 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062334 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062385 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 10:38:29.063702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42fc0afe421b18b7212836931c95487807ac0b32f6e87d92e847bd7826aedcd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.862489 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.862523 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.862531 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.862546 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.862555 4772 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:31Z","lastTransitionTime":"2025-11-22T10:38:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.869231 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e05750665701a1973e974ba759b2a0ef54a89b24d63d33a9fce59e9aa7a21ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.880578 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.892653 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.902858 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2386c238-461f-4956-940f-ac3c26eb052e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwshd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.915496 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z6xtb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e11c7f86-73db-4015-9fe5-c0b5047c19a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceac
count\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z6xtb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.926327 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af3d235e676f9a66e16a76f30d940c652287b80507d2335b94aaad54d2aae071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b9e216df73a0c6e89da9a01b70ca7393dea61d59aa4237584add3411d3362d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.934540 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.941560 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.942941 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:31 crc kubenswrapper[4772]: W1122 10:38:31.952123 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd84e05e_cfd6_46d5_bd23_30689addcd8b.slice/crio-7ecacca8dfa050f7f5ce4ca482cc4296e7f468f52bf670580b43811abdaada36 WatchSource:0}: Error finding container 7ecacca8dfa050f7f5ce4ca482cc4296e7f468f52bf670580b43811abdaada36: Status 404 returned error can't find the container with id 7ecacca8dfa050f7f5ce4ca482cc4296e7f468f52bf670580b43811abdaada36 Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.961519 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd84e05e-cfd6-46d5-bd23-30689addcd8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mfm49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.965161 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.965187 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.965195 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.965207 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:31 crc kubenswrapper[4772]: I1122 10:38:31.965217 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:31Z","lastTransitionTime":"2025-11-22T10:38:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:32 crc kubenswrapper[4772]: I1122 10:38:32.067941 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:32 crc kubenswrapper[4772]: I1122 10:38:32.068007 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:32 crc kubenswrapper[4772]: I1122 10:38:32.068024 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:32 crc kubenswrapper[4772]: I1122 10:38:32.068078 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:32 crc kubenswrapper[4772]: I1122 10:38:32.068096 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:32Z","lastTransitionTime":"2025-11-22T10:38:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:32 crc kubenswrapper[4772]: I1122 10:38:32.171393 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:32 crc kubenswrapper[4772]: I1122 10:38:32.171469 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:32 crc kubenswrapper[4772]: I1122 10:38:32.171487 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:32 crc kubenswrapper[4772]: I1122 10:38:32.171559 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:32 crc kubenswrapper[4772]: I1122 10:38:32.171584 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:32Z","lastTransitionTime":"2025-11-22T10:38:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:32 crc kubenswrapper[4772]: I1122 10:38:32.275287 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:32 crc kubenswrapper[4772]: I1122 10:38:32.275341 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:32 crc kubenswrapper[4772]: I1122 10:38:32.275354 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:32 crc kubenswrapper[4772]: I1122 10:38:32.275374 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:32 crc kubenswrapper[4772]: I1122 10:38:32.275388 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:32Z","lastTransitionTime":"2025-11-22T10:38:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:32 crc kubenswrapper[4772]: I1122 10:38:32.378682 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:32 crc kubenswrapper[4772]: I1122 10:38:32.378739 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:32 crc kubenswrapper[4772]: I1122 10:38:32.378751 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:32 crc kubenswrapper[4772]: I1122 10:38:32.378772 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:32 crc kubenswrapper[4772]: I1122 10:38:32.378784 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:32Z","lastTransitionTime":"2025-11-22T10:38:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:32 crc kubenswrapper[4772]: I1122 10:38:32.413336 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 10:38:32 crc kubenswrapper[4772]: E1122 10:38:32.413499 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 10:38:32 crc kubenswrapper[4772]: I1122 10:38:32.481616 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:32 crc kubenswrapper[4772]: I1122 10:38:32.481661 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:32 crc kubenswrapper[4772]: I1122 10:38:32.481676 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:32 crc kubenswrapper[4772]: I1122 10:38:32.481695 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:32 crc kubenswrapper[4772]: I1122 10:38:32.481711 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:32Z","lastTransitionTime":"2025-11-22T10:38:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:32 crc kubenswrapper[4772]: I1122 10:38:32.584510 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:32 crc kubenswrapper[4772]: I1122 10:38:32.584592 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:32 crc kubenswrapper[4772]: I1122 10:38:32.584609 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:32 crc kubenswrapper[4772]: I1122 10:38:32.584632 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:32 crc kubenswrapper[4772]: I1122 10:38:32.584646 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:32Z","lastTransitionTime":"2025-11-22T10:38:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:32 crc kubenswrapper[4772]: I1122 10:38:32.614188 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" event={"ID":"fd84e05e-cfd6-46d5-bd23-30689addcd8b","Type":"ContainerStarted","Data":"7ecacca8dfa050f7f5ce4ca482cc4296e7f468f52bf670580b43811abdaada36"} Nov 22 10:38:32 crc kubenswrapper[4772]: I1122 10:38:32.615991 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" event={"ID":"2386c238-461f-4956-940f-ac3c26eb052e","Type":"ContainerStarted","Data":"4247d5ab0095ab6c7315fdfcde34f8407d870723059e61c3417c20f80dc06ebd"} Nov 22 10:38:32 crc kubenswrapper[4772]: I1122 10:38:32.687628 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:32 crc kubenswrapper[4772]: I1122 10:38:32.687775 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:32 crc kubenswrapper[4772]: I1122 10:38:32.687872 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:32 crc kubenswrapper[4772]: I1122 10:38:32.688153 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:32 crc kubenswrapper[4772]: I1122 10:38:32.688250 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:32Z","lastTransitionTime":"2025-11-22T10:38:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:32 crc kubenswrapper[4772]: I1122 10:38:32.790975 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:32 crc kubenswrapper[4772]: I1122 10:38:32.791039 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:32 crc kubenswrapper[4772]: I1122 10:38:32.791069 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:32 crc kubenswrapper[4772]: I1122 10:38:32.791097 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:32 crc kubenswrapper[4772]: I1122 10:38:32.791112 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:32Z","lastTransitionTime":"2025-11-22T10:38:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:32 crc kubenswrapper[4772]: I1122 10:38:32.894516 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:32 crc kubenswrapper[4772]: I1122 10:38:32.894593 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:32 crc kubenswrapper[4772]: I1122 10:38:32.894614 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:32 crc kubenswrapper[4772]: I1122 10:38:32.894646 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:32 crc kubenswrapper[4772]: I1122 10:38:32.894675 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:32Z","lastTransitionTime":"2025-11-22T10:38:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:32 crc kubenswrapper[4772]: I1122 10:38:32.997727 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:32 crc kubenswrapper[4772]: I1122 10:38:32.997808 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:32 crc kubenswrapper[4772]: I1122 10:38:32.997828 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:32 crc kubenswrapper[4772]: I1122 10:38:32.997858 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:32 crc kubenswrapper[4772]: I1122 10:38:32.997878 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:32Z","lastTransitionTime":"2025-11-22T10:38:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.009183 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:38:33 crc kubenswrapper[4772]: E1122 10:38:33.009520 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:38:37.009472454 +0000 UTC m=+37.248916958 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.100522 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.100587 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.100600 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.100620 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.100632 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:33Z","lastTransitionTime":"2025-11-22T10:38:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.111106 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.111158 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.111182 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.111205 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 10:38:33 crc kubenswrapper[4772]: E1122 10:38:33.111327 4772 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 10:38:33 crc kubenswrapper[4772]: E1122 10:38:33.111356 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 10:38:33 crc kubenswrapper[4772]: E1122 10:38:33.111378 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 10:38:33 crc kubenswrapper[4772]: E1122 10:38:33.111390 4772 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 10:38:33 crc kubenswrapper[4772]: E1122 10:38:33.111410 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 10:38:37.111385945 +0000 UTC m=+37.350830449 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 10:38:33 crc kubenswrapper[4772]: E1122 10:38:33.111332 4772 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 10:38:33 crc kubenswrapper[4772]: E1122 10:38:33.111447 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-22 10:38:37.111431066 +0000 UTC m=+37.350875560 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 10:38:33 crc kubenswrapper[4772]: E1122 10:38:33.111467 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 10:38:37.111459337 +0000 UTC m=+37.350904041 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 10:38:33 crc kubenswrapper[4772]: E1122 10:38:33.111654 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 10:38:33 crc kubenswrapper[4772]: E1122 10:38:33.111689 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 10:38:33 crc kubenswrapper[4772]: E1122 10:38:33.111704 4772 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 10:38:33 crc kubenswrapper[4772]: E1122 10:38:33.111770 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-22 10:38:37.111747185 +0000 UTC m=+37.351191679 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.203246 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.203277 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.203286 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.203301 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.203310 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:33Z","lastTransitionTime":"2025-11-22T10:38:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.305599 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.305638 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.305648 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.305664 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.305676 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:33Z","lastTransitionTime":"2025-11-22T10:38:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.381093 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-mbpk7"] Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.381586 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-mbpk7" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.383137 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.383910 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.384103 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.384626 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.390851 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.397566 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mbpk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e39748b-4fa5-4a70-8921-dc3dc814f124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m5ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mbpk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.405803 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.407414 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.407447 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.407457 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.407470 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.407478 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:33Z","lastTransitionTime":"2025-11-22T10:38:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.412436 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.412480 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:38:33 crc kubenswrapper[4772]: E1122 10:38:33.412592 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 10:38:33 crc kubenswrapper[4772]: E1122 10:38:33.412714 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.414017 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2e39748b-4fa5-4a70-8921-dc3dc814f124-host\") pod \"node-ca-mbpk7\" (UID: \"2e39748b-4fa5-4a70-8921-dc3dc814f124\") " pod="openshift-image-registry/node-ca-mbpk7" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.414073 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2e39748b-4fa5-4a70-8921-dc3dc814f124-serviceca\") pod \"node-ca-mbpk7\" (UID: \"2e39748b-4fa5-4a70-8921-dc3dc814f124\") " pod="openshift-image-registry/node-ca-mbpk7" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.414113 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6m5ss\" (UniqueName: \"kubernetes.io/projected/2e39748b-4fa5-4a70-8921-dc3dc814f124-kube-api-access-6m5ss\") pod \"node-ca-mbpk7\" (UID: \"2e39748b-4fa5-4a70-8921-dc3dc814f124\") " pod="openshift-image-registry/node-ca-mbpk7" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.420603 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd84e05e-cfd6-46d5-bd23-30689addcd8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mfm49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.430407 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e05750665701a1973e974ba759b2a0ef54a89b24d63d33a9fce59e9aa7a21ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.439327 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.449973 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.457802 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9qv88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"751566de-1913-4dd6-9054-21febc661c27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902be5c4bb0f9aaca4a21ca0af13b2dc29d682c30e1e2d281beb6e4e15b6aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jckg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9qv88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:33 crc 
kubenswrapper[4772]: I1122 10:38:33.466899 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s4mvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d73fd58d-561a-4b16-9f9d-49ae966edb24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"
2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s4mvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.476460 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9557fb28-d21d-45a3-b7b9-170936e83f56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a77d132d925d70472ff5456bf4521c0612c319296dd1f3fa31c68d3c84fb347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddce747c72c887bb7dff7503755d81d0e65305d2545f93dcb0d20bd6bf32b495\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187f5633fd42f380d4976965809b538530acef067adc7e02203f95df532dbbfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"
,\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 10:38:29.060158 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 10:38:29.060179 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 10:38:29.060386 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\"\\\\nI1122 10:38:29.060380 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763807893\\\\\\\\\\\\\\\" (2025-11-22 10:38:12 +0000 UTC to 2025-12-22 10:38:13 +0000 UTC (now=2025-11-22 10:38:29.060343851 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060557 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763807909\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763807908\\\\\\\\\\\\\\\" (2025-11-22 09:38:28 +0000 UTC to 2026-11-22 09:38:28 +0000 UTC (now=2025-11-22 10:38:29.060522446 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060582 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 10:38:29.060604 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 10:38:29.060636 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 10:38:29.062066 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062334 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062385 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 10:38:29.063702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42fc0afe421b18b7212836931c95487807ac0b32f6e87d92e847bd7826aedcd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.484015 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2386c238-461f-4956-940f-ac3c26eb052e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwshd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.494119 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z6xtb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e11c7f86-73db-4015-9fe5-c0b5047c19a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z6xtb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.502898 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af3d235e676f9a66e16a76f30d940c652287b80507d2335b94aaad54d2aae071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b9e216df73a0c6e89da9a01b70ca7393dea61d59aa4237584add3411d3362d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.510130 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.510169 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.510181 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.510202 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.510215 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:33Z","lastTransitionTime":"2025-11-22T10:38:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.515189 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2e39748b-4fa5-4a70-8921-dc3dc814f124-host\") pod \"node-ca-mbpk7\" (UID: \"2e39748b-4fa5-4a70-8921-dc3dc814f124\") " pod="openshift-image-registry/node-ca-mbpk7" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.515235 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2e39748b-4fa5-4a70-8921-dc3dc814f124-serviceca\") pod \"node-ca-mbpk7\" (UID: \"2e39748b-4fa5-4a70-8921-dc3dc814f124\") " pod="openshift-image-registry/node-ca-mbpk7" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.515257 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6m5ss\" (UniqueName: \"kubernetes.io/projected/2e39748b-4fa5-4a70-8921-dc3dc814f124-kube-api-access-6m5ss\") pod \"node-ca-mbpk7\" (UID: \"2e39748b-4fa5-4a70-8921-dc3dc814f124\") " pod="openshift-image-registry/node-ca-mbpk7" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.515324 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2e39748b-4fa5-4a70-8921-dc3dc814f124-host\") pod \"node-ca-mbpk7\" (UID: \"2e39748b-4fa5-4a70-8921-dc3dc814f124\") " pod="openshift-image-registry/node-ca-mbpk7" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.516253 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2e39748b-4fa5-4a70-8921-dc3dc814f124-serviceca\") pod \"node-ca-mbpk7\" (UID: \"2e39748b-4fa5-4a70-8921-dc3dc814f124\") " pod="openshift-image-registry/node-ca-mbpk7" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.534527 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6m5ss\" (UniqueName: 
\"kubernetes.io/projected/2e39748b-4fa5-4a70-8921-dc3dc814f124-kube-api-access-6m5ss\") pod \"node-ca-mbpk7\" (UID: \"2e39748b-4fa5-4a70-8921-dc3dc814f124\") " pod="openshift-image-registry/node-ca-mbpk7" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.612319 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.612367 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.612377 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.612395 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.612406 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:33Z","lastTransitionTime":"2025-11-22T10:38:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.619397 4772 generic.go:334] "Generic (PLEG): container finished" podID="fd84e05e-cfd6-46d5-bd23-30689addcd8b" containerID="560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235" exitCode=0 Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.619468 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" event={"ID":"fd84e05e-cfd6-46d5-bd23-30689addcd8b","Type":"ContainerDied","Data":"560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235"} Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.620759 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z6xtb" event={"ID":"e11c7f86-73db-4015-9fe5-c0b5047c19a0","Type":"ContainerStarted","Data":"11d7a31c58b763dd30aa10f250fa654756dbb10166064c3adf6d313cc13b066f"} Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.622925 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" event={"ID":"2386c238-461f-4956-940f-ac3c26eb052e","Type":"ContainerStarted","Data":"3b3e223067a57a7ae418e1de80dff3c7537e0506e040028c413225f25397f03d"} Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.624283 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"e026ddd84cdd62f9fa89b0c173fa464361017d3603d80e2b88a1b04af13487c2"} Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.625573 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-s4mvm" event={"ID":"d73fd58d-561a-4b16-9f9d-49ae966edb24","Type":"ContainerStarted","Data":"4ae06ffdc2c0f4cebcbf28f2997026db4d45bcd0c95776e5a1d94527503041d0"} Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.628413 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9qv88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"751566de-1913-4dd6-9054-21febc661c27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902be5c4bb0f9aaca4a21ca0af13b2dc29d682c30e1e2d281beb6e4e15b6aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jckg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9qv88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.638548 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s4mvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d73fd58d-561a-4b16-9f9d-49ae966edb24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s4mvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.650925 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9557fb28-d21d-45a3-b7b9-170936e83f56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a77d132d925d70472ff5456bf4521c0612c319296dd1f3fa31c68d3c84fb347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddce747c72c887bb7dff7503755d81d0e65305d2545f93dcb0d20bd6bf32b495\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187f5633fd42f380d4976965809b538530acef067adc7e02203f95df532dbbfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 10:38:29.060158 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 10:38:29.060179 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 10:38:29.060386 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\"\\\\nI1122 10:38:29.060380 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763807893\\\\\\\\\\\\\\\" (2025-11-22 10:38:12 +0000 UTC to 2025-12-22 10:38:13 +0000 UTC (now=2025-11-22 10:38:29.060343851 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060557 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763807909\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763807908\\\\\\\\\\\\\\\" (2025-11-22 09:38:28 +0000 UTC to 2026-11-22 09:38:28 +0000 UTC (now=2025-11-22 10:38:29.060522446 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060582 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 10:38:29.060604 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 10:38:29.060636 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 10:38:29.062066 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062334 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062385 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 10:38:29.063702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42fc0afe421b18b7212836931c95487807ac0b32f6e87d92e847bd7826aedcd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.664913 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e05750665701a1973e974ba759b2a0ef54a89b24d63d33a9fce59e9aa7a21ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.675174 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.685178 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.693339 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-mbpk7" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.694842 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2386c238-461f-4956-940f-ac3c26eb052e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwshd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:33 crc kubenswrapper[4772]: W1122 10:38:33.704664 4772 manager.go:1169] Failed to process 
watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e39748b_4fa5_4a70_8921_dc3dc814f124.slice/crio-5d920c63c08ce2ea9a52876c596ff8983dfd6ce4e8959e96a6c4066ec18e7e54 WatchSource:0}: Error finding container 5d920c63c08ce2ea9a52876c596ff8983dfd6ce4e8959e96a6c4066ec18e7e54: Status 404 returned error can't find the container with id 5d920c63c08ce2ea9a52876c596ff8983dfd6ce4e8959e96a6c4066ec18e7e54 Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.707432 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z6xtb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e11c7f86-73db-4015-9fe5-c0b5047c19a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z6xtb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.713987 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.714070 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.714088 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:33 crc 
kubenswrapper[4772]: I1122 10:38:33.714112 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.714129 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:33Z","lastTransitionTime":"2025-11-22T10:38:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.718897 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af3d235e676f9a66e16a76f30d940c652287b80507d2335b94aaad54d2aae071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b9e216df73a0c6e89da9a01b70ca7393dea61d59aa4237584add3411d3362d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.733779 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.748155 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.763951 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd84e05e-cfd6-46d5-bd23-30689addcd8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mfm49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.771378 
4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mbpk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e39748b-4fa5-4a70-8921-dc3dc814f124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m5ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mbpk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.780396 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af3d235e676f9a66e16a76f30d940c652287b80507d2335b94aaad54d2aae071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b9e216df73a0c6e89da9a01b70ca7393dea61d59aa4237584add3411d3362d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.795905 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e026ddd84cdd62f9fa89b0c173fa464361017d3603d80e2b88a1b04af13487c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.815719 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.819776 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.819822 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.819841 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.819859 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.819870 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:33Z","lastTransitionTime":"2025-11-22T10:38:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.842338 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd84e05e-cfd6-46d5-bd23-30689addcd8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c61
5be3ec1b51e235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mfm49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.852539 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mbpk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e39748b-4fa5-4a70-8921-dc3dc814f124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m5ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mbpk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.863689 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9557fb28-d21d-45a3-b7b9-170936e83f56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a77d132d925d70472ff5456bf4521c0612c319296dd1f3fa31c68d3c84fb347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddce747c72c887bb7dff7503755d81d0e65305d2545f93dcb0d20bd6bf32b495\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187f5633fd42f380d4976965809b538530acef067adc7e02203f95df532dbbfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 10:38:29.060158 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 10:38:29.060179 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 10:38:29.060386 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\"\\\\nI1122 10:38:29.060380 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763807893\\\\\\\\\\\\\\\" (2025-11-22 10:38:12 +0000 UTC to 2025-12-22 10:38:13 +0000 UTC (now=2025-11-22 10:38:29.060343851 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060557 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763807909\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763807908\\\\\\\\\\\\\\\" (2025-11-22 09:38:28 +0000 UTC to 2026-11-22 09:38:28 +0000 UTC (now=2025-11-22 10:38:29.060522446 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060582 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 10:38:29.060604 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 10:38:29.060636 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 10:38:29.062066 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062334 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062385 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 10:38:29.063702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42fc0afe421b18b7212836931c95487807ac0b32f6e87d92e847bd7826aedcd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.873592 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e05750665701a1973e974ba759b2a0ef54a89b24d63d33a9fce59e9aa7a21ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.882503 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.891082 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.896987 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9qv88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"751566de-1913-4dd6-9054-21febc661c27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902be5c4bb0f9aaca4a21ca0af13b2dc29d682c30e1e2d281beb6e4e15b6aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jckg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9qv88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:33 crc 
kubenswrapper[4772]: I1122 10:38:33.909101 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s4mvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d73fd58d-561a-4b16-9f9d-49ae966edb24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ae06ffdc2c0f4cebcbf28f2997026db4d45bcd0c95776e5a1d94527503041d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\
"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s4mvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.920492 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2386c238-461f-4956-940f-ac3c26eb052e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwshd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.922177 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.922212 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.922223 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.922242 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.922254 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:33Z","lastTransitionTime":"2025-11-22T10:38:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:33 crc kubenswrapper[4772]: I1122 10:38:33.932725 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z6xtb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e11c7f86-73db-4015-9fe5-c0b5047c19a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d7a31c58b763dd30aa10f250fa654756dbb10166064c3adf6d313cc13b066f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"}
,{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z6xtb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 10:38:34 crc kubenswrapper[4772]: I1122 10:38:34.024914 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:34 crc kubenswrapper[4772]: I1122 10:38:34.024951 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 10:38:34 crc kubenswrapper[4772]: I1122 10:38:34.024960 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:34 crc kubenswrapper[4772]: I1122 10:38:34.024976 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:34 crc kubenswrapper[4772]: I1122 10:38:34.024986 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:34Z","lastTransitionTime":"2025-11-22T10:38:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:34 crc kubenswrapper[4772]: I1122 10:38:34.127734 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:34 crc kubenswrapper[4772]: I1122 10:38:34.127768 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:34 crc kubenswrapper[4772]: I1122 10:38:34.127776 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:34 crc kubenswrapper[4772]: I1122 10:38:34.127790 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:34 crc kubenswrapper[4772]: I1122 10:38:34.127801 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:34Z","lastTransitionTime":"2025-11-22T10:38:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:34 crc kubenswrapper[4772]: I1122 10:38:34.229925 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:34 crc kubenswrapper[4772]: I1122 10:38:34.229981 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:34 crc kubenswrapper[4772]: I1122 10:38:34.230002 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:34 crc kubenswrapper[4772]: I1122 10:38:34.230026 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:34 crc kubenswrapper[4772]: I1122 10:38:34.230071 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:34Z","lastTransitionTime":"2025-11-22T10:38:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:34 crc kubenswrapper[4772]: I1122 10:38:34.332091 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:34 crc kubenswrapper[4772]: I1122 10:38:34.332142 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:34 crc kubenswrapper[4772]: I1122 10:38:34.332158 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:34 crc kubenswrapper[4772]: I1122 10:38:34.332178 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:34 crc kubenswrapper[4772]: I1122 10:38:34.332195 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:34Z","lastTransitionTime":"2025-11-22T10:38:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:34 crc kubenswrapper[4772]: I1122 10:38:34.412822 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 10:38:34 crc kubenswrapper[4772]: E1122 10:38:34.412941 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 10:38:34 crc kubenswrapper[4772]: I1122 10:38:34.434472 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:34 crc kubenswrapper[4772]: I1122 10:38:34.434521 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:34 crc kubenswrapper[4772]: I1122 10:38:34.434537 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:34 crc kubenswrapper[4772]: I1122 10:38:34.434557 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:34 crc kubenswrapper[4772]: I1122 10:38:34.434574 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:34Z","lastTransitionTime":"2025-11-22T10:38:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:34 crc kubenswrapper[4772]: I1122 10:38:34.536892 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:34 crc kubenswrapper[4772]: I1122 10:38:34.536941 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:34 crc kubenswrapper[4772]: I1122 10:38:34.536957 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:34 crc kubenswrapper[4772]: I1122 10:38:34.536980 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:34 crc kubenswrapper[4772]: I1122 10:38:34.536996 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:34Z","lastTransitionTime":"2025-11-22T10:38:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:34 crc kubenswrapper[4772]: I1122 10:38:34.630907 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" event={"ID":"fd84e05e-cfd6-46d5-bd23-30689addcd8b","Type":"ContainerStarted","Data":"bc7ab7764e2d893152e965392c916dff7876acb81d134cdec2615b253dbee64e"} Nov 22 10:38:34 crc kubenswrapper[4772]: I1122 10:38:34.633089 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-mbpk7" event={"ID":"2e39748b-4fa5-4a70-8921-dc3dc814f124","Type":"ContainerStarted","Data":"c334585be8f67849986e80c7a7ea777340e93f963ba58fa0fb7b36b16a73142b"} Nov 22 10:38:34 crc kubenswrapper[4772]: I1122 10:38:34.633136 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-mbpk7" event={"ID":"2e39748b-4fa5-4a70-8921-dc3dc814f124","Type":"ContainerStarted","Data":"5d920c63c08ce2ea9a52876c596ff8983dfd6ce4e8959e96a6c4066ec18e7e54"} Nov 22 10:38:34 crc kubenswrapper[4772]: I1122 10:38:34.639777 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:34 crc kubenswrapper[4772]: I1122 10:38:34.639809 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:34 crc kubenswrapper[4772]: I1122 10:38:34.639820 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:34 crc kubenswrapper[4772]: I1122 10:38:34.639837 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:34 crc kubenswrapper[4772]: I1122 10:38:34.639849 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:34Z","lastTransitionTime":"2025-11-22T10:38:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:34 crc kubenswrapper[4772]: I1122 10:38:34.652326 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:34Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:34 crc kubenswrapper[4772]: I1122 10:38:34.676503 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd84e05e-cfd6-46d5-bd23-30689addcd8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mfm49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:34Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:34 crc kubenswrapper[4772]: I1122 10:38:34.690487 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mbpk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e39748b-4fa5-4a70-8921-dc3dc814f124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m5ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mbpk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:34Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:34 crc kubenswrapper[4772]: I1122 10:38:34.710354 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s4mvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d73fd58d-561a-4b16-9f9d-49ae966edb24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ae06ffdc2c0f4cebcbf28f2997026db4d45bcd0c95776e5a1d94527503041d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s4mvm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:34Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:34 crc kubenswrapper[4772]: I1122 10:38:34.729548 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9557fb28-d21d-45a3-b7b9-170936e83f56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a77d132d925d70472ff5456bf4521c0612c319296dd1f3fa31c68d3c84fb347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddce747c72c887bb7dff7503755d81d0e65305d2545f93dcb0d20bd6bf32b495\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187f5633fd42f380d4976965809b538530acef067adc7e02203f95df532dbbfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 10:38:29.060158 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 10:38:29.060179 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 10:38:29.060386 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\"\\\\nI1122 10:38:29.060380 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763807893\\\\\\\\\\\\\\\" (2025-11-22 10:38:12 +0000 UTC to 2025-12-22 10:38:13 +0000 UTC (now=2025-11-22 10:38:29.060343851 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060557 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763807909\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763807908\\\\\\\\\\\\\\\" (2025-11-22 09:38:28 +0000 UTC to 2026-11-22 09:38:28 +0000 UTC (now=2025-11-22 10:38:29.060522446 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060582 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 10:38:29.060604 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 10:38:29.060636 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 10:38:29.062066 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062334 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062385 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 10:38:29.063702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42fc0afe421b18b7212836931c95487807ac0b32f6e87d92e847bd7826aedcd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:34Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:34 crc kubenswrapper[4772]: I1122 10:38:34.742152 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:34 crc kubenswrapper[4772]: I1122 10:38:34.742209 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:34 crc kubenswrapper[4772]: I1122 10:38:34.742229 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:34 crc kubenswrapper[4772]: I1122 10:38:34.742257 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 
22 10:38:34 crc kubenswrapper[4772]: I1122 10:38:34.742276 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:34Z","lastTransitionTime":"2025-11-22T10:38:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:34 crc kubenswrapper[4772]: I1122 10:38:34.747377 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e05750665701a1973e974ba759b2a0ef54a89b24d63d33a9fce59e9aa7a21ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:34Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:34 crc kubenswrapper[4772]: I1122 10:38:34.764978 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:34Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:34 crc kubenswrapper[4772]: I1122 10:38:34.779502 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:34Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:34 crc kubenswrapper[4772]: I1122 10:38:34.792131 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9qv88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"751566de-1913-4dd6-9054-21febc661c27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902be5c4bb0f9aaca4a21ca0af13b2dc29d682c30e1e2d281beb6e4e15b6aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jckg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9qv88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:34Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:34 crc kubenswrapper[4772]: I1122 10:38:34.804843 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2386c238-461f-4956-940f-ac3c26eb052e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b3e223067a57a7ae418e1de80dff3c7537e0506e040028c413225f25397f03d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4247d5ab0095ab6c7315fdfcde34f8407d870723059e61c3417c20f80dc06ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwshd\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:34Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:34 crc kubenswrapper[4772]: I1122 10:38:34.820937 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z6xtb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e11c7f86-73db-4015-9fe5-c0b5047c19a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d7a31c58b763dd30aa10f250fa654756dbb10166064c3adf6d313cc13b066f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"}
,{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z6xtb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:34Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:34 crc kubenswrapper[4772]: I1122 10:38:34.838009 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af3d235e676f9a66e16a76f30d940c652287b80507d2335b94aaad54d2aae071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b9e216df73a0c6e89da9a01b70ca7393dea61d59aa4237584add3411d3362d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:34Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:34 crc kubenswrapper[4772]: I1122 10:38:34.844581 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:34 crc kubenswrapper[4772]: I1122 10:38:34.844623 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:34 crc kubenswrapper[4772]: I1122 10:38:34.844632 4772 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 22 10:38:34 crc kubenswrapper[4772]: I1122 10:38:34.844649 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:34 crc kubenswrapper[4772]: I1122 10:38:34.844663 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:34Z","lastTransitionTime":"2025-11-22T10:38:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:34 crc kubenswrapper[4772]: I1122 10:38:34.852379 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e026ddd84cdd62f9fa89b0c173fa464361017d3603d80e2b88a1b04af13487c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:34Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:34 crc kubenswrapper[4772]: I1122 10:38:34.865924 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e026ddd84cdd62f9fa89b0c173fa464361017d3603d80e2b88a1b04af13487c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:34Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:34 crc kubenswrapper[4772]: I1122 10:38:34.877936 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:34Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:34 crc kubenswrapper[4772]: I1122 10:38:34.898445 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd84e05e-cfd6-46d5-bd23-30689addcd8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mfm49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:34Z 
is after 2025-08-24T17:21:41Z" Nov 22 10:38:34 crc kubenswrapper[4772]: I1122 10:38:34.912814 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mbpk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e39748b-4fa5-4a70-8921-dc3dc814f124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c334585be8f67849986e80c7a7ea777340e93f963ba58fa0fb7b36b16a73142b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m5ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mbpk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:34Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:34 crc kubenswrapper[4772]: I1122 10:38:34.927296 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:34Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:34 crc kubenswrapper[4772]: I1122 10:38:34.938897 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9qv88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"751566de-1913-4dd6-9054-21febc661c27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902be5c4bb0f9aaca4a21ca0af13b2dc29d682c30e1e2d281beb6e4e15b6aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jckg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9qv88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:34Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:34 crc kubenswrapper[4772]: I1122 10:38:34.947705 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:34 crc kubenswrapper[4772]: I1122 10:38:34.947752 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:34 crc kubenswrapper[4772]: I1122 10:38:34.947768 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:34 crc kubenswrapper[4772]: I1122 10:38:34.947792 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:34 crc kubenswrapper[4772]: I1122 10:38:34.947806 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:34Z","lastTransitionTime":"2025-11-22T10:38:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:34 crc kubenswrapper[4772]: I1122 10:38:34.952239 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s4mvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d73fd58d-561a-4b16-9f9d-49ae966edb24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ae06ffdc2c0f4cebcbf28f2997026db4d45bcd0c95776e5a1d94527503041d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s4mvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:34Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:34 crc kubenswrapper[4772]: I1122 10:38:34.970283 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9557fb28-d21d-45a3-b7b9-170936e83f56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a77d132d925d70472ff5456bf4521c0612c319296dd1f3fa31c68d3c84fb347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddce747c72c887bb7dff7503755d81d0e65305d2545f93dcb0d20bd6bf32b495\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\
":\\\"cri-o://187f5633fd42f380d4976965809b538530acef067adc7e02203f95df532dbbfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 10:38:29.060158 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 10:38:29.060179 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 10:38:29.060386 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\"\\\\nI1122 10:38:29.060380 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763807893\\\\\\\\\\\\\\\" (2025-11-22 10:38:12 +0000 UTC to 2025-12-22 10:38:13 +0000 UTC (now=2025-11-22 10:38:29.060343851 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060557 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763807909\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763807908\\\\\\\\\\\\\\\" (2025-11-22 09:38:28 +0000 UTC to 2026-11-22 09:38:28 +0000 UTC (now=2025-11-22 10:38:29.060522446 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060582 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 10:38:29.060604 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 10:38:29.060636 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 10:38:29.062066 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062334 1 reflector.go:368] Caches populated for *v1.ConfigMap from 
k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062385 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 10:38:29.063702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42fc0afe421b18b7212836931c95487807ac0b32f6e87d92e847bd7826aedcd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:34Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:34 crc kubenswrapper[4772]: I1122 10:38:34.989145 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e05750665701a1973e974ba759b2a0ef54a89b24d63d33a9fce59e9aa7a21ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:34Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.003375 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:35Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.014953 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2386c238-461f-4956-940f-ac3c26eb052e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b3e223067a57a7ae418e1de80dff3c7537e0506e040028c413225f25397f03d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4247d5ab0095ab6c7315fdfcde34f8407d870723059e61c3417c20f80dc06ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-da
emon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwshd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:35Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.028691 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z6xtb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e11c7f86-73db-4015-9fe5-c0b5047c19a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d7a31c58b763dd30aa10f250fa654756dbb10166064c3adf6d313cc13b066f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"}
,{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z6xtb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:35Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.045789 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af3d235e676f9a66e16a76f30d940c652287b80507d2335b94aaad54d2aae071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b9e216df73a0c6e89da9a01b70ca7393dea61d59aa4237584add3411d3362d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:35Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.052633 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.052688 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.052700 4772 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.052718 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.052733 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:35Z","lastTransitionTime":"2025-11-22T10:38:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.145462 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.152930 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.155623 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.155684 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.155749 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.155788 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.155815 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:35Z","lastTransitionTime":"2025-11-22T10:38:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.158512 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.170872 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9557fb28-d21d-45a3-b7b9-170936e83f56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a77d132d925d70472ff5456bf4521c0612c319296dd1f3fa31c68d3c84fb347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddce747c72c887bb7dff7503755d81d0e65305d2545f93dcb0d20bd6bf32b495\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187f5633fd42f380d4976965809b538530acef067adc7e02203f95df532dbbfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-o
perator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 10:38:29.060158 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 10:38:29.060179 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 10:38:29.060386 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\"\\\\nI1122 10:38:29.060380 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763807893\\\\\\\\\\\\\\\" (2025-11-22 10:38:12 +0000 UTC to 2025-12-22 10:38:13 +0000 UTC (now=2025-11-22 10:38:29.060343851 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060557 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763807909\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763807908\\\\\\\\\\\\\\\" (2025-11-22 09:38:28 +0000 UTC to 2026-11-22 09:38:28 +0000 UTC (now=2025-11-22 10:38:29.060522446 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060582 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 10:38:29.060604 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 10:38:29.060636 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 10:38:29.062066 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062334 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062385 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 10:38:29.063702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42fc0afe421b18b7212836931c95487807ac0b32f6e87d92e847bd7826aedcd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:35Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.193018 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e05750665701a1973e974ba759b2a0ef54a89b24d63d33a9fce59e9aa7a21ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:35Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.209446 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:35Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.227459 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:35Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.234815 4772 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.235460 4772 scope.go:117] "RemoveContainer" containerID="da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216" Nov 22 10:38:35 crc kubenswrapper[4772]: E1122 10:38:35.235610 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.240105 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9qv88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"751566de-1913-4dd6-9054-21febc661c27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902be5c4bb0f9aaca4a21ca0af13b2dc29d682c30e1e2d281beb6e4e15b6aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jckg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9qv88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:35Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.255149 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s4mvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d73fd58d-561a-4b16-9f9d-49ae966edb24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ae06ffdc2c0f4cebcbf28f2997026db4d45bcd0c95776e5a1d94527503041d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s4mvm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:35Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.258656 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.258696 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.258712 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.258734 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.258749 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:35Z","lastTransitionTime":"2025-11-22T10:38:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.274144 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z6xtb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e11c7f86-73db-4015-9fe5-c0b5047c19a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d7a31c58b763dd30aa10f250fa654756dbb10166064c3adf6d313cc13b066f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"}
,{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z6xtb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:35Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.289440 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2386c238-461f-4956-940f-ac3c26eb052e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b3e223067a57a7ae418e1de80dff3c7537e0506e040028c413225f25397f03d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4247d5ab0095ab6c7315fdfcde34f8407d870723059e61c3417c20f80dc06ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwshd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:35Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.305602 4772 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af3d235e676f9a66e16a76f30d940c652287b80507d2335b94aaad54d2aae071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b9e216df73a0c6e89da9a01b70ca7393dea61d59aa4237584add3411d3362d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:35Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.319776 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e026ddd84cdd62f9fa89b0c173fa464361017d3603d80e2b88a1b04af13487c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:35Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.338546 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd84e05e-cfd6-46d5-bd23-30689addcd8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mfm49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:35Z 
is after 2025-08-24T17:21:41Z" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.351601 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mbpk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e39748b-4fa5-4a70-8921-dc3dc814f124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c334585be8f67849986e80c7a7ea777340e93f963ba58fa0fb7b36b16a73142b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m5ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mbpk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:35Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.361341 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.361382 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.361392 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.361408 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.361417 4772 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:35Z","lastTransitionTime":"2025-11-22T10:38:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.368127 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:35Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.380187 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af3d235e676f9a66e16a76f30d940c652287b80507d2335b94aaad54d2aae071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b9e216df73a0c6e89da9a01b70ca7393dea61d59aa4237584add3411d3362d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:35Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.392585 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e026ddd84cdd62f9fa89b0c173fa464361017d3603d80e2b88a1b04af13487c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:35Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.406869 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:35Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.413409 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:38:35 crc kubenswrapper[4772]: E1122 10:38:35.413577 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.413977 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 10:38:35 crc kubenswrapper[4772]: E1122 10:38:35.414162 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.426402 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd84e05e-cfd6-46d5-bd23-30689addcd8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuberne
tes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[
{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mfm49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:35Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.441797 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mbpk7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e39748b-4fa5-4a70-8921-dc3dc814f124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c334585be8f67849986e80c7a7ea777340e93f963ba58fa0fb7b36b16a73142b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m5ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mbpk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:35Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.461553 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s4mvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d73fd58d-561a-4b16-9f9d-49ae966edb24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ae06ffdc2c0f4cebcbf28f2997026db4d45bcd0c95776e5a1d94527503041d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s4mvm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:35Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.463637 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.463747 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.463813 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.463881 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.463965 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:35Z","lastTransitionTime":"2025-11-22T10:38:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.477110 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"888813e4-14b2-4bbc-badf-3fd7c315a740\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ade6ffb5c4bd3c19a2d85f21de1e0f198d6729b45df79233c8db3c73aff066f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7c687d1a3e98c09917692169294f8549b0f1ddeddcc97c073da4d8e5c17e012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc358257
71aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1452c164ce569dfa4665a70113fb965905d1974744637904d6bfba2e35446f11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b331f41571928038bb597f1e94a67d24e726471c1a22082607dd26c11e8ea33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:35Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.491852 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9557fb28-d21d-45a3-b7b9-170936e83f56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a77d132d925d70472ff5456bf4521c0612c319296dd1f3fa31c68d3c84fb347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddce747c72c887bb7dff7503755d81d0e65305d2545f93dcb0d20bd6bf32b495\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187f5633fd42f380d4976965809b538530acef067adc7e02203f95df532dbbfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 10:38:29.060158 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 10:38:29.060179 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 10:38:29.060386 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\"\\\\nI1122 10:38:29.060380 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763807893\\\\\\\\\\\\\\\" (2025-11-22 10:38:12 +0000 UTC to 2025-12-22 10:38:13 +0000 UTC (now=2025-11-22 10:38:29.060343851 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060557 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763807909\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763807908\\\\\\\\\\\\\\\" (2025-11-22 09:38:28 +0000 UTC to 2026-11-22 09:38:28 +0000 UTC (now=2025-11-22 10:38:29.060522446 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060582 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 10:38:29.060604 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 10:38:29.060636 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 10:38:29.062066 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062334 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062385 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 10:38:29.063702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42fc0afe421b18b7212836931c95487807ac0b32f6e87d92e847bd7826aedcd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:35Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.520328 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e05750665701a1973e974ba759b2a0ef54a89b24d63d33a9fce59e9aa7a21ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:35Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.564626 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:35Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.566178 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.566235 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.566247 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.566268 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.566282 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:35Z","lastTransitionTime":"2025-11-22T10:38:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.598675 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:35Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.635876 4772 generic.go:334] "Generic (PLEG): container finished" podID="e11c7f86-73db-4015-9fe5-c0b5047c19a0" containerID="11d7a31c58b763dd30aa10f250fa654756dbb10166064c3adf6d313cc13b066f" exitCode=0 Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.635949 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z6xtb" event={"ID":"e11c7f86-73db-4015-9fe5-c0b5047c19a0","Type":"ContainerDied","Data":"11d7a31c58b763dd30aa10f250fa654756dbb10166064c3adf6d313cc13b066f"} Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.645151 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9qv88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"751566de-1913-4dd6-9054-21febc661c27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902be5c4bb0f9aaca4a21ca0af13b2dc29d682c30e1e2d281beb6e4e15b6aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jckg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9qv88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:35Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.647285 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" event={"ID":"fd84e05e-cfd6-46d5-bd23-30689addcd8b","Type":"ContainerStarted","Data":"3372362ced27c1e5e6ed3271a6b0fd19c04ce1d12a6c0b45e60b4f19b9d9d9e4"} Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.647391 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" event={"ID":"fd84e05e-cfd6-46d5-bd23-30689addcd8b","Type":"ContainerStarted","Data":"405215cb0867af5a98dd2c0948c12867783df5de69f3e68c0c420e94bd41fbb9"} Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.647402 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" event={"ID":"fd84e05e-cfd6-46d5-bd23-30689addcd8b","Type":"ContainerStarted","Data":"da55e361a3ec7dca7a00a9ccac8e72cf43be0ebb8973fcac40e3bf86dac5237e"} Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.668948 4772 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.668979 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.668988 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.669002 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.669011 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:35Z","lastTransitionTime":"2025-11-22T10:38:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.675766 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2386c238-461f-4956-940f-ac3c26eb052e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b3e223067a57a7ae418e1de80dff3c7537e0506e040028c413225f25397f03d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4247d5ab0095ab6c7315fdfcde34f8407d870723059e61c3417c20f80dc06ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe
1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwshd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:35Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.720810 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z6xtb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e11c7f86-73db-4015-9fe5-c0b5047c19a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d7a31c58b763dd30aa10f250fa654756dbb10166064c3adf6d313cc13b066f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"}
,{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z6xtb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:35Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.760157 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af3d235e676f9a66e16a76f30d940c652287b80507d2335b94aaad54d2aae071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b9e216df73a0c6e89da9a01b70ca7393dea61d59aa4237584add3411d3362d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:35Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.774539 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.774575 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.774584 4772 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.774599 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.774608 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:35Z","lastTransitionTime":"2025-11-22T10:38:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.799520 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e026ddd84cdd62f9fa89b0c173fa464361017d3603d80e2b88a1b04af13487c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:35Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.839503 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:35Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.876248 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.876281 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.876288 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.876302 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.876311 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:35Z","lastTransitionTime":"2025-11-22T10:38:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.886543 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd84e05e-cfd6-46d5-bd23-30689addcd8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c61
5be3ec1b51e235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mfm49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:35Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.915730 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mbpk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e39748b-4fa5-4a70-8921-dc3dc814f124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c334585be8f67849986e80c7a7ea777340e93f963ba58fa0fb7b36b16a73142b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m5ss\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mbpk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:35Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.957986 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s4mvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d73fd58d-561a-4b16-9f9d-49ae966edb24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ae06ffdc2c0f4cebcbf28f2997026db4d45bcd0c95776e5a1d94527503041d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":
\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s4mvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:35Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.978872 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.978919 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.978929 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.978944 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.978954 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:35Z","lastTransitionTime":"2025-11-22T10:38:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:35 crc kubenswrapper[4772]: I1122 10:38:35.997254 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"888813e4-14b2-4bbc-badf-3fd7c315a740\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ade6ffb5c4bd3c19a2d85f21de1e0f198d6729b45df79233c8db3c73aff066f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7c687d1a3e98c09917692169294f8549b0f1ddeddcc97c073da4d8e5c17e012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1452c164ce569dfa4665a70113fb965905d1974744637904d6bfba2e35446f11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b331f41571928038bb597f1e94a67d24e726471c1a22082607dd26c11e8ea33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:35Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:36 crc kubenswrapper[4772]: I1122 10:38:36.041629 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9557fb28-d21d-45a3-b7b9-170936e83f56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a77d132d925d70472ff5456bf4521c0612c319296dd1f3fa31c68d3c84fb347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddce747c72c887bb7dff7503755d81d0e65305d2545f93dcb0d20bd6bf32b495\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187f5633fd42f380d4976965809b538530acef067adc7e02203f95df532dbbfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 10:38:29.060158 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 10:38:29.060179 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 10:38:29.060386 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\"\\\\nI1122 10:38:29.060380 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763807893\\\\\\\\\\\\\\\" (2025-11-22 10:38:12 +0000 UTC to 2025-12-22 10:38:13 +0000 UTC (now=2025-11-22 10:38:29.060343851 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060557 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763807909\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763807908\\\\\\\\\\\\\\\" (2025-11-22 09:38:28 +0000 UTC to 2026-11-22 09:38:28 +0000 UTC (now=2025-11-22 10:38:29.060522446 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060582 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 10:38:29.060604 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 10:38:29.060636 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 10:38:29.062066 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062334 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062385 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 10:38:29.063702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42fc0afe421b18b7212836931c95487807ac0b32f6e87d92e847bd7826aedcd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:36Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:36 crc kubenswrapper[4772]: I1122 10:38:36.079853 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e05750665701a1973e974ba759b2a0ef54a89b24d63d33a9fce59e9aa7a21ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:36Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:36 crc kubenswrapper[4772]: I1122 10:38:36.082332 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:36 crc kubenswrapper[4772]: I1122 10:38:36.082375 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:36 crc kubenswrapper[4772]: I1122 10:38:36.082384 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:36 crc kubenswrapper[4772]: I1122 10:38:36.082400 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:36 crc kubenswrapper[4772]: I1122 10:38:36.082410 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:36Z","lastTransitionTime":"2025-11-22T10:38:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:36 crc kubenswrapper[4772]: I1122 10:38:36.118817 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:36Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:36 crc kubenswrapper[4772]: I1122 10:38:36.160011 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:36Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:36 crc kubenswrapper[4772]: I1122 10:38:36.184284 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:36 crc kubenswrapper[4772]: I1122 10:38:36.184325 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:36 crc kubenswrapper[4772]: I1122 10:38:36.184336 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:36 crc kubenswrapper[4772]: I1122 10:38:36.184352 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:36 crc kubenswrapper[4772]: I1122 10:38:36.184365 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:36Z","lastTransitionTime":"2025-11-22T10:38:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:36 crc kubenswrapper[4772]: I1122 10:38:36.197816 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9qv88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"751566de-1913-4dd6-9054-21febc661c27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902be5c4bb0f9aaca4a21ca0af13b2dc29d682c30e1e2d281beb6e4e15b6aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jckg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9qv88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:36Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:36 crc kubenswrapper[4772]: I1122 10:38:36.239219 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2386c238-461f-4956-940f-ac3c26eb052e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b3e223067a57a7ae418e1de80dff3c7537e0506e040028c413225f25397f03d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4247d5ab0095ab6c7315fdfcde34f8407d870723059e61c3417c20f80dc06ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwshd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:36Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:36 crc kubenswrapper[4772]: I1122 10:38:36.287628 4772 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:36 crc kubenswrapper[4772]: I1122 10:38:36.287708 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:36 crc kubenswrapper[4772]: I1122 10:38:36.287727 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:36 crc kubenswrapper[4772]: I1122 10:38:36.287758 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:36 crc kubenswrapper[4772]: I1122 10:38:36.287782 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:36Z","lastTransitionTime":"2025-11-22T10:38:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:36 crc kubenswrapper[4772]: I1122 10:38:36.293726 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z6xtb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e11c7f86-73db-4015-9fe5-c0b5047c19a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d7a31c58b763dd30aa10f250fa654756dbb10166064c3adf6d313cc13b066f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11d7a31c58b763dd30aa10f250fa654756dbb10166064c3adf6d313cc13b066f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z6xtb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:36Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:36 crc 
kubenswrapper[4772]: I1122 10:38:36.390375 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:36 crc kubenswrapper[4772]: I1122 10:38:36.390416 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:36 crc kubenswrapper[4772]: I1122 10:38:36.390425 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:36 crc kubenswrapper[4772]: I1122 10:38:36.390439 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:36 crc kubenswrapper[4772]: I1122 10:38:36.390451 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:36Z","lastTransitionTime":"2025-11-22T10:38:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:36 crc kubenswrapper[4772]: I1122 10:38:36.412831 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 10:38:36 crc kubenswrapper[4772]: E1122 10:38:36.412982 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 10:38:36 crc kubenswrapper[4772]: I1122 10:38:36.493255 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:36 crc kubenswrapper[4772]: I1122 10:38:36.493293 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:36 crc kubenswrapper[4772]: I1122 10:38:36.493302 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:36 crc kubenswrapper[4772]: I1122 10:38:36.493316 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:36 crc kubenswrapper[4772]: I1122 10:38:36.493327 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:36Z","lastTransitionTime":"2025-11-22T10:38:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:36 crc kubenswrapper[4772]: I1122 10:38:36.595182 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:36 crc kubenswrapper[4772]: I1122 10:38:36.595216 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:36 crc kubenswrapper[4772]: I1122 10:38:36.595224 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:36 crc kubenswrapper[4772]: I1122 10:38:36.595243 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:36 crc kubenswrapper[4772]: I1122 10:38:36.595254 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:36Z","lastTransitionTime":"2025-11-22T10:38:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:36 crc kubenswrapper[4772]: I1122 10:38:36.655722 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z6xtb" event={"ID":"e11c7f86-73db-4015-9fe5-c0b5047c19a0","Type":"ContainerStarted","Data":"533b405c883d98c14aca92e6a23bd1743dc0a76ed8256f2b892746195602fb03"} Nov 22 10:38:36 crc kubenswrapper[4772]: I1122 10:38:36.658663 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" event={"ID":"fd84e05e-cfd6-46d5-bd23-30689addcd8b","Type":"ContainerStarted","Data":"7959ee06497c972cc6915e3ab426dc2bba73e58343902084895fcca45fc97a61"} Nov 22 10:38:36 crc kubenswrapper[4772]: I1122 10:38:36.658709 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" event={"ID":"fd84e05e-cfd6-46d5-bd23-30689addcd8b","Type":"ContainerStarted","Data":"34dfec56a475b9928ac7803e64d6fd305028a7f8d66647c8d06eeaa12cb73838"} Nov 22 10:38:36 crc kubenswrapper[4772]: I1122 10:38:36.697505 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:36 crc kubenswrapper[4772]: I1122 10:38:36.697544 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:36 crc kubenswrapper[4772]: I1122 10:38:36.697553 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:36 crc kubenswrapper[4772]: I1122 10:38:36.697569 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:36 crc kubenswrapper[4772]: I1122 10:38:36.697579 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:36Z","lastTransitionTime":"2025-11-22T10:38:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:36 crc kubenswrapper[4772]: I1122 10:38:36.800614 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:36 crc kubenswrapper[4772]: I1122 10:38:36.800656 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:36 crc kubenswrapper[4772]: I1122 10:38:36.800667 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:36 crc kubenswrapper[4772]: I1122 10:38:36.800681 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:36 crc kubenswrapper[4772]: I1122 10:38:36.800692 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:36Z","lastTransitionTime":"2025-11-22T10:38:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:36 crc kubenswrapper[4772]: I1122 10:38:36.903893 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:36 crc kubenswrapper[4772]: I1122 10:38:36.904347 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:36 crc kubenswrapper[4772]: I1122 10:38:36.904373 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:36 crc kubenswrapper[4772]: I1122 10:38:36.904404 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:36 crc kubenswrapper[4772]: I1122 10:38:36.904426 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:36Z","lastTransitionTime":"2025-11-22T10:38:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:37 crc kubenswrapper[4772]: I1122 10:38:37.006365 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:37 crc kubenswrapper[4772]: I1122 10:38:37.006423 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:37 crc kubenswrapper[4772]: I1122 10:38:37.006435 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:37 crc kubenswrapper[4772]: I1122 10:38:37.006452 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:37 crc kubenswrapper[4772]: I1122 10:38:37.006463 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:37Z","lastTransitionTime":"2025-11-22T10:38:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:37 crc kubenswrapper[4772]: I1122 10:38:37.054002 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:38:37 crc kubenswrapper[4772]: E1122 10:38:37.054121 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:38:45.054098437 +0000 UTC m=+45.293542931 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:38:37 crc kubenswrapper[4772]: I1122 10:38:37.109466 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:37 crc kubenswrapper[4772]: I1122 10:38:37.109507 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:37 crc kubenswrapper[4772]: I1122 10:38:37.109516 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:37 crc kubenswrapper[4772]: I1122 10:38:37.109530 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:37 crc kubenswrapper[4772]: I1122 10:38:37.109546 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:37Z","lastTransitionTime":"2025-11-22T10:38:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:37 crc kubenswrapper[4772]: I1122 10:38:37.154936 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 10:38:37 crc kubenswrapper[4772]: I1122 10:38:37.154983 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:38:37 crc kubenswrapper[4772]: I1122 10:38:37.155000 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 10:38:37 crc kubenswrapper[4772]: I1122 10:38:37.155027 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:38:37 crc kubenswrapper[4772]: E1122 10:38:37.155147 4772 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 10:38:37 crc kubenswrapper[4772]: E1122 10:38:37.155162 4772 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 10:38:37 crc kubenswrapper[4772]: E1122 10:38:37.155196 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 10:38:45.155179806 +0000 UTC m=+45.394624300 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 10:38:37 crc kubenswrapper[4772]: E1122 10:38:37.155249 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 10:38:45.155228917 +0000 UTC m=+45.394673411 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 10:38:37 crc kubenswrapper[4772]: E1122 10:38:37.155178 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 10:38:37 crc kubenswrapper[4772]: E1122 10:38:37.155279 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 10:38:37 crc kubenswrapper[4772]: E1122 10:38:37.155292 4772 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 10:38:37 crc kubenswrapper[4772]: E1122 10:38:37.155324 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-22 10:38:45.155317669 +0000 UTC m=+45.394762163 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 10:38:37 crc kubenswrapper[4772]: E1122 10:38:37.155473 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 10:38:37 crc kubenswrapper[4772]: E1122 10:38:37.155548 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 10:38:37 crc kubenswrapper[4772]: E1122 10:38:37.155577 4772 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 10:38:37 crc kubenswrapper[4772]: E1122 10:38:37.155711 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-22 10:38:45.155674819 +0000 UTC m=+45.395119453 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 10:38:37 crc kubenswrapper[4772]: I1122 10:38:37.212124 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:37 crc kubenswrapper[4772]: I1122 10:38:37.212208 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:37 crc kubenswrapper[4772]: I1122 10:38:37.212229 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:37 crc kubenswrapper[4772]: I1122 10:38:37.212258 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:37 crc kubenswrapper[4772]: I1122 10:38:37.212278 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:37Z","lastTransitionTime":"2025-11-22T10:38:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:37 crc kubenswrapper[4772]: I1122 10:38:37.315533 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:37 crc kubenswrapper[4772]: I1122 10:38:37.315574 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:37 crc kubenswrapper[4772]: I1122 10:38:37.315583 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:37 crc kubenswrapper[4772]: I1122 10:38:37.315600 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:37 crc kubenswrapper[4772]: I1122 10:38:37.315611 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:37Z","lastTransitionTime":"2025-11-22T10:38:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:37 crc kubenswrapper[4772]: I1122 10:38:37.413368 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:38:37 crc kubenswrapper[4772]: E1122 10:38:37.413491 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 10:38:37 crc kubenswrapper[4772]: I1122 10:38:37.413658 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 10:38:37 crc kubenswrapper[4772]: E1122 10:38:37.413876 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 10:38:37 crc kubenswrapper[4772]: I1122 10:38:37.423025 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:37 crc kubenswrapper[4772]: I1122 10:38:37.423098 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:37 crc kubenswrapper[4772]: I1122 10:38:37.423113 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:37 crc kubenswrapper[4772]: I1122 10:38:37.423134 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:37 crc kubenswrapper[4772]: I1122 10:38:37.423151 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:37Z","lastTransitionTime":"2025-11-22T10:38:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:37 crc kubenswrapper[4772]: I1122 10:38:37.526323 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:37 crc kubenswrapper[4772]: I1122 10:38:37.526574 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:37 crc kubenswrapper[4772]: I1122 10:38:37.526678 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:37 crc kubenswrapper[4772]: I1122 10:38:37.526785 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:37 crc kubenswrapper[4772]: I1122 10:38:37.526879 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:37Z","lastTransitionTime":"2025-11-22T10:38:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:37 crc kubenswrapper[4772]: I1122 10:38:37.629271 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:37 crc kubenswrapper[4772]: I1122 10:38:37.629323 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:37 crc kubenswrapper[4772]: I1122 10:38:37.629336 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:37 crc kubenswrapper[4772]: I1122 10:38:37.629353 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:37 crc kubenswrapper[4772]: I1122 10:38:37.629369 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:37Z","lastTransitionTime":"2025-11-22T10:38:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:37 crc kubenswrapper[4772]: I1122 10:38:37.705039 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af3d235e676f9a66e16a76f30d940c652287b80507d2335b94aaad54d2aae071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b9e216df73a0c6e89da9a01b70ca7393dea61d59aa4237584add3411d3362d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:37Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:37 crc kubenswrapper[4772]: I1122 10:38:37.730624 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e026ddd84cdd62f9fa89b0c173fa464361017d3603d80e2b88a1b04af13487c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:37Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:37 crc kubenswrapper[4772]: I1122 10:38:37.731925 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:37 crc kubenswrapper[4772]: I1122 10:38:37.732006 4772 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:37 crc kubenswrapper[4772]: I1122 10:38:37.732076 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:37 crc kubenswrapper[4772]: I1122 10:38:37.732157 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:37 crc kubenswrapper[4772]: I1122 10:38:37.732214 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:37Z","lastTransitionTime":"2025-11-22T10:38:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:37 crc kubenswrapper[4772]: I1122 10:38:37.752511 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:37Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:37 crc kubenswrapper[4772]: I1122 10:38:37.769232 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd84e05e-cfd6-46d5-bd23-30689addcd8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mfm49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:37Z 
is after 2025-08-24T17:21:41Z" Nov 22 10:38:37 crc kubenswrapper[4772]: I1122 10:38:37.780331 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mbpk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e39748b-4fa5-4a70-8921-dc3dc814f124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c334585be8f67849986e80c7a7ea777340e93f963ba58fa0fb7b36b16a73142b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m5ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mbpk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:37Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:37 crc kubenswrapper[4772]: I1122 10:38:37.794462 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:37Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:37 crc kubenswrapper[4772]: I1122 10:38:37.808040 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9qv88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"751566de-1913-4dd6-9054-21febc661c27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902be5c4bb0f9aaca4a21ca0af13b2dc29d682c30e1e2d281beb6e4e15b6aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jckg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9qv88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:37Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:37 crc kubenswrapper[4772]: I1122 10:38:37.820858 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s4mvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d73fd58d-561a-4b16-9f9d-49ae966edb24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ae06ffdc2c0f4cebcbf28f2997026db4d45bcd0c95776e5a1d94527503041d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s4mvm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:37Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:37 crc kubenswrapper[4772]: I1122 10:38:37.834535 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"888813e4-14b2-4bbc-badf-3fd7c315a740\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ade6ffb5c4bd3c19a2d85f21de1e0f198d6729b45df79233c8db3c73aff066f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7c687d1a3e98c09917692169294f8549b0f1ddeddcc97c073da4d8e5c17e012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1452c164ce569dfa4665a70113fb965905d1974744637904d6bfba2e35446f11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b331f41571928038bb597f1e94a67d24e726471c1a22082607dd26c11e8ea33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:37Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:37 crc kubenswrapper[4772]: I1122 10:38:37.834625 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:37 crc kubenswrapper[4772]: I1122 10:38:37.834659 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:37 crc kubenswrapper[4772]: I1122 10:38:37.834671 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:37 crc kubenswrapper[4772]: I1122 10:38:37.834690 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:37 crc kubenswrapper[4772]: I1122 10:38:37.834705 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:37Z","lastTransitionTime":"2025-11-22T10:38:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:37 crc kubenswrapper[4772]: I1122 10:38:37.852341 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9557fb28-d21d-45a3-b7b9-170936e83f56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a77d132d925d70472ff5456bf4521c0612c319296dd1f3fa31c68d3c84fb347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddce747c72c887bb7dff7503755d81d0e65305d2545f93dcb0d20bd6bf32b495\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187f5633fd42f380d4976965809b538530acef067adc7e02203f95df532dbbfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 10:38:29.060158 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 10:38:29.060179 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 10:38:29.060386 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\"\\\\nI1122 10:38:29.060380 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763807893\\\\\\\\\\\\\\\" (2025-11-22 10:38:12 +0000 UTC to 2025-12-22 10:38:13 +0000 UTC (now=2025-11-22 10:38:29.060343851 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060557 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763807909\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763807908\\\\\\\\\\\\\\\" (2025-11-22 09:38:28 +0000 UTC to 2026-11-22 09:38:28 +0000 UTC (now=2025-11-22 10:38:29.060522446 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060582 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 10:38:29.060604 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 10:38:29.060636 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 10:38:29.062066 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062334 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062385 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 10:38:29.063702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42fc0afe421b18b7212836931c95487807ac0b32f6e87d92e847bd7826aedcd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:37Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:37 crc kubenswrapper[4772]: I1122 10:38:37.866796 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e05750665701a1973e974ba759b2a0ef54a89b24d63d33a9fce59e9aa7a21ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:37Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:37 crc kubenswrapper[4772]: I1122 10:38:37.880163 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:37Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:37 crc kubenswrapper[4772]: I1122 10:38:37.897436 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2386c238-461f-4956-940f-ac3c26eb052e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b3e223067a57a7ae418e1de80dff3c7537e0506e040028c413225f25397f03d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4247d5ab0095ab6c7315fdfcde34f8407d870723059e61c3417c20f80dc06ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-da
emon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwshd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:37Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:37 crc kubenswrapper[4772]: I1122 10:38:37.915864 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z6xtb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e11c7f86-73db-4015-9fe5-c0b5047c19a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d7a31c58b763dd30aa10f250fa654756dbb10166064c3adf6d313cc13b066f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11d7a31c58b763dd30aa10f250fa654756dbb10166064c3adf6d313cc13b066f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://533b405c883d98c14aca92e6a23bd1743dc0a76ed8256f2b892746195602fb03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z6xtb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:37Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:37 crc kubenswrapper[4772]: I1122 10:38:37.937309 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:37 crc kubenswrapper[4772]: I1122 10:38:37.937363 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:37 crc kubenswrapper[4772]: I1122 10:38:37.937377 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:37 crc kubenswrapper[4772]: I1122 10:38:37.937399 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:37 crc kubenswrapper[4772]: I1122 10:38:37.937414 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:37Z","lastTransitionTime":"2025-11-22T10:38:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:38 crc kubenswrapper[4772]: I1122 10:38:38.041187 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:38 crc kubenswrapper[4772]: I1122 10:38:38.041243 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:38 crc kubenswrapper[4772]: I1122 10:38:38.041256 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:38 crc kubenswrapper[4772]: I1122 10:38:38.041279 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:38 crc kubenswrapper[4772]: I1122 10:38:38.041294 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:38Z","lastTransitionTime":"2025-11-22T10:38:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:38 crc kubenswrapper[4772]: I1122 10:38:38.144509 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:38 crc kubenswrapper[4772]: I1122 10:38:38.144573 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:38 crc kubenswrapper[4772]: I1122 10:38:38.144588 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:38 crc kubenswrapper[4772]: I1122 10:38:38.144613 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:38 crc kubenswrapper[4772]: I1122 10:38:38.144633 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:38Z","lastTransitionTime":"2025-11-22T10:38:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:38 crc kubenswrapper[4772]: I1122 10:38:38.247815 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:38 crc kubenswrapper[4772]: I1122 10:38:38.247901 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:38 crc kubenswrapper[4772]: I1122 10:38:38.247922 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:38 crc kubenswrapper[4772]: I1122 10:38:38.247955 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:38 crc kubenswrapper[4772]: I1122 10:38:38.247979 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:38Z","lastTransitionTime":"2025-11-22T10:38:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:38 crc kubenswrapper[4772]: I1122 10:38:38.350622 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:38 crc kubenswrapper[4772]: I1122 10:38:38.350695 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:38 crc kubenswrapper[4772]: I1122 10:38:38.350707 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:38 crc kubenswrapper[4772]: I1122 10:38:38.350732 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:38 crc kubenswrapper[4772]: I1122 10:38:38.350750 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:38Z","lastTransitionTime":"2025-11-22T10:38:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:38 crc kubenswrapper[4772]: I1122 10:38:38.412690 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 10:38:38 crc kubenswrapper[4772]: E1122 10:38:38.412925 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 10:38:38 crc kubenswrapper[4772]: I1122 10:38:38.453774 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:38 crc kubenswrapper[4772]: I1122 10:38:38.453907 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:38 crc kubenswrapper[4772]: I1122 10:38:38.453925 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:38 crc kubenswrapper[4772]: I1122 10:38:38.453951 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:38 crc kubenswrapper[4772]: I1122 10:38:38.453996 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:38Z","lastTransitionTime":"2025-11-22T10:38:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:38 crc kubenswrapper[4772]: I1122 10:38:38.556987 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:38 crc kubenswrapper[4772]: I1122 10:38:38.557084 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:38 crc kubenswrapper[4772]: I1122 10:38:38.557105 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:38 crc kubenswrapper[4772]: I1122 10:38:38.557135 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:38 crc kubenswrapper[4772]: I1122 10:38:38.557151 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:38Z","lastTransitionTime":"2025-11-22T10:38:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:38 crc kubenswrapper[4772]: I1122 10:38:38.660439 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:38 crc kubenswrapper[4772]: I1122 10:38:38.660504 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:38 crc kubenswrapper[4772]: I1122 10:38:38.660517 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:38 crc kubenswrapper[4772]: I1122 10:38:38.660544 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:38 crc kubenswrapper[4772]: I1122 10:38:38.660558 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:38Z","lastTransitionTime":"2025-11-22T10:38:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:38 crc kubenswrapper[4772]: I1122 10:38:38.763853 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:38 crc kubenswrapper[4772]: I1122 10:38:38.763918 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:38 crc kubenswrapper[4772]: I1122 10:38:38.763932 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:38 crc kubenswrapper[4772]: I1122 10:38:38.763957 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:38 crc kubenswrapper[4772]: I1122 10:38:38.763972 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:38Z","lastTransitionTime":"2025-11-22T10:38:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:38 crc kubenswrapper[4772]: I1122 10:38:38.867316 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:38 crc kubenswrapper[4772]: I1122 10:38:38.867738 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:38 crc kubenswrapper[4772]: I1122 10:38:38.867771 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:38 crc kubenswrapper[4772]: I1122 10:38:38.867794 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:38 crc kubenswrapper[4772]: I1122 10:38:38.867807 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:38Z","lastTransitionTime":"2025-11-22T10:38:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:38 crc kubenswrapper[4772]: I1122 10:38:38.957313 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:38 crc kubenswrapper[4772]: I1122 10:38:38.957546 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:38 crc kubenswrapper[4772]: I1122 10:38:38.957554 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:38 crc kubenswrapper[4772]: I1122 10:38:38.957572 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:38 crc kubenswrapper[4772]: I1122 10:38:38.957583 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:38Z","lastTransitionTime":"2025-11-22T10:38:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:38 crc kubenswrapper[4772]: E1122 10:38:38.975724 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"902c1d4e-ffb4-4b1b-add7-8f7170ab4bc7\\\",\\\"systemUUID\\\":\\\"856a4d77-e4e0-4420-9e80-7c5223144311\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:38Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:38 crc kubenswrapper[4772]: I1122 10:38:38.980508 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:38 crc kubenswrapper[4772]: I1122 10:38:38.980551 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 10:38:38 crc kubenswrapper[4772]: I1122 10:38:38.980563 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:38 crc kubenswrapper[4772]: I1122 10:38:38.980588 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:38 crc kubenswrapper[4772]: I1122 10:38:38.980603 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:38Z","lastTransitionTime":"2025-11-22T10:38:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:38 crc kubenswrapper[4772]: E1122 10:38:38.997111 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"902c1d4e-ffb4-4b1b-add7-8f7170ab4bc7\\\",\\\"systemUUID\\\":\\\"856a4d77-e4e0-4420-9e80-7c5223144311\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:38Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:39 crc kubenswrapper[4772]: I1122 10:38:39.001103 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:39 crc kubenswrapper[4772]: I1122 10:38:39.001168 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 10:38:39 crc kubenswrapper[4772]: I1122 10:38:39.001183 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:39 crc kubenswrapper[4772]: I1122 10:38:39.001208 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:39 crc kubenswrapper[4772]: I1122 10:38:39.001227 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:39Z","lastTransitionTime":"2025-11-22T10:38:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:39 crc kubenswrapper[4772]: E1122 10:38:39.015663 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"902c1d4e-ffb4-4b1b-add7-8f7170ab4bc7\\\",\\\"systemUUID\\\":\\\"856a4d77-e4e0-4420-9e80-7c5223144311\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:39Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:39 crc kubenswrapper[4772]: I1122 10:38:39.020227 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:39 crc kubenswrapper[4772]: I1122 10:38:39.020277 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 10:38:39 crc kubenswrapper[4772]: I1122 10:38:39.020289 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:39 crc kubenswrapper[4772]: I1122 10:38:39.020310 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:39 crc kubenswrapper[4772]: I1122 10:38:39.020325 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:39Z","lastTransitionTime":"2025-11-22T10:38:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:39 crc kubenswrapper[4772]: E1122 10:38:39.033225 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"902c1d4e-ffb4-4b1b-add7-8f7170ab4bc7\\\",\\\"systemUUID\\\":\\\"856a4d77-e4e0-4420-9e80-7c5223144311\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:39Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:39 crc kubenswrapper[4772]: I1122 10:38:39.036945 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:39 crc kubenswrapper[4772]: I1122 10:38:39.036986 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 10:38:39 crc kubenswrapper[4772]: I1122 10:38:39.036996 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:39 crc kubenswrapper[4772]: I1122 10:38:39.037014 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:39 crc kubenswrapper[4772]: I1122 10:38:39.037025 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:39Z","lastTransitionTime":"2025-11-22T10:38:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:39 crc kubenswrapper[4772]: E1122 10:38:39.048947 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"902c1d4e-ffb4-4b1b-add7-8f7170ab4bc7\\\",\\\"systemUUID\\\":\\\"856a4d77-e4e0-4420-9e80-7c5223144311\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:39Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:39 crc kubenswrapper[4772]: E1122 10:38:39.049139 4772 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 22 10:38:39 crc kubenswrapper[4772]: I1122 10:38:39.051293 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 22 10:38:39 crc kubenswrapper[4772]: I1122 10:38:39.051345 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:39 crc kubenswrapper[4772]: I1122 10:38:39.051357 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:39 crc kubenswrapper[4772]: I1122 10:38:39.051388 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:39 crc kubenswrapper[4772]: I1122 10:38:39.051405 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:39Z","lastTransitionTime":"2025-11-22T10:38:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:39 crc kubenswrapper[4772]: I1122 10:38:39.155133 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:39 crc kubenswrapper[4772]: I1122 10:38:39.155192 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:39 crc kubenswrapper[4772]: I1122 10:38:39.155205 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:39 crc kubenswrapper[4772]: I1122 10:38:39.155226 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:39 crc kubenswrapper[4772]: I1122 10:38:39.155241 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:39Z","lastTransitionTime":"2025-11-22T10:38:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:39 crc kubenswrapper[4772]: I1122 10:38:39.258923 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:39 crc kubenswrapper[4772]: I1122 10:38:39.259008 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:39 crc kubenswrapper[4772]: I1122 10:38:39.259023 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:39 crc kubenswrapper[4772]: I1122 10:38:39.259066 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:39 crc kubenswrapper[4772]: I1122 10:38:39.259084 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:39Z","lastTransitionTime":"2025-11-22T10:38:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:39 crc kubenswrapper[4772]: I1122 10:38:39.361992 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:39 crc kubenswrapper[4772]: I1122 10:38:39.362058 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:39 crc kubenswrapper[4772]: I1122 10:38:39.362071 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:39 crc kubenswrapper[4772]: I1122 10:38:39.362089 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:39 crc kubenswrapper[4772]: I1122 10:38:39.362102 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:39Z","lastTransitionTime":"2025-11-22T10:38:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:39 crc kubenswrapper[4772]: I1122 10:38:39.412936 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:38:39 crc kubenswrapper[4772]: I1122 10:38:39.412936 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 10:38:39 crc kubenswrapper[4772]: E1122 10:38:39.413077 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 10:38:39 crc kubenswrapper[4772]: E1122 10:38:39.413207 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 10:38:39 crc kubenswrapper[4772]: I1122 10:38:39.465334 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:39 crc kubenswrapper[4772]: I1122 10:38:39.465384 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:39 crc kubenswrapper[4772]: I1122 10:38:39.465398 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:39 crc kubenswrapper[4772]: I1122 10:38:39.465426 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:39 crc kubenswrapper[4772]: I1122 10:38:39.465448 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:39Z","lastTransitionTime":"2025-11-22T10:38:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:39 crc kubenswrapper[4772]: I1122 10:38:39.570390 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:39 crc kubenswrapper[4772]: I1122 10:38:39.570456 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:39 crc kubenswrapper[4772]: I1122 10:38:39.570475 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:39 crc kubenswrapper[4772]: I1122 10:38:39.570505 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:39 crc kubenswrapper[4772]: I1122 10:38:39.570527 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:39Z","lastTransitionTime":"2025-11-22T10:38:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 22 10:38:39 crc kubenswrapper[4772]: I1122 10:38:39.674722 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 10:38:39 crc kubenswrapper[4772]: I1122 10:38:39.674776 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 10:38:39 crc kubenswrapper[4772]: I1122 10:38:39.674788 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 10:38:39 crc kubenswrapper[4772]: I1122 10:38:39.674807 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 22 10:38:39 crc kubenswrapper[4772]: I1122 10:38:39.674825 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:39Z","lastTransitionTime":"2025-11-22T10:38:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 22 10:38:39 crc kubenswrapper[4772]: I1122 10:38:39.778463 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 10:38:39 crc kubenswrapper[4772]: I1122 10:38:39.778507 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 10:38:39 crc kubenswrapper[4772]: I1122 10:38:39.778519 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 10:38:39 crc kubenswrapper[4772]: I1122 10:38:39.778541 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 22 10:38:39 crc kubenswrapper[4772]: I1122 10:38:39.778554 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:39Z","lastTransitionTime":"2025-11-22T10:38:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 22 10:38:39 crc kubenswrapper[4772]: I1122 10:38:39.882110 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 10:38:39 crc kubenswrapper[4772]: I1122 10:38:39.882152 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 10:38:39 crc kubenswrapper[4772]: I1122 10:38:39.882167 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 10:38:39 crc kubenswrapper[4772]: I1122 10:38:39.882187 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 22 10:38:39 crc kubenswrapper[4772]: I1122 10:38:39.882203 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:39Z","lastTransitionTime":"2025-11-22T10:38:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 22 10:38:39 crc kubenswrapper[4772]: I1122 10:38:39.985949 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 10:38:39 crc kubenswrapper[4772]: I1122 10:38:39.985994 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 10:38:39 crc kubenswrapper[4772]: I1122 10:38:39.986008 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 10:38:39 crc kubenswrapper[4772]: I1122 10:38:39.986031 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 22 10:38:39 crc kubenswrapper[4772]: I1122 10:38:39.986066 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:39Z","lastTransitionTime":"2025-11-22T10:38:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 22 10:38:40 crc kubenswrapper[4772]: I1122 10:38:40.088986 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 10:38:40 crc kubenswrapper[4772]: I1122 10:38:40.089078 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 10:38:40 crc kubenswrapper[4772]: I1122 10:38:40.089142 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 10:38:40 crc kubenswrapper[4772]: I1122 10:38:40.089178 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 22 10:38:40 crc kubenswrapper[4772]: I1122 10:38:40.089208 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:40Z","lastTransitionTime":"2025-11-22T10:38:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 22 10:38:40 crc kubenswrapper[4772]: I1122 10:38:40.193558 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 10:38:40 crc kubenswrapper[4772]: I1122 10:38:40.194023 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 10:38:40 crc kubenswrapper[4772]: I1122 10:38:40.194040 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 10:38:40 crc kubenswrapper[4772]: I1122 10:38:40.194083 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 22 10:38:40 crc kubenswrapper[4772]: I1122 10:38:40.194098 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:40Z","lastTransitionTime":"2025-11-22T10:38:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 22 10:38:40 crc kubenswrapper[4772]: I1122 10:38:40.297690 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 10:38:40 crc kubenswrapper[4772]: I1122 10:38:40.297742 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 10:38:40 crc kubenswrapper[4772]: I1122 10:38:40.297751 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 10:38:40 crc kubenswrapper[4772]: I1122 10:38:40.297769 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 22 10:38:40 crc kubenswrapper[4772]: I1122 10:38:40.297779 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:40Z","lastTransitionTime":"2025-11-22T10:38:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 22 10:38:40 crc kubenswrapper[4772]: I1122 10:38:40.400217 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 10:38:40 crc kubenswrapper[4772]: I1122 10:38:40.400284 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 10:38:40 crc kubenswrapper[4772]: I1122 10:38:40.400293 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 10:38:40 crc kubenswrapper[4772]: I1122 10:38:40.400310 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 22 10:38:40 crc kubenswrapper[4772]: I1122 10:38:40.400321 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:40Z","lastTransitionTime":"2025-11-22T10:38:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 22 10:38:40 crc kubenswrapper[4772]: I1122 10:38:40.413114 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 22 10:38:40 crc kubenswrapper[4772]: E1122 10:38:40.413262 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 22 10:38:40 crc kubenswrapper[4772]: I1122 10:38:40.503929 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 10:38:40 crc kubenswrapper[4772]: I1122 10:38:40.503994 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 10:38:40 crc kubenswrapper[4772]: I1122 10:38:40.504010 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 10:38:40 crc kubenswrapper[4772]: I1122 10:38:40.504061 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 22 10:38:40 crc kubenswrapper[4772]: I1122 10:38:40.504082 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:40Z","lastTransitionTime":"2025-11-22T10:38:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 22 10:38:40 crc kubenswrapper[4772]: I1122 10:38:40.606319 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 10:38:40 crc kubenswrapper[4772]: I1122 10:38:40.606359 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 10:38:40 crc kubenswrapper[4772]: I1122 10:38:40.606367 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 10:38:40 crc kubenswrapper[4772]: I1122 10:38:40.606380 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 22 10:38:40 crc kubenswrapper[4772]: I1122 10:38:40.606390 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:40Z","lastTransitionTime":"2025-11-22T10:38:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:40 crc kubenswrapper[4772]: I1122 10:38:40.674870 4772 generic.go:334] "Generic (PLEG): container finished" podID="e11c7f86-73db-4015-9fe5-c0b5047c19a0" containerID="533b405c883d98c14aca92e6a23bd1743dc0a76ed8256f2b892746195602fb03" exitCode=0 Nov 22 10:38:40 crc kubenswrapper[4772]: I1122 10:38:40.674976 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z6xtb" event={"ID":"e11c7f86-73db-4015-9fe5-c0b5047c19a0","Type":"ContainerDied","Data":"533b405c883d98c14aca92e6a23bd1743dc0a76ed8256f2b892746195602fb03"} Nov 22 10:38:40 crc kubenswrapper[4772]: I1122 10:38:40.683002 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" event={"ID":"fd84e05e-cfd6-46d5-bd23-30689addcd8b","Type":"ContainerStarted","Data":"4ac96e66a7d3d7899a8196d07614bccdd741145d5dc57c26b30fc4e127c6a94e"} Nov 22 10:38:40 crc kubenswrapper[4772]: I1122 10:38:40.693147 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e026ddd84cdd62f9fa89b0c173fa464361017d3603d80e2b88a1b04af13487c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:40Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:40 crc kubenswrapper[4772]: I1122 10:38:40.708295 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:40 crc kubenswrapper[4772]: I1122 10:38:40.708348 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 10:38:40 crc kubenswrapper[4772]: I1122 10:38:40.708356 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:40 crc kubenswrapper[4772]: I1122 10:38:40.708371 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:40 crc kubenswrapper[4772]: I1122 10:38:40.708400 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:40Z","lastTransitionTime":"2025-11-22T10:38:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:40 crc kubenswrapper[4772]: I1122 10:38:40.714292 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:40Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:40 crc kubenswrapper[4772]: I1122 10:38:40.734131 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd84e05e-cfd6-46d5-bd23-30689addcd8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mfm49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:40Z 
is after 2025-08-24T17:21:41Z" Nov 22 10:38:40 crc kubenswrapper[4772]: I1122 10:38:40.748880 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mbpk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e39748b-4fa5-4a70-8921-dc3dc814f124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c334585be8f67849986e80c7a7ea777340e93f963ba58fa0fb7b36b16a73142b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m5ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mbpk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:40Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:40 crc kubenswrapper[4772]: I1122 10:38:40.768298 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s4mvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d73fd58d-561a-4b16-9f9d-49ae966edb24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ae06ffdc2c0f4cebcbf28f2997026db4d45bcd0c95776e5a1d94527503041d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s4mvm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:40Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:40 crc kubenswrapper[4772]: I1122 10:38:40.786900 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"888813e4-14b2-4bbc-badf-3fd7c315a740\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ade6ffb5c4bd3c19a2d85f21de1e0f198d6729b45df79233c8db3c73aff066f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7c687d1a3e98c09917692169294f8549b0f1ddeddcc97c073da4d8e5c17e012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1452c164ce569dfa4665a70113fb965905d1974744637904d6bfba2e35446f11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b331f41571928038bb597f1e94a67d24e726471c1a22082607dd26c11e8ea33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:40Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:40 crc kubenswrapper[4772]: I1122 10:38:40.804974 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9557fb28-d21d-45a3-b7b9-170936e83f56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a77d132d925d70472ff5456bf4521c0612c319296dd1f3fa31c68d3c84fb347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddce747c72c887bb7dff7503755d81d0e65305d2545f93dcb0d20bd6bf32b495\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187f5633fd42f380d4976965809b538530acef067adc7e02203f95df532dbbfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 10:38:29.060158 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 10:38:29.060179 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 10:38:29.060386 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\"\\\\nI1122 10:38:29.060380 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763807893\\\\\\\\\\\\\\\" (2025-11-22 10:38:12 +0000 UTC to 2025-12-22 10:38:13 +0000 UTC (now=2025-11-22 10:38:29.060343851 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060557 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763807909\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763807908\\\\\\\\\\\\\\\" (2025-11-22 09:38:28 +0000 UTC to 2026-11-22 09:38:28 +0000 UTC (now=2025-11-22 10:38:29.060522446 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060582 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 10:38:29.060604 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 10:38:29.060636 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 10:38:29.062066 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062334 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062385 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 10:38:29.063702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42fc0afe421b18b7212836931c95487807ac0b32f6e87d92e847bd7826aedcd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:40Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:40 crc kubenswrapper[4772]: I1122 10:38:40.810849 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:40 crc kubenswrapper[4772]: I1122 10:38:40.810906 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:40 crc kubenswrapper[4772]: I1122 10:38:40.810925 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:40 crc kubenswrapper[4772]: I1122 10:38:40.810948 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:40 crc kubenswrapper[4772]: I1122 10:38:40.810963 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:40Z","lastTransitionTime":"2025-11-22T10:38:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:40 crc kubenswrapper[4772]: I1122 10:38:40.820909 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e05750665701a1973e974ba759b2a0ef54a89b24d63d33a9fce59e9aa7a21ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:40Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:40 crc kubenswrapper[4772]: I1122 10:38:40.835067 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:40Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:40 crc kubenswrapper[4772]: I1122 10:38:40.853200 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:40Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:40 crc kubenswrapper[4772]: I1122 10:38:40.866172 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9qv88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"751566de-1913-4dd6-9054-21febc661c27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902be5c4bb0f9aaca4a21ca0af13b2dc29d682c30e1e2d281beb6e4e15b6aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jckg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9qv88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:40Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:40 crc kubenswrapper[4772]: I1122 10:38:40.882329 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2386c238-461f-4956-940f-ac3c26eb052e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b3e223067a57a7ae418e1de80dff3c7537e0506e040028c413225f25397f03d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4247d5ab0095ab6c7315fdfcde34f8407d870723059e61c3417c20f80dc06ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwshd\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:40Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:40 crc kubenswrapper[4772]: I1122 10:38:40.899630 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z6xtb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e11c7f86-73db-4015-9fe5-c0b5047c19a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d7a31c58b763dd30aa10f250fa654756dbb10166064c3adf6d313cc13b066f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11d7a31c58b763dd30aa10f250fa654756dbb10166064c3adf6d313cc13b066f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-co
py\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://533b405c883d98c14aca92e6a23bd1743dc0a76ed8256f2b892746195602fb03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://533b405c883d98c14aca92e6a23bd1743dc0a76ed8256f2b892746195602fb03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z6xtb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:40Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:40 crc kubenswrapper[4772]: I1122 10:38:40.914749 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af3d235e676f9a66e16a76f30d940c652287b80507d2335b94aaad54d2aae071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b9e216df73a0c6e89da9a01b70ca7393dea61d59aa4237584add3411d3362d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:40Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:40 crc kubenswrapper[4772]: I1122 10:38:40.914876 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:40 crc kubenswrapper[4772]: I1122 10:38:40.914911 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:40 crc kubenswrapper[4772]: I1122 10:38:40.914921 4772 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 22 10:38:40 crc kubenswrapper[4772]: I1122 10:38:40.914936 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:40 crc kubenswrapper[4772]: I1122 10:38:40.914946 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:40Z","lastTransitionTime":"2025-11-22T10:38:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.017077 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.017121 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.017130 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.017143 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.017154 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:41Z","lastTransitionTime":"2025-11-22T10:38:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.120036 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.120108 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.120121 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.120146 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.120168 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:41Z","lastTransitionTime":"2025-11-22T10:38:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.222808 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.222837 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.222844 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.222859 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.222869 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:41Z","lastTransitionTime":"2025-11-22T10:38:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.332298 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.332355 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.332373 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.332401 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.332418 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:41Z","lastTransitionTime":"2025-11-22T10:38:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.413274 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 10:38:41 crc kubenswrapper[4772]: E1122 10:38:41.413553 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.414815 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:38:41 crc kubenswrapper[4772]: E1122 10:38:41.415115 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.434544 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.434581 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.434590 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.434607 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.434617 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:41Z","lastTransitionTime":"2025-11-22T10:38:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.442645 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd84e05e-cfd6-46d5-bd23-30689addcd8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mfm49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:41Z 
is after 2025-08-24T17:21:41Z" Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.455788 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mbpk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e39748b-4fa5-4a70-8921-dc3dc814f124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c334585be8f67849986e80c7a7ea777340e93f963ba58fa0fb7b36b16a73142b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m5ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mbpk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:41Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.472326 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:41Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.489967 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9557fb28-d21d-45a3-b7b9-170936e83f56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a77d132d925d70472ff5456bf4521c0612c319296dd1f3fa31c68d3c84fb347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddce747c72c887bb7dff7503755d81d0e65305d2545f93dcb0d20bd6bf32b495\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187f5633fd42f380d4976965809b538530acef067adc7e02203f95df532dbbfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 10:38:29.060158 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 10:38:29.060179 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 10:38:29.060386 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\"\\\\nI1122 10:38:29.060380 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763807893\\\\\\\\\\\\\\\" (2025-11-22 10:38:12 +0000 UTC to 2025-12-22 10:38:13 +0000 UTC (now=2025-11-22 10:38:29.060343851 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060557 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763807909\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763807908\\\\\\\\\\\\\\\" (2025-11-22 09:38:28 +0000 UTC to 2026-11-22 09:38:28 +0000 UTC (now=2025-11-22 10:38:29.060522446 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060582 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 10:38:29.060604 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 10:38:29.060636 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 10:38:29.062066 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062334 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062385 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 10:38:29.063702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42fc0afe421b18b7212836931c95487807ac0b32f6e87d92e847bd7826aedcd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:41Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.508660 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e05750665701a1973e974ba759b2a0ef54a89b24d63d33a9fce59e9aa7a21ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:41Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.523619 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:41Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.537698 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.537758 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.537777 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.537802 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.537819 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:41Z","lastTransitionTime":"2025-11-22T10:38:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.539200 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:41Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.551652 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9qv88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"751566de-1913-4dd6-9054-21febc661c27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902be5c4bb0f9aaca4a21ca0af13b2dc29d682c30e1e2d281beb6e4e15b6aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jckg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9qv88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:41Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.574749 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s4mvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d73fd58d-561a-4b16-9f9d-49ae966edb24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ae06ffdc2c0f4cebcbf28f2997026db4d45bcd0c95776e5a1d94527503041d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s4mvm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:41Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.591038 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"888813e4-14b2-4bbc-badf-3fd7c315a740\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ade6ffb5c4bd3c19a2d85f21de1e0f198d6729b45df79233c8db3c73aff066f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7c687d1a3e98c09917692169294f8549b0f1ddeddcc97c073da4d8e5c17e012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1452c164ce569dfa4665a70113fb965905d1974744637904d6bfba2e35446f11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b331f41571928038bb597f1e94a67d24e726471c1a22082607dd26c11e8ea33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:41Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.614342 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z6xtb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e11c7f86-73db-4015-9fe5-c0b5047c19a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d7a31c58b763dd30aa10f250fa654756dbb10166064c3adf6d313cc13b066f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11d7a31c58b763dd30aa10f250fa654756dbb10166064c3adf6d313cc13b066f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://533b405c883d98c14aca92e6a23bd1743dc0a76ed8256f2b892746195602fb03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://533b405c883d98c14aca92e6a23bd1743dc0a76ed8256f2b892746195602fb03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-
22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z6xtb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:41Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.627714 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2386c238-461f-4956-940f-ac3c26eb052e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b3e223067a57a7ae418e1de80dff3c7537e0506e040028c413225f25397f03d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4247d5ab0095ab6c7315fdfcde34f8407d870723059e61c3417c20f80dc06ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\"
:\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwshd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:41Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.641164 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.641212 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.641242 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.641261 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.641273 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:41Z","lastTransitionTime":"2025-11-22T10:38:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.648158 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af3d235e676f9a66e16a76f30d940c652287b80507d2335b94aaad54d2aae071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b9e216df73a0c6e89da9a01b70ca7393dea61d59aa4237584add3411d3362d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:41Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.663186 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e026ddd84cdd62f9fa89b0c173fa464361017d3603d80e2b88a1b04af13487c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:41Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.688210 4772 generic.go:334] "Generic (PLEG): container finished" podID="e11c7f86-73db-4015-9fe5-c0b5047c19a0" containerID="c508fde56a767c063afbb65f6272695d739d7d0c594efcafd7465397309ef931" exitCode=0 Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.688290 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z6xtb" event={"ID":"e11c7f86-73db-4015-9fe5-c0b5047c19a0","Type":"ContainerDied","Data":"c508fde56a767c063afbb65f6272695d739d7d0c594efcafd7465397309ef931"} Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.706414 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af3d235e676f9a66e16a76f30d940c652287b80507d2335b94aaad54d2aae071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b9e216df73a0c6e89da9a01b70ca7393dea61d59aa4237584add3411d3362d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:41Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.722810 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e026ddd84cdd62f9fa89b0c173fa464361017d3603d80e2b88a1b04af13487c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:41Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.745375 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.745417 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.745429 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.745451 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.745465 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:41Z","lastTransitionTime":"2025-11-22T10:38:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.749308 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd84e05e-cfd6-46d5-bd23-30689addcd8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c61
5be3ec1b51e235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mfm49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:41Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.761358 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mbpk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e39748b-4fa5-4a70-8921-dc3dc814f124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c334585be8f67849986e80c7a7ea777340e93f963ba58fa0fb7b36b16a73142b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m5ss\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mbpk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:41Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.776856 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:41Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.793620 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9557fb28-d21d-45a3-b7b9-170936e83f56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a77d132d925d70472ff5456bf4521c0612c319296dd1f3fa31c68d3c84fb347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddce747c72c887bb7dff7503755d81d0e65305d2545f93dcb0d20bd6bf32b495\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187f5633fd42f380d4976965809b538530acef067adc7e02203f95df532dbbfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 10:38:29.060158 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 10:38:29.060179 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 10:38:29.060386 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\"\\\\nI1122 10:38:29.060380 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763807893\\\\\\\\\\\\\\\" (2025-11-22 10:38:12 +0000 UTC to 2025-12-22 10:38:13 +0000 UTC (now=2025-11-22 10:38:29.060343851 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060557 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763807909\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763807908\\\\\\\\\\\\\\\" (2025-11-22 09:38:28 +0000 UTC to 2026-11-22 09:38:28 +0000 UTC (now=2025-11-22 10:38:29.060522446 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060582 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 10:38:29.060604 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 10:38:29.060636 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 10:38:29.062066 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062334 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062385 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 10:38:29.063702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42fc0afe421b18b7212836931c95487807ac0b32f6e87d92e847bd7826aedcd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:41Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.810736 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e05750665701a1973e974ba759b2a0ef54a89b24d63d33a9fce59e9aa7a21ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:41Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.825456 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:41Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.840369 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:41Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.850131 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.850181 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.850192 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.850212 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.850227 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:41Z","lastTransitionTime":"2025-11-22T10:38:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.853164 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9qv88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"751566de-1913-4dd6-9054-21febc661c27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902be5c4bb0f9aaca4a21ca0af13b2dc29d682c30e1e2d281beb6e4e15b6aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jckg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9qv88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:41Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.869205 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s4mvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d73fd58d-561a-4b16-9f9d-49ae966edb24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ae06ffdc2c0f4cebcbf28f2997026db4d45bcd0c95776e5a1d94527503041d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s4mvm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:41Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.886032 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"888813e4-14b2-4bbc-badf-3fd7c315a740\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ade6ffb5c4bd3c19a2d85f21de1e0f198d6729b45df79233c8db3c73aff066f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7c687d1a3e98c09917692169294f8549b0f1ddeddcc97c073da4d8e5c17e012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1452c164ce569dfa4665a70113fb965905d1974744637904d6bfba2e35446f11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b331f41571928038bb597f1e94a67d24e726471c1a22082607dd26c11e8ea33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:41Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.901874 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z6xtb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e11c7f86-73db-4015-9fe5-c0b5047c19a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d7a31c58b763dd30aa10f250fa654756dbb10166064c3adf6d313cc13b066f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11d7a31c58b763dd30aa10f250fa654756dbb10166064c3adf6d313cc13b066f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://533b405c883d98c14aca92e6a23bd1743dc0a76ed8256f2b892746195602fb03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://533b405c883d98c14aca92e6a23bd1743dc0a76ed8256f2b892746195602fb03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c508fde56a767c063afbb65f6272695d739d7d0c594efcafd7465397309ef931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c508fde56a767c063afbb65f6272695d739d7d0c594efcafd7465397309ef931\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z6xtb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:41Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.912720 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2386c238-461f-4956-940f-ac3c26eb052e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b3e223067a57a7ae418e1de80dff3c7537e0506e040028c413225f25397f03d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4247d5ab0095ab6c7315fdfcde34f8407d870723059e61c3417c20f80dc06ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"202
5-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwshd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:41Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.953345 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.953391 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.953407 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.953428 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:41 crc kubenswrapper[4772]: I1122 10:38:41.953442 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:41Z","lastTransitionTime":"2025-11-22T10:38:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:42 crc kubenswrapper[4772]: I1122 10:38:42.056247 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:42 crc kubenswrapper[4772]: I1122 10:38:42.056300 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:42 crc kubenswrapper[4772]: I1122 10:38:42.056313 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:42 crc kubenswrapper[4772]: I1122 10:38:42.056334 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:42 crc kubenswrapper[4772]: I1122 10:38:42.056353 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:42Z","lastTransitionTime":"2025-11-22T10:38:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:42 crc kubenswrapper[4772]: I1122 10:38:42.159260 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:42 crc kubenswrapper[4772]: I1122 10:38:42.159310 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:42 crc kubenswrapper[4772]: I1122 10:38:42.159320 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:42 crc kubenswrapper[4772]: I1122 10:38:42.159338 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:42 crc kubenswrapper[4772]: I1122 10:38:42.159353 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:42Z","lastTransitionTime":"2025-11-22T10:38:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:42 crc kubenswrapper[4772]: I1122 10:38:42.262341 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:42 crc kubenswrapper[4772]: I1122 10:38:42.262389 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:42 crc kubenswrapper[4772]: I1122 10:38:42.262410 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:42 crc kubenswrapper[4772]: I1122 10:38:42.262434 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:42 crc kubenswrapper[4772]: I1122 10:38:42.262446 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:42Z","lastTransitionTime":"2025-11-22T10:38:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:42 crc kubenswrapper[4772]: I1122 10:38:42.369136 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:42 crc kubenswrapper[4772]: I1122 10:38:42.369287 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:42 crc kubenswrapper[4772]: I1122 10:38:42.369300 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:42 crc kubenswrapper[4772]: I1122 10:38:42.369318 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:42 crc kubenswrapper[4772]: I1122 10:38:42.369330 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:42Z","lastTransitionTime":"2025-11-22T10:38:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:42 crc kubenswrapper[4772]: I1122 10:38:42.412857 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 10:38:42 crc kubenswrapper[4772]: E1122 10:38:42.413024 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 10:38:42 crc kubenswrapper[4772]: I1122 10:38:42.471890 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:42 crc kubenswrapper[4772]: I1122 10:38:42.471930 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:42 crc kubenswrapper[4772]: I1122 10:38:42.471939 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:42 crc kubenswrapper[4772]: I1122 10:38:42.471956 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:42 crc kubenswrapper[4772]: I1122 10:38:42.471968 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:42Z","lastTransitionTime":"2025-11-22T10:38:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:42 crc kubenswrapper[4772]: I1122 10:38:42.579247 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:42 crc kubenswrapper[4772]: I1122 10:38:42.579291 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:42 crc kubenswrapper[4772]: I1122 10:38:42.579300 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:42 crc kubenswrapper[4772]: I1122 10:38:42.579319 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:42 crc kubenswrapper[4772]: I1122 10:38:42.579329 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:42Z","lastTransitionTime":"2025-11-22T10:38:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:42 crc kubenswrapper[4772]: I1122 10:38:42.681775 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:42 crc kubenswrapper[4772]: I1122 10:38:42.682189 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:42 crc kubenswrapper[4772]: I1122 10:38:42.682202 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:42 crc kubenswrapper[4772]: I1122 10:38:42.682218 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:42 crc kubenswrapper[4772]: I1122 10:38:42.682230 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:42Z","lastTransitionTime":"2025-11-22T10:38:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:42 crc kubenswrapper[4772]: I1122 10:38:42.694339 4772 generic.go:334] "Generic (PLEG): container finished" podID="e11c7f86-73db-4015-9fe5-c0b5047c19a0" containerID="8dabd0e99ffb72fe38680ba04e6dc2cc342f2b21368c0646ac51924fdca817b7" exitCode=0 Nov 22 10:38:42 crc kubenswrapper[4772]: I1122 10:38:42.694419 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z6xtb" event={"ID":"e11c7f86-73db-4015-9fe5-c0b5047c19a0","Type":"ContainerDied","Data":"8dabd0e99ffb72fe38680ba04e6dc2cc342f2b21368c0646ac51924fdca817b7"} Nov 22 10:38:42 crc kubenswrapper[4772]: I1122 10:38:42.699446 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" event={"ID":"fd84e05e-cfd6-46d5-bd23-30689addcd8b","Type":"ContainerStarted","Data":"b60f034e3e012ea5ce1371b4bfbbda3cd6a2466e7e8b8b4106b5fe5f3a14d146"} Nov 22 10:38:42 crc kubenswrapper[4772]: I1122 10:38:42.699729 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" Nov 22 10:38:42 crc kubenswrapper[4772]: I1122 10:38:42.699749 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" Nov 22 10:38:42 crc kubenswrapper[4772]: I1122 10:38:42.711566 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e026ddd84cdd62f9fa89b0c173fa464361017d3603d80e2b88a1b04af13487c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:42Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:42 crc kubenswrapper[4772]: I1122 10:38:42.734173 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" Nov 22 10:38:42 crc kubenswrapper[4772]: I1122 10:38:42.735019 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:42Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:42 crc kubenswrapper[4772]: I1122 10:38:42.755193 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd84e05e-cfd6-46d5-bd23-30689addcd8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mfm49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:42Z 
is after 2025-08-24T17:21:41Z" Nov 22 10:38:42 crc kubenswrapper[4772]: I1122 10:38:42.767135 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mbpk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e39748b-4fa5-4a70-8921-dc3dc814f124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c334585be8f67849986e80c7a7ea777340e93f963ba58fa0fb7b36b16a73142b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m5ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mbpk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:42Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:42 crc kubenswrapper[4772]: I1122 10:38:42.779211 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9qv88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"751566de-1913-4dd6-9054-21febc661c27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902be5c4bb0f9aaca4a21ca0af13b2dc29d682c30e1e2d281beb6e4e15b6aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jckg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9qv88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:42Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:42 crc kubenswrapper[4772]: I1122 10:38:42.785609 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:42 crc kubenswrapper[4772]: I1122 10:38:42.785656 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:42 crc kubenswrapper[4772]: I1122 10:38:42.785669 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:42 crc kubenswrapper[4772]: I1122 10:38:42.785692 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:42 crc kubenswrapper[4772]: I1122 10:38:42.785704 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:42Z","lastTransitionTime":"2025-11-22T10:38:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:42 crc kubenswrapper[4772]: I1122 10:38:42.791995 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s4mvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d73fd58d-561a-4b16-9f9d-49ae966edb24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ae06ffdc2c0f4cebcbf28f2997026db4d45bcd0c95776e5a1d94527503041d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s4mvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:42Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:42 crc kubenswrapper[4772]: I1122 10:38:42.804112 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"888813e4-14b2-4bbc-badf-3fd7c315a740\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ade6ffb5c4bd3c19a2d85f21de1e0f198d6729b45df79233c8db3c73aff066f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7c687d1a3e98c09917692169294f8549b0f1ddeddcc97c073da4d8e5c17e012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1452c164ce569dfa4665a70113fb965905d1974744637904d6bfba2e35446f11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/
crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b331f41571928038bb597f1e94a67d24e726471c1a22082607dd26c11e8ea33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:42Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:42 crc kubenswrapper[4772]: I1122 10:38:42.821039 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9557fb28-d21d-45a3-b7b9-170936e83f56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a77d132d925d70472ff5456bf4521c0612c319296dd1f3fa31c68d3c84fb347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddce747c72c887bb7dff7503755d81d0e65305d2545f93dcb0d20bd6bf32b495\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187f5633fd42f380d4976965809b538530acef067adc7e02203f95df532dbbfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 10:38:29.060158 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 10:38:29.060179 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 10:38:29.060386 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\"\\\\nI1122 10:38:29.060380 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763807893\\\\\\\\\\\\\\\" (2025-11-22 10:38:12 +0000 UTC to 2025-12-22 10:38:13 +0000 UTC (now=2025-11-22 10:38:29.060343851 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060557 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763807909\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763807908\\\\\\\\\\\\\\\" (2025-11-22 09:38:28 +0000 UTC to 2026-11-22 09:38:28 +0000 UTC (now=2025-11-22 10:38:29.060522446 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060582 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 10:38:29.060604 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 10:38:29.060636 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 10:38:29.062066 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062334 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062385 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 10:38:29.063702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42fc0afe421b18b7212836931c95487807ac0b32f6e87d92e847bd7826aedcd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:42Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:42 crc kubenswrapper[4772]: I1122 10:38:42.836250 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e05750665701a1973e974ba759b2a0ef54a89b24d63d33a9fce59e9aa7a21ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:42Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:42 crc kubenswrapper[4772]: I1122 10:38:42.854750 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:42Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:42 crc kubenswrapper[4772]: I1122 10:38:42.870758 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:42Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:42 crc kubenswrapper[4772]: I1122 10:38:42.888102 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:42 crc kubenswrapper[4772]: I1122 10:38:42.888149 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:42 crc kubenswrapper[4772]: I1122 10:38:42.888161 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:42 crc kubenswrapper[4772]: I1122 10:38:42.888181 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:42 crc kubenswrapper[4772]: I1122 10:38:42.888196 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:42Z","lastTransitionTime":"2025-11-22T10:38:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:42 crc kubenswrapper[4772]: I1122 10:38:42.893330 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2386c238-461f-4956-940f-ac3c26eb052e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b3e223067a57a7ae418e1de80dff3c7537e0506e040028c413225f25397f03d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4247d5ab0095ab6c7315fdfcde34f8407d870723059e61c3417c20f80dc06ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwshd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:42Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:42 crc kubenswrapper[4772]: I1122 10:38:42.906482 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z6xtb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e11c7f86-73db-4015-9fe5-c0b5047c19a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d7a31c58b763dd30aa10f250fa654756dbb10166064c3adf6d313cc13b066f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11d7a31c58b763dd30aa10f250fa654756dbb10166064c3adf6d313cc13b066f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-relea
se\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://533b405c883d98c14aca92e6a23bd1743dc0a76ed8256f2b892746195602fb03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://533b405c883d98c14aca92e6a23bd1743dc0a76ed8256f2b892746195602fb03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c508fde56a767c063afbb65f6272695d739d7d0c594efcafd7465397309ef931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c508fde56a767c063afbb65f6272695d739d7d0c594efcafd7465397309ef931\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dabd0e99ffb72fe38680ba04e6dc2cc342f2b21368c0646ac51924fdca817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":
{\\\"containerID\\\":\\\"cri-o://8dabd0e99ffb72fe38680ba04e6dc2cc342f2b21368c0646ac51924fdca817b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z6xtb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:42Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:42 crc kubenswrapper[4772]: I1122 10:38:42.921663 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af3d235e676f9a66e16a76f30d940c652287b80507d2335b94aaad54d2aae071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b9e216df73a0c6e89da9a01b70ca7393dea61d59aa4237584add3411d3362d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:42Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:42 crc kubenswrapper[4772]: I1122 10:38:42.933287 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e026ddd84cdd62f9fa89b0c173fa464361017d3603d80e2b88a1b04af13487c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:42Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:42 crc kubenswrapper[4772]: I1122 10:38:42.945653 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mbpk7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e39748b-4fa5-4a70-8921-dc3dc814f124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c334585be8f67849986e80c7a7ea777340e93f963ba58fa0fb7b36b16a73142b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m5ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mbpk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:42Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:42 crc kubenswrapper[4772]: I1122 10:38:42.959270 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:42Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:42 crc kubenswrapper[4772]: I1122 10:38:42.985165 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd84e05e-cfd6-46d5-bd23-30689addcd8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://405215cb0867af5a98dd2c0948c12867783df5de69f3e68c0c420e94bd41fbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3372362ced27c1e5e6ed3271a6b0fd19c04ce1d12a6c0b45e60b4f19b9d9d9e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7959ee06497c972cc6915e3ab426dc2bba73e58343902084895fcca45fc97a61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dfec56a475b9928ac7803e64d6fd305028a7f8d66647c8d06eeaa12cb73838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da55e361a3ec7dca7a00a9ccac8e72cf43be0ebb8973fcac40e3bf86dac5237e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc7ab7764e2d893152e965392c916dff7876acb81d134cdec2615b253dbee64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b60f034e3e012ea5ce1371b4bfbbda3cd6a2466e
7e8b8b4106b5fe5f3a14d146\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac96e66a7d3d7899a8196d07614bccdd741145d5dc57c26b30fc4e127c6a94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mfm49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:42Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:42 crc kubenswrapper[4772]: I1122 10:38:42.997543 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:42 crc kubenswrapper[4772]: I1122 10:38:42.997581 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:42 crc kubenswrapper[4772]: I1122 10:38:42.997613 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:42 crc kubenswrapper[4772]: I1122 10:38:42.997630 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:42 crc kubenswrapper[4772]: I1122 10:38:42.997642 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:42Z","lastTransitionTime":"2025-11-22T10:38:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:42 crc kubenswrapper[4772]: I1122 10:38:42.998888 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e05750665701a1973e974ba759b2a0ef54a89b24d63d33a9fce59e9aa7a21ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:42Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.014010 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:43Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.027453 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:43Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.039886 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9qv88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"751566de-1913-4dd6-9054-21febc661c27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902be5c4bb0f9aaca4a21ca0af13b2dc29d682c30e1e2d281beb6e4e15b6aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jckg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9qv88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:43Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.054037 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s4mvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d73fd58d-561a-4b16-9f9d-49ae966edb24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ae06ffdc2c0f4cebcbf28f2997026db4d45bcd0c95776e5a1d94527503041d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s4mvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:43Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.071825 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"888813e4-14b2-4bbc-badf-3fd7c315a740\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ade6ffb5c4bd3c19a2d85f21de1e0f198d6729b45df79233c8db3c73aff066f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7c687d1a3e98c09917692169294f8549b0f1ddeddcc97c073da4d8e5c17e012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1452c164ce569dfa4665a70113fb965905d1974744637904d6bfba2e35446f11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha
256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b331f41571928038bb597f1e94a67d24e726471c1a22082607dd26c11e8ea33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:43Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.091419 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9557fb28-d21d-45a3-b7b9-170936e83f56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a77d132d925d70472ff5456bf4521c0612c319296dd1f3fa31c68d3c84fb347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddce747c72c887bb7dff7503755d81d0e65305d2545f93dcb0d20bd6bf32b495\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187f5633fd42f380d4976965809b538530acef067adc7e02203f95df532dbbfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 10:38:29.060158 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 10:38:29.060179 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 10:38:29.060386 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\"\\\\nI1122 10:38:29.060380 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763807893\\\\\\\\\\\\\\\" (2025-11-22 10:38:12 +0000 UTC to 2025-12-22 10:38:13 +0000 UTC (now=2025-11-22 10:38:29.060343851 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060557 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763807909\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763807908\\\\\\\\\\\\\\\" (2025-11-22 09:38:28 +0000 UTC to 2026-11-22 09:38:28 +0000 UTC (now=2025-11-22 10:38:29.060522446 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060582 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 10:38:29.060604 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 10:38:29.060636 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 10:38:29.062066 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062334 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062385 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 10:38:29.063702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42fc0afe421b18b7212836931c95487807ac0b32f6e87d92e847bd7826aedcd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:43Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.102588 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.102613 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.102621 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.102753 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.102767 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:43Z","lastTransitionTime":"2025-11-22T10:38:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.103263 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2386c238-461f-4956-940f-ac3c26eb052e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b3e223067a57a7ae418e1de80dff3c7537e0506e040028c413225f25397f03d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4247d5ab0095ab6c7315fdfcde34f8407d870723059e61c3417c20f80dc06ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwshd\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:43Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.120599 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z6xtb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e11c7f86-73db-4015-9fe5-c0b5047c19a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d7a31c58b763dd30aa10f250fa654756dbb10166064c3adf6d313cc13b066f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11d7a31c58b763dd30aa10f250fa654756dbb10166064c3adf6d313cc13b066f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://533b405c883d98c14aca92e6a23bd1743dc0a76ed8256f2b892746195602fb03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://533b405c883d98c14aca92e6a23bd1743dc0a76ed8256f2b892746195602fb03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c508fde56a767c063afbb65f6272695d739d7d0c594efcafd7465397309ef931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c508fde56a767c063afbb65f6272695d739d7d0c594efcafd7465397309ef931\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dabd0e99ffb72fe38680ba04e6dc2cc342f2b21368c0646ac51924fdca817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-
cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dabd0e99ffb72fe38680ba04e6dc2cc342f2b21368c0646ac51924fdca817b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z6xtb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:43Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.136016 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af3d235e676f9a66e16a76f30d940c652287b80507d2335b94aaad54d2aae071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b9e216df73a0c6e89da9a01b70ca7393dea61d59aa4237584add3411d3362d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:43Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.206599 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.206908 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.206989 4772 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.207090 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.207201 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:43Z","lastTransitionTime":"2025-11-22T10:38:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.310183 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.310281 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.310301 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.310336 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.310358 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:43Z","lastTransitionTime":"2025-11-22T10:38:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.412562 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:38:43 crc kubenswrapper[4772]: E1122 10:38:43.412733 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.413227 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 10:38:43 crc kubenswrapper[4772]: E1122 10:38:43.413321 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.418277 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.418335 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.418353 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.418376 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.418420 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:43Z","lastTransitionTime":"2025-11-22T10:38:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.446000 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n8qfx"] Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.447075 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n8qfx" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.463611 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.464174 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.483320 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af3d235e676f9a66e16a76f30d940c652287b80507d2335b94aaad54d2aae071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b9e216df73a0c6e89da9a01b70ca7393dea61d59aa4237584add3411d3362d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:43Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.515763 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e026ddd84cdd62f9fa89b0c173fa464361017d3603d80e2b88a1b04af13487c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:43Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.528035 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98gmk\" (UniqueName: \"kubernetes.io/projected/dd0565ed-eb43-43a3-974c-45a23e9615a9-kube-api-access-98gmk\") pod \"ovnkube-control-plane-749d76644c-n8qfx\" (UID: \"dd0565ed-eb43-43a3-974c-45a23e9615a9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n8qfx" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.528092 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dd0565ed-eb43-43a3-974c-45a23e9615a9-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-n8qfx\" (UID: \"dd0565ed-eb43-43a3-974c-45a23e9615a9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n8qfx" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.528128 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dd0565ed-eb43-43a3-974c-45a23e9615a9-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-n8qfx\" (UID: \"dd0565ed-eb43-43a3-974c-45a23e9615a9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n8qfx" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.528151 4772 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dd0565ed-eb43-43a3-974c-45a23e9615a9-env-overrides\") pod \"ovnkube-control-plane-749d76644c-n8qfx\" (UID: \"dd0565ed-eb43-43a3-974c-45a23e9615a9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n8qfx" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.531533 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.531593 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.531606 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.531626 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.531635 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:43Z","lastTransitionTime":"2025-11-22T10:38:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.535152 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n8qfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0565ed-eb43-43a3-974c-45a23e9615a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98gmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98gmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-n8qfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:43Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.548087 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:43Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.568810 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd84e05e-cfd6-46d5-bd23-30689addcd8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://405215cb0867af5a98dd2c0948c12867783df5de69f3e68c0c420e94bd41fbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3372362ced27c1e5e6ed3271a6b0fd19c04ce1d12a6c0b45e60b4f19b9d9d9e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7959ee06497c972cc6915e3ab426dc2bba73e58343902084895fcca45fc97a61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dfec56a475b9928ac7803e64d6fd305028a7f8d66647c8d06eeaa12cb73838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da55e361a3ec7dca7a00a9ccac8e72cf43be0ebb8973fcac40e3bf86dac5237e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc7ab7764e2d893152e965392c916dff7876acb81d134cdec2615b253dbee64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b60f034e3e012ea5ce1371b4bfbbda3cd6a2466e
7e8b8b4106b5fe5f3a14d146\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac96e66a7d3d7899a8196d07614bccdd741145d5dc57c26b30fc4e127c6a94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mfm49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:43Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.582033 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mbpk7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e39748b-4fa5-4a70-8921-dc3dc814f124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c334585be8f67849986e80c7a7ea777340e93f963ba58fa0fb7b36b16a73142b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m5ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mbpk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:43Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.597341 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:43Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.612145 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:43Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.624982 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9qv88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"751566de-1913-4dd6-9054-21febc661c27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902be5c4bb0f9aaca4a21ca0af13b2dc29d682c30e1e2d281beb6e4e15b6aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jckg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9qv88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:43Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.628527 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dd0565ed-eb43-43a3-974c-45a23e9615a9-env-overrides\") pod \"ovnkube-control-plane-749d76644c-n8qfx\" (UID: \"dd0565ed-eb43-43a3-974c-45a23e9615a9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n8qfx" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.628592 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98gmk\" (UniqueName: \"kubernetes.io/projected/dd0565ed-eb43-43a3-974c-45a23e9615a9-kube-api-access-98gmk\") pod \"ovnkube-control-plane-749d76644c-n8qfx\" (UID: \"dd0565ed-eb43-43a3-974c-45a23e9615a9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n8qfx" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.628616 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dd0565ed-eb43-43a3-974c-45a23e9615a9-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-n8qfx\" (UID: \"dd0565ed-eb43-43a3-974c-45a23e9615a9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n8qfx" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.628643 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dd0565ed-eb43-43a3-974c-45a23e9615a9-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-n8qfx\" (UID: \"dd0565ed-eb43-43a3-974c-45a23e9615a9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n8qfx" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.629321 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dd0565ed-eb43-43a3-974c-45a23e9615a9-env-overrides\") pod \"ovnkube-control-plane-749d76644c-n8qfx\" (UID: \"dd0565ed-eb43-43a3-974c-45a23e9615a9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n8qfx" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.629444 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dd0565ed-eb43-43a3-974c-45a23e9615a9-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-n8qfx\" (UID: \"dd0565ed-eb43-43a3-974c-45a23e9615a9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n8qfx" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.633641 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dd0565ed-eb43-43a3-974c-45a23e9615a9-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-n8qfx\" (UID: \"dd0565ed-eb43-43a3-974c-45a23e9615a9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n8qfx" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.634439 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.634471 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.634480 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.634495 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.634505 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:43Z","lastTransitionTime":"2025-11-22T10:38:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.642805 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s4mvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d73fd58d-561a-4b16-9f9d-49ae966edb24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ae06ffdc2c0f4cebcbf28f2997026db4d45bcd0c95776e5a1d94527503041d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\
"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s4mvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:43Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.647287 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98gmk\" (UniqueName: \"kubernetes.io/projected/dd0565ed-eb43-43a3-974c-45a23e9615a9-kube-api-access-98gmk\") pod \"ovnkube-control-plane-749d76644c-n8qfx\" (UID: \"dd0565ed-eb43-43a3-974c-45a23e9615a9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n8qfx" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.656647 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"888813e4-14b2-4bbc-badf-3fd7c315a740\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ade6ffb5c4bd3c19a2d85f21de1e0f198d6729b45df79233c8db3c73aff066f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7c687d1a3e98c09917692169294f8549b0f1ddeddcc97c073da4d8e5c17e012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1452c164ce569dfa4665a70113fb965905d1974744637904d6bfba2e35446f11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b331f41571928038bb597f1e94a67d24e726471c1a22082607dd26c11e8ea33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:43Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.673728 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9557fb28-d21d-45a3-b7b9-170936e83f56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a77d132d925d70472ff5456bf4521c0612c319296dd1f3fa31c68d3c84fb347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddce747c72c887bb7dff7503755d81d0e65305d2545f93dcb0d20bd6bf32b495\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187f5633fd42f380d4976965809b538530acef067adc7e02203f95df532dbbfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 10:38:29.060158 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 10:38:29.060179 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 10:38:29.060386 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\"\\\\nI1122 10:38:29.060380 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763807893\\\\\\\\\\\\\\\" (2025-11-22 10:38:12 +0000 UTC to 2025-12-22 10:38:13 +0000 UTC (now=2025-11-22 10:38:29.060343851 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060557 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763807909\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763807908\\\\\\\\\\\\\\\" (2025-11-22 09:38:28 +0000 UTC to 2026-11-22 09:38:28 +0000 UTC (now=2025-11-22 10:38:29.060522446 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060582 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 10:38:29.060604 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 10:38:29.060636 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 10:38:29.062066 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062334 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062385 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 10:38:29.063702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42fc0afe421b18b7212836931c95487807ac0b32f6e87d92e847bd7826aedcd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:43Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.688103 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e05750665701a1973e974ba759b2a0ef54a89b24d63d33a9fce59e9aa7a21ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:43Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.701072 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2386c238-461f-4956-940f-ac3c26eb052e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b3e223067a57a7ae418e1de80dff3c7537e0506e040028c413225f25397f03d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4247d5ab0095ab6c7315fdfcde34f8407d870723059e61c3417c20f80dc06ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwshd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:43Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.705313 4772 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-multus/multus-additional-cni-plugins-z6xtb" event={"ID":"e11c7f86-73db-4015-9fe5-c0b5047c19a0","Type":"ContainerStarted","Data":"cfe1b37351de69494e5f42516d1363a1acd97912fd754ddca224b61b7020b0e2"} Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.705916 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.723932 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z6xtb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e11c7f86-73db-4015-9fe5-c0b5047c19a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d7a31c58b763dd30aa10f250fa654756dbb10166064c3adf6d313cc13b066f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11d7a31c58b763dd30aa10f250fa654756dbb10166064c3adf6d313cc13b066f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://533b405c883d98c14aca92e6a23bd1743dc0a76ed8256f2b892746195602fb03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://533b405c883d98c14aca92e6a23bd1743dc0a76ed8256f2b892746195602fb03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c508fde56a767c063afbb65f6272695d739d7d0c594efcafd7465397309ef931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c508fde56a767c063afbb65f6272695d739d7d0c594efcafd7465397309ef931\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dabd0e99ffb72fe38680ba04e6dc2cc342f2b21368c0646ac51924fdca817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dabd0e99ffb72fe38680ba04e6dc2cc342f2b21368c0646ac51924fdca817b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z6xtb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:43Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.737879 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.737932 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.737945 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.737968 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.737980 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:43Z","lastTransitionTime":"2025-11-22T10:38:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.741848 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:43Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.761384 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd84e05e-cfd6-46d5-bd23-30689addcd8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://405215cb0867af5a98dd2c0948c12867783df5de69f3e68c0c420e94bd41fbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3372362ced27c1e5e6ed3271a6b0fd19c04ce1d12a6c0b45e60b4f19b9d9d9e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7959ee06497c972cc6915e3ab426dc2bba73e58343902084895fcca45fc97a61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dfec56a475b9928ac7803e64d6fd305028a7f8d66647c8d06eeaa12cb73838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da55e361a3ec7dca7a00a9ccac8e72cf43be0ebb8973fcac40e3bf86dac5237e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc7ab7764e2d893152e965392c916dff7876acb81d134cdec2615b253dbee64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-sock
et\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b60f034e3e012ea5ce1371b4bfbbda3cd6a2466e7e8b8b4106b5fe5f3a14d146\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac96e66a7d3d7899a8196d07614bccdd741145d5dc57c26b30fc4e127c6a94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mfm49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:43Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.763513 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n8qfx" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.769680 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.776941 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mbpk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e39748b-4fa5-4a70-8921-dc3dc814f124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c334585be8f67849986e80c7a7ea777340e93f963ba58fa0fb7b36b16a73142b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m5ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mbpk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:43Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:43 crc kubenswrapper[4772]: W1122 10:38:43.781966 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd0565ed_eb43_43a3_974c_45a23e9615a9.slice/crio-78eccac046e33270f9e8ca3bbe49e9154df1253eef5a3535283b2686af3ad8c4 WatchSource:0}: Error finding container 78eccac046e33270f9e8ca3bbe49e9154df1253eef5a3535283b2686af3ad8c4: Status 404 returned error can't find the container with id 78eccac046e33270f9e8ca3bbe49e9154df1253eef5a3535283b2686af3ad8c4 
Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.790216 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e05750665701a1973e974ba759b2a0ef54a89b24d63d33a9fce59e9aa7a21ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:43Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.806501 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:43Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.821483 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:43Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.834600 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9qv88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"751566de-1913-4dd6-9054-21febc661c27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902be5c4bb0f9aaca4a21ca0af13b2dc29d682c30e1e2d281beb6e4e15b6aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jckg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9qv88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:43Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.841410 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.841465 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.841481 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.841506 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.841525 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:43Z","lastTransitionTime":"2025-11-22T10:38:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.854538 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s4mvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d73fd58d-561a-4b16-9f9d-49ae966edb24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ae06ffdc2c0f4cebcbf28f2997026db4d45bcd0c95776e5a1d94527503041d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"moun
tPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s4mvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:43Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.866799 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"888813e4-14b2-4bbc-badf-3fd7c315a740\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ade6ffb5c4bd3c19a2d85f21de1e0f198d6729b45df79233c8db3c73aff066f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7c687d1a3e98c09917692169294f8549b0f1ddeddcc97c073da4d8e5c17e012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1452c164ce569dfa4665a70113fb965905d1974744637904d6bfba2e35446f11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b331f41571928038bb597f1e94a67d24e726471c1a22082607dd26c11e8ea33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:43Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.885331 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9557fb28-d21d-45a3-b7b9-170936e83f56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a77d132d925d70472ff5456bf4521c0612c319296dd1f3fa31c68d3c84fb347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddce747c72c887bb7dff7503755d81d0e65305d2545f93dcb0d20bd6bf32b495\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187f5633fd42f380d4976965809b538530acef067adc7e02203f95df532dbbfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 10:38:29.060158 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 10:38:29.060179 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 10:38:29.060386 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\"\\\\nI1122 10:38:29.060380 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763807893\\\\\\\\\\\\\\\" (2025-11-22 10:38:12 +0000 UTC to 2025-12-22 10:38:13 +0000 UTC (now=2025-11-22 10:38:29.060343851 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060557 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763807909\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763807908\\\\\\\\\\\\\\\" (2025-11-22 09:38:28 +0000 UTC to 2026-11-22 09:38:28 +0000 UTC (now=2025-11-22 10:38:29.060522446 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060582 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 10:38:29.060604 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 10:38:29.060636 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 10:38:29.062066 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062334 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062385 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 10:38:29.063702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42fc0afe421b18b7212836931c95487807ac0b32f6e87d92e847bd7826aedcd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:43Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.900181 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2386c238-461f-4956-940f-ac3c26eb052e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b3e223067a57a7ae418e1de80dff3c7537e0506e040028c413225f25397f03d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4247d5ab0095ab6c7315fdfcde34f8407d870723059e61c3417c20f80dc06ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwshd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:43Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.921356 4772 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z6xtb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e11c7f86-73db-4015-9fe5-c0b5047c19a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d7a31c58b763dd30aa10f250fa654756dbb10166064c3adf6d313cc13b066f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11d7a31c58b763dd30aa10f250fa654756dbb10166064c3adf6d313cc13b066f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://533b405
c883d98c14aca92e6a23bd1743dc0a76ed8256f2b892746195602fb03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://533b405c883d98c14aca92e6a23bd1743dc0a76ed8256f2b892746195602fb03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c508fde56a767c063afbb65f6272695d739d7d0c594efcafd7465397309ef931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c508fde56a767c063afbb65f6272695d739d7d0c594efcafd7465397309ef931\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dabd0e99ffb72fe38680ba04e6dc2cc342f2b21368c0646ac51924fdca817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dabd0e99ffb72fe38680ba04e6dc2cc342f2b21368c0646ac51924fdca817b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfe1b37351de69494e5f42516d1363a1acd97912fd754ddca224b61b7020b0e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z6xtb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:43Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.938856 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af3d235e676f9a66e16a76f30d940c652287b80507d2335b94aaad54d2aae071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b9e216df73a0c6e89da9a01b70ca7393dea61d59aa4237584add3411d3362d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:43Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.944880 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.944930 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.944942 4772 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.951670 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.951714 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:43Z","lastTransitionTime":"2025-11-22T10:38:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.951975 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n8qfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0565ed-eb43-43a3-974c-45a23e9615a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98gmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98gmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-n8qfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:43Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.967243 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e026ddd84cdd62f9fa89b0c173fa464361017d3603d80e2b88a1b04af13487c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:43Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:43 crc kubenswrapper[4772]: I1122 10:38:43.990295 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:43Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.013590 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd84e05e-cfd6-46d5-bd23-30689addcd8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://405215cb0867af5a98dd2c0948c12867783df5de69f3e68c0c420e94bd41fbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3372362ced27c1e5e6ed3271a6b0fd19c04ce1d12a6c0b45e60b4f19b9d9d9e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7959ee06497c972cc6915e3ab426dc2bba73e58343902084895fcca45fc97a61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dfec56a475b9928ac7803e64d6fd305028a7f8d66647c8d06eeaa12cb73838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da55e361a3ec7dca7a00a9ccac8e72cf43be0ebb8973fcac40e3bf86dac5237e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc7ab7764e2d893152e965392c916dff7876acb81d134cdec2615b253dbee64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b60f034e3e012ea5ce1371b4bfbbda3cd6a2466e
7e8b8b4106b5fe5f3a14d146\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac96e66a7d3d7899a8196d07614bccdd741145d5dc57c26b30fc4e127c6a94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mfm49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:44Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.023947 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mbpk7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e39748b-4fa5-4a70-8921-dc3dc814f124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c334585be8f67849986e80c7a7ea777340e93f963ba58fa0fb7b36b16a73142b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m5ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mbpk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:44Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.035672 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"888813e4-14b2-4bbc-badf-3fd7c315a740\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ade6ffb5c4bd3c19a2d85f21de1e0f198d6729b45df79233c8db3c73aff066f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7c687d1a3e98c09917692169294f8549b0f1ddeddcc97c073da4d8e5c17e012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1452c164ce569dfa4665a70113fb965905d1974744637904d6bfba2e35446f11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b331f41571928038bb597f1e94a67d24e726471c1a22082607dd26c11e8ea33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:44Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.050814 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9557fb28-d21d-45a3-b7b9-170936e83f56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a77d132d925d70472ff5456bf4521c0612c319296dd1f3fa31c68d3c84fb347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddce747c72c887bb7dff7503755d81d0e65305d2545f93dcb0d20bd6bf32b495\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187f5633fd42f380d4976965809b538530acef067adc7e02203f95df532dbbfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 10:38:29.060158 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 10:38:29.060179 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 10:38:29.060386 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\"\\\\nI1122 10:38:29.060380 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763807893\\\\\\\\\\\\\\\" (2025-11-22 10:38:12 +0000 UTC to 2025-12-22 10:38:13 +0000 UTC (now=2025-11-22 10:38:29.060343851 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060557 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763807909\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763807908\\\\\\\\\\\\\\\" (2025-11-22 09:38:28 +0000 UTC to 2026-11-22 09:38:28 +0000 UTC (now=2025-11-22 10:38:29.060522446 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060582 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 10:38:29.060604 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 10:38:29.060636 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 10:38:29.062066 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062334 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062385 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 10:38:29.063702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42fc0afe421b18b7212836931c95487807ac0b32f6e87d92e847bd7826aedcd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:44Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.055143 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.055172 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.055182 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.055199 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.055211 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:44Z","lastTransitionTime":"2025-11-22T10:38:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.065121 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e05750665701a1973e974ba759b2a0ef54a89b24d63d33a9fce59e9aa7a21ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:44Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.076010 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:44Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.087143 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:44Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.096019 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9qv88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"751566de-1913-4dd6-9054-21febc661c27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902be5c4bb0f9aaca4a21ca0af13b2dc29d682c30e1e2d281beb6e4e15b6aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jckg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9qv88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:44Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.106156 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s4mvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d73fd58d-561a-4b16-9f9d-49ae966edb24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ae06ffdc2c0f4cebcbf28f2997026db4d45bcd0c95776e5a1d94527503041d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s4mvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:44Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.115786 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2386c238-461f-4956-940f-ac3c26eb052e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b3e223067a57a7ae418e1de80dff3c7537e0506e040028c413225f25397f03d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4247d5ab0095ab6c7315fdfcde34f8407d870723059e61c3417c20f80dc06ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwshd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:44Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.129451 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z6xtb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e11c7f86-73db-4015-9fe5-c0b5047c19a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d7a31c58b763dd30aa10f250fa654756dbb10166064c3adf6d313cc13b066f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11d7a31c58b763dd30aa10f250fa654756dbb10166064c3adf6d313cc13b066f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://533b405c883d98c14aca92e6a23bd1743dc0a76ed8256f2b892746195602fb03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://533b405c883d98c14aca92e6a23bd1743dc0a76ed8256f2b892746195602fb03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c508fde56a767c063afbb65f6272695d739d7d0c594efcafd7465397309ef931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c508fde56a767c063afbb65f6272695d739d7d0c594efcafd7465397309ef931\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dabd0e99ffb72fe38680ba04e6dc2cc342f2b21368c0646ac51924fdca817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dabd0e99ffb72fe38680ba04e6dc2cc342f2b21368c0646ac51924fdca817b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfe1b37351de69494e5f42516d1363a1acd97912fd754ddca224b61b7020b0e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z6xtb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:44Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.141135 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af3d235e676f9a66e16a76f30d940c652287b80507d2335b94aaad54d2aae071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b9e216df73a0c6e89da9a01b70ca7393dea61d59aa4237584add3411d3362d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d77
3257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:44Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.151231 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e026ddd84cdd62f9fa89b0c173fa464361017d3603d80e2b88a1b04af13487c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:44Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:44 crc 
kubenswrapper[4772]: I1122 10:38:44.157622 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.157666 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.157678 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.157694 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.157707 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:44Z","lastTransitionTime":"2025-11-22T10:38:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.161767 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n8qfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0565ed-eb43-43a3-974c-45a23e9615a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98gmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98gmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-n8qfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:44Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.259968 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.260003 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.260012 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.260027 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.260037 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:44Z","lastTransitionTime":"2025-11-22T10:38:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.367959 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.368010 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.368023 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.368055 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.368069 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:44Z","lastTransitionTime":"2025-11-22T10:38:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.413490 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 10:38:44 crc kubenswrapper[4772]: E1122 10:38:44.413657 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.470153 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.470204 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.470213 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.470227 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.470236 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:44Z","lastTransitionTime":"2025-11-22T10:38:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.573292 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.573589 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.573655 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.573717 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.573775 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:44Z","lastTransitionTime":"2025-11-22T10:38:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.580951 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-fvsrl"] Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.581399 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvsrl" Nov 22 10:38:44 crc kubenswrapper[4772]: E1122 10:38:44.581468 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fvsrl" podUID="c89edce7-fac8-4954-b2e9-420f0f2de6a8" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.595950 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af3d235e676f9a66e16a76f30d940c652287b80507d2335b94aaad54d2aae071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b9e216df73a0c6e89da9a01b70ca7393dea61d59aa4237584add3411d3362d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:44Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.610013 4772 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e026ddd84cdd62f9fa89b0c173fa464361017d3603d80e2b88a1b04af13487c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:44Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.619737 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n8qfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0565ed-eb43-43a3-974c-45a23e9615a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98gmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98gmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-n8qfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:44Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.630510 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mbpk7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e39748b-4fa5-4a70-8921-dc3dc814f124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c334585be8f67849986e80c7a7ea777340e93f963ba58fa0fb7b36b16a73142b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m5ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mbpk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:44Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.639031 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c89edce7-fac8-4954-b2e9-420f0f2de6a8-metrics-certs\") pod \"network-metrics-daemon-fvsrl\" (UID: \"c89edce7-fac8-4954-b2e9-420f0f2de6a8\") " pod="openshift-multus/network-metrics-daemon-fvsrl" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.639237 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x76mr\" (UniqueName: \"kubernetes.io/projected/c89edce7-fac8-4954-b2e9-420f0f2de6a8-kube-api-access-x76mr\") pod \"network-metrics-daemon-fvsrl\" (UID: \"c89edce7-fac8-4954-b2e9-420f0f2de6a8\") " pod="openshift-multus/network-metrics-daemon-fvsrl" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.640495 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fvsrl" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c89edce7-fac8-4954-b2e9-420f0f2de6a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x76mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x76mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fvsrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:44Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.654542 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:44Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.672158 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd84e05e-cfd6-46d5-bd23-30689addcd8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://405215cb0867af5a98dd2c0948c12867783df5de69f3e68c0c420e94bd41fbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3372362ced27c1e5e6ed3271a6b0fd19c04ce1d12a6c0b45e60b4f19b9d9d9e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7959ee06497c972cc6915e3ab426dc2bba73e58343902084895fcca45fc97a61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dfec56a475b9928ac7803e64d6fd305028a7f8d66647c8d06eeaa12cb73838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da55e361a3ec7dca7a00a9ccac8e72cf43be0ebb8973fcac40e3bf86dac5237e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc7ab7764e2d893152e965392c916dff7876acb81d134cdec2615b253dbee64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b60f034e3e012ea5ce1371b4bfbbda3cd6a2466e
7e8b8b4106b5fe5f3a14d146\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac96e66a7d3d7899a8196d07614bccdd741145d5dc57c26b30fc4e127c6a94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mfm49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:44Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.675874 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.675912 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.675920 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.675933 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.675943 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:44Z","lastTransitionTime":"2025-11-22T10:38:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.684016 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e05750665701a1973e974ba759b2a0ef54a89b24d63d33a9fce59e9aa7a21ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:44Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.694758 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:44Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.710590 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:44Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.711659 4772 generic.go:334] "Generic (PLEG): container finished" podID="e11c7f86-73db-4015-9fe5-c0b5047c19a0" containerID="cfe1b37351de69494e5f42516d1363a1acd97912fd754ddca224b61b7020b0e2" exitCode=0 Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.711884 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z6xtb" event={"ID":"e11c7f86-73db-4015-9fe5-c0b5047c19a0","Type":"ContainerDied","Data":"cfe1b37351de69494e5f42516d1363a1acd97912fd754ddca224b61b7020b0e2"} Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.715030 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n8qfx" event={"ID":"dd0565ed-eb43-43a3-974c-45a23e9615a9","Type":"ContainerStarted","Data":"a8a2e02f93671de603bc005b4f7561a62bb7681d678a3348b837656ae2af54f3"} Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.715101 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n8qfx" event={"ID":"dd0565ed-eb43-43a3-974c-45a23e9615a9","Type":"ContainerStarted","Data":"73a731bf43107a62df11bd3a033a52f25d32911218822f9c5ac1f3d9b6f718ee"} Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.715118 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n8qfx" event={"ID":"dd0565ed-eb43-43a3-974c-45a23e9615a9","Type":"ContainerStarted","Data":"78eccac046e33270f9e8ca3bbe49e9154df1253eef5a3535283b2686af3ad8c4"} Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.731288 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9qv88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"751566de-1913-4dd6-9054-21febc661c27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902be5c4bb0f9aaca4a21ca0af13b2dc29d682c30e1e2d281beb6e4e15b6aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jckg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9qv88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:44Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.740656 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x76mr\" (UniqueName: \"kubernetes.io/projected/c89edce7-fac8-4954-b2e9-420f0f2de6a8-kube-api-access-x76mr\") pod \"network-metrics-daemon-fvsrl\" (UID: \"c89edce7-fac8-4954-b2e9-420f0f2de6a8\") " pod="openshift-multus/network-metrics-daemon-fvsrl" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.740738 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c89edce7-fac8-4954-b2e9-420f0f2de6a8-metrics-certs\") pod \"network-metrics-daemon-fvsrl\" (UID: \"c89edce7-fac8-4954-b2e9-420f0f2de6a8\") " pod="openshift-multus/network-metrics-daemon-fvsrl" Nov 22 10:38:44 crc kubenswrapper[4772]: E1122 10:38:44.740878 4772 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 10:38:44 crc kubenswrapper[4772]: E1122 10:38:44.740945 4772 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c89edce7-fac8-4954-b2e9-420f0f2de6a8-metrics-certs podName:c89edce7-fac8-4954-b2e9-420f0f2de6a8 nodeName:}" failed. No retries permitted until 2025-11-22 10:38:45.240928568 +0000 UTC m=+45.480373062 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c89edce7-fac8-4954-b2e9-420f0f2de6a8-metrics-certs") pod "network-metrics-daemon-fvsrl" (UID: "c89edce7-fac8-4954-b2e9-420f0f2de6a8") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.750020 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s4mvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d73fd58d-561a-4b16-9f9d-49ae966edb24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ae06ffdc2c0f4cebcbf28f2997026db4d45bcd0c95776e5a1d94527503041d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s4mvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:44Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.759951 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x76mr\" (UniqueName: \"kubernetes.io/projected/c89edce7-fac8-4954-b2e9-420f0f2de6a8-kube-api-access-x76mr\") pod \"network-metrics-daemon-fvsrl\" (UID: \"c89edce7-fac8-4954-b2e9-420f0f2de6a8\") " pod="openshift-multus/network-metrics-daemon-fvsrl" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.765104 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"888813e4-14b2-4bbc-badf-3fd7c315a740\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ade6ffb5c4bd3c19a2d85f21de1e0f198d6729b45df79233c8db3c73aff066f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7c687d1a3e98c09917692169294f8549b0f1ddeddcc97c073da4d8e5c17e012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220
d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1452c164ce569dfa4665a70113fb965905d1974744637904d6bfba2e35446f11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b331f41571928038bb597f1e94a67d24e726471c1a22082607dd26c11e8ea33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:44Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.781357 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.781385 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.781394 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:44 
crc kubenswrapper[4772]: I1122 10:38:44.781407 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.781415 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:44Z","lastTransitionTime":"2025-11-22T10:38:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.785032 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9557fb28-d21d-45a3-b7b9-170936e83f56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a77d132d925d70472ff5456bf4521c0612c319296dd1f3fa31c68d3c84fb347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddce747c72c887bb7dff7503755d81d0e65305d2545f93dcb0d20bd6bf32b495\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:07Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187f5633fd42f380d4976965809b538530acef067adc7e02203f95df532dbbfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 10:38:29.060158 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 10:38:29.060179 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 10:38:29.060386 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\"\\\\nI1122 10:38:29.060380 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763807893\\\\\\\\\\\\\\\" (2025-11-22 10:38:12 +0000 UTC to 2025-12-22 10:38:13 +0000 UTC (now=2025-11-22 10:38:29.060343851 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060557 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763807909\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763807908\\\\\\\\\\\\\\\" (2025-11-22 09:38:28 +0000 UTC to 2026-11-22 09:38:28 +0000 UTC (now=2025-11-22 10:38:29.060522446 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060582 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 10:38:29.060604 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 10:38:29.060636 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 10:38:29.062066 1 reflector.go:368] Caches populated for *v1.ConfigMap from 
k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062334 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062385 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 10:38:29.063702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42fc0afe421b18b7212836931c95487807ac0b32f6e87d92e847bd7826aedcd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:44Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.799591 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2386c238-461f-4956-940f-ac3c26eb052e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b3e223067a57a7ae418e1de80dff3c7537e0506e040028c413225f25397f03d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4247d5ab0095ab6c7315fdfcde34f8407d870723059e61c3417c20f80dc06ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwshd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:44Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.813661 4772 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z6xtb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e11c7f86-73db-4015-9fe5-c0b5047c19a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d7a31c58b763dd30aa10f250fa654756dbb10166064c3adf6d313cc13b066f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11d7a31c58b763dd30aa10f250fa654756dbb10166064c3adf6d313cc13b066f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://533b405
c883d98c14aca92e6a23bd1743dc0a76ed8256f2b892746195602fb03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://533b405c883d98c14aca92e6a23bd1743dc0a76ed8256f2b892746195602fb03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c508fde56a767c063afbb65f6272695d739d7d0c594efcafd7465397309ef931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c508fde56a767c063afbb65f6272695d739d7d0c594efcafd7465397309ef931\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dabd0e99ffb72fe38680ba04e6dc2cc342f2b21368c0646ac51924fdca817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dabd0e99ffb72fe38680ba04e6dc2cc342f2b21368c0646ac51924fdca817b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfe1b37351de69494e5f42516d1363a1acd97912fd754ddca224b61b7020b0e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z6xtb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:44Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.826617 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:44Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.845927 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd84e05e-cfd6-46d5-bd23-30689addcd8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://405215cb0867af5a98dd2c0948c12867783df5de69f3e68c0c420e94bd41fbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3372362ced27c1e5e6ed3271a6b0fd19c04ce1d12a6c0b45e60b4f19b9d9d9e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7959ee06497c972cc6915e3ab426dc2bba73e58343902084895fcca45fc97a61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dfec56a475b9928ac7803e64d6fd305028a7f8d66647c8d06eeaa12cb73838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da55e361a3ec7dca7a00a9ccac8e72cf43be0ebb8973fcac40e3bf86dac5237e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc7ab7764e2d893152e965392c916dff7876acb81d134cdec2615b253dbee64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b60f034e3e012ea5ce1371b4bfbbda3cd6a2466e
7e8b8b4106b5fe5f3a14d146\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac96e66a7d3d7899a8196d07614bccdd741145d5dc57c26b30fc4e127c6a94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mfm49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:44Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.856874 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mbpk7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e39748b-4fa5-4a70-8921-dc3dc814f124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c334585be8f67849986e80c7a7ea777340e93f963ba58fa0fb7b36b16a73142b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m5ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mbpk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:44Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.869502 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fvsrl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c89edce7-fac8-4954-b2e9-420f0f2de6a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x76mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x76mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fvsrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:44Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.881581 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:44Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.883131 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.883169 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.883179 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.883194 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.883203 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:44Z","lastTransitionTime":"2025-11-22T10:38:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.890986 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9qv88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"751566de-1913-4dd6-9054-21febc661c27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902be5c4bb0f9aaca4a21ca0af13b2dc29d682c30e1e2d281beb6e4e15b6aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jckg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9qv88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:44Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.900847 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s4mvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d73fd58d-561a-4b16-9f9d-49ae966edb24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ae06ffdc2c0f4cebcbf28f2997026db4d45bcd0c95776e5a1d94527503041d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s4mvm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:44Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.910903 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"888813e4-14b2-4bbc-badf-3fd7c315a740\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ade6ffb5c4bd3c19a2d85f21de1e0f198d6729b45df79233c8db3c73aff066f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7c687d1a3e98c09917692169294f8549b0f1ddeddcc97c073da4d8e5c17e012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1452c164ce569dfa4665a70113fb965905d1974744637904d6bfba2e35446f11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b331f41571928038bb597f1e94a67d24e726471c1a22082607dd26c11e8ea33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:44Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.923159 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9557fb28-d21d-45a3-b7b9-170936e83f56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a77d132d925d70472ff5456bf4521c0612c319296dd1f3fa31c68d3c84fb347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddce747c72c887bb7dff7503755d81d0e65305d2545f93dcb0d20bd6bf32b495\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187f5633fd42f380d4976965809b538530acef067adc7e02203f95df532dbbfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 10:38:29.060158 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 10:38:29.060179 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 10:38:29.060386 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\"\\\\nI1122 10:38:29.060380 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763807893\\\\\\\\\\\\\\\" (2025-11-22 10:38:12 +0000 UTC to 2025-12-22 10:38:13 +0000 UTC (now=2025-11-22 10:38:29.060343851 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060557 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763807909\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763807908\\\\\\\\\\\\\\\" (2025-11-22 09:38:28 +0000 UTC to 2026-11-22 09:38:28 +0000 UTC (now=2025-11-22 10:38:29.060522446 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060582 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 10:38:29.060604 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 10:38:29.060636 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 10:38:29.062066 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062334 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062385 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 10:38:29.063702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42fc0afe421b18b7212836931c95487807ac0b32f6e87d92e847bd7826aedcd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:44Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.934585 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e05750665701a1973e974ba759b2a0ef54a89b24d63d33a9fce59e9aa7a21ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:44Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.945191 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:44Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.965479 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2386c238-461f-4956-940f-ac3c26eb052e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b3e223067a57a7ae418e1de80dff3c7537e0506e040028c413225f25397f03d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4247d5ab0095ab6c7315fdfcde34f8407d870723059e61c3417c20f80dc06ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-da
emon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwshd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:44Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.979715 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z6xtb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e11c7f86-73db-4015-9fe5-c0b5047c19a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d7a31c58b763dd30aa10f250fa654756dbb10166064c3adf6d313cc13b066f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11d7a31c58b763dd30aa10f250fa654756dbb10166064c3adf6d313cc13b066f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://533b405c883d98c14aca92e6a23bd1743dc0a76ed8256f2b892746195602fb03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://533b405c883d98c14aca92e6a23bd1743dc0a76ed8256f2b892746195602fb03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c508fde56a767c063afbb65f6272695d739d7d0c594efcafd7465397309ef931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c508fde56a767c063afbb65f6272695d739d7d0c594efcafd7465397309ef931\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dabd0e99ffb72fe38680ba04e6dc2cc342f2b21368c0646ac51924fdca817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dabd0e99ffb72fe38680ba04e6dc2cc342f2b21368c0646ac51924fdca817b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfe1b37351de69494e5f42516d1363a1acd97912fd754ddca224b61b7020b0e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfe1b37351de69494e5f42516d1363a1acd97912fd754ddca224b61b7020b0e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z6xtb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:44Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.985175 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.985204 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.985211 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.985224 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.985235 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:44Z","lastTransitionTime":"2025-11-22T10:38:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:44 crc kubenswrapper[4772]: I1122 10:38:44.990942 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af3d235e676f9a66e16a76f30d940c652287b80507d2335b94aaad54d2aae071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b9e216df73a0c6e89da9a01b70ca7393dea61d59aa4237584add3411d3362d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:44Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:45 crc kubenswrapper[4772]: I1122 10:38:45.002325 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e026ddd84cdd62f9fa89b0c173fa464361017d3603d80e2b88a1b04af13487c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:45Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:45 crc kubenswrapper[4772]: I1122 10:38:45.013173 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n8qfx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0565ed-eb43-43a3-974c-45a23e9615a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73a731bf43107a62df11bd3a033a52f25d32911218822f9c5ac1f3d9b6f718ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98gmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a2e02f93671de603bc005b4f7561a62bb7681d678a3348b837656ae2af54f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98gmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-n8qfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:45Z is after 2025-08-24T17:21:41Z" Nov 22 
10:38:45 crc kubenswrapper[4772]: I1122 10:38:45.087431 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:45 crc kubenswrapper[4772]: I1122 10:38:45.087458 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:45 crc kubenswrapper[4772]: I1122 10:38:45.087466 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:45 crc kubenswrapper[4772]: I1122 10:38:45.087478 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:45 crc kubenswrapper[4772]: I1122 10:38:45.087486 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:45Z","lastTransitionTime":"2025-11-22T10:38:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:45 crc kubenswrapper[4772]: I1122 10:38:45.145547 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:38:45 crc kubenswrapper[4772]: E1122 10:38:45.145765 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:39:01.145745998 +0000 UTC m=+61.385190492 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:38:45 crc kubenswrapper[4772]: I1122 10:38:45.190469 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:45 crc kubenswrapper[4772]: I1122 10:38:45.190510 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:45 crc kubenswrapper[4772]: I1122 10:38:45.190520 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:45 crc kubenswrapper[4772]: I1122 10:38:45.190537 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:45 crc kubenswrapper[4772]: I1122 10:38:45.190550 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:45Z","lastTransitionTime":"2025-11-22T10:38:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:45 crc kubenswrapper[4772]: I1122 10:38:45.246690 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:38:45 crc kubenswrapper[4772]: I1122 10:38:45.246732 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 10:38:45 crc kubenswrapper[4772]: I1122 10:38:45.246756 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c89edce7-fac8-4954-b2e9-420f0f2de6a8-metrics-certs\") pod \"network-metrics-daemon-fvsrl\" (UID: \"c89edce7-fac8-4954-b2e9-420f0f2de6a8\") " pod="openshift-multus/network-metrics-daemon-fvsrl" Nov 22 10:38:45 crc kubenswrapper[4772]: I1122 10:38:45.246774 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:38:45 crc kubenswrapper[4772]: I1122 10:38:45.246798 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 10:38:45 crc kubenswrapper[4772]: E1122 10:38:45.246910 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 10:38:45 crc kubenswrapper[4772]: E1122 10:38:45.246926 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 10:38:45 crc kubenswrapper[4772]: E1122 10:38:45.246936 4772 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 10:38:45 crc kubenswrapper[4772]: E1122 10:38:45.246981 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-22 10:39:01.24696594 +0000 UTC m=+61.486410434 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 10:38:45 crc kubenswrapper[4772]: E1122 10:38:45.247015 4772 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 10:38:45 crc kubenswrapper[4772]: E1122 10:38:45.247033 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 10:39:01.247027452 +0000 UTC m=+61.486471946 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 10:38:45 crc kubenswrapper[4772]: E1122 10:38:45.247149 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 10:38:45 crc kubenswrapper[4772]: E1122 10:38:45.247164 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 10:38:45 crc kubenswrapper[4772]: E1122 10:38:45.247173 4772 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 10:38:45 crc kubenswrapper[4772]: E1122 10:38:45.247195 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-22 10:39:01.247188236 +0000 UTC m=+61.486632730 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 10:38:45 crc kubenswrapper[4772]: E1122 10:38:45.247237 4772 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 10:38:45 crc kubenswrapper[4772]: E1122 10:38:45.247257 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c89edce7-fac8-4954-b2e9-420f0f2de6a8-metrics-certs podName:c89edce7-fac8-4954-b2e9-420f0f2de6a8 nodeName:}" failed. No retries permitted until 2025-11-22 10:38:46.247249188 +0000 UTC m=+46.486693682 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c89edce7-fac8-4954-b2e9-420f0f2de6a8-metrics-certs") pod "network-metrics-daemon-fvsrl" (UID: "c89edce7-fac8-4954-b2e9-420f0f2de6a8") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 10:38:45 crc kubenswrapper[4772]: E1122 10:38:45.247287 4772 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 10:38:45 crc kubenswrapper[4772]: E1122 10:38:45.247305 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 10:39:01.247299859 +0000 UTC m=+61.486744343 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 10:38:45 crc kubenswrapper[4772]: I1122 10:38:45.292642 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:45 crc kubenswrapper[4772]: I1122 10:38:45.292852 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:45 crc kubenswrapper[4772]: I1122 10:38:45.292944 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:45 crc kubenswrapper[4772]: I1122 10:38:45.293007 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:45 crc kubenswrapper[4772]: I1122 10:38:45.293099 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:45Z","lastTransitionTime":"2025-11-22T10:38:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:45 crc kubenswrapper[4772]: I1122 10:38:45.395949 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:45 crc kubenswrapper[4772]: I1122 10:38:45.396008 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:45 crc kubenswrapper[4772]: I1122 10:38:45.396026 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:45 crc kubenswrapper[4772]: I1122 10:38:45.396105 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:45 crc kubenswrapper[4772]: I1122 10:38:45.396125 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:45Z","lastTransitionTime":"2025-11-22T10:38:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:45 crc kubenswrapper[4772]: I1122 10:38:45.413515 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:38:45 crc kubenswrapper[4772]: I1122 10:38:45.413565 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 10:38:45 crc kubenswrapper[4772]: E1122 10:38:45.414110 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 10:38:45 crc kubenswrapper[4772]: E1122 10:38:45.414573 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 10:38:45 crc kubenswrapper[4772]: I1122 10:38:45.499147 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:45 crc kubenswrapper[4772]: I1122 10:38:45.499202 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:45 crc kubenswrapper[4772]: I1122 10:38:45.499223 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:45 crc kubenswrapper[4772]: I1122 10:38:45.499283 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:45 crc kubenswrapper[4772]: I1122 10:38:45.499312 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:45Z","lastTransitionTime":"2025-11-22T10:38:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:45 crc kubenswrapper[4772]: I1122 10:38:45.602916 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:45 crc kubenswrapper[4772]: I1122 10:38:45.602979 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:45 crc kubenswrapper[4772]: I1122 10:38:45.603001 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:45 crc kubenswrapper[4772]: I1122 10:38:45.603028 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:45 crc kubenswrapper[4772]: I1122 10:38:45.603106 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:45Z","lastTransitionTime":"2025-11-22T10:38:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:45 crc kubenswrapper[4772]: I1122 10:38:45.706952 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:45 crc kubenswrapper[4772]: I1122 10:38:45.707012 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:45 crc kubenswrapper[4772]: I1122 10:38:45.707025 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:45 crc kubenswrapper[4772]: I1122 10:38:45.707064 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:45 crc kubenswrapper[4772]: I1122 10:38:45.707078 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:45Z","lastTransitionTime":"2025-11-22T10:38:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:45 crc kubenswrapper[4772]: I1122 10:38:45.810827 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:45 crc kubenswrapper[4772]: I1122 10:38:45.810900 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:45 crc kubenswrapper[4772]: I1122 10:38:45.810923 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:45 crc kubenswrapper[4772]: I1122 10:38:45.810956 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:45 crc kubenswrapper[4772]: I1122 10:38:45.810977 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:45Z","lastTransitionTime":"2025-11-22T10:38:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:45 crc kubenswrapper[4772]: I1122 10:38:45.914674 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:45 crc kubenswrapper[4772]: I1122 10:38:45.914742 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:45 crc kubenswrapper[4772]: I1122 10:38:45.914761 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:45 crc kubenswrapper[4772]: I1122 10:38:45.914789 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:45 crc kubenswrapper[4772]: I1122 10:38:45.914811 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:45Z","lastTransitionTime":"2025-11-22T10:38:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:46 crc kubenswrapper[4772]: I1122 10:38:46.018109 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:46 crc kubenswrapper[4772]: I1122 10:38:46.018228 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:46 crc kubenswrapper[4772]: I1122 10:38:46.018377 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:46 crc kubenswrapper[4772]: I1122 10:38:46.018460 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:46 crc kubenswrapper[4772]: I1122 10:38:46.018480 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:46Z","lastTransitionTime":"2025-11-22T10:38:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:46 crc kubenswrapper[4772]: I1122 10:38:46.122265 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:46 crc kubenswrapper[4772]: I1122 10:38:46.122796 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:46 crc kubenswrapper[4772]: I1122 10:38:46.122901 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:46 crc kubenswrapper[4772]: I1122 10:38:46.123071 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:46 crc kubenswrapper[4772]: I1122 10:38:46.123204 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:46Z","lastTransitionTime":"2025-11-22T10:38:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:46 crc kubenswrapper[4772]: I1122 10:38:46.225285 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:46 crc kubenswrapper[4772]: I1122 10:38:46.225517 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:46 crc kubenswrapper[4772]: I1122 10:38:46.225623 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:46 crc kubenswrapper[4772]: I1122 10:38:46.225697 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:46 crc kubenswrapper[4772]: I1122 10:38:46.225757 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:46Z","lastTransitionTime":"2025-11-22T10:38:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:46 crc kubenswrapper[4772]: I1122 10:38:46.259452 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c89edce7-fac8-4954-b2e9-420f0f2de6a8-metrics-certs\") pod \"network-metrics-daemon-fvsrl\" (UID: \"c89edce7-fac8-4954-b2e9-420f0f2de6a8\") " pod="openshift-multus/network-metrics-daemon-fvsrl" Nov 22 10:38:46 crc kubenswrapper[4772]: E1122 10:38:46.259656 4772 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 10:38:46 crc kubenswrapper[4772]: E1122 10:38:46.259720 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c89edce7-fac8-4954-b2e9-420f0f2de6a8-metrics-certs podName:c89edce7-fac8-4954-b2e9-420f0f2de6a8 nodeName:}" failed. No retries permitted until 2025-11-22 10:38:48.259698444 +0000 UTC m=+48.499142938 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c89edce7-fac8-4954-b2e9-420f0f2de6a8-metrics-certs") pod "network-metrics-daemon-fvsrl" (UID: "c89edce7-fac8-4954-b2e9-420f0f2de6a8") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 10:38:46 crc kubenswrapper[4772]: I1122 10:38:46.329276 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:46 crc kubenswrapper[4772]: I1122 10:38:46.329335 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:46 crc kubenswrapper[4772]: I1122 10:38:46.329348 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:46 crc kubenswrapper[4772]: I1122 10:38:46.329369 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:46 crc kubenswrapper[4772]: I1122 10:38:46.329381 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:46Z","lastTransitionTime":"2025-11-22T10:38:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:46 crc kubenswrapper[4772]: I1122 10:38:46.412634 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 10:38:46 crc kubenswrapper[4772]: E1122 10:38:46.412781 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 10:38:46 crc kubenswrapper[4772]: I1122 10:38:46.413153 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvsrl" Nov 22 10:38:46 crc kubenswrapper[4772]: E1122 10:38:46.413216 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvsrl" podUID="c89edce7-fac8-4954-b2e9-420f0f2de6a8" Nov 22 10:38:46 crc kubenswrapper[4772]: I1122 10:38:46.432096 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:46 crc kubenswrapper[4772]: I1122 10:38:46.432163 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:46 crc kubenswrapper[4772]: I1122 10:38:46.432173 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:46 crc kubenswrapper[4772]: I1122 10:38:46.432188 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:46 crc kubenswrapper[4772]: I1122 10:38:46.432204 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:46Z","lastTransitionTime":"2025-11-22T10:38:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:46 crc kubenswrapper[4772]: I1122 10:38:46.534291 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:46 crc kubenswrapper[4772]: I1122 10:38:46.534331 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:46 crc kubenswrapper[4772]: I1122 10:38:46.534340 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:46 crc kubenswrapper[4772]: I1122 10:38:46.534354 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:46 crc kubenswrapper[4772]: I1122 10:38:46.534365 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:46Z","lastTransitionTime":"2025-11-22T10:38:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:46 crc kubenswrapper[4772]: I1122 10:38:46.636455 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:46 crc kubenswrapper[4772]: I1122 10:38:46.636494 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:46 crc kubenswrapper[4772]: I1122 10:38:46.636503 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:46 crc kubenswrapper[4772]: I1122 10:38:46.636518 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:46 crc kubenswrapper[4772]: I1122 10:38:46.636530 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:46Z","lastTransitionTime":"2025-11-22T10:38:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:46 crc kubenswrapper[4772]: I1122 10:38:46.723718 4772 generic.go:334] "Generic (PLEG): container finished" podID="e11c7f86-73db-4015-9fe5-c0b5047c19a0" containerID="df8442718fb97d911a608083bbbcee822108ea95c9896e42b0f37fa05c0b8d3c" exitCode=0 Nov 22 10:38:46 crc kubenswrapper[4772]: I1122 10:38:46.723788 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z6xtb" event={"ID":"e11c7f86-73db-4015-9fe5-c0b5047c19a0","Type":"ContainerDied","Data":"df8442718fb97d911a608083bbbcee822108ea95c9896e42b0f37fa05c0b8d3c"} Nov 22 10:38:46 crc kubenswrapper[4772]: I1122 10:38:46.739140 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:46 crc kubenswrapper[4772]: I1122 10:38:46.739219 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:46 crc kubenswrapper[4772]: I1122 10:38:46.739238 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:46 crc kubenswrapper[4772]: I1122 10:38:46.739274 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:46 crc kubenswrapper[4772]: I1122 10:38:46.739292 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:46Z","lastTransitionTime":"2025-11-22T10:38:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:46 crc kubenswrapper[4772]: I1122 10:38:46.742763 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af3d235e676f9a66e16a76f30d940c652287b80507d2335b94aaad54d2aae071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b9e216df73a0c6e89da9a01b70ca7393dea61d59aa4237584add3411d3362d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:46Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:46 crc kubenswrapper[4772]: I1122 10:38:46.762913 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e026ddd84cdd62f9fa89b0c173fa464361017d3603d80e2b88a1b04af13487c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:46Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:46 crc kubenswrapper[4772]: I1122 10:38:46.777329 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n8qfx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0565ed-eb43-43a3-974c-45a23e9615a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73a731bf43107a62df11bd3a033a52f25d32911218822f9c5ac1f3d9b6f718ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98gmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a2e02f93671de603bc005b4f7561a62bb7681d678a3348b837656ae2af54f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98gmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-n8qfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:46Z is after 2025-08-24T17:21:41Z" Nov 22 
10:38:46 crc kubenswrapper[4772]: I1122 10:38:46.791844 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:46Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:46 crc kubenswrapper[4772]: I1122 10:38:46.813221 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd84e05e-cfd6-46d5-bd23-30689addcd8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://405215cb0867af5a98dd2c0948c12867783df5de69f3e68c0c420e94bd41fbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3372362ced27c1e5e6ed3271a6b0fd19c04ce1d12a6c0b45e60b4f19b9d9d9e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7959ee06497c972cc6915e3ab426dc2bba73e58343902084895fcca45fc97a61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dfec56a475b9928ac7803e64d6fd305028a7f8d66647c8d06eeaa12cb73838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da55e361a3ec7dca7a00a9ccac8e72cf43be0ebb8973fcac40e3bf86dac5237e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc7ab7764e2d893152e965392c916dff7876acb81d134cdec2615b253dbee64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b60f034e3e012ea5ce1371b4bfbbda3cd6a2466e7e8b8b4106b5fe5f3a14d146\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac96e66a7d3d7899a8196d07614bccdd741145d5dc57c26b30fc4e127c6a94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPat
h\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mfm49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:46Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:46 crc kubenswrapper[4772]: I1122 10:38:46.825587 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mbpk7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e39748b-4fa5-4a70-8921-dc3dc814f124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c334585be8f67849986e80c7a7ea777340e93f963ba58fa0fb7b36b16a73142b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m5ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mbpk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:46Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:46 crc kubenswrapper[4772]: I1122 10:38:46.839682 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fvsrl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c89edce7-fac8-4954-b2e9-420f0f2de6a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x76mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x76mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fvsrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:46Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:46 crc kubenswrapper[4772]: I1122 10:38:46.843393 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:46 crc kubenswrapper[4772]: I1122 10:38:46.843428 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:46 crc kubenswrapper[4772]: I1122 10:38:46.843437 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:46 crc kubenswrapper[4772]: I1122 10:38:46.843453 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:46 crc kubenswrapper[4772]: I1122 10:38:46.843467 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:46Z","lastTransitionTime":"2025-11-22T10:38:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:46 crc kubenswrapper[4772]: I1122 10:38:46.853079 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s4mvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d73fd58d-561a-4b16-9f9d-49ae966edb24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ae06ffdc2c0f4cebcbf28f2997026db4d45bcd0c95776e5a1d94527503041d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s4mvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:46Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:46 crc kubenswrapper[4772]: I1122 10:38:46.866421 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"888813e4-14b2-4bbc-badf-3fd7c315a740\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ade6ffb5c4bd3c19a2d85f21de1e0f198d6729b45df79233c8db3c73aff066f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7c687d1a3e98c09917692169294f8549b0f1ddeddcc97c073da4d8e5c17e012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1452c164ce569dfa4665a70113fb965905d1974744637904d6bfba2e35446f11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-oper
ator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b331f41571928038bb597f1e94a67d24e726471c1a22082607dd26c11e8ea33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:46Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:46 crc kubenswrapper[4772]: I1122 10:38:46.884409 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9557fb28-d21d-45a3-b7b9-170936e83f56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a77d132d925d70472ff5456bf4521c0612c319296dd1f3fa31c68d3c84fb347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddce747c72c887bb7dff7503755d81d0e65305d2545f93dcb0d20bd6bf32b495\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187f5633fd42f380d4976965809b538530acef067adc7e02203f95df532dbbfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 10:38:29.060158 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 10:38:29.060179 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 10:38:29.060386 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\"\\\\nI1122 10:38:29.060380 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763807893\\\\\\\\\\\\\\\" (2025-11-22 10:38:12 +0000 UTC to 2025-12-22 10:38:13 +0000 UTC (now=2025-11-22 10:38:29.060343851 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060557 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763807909\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763807908\\\\\\\\\\\\\\\" (2025-11-22 09:38:28 +0000 UTC to 2026-11-22 09:38:28 +0000 UTC (now=2025-11-22 10:38:29.060522446 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060582 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 10:38:29.060604 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 10:38:29.060636 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 10:38:29.062066 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062334 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062385 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 10:38:29.063702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42fc0afe421b18b7212836931c95487807ac0b32f6e87d92e847bd7826aedcd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:46Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:46 crc kubenswrapper[4772]: I1122 10:38:46.898368 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e05750665701a1973e974ba759b2a0ef54a89b24d63d33a9fce59e9aa7a21ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:46Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:46 crc kubenswrapper[4772]: I1122 10:38:46.913094 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:46Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:46 crc kubenswrapper[4772]: I1122 10:38:46.933444 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:46Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:46 crc kubenswrapper[4772]: I1122 10:38:46.947027 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:46 crc kubenswrapper[4772]: I1122 10:38:46.947323 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:46 crc kubenswrapper[4772]: I1122 10:38:46.947401 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:46 crc kubenswrapper[4772]: I1122 10:38:46.947479 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:46 crc kubenswrapper[4772]: I1122 10:38:46.947544 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:46Z","lastTransitionTime":"2025-11-22T10:38:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:46 crc kubenswrapper[4772]: I1122 10:38:46.957605 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9qv88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"751566de-1913-4dd6-9054-21febc661c27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902be5c4bb0f9aaca4a21ca0af13b2dc29d682c30e1e2d281beb6e4e15b6aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jckg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9qv88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:46Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:46 crc kubenswrapper[4772]: I1122 10:38:46.972563 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2386c238-461f-4956-940f-ac3c26eb052e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b3e223067a57a7ae418e1de80dff3c7537e0506e040028c413225f25397f03d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4247d5ab0095ab6c7315fdfcde34f8407d870723059e61c3417c20f80dc06ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwshd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:46Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:46 crc kubenswrapper[4772]: I1122 10:38:46.986735 4772 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z6xtb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e11c7f86-73db-4015-9fe5-c0b5047c19a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d7a31c58b763dd30aa10f250fa654756dbb10166064c3adf6d313cc13b066f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11d7a31c58b763dd30aa10f250fa654756dbb10166064c3adf6d313cc13b066f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://533b405c883d98c14aca92e6a23bd1743dc0a76ed8256f2b892746195602fb03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a168
8df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://533b405c883d98c14aca92e6a23bd1743dc0a76ed8256f2b892746195602fb03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c508fde56a767c063afbb65f6272695d739d7d0c594efcafd7465397309ef931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c508fde56a767c063afbb65f6272695d739d7d0c594efcafd7465397309ef931\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dabd0e99ffb72fe38680ba04e6dc2cc342f2b21368c0646ac51924fdca817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dabd0e99ffb72fe38680ba04e6dc2cc342f2b21368c0646ac51924fdca817b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"
/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfe1b37351de69494e5f42516d1363a1acd97912fd754ddca224b61b7020b0e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfe1b37351de69494e5f42516d1363a1acd97912fd754ddca224b61b7020b0e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8442718fb97d911a608083bbbcee822108ea95c9896e42b0f37fa05c0b8d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df8442718fb97d911a608083bbbcee822108ea95c9896e42b0f37fa05c0b8d3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z6xtb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:46Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:47 crc kubenswrapper[4772]: I1122 10:38:47.049813 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:47 crc kubenswrapper[4772]: I1122 10:38:47.049852 4772 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:47 crc kubenswrapper[4772]: I1122 10:38:47.049862 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:47 crc kubenswrapper[4772]: I1122 10:38:47.049879 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:47 crc kubenswrapper[4772]: I1122 10:38:47.049890 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:47Z","lastTransitionTime":"2025-11-22T10:38:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:47 crc kubenswrapper[4772]: I1122 10:38:47.151978 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:47 crc kubenswrapper[4772]: I1122 10:38:47.152011 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:47 crc kubenswrapper[4772]: I1122 10:38:47.152020 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:47 crc kubenswrapper[4772]: I1122 10:38:47.152034 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:47 crc kubenswrapper[4772]: I1122 10:38:47.152063 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:47Z","lastTransitionTime":"2025-11-22T10:38:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:47 crc kubenswrapper[4772]: I1122 10:38:47.255112 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:47 crc kubenswrapper[4772]: I1122 10:38:47.255161 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:47 crc kubenswrapper[4772]: I1122 10:38:47.255176 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:47 crc kubenswrapper[4772]: I1122 10:38:47.255200 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:47 crc kubenswrapper[4772]: I1122 10:38:47.255226 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:47Z","lastTransitionTime":"2025-11-22T10:38:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:47 crc kubenswrapper[4772]: I1122 10:38:47.358280 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:47 crc kubenswrapper[4772]: I1122 10:38:47.358324 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:47 crc kubenswrapper[4772]: I1122 10:38:47.358335 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:47 crc kubenswrapper[4772]: I1122 10:38:47.358354 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:47 crc kubenswrapper[4772]: I1122 10:38:47.358368 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:47Z","lastTransitionTime":"2025-11-22T10:38:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:47 crc kubenswrapper[4772]: I1122 10:38:47.412560 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 10:38:47 crc kubenswrapper[4772]: I1122 10:38:47.412570 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:38:47 crc kubenswrapper[4772]: E1122 10:38:47.412791 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 10:38:47 crc kubenswrapper[4772]: E1122 10:38:47.412855 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 10:38:47 crc kubenswrapper[4772]: I1122 10:38:47.462090 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:47 crc kubenswrapper[4772]: I1122 10:38:47.462132 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:47 crc kubenswrapper[4772]: I1122 10:38:47.462143 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:47 crc kubenswrapper[4772]: I1122 10:38:47.462161 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:47 crc kubenswrapper[4772]: I1122 10:38:47.462173 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:47Z","lastTransitionTime":"2025-11-22T10:38:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:47 crc kubenswrapper[4772]: I1122 10:38:47.566304 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:47 crc kubenswrapper[4772]: I1122 10:38:47.566399 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:47 crc kubenswrapper[4772]: I1122 10:38:47.566422 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:47 crc kubenswrapper[4772]: I1122 10:38:47.566456 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:47 crc kubenswrapper[4772]: I1122 10:38:47.566486 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:47Z","lastTransitionTime":"2025-11-22T10:38:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:47 crc kubenswrapper[4772]: I1122 10:38:47.669182 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:47 crc kubenswrapper[4772]: I1122 10:38:47.669248 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:47 crc kubenswrapper[4772]: I1122 10:38:47.669265 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:47 crc kubenswrapper[4772]: I1122 10:38:47.669289 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:47 crc kubenswrapper[4772]: I1122 10:38:47.669308 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:47Z","lastTransitionTime":"2025-11-22T10:38:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:47 crc kubenswrapper[4772]: I1122 10:38:47.735274 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z6xtb" event={"ID":"e11c7f86-73db-4015-9fe5-c0b5047c19a0","Type":"ContainerStarted","Data":"901e96ef18a53b8231a138e235e8ed145d94f111dc515aaa5e5415c18535e457"} Nov 22 10:38:47 crc kubenswrapper[4772]: I1122 10:38:47.737703 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mfm49_fd84e05e-cfd6-46d5-bd23-30689addcd8b/ovnkube-controller/0.log" Nov 22 10:38:47 crc kubenswrapper[4772]: I1122 10:38:47.741819 4772 generic.go:334] "Generic (PLEG): container finished" podID="fd84e05e-cfd6-46d5-bd23-30689addcd8b" containerID="b60f034e3e012ea5ce1371b4bfbbda3cd6a2466e7e8b8b4106b5fe5f3a14d146" exitCode=1 Nov 22 10:38:47 crc kubenswrapper[4772]: I1122 10:38:47.741869 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" event={"ID":"fd84e05e-cfd6-46d5-bd23-30689addcd8b","Type":"ContainerDied","Data":"b60f034e3e012ea5ce1371b4bfbbda3cd6a2466e7e8b8b4106b5fe5f3a14d146"} Nov 22 10:38:47 crc kubenswrapper[4772]: I1122 10:38:47.742694 4772 scope.go:117] "RemoveContainer" containerID="b60f034e3e012ea5ce1371b4bfbbda3cd6a2466e7e8b8b4106b5fe5f3a14d146" Nov 22 10:38:47 crc kubenswrapper[4772]: I1122 10:38:47.760334 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:47Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:47 crc kubenswrapper[4772]: I1122 10:38:47.772213 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:47 crc kubenswrapper[4772]: I1122 10:38:47.772301 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:47 crc kubenswrapper[4772]: I1122 10:38:47.772329 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:47 crc kubenswrapper[4772]: I1122 10:38:47.772370 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:47 crc kubenswrapper[4772]: I1122 10:38:47.772422 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:47Z","lastTransitionTime":"2025-11-22T10:38:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:47 crc kubenswrapper[4772]: I1122 10:38:47.795841 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd84e05e-cfd6-46d5-bd23-30689addcd8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://405215cb0867af5a98dd2c0948c12867783df5de69f3e68c0c420e94bd41fbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3372362ced27c1e5e6ed3271a6b0fd19c04ce1d12a6c0b45e60b4f19b9d9d9e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://7959ee06497c972cc6915e3ab426dc2bba73e58343902084895fcca45fc97a61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dfec56a475b9928ac7803e64d6fd305028a7f8d66647c8d06eeaa12cb73838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da55e361a3ec7dca7a00a9ccac8e72cf43be0ebb8973fcac40e3bf86dac5237e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc7ab7764e2d893152e965392c916dff7876acb81d134cdec2615b253dbee64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b60f034e3e012ea5ce1371b4bfbbda3cd6a2466e7e8b8b4106b5fe5f3a14d146\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\
"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac96e66a7d3d7899a8196d07614bccdd741145d5dc57c26b30fc4e127c6a94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mfm49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:47Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:47 crc kubenswrapper[4772]: I1122 10:38:47.814745 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mbpk7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e39748b-4fa5-4a70-8921-dc3dc814f124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c334585be8f67849986e80c7a7ea777340e93f963ba58fa0fb7b36b16a73142b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m5ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mbpk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:47Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:47 crc kubenswrapper[4772]: I1122 10:38:47.834897 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fvsrl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c89edce7-fac8-4954-b2e9-420f0f2de6a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x76mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x76mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fvsrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:47Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:47 crc kubenswrapper[4772]: I1122 10:38:47.850103 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9qv88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"751566de-1913-4dd6-9054-21febc661c27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902be5c4bb0f9aaca4a21ca0af13b2dc29d682c30e1e2d281beb6e4e15b6aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jckg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9qv88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:47Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:47 crc kubenswrapper[4772]: I1122 10:38:47.864218 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s4mvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d73fd58d-561a-4b16-9f9d-49ae966edb24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ae06ffdc2c0f4cebcbf28f2997026db4d45bcd0c95776e5a1d94527503041d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s4mvm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:47Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:47 crc kubenswrapper[4772]: I1122 10:38:47.876242 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:47 crc kubenswrapper[4772]: I1122 10:38:47.876279 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:47 crc kubenswrapper[4772]: I1122 10:38:47.876291 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:47 crc kubenswrapper[4772]: I1122 10:38:47.876310 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:47 crc kubenswrapper[4772]: I1122 10:38:47.876327 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:47Z","lastTransitionTime":"2025-11-22T10:38:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:47 crc kubenswrapper[4772]: I1122 10:38:47.877873 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"888813e4-14b2-4bbc-badf-3fd7c315a740\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ade6ffb5c4bd3c19a2d85f21de1e0f198d6729b45df79233c8db3c73aff066f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7c687d1a3e98c09917692169294f8549b0f1ddeddcc97c073da4d8e5c17e012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc358257
71aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1452c164ce569dfa4665a70113fb965905d1974744637904d6bfba2e35446f11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b331f41571928038bb597f1e94a67d24e726471c1a22082607dd26c11e8ea33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:47Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:47 crc kubenswrapper[4772]: I1122 10:38:47.892398 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9557fb28-d21d-45a3-b7b9-170936e83f56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a77d132d925d70472ff5456bf4521c0612c319296dd1f3fa31c68d3c84fb347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddce747c72c887bb7dff7503755d81d0e65305d2545f93dcb0d20bd6bf32b495\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187f5633fd42f380d4976965809b538530acef067adc7e02203f95df532dbbfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 10:38:29.060158 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 10:38:29.060179 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 10:38:29.060386 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\"\\\\nI1122 10:38:29.060380 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763807893\\\\\\\\\\\\\\\" (2025-11-22 10:38:12 +0000 UTC to 2025-12-22 10:38:13 +0000 UTC (now=2025-11-22 10:38:29.060343851 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060557 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763807909\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763807908\\\\\\\\\\\\\\\" (2025-11-22 09:38:28 +0000 UTC to 2026-11-22 09:38:28 +0000 UTC (now=2025-11-22 10:38:29.060522446 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060582 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 10:38:29.060604 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 10:38:29.060636 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 10:38:29.062066 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062334 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062385 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 10:38:29.063702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42fc0afe421b18b7212836931c95487807ac0b32f6e87d92e847bd7826aedcd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:47Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:47 crc kubenswrapper[4772]: I1122 10:38:47.904435 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e05750665701a1973e974ba759b2a0ef54a89b24d63d33a9fce59e9aa7a21ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:47Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:47 crc kubenswrapper[4772]: I1122 10:38:47.915346 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:47Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:47 crc kubenswrapper[4772]: I1122 10:38:47.926532 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:47Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:47 crc kubenswrapper[4772]: I1122 10:38:47.937072 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2386c238-461f-4956-940f-ac3c26eb052e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b3e223067a57a7ae418e1de80dff3c7537e0506e040028c413225f25397f03d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4247d5ab0095ab6c7315fdfcde34f8407d870723059e61c3417c20f80dc06ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwshd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:47Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:47 crc kubenswrapper[4772]: I1122 10:38:47.950452 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z6xtb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e11c7f86-73db-4015-9fe5-c0b5047c19a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://901e96ef18a53b8231a138e235e8ed145d94f111dc515aaa5e5415c18535e457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d7a31c58b763dd30aa10f250fa654756dbb10166064c3adf6d313cc13b066f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11d7a31c58b763dd30aa10f250fa654756dbb10166064c3adf6d313cc13b066f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://533b405c883d98c14aca92e6a23bd1743dc0a76ed8256f2b892746195602fb03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://533b405c883d98c14aca92e6a23bd1743dc0a76ed8256f2b892746195602fb03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c508fde56a767c063afbb65f6272695d739d7d0c594efcafd7465397309ef931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c508fde56a767c063afbb65f6272695d739d7d0c594efcafd7465397309ef931\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dabd0e99ffb72fe38680ba04e6dc2cc342f2b21368c0646ac51924fdca817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dabd0e99ffb72fe38680ba04e6dc2cc342f2b21368c0646ac51924fdca817b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfe1b37351de69494e5f42516d1363a1acd97912fd754ddca224b61b7020b0e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfe1b37351de69494e5f42516d1363a1acd97912fd754ddca224b61b7020b0e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8442718fb97d911a608083bbbcee822108ea95c9896e42b0f37fa05c0b8d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df8442718fb97d911a608083bbbcee822108ea95c9896e42b0f37fa05c0b8d3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:46Z
\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z6xtb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:47Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:47 crc kubenswrapper[4772]: I1122 10:38:47.964302 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af3d235e676f9a66e16a76f30d940c652287b80507d2335b94aaad54d2aae071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b9e216df73a0c6e89da9a01b70ca7393dea61d59aa4237584add3411d3362d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"
mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:47Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:47 crc kubenswrapper[4772]: I1122 10:38:47.979490 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:47 crc kubenswrapper[4772]: I1122 10:38:47.979530 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:47 crc kubenswrapper[4772]: I1122 10:38:47.979538 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:47 crc kubenswrapper[4772]: I1122 10:38:47.979554 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:47 crc kubenswrapper[4772]: I1122 10:38:47.979563 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:47Z","lastTransitionTime":"2025-11-22T10:38:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:47 crc kubenswrapper[4772]: I1122 10:38:47.985542 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e026ddd84cdd62f9fa89b0c173fa464361017d3603d80e2b88a1b04af13487c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:47Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.000431 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n8qfx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0565ed-eb43-43a3-974c-45a23e9615a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73a731bf43107a62df11bd3a033a52f25d32911218822f9c5ac1f3d9b6f718ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98gmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a2e02f93671de603bc005b4f7561a62bb7681d678a3348b837656ae2af54f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98gmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-n8qfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:47Z is after 2025-08-24T17:21:41Z" Nov 22 
10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.014001 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:48Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.035514 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd84e05e-cfd6-46d5-bd23-30689addcd8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://405215cb0867af5a98dd2c0948c12867783df5de69f3e68c0c420e94bd41fbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3372362ced27c1e5e6ed3271a6b0fd19c04ce1d12a6c0b45e60b4f19b9d9d9e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7959ee06497c972cc6915e3ab426dc2bba73e58343902084895fcca45fc97a61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dfec56a475b9928ac7803e64d6fd305028a7f8d66647c8d06eeaa12cb73838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da55e361a3ec7dca7a00a9ccac8e72cf43be0ebb8973fcac40e3bf86dac5237e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc7ab7764e2d893152e965392c916dff7876acb81d134cdec2615b253dbee64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b60f034e3e012ea5ce1371b4bfbbda3cd6a2466e7e8b8b4106b5fe5f3a14d146\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b60f034e3e012ea5ce1371b4bfbbda3cd6a2466e7e8b8b4106b5fe5f3a14d146\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T10:38:47Z\\\",\\\"message\\\":\\\"val\\\\nI1122 10:38:47.184370 6004 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1122 10:38:47.184401 6004 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1122 10:38:47.184414 6004 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1122 10:38:47.184421 6004 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1122 10:38:47.184512 6004 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1122 10:38:47.185345 6004 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1122 10:38:47.185380 6004 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1122 10:38:47.185405 6004 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1122 10:38:47.185415 6004 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1122 10:38:47.185432 6004 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1122 10:38:47.185438 6004 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1122 10:38:47.185455 6004 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1122 10:38:47.185478 6004 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1122 10:38:47.185480 6004 factory.go:656] Stopping watch factory\\\\nI1122 10:38:47.185494 6004 handler.go:208] Removed *v1.Node event handler 7\\\\nI1122 10:38:47.185508 6004 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac96e66a7d3d7899a8196d07614bccdd741145d5dc57c26b30fc4e127c6a94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mfm49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:48Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.047262 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mbpk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e39748b-4fa5-4a70-8921-dc3dc814f124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c334585be8f67849986e80c7a7ea777340e93f963ba58fa0fb7b36b16a73142b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m5ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mbpk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:48Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.057512 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fvsrl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c89edce7-fac8-4954-b2e9-420f0f2de6a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x76mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x76mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fvsrl\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:48Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.072828 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"888813e4-14b2-4bbc-badf-3fd7c315a740\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ade6ffb5c4bd3c19a2d85f21de1e0f198d6729b45df79233c8db3c73aff066f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7c687d1a3e98c09917692169294f8549b0f1ddeddcc97c073da4d8e5c17e012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1452c164ce569dfa4665a70113fb965905d1974744637904d6bfba2e35446f11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b331f41571928038bb597f1e94a67d24e726471c1a22082607dd26c11e8ea33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:48Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.082805 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.082840 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.082849 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.082863 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.082873 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:48Z","lastTransitionTime":"2025-11-22T10:38:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.085641 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9557fb28-d21d-45a3-b7b9-170936e83f56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a77d132d925d70472ff5456bf4521c0612c319296dd1f3fa31c68d3c84fb347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddce747c72c887bb7dff7503755d81d0e65305d2545f93dcb0d20bd6bf32b495\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187f5633fd42f380d4976965809b538530acef067adc7e02203f95df532dbbfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 10:38:29.060158 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 10:38:29.060179 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 10:38:29.060386 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\"\\\\nI1122 10:38:29.060380 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763807893\\\\\\\\\\\\\\\" (2025-11-22 10:38:12 +0000 UTC to 2025-12-22 10:38:13 +0000 UTC (now=2025-11-22 10:38:29.060343851 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060557 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763807909\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763807908\\\\\\\\\\\\\\\" (2025-11-22 09:38:28 +0000 UTC to 2026-11-22 09:38:28 +0000 UTC (now=2025-11-22 10:38:29.060522446 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060582 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 10:38:29.060604 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 10:38:29.060636 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 10:38:29.062066 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062334 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062385 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 10:38:29.063702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42fc0afe421b18b7212836931c95487807ac0b32f6e87d92e847bd7826aedcd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:48Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.101069 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e05750665701a1973e974ba759b2a0ef54a89b24d63d33a9fce59e9aa7a21ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:48Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.116431 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:48Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.130836 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:48Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.142101 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9qv88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"751566de-1913-4dd6-9054-21febc661c27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902be5c4bb0f9aaca4a21ca0af13b2dc29d682c30e1e2d281beb6e4e15b6aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jckg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9qv88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:48Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.155556 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s4mvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d73fd58d-561a-4b16-9f9d-49ae966edb24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ae06ffdc2c0f4cebcbf28f2997026db4d45bcd0c95776e5a1d94527503041d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s4mvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:48Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.168074 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2386c238-461f-4956-940f-ac3c26eb052e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b3e223067a57a7ae418e1de80dff3c7537e0506e040028c413225f25397f03d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4247d5ab0095ab6c7315fdfcde34f8407d870723059e61c3417c20f80dc06ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwshd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:48Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.186290 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z6xtb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e11c7f86-73db-4015-9fe5-c0b5047c19a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://901e96ef18a53b8231a138e235e8ed145d94f111dc515aaa5e5415c18535e457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d7a31c58b763dd30aa10f250fa654756dbb10166064c3adf6d313cc13b066f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11d7a31c58b763dd30aa10f250fa654756dbb10166064c3adf6d313cc13b066f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":
\\\"2025-11-22T10:38:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://533b405c883d98c14aca92e6a23bd1743dc0a76ed8256f2b892746195602fb03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://533b405c883d98c14aca92e6a23bd1743dc0a76ed8256f2b892746195602fb03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c508fde56a767c063afbb65f6272695d739d7d0c594efcafd7465397309ef931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c508fde56a767c063afbb65f6272695d739d7d0c594efcafd7465397309ef931\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dabd0e99ffb72fe38680ba04e6dc2cc342f2b21368c0646ac51924fdca817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b
5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dabd0e99ffb72fe38680ba04e6dc2cc342f2b21368c0646ac51924fdca817b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfe1b37351de69494e5f42516d1363a1acd97912fd754ddca224b61b7020b0e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfe1b37351de69494e5f42516d1363a1acd97912fd754ddca224b61b7020b0e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8442718fb97d911a608083bbbcee822108ea95c9896e42b0f37fa05c0b8d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df8442718fb97d911a608083bbbcee822108ea95c9896e42b0f37fa05c0b8d3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z6xtb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:48Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.186906 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.187017 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.187040 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.187088 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.187108 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:48Z","lastTransitionTime":"2025-11-22T10:38:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.205199 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af3d235e676f9a66e16a76f30d940c652287b80507d2335b94aaad54d2aae071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b9e216df73a0c6e89da9a01b70ca7393dea61d59aa4237584add3411d3362d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:48Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.222094 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e026ddd84cdd62f9fa89b0c173fa464361017d3603d80e2b88a1b04af13487c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:48Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.237132 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n8qfx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0565ed-eb43-43a3-974c-45a23e9615a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73a731bf43107a62df11bd3a033a52f25d32911218822f9c5ac1f3d9b6f718ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98gmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a2e02f93671de603bc005b4f7561a62bb7681d678a3348b837656ae2af54f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98gmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-n8qfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:48Z is after 2025-08-24T17:21:41Z" Nov 22 
10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.281365 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c89edce7-fac8-4954-b2e9-420f0f2de6a8-metrics-certs\") pod \"network-metrics-daemon-fvsrl\" (UID: \"c89edce7-fac8-4954-b2e9-420f0f2de6a8\") " pod="openshift-multus/network-metrics-daemon-fvsrl" Nov 22 10:38:48 crc kubenswrapper[4772]: E1122 10:38:48.281553 4772 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 10:38:48 crc kubenswrapper[4772]: E1122 10:38:48.281808 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c89edce7-fac8-4954-b2e9-420f0f2de6a8-metrics-certs podName:c89edce7-fac8-4954-b2e9-420f0f2de6a8 nodeName:}" failed. No retries permitted until 2025-11-22 10:38:52.281789899 +0000 UTC m=+52.521234393 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c89edce7-fac8-4954-b2e9-420f0f2de6a8-metrics-certs") pod "network-metrics-daemon-fvsrl" (UID: "c89edce7-fac8-4954-b2e9-420f0f2de6a8") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.290195 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.290224 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.290231 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.290245 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.290255 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:48Z","lastTransitionTime":"2025-11-22T10:38:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.393032 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.393159 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.393174 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.393194 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.393208 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:48Z","lastTransitionTime":"2025-11-22T10:38:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.413264 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.413320 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvsrl" Nov 22 10:38:48 crc kubenswrapper[4772]: E1122 10:38:48.413392 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 10:38:48 crc kubenswrapper[4772]: E1122 10:38:48.413469 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fvsrl" podUID="c89edce7-fac8-4954-b2e9-420f0f2de6a8" Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.496003 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.496063 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.496076 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.496095 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.496107 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:48Z","lastTransitionTime":"2025-11-22T10:38:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.598728 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.598758 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.598769 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.598785 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.598795 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:48Z","lastTransitionTime":"2025-11-22T10:38:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.701523 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.701573 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.701587 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.701606 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.701620 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:48Z","lastTransitionTime":"2025-11-22T10:38:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.746009 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mfm49_fd84e05e-cfd6-46d5-bd23-30689addcd8b/ovnkube-controller/0.log" Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.748727 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" event={"ID":"fd84e05e-cfd6-46d5-bd23-30689addcd8b","Type":"ContainerStarted","Data":"f441beed4d46d40f9fb0368468d7aae7721c17a369b5edd3b5946438cb89c724"} Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.749083 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.762175 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:48Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.780276 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd84e05e-cfd6-46d5-bd23-30689addcd8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://405215cb0867af5a98dd2c0948c12867783df5de69f3e68c0c420e94bd41fbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3372362ced27c1e5e6ed3271a6b0fd19c04ce1d12a6c0b45e60b4f19b9d9d9e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7959ee06497c972cc6915e3ab426dc2bba73e58343902084895fcca45fc97a61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dfec56a475b9928ac7803e64d6fd305028a7f8d66647c8d06eeaa12cb73838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da55e361a3ec7dca7a00a9ccac8e72cf43be0ebb8973fcac40e3bf86dac5237e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc7ab7764e2d893152e965392c916dff7876acb81d134cdec2615b253dbee64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f441beed4d46d40f9fb0368468d7aae7721c17a3
69b5edd3b5946438cb89c724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b60f034e3e012ea5ce1371b4bfbbda3cd6a2466e7e8b8b4106b5fe5f3a14d146\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T10:38:47Z\\\",\\\"message\\\":\\\"val\\\\nI1122 10:38:47.184370 6004 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1122 10:38:47.184401 6004 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1122 10:38:47.184414 6004 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1122 10:38:47.184421 6004 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1122 10:38:47.184512 6004 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1122 10:38:47.185345 6004 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1122 10:38:47.185380 6004 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1122 10:38:47.185405 6004 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1122 10:38:47.185415 6004 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1122 10:38:47.185432 6004 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1122 10:38:47.185438 6004 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1122 10:38:47.185455 6004 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1122 10:38:47.185478 6004 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1122 10:38:47.185480 6004 factory.go:656] Stopping watch factory\\\\nI1122 10:38:47.185494 6004 handler.go:208] Removed *v1.Node event handler 7\\\\nI1122 10:38:47.185508 6004 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac96e66a7d3d7899a8196d07614bccdd741145d5dc57c26b30fc4e127c6a94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mfm49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:48Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.792323 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mbpk7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e39748b-4fa5-4a70-8921-dc3dc814f124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c334585be8f67849986e80c7a7ea777340e93f963ba58fa0fb7b36b16a73142b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m5ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mbpk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:48Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.803155 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fvsrl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c89edce7-fac8-4954-b2e9-420f0f2de6a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x76mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x76mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fvsrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:48Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.803929 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.803956 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.803965 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.803978 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.803988 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:48Z","lastTransitionTime":"2025-11-22T10:38:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.816029 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"888813e4-14b2-4bbc-badf-3fd7c315a740\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ade6ffb5c4bd3c19a2d85f21de1e0f198d6729b45df79233c8db3c73aff066f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7c687d1a3e98c09917692169294f8549b0f1ddeddcc97c073da4d8e5c17e012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1452c164ce569dfa4665a70113fb965905d1974744637904d6bfba2e35446f11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b331f41571928038bb597f1e94a67d24e726471c1a22082607dd26c11e8ea33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:48Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.831186 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9557fb28-d21d-45a3-b7b9-170936e83f56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a77d132d925d70472ff5456bf4521c0612c319296dd1f3fa31c68d3c84fb347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddce747c72c887bb7dff7503755d81d0e65305d2545f93dcb0d20bd6bf32b495\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187f5633fd42f380d4976965809b538530acef067adc7e02203f95df532dbbfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 10:38:29.060158 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 10:38:29.060179 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 10:38:29.060386 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\"\\\\nI1122 10:38:29.060380 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763807893\\\\\\\\\\\\\\\" (2025-11-22 10:38:12 +0000 UTC to 2025-12-22 10:38:13 +0000 UTC (now=2025-11-22 10:38:29.060343851 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060557 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763807909\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763807908\\\\\\\\\\\\\\\" (2025-11-22 09:38:28 +0000 UTC to 2026-11-22 09:38:28 +0000 UTC (now=2025-11-22 10:38:29.060522446 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060582 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 10:38:29.060604 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 10:38:29.060636 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 10:38:29.062066 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062334 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062385 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 10:38:29.063702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42fc0afe421b18b7212836931c95487807ac0b32f6e87d92e847bd7826aedcd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:48Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.843017 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e05750665701a1973e974ba759b2a0ef54a89b24d63d33a9fce59e9aa7a21ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:48Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.854088 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:48Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.866938 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:48Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.878594 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9qv88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"751566de-1913-4dd6-9054-21febc661c27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902be5c4bb0f9aaca4a21ca0af13b2dc29d682c30e1e2d281beb6e4e15b6aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jckg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9qv88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:48Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.890747 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s4mvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d73fd58d-561a-4b16-9f9d-49ae966edb24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ae06ffdc2c0f4cebcbf28f2997026db4d45bcd0c95776e5a1d94527503041d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s4mvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:48Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.900674 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2386c238-461f-4956-940f-ac3c26eb052e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b3e223067a57a7ae418e1de80dff3c7537e0506e040028c413225f25397f03d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4247d5ab0095ab6c7315fdfcde34f8407d870723059e61c3417c20f80dc06ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwshd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:48Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.907240 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.907299 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.907311 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.907329 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.907340 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:48Z","lastTransitionTime":"2025-11-22T10:38:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.912656 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z6xtb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e11c7f86-73db-4015-9fe5-c0b5047c19a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://901e96ef18a53b8231a138e235e8ed145d94f111dc515aaa5e5415c18535e457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d7a31c58b763dd30aa10f250fa654756dbb10166064c3adf6d313cc13b066f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11d7a31c58b763dd30aa10f250fa654756dbb10166064c3adf6d313cc13b066f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://533b405c883d98c14aca92e6a23bd1743dc0a76ed8256f2b892746195602fb03\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://533b405c883d98c14aca92e6a23bd1743dc0a76ed8256f2b892746195602fb03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c508fde56a767c063afbb65f6272695d739d7d0c594efcafd7465397309ef931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c508fde56a767c063afbb65f6272695d739d7d0c594efcafd7465397309ef931\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dabd0e99ffb72fe38680ba04e6dc2cc342f2b21368c0646ac51924fdca817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dabd0e99ffb72fe38680ba04e6dc2cc342f2b21368c0646ac51924fdca817b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfe1b37351de69494e5f42516d1363a1acd97912fd754ddca224b61b7020b0e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfe1b37351de69494e5f42516d1363a1acd97912fd754ddca224b61b7020b0e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8442718fb97d911a608083bbbcee822108ea95c9896e42b0f37fa05c0b8d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df8442718fb97d911a608083bbbcee822108ea95c9896e42b0f37fa05c0b8d3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z6xtb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:48Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.923004 4772 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af3d235e676f9a66e16a76f30d940c652287b80507d2335b94aaad54d2aae071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b9e216df73a0c6e89da9a01b70ca7393dea61d59aa4237584add3411d3362d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:48Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.933202 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e026ddd84cdd62f9fa89b0c173fa464361017d3603d80e2b88a1b04af13487c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:48Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:48 crc kubenswrapper[4772]: I1122 10:38:48.942336 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n8qfx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0565ed-eb43-43a3-974c-45a23e9615a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73a731bf43107a62df11bd3a033a52f25d32911218822f9c5ac1f3d9b6f718ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98gmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a2e02f93671de603bc005b4f7561a62bb7681d678a3348b837656ae2af54f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98gmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-n8qfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:48Z is after 2025-08-24T17:21:41Z" Nov 22 
10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.009728 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.009759 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.009767 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.009781 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.009792 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:49Z","lastTransitionTime":"2025-11-22T10:38:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.112449 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.112480 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.112488 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.112503 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.112512 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:49Z","lastTransitionTime":"2025-11-22T10:38:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.156302 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.156346 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.156363 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.156383 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.156398 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:49Z","lastTransitionTime":"2025-11-22T10:38:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:49 crc kubenswrapper[4772]: E1122 10:38:49.180826 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"902c1d4e-ffb4-4b1b-add7-8f7170ab4bc7\\\",\\\"systemUUID\\\":\\\"856a4d77-e4e0-4420-9e80-7c5223144311\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:49Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.186155 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.186240 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.186266 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.186293 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.186312 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:49Z","lastTransitionTime":"2025-11-22T10:38:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:49 crc kubenswrapper[4772]: E1122 10:38:49.204615 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"902c1d4e-ffb4-4b1b-add7-8f7170ab4bc7\\\",\\\"systemUUID\\\":\\\"856a4d77-e4e0-4420-9e80-7c5223144311\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:49Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.210325 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.210382 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.210395 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.210418 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.210434 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:49Z","lastTransitionTime":"2025-11-22T10:38:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:49 crc kubenswrapper[4772]: E1122 10:38:49.225263 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"902c1d4e-ffb4-4b1b-add7-8f7170ab4bc7\\\",\\\"systemUUID\\\":\\\"856a4d77-e4e0-4420-9e80-7c5223144311\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:49Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.229494 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.229558 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.229570 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.229588 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.229603 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:49Z","lastTransitionTime":"2025-11-22T10:38:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:49 crc kubenswrapper[4772]: E1122 10:38:49.250174 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"902c1d4e-ffb4-4b1b-add7-8f7170ab4bc7\\\",\\\"systemUUID\\\":\\\"856a4d77-e4e0-4420-9e80-7c5223144311\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:49Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.254952 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.254984 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.254996 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.255017 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.255031 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:49Z","lastTransitionTime":"2025-11-22T10:38:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:49 crc kubenswrapper[4772]: E1122 10:38:49.270942 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"902c1d4e-ffb4-4b1b-add7-8f7170ab4bc7\\\",\\\"systemUUID\\\":\\\"856a4d77-e4e0-4420-9e80-7c5223144311\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:49Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:49 crc kubenswrapper[4772]: E1122 10:38:49.271128 4772 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.273376 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.273433 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.273447 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.273467 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.273481 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:49Z","lastTransitionTime":"2025-11-22T10:38:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.376011 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.376109 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.376123 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.376143 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.376156 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:49Z","lastTransitionTime":"2025-11-22T10:38:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.413293 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.413386 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:38:49 crc kubenswrapper[4772]: E1122 10:38:49.413431 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 10:38:49 crc kubenswrapper[4772]: E1122 10:38:49.413533 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.480198 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.480249 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.480260 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.480278 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.480291 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:49Z","lastTransitionTime":"2025-11-22T10:38:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.583406 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.583471 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.583488 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.583512 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.583530 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:49Z","lastTransitionTime":"2025-11-22T10:38:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.686732 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.686818 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.686845 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.686877 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.686901 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:49Z","lastTransitionTime":"2025-11-22T10:38:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.755464 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mfm49_fd84e05e-cfd6-46d5-bd23-30689addcd8b/ovnkube-controller/1.log" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.756187 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mfm49_fd84e05e-cfd6-46d5-bd23-30689addcd8b/ovnkube-controller/0.log" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.760016 4772 generic.go:334] "Generic (PLEG): container finished" podID="fd84e05e-cfd6-46d5-bd23-30689addcd8b" containerID="f441beed4d46d40f9fb0368468d7aae7721c17a369b5edd3b5946438cb89c724" exitCode=1 Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.760079 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" event={"ID":"fd84e05e-cfd6-46d5-bd23-30689addcd8b","Type":"ContainerDied","Data":"f441beed4d46d40f9fb0368468d7aae7721c17a369b5edd3b5946438cb89c724"} Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.760128 4772 scope.go:117] "RemoveContainer" containerID="b60f034e3e012ea5ce1371b4bfbbda3cd6a2466e7e8b8b4106b5fe5f3a14d146" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.760925 4772 scope.go:117] "RemoveContainer" containerID="f441beed4d46d40f9fb0368468d7aae7721c17a369b5edd3b5946438cb89c724" Nov 22 10:38:49 crc kubenswrapper[4772]: E1122 10:38:49.761177 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-mfm49_openshift-ovn-kubernetes(fd84e05e-cfd6-46d5-bd23-30689addcd8b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" podUID="fd84e05e-cfd6-46d5-bd23-30689addcd8b" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.783687 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e026ddd84cdd62f9fa89b0c173fa464361017d3603d80e2b88a1b04af13487c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:49Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.789516 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.789561 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.789579 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.789602 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.789622 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:49Z","lastTransitionTime":"2025-11-22T10:38:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.809350 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n8qfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0565ed-eb43-43a3-974c-45a23e9615a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73a731bf43107a62df11bd3a033a52f25d32911218822f9c5ac1f3d9b6f718ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98gmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a2e02f93671de603bc005b4f7561a62bb7681d678a3348b837656ae2af54f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98gmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-n8qfx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:49Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.836297 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mbpk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e39748b-4fa5-4a70-8921-dc3dc814f124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c334585be8f67849986e80c7a7ea777340e93f963ba58fa0fb7b36b16a73142b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m5ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mbpk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:49Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.858466 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fvsrl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c89edce7-fac8-4954-b2e9-420f0f2de6a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x76mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x76mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fvsrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:49Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.871012 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:49Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.888280 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd84e05e-cfd6-46d5-bd23-30689addcd8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://405215cb0867af5a98dd2c0948c12867783df5de69f3e68c0c420e94bd41fbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3372362ced27c1e5e6ed3271a6b0fd19c04ce1d12a6c0b45e60b4f19b9d9d9e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7959ee06497c972cc6915e3ab426dc2bba73e58343902084895fcca45fc97a61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dfec56a475b9928ac7803e64d6fd305028a7f8d66647c8d06eeaa12cb73838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da55e361a3ec7dca7a00a9ccac8e72cf43be0ebb8973fcac40e3bf86dac5237e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc7ab7764e2d893152e965392c916dff7876acb81d134cdec2615b253dbee64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f441beed4d46d40f9fb0368468d7aae7721c17a3
69b5edd3b5946438cb89c724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b60f034e3e012ea5ce1371b4bfbbda3cd6a2466e7e8b8b4106b5fe5f3a14d146\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T10:38:47Z\\\",\\\"message\\\":\\\"val\\\\nI1122 10:38:47.184370 6004 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1122 10:38:47.184401 6004 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1122 10:38:47.184414 6004 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1122 10:38:47.184421 6004 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1122 10:38:47.184512 6004 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1122 10:38:47.185345 6004 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1122 10:38:47.185380 6004 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1122 10:38:47.185405 6004 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1122 10:38:47.185415 6004 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1122 10:38:47.185432 6004 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1122 10:38:47.185438 6004 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1122 10:38:47.185455 6004 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1122 10:38:47.185478 6004 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1122 10:38:47.185480 6004 factory.go:656] Stopping watch factory\\\\nI1122 10:38:47.185494 6004 handler.go:208] Removed *v1.Node event handler 7\\\\nI1122 10:38:47.185508 6004 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f441beed4d46d40f9fb0368468d7aae7721c17a369b5edd3b5946438cb89c724\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T10:38:49Z\\\",\\\"message\\\":\\\"e]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.183:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5b85277d-d9b7-4a68-8e4e-2b80594d9347}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1122 10:38:48.813284 6257 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-operator]} name:Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.183:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5b85277d-d9b7-4a68-8e4e-2b80594d9347}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1122 10:38:48.813329 6257 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1122 10:38:48.813440 6257 
controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1122 10:38:48.813477 6257 ovnkube.go:599] Stopped ovnkube\\\\nI1122 10:38:48.813495 6257 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1122 10:38:48.813544 6257 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac96e66a7d3d7899a8196d07614bccdd741145d5dc57c26b30fc4e127c6a94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.1
26.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mfm49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:49Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.891712 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.891750 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.891762 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.891890 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.891908 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:49Z","lastTransitionTime":"2025-11-22T10:38:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.902853 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e05750665701a1973e974ba759b2a0ef54a89b24d63d33a9fce59e9aa7a21ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:49Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.915030 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:49Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.925759 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:49Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.936419 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9qv88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"751566de-1913-4dd6-9054-21febc661c27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902be5c4bb0f9aaca4a21ca0af13b2dc29d682c30e1e2d281beb6e4e15b6aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jckg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9qv88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:49Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.947844 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s4mvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d73fd58d-561a-4b16-9f9d-49ae966edb24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ae06ffdc2c0f4cebcbf28f2997026db4d45bcd0c95776e5a1d94527503041d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s4mvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:49Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.958289 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"888813e4-14b2-4bbc-badf-3fd7c315a740\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ade6ffb5c4bd3c19a2d85f21de1e0f198d6729b45df79233c8db3c73aff066f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7c687d1a3e98c09917692169294f8549b0f1ddeddcc97c073da4d8e5c17e012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1452c164ce569dfa4665a70113fb965905d1974744637904d6bfba2e35446f11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha
256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b331f41571928038bb597f1e94a67d24e726471c1a22082607dd26c11e8ea33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:49Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.970495 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9557fb28-d21d-45a3-b7b9-170936e83f56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a77d132d925d70472ff5456bf4521c0612c319296dd1f3fa31c68d3c84fb347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddce747c72c887bb7dff7503755d81d0e65305d2545f93dcb0d20bd6bf32b495\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187f5633fd42f380d4976965809b538530acef067adc7e02203f95df532dbbfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 10:38:29.060158 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 10:38:29.060179 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 10:38:29.060386 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\"\\\\nI1122 10:38:29.060380 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763807893\\\\\\\\\\\\\\\" (2025-11-22 10:38:12 +0000 UTC to 2025-12-22 10:38:13 +0000 UTC (now=2025-11-22 10:38:29.060343851 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060557 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763807909\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763807908\\\\\\\\\\\\\\\" (2025-11-22 09:38:28 +0000 UTC to 2026-11-22 09:38:28 +0000 UTC (now=2025-11-22 10:38:29.060522446 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060582 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 10:38:29.060604 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 10:38:29.060636 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 10:38:29.062066 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062334 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062385 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 10:38:29.063702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42fc0afe421b18b7212836931c95487807ac0b32f6e87d92e847bd7826aedcd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:49Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.981290 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2386c238-461f-4956-940f-ac3c26eb052e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b3e223067a57a7ae418e1de80dff3c7537e0506e040028c413225f25397f03d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4247d5ab0095ab6c7315fdfcde34f8407d870723059e61c3417c20f80dc06ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwshd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:49Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.994418 4772 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.994456 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.994466 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.994482 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.994497 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:49Z","lastTransitionTime":"2025-11-22T10:38:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:49 crc kubenswrapper[4772]: I1122 10:38:49.995524 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z6xtb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e11c7f86-73db-4015-9fe5-c0b5047c19a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://901e96ef18a53b8231a138e235e8ed145d94f111dc515aaa5e5415c18535e457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d7a31c58b763dd30aa10f250fa654756dbb10166064c3adf6d313cc13b066f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2
c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11d7a31c58b763dd30aa10f250fa654756dbb10166064c3adf6d313cc13b066f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://533b405c883d98c14aca92e6a23bd1743dc0a76ed8256f2b892746195602fb03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://533b405c883d98c14aca92e6a23bd1743dc0a76ed8256f2b892746195602fb03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c508fde56a767c063afbb65f6272695d739d7d0c594efcafd7465397309ef931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c508fde56a767c063afbb65f6272695d739d7d0c594efcafd7465397309ef931\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dabd0e99ffb72fe38680ba04e6dc2cc342f2b21368c0646ac51924fdca817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dabd0e99ffb72fe38680ba04e6dc2cc342f2b21368c0646ac51924fdca817b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfe1b37351de69494e5f42516d1363a1acd97912fd754ddca224b61b7020b0e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfe1b37351de69494e5f42516d1363a1acd97912fd754ddca224b61b7020b0e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8442718fb97d911a608083bbbcee822108ea95c9896e42b0f37fa05c0b8d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df8442718fb97d911a608083bbbcee822108ea95c9896e42b0f37fa05c0b8d3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:45Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z6xtb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:49Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.008981 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af3d235e676f9a66e16a76f30d940c652287b80507d2335b94aaad54d2aae071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b9e216df73a0c6e89da9a01b70ca7393dea61d59aa4237584add3411d3362d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/
ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:50Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.096837 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.096879 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.096891 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.096907 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.096917 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:50Z","lastTransitionTime":"2025-11-22T10:38:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.199263 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.199292 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.199302 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.199319 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.199329 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:50Z","lastTransitionTime":"2025-11-22T10:38:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.301541 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.301575 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.301583 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.301597 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.301609 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:50Z","lastTransitionTime":"2025-11-22T10:38:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.404015 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.404069 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.404081 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.404101 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.404111 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:50Z","lastTransitionTime":"2025-11-22T10:38:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.413204 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.413223 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvsrl" Nov 22 10:38:50 crc kubenswrapper[4772]: E1122 10:38:50.413302 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 10:38:50 crc kubenswrapper[4772]: E1122 10:38:50.413397 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvsrl" podUID="c89edce7-fac8-4954-b2e9-420f0f2de6a8" Nov 22 10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.413884 4772 scope.go:117] "RemoveContainer" containerID="da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216" Nov 22 10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.507335 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.507371 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.507380 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.507397 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.507407 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:50Z","lastTransitionTime":"2025-11-22T10:38:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.609822 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.609865 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.609877 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.609897 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.609910 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:50Z","lastTransitionTime":"2025-11-22T10:38:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.653346 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 22 10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.664515 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 22 10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.671098 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9qv88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"751566de-1913-4dd6-9054-21febc661c27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902be5c4bb0f9aaca4a21ca0af13b2dc29d682c30e1e2d281beb6e4e15b6aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jckg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9qv88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:50Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.685636 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s4mvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d73fd58d-561a-4b16-9f9d-49ae966edb24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ae06ffdc2c0f4cebcbf28f2997026db4d45bcd0c95776e5a1d94527503041d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s4mvm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:50Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.699999 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"888813e4-14b2-4bbc-badf-3fd7c315a740\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ade6ffb5c4bd3c19a2d85f21de1e0f198d6729b45df79233c8db3c73aff066f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7c687d1a3e98c09917692169294f8549b0f1ddeddcc97c073da4d8e5c17e012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1452c164ce569dfa4665a70113fb965905d1974744637904d6bfba2e35446f11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b331f41571928038bb597f1e94a67d24e726471c1a22082607dd26c11e8ea33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:50Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.713678 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.713767 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.713792 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.713822 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.713841 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:50Z","lastTransitionTime":"2025-11-22T10:38:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.714983 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9557fb28-d21d-45a3-b7b9-170936e83f56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a77d132d925d70472ff5456bf4521c0612c319296dd1f3fa31c68d3c84fb347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddce747c72c887bb7dff7503755d81d0e65305d2545f93dcb0d20bd6bf32b495\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187f5633fd42f380d4976965809b538530acef067adc7e02203f95df532dbbfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 10:38:29.060158 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 10:38:29.060179 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 10:38:29.060386 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\"\\\\nI1122 10:38:29.060380 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763807893\\\\\\\\\\\\\\\" (2025-11-22 10:38:12 +0000 UTC to 2025-12-22 10:38:13 +0000 UTC (now=2025-11-22 10:38:29.060343851 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060557 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763807909\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763807908\\\\\\\\\\\\\\\" (2025-11-22 09:38:28 +0000 UTC to 2026-11-22 09:38:28 +0000 UTC (now=2025-11-22 10:38:29.060522446 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060582 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 10:38:29.060604 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 10:38:29.060636 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 10:38:29.062066 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062334 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062385 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 10:38:29.063702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42fc0afe421b18b7212836931c95487807ac0b32f6e87d92e847bd7826aedcd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:50Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.736384 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e05750665701a1973e974ba759b2a0ef54a89b24d63d33a9fce59e9aa7a21ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:50Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.752295 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:50Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.767280 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Nov 22 10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.768553 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:50Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.770529 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"fab2905e4976fc61f69872e9b37db10f5598fe3f96e0a124d22848215a7b5817"} Nov 22 10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.771258 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.774459 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mfm49_fd84e05e-cfd6-46d5-bd23-30689addcd8b/ovnkube-controller/1.log" Nov 22 10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.778299 4772 scope.go:117] "RemoveContainer" containerID="f441beed4d46d40f9fb0368468d7aae7721c17a369b5edd3b5946438cb89c724" Nov 22 10:38:50 crc kubenswrapper[4772]: E1122 10:38:50.778572 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-mfm49_openshift-ovn-kubernetes(fd84e05e-cfd6-46d5-bd23-30689addcd8b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" podUID="fd84e05e-cfd6-46d5-bd23-30689addcd8b" Nov 22 10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.784491 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2386c238-461f-4956-940f-ac3c26eb052e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b3e223067a57a7ae418e1de80dff3c7537e0506e040028c413225f25397f03d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4247d5ab0095ab6c7315fdfcde34f8407d870723059e61c3417c20f80dc06ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwshd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:50Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.801384 4772 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z6xtb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e11c7f86-73db-4015-9fe5-c0b5047c19a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://901e96ef18a53b8231a138e235e8ed145d94f111dc515aaa5e5415c18535e457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d7a31c58b763dd30aa10f250fa654756dbb10166064c3adf6d313cc13b066f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11d7a31c58b763dd30aa10f250fa654756dbb10166064c3adf6d313cc13b066f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://533b405c883d98c14aca92e6a23bd1743dc0a76ed8256f2b892746195602fb03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://533b405c883d98c14aca92e6a23bd1743dc0a76ed8256f2b892746195602fb03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c508fde56a767c063afbb65f6272695d739d7d0c594efcafd7465397309ef931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c508fde56a767c063afbb65f6272695d739d7d0c594efcafd7465397309ef931\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dabd0e99ffb72fe38680ba04e6dc2cc342f2b21368c0646ac51924fdca817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dabd0e99ffb72fe38680ba04e6dc2cc342f2b21368c0646ac51924fdca817b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfe1b37351de69494e5f42516d1363a1acd97912fd754ddca224b61b7020b0e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfe1b37351de69494e5f42516d1363a1acd97912fd754ddca224b61b7020b0e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8442718fb97d911a608083bbbcee822108ea95c9896e42b0f37fa05c0b8d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df8442718fb97d911a608083bbbcee822108ea95c9896e42b0f37fa05c0b8d3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z6xtb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:50Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.817101 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af3d235e676f9a66e16a76f30d940c652287b80507d2335b94aaad54d2aae071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b9e216df73a0c6e89da9a01b70ca7393dea61d59aa4237584add3411d3362d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:50Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.820460 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.820500 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.820513 4772 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 22 10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.820532 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.820544 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:50Z","lastTransitionTime":"2025-11-22T10:38:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.834582 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e026ddd84cdd62f9fa89b0c173fa464361017d3603d80e2b88a1b04af13487c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:50Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.850754 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n8qfx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0565ed-eb43-43a3-974c-45a23e9615a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73a731bf43107a62df11bd3a033a52f25d32911218822f9c5ac1f3d9b6f718ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98gmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a2e02f93671de603bc005b4f7561a62bb7681d678a3348b837656ae2af54f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98gmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-n8qfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:50Z is after 2025-08-24T17:21:41Z" Nov 22 
10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.864472 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:50Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.886397 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd84e05e-cfd6-46d5-bd23-30689addcd8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://405215cb0867af5a98dd2c0948c12867783df5de69f3e68c0c420e94bd41fbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3372362ced27c1e5e6ed3271a6b0fd19c04ce1d12a6c0b45e60b4f19b9d9d9e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7959ee06497c972cc6915e3ab426dc2bba73e58343902084895fcca45fc97a61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dfec56a475b9928ac7803e64d6fd305028a7f8d66647c8d06eeaa12cb73838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da55e361a3ec7dca7a00a9ccac8e72cf43be0ebb8973fcac40e3bf86dac5237e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc7ab7764e2d893152e965392c916dff7876acb81d134cdec2615b253dbee64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f441beed4d46d40f9fb0368468d7aae7721c17a369b5edd3b5946438cb89c724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b60f034e3e012ea5ce1371b4bfbbda3cd6a2466e7e8b8b4106b5fe5f3a14d146\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T10:38:47Z\\\",\\\"message\\\":\\\"val\\\\nI1122 10:38:47.184370 6004 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1122 10:38:47.184401 6004 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1122 10:38:47.184414 6004 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1122 10:38:47.184421 6004 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1122 10:38:47.184512 6004 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1122 10:38:47.185345 6004 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1122 10:38:47.185380 6004 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1122 10:38:47.185405 6004 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1122 10:38:47.185415 6004 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1122 10:38:47.185432 6004 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1122 10:38:47.185438 6004 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1122 10:38:47.185455 6004 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1122 10:38:47.185478 6004 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1122 10:38:47.185480 6004 factory.go:656] Stopping watch factory\\\\nI1122 10:38:47.185494 6004 handler.go:208] Removed *v1.Node event handler 7\\\\nI1122 10:38:47.185508 6004 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f441beed4d46d40f9fb0368468d7aae7721c17a369b5edd3b5946438cb89c724\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T10:38:49Z\\\",\\\"message\\\":\\\"e]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.183:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5b85277d-d9b7-4a68-8e4e-2b80594d9347}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1122 10:38:48.813284 6257 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-operator]} name:Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.183:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == 
{5b85277d-d9b7-4a68-8e4e-2b80594d9347}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1122 10:38:48.813329 6257 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1122 10:38:48.813440 6257 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1122 10:38:48.813477 6257 ovnkube.go:599] Stopped ovnkube\\\\nI1122 10:38:48.813495 6257 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1122 10:38:48.813544 6257 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac96e66a7d3d7899a8196d07614bccdd741145d5dc57c26b30fc4e127c6a94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"moun
tPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mfm49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:50Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.898580 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mbpk7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e39748b-4fa5-4a70-8921-dc3dc814f124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c334585be8f67849986e80c7a7ea777340e93f963ba58fa0fb7b36b16a73142b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m5ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mbpk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:50Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.911949 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fvsrl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c89edce7-fac8-4954-b2e9-420f0f2de6a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x76mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x76mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fvsrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:50Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.924354 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.924400 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.924410 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.924428 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.924437 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:50Z","lastTransitionTime":"2025-11-22T10:38:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.925664 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fvsrl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c89edce7-fac8-4954-b2e9-420f0f2de6a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x76mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x76mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fvsrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:50Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.938997 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:50Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.959612 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd84e05e-cfd6-46d5-bd23-30689addcd8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://405215cb0867af5a98dd2c0948c12867783df5de69f3e68c0c420e94bd41fbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3372362ced27c1e5e6ed3271a6b0fd19c04ce1d12a6c0b45e60b4f19b9d9d9e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7959ee06497c972cc6915e3ab426dc2bba73e58343902084895fcca45fc97a61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dfec56a475b9928ac7803e64d6fd305028a7f8d66647c8d06eeaa12cb73838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da55e361a3ec7dca7a00a9ccac8e72cf43be0ebb8973fcac40e3bf86dac5237e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc7ab7764e2d893152e965392c916dff7876acb81d134cdec2615b253dbee64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f441beed4d46d40f9fb0368468d7aae7721c17a3
69b5edd3b5946438cb89c724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f441beed4d46d40f9fb0368468d7aae7721c17a369b5edd3b5946438cb89c724\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T10:38:49Z\\\",\\\"message\\\":\\\"e]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.183:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5b85277d-d9b7-4a68-8e4e-2b80594d9347}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1122 10:38:48.813284 6257 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-operator]} name:Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.183:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5b85277d-d9b7-4a68-8e4e-2b80594d9347}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1122 10:38:48.813329 6257 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1122 10:38:48.813440 6257 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1122 10:38:48.813477 6257 ovnkube.go:599] Stopped ovnkube\\\\nI1122 10:38:48.813495 6257 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1122 10:38:48.813544 6257 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mfm49_openshift-ovn-kubernetes(fd84e05e-cfd6-46d5-bd23-30689addcd8b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac96e66a7d3d7899a8196d07614bccdd741145d5dc57c26b30fc4e127c6a94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mfm49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:50Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.970908 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mbpk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e39748b-4fa5-4a70-8921-dc3dc814f124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c334585be8f67849986e80c7a7ea777340e93f963ba58fa0fb7b36b16a73142b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m5ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mbpk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:50Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.984629 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e05750665701a1973e974ba759b2a0ef54a89b24d63d33a9fce59e9aa7a21ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:50Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:50 crc kubenswrapper[4772]: I1122 10:38:50.998623 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:50Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.011471 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:51Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.023038 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9qv88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"751566de-1913-4dd6-9054-21febc661c27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902be5c4bb0f9aaca4a21ca0af13b2dc29d682c30e1e2d281beb6e4e15b6aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jckg9\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9qv88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:51Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.027148 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.027220 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.027244 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.027273 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.027291 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:51Z","lastTransitionTime":"2025-11-22T10:38:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.040380 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s4mvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d73fd58d-561a-4b16-9f9d-49ae966edb24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ae06ffdc2c0f4cebcbf28f2997026db4d45bcd0c95776e5a1d94527503041d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s4mvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:51Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.055242 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"888813e4-14b2-4bbc-badf-3fd7c315a740\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ade6ffb5c4bd3c19a2d85f21de1e0f198d6729b45df79233c8db3c73aff066f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7c687d1a3e98c09917692169294f8549b0f1ddeddcc97c073da4d8e5c17e012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1452c164ce569dfa4665a70113fb965905d1974744637904d6bfba2e35446f11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-oper
ator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b331f41571928038bb597f1e94a67d24e726471c1a22082607dd26c11e8ea33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:51Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.071115 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9557fb28-d21d-45a3-b7b9-170936e83f56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a77d132d925d70472ff5456bf4521c0612c319296dd1f3fa31c68d3c84fb347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddce747c72c887bb7dff7503755d81d0e65305d2545f93dcb0d20bd6bf32b495\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187f5633fd42f380d4976965809b538530acef067adc7e02203f95df532dbbfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab2905e4976fc61f69872e9b37db10f5598fe3f96e0a124d22848215a7b5817\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 10:38:29.060158 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 10:38:29.060179 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 10:38:29.060386 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\"\\\\nI1122 10:38:29.060380 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763807893\\\\\\\\\\\\\\\" (2025-11-22 10:38:12 +0000 UTC to 2025-12-22 10:38:13 +0000 UTC (now=2025-11-22 10:38:29.060343851 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060557 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763807909\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763807908\\\\\\\\\\\\\\\" (2025-11-22 09:38:28 +0000 UTC to 2026-11-22 09:38:28 +0000 UTC (now=2025-11-22 10:38:29.060522446 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060582 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 10:38:29.060604 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 10:38:29.060636 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 10:38:29.062066 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062334 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062385 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 10:38:29.063702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42fc0afe421b18b7212836931c95487807ac0b32f6e87d92e847bd7826aedcd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:51Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.085493 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd883a9c-bd98-4ce0-9256-710c2311012f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab209a1a1db8448083e4994bbd6c236d67ddfd2ba6eff2bf3c05150f600ad698\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f447b1a4882759c19fd69e9acddae280d63009c5fc7d21b368b470160c53250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecffb7596767673ba91dbbee96f3fa4b32109c5d5145176e86618860187077de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92229718d5b39cbd9473102e7e569d8370d38e945387ff3f48ee9f4077d8d1fd\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92229718d5b39cbd9473102e7e569d8370d38e945387ff3f48ee9f4077d8d1fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:51Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.099185 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2386c238-461f-4956-940f-ac3c26eb052e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b3e223067a57a7ae418e1de80dff3c7537e0506e040028c413225f25397f03d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4247d5ab0095ab6c7315fdfcde34f8407d870723059e61c3417c20f80dc06ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwshd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:51Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.116316 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z6xtb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e11c7f86-73db-4015-9fe5-c0b5047c19a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://901e96ef18a53b8231a138e235e8ed145d94f111dc515aaa5e5415c18535e457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d7a31c58b763dd30aa10f250fa654756dbb10166064c
3adf6d313cc13b066f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11d7a31c58b763dd30aa10f250fa654756dbb10166064c3adf6d313cc13b066f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://533b405c883d98c14aca92e6a23bd1743dc0a76ed8256f2b892746195602fb03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://533b405c883d98c14aca92e6a23bd1743dc0a76ed8256f2b892746195602fb03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c508fde56a767c063afbb65f6272695d739d7d0c594efcafd7465397309ef931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c508fde56a767c063afbb65f6272695d739d7d0c594efcafd7465397309ef931\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\
"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dabd0e99ffb72fe38680ba04e6dc2cc342f2b21368c0646ac51924fdca817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dabd0e99ffb72fe38680ba04e6dc2cc342f2b21368c0646ac51924fdca817b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfe1b37351de69494e5f42516d1363a1acd97912fd754ddca224b61b7020b0e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfe1b37351de69494e5f42516d1363a1acd97912fd754ddca224b61b7020b0e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8442718fb97d911a608083bbbcee822108ea95c9896e42b0f37fa05c0b8d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"
terminated\\\":{\\\"containerID\\\":\\\"cri-o://df8442718fb97d911a608083bbbcee822108ea95c9896e42b0f37fa05c0b8d3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z6xtb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:51Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.129523 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af3d235e676f9a66e16a76f30d940c652287b80507d2335b94aaad54d2aae071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b9e216df73a0c6e89da9a01b70ca7393dea61d59aa4237584add3411d3362d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:51Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.130397 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.130446 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.130457 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.130483 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.130497 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:51Z","lastTransitionTime":"2025-11-22T10:38:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.144450 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n8qfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0565ed-eb43-43a3-974c-45a23e9615a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73a731bf43107a62df11bd3a033a52f25d32911218822f9c5ac1f3d9b6f718ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98gmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a2e02f93671de603bc005b4f7561a62bb7681d678a3348b837656ae2af54f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98gmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-n8qfx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:51Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.160383 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e026ddd84cdd62f9fa89b0c173fa464361017d3603d80e2b88a1b04af13487c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:51Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.233030 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.233101 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.233136 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.233151 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.233161 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:51Z","lastTransitionTime":"2025-11-22T10:38:51Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.342606 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.342644 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.342656 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.342674 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.342686 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:51Z","lastTransitionTime":"2025-11-22T10:38:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.413484 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:38:51 crc kubenswrapper[4772]: E1122 10:38:51.413618 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.413710 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 10:38:51 crc kubenswrapper[4772]: E1122 10:38:51.413917 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.429810 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:51Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.445367 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.445416 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.445427 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.445444 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.445457 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:51Z","lastTransitionTime":"2025-11-22T10:38:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.452867 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd84e05e-cfd6-46d5-bd23-30689addcd8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://405215cb0867af5a98dd2c0948c12867783df5de69f3e68c0c420e94bd41fbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3372362ced27c1e5e6ed3271a6b0fd19c04ce1d12a6c0b45e60b4f19b9d9d9e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk
cfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7959ee06497c972cc6915e3ab426dc2bba73e58343902084895fcca45fc97a61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dfec56a475b9928ac7803e64d6fd305028a7f8d66647c8d06eeaa12cb73838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da55e361a3ec7dca7a00a9ccac8e72cf43be0ebb8973fcac40e3bf86dac5237e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc7ab7764e2d893152e965392c916dff7876acb81d134cdec2615b253dbee64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f441beed4d46d40f9fb0368468d7aae7721c17a369b5edd3b5946438cb89c724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f441beed4d46d40f9fb0368468d7aae7721c17a369b5edd3b5946438cb89c724\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T10:38:49Z\\\",\\\"message\\\":\\\"e]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.183:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5b85277d-d9b7-4a68-8e4e-2b80594d9347}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1122 10:38:48.813284 6257 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-operator]} name:Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.183:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5b85277d-d9b7-4a68-8e4e-2b80594d9347}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1122 10:38:48.813329 6257 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1122 10:38:48.813440 6257 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1122 10:38:48.813477 6257 ovnkube.go:599] Stopped ovnkube\\\\nI1122 10:38:48.813495 6257 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1122 10:38:48.813544 6257 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-mfm49_openshift-ovn-kubernetes(fd84e05e-cfd6-46d5-bd23-30689addcd8b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac96e66a7d3d7899a8196d07614bccdd741145d5dc57c26b30fc4e127c6a94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"r
ecursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mfm49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:51Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.466722 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mbpk7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e39748b-4fa5-4a70-8921-dc3dc814f124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c334585be8f67849986e80c7a7ea777340e93f963ba58fa0fb7b36b16a73142b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m5ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mbpk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:51Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.479668 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fvsrl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c89edce7-fac8-4954-b2e9-420f0f2de6a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x76mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x76mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fvsrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:51Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.493966 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:51Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.507720 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:51Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.521721 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9qv88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"751566de-1913-4dd6-9054-21febc661c27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902be5c4bb0f9aaca4a21ca0af13b2dc29d682c30e1e2d281beb6e4e15b6aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jckg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9qv88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:51Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.535684 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s4mvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d73fd58d-561a-4b16-9f9d-49ae966edb24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ae06ffdc2c0f4cebcbf28f2997026db4d45bcd0c95776e5a1d94527503041d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s4mvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:51Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.548224 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.548275 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.548286 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.548299 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.548308 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:51Z","lastTransitionTime":"2025-11-22T10:38:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.551419 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"888813e4-14b2-4bbc-badf-3fd7c315a740\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ade6ffb5c4bd3c19a2d85f21de1e0f198d6729b45df79233c8db3c73aff066f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7c687d1a3e98c09917692169294f8549b0f1ddeddcc97c073da4d8e5c17e012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1452c164ce569dfa4665a70113fb965905d1974744637904d6bfba2e35446f11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b331f41571928038bb597f1e94a67d24e726471c1a22082607dd26c11e8ea33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:51Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.571502 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9557fb28-d21d-45a3-b7b9-170936e83f56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a77d132d925d70472ff5456bf4521c0612c319296dd1f3fa31c68d3c84fb347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddce747c72c887bb7dff7503755d81d0e65305d2545f93dcb0d20bd6bf32b495\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187f5633fd42f380d4976965809b538530acef067adc7e02203f95df532dbbfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab2905e4976fc61f69872e9b37db10f5598fe3f96e0a124d22848215a7b5817\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 10:38:29.060158 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 10:38:29.060179 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 10:38:29.060386 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\"\\\\nI1122 10:38:29.060380 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763807893\\\\\\\\\\\\\\\" (2025-11-22 10:38:12 +0000 UTC to 2025-12-22 10:38:13 +0000 UTC (now=2025-11-22 10:38:29.060343851 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060557 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763807909\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763807908\\\\\\\\\\\\\\\" (2025-11-22 09:38:28 +0000 UTC to 2026-11-22 09:38:28 +0000 UTC (now=2025-11-22 10:38:29.060522446 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060582 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 10:38:29.060604 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 10:38:29.060636 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 10:38:29.062066 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062334 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062385 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 10:38:29.063702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42fc0afe421b18b7212836931c95487807ac0b32f6e87d92e847bd7826aedcd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:51Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.584831 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd883a9c-bd98-4ce0-9256-710c2311012f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab209a1a1db8448083e4994bbd6c236d67ddfd2ba6eff2bf3c05150f600ad698\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f447b1a4882759c19fd69e9acddae280d63009c5fc7d21b368b470160c53250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecffb7596767673ba91dbbee96f3fa4b32109c5d5145176e86618860187077de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92229718d5b39cbd9473102e7e569d8370d38e945387ff3f48ee9f4077d8d1fd\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92229718d5b39cbd9473102e7e569d8370d38e945387ff3f48ee9f4077d8d1fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:51Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.599572 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e05750665701a1973e974ba759b2a0ef54a89b24d63d33a9fce59e9aa7a21ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:51Z is after 
2025-08-24T17:21:41Z" Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.610866 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2386c238-461f-4956-940f-ac3c26eb052e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b3e223067a57a7ae418e1de80dff3c7537e0506e040028c413225f25397f03d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4247d5ab0095ab6c7315fdfcde34f8407d870723059e61c3417c20f80dc06ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwshd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:51Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.626062 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z6xtb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e11c7f86-73db-4015-9fe5-c0b5047c19a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://901e96ef18a53b8231a138e235e8ed145d94f111dc515aaa5e5415c18535e457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d7a31c58b763dd30aa10f250fa654756dbb10166064c3adf6d313cc13b066f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11d7a31c58b763dd30aa10f250fa654756dbb10166064c3adf6d313cc13b066f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://533b405c883d98c14aca92e6a23bd1743dc0a76ed8256f2b892746195602fb03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://533b405c883d98c14aca92e6a23bd1743dc0a76ed8256f2b892746195602fb03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c508fde56a767c063afbb65f6272695d739d7d0c594efcafd7465397309ef931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c508fde56a767c063afbb65f6272695d739d7d0c594efcafd7465397309ef931\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dabd0e99ffb72fe38680ba04e6dc2cc342f2b21368c0646ac51924fdca817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dabd0e99ffb72fe38680ba04e6dc2cc342f2b21368c0646ac51924fdca817b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:42Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfe1b37351de69494e5f42516d1363a1acd97912fd754ddca224b61b7020b0e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfe1b37351de69494e5f42516d1363a1acd97912fd754ddca224b61b7020b0e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8442718fb97d911a608083bbbcee822108ea95c9896e42b0f37fa05c0b8d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df8442718fb97d911a608083bbbcee822108ea95c9896e42b0f37fa05c0b8d3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z6xtb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:51Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:51 crc 
kubenswrapper[4772]: I1122 10:38:51.643329 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af3d235e676f9a66e16a76f30d940c652287b80507d2335b94aaad54d2aae071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b9e216df73a0c6e89da9a01b70ca7393dea61d59aa4237584add3411d3362d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:51Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.651102 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.651173 4772 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.651185 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.651205 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.651217 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:51Z","lastTransitionTime":"2025-11-22T10:38:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.656777 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e026ddd84cdd62f9fa89b0c173fa464361017d3603d80e2b88a1b04af13487c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:51Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.668795 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n8qfx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0565ed-eb43-43a3-974c-45a23e9615a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73a731bf43107a62df11bd3a033a52f25d32911218822f9c5ac1f3d9b6f718ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98gmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a2e02f93671de603bc005b4f7561a62bb7681d678a3348b837656ae2af54f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98gmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-n8qfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:51Z is after 2025-08-24T17:21:41Z" Nov 22 
10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.753789 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.753864 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.753879 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.753948 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.753964 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:51Z","lastTransitionTime":"2025-11-22T10:38:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.856454 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.856520 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.856536 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.856569 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.856583 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:51Z","lastTransitionTime":"2025-11-22T10:38:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.959204 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.959271 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.959283 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.959303 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:51 crc kubenswrapper[4772]: I1122 10:38:51.959316 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:51Z","lastTransitionTime":"2025-11-22T10:38:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:52 crc kubenswrapper[4772]: I1122 10:38:52.062382 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:52 crc kubenswrapper[4772]: I1122 10:38:52.062423 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:52 crc kubenswrapper[4772]: I1122 10:38:52.062432 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:52 crc kubenswrapper[4772]: I1122 10:38:52.062449 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:52 crc kubenswrapper[4772]: I1122 10:38:52.062460 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:52Z","lastTransitionTime":"2025-11-22T10:38:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:52 crc kubenswrapper[4772]: I1122 10:38:52.165708 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:52 crc kubenswrapper[4772]: I1122 10:38:52.165763 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:52 crc kubenswrapper[4772]: I1122 10:38:52.165777 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:52 crc kubenswrapper[4772]: I1122 10:38:52.165795 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:52 crc kubenswrapper[4772]: I1122 10:38:52.165806 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:52Z","lastTransitionTime":"2025-11-22T10:38:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:52 crc kubenswrapper[4772]: I1122 10:38:52.268158 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:52 crc kubenswrapper[4772]: I1122 10:38:52.268220 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:52 crc kubenswrapper[4772]: I1122 10:38:52.268233 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:52 crc kubenswrapper[4772]: I1122 10:38:52.268253 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:52 crc kubenswrapper[4772]: I1122 10:38:52.268266 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:52Z","lastTransitionTime":"2025-11-22T10:38:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:52 crc kubenswrapper[4772]: I1122 10:38:52.323973 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c89edce7-fac8-4954-b2e9-420f0f2de6a8-metrics-certs\") pod \"network-metrics-daemon-fvsrl\" (UID: \"c89edce7-fac8-4954-b2e9-420f0f2de6a8\") " pod="openshift-multus/network-metrics-daemon-fvsrl" Nov 22 10:38:52 crc kubenswrapper[4772]: E1122 10:38:52.324167 4772 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 10:38:52 crc kubenswrapper[4772]: E1122 10:38:52.324225 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c89edce7-fac8-4954-b2e9-420f0f2de6a8-metrics-certs podName:c89edce7-fac8-4954-b2e9-420f0f2de6a8 nodeName:}" failed. No retries permitted until 2025-11-22 10:39:00.324207385 +0000 UTC m=+60.563651879 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c89edce7-fac8-4954-b2e9-420f0f2de6a8-metrics-certs") pod "network-metrics-daemon-fvsrl" (UID: "c89edce7-fac8-4954-b2e9-420f0f2de6a8") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 10:38:52 crc kubenswrapper[4772]: I1122 10:38:52.370602 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:52 crc kubenswrapper[4772]: I1122 10:38:52.370661 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:52 crc kubenswrapper[4772]: I1122 10:38:52.370674 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:52 crc kubenswrapper[4772]: I1122 10:38:52.370693 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:52 crc kubenswrapper[4772]: I1122 10:38:52.370706 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:52Z","lastTransitionTime":"2025-11-22T10:38:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:52 crc kubenswrapper[4772]: I1122 10:38:52.413467 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvsrl" Nov 22 10:38:52 crc kubenswrapper[4772]: I1122 10:38:52.413492 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 10:38:52 crc kubenswrapper[4772]: E1122 10:38:52.413792 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fvsrl" podUID="c89edce7-fac8-4954-b2e9-420f0f2de6a8" Nov 22 10:38:52 crc kubenswrapper[4772]: E1122 10:38:52.413958 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 10:38:52 crc kubenswrapper[4772]: I1122 10:38:52.473038 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:52 crc kubenswrapper[4772]: I1122 10:38:52.473109 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:52 crc kubenswrapper[4772]: I1122 10:38:52.473122 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:52 crc kubenswrapper[4772]: I1122 10:38:52.473142 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:52 crc kubenswrapper[4772]: I1122 10:38:52.473155 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:52Z","lastTransitionTime":"2025-11-22T10:38:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:52 crc kubenswrapper[4772]: I1122 10:38:52.575881 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:52 crc kubenswrapper[4772]: I1122 10:38:52.576209 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:52 crc kubenswrapper[4772]: I1122 10:38:52.576279 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:52 crc kubenswrapper[4772]: I1122 10:38:52.576357 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:52 crc kubenswrapper[4772]: I1122 10:38:52.576432 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:52Z","lastTransitionTime":"2025-11-22T10:38:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:52 crc kubenswrapper[4772]: I1122 10:38:52.679393 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:52 crc kubenswrapper[4772]: I1122 10:38:52.679479 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:52 crc kubenswrapper[4772]: I1122 10:38:52.679501 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:52 crc kubenswrapper[4772]: I1122 10:38:52.679534 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:52 crc kubenswrapper[4772]: I1122 10:38:52.679557 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:52Z","lastTransitionTime":"2025-11-22T10:38:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:52 crc kubenswrapper[4772]: I1122 10:38:52.782442 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:52 crc kubenswrapper[4772]: I1122 10:38:52.782478 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:52 crc kubenswrapper[4772]: I1122 10:38:52.782487 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:52 crc kubenswrapper[4772]: I1122 10:38:52.782500 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:52 crc kubenswrapper[4772]: I1122 10:38:52.782509 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:52Z","lastTransitionTime":"2025-11-22T10:38:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:52 crc kubenswrapper[4772]: I1122 10:38:52.884782 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:52 crc kubenswrapper[4772]: I1122 10:38:52.884834 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:52 crc kubenswrapper[4772]: I1122 10:38:52.884850 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:52 crc kubenswrapper[4772]: I1122 10:38:52.884870 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:52 crc kubenswrapper[4772]: I1122 10:38:52.884884 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:52Z","lastTransitionTime":"2025-11-22T10:38:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:52 crc kubenswrapper[4772]: I1122 10:38:52.987601 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:52 crc kubenswrapper[4772]: I1122 10:38:52.987630 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:52 crc kubenswrapper[4772]: I1122 10:38:52.987639 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:52 crc kubenswrapper[4772]: I1122 10:38:52.987653 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:52 crc kubenswrapper[4772]: I1122 10:38:52.987677 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:52Z","lastTransitionTime":"2025-11-22T10:38:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:53 crc kubenswrapper[4772]: I1122 10:38:53.090400 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:53 crc kubenswrapper[4772]: I1122 10:38:53.090429 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:53 crc kubenswrapper[4772]: I1122 10:38:53.090437 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:53 crc kubenswrapper[4772]: I1122 10:38:53.090452 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:53 crc kubenswrapper[4772]: I1122 10:38:53.090461 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:53Z","lastTransitionTime":"2025-11-22T10:38:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:53 crc kubenswrapper[4772]: I1122 10:38:53.193803 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:53 crc kubenswrapper[4772]: I1122 10:38:53.193847 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:53 crc kubenswrapper[4772]: I1122 10:38:53.194076 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:53 crc kubenswrapper[4772]: I1122 10:38:53.194099 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:53 crc kubenswrapper[4772]: I1122 10:38:53.194108 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:53Z","lastTransitionTime":"2025-11-22T10:38:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:53 crc kubenswrapper[4772]: I1122 10:38:53.296119 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:53 crc kubenswrapper[4772]: I1122 10:38:53.296196 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:53 crc kubenswrapper[4772]: I1122 10:38:53.296208 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:53 crc kubenswrapper[4772]: I1122 10:38:53.296223 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:53 crc kubenswrapper[4772]: I1122 10:38:53.296231 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:53Z","lastTransitionTime":"2025-11-22T10:38:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:53 crc kubenswrapper[4772]: I1122 10:38:53.400002 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:53 crc kubenswrapper[4772]: I1122 10:38:53.400102 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:53 crc kubenswrapper[4772]: I1122 10:38:53.400127 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:53 crc kubenswrapper[4772]: I1122 10:38:53.400155 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:53 crc kubenswrapper[4772]: I1122 10:38:53.400177 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:53Z","lastTransitionTime":"2025-11-22T10:38:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:53 crc kubenswrapper[4772]: I1122 10:38:53.412736 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:38:53 crc kubenswrapper[4772]: I1122 10:38:53.412800 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 10:38:53 crc kubenswrapper[4772]: E1122 10:38:53.412950 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 10:38:53 crc kubenswrapper[4772]: E1122 10:38:53.413150 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 10:38:53 crc kubenswrapper[4772]: I1122 10:38:53.503745 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:53 crc kubenswrapper[4772]: I1122 10:38:53.503804 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:53 crc kubenswrapper[4772]: I1122 10:38:53.503816 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:53 crc kubenswrapper[4772]: I1122 10:38:53.503835 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:53 crc kubenswrapper[4772]: I1122 10:38:53.503848 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:53Z","lastTransitionTime":"2025-11-22T10:38:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:53 crc kubenswrapper[4772]: I1122 10:38:53.607121 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:53 crc kubenswrapper[4772]: I1122 10:38:53.607196 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:53 crc kubenswrapper[4772]: I1122 10:38:53.607215 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:53 crc kubenswrapper[4772]: I1122 10:38:53.607241 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:53 crc kubenswrapper[4772]: I1122 10:38:53.607258 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:53Z","lastTransitionTime":"2025-11-22T10:38:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:53 crc kubenswrapper[4772]: I1122 10:38:53.710291 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:53 crc kubenswrapper[4772]: I1122 10:38:53.710358 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:53 crc kubenswrapper[4772]: I1122 10:38:53.710377 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:53 crc kubenswrapper[4772]: I1122 10:38:53.710407 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:53 crc kubenswrapper[4772]: I1122 10:38:53.710425 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:53Z","lastTransitionTime":"2025-11-22T10:38:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:53 crc kubenswrapper[4772]: I1122 10:38:53.813985 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:53 crc kubenswrapper[4772]: I1122 10:38:53.814067 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:53 crc kubenswrapper[4772]: I1122 10:38:53.814079 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:53 crc kubenswrapper[4772]: I1122 10:38:53.814093 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:53 crc kubenswrapper[4772]: I1122 10:38:53.814105 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:53Z","lastTransitionTime":"2025-11-22T10:38:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:53 crc kubenswrapper[4772]: I1122 10:38:53.917504 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:53 crc kubenswrapper[4772]: I1122 10:38:53.917556 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:53 crc kubenswrapper[4772]: I1122 10:38:53.917576 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:53 crc kubenswrapper[4772]: I1122 10:38:53.917598 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:53 crc kubenswrapper[4772]: I1122 10:38:53.917615 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:53Z","lastTransitionTime":"2025-11-22T10:38:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:54 crc kubenswrapper[4772]: I1122 10:38:54.021187 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:54 crc kubenswrapper[4772]: I1122 10:38:54.021233 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:54 crc kubenswrapper[4772]: I1122 10:38:54.021241 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:54 crc kubenswrapper[4772]: I1122 10:38:54.021255 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:54 crc kubenswrapper[4772]: I1122 10:38:54.021268 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:54Z","lastTransitionTime":"2025-11-22T10:38:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:54 crc kubenswrapper[4772]: I1122 10:38:54.124347 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:54 crc kubenswrapper[4772]: I1122 10:38:54.124392 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:54 crc kubenswrapper[4772]: I1122 10:38:54.124403 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:54 crc kubenswrapper[4772]: I1122 10:38:54.124419 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:54 crc kubenswrapper[4772]: I1122 10:38:54.124436 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:54Z","lastTransitionTime":"2025-11-22T10:38:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:54 crc kubenswrapper[4772]: I1122 10:38:54.227568 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:54 crc kubenswrapper[4772]: I1122 10:38:54.227607 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:54 crc kubenswrapper[4772]: I1122 10:38:54.227616 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:54 crc kubenswrapper[4772]: I1122 10:38:54.227630 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:54 crc kubenswrapper[4772]: I1122 10:38:54.227640 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:54Z","lastTransitionTime":"2025-11-22T10:38:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:54 crc kubenswrapper[4772]: I1122 10:38:54.331162 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:54 crc kubenswrapper[4772]: I1122 10:38:54.331235 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:54 crc kubenswrapper[4772]: I1122 10:38:54.331258 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:54 crc kubenswrapper[4772]: I1122 10:38:54.331291 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:54 crc kubenswrapper[4772]: I1122 10:38:54.331315 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:54Z","lastTransitionTime":"2025-11-22T10:38:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:54 crc kubenswrapper[4772]: I1122 10:38:54.412900 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 10:38:54 crc kubenswrapper[4772]: I1122 10:38:54.412922 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvsrl" Nov 22 10:38:54 crc kubenswrapper[4772]: E1122 10:38:54.413144 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 10:38:54 crc kubenswrapper[4772]: E1122 10:38:54.413183 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fvsrl" podUID="c89edce7-fac8-4954-b2e9-420f0f2de6a8" Nov 22 10:38:54 crc kubenswrapper[4772]: I1122 10:38:54.433623 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:54 crc kubenswrapper[4772]: I1122 10:38:54.433659 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:54 crc kubenswrapper[4772]: I1122 10:38:54.433667 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:54 crc kubenswrapper[4772]: I1122 10:38:54.433682 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:54 crc kubenswrapper[4772]: I1122 10:38:54.433692 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:54Z","lastTransitionTime":"2025-11-22T10:38:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:54 crc kubenswrapper[4772]: I1122 10:38:54.536688 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:54 crc kubenswrapper[4772]: I1122 10:38:54.537093 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:54 crc kubenswrapper[4772]: I1122 10:38:54.537292 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:54 crc kubenswrapper[4772]: I1122 10:38:54.537457 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:54 crc kubenswrapper[4772]: I1122 10:38:54.537580 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:54Z","lastTransitionTime":"2025-11-22T10:38:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:54 crc kubenswrapper[4772]: I1122 10:38:54.641137 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:54 crc kubenswrapper[4772]: I1122 10:38:54.641193 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:54 crc kubenswrapper[4772]: I1122 10:38:54.641208 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:54 crc kubenswrapper[4772]: I1122 10:38:54.641232 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:54 crc kubenswrapper[4772]: I1122 10:38:54.641248 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:54Z","lastTransitionTime":"2025-11-22T10:38:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:54 crc kubenswrapper[4772]: I1122 10:38:54.744559 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:54 crc kubenswrapper[4772]: I1122 10:38:54.744615 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:54 crc kubenswrapper[4772]: I1122 10:38:54.744631 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:54 crc kubenswrapper[4772]: I1122 10:38:54.744655 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:54 crc kubenswrapper[4772]: I1122 10:38:54.744674 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:54Z","lastTransitionTime":"2025-11-22T10:38:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:54 crc kubenswrapper[4772]: I1122 10:38:54.846597 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:54 crc kubenswrapper[4772]: I1122 10:38:54.846636 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:54 crc kubenswrapper[4772]: I1122 10:38:54.846645 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:54 crc kubenswrapper[4772]: I1122 10:38:54.846659 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:54 crc kubenswrapper[4772]: I1122 10:38:54.846670 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:54Z","lastTransitionTime":"2025-11-22T10:38:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:54 crc kubenswrapper[4772]: I1122 10:38:54.949685 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:54 crc kubenswrapper[4772]: I1122 10:38:54.949761 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:54 crc kubenswrapper[4772]: I1122 10:38:54.949779 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:54 crc kubenswrapper[4772]: I1122 10:38:54.949808 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:54 crc kubenswrapper[4772]: I1122 10:38:54.949828 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:54Z","lastTransitionTime":"2025-11-22T10:38:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:55 crc kubenswrapper[4772]: I1122 10:38:55.052706 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:55 crc kubenswrapper[4772]: I1122 10:38:55.052743 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:55 crc kubenswrapper[4772]: I1122 10:38:55.052752 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:55 crc kubenswrapper[4772]: I1122 10:38:55.052767 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:55 crc kubenswrapper[4772]: I1122 10:38:55.052779 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:55Z","lastTransitionTime":"2025-11-22T10:38:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:55 crc kubenswrapper[4772]: I1122 10:38:55.155221 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:55 crc kubenswrapper[4772]: I1122 10:38:55.155253 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:55 crc kubenswrapper[4772]: I1122 10:38:55.155261 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:55 crc kubenswrapper[4772]: I1122 10:38:55.155275 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:55 crc kubenswrapper[4772]: I1122 10:38:55.155284 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:55Z","lastTransitionTime":"2025-11-22T10:38:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:55 crc kubenswrapper[4772]: I1122 10:38:55.258329 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:55 crc kubenswrapper[4772]: I1122 10:38:55.258394 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:55 crc kubenswrapper[4772]: I1122 10:38:55.258405 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:55 crc kubenswrapper[4772]: I1122 10:38:55.258423 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:55 crc kubenswrapper[4772]: I1122 10:38:55.258439 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:55Z","lastTransitionTime":"2025-11-22T10:38:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:55 crc kubenswrapper[4772]: I1122 10:38:55.360771 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:55 crc kubenswrapper[4772]: I1122 10:38:55.360812 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:55 crc kubenswrapper[4772]: I1122 10:38:55.360824 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:55 crc kubenswrapper[4772]: I1122 10:38:55.360843 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:55 crc kubenswrapper[4772]: I1122 10:38:55.360854 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:55Z","lastTransitionTime":"2025-11-22T10:38:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:55 crc kubenswrapper[4772]: I1122 10:38:55.412844 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 10:38:55 crc kubenswrapper[4772]: I1122 10:38:55.412862 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:38:55 crc kubenswrapper[4772]: E1122 10:38:55.413030 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 10:38:55 crc kubenswrapper[4772]: E1122 10:38:55.413171 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 10:38:55 crc kubenswrapper[4772]: I1122 10:38:55.463479 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:55 crc kubenswrapper[4772]: I1122 10:38:55.463525 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:55 crc kubenswrapper[4772]: I1122 10:38:55.463536 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:55 crc kubenswrapper[4772]: I1122 10:38:55.463552 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:55 crc kubenswrapper[4772]: I1122 10:38:55.463561 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:55Z","lastTransitionTime":"2025-11-22T10:38:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:55 crc kubenswrapper[4772]: I1122 10:38:55.565701 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:55 crc kubenswrapper[4772]: I1122 10:38:55.565732 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:55 crc kubenswrapper[4772]: I1122 10:38:55.565741 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:55 crc kubenswrapper[4772]: I1122 10:38:55.565754 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:55 crc kubenswrapper[4772]: I1122 10:38:55.565762 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:55Z","lastTransitionTime":"2025-11-22T10:38:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:55 crc kubenswrapper[4772]: I1122 10:38:55.669035 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:55 crc kubenswrapper[4772]: I1122 10:38:55.669117 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:55 crc kubenswrapper[4772]: I1122 10:38:55.669137 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:55 crc kubenswrapper[4772]: I1122 10:38:55.669160 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:55 crc kubenswrapper[4772]: I1122 10:38:55.669177 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:55Z","lastTransitionTime":"2025-11-22T10:38:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:55 crc kubenswrapper[4772]: I1122 10:38:55.771982 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:55 crc kubenswrapper[4772]: I1122 10:38:55.772026 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:55 crc kubenswrapper[4772]: I1122 10:38:55.772063 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:55 crc kubenswrapper[4772]: I1122 10:38:55.772085 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:55 crc kubenswrapper[4772]: I1122 10:38:55.772101 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:55Z","lastTransitionTime":"2025-11-22T10:38:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:55 crc kubenswrapper[4772]: I1122 10:38:55.875589 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:55 crc kubenswrapper[4772]: I1122 10:38:55.875678 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:55 crc kubenswrapper[4772]: I1122 10:38:55.875703 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:55 crc kubenswrapper[4772]: I1122 10:38:55.875736 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:55 crc kubenswrapper[4772]: I1122 10:38:55.875761 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:55Z","lastTransitionTime":"2025-11-22T10:38:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:55 crc kubenswrapper[4772]: I1122 10:38:55.978765 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:55 crc kubenswrapper[4772]: I1122 10:38:55.978821 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:55 crc kubenswrapper[4772]: I1122 10:38:55.978841 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:55 crc kubenswrapper[4772]: I1122 10:38:55.978863 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:55 crc kubenswrapper[4772]: I1122 10:38:55.978877 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:55Z","lastTransitionTime":"2025-11-22T10:38:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:56 crc kubenswrapper[4772]: I1122 10:38:56.080933 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:56 crc kubenswrapper[4772]: I1122 10:38:56.081000 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:56 crc kubenswrapper[4772]: I1122 10:38:56.081017 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:56 crc kubenswrapper[4772]: I1122 10:38:56.081069 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:56 crc kubenswrapper[4772]: I1122 10:38:56.081087 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:56Z","lastTransitionTime":"2025-11-22T10:38:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:56 crc kubenswrapper[4772]: I1122 10:38:56.182994 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:56 crc kubenswrapper[4772]: I1122 10:38:56.183060 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:56 crc kubenswrapper[4772]: I1122 10:38:56.183070 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:56 crc kubenswrapper[4772]: I1122 10:38:56.183085 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:56 crc kubenswrapper[4772]: I1122 10:38:56.183097 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:56Z","lastTransitionTime":"2025-11-22T10:38:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:56 crc kubenswrapper[4772]: I1122 10:38:56.285422 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:56 crc kubenswrapper[4772]: I1122 10:38:56.285496 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:56 crc kubenswrapper[4772]: I1122 10:38:56.285517 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:56 crc kubenswrapper[4772]: I1122 10:38:56.285552 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:56 crc kubenswrapper[4772]: I1122 10:38:56.285587 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:56Z","lastTransitionTime":"2025-11-22T10:38:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:56 crc kubenswrapper[4772]: I1122 10:38:56.389223 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:56 crc kubenswrapper[4772]: I1122 10:38:56.389286 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:56 crc kubenswrapper[4772]: I1122 10:38:56.389307 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:56 crc kubenswrapper[4772]: I1122 10:38:56.389329 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:56 crc kubenswrapper[4772]: I1122 10:38:56.389342 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:56Z","lastTransitionTime":"2025-11-22T10:38:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:56 crc kubenswrapper[4772]: I1122 10:38:56.413095 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 10:38:56 crc kubenswrapper[4772]: I1122 10:38:56.413213 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvsrl" Nov 22 10:38:56 crc kubenswrapper[4772]: E1122 10:38:56.413280 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 10:38:56 crc kubenswrapper[4772]: E1122 10:38:56.413396 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvsrl" podUID="c89edce7-fac8-4954-b2e9-420f0f2de6a8" Nov 22 10:38:56 crc kubenswrapper[4772]: I1122 10:38:56.491499 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:56 crc kubenswrapper[4772]: I1122 10:38:56.491535 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:56 crc kubenswrapper[4772]: I1122 10:38:56.491546 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:56 crc kubenswrapper[4772]: I1122 10:38:56.491561 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:56 crc kubenswrapper[4772]: I1122 10:38:56.491572 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:56Z","lastTransitionTime":"2025-11-22T10:38:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:56 crc kubenswrapper[4772]: I1122 10:38:56.595033 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:56 crc kubenswrapper[4772]: I1122 10:38:56.595140 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:56 crc kubenswrapper[4772]: I1122 10:38:56.595163 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:56 crc kubenswrapper[4772]: I1122 10:38:56.595190 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:56 crc kubenswrapper[4772]: I1122 10:38:56.595210 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:56Z","lastTransitionTime":"2025-11-22T10:38:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:56 crc kubenswrapper[4772]: I1122 10:38:56.698132 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:56 crc kubenswrapper[4772]: I1122 10:38:56.698237 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:56 crc kubenswrapper[4772]: I1122 10:38:56.698258 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:56 crc kubenswrapper[4772]: I1122 10:38:56.698288 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:56 crc kubenswrapper[4772]: I1122 10:38:56.698310 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:56Z","lastTransitionTime":"2025-11-22T10:38:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:56 crc kubenswrapper[4772]: I1122 10:38:56.800579 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:56 crc kubenswrapper[4772]: I1122 10:38:56.800952 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:56 crc kubenswrapper[4772]: I1122 10:38:56.801170 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:56 crc kubenswrapper[4772]: I1122 10:38:56.801449 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:56 crc kubenswrapper[4772]: I1122 10:38:56.801654 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:56Z","lastTransitionTime":"2025-11-22T10:38:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:56 crc kubenswrapper[4772]: I1122 10:38:56.903909 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:56 crc kubenswrapper[4772]: I1122 10:38:56.903953 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:56 crc kubenswrapper[4772]: I1122 10:38:56.903965 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:56 crc kubenswrapper[4772]: I1122 10:38:56.903980 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:56 crc kubenswrapper[4772]: I1122 10:38:56.903990 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:56Z","lastTransitionTime":"2025-11-22T10:38:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:57 crc kubenswrapper[4772]: I1122 10:38:57.006413 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:57 crc kubenswrapper[4772]: I1122 10:38:57.006484 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:57 crc kubenswrapper[4772]: I1122 10:38:57.006506 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:57 crc kubenswrapper[4772]: I1122 10:38:57.006539 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:57 crc kubenswrapper[4772]: I1122 10:38:57.006562 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:57Z","lastTransitionTime":"2025-11-22T10:38:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:57 crc kubenswrapper[4772]: I1122 10:38:57.110042 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:57 crc kubenswrapper[4772]: I1122 10:38:57.110584 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:57 crc kubenswrapper[4772]: I1122 10:38:57.110608 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:57 crc kubenswrapper[4772]: I1122 10:38:57.110639 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:57 crc kubenswrapper[4772]: I1122 10:38:57.110661 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:57Z","lastTransitionTime":"2025-11-22T10:38:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:57 crc kubenswrapper[4772]: I1122 10:38:57.217190 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:57 crc kubenswrapper[4772]: I1122 10:38:57.217296 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:57 crc kubenswrapper[4772]: I1122 10:38:57.217319 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:57 crc kubenswrapper[4772]: I1122 10:38:57.217348 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:57 crc kubenswrapper[4772]: I1122 10:38:57.217370 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:57Z","lastTransitionTime":"2025-11-22T10:38:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:57 crc kubenswrapper[4772]: I1122 10:38:57.320356 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:57 crc kubenswrapper[4772]: I1122 10:38:57.320426 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:57 crc kubenswrapper[4772]: I1122 10:38:57.320450 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:57 crc kubenswrapper[4772]: I1122 10:38:57.320480 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:57 crc kubenswrapper[4772]: I1122 10:38:57.320503 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:57Z","lastTransitionTime":"2025-11-22T10:38:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:57 crc kubenswrapper[4772]: I1122 10:38:57.413360 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 10:38:57 crc kubenswrapper[4772]: I1122 10:38:57.413435 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:38:57 crc kubenswrapper[4772]: E1122 10:38:57.413556 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 10:38:57 crc kubenswrapper[4772]: E1122 10:38:57.413780 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 10:38:57 crc kubenswrapper[4772]: I1122 10:38:57.422944 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:57 crc kubenswrapper[4772]: I1122 10:38:57.422994 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:57 crc kubenswrapper[4772]: I1122 10:38:57.423011 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:57 crc kubenswrapper[4772]: I1122 10:38:57.423034 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:57 crc kubenswrapper[4772]: I1122 10:38:57.423080 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:57Z","lastTransitionTime":"2025-11-22T10:38:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:57 crc kubenswrapper[4772]: I1122 10:38:57.526148 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:57 crc kubenswrapper[4772]: I1122 10:38:57.526224 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:57 crc kubenswrapper[4772]: I1122 10:38:57.526242 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:57 crc kubenswrapper[4772]: I1122 10:38:57.526261 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:57 crc kubenswrapper[4772]: I1122 10:38:57.526272 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:57Z","lastTransitionTime":"2025-11-22T10:38:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:57 crc kubenswrapper[4772]: I1122 10:38:57.628577 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:57 crc kubenswrapper[4772]: I1122 10:38:57.628614 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:57 crc kubenswrapper[4772]: I1122 10:38:57.628623 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:57 crc kubenswrapper[4772]: I1122 10:38:57.628638 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:57 crc kubenswrapper[4772]: I1122 10:38:57.628647 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:57Z","lastTransitionTime":"2025-11-22T10:38:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:57 crc kubenswrapper[4772]: I1122 10:38:57.732343 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:57 crc kubenswrapper[4772]: I1122 10:38:57.732403 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:57 crc kubenswrapper[4772]: I1122 10:38:57.732425 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:57 crc kubenswrapper[4772]: I1122 10:38:57.732456 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:57 crc kubenswrapper[4772]: I1122 10:38:57.732480 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:57Z","lastTransitionTime":"2025-11-22T10:38:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:57 crc kubenswrapper[4772]: I1122 10:38:57.836385 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:57 crc kubenswrapper[4772]: I1122 10:38:57.836455 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:57 crc kubenswrapper[4772]: I1122 10:38:57.836476 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:57 crc kubenswrapper[4772]: I1122 10:38:57.836503 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:57 crc kubenswrapper[4772]: I1122 10:38:57.836521 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:57Z","lastTransitionTime":"2025-11-22T10:38:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:57 crc kubenswrapper[4772]: I1122 10:38:57.939848 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:57 crc kubenswrapper[4772]: I1122 10:38:57.939904 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:57 crc kubenswrapper[4772]: I1122 10:38:57.939919 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:57 crc kubenswrapper[4772]: I1122 10:38:57.939944 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:57 crc kubenswrapper[4772]: I1122 10:38:57.939962 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:57Z","lastTransitionTime":"2025-11-22T10:38:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:58 crc kubenswrapper[4772]: I1122 10:38:58.044791 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:58 crc kubenswrapper[4772]: I1122 10:38:58.044888 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:58 crc kubenswrapper[4772]: I1122 10:38:58.044914 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:58 crc kubenswrapper[4772]: I1122 10:38:58.044953 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:58 crc kubenswrapper[4772]: I1122 10:38:58.044978 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:58Z","lastTransitionTime":"2025-11-22T10:38:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:58 crc kubenswrapper[4772]: I1122 10:38:58.150665 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:58 crc kubenswrapper[4772]: I1122 10:38:58.150825 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:58 crc kubenswrapper[4772]: I1122 10:38:58.150850 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:58 crc kubenswrapper[4772]: I1122 10:38:58.150926 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:58 crc kubenswrapper[4772]: I1122 10:38:58.150948 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:58Z","lastTransitionTime":"2025-11-22T10:38:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:58 crc kubenswrapper[4772]: I1122 10:38:58.254938 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:58 crc kubenswrapper[4772]: I1122 10:38:58.254983 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:58 crc kubenswrapper[4772]: I1122 10:38:58.254995 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:58 crc kubenswrapper[4772]: I1122 10:38:58.255014 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:58 crc kubenswrapper[4772]: I1122 10:38:58.255024 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:58Z","lastTransitionTime":"2025-11-22T10:38:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:58 crc kubenswrapper[4772]: I1122 10:38:58.358239 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:58 crc kubenswrapper[4772]: I1122 10:38:58.358302 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:58 crc kubenswrapper[4772]: I1122 10:38:58.358312 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:58 crc kubenswrapper[4772]: I1122 10:38:58.358329 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:58 crc kubenswrapper[4772]: I1122 10:38:58.358337 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:58Z","lastTransitionTime":"2025-11-22T10:38:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:58 crc kubenswrapper[4772]: I1122 10:38:58.413160 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvsrl" Nov 22 10:38:58 crc kubenswrapper[4772]: E1122 10:38:58.413316 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvsrl" podUID="c89edce7-fac8-4954-b2e9-420f0f2de6a8" Nov 22 10:38:58 crc kubenswrapper[4772]: I1122 10:38:58.413713 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 10:38:58 crc kubenswrapper[4772]: E1122 10:38:58.413888 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 10:38:58 crc kubenswrapper[4772]: I1122 10:38:58.462907 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:58 crc kubenswrapper[4772]: I1122 10:38:58.462958 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:58 crc kubenswrapper[4772]: I1122 10:38:58.462976 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:58 crc kubenswrapper[4772]: I1122 10:38:58.462997 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:58 crc kubenswrapper[4772]: I1122 10:38:58.463010 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:58Z","lastTransitionTime":"2025-11-22T10:38:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:58 crc kubenswrapper[4772]: I1122 10:38:58.567320 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:58 crc kubenswrapper[4772]: I1122 10:38:58.567550 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:58 crc kubenswrapper[4772]: I1122 10:38:58.567575 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:58 crc kubenswrapper[4772]: I1122 10:38:58.567601 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:58 crc kubenswrapper[4772]: I1122 10:38:58.567656 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:58Z","lastTransitionTime":"2025-11-22T10:38:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:58 crc kubenswrapper[4772]: I1122 10:38:58.671862 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:58 crc kubenswrapper[4772]: I1122 10:38:58.671921 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:58 crc kubenswrapper[4772]: I1122 10:38:58.671941 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:58 crc kubenswrapper[4772]: I1122 10:38:58.671965 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:58 crc kubenswrapper[4772]: I1122 10:38:58.671979 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:58Z","lastTransitionTime":"2025-11-22T10:38:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:58 crc kubenswrapper[4772]: I1122 10:38:58.774836 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:58 crc kubenswrapper[4772]: I1122 10:38:58.774889 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:58 crc kubenswrapper[4772]: I1122 10:38:58.774903 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:58 crc kubenswrapper[4772]: I1122 10:38:58.774925 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:58 crc kubenswrapper[4772]: I1122 10:38:58.774938 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:58Z","lastTransitionTime":"2025-11-22T10:38:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:58 crc kubenswrapper[4772]: I1122 10:38:58.877868 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:58 crc kubenswrapper[4772]: I1122 10:38:58.877915 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:58 crc kubenswrapper[4772]: I1122 10:38:58.877926 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:58 crc kubenswrapper[4772]: I1122 10:38:58.877945 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:58 crc kubenswrapper[4772]: I1122 10:38:58.877956 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:58Z","lastTransitionTime":"2025-11-22T10:38:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:58 crc kubenswrapper[4772]: I1122 10:38:58.981408 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:58 crc kubenswrapper[4772]: I1122 10:38:58.981448 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:58 crc kubenswrapper[4772]: I1122 10:38:58.981457 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:58 crc kubenswrapper[4772]: I1122 10:38:58.981475 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:58 crc kubenswrapper[4772]: I1122 10:38:58.981485 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:58Z","lastTransitionTime":"2025-11-22T10:38:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:59 crc kubenswrapper[4772]: I1122 10:38:59.084884 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:59 crc kubenswrapper[4772]: I1122 10:38:59.084941 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:59 crc kubenswrapper[4772]: I1122 10:38:59.084969 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:59 crc kubenswrapper[4772]: I1122 10:38:59.084993 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:59 crc kubenswrapper[4772]: I1122 10:38:59.085010 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:59Z","lastTransitionTime":"2025-11-22T10:38:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:59 crc kubenswrapper[4772]: I1122 10:38:59.188024 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:59 crc kubenswrapper[4772]: I1122 10:38:59.188096 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:59 crc kubenswrapper[4772]: I1122 10:38:59.188112 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:59 crc kubenswrapper[4772]: I1122 10:38:59.188134 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:59 crc kubenswrapper[4772]: I1122 10:38:59.188149 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:59Z","lastTransitionTime":"2025-11-22T10:38:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:59 crc kubenswrapper[4772]: I1122 10:38:59.290712 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:59 crc kubenswrapper[4772]: I1122 10:38:59.290775 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:59 crc kubenswrapper[4772]: I1122 10:38:59.290787 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:59 crc kubenswrapper[4772]: I1122 10:38:59.290806 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:59 crc kubenswrapper[4772]: I1122 10:38:59.290819 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:59Z","lastTransitionTime":"2025-11-22T10:38:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:59 crc kubenswrapper[4772]: I1122 10:38:59.393671 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:59 crc kubenswrapper[4772]: I1122 10:38:59.393731 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:59 crc kubenswrapper[4772]: I1122 10:38:59.393750 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:59 crc kubenswrapper[4772]: I1122 10:38:59.393777 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:59 crc kubenswrapper[4772]: I1122 10:38:59.393797 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:59Z","lastTransitionTime":"2025-11-22T10:38:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:59 crc kubenswrapper[4772]: I1122 10:38:59.413182 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:38:59 crc kubenswrapper[4772]: E1122 10:38:59.413313 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 10:38:59 crc kubenswrapper[4772]: I1122 10:38:59.413379 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 10:38:59 crc kubenswrapper[4772]: E1122 10:38:59.413525 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 10:38:59 crc kubenswrapper[4772]: I1122 10:38:59.496676 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:59 crc kubenswrapper[4772]: I1122 10:38:59.496745 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:59 crc kubenswrapper[4772]: I1122 10:38:59.496764 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:59 crc kubenswrapper[4772]: I1122 10:38:59.496790 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:59 crc kubenswrapper[4772]: I1122 10:38:59.496807 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:59Z","lastTransitionTime":"2025-11-22T10:38:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:59 crc kubenswrapper[4772]: I1122 10:38:59.502130 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:59 crc kubenswrapper[4772]: I1122 10:38:59.502171 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:59 crc kubenswrapper[4772]: I1122 10:38:59.502182 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:59 crc kubenswrapper[4772]: I1122 10:38:59.502199 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:59 crc kubenswrapper[4772]: I1122 10:38:59.502209 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:59Z","lastTransitionTime":"2025-11-22T10:38:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:59 crc kubenswrapper[4772]: E1122 10:38:59.514444 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"902c1d4e-ffb4-4b1b-add7-8f7170ab4bc7\\\",\\\"systemUUID\\\":\\\"856a4d77-e4e0-4420-9e80-7c5223144311\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:59Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:59 crc kubenswrapper[4772]: I1122 10:38:59.517531 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:59 crc kubenswrapper[4772]: I1122 10:38:59.517559 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 10:38:59 crc kubenswrapper[4772]: I1122 10:38:59.517567 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:59 crc kubenswrapper[4772]: I1122 10:38:59.517580 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:59 crc kubenswrapper[4772]: I1122 10:38:59.517589 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:59Z","lastTransitionTime":"2025-11-22T10:38:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:59 crc kubenswrapper[4772]: E1122 10:38:59.530110 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"902c1d4e-ffb4-4b1b-add7-8f7170ab4bc7\\\",\\\"systemUUID\\\":\\\"856a4d77-e4e0-4420-9e80-7c5223144311\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:59Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:59 crc kubenswrapper[4772]: I1122 10:38:59.536835 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:59 crc kubenswrapper[4772]: I1122 10:38:59.536916 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 10:38:59 crc kubenswrapper[4772]: I1122 10:38:59.537546 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:59 crc kubenswrapper[4772]: I1122 10:38:59.537696 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:59 crc kubenswrapper[4772]: I1122 10:38:59.537722 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:59Z","lastTransitionTime":"2025-11-22T10:38:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:59 crc kubenswrapper[4772]: E1122 10:38:59.552551 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"902c1d4e-ffb4-4b1b-add7-8f7170ab4bc7\\\",\\\"systemUUID\\\":\\\"856a4d77-e4e0-4420-9e80-7c5223144311\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:59Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:59 crc kubenswrapper[4772]: I1122 10:38:59.557172 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:59 crc kubenswrapper[4772]: I1122 10:38:59.557203 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 10:38:59 crc kubenswrapper[4772]: I1122 10:38:59.557212 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:59 crc kubenswrapper[4772]: I1122 10:38:59.557228 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:59 crc kubenswrapper[4772]: I1122 10:38:59.557239 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:59Z","lastTransitionTime":"2025-11-22T10:38:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:59 crc kubenswrapper[4772]: E1122 10:38:59.574297 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"902c1d4e-ffb4-4b1b-add7-8f7170ab4bc7\\\",\\\"systemUUID\\\":\\\"856a4d77-e4e0-4420-9e80-7c5223144311\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:59Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:59 crc kubenswrapper[4772]: I1122 10:38:59.578687 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:59 crc kubenswrapper[4772]: I1122 10:38:59.578739 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 10:38:59 crc kubenswrapper[4772]: I1122 10:38:59.578755 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:59 crc kubenswrapper[4772]: I1122 10:38:59.578774 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:59 crc kubenswrapper[4772]: I1122 10:38:59.578788 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:59Z","lastTransitionTime":"2025-11-22T10:38:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:59 crc kubenswrapper[4772]: E1122 10:38:59.592638 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:38:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"902c1d4e-ffb4-4b1b-add7-8f7170ab4bc7\\\",\\\"systemUUID\\\":\\\"856a4d77-e4e0-4420-9e80-7c5223144311\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:38:59Z is after 2025-08-24T17:21:41Z" Nov 22 10:38:59 crc kubenswrapper[4772]: E1122 10:38:59.592757 4772 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 22 10:38:59 crc kubenswrapper[4772]: I1122 10:38:59.599787 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 22 10:38:59 crc kubenswrapper[4772]: I1122 10:38:59.599833 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:59 crc kubenswrapper[4772]: I1122 10:38:59.599851 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:59 crc kubenswrapper[4772]: I1122 10:38:59.599869 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:59 crc kubenswrapper[4772]: I1122 10:38:59.599911 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:59Z","lastTransitionTime":"2025-11-22T10:38:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:59 crc kubenswrapper[4772]: I1122 10:38:59.702618 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:59 crc kubenswrapper[4772]: I1122 10:38:59.702695 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:59 crc kubenswrapper[4772]: I1122 10:38:59.702714 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:59 crc kubenswrapper[4772]: I1122 10:38:59.702740 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:59 crc kubenswrapper[4772]: I1122 10:38:59.702758 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:59Z","lastTransitionTime":"2025-11-22T10:38:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:38:59 crc kubenswrapper[4772]: I1122 10:38:59.805488 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:59 crc kubenswrapper[4772]: I1122 10:38:59.805620 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:59 crc kubenswrapper[4772]: I1122 10:38:59.805634 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:59 crc kubenswrapper[4772]: I1122 10:38:59.805652 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:59 crc kubenswrapper[4772]: I1122 10:38:59.805666 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:59Z","lastTransitionTime":"2025-11-22T10:38:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:38:59 crc kubenswrapper[4772]: I1122 10:38:59.908139 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:38:59 crc kubenswrapper[4772]: I1122 10:38:59.908186 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:38:59 crc kubenswrapper[4772]: I1122 10:38:59.908199 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:38:59 crc kubenswrapper[4772]: I1122 10:38:59.908219 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:38:59 crc kubenswrapper[4772]: I1122 10:38:59.908232 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:38:59Z","lastTransitionTime":"2025-11-22T10:38:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:00 crc kubenswrapper[4772]: I1122 10:39:00.011431 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:00 crc kubenswrapper[4772]: I1122 10:39:00.011482 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:00 crc kubenswrapper[4772]: I1122 10:39:00.011498 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:00 crc kubenswrapper[4772]: I1122 10:39:00.011523 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:00 crc kubenswrapper[4772]: I1122 10:39:00.011540 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:00Z","lastTransitionTime":"2025-11-22T10:39:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:00 crc kubenswrapper[4772]: I1122 10:39:00.114911 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:00 crc kubenswrapper[4772]: I1122 10:39:00.114971 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:00 crc kubenswrapper[4772]: I1122 10:39:00.114988 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:00 crc kubenswrapper[4772]: I1122 10:39:00.115013 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:00 crc kubenswrapper[4772]: I1122 10:39:00.115031 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:00Z","lastTransitionTime":"2025-11-22T10:39:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:00 crc kubenswrapper[4772]: I1122 10:39:00.217599 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:00 crc kubenswrapper[4772]: I1122 10:39:00.217652 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:00 crc kubenswrapper[4772]: I1122 10:39:00.217668 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:00 crc kubenswrapper[4772]: I1122 10:39:00.217693 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:00 crc kubenswrapper[4772]: I1122 10:39:00.217710 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:00Z","lastTransitionTime":"2025-11-22T10:39:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:00 crc kubenswrapper[4772]: I1122 10:39:00.319614 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:00 crc kubenswrapper[4772]: I1122 10:39:00.319679 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:00 crc kubenswrapper[4772]: I1122 10:39:00.319698 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:00 crc kubenswrapper[4772]: I1122 10:39:00.319724 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:00 crc kubenswrapper[4772]: I1122 10:39:00.319745 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:00Z","lastTransitionTime":"2025-11-22T10:39:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:00 crc kubenswrapper[4772]: I1122 10:39:00.406749 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c89edce7-fac8-4954-b2e9-420f0f2de6a8-metrics-certs\") pod \"network-metrics-daemon-fvsrl\" (UID: \"c89edce7-fac8-4954-b2e9-420f0f2de6a8\") " pod="openshift-multus/network-metrics-daemon-fvsrl" Nov 22 10:39:00 crc kubenswrapper[4772]: E1122 10:39:00.406912 4772 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 10:39:00 crc kubenswrapper[4772]: E1122 10:39:00.406979 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c89edce7-fac8-4954-b2e9-420f0f2de6a8-metrics-certs podName:c89edce7-fac8-4954-b2e9-420f0f2de6a8 nodeName:}" failed. No retries permitted until 2025-11-22 10:39:16.406959421 +0000 UTC m=+76.646403915 (durationBeforeRetry 16s). 
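The 16-second durationBeforeRetry recorded for the metrics-certs mount above is the kubelet's per-volume retry delay; the delays grow across attempts (16s here, 32s for the retries scheduled a moment later), consistent with an exponential backoff. The Go sketch below only illustrates that doubling pattern; the starting delay and the cap are assumptions for the example, not values taken from the kubelet.

```go
package main

import (
	"fmt"
	"time"
)

// Illustrative doubling backoff, matching the pattern visible in the
// "durationBeforeRetry" values in this log (16s, then 32s). The initial
// delay and the upper bound are assumed for the sketch, not kubelet constants.
func main() {
	delay := 500 * time.Millisecond // assumed starting delay
	maxDelay := 2 * time.Minute     // assumed cap
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("attempt %d: wait %s before retrying\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```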
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c89edce7-fac8-4954-b2e9-420f0f2de6a8-metrics-certs") pod "network-metrics-daemon-fvsrl" (UID: "c89edce7-fac8-4954-b2e9-420f0f2de6a8") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 10:39:00 crc kubenswrapper[4772]: I1122 10:39:00.412787 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvsrl" Nov 22 10:39:00 crc kubenswrapper[4772]: I1122 10:39:00.412812 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 10:39:00 crc kubenswrapper[4772]: E1122 10:39:00.413092 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvsrl" podUID="c89edce7-fac8-4954-b2e9-420f0f2de6a8" Nov 22 10:39:00 crc kubenswrapper[4772]: E1122 10:39:00.413168 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 10:39:00 crc kubenswrapper[4772]: I1122 10:39:00.422498 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:00 crc kubenswrapper[4772]: I1122 10:39:00.422531 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:00 crc kubenswrapper[4772]: I1122 10:39:00.422556 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:00 crc kubenswrapper[4772]: I1122 10:39:00.422571 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:00 crc kubenswrapper[4772]: I1122 10:39:00.422585 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:00Z","lastTransitionTime":"2025-11-22T10:39:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:00 crc kubenswrapper[4772]: I1122 10:39:00.526332 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:00 crc kubenswrapper[4772]: I1122 10:39:00.526411 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:00 crc kubenswrapper[4772]: I1122 10:39:00.526427 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:00 crc kubenswrapper[4772]: I1122 10:39:00.526455 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:00 crc kubenswrapper[4772]: I1122 10:39:00.526474 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:00Z","lastTransitionTime":"2025-11-22T10:39:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:00 crc kubenswrapper[4772]: I1122 10:39:00.629515 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:00 crc kubenswrapper[4772]: I1122 10:39:00.629552 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:00 crc kubenswrapper[4772]: I1122 10:39:00.629560 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:00 crc kubenswrapper[4772]: I1122 10:39:00.629572 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:00 crc kubenswrapper[4772]: I1122 10:39:00.629581 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:00Z","lastTransitionTime":"2025-11-22T10:39:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:00 crc kubenswrapper[4772]: I1122 10:39:00.733252 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:00 crc kubenswrapper[4772]: I1122 10:39:00.733309 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:00 crc kubenswrapper[4772]: I1122 10:39:00.733325 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:00 crc kubenswrapper[4772]: I1122 10:39:00.733348 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:00 crc kubenswrapper[4772]: I1122 10:39:00.733365 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:00Z","lastTransitionTime":"2025-11-22T10:39:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:00 crc kubenswrapper[4772]: I1122 10:39:00.835486 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:00 crc kubenswrapper[4772]: I1122 10:39:00.835528 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:00 crc kubenswrapper[4772]: I1122 10:39:00.835537 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:00 crc kubenswrapper[4772]: I1122 10:39:00.835553 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:00 crc kubenswrapper[4772]: I1122 10:39:00.835564 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:00Z","lastTransitionTime":"2025-11-22T10:39:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:00 crc kubenswrapper[4772]: I1122 10:39:00.937903 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:00 crc kubenswrapper[4772]: I1122 10:39:00.938010 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:00 crc kubenswrapper[4772]: I1122 10:39:00.938038 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:00 crc kubenswrapper[4772]: I1122 10:39:00.938117 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:00 crc kubenswrapper[4772]: I1122 10:39:00.938143 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:00Z","lastTransitionTime":"2025-11-22T10:39:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:01 crc kubenswrapper[4772]: I1122 10:39:01.042542 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:01 crc kubenswrapper[4772]: I1122 10:39:01.042612 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:01 crc kubenswrapper[4772]: I1122 10:39:01.042642 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:01 crc kubenswrapper[4772]: I1122 10:39:01.042661 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:01 crc kubenswrapper[4772]: I1122 10:39:01.042680 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:01Z","lastTransitionTime":"2025-11-22T10:39:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:01 crc kubenswrapper[4772]: I1122 10:39:01.144556 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:01 crc kubenswrapper[4772]: I1122 10:39:01.144596 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:01 crc kubenswrapper[4772]: I1122 10:39:01.144607 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:01 crc kubenswrapper[4772]: I1122 10:39:01.144626 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:01 crc kubenswrapper[4772]: I1122 10:39:01.144637 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:01Z","lastTransitionTime":"2025-11-22T10:39:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:01 crc kubenswrapper[4772]: I1122 10:39:01.216406 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:39:01 crc kubenswrapper[4772]: E1122 10:39:01.216609 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:39:33.216580141 +0000 UTC m=+93.456024635 (durationBeforeRetry 32s). 
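The 32-second retry scheduled here is for the CSI unmount of pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8, and the continuation below shows why it keeps failing: kubevirt.io.hostpath-provisioner is not in the kubelet's list of registered CSI drivers. CSI node plugins normally announce themselves through a registration socket under the kubelet's plugin-registration directory, so one quick sanity check is whether such a socket exists for this driver. The directory used in the sketch is the common kubelet default and is an assumption, not something confirmed by this log.

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// Look for a plugin-registration entry for the hostpath provisioner under
// the kubelet's default registration directory (assumed path).
func main() {
	const regDir = "/var/lib/kubelet/plugins_registry"
	const driver = "kubevirt.io.hostpath-provisioner"

	entries, err := os.ReadDir(regDir)
	if err != nil {
		fmt.Printf("cannot read %s: %v\n", regDir, err)
		return
	}
	found := false
	for _, e := range entries {
		if strings.Contains(e.Name(), driver) {
			fmt.Println("found registration entry:", e.Name())
			found = true
		}
	}
	if !found {
		fmt.Println("no registration socket for", driver,
			"- consistent with the 'not found in the list of registered CSI drivers' error")
	}
}
```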
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:39:01 crc kubenswrapper[4772]: I1122 10:39:01.247475 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:01 crc kubenswrapper[4772]: I1122 10:39:01.247514 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:01 crc kubenswrapper[4772]: I1122 10:39:01.247523 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:01 crc kubenswrapper[4772]: I1122 10:39:01.247537 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:01 crc kubenswrapper[4772]: I1122 10:39:01.247546 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:01Z","lastTransitionTime":"2025-11-22T10:39:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:01 crc kubenswrapper[4772]: I1122 10:39:01.317607 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 10:39:01 crc kubenswrapper[4772]: I1122 10:39:01.317712 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:39:01 crc kubenswrapper[4772]: I1122 10:39:01.317764 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 10:39:01 crc kubenswrapper[4772]: I1122 10:39:01.317808 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:39:01 crc kubenswrapper[4772]: E1122 10:39:01.317822 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 10:39:01 crc kubenswrapper[4772]: E1122 10:39:01.317880 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 10:39:01 crc kubenswrapper[4772]: E1122 10:39:01.317898 4772 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 10:39:01 crc kubenswrapper[4772]: E1122 10:39:01.317912 4772 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 10:39:01 crc kubenswrapper[4772]: E1122 10:39:01.317929 4772 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 10:39:01 crc kubenswrapper[4772]: E1122 10:39:01.317968 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-22 10:39:33.317944778 +0000 UTC m=+93.557389282 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 10:39:01 crc kubenswrapper[4772]: E1122 10:39:01.317997 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 10:39:33.317985699 +0000 UTC m=+93.557430293 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 10:39:01 crc kubenswrapper[4772]: E1122 10:39:01.318014 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 10:39:33.318005399 +0000 UTC m=+93.557450043 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 10:39:01 crc kubenswrapper[4772]: E1122 10:39:01.318023 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 10:39:01 crc kubenswrapper[4772]: E1122 10:39:01.318103 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 10:39:01 crc kubenswrapper[4772]: E1122 10:39:01.318125 4772 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 10:39:01 crc kubenswrapper[4772]: E1122 10:39:01.318203 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-22 10:39:33.318177884 +0000 UTC m=+93.557622418 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 10:39:01 crc kubenswrapper[4772]: I1122 10:39:01.349823 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:01 crc kubenswrapper[4772]: I1122 10:39:01.349879 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:01 crc kubenswrapper[4772]: I1122 10:39:01.349895 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:01 crc kubenswrapper[4772]: I1122 10:39:01.349920 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:01 crc kubenswrapper[4772]: I1122 10:39:01.349937 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:01Z","lastTransitionTime":"2025-11-22T10:39:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:01 crc kubenswrapper[4772]: I1122 10:39:01.413244 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 10:39:01 crc kubenswrapper[4772]: I1122 10:39:01.413263 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:39:01 crc kubenswrapper[4772]: E1122 10:39:01.413473 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 10:39:01 crc kubenswrapper[4772]: E1122 10:39:01.413636 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 10:39:01 crc kubenswrapper[4772]: I1122 10:39:01.435752 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af3d235e676f9a66e16a76f30d940c652287b80507d2335b94aaad54d2aae071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b9e216df73a0c6e89da9a01b70ca7393dea61d59aa4237584add3411d3362d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/we
bhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:01Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:01 crc kubenswrapper[4772]: I1122 10:39:01.447902 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e026ddd84cdd62f9fa89b0c173fa464361017d3603d80e2b88a1b04af13487c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:01Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:01 crc kubenswrapper[4772]: I1122 10:39:01.451542 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:01 crc kubenswrapper[4772]: I1122 10:39:01.451575 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:01 crc kubenswrapper[4772]: I1122 10:39:01.451586 4772 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:01 crc kubenswrapper[4772]: I1122 10:39:01.451600 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:01 crc kubenswrapper[4772]: I1122 10:39:01.451610 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:01Z","lastTransitionTime":"2025-11-22T10:39:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:01 crc kubenswrapper[4772]: I1122 10:39:01.458998 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n8qfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0565ed-eb43-43a3-974c-45a23e9615a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73a731bf43107a62df11bd3a033a52f25d32911218822f9c5ac1f3d9b6f718ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98gmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a2e02f93671de603bc005b4f7561a62bb7681d678a3348b837656ae2af54f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"n
ame\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98gmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-n8qfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:01Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:01 crc kubenswrapper[4772]: I1122 10:39:01.467680 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mbpk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e39748b-4fa5-4a70-8921-dc3dc814f124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c334585be8f67849986e80c7a7ea777340e93f963ba58fa0fb7b36b16a73142b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m5ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mbpk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:01Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:01 crc kubenswrapper[4772]: I1122 10:39:01.476025 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fvsrl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c89edce7-fac8-4954-b2e9-420f0f2de6a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x76mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x76mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fvsrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:01Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:01 crc kubenswrapper[4772]: I1122 10:39:01.486726 4772 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:01Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:01 crc kubenswrapper[4772]: I1122 10:39:01.505882 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd84e05e-cfd6-46d5-bd23-30689addcd8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://405215cb0867af5a98dd2c0948c12867783df5de69f3e68c0c420e94bd41fbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3372362ced27c1e5e6ed3271a6b0fd19c04ce1d12a6c0b45e60b4f19b9d9d9e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7959ee06497c972cc6915e3ab426dc2bba73e58343902084895fcca45fc97a61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dfec56a475b9928ac7803e64d6fd305028a7f8d66647c8d06eeaa12cb73838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da55e361a3ec7dca7a00a9ccac8e72cf43be0ebb8973fcac40e3bf86dac5237e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc7ab7764e2d893152e965392c916dff7876acb81d134cdec2615b253dbee64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f441beed4d46d40f9fb0368468d7aae7721c17a3
69b5edd3b5946438cb89c724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f441beed4d46d40f9fb0368468d7aae7721c17a369b5edd3b5946438cb89c724\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T10:38:49Z\\\",\\\"message\\\":\\\"e]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.183:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5b85277d-d9b7-4a68-8e4e-2b80594d9347}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1122 10:38:48.813284 6257 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-operator]} name:Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.183:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5b85277d-d9b7-4a68-8e4e-2b80594d9347}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1122 10:38:48.813329 6257 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1122 10:38:48.813440 6257 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1122 10:38:48.813477 6257 ovnkube.go:599] Stopped ovnkube\\\\nI1122 10:38:48.813495 6257 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1122 10:38:48.813544 6257 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mfm49_openshift-ovn-kubernetes(fd84e05e-cfd6-46d5-bd23-30689addcd8b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac96e66a7d3d7899a8196d07614bccdd741145d5dc57c26b30fc4e127c6a94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mfm49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:01Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:01 crc kubenswrapper[4772]: I1122 10:39:01.515658 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd883a9c-bd98-4ce0-9256-710c2311012f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab209a1a1db8448083e4994bbd6c236d67ddfd2ba6eff2bf3c05150f600ad698\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f447b1a4882759c19fd69e9acddae280d63009c5fc7d21b368b470160c53250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c
97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecffb7596767673ba91dbbee96f3fa4b32109c5d5145176e86618860187077de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92229718d5b39cbd9473102e7e569d8370d38e945387ff3f48ee9f4077d8d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92229718d5b39cbd9473102e7e569d8370d38e945387ff3f48ee9f4077d8d1fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:01Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:01 crc kubenswrapper[4772]: I1122 10:39:01.529757 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e05750665701a1973e974ba759b2a0ef54a89b24d63d33a9fce59e9aa7a21ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:01Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:01 crc kubenswrapper[4772]: I1122 10:39:01.540333 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:01Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:01 crc kubenswrapper[4772]: I1122 10:39:01.553357 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:01Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:01 crc kubenswrapper[4772]: I1122 10:39:01.553738 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:01 crc kubenswrapper[4772]: I1122 10:39:01.553767 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:01 crc kubenswrapper[4772]: I1122 10:39:01.553775 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:01 crc kubenswrapper[4772]: I1122 10:39:01.553807 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:01 crc kubenswrapper[4772]: I1122 10:39:01.553815 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:01Z","lastTransitionTime":"2025-11-22T10:39:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:01 crc kubenswrapper[4772]: I1122 10:39:01.568260 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9qv88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"751566de-1913-4dd6-9054-21febc661c27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902be5c4bb0f9aaca4a21ca0af13b2dc29d682c30e1e2d281beb6e4e15b6aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jckg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9qv88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:01Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:01 crc kubenswrapper[4772]: I1122 10:39:01.581145 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s4mvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d73fd58d-561a-4b16-9f9d-49ae966edb24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ae06ffdc2c0f4cebcbf28f2997026db4d45bcd0c95776e5a1d94527503041d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s4mvm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:01Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:01 crc kubenswrapper[4772]: I1122 10:39:01.592691 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"888813e4-14b2-4bbc-badf-3fd7c315a740\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ade6ffb5c4bd3c19a2d85f21de1e0f198d6729b45df79233c8db3c73aff066f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7c687d1a3e98c09917692169294f8549b0f1ddeddcc97c073da4d8e5c17e012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1452c164ce569dfa4665a70113fb965905d1974744637904d6bfba2e35446f11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b331f41571928038bb597f1e94a67d24e726471c1a22082607dd26c11e8ea33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:01Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:01 crc kubenswrapper[4772]: I1122 10:39:01.607386 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9557fb28-d21d-45a3-b7b9-170936e83f56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a77d132d925d70472ff5456bf4521c0612c319296dd1f3fa31c68d3c84fb347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddce747c72c887bb7dff7503755d81d0e65305d2545f93dcb0d20bd6bf32b495\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187f5633fd42f380d4976965809b538530acef067adc7e02203f95df532dbbfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab2905e4976fc61f69872e9b37db10f5598fe3f96e0a124d22848215a7b5817\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 10:38:29.060158 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 10:38:29.060179 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 10:38:29.060386 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\"\\\\nI1122 10:38:29.060380 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763807893\\\\\\\\\\\\\\\" (2025-11-22 10:38:12 +0000 UTC to 2025-12-22 10:38:13 +0000 UTC (now=2025-11-22 10:38:29.060343851 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060557 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763807909\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763807908\\\\\\\\\\\\\\\" (2025-11-22 09:38:28 +0000 UTC to 2026-11-22 09:38:28 +0000 UTC (now=2025-11-22 10:38:29.060522446 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060582 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 10:38:29.060604 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 10:38:29.060636 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 10:38:29.062066 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062334 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062385 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 10:38:29.063702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42fc0afe421b18b7212836931c95487807ac0b32f6e87d92e847bd7826aedcd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:01Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:01 crc kubenswrapper[4772]: I1122 10:39:01.617219 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2386c238-461f-4956-940f-ac3c26eb052e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b3e223067a57a7ae418e1de80dff3c7537e0506e040028c413225f25397f03d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4247d5ab0095ab6c7315fdfcde34f8407d870723059e61c3417c20f80dc06ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwshd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:01Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:01 crc kubenswrapper[4772]: I1122 10:39:01.628935 4772 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z6xtb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e11c7f86-73db-4015-9fe5-c0b5047c19a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://901e96ef18a53b8231a138e235e8ed145d94f111dc515aaa5e5415c18535e457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d7a31c58b763dd30aa10f250fa654756dbb10166064c3adf6d313cc13b066f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11d7a31c58b763dd30aa10f250fa654756dbb10166064c3adf6d313cc13b066f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://533b405c883d98c14aca92e6a23bd1743dc0a76ed8256f2b892746195602fb03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://533b405c883d98c14aca92e6a23bd1743dc0a76ed8256f2b892746195602fb03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c508fde56a767c063afbb65f6272695d739d7d0c594efcafd7465397309ef931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c508fde56a767c063afbb65f6272695d739d7d0c594efcafd7465397309ef931\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dabd0e99ffb72fe38680ba04e6dc2cc342f2b21368c0646ac51924fdca817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dabd0e99ffb72fe38680ba04e6dc2cc342f2b21368c0646ac51924fdca817b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfe1b37351de69494e5f42516d1363a1acd97912fd754ddca224b61b7020b0e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfe1b37351de69494e5f42516d1363a1acd97912fd754ddca224b61b7020b0e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8442718fb97d911a608083bbbcee822108ea95c9896e42b0f37fa05c0b8d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df8442718fb97d911a608083bbbcee822108ea95c9896e42b0f37fa05c0b8d3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z6xtb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:01Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:01 crc kubenswrapper[4772]: I1122 10:39:01.655794 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:01 crc kubenswrapper[4772]: I1122 10:39:01.655833 4772 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:01 crc kubenswrapper[4772]: I1122 10:39:01.655841 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:01 crc kubenswrapper[4772]: I1122 10:39:01.655858 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:01 crc kubenswrapper[4772]: I1122 10:39:01.655869 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:01Z","lastTransitionTime":"2025-11-22T10:39:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:01 crc kubenswrapper[4772]: I1122 10:39:01.757782 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:01 crc kubenswrapper[4772]: I1122 10:39:01.757826 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:01 crc kubenswrapper[4772]: I1122 10:39:01.757838 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:01 crc kubenswrapper[4772]: I1122 10:39:01.757854 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:01 crc kubenswrapper[4772]: I1122 10:39:01.757864 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:01Z","lastTransitionTime":"2025-11-22T10:39:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:01 crc kubenswrapper[4772]: I1122 10:39:01.860038 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:01 crc kubenswrapper[4772]: I1122 10:39:01.860107 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:01 crc kubenswrapper[4772]: I1122 10:39:01.860115 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:01 crc kubenswrapper[4772]: I1122 10:39:01.860132 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:01 crc kubenswrapper[4772]: I1122 10:39:01.860143 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:01Z","lastTransitionTime":"2025-11-22T10:39:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:01 crc kubenswrapper[4772]: I1122 10:39:01.962289 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:01 crc kubenswrapper[4772]: I1122 10:39:01.962347 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:01 crc kubenswrapper[4772]: I1122 10:39:01.962365 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:01 crc kubenswrapper[4772]: I1122 10:39:01.962383 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:01 crc kubenswrapper[4772]: I1122 10:39:01.962396 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:01Z","lastTransitionTime":"2025-11-22T10:39:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:02 crc kubenswrapper[4772]: I1122 10:39:02.064297 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:02 crc kubenswrapper[4772]: I1122 10:39:02.064337 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:02 crc kubenswrapper[4772]: I1122 10:39:02.064347 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:02 crc kubenswrapper[4772]: I1122 10:39:02.064363 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:02 crc kubenswrapper[4772]: I1122 10:39:02.064373 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:02Z","lastTransitionTime":"2025-11-22T10:39:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:02 crc kubenswrapper[4772]: I1122 10:39:02.167112 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:02 crc kubenswrapper[4772]: I1122 10:39:02.167166 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:02 crc kubenswrapper[4772]: I1122 10:39:02.167184 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:02 crc kubenswrapper[4772]: I1122 10:39:02.167206 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:02 crc kubenswrapper[4772]: I1122 10:39:02.167223 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:02Z","lastTransitionTime":"2025-11-22T10:39:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:02 crc kubenswrapper[4772]: I1122 10:39:02.269860 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:02 crc kubenswrapper[4772]: I1122 10:39:02.269906 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:02 crc kubenswrapper[4772]: I1122 10:39:02.269947 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:02 crc kubenswrapper[4772]: I1122 10:39:02.269970 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:02 crc kubenswrapper[4772]: I1122 10:39:02.270002 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:02Z","lastTransitionTime":"2025-11-22T10:39:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:02 crc kubenswrapper[4772]: I1122 10:39:02.372592 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:02 crc kubenswrapper[4772]: I1122 10:39:02.372657 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:02 crc kubenswrapper[4772]: I1122 10:39:02.372666 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:02 crc kubenswrapper[4772]: I1122 10:39:02.372678 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:02 crc kubenswrapper[4772]: I1122 10:39:02.372689 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:02Z","lastTransitionTime":"2025-11-22T10:39:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:02 crc kubenswrapper[4772]: I1122 10:39:02.413305 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 10:39:02 crc kubenswrapper[4772]: E1122 10:39:02.413679 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 10:39:02 crc kubenswrapper[4772]: I1122 10:39:02.413472 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvsrl" Nov 22 10:39:02 crc kubenswrapper[4772]: E1122 10:39:02.414314 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvsrl" podUID="c89edce7-fac8-4954-b2e9-420f0f2de6a8" Nov 22 10:39:02 crc kubenswrapper[4772]: I1122 10:39:02.475756 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:02 crc kubenswrapper[4772]: I1122 10:39:02.475800 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:02 crc kubenswrapper[4772]: I1122 10:39:02.475810 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:02 crc kubenswrapper[4772]: I1122 10:39:02.475827 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:02 crc kubenswrapper[4772]: I1122 10:39:02.475839 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:02Z","lastTransitionTime":"2025-11-22T10:39:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:02 crc kubenswrapper[4772]: I1122 10:39:02.578515 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:02 crc kubenswrapper[4772]: I1122 10:39:02.578683 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:02 crc kubenswrapper[4772]: I1122 10:39:02.578757 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:02 crc kubenswrapper[4772]: I1122 10:39:02.578787 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:02 crc kubenswrapper[4772]: I1122 10:39:02.578809 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:02Z","lastTransitionTime":"2025-11-22T10:39:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:02 crc kubenswrapper[4772]: I1122 10:39:02.682679 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:02 crc kubenswrapper[4772]: I1122 10:39:02.682763 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:02 crc kubenswrapper[4772]: I1122 10:39:02.682787 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:02 crc kubenswrapper[4772]: I1122 10:39:02.682821 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:02 crc kubenswrapper[4772]: I1122 10:39:02.682846 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:02Z","lastTransitionTime":"2025-11-22T10:39:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:02 crc kubenswrapper[4772]: I1122 10:39:02.785381 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:02 crc kubenswrapper[4772]: I1122 10:39:02.785427 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:02 crc kubenswrapper[4772]: I1122 10:39:02.785440 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:02 crc kubenswrapper[4772]: I1122 10:39:02.785461 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:02 crc kubenswrapper[4772]: I1122 10:39:02.785473 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:02Z","lastTransitionTime":"2025-11-22T10:39:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:02 crc kubenswrapper[4772]: I1122 10:39:02.888416 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:02 crc kubenswrapper[4772]: I1122 10:39:02.888462 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:02 crc kubenswrapper[4772]: I1122 10:39:02.888470 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:02 crc kubenswrapper[4772]: I1122 10:39:02.888488 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:02 crc kubenswrapper[4772]: I1122 10:39:02.888497 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:02Z","lastTransitionTime":"2025-11-22T10:39:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:02 crc kubenswrapper[4772]: I1122 10:39:02.990988 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:02 crc kubenswrapper[4772]: I1122 10:39:02.991063 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:02 crc kubenswrapper[4772]: I1122 10:39:02.991077 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:02 crc kubenswrapper[4772]: I1122 10:39:02.991122 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:02 crc kubenswrapper[4772]: I1122 10:39:02.991136 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:02Z","lastTransitionTime":"2025-11-22T10:39:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:03 crc kubenswrapper[4772]: I1122 10:39:03.093754 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:03 crc kubenswrapper[4772]: I1122 10:39:03.093783 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:03 crc kubenswrapper[4772]: I1122 10:39:03.093792 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:03 crc kubenswrapper[4772]: I1122 10:39:03.093806 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:03 crc kubenswrapper[4772]: I1122 10:39:03.093818 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:03Z","lastTransitionTime":"2025-11-22T10:39:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:03 crc kubenswrapper[4772]: I1122 10:39:03.197018 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:03 crc kubenswrapper[4772]: I1122 10:39:03.197113 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:03 crc kubenswrapper[4772]: I1122 10:39:03.197134 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:03 crc kubenswrapper[4772]: I1122 10:39:03.197165 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:03 crc kubenswrapper[4772]: I1122 10:39:03.197187 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:03Z","lastTransitionTime":"2025-11-22T10:39:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:03 crc kubenswrapper[4772]: I1122 10:39:03.300848 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:03 crc kubenswrapper[4772]: I1122 10:39:03.300919 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:03 crc kubenswrapper[4772]: I1122 10:39:03.300941 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:03 crc kubenswrapper[4772]: I1122 10:39:03.300971 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:03 crc kubenswrapper[4772]: I1122 10:39:03.300992 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:03Z","lastTransitionTime":"2025-11-22T10:39:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:03 crc kubenswrapper[4772]: I1122 10:39:03.404809 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:03 crc kubenswrapper[4772]: I1122 10:39:03.404885 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:03 crc kubenswrapper[4772]: I1122 10:39:03.404903 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:03 crc kubenswrapper[4772]: I1122 10:39:03.404933 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:03 crc kubenswrapper[4772]: I1122 10:39:03.404953 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:03Z","lastTransitionTime":"2025-11-22T10:39:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:03 crc kubenswrapper[4772]: I1122 10:39:03.413132 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:39:03 crc kubenswrapper[4772]: I1122 10:39:03.413131 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 10:39:03 crc kubenswrapper[4772]: E1122 10:39:03.413350 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 10:39:03 crc kubenswrapper[4772]: E1122 10:39:03.413502 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 10:39:03 crc kubenswrapper[4772]: I1122 10:39:03.508746 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:03 crc kubenswrapper[4772]: I1122 10:39:03.508809 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:03 crc kubenswrapper[4772]: I1122 10:39:03.508821 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:03 crc kubenswrapper[4772]: I1122 10:39:03.508845 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:03 crc kubenswrapper[4772]: I1122 10:39:03.508861 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:03Z","lastTransitionTime":"2025-11-22T10:39:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:03 crc kubenswrapper[4772]: I1122 10:39:03.612136 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:03 crc kubenswrapper[4772]: I1122 10:39:03.612231 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:03 crc kubenswrapper[4772]: I1122 10:39:03.612247 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:03 crc kubenswrapper[4772]: I1122 10:39:03.612272 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:03 crc kubenswrapper[4772]: I1122 10:39:03.612288 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:03Z","lastTransitionTime":"2025-11-22T10:39:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:03 crc kubenswrapper[4772]: I1122 10:39:03.714975 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:03 crc kubenswrapper[4772]: I1122 10:39:03.715020 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:03 crc kubenswrapper[4772]: I1122 10:39:03.715031 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:03 crc kubenswrapper[4772]: I1122 10:39:03.715094 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:03 crc kubenswrapper[4772]: I1122 10:39:03.715106 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:03Z","lastTransitionTime":"2025-11-22T10:39:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:03 crc kubenswrapper[4772]: I1122 10:39:03.818570 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:03 crc kubenswrapper[4772]: I1122 10:39:03.818656 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:03 crc kubenswrapper[4772]: I1122 10:39:03.818681 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:03 crc kubenswrapper[4772]: I1122 10:39:03.818735 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:03 crc kubenswrapper[4772]: I1122 10:39:03.818759 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:03Z","lastTransitionTime":"2025-11-22T10:39:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:03 crc kubenswrapper[4772]: I1122 10:39:03.922487 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:03 crc kubenswrapper[4772]: I1122 10:39:03.922549 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:03 crc kubenswrapper[4772]: I1122 10:39:03.922566 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:03 crc kubenswrapper[4772]: I1122 10:39:03.922587 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:03 crc kubenswrapper[4772]: I1122 10:39:03.922601 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:03Z","lastTransitionTime":"2025-11-22T10:39:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:04 crc kubenswrapper[4772]: I1122 10:39:04.025285 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:04 crc kubenswrapper[4772]: I1122 10:39:04.025362 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:04 crc kubenswrapper[4772]: I1122 10:39:04.025372 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:04 crc kubenswrapper[4772]: I1122 10:39:04.025394 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:04 crc kubenswrapper[4772]: I1122 10:39:04.025410 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:04Z","lastTransitionTime":"2025-11-22T10:39:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:04 crc kubenswrapper[4772]: I1122 10:39:04.128665 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:04 crc kubenswrapper[4772]: I1122 10:39:04.128725 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:04 crc kubenswrapper[4772]: I1122 10:39:04.128734 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:04 crc kubenswrapper[4772]: I1122 10:39:04.128756 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:04 crc kubenswrapper[4772]: I1122 10:39:04.128775 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:04Z","lastTransitionTime":"2025-11-22T10:39:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:04 crc kubenswrapper[4772]: I1122 10:39:04.232329 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:04 crc kubenswrapper[4772]: I1122 10:39:04.232410 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:04 crc kubenswrapper[4772]: I1122 10:39:04.232431 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:04 crc kubenswrapper[4772]: I1122 10:39:04.232462 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:04 crc kubenswrapper[4772]: I1122 10:39:04.232482 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:04Z","lastTransitionTime":"2025-11-22T10:39:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:04 crc kubenswrapper[4772]: I1122 10:39:04.335767 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:04 crc kubenswrapper[4772]: I1122 10:39:04.335816 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:04 crc kubenswrapper[4772]: I1122 10:39:04.335825 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:04 crc kubenswrapper[4772]: I1122 10:39:04.335839 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:04 crc kubenswrapper[4772]: I1122 10:39:04.335849 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:04Z","lastTransitionTime":"2025-11-22T10:39:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:04 crc kubenswrapper[4772]: I1122 10:39:04.413365 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 10:39:04 crc kubenswrapper[4772]: I1122 10:39:04.413449 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvsrl" Nov 22 10:39:04 crc kubenswrapper[4772]: E1122 10:39:04.413615 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 10:39:04 crc kubenswrapper[4772]: E1122 10:39:04.413803 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fvsrl" podUID="c89edce7-fac8-4954-b2e9-420f0f2de6a8" Nov 22 10:39:04 crc kubenswrapper[4772]: I1122 10:39:04.438929 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:04 crc kubenswrapper[4772]: I1122 10:39:04.439014 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:04 crc kubenswrapper[4772]: I1122 10:39:04.439036 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:04 crc kubenswrapper[4772]: I1122 10:39:04.439114 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:04 crc kubenswrapper[4772]: I1122 10:39:04.439138 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:04Z","lastTransitionTime":"2025-11-22T10:39:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:04 crc kubenswrapper[4772]: I1122 10:39:04.543134 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:04 crc kubenswrapper[4772]: I1122 10:39:04.543203 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:04 crc kubenswrapper[4772]: I1122 10:39:04.543223 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:04 crc kubenswrapper[4772]: I1122 10:39:04.543251 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:04 crc kubenswrapper[4772]: I1122 10:39:04.543268 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:04Z","lastTransitionTime":"2025-11-22T10:39:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:04 crc kubenswrapper[4772]: I1122 10:39:04.646159 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:04 crc kubenswrapper[4772]: I1122 10:39:04.646248 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:04 crc kubenswrapper[4772]: I1122 10:39:04.646268 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:04 crc kubenswrapper[4772]: I1122 10:39:04.646296 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:04 crc kubenswrapper[4772]: I1122 10:39:04.646313 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:04Z","lastTransitionTime":"2025-11-22T10:39:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:04 crc kubenswrapper[4772]: I1122 10:39:04.749215 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:04 crc kubenswrapper[4772]: I1122 10:39:04.749278 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:04 crc kubenswrapper[4772]: I1122 10:39:04.749321 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:04 crc kubenswrapper[4772]: I1122 10:39:04.749355 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:04 crc kubenswrapper[4772]: I1122 10:39:04.749382 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:04Z","lastTransitionTime":"2025-11-22T10:39:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:04 crc kubenswrapper[4772]: I1122 10:39:04.854768 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:04 crc kubenswrapper[4772]: I1122 10:39:04.854823 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:04 crc kubenswrapper[4772]: I1122 10:39:04.854838 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:04 crc kubenswrapper[4772]: I1122 10:39:04.854858 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:04 crc kubenswrapper[4772]: I1122 10:39:04.854902 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:04Z","lastTransitionTime":"2025-11-22T10:39:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:04 crc kubenswrapper[4772]: I1122 10:39:04.961409 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:04 crc kubenswrapper[4772]: I1122 10:39:04.961452 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:04 crc kubenswrapper[4772]: I1122 10:39:04.961467 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:04 crc kubenswrapper[4772]: I1122 10:39:04.961981 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:04 crc kubenswrapper[4772]: I1122 10:39:04.961993 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:04Z","lastTransitionTime":"2025-11-22T10:39:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:05 crc kubenswrapper[4772]: I1122 10:39:05.064599 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:05 crc kubenswrapper[4772]: I1122 10:39:05.064648 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:05 crc kubenswrapper[4772]: I1122 10:39:05.064660 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:05 crc kubenswrapper[4772]: I1122 10:39:05.064678 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:05 crc kubenswrapper[4772]: I1122 10:39:05.064689 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:05Z","lastTransitionTime":"2025-11-22T10:39:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:05 crc kubenswrapper[4772]: I1122 10:39:05.166880 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:05 crc kubenswrapper[4772]: I1122 10:39:05.166915 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:05 crc kubenswrapper[4772]: I1122 10:39:05.166926 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:05 crc kubenswrapper[4772]: I1122 10:39:05.166943 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:05 crc kubenswrapper[4772]: I1122 10:39:05.166954 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:05Z","lastTransitionTime":"2025-11-22T10:39:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:05 crc kubenswrapper[4772]: I1122 10:39:05.269909 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:05 crc kubenswrapper[4772]: I1122 10:39:05.269955 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:05 crc kubenswrapper[4772]: I1122 10:39:05.269970 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:05 crc kubenswrapper[4772]: I1122 10:39:05.269989 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:05 crc kubenswrapper[4772]: I1122 10:39:05.270001 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:05Z","lastTransitionTime":"2025-11-22T10:39:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:05 crc kubenswrapper[4772]: I1122 10:39:05.372847 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:05 crc kubenswrapper[4772]: I1122 10:39:05.372888 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:05 crc kubenswrapper[4772]: I1122 10:39:05.372899 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:05 crc kubenswrapper[4772]: I1122 10:39:05.372916 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:05 crc kubenswrapper[4772]: I1122 10:39:05.372927 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:05Z","lastTransitionTime":"2025-11-22T10:39:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:05 crc kubenswrapper[4772]: I1122 10:39:05.413226 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 10:39:05 crc kubenswrapper[4772]: I1122 10:39:05.413258 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:39:05 crc kubenswrapper[4772]: E1122 10:39:05.413417 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 10:39:05 crc kubenswrapper[4772]: E1122 10:39:05.413597 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 10:39:05 crc kubenswrapper[4772]: I1122 10:39:05.475715 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:05 crc kubenswrapper[4772]: I1122 10:39:05.475853 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:05 crc kubenswrapper[4772]: I1122 10:39:05.475874 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:05 crc kubenswrapper[4772]: I1122 10:39:05.475898 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:05 crc kubenswrapper[4772]: I1122 10:39:05.475955 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:05Z","lastTransitionTime":"2025-11-22T10:39:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:05 crc kubenswrapper[4772]: I1122 10:39:05.578695 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:05 crc kubenswrapper[4772]: I1122 10:39:05.578758 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:05 crc kubenswrapper[4772]: I1122 10:39:05.578781 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:05 crc kubenswrapper[4772]: I1122 10:39:05.578810 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:05 crc kubenswrapper[4772]: I1122 10:39:05.578833 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:05Z","lastTransitionTime":"2025-11-22T10:39:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:05 crc kubenswrapper[4772]: I1122 10:39:05.680814 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:05 crc kubenswrapper[4772]: I1122 10:39:05.680853 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:05 crc kubenswrapper[4772]: I1122 10:39:05.680862 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:05 crc kubenswrapper[4772]: I1122 10:39:05.680876 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:05 crc kubenswrapper[4772]: I1122 10:39:05.680886 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:05Z","lastTransitionTime":"2025-11-22T10:39:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:05 crc kubenswrapper[4772]: I1122 10:39:05.782693 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:05 crc kubenswrapper[4772]: I1122 10:39:05.782741 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:05 crc kubenswrapper[4772]: I1122 10:39:05.782752 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:05 crc kubenswrapper[4772]: I1122 10:39:05.782769 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:05 crc kubenswrapper[4772]: I1122 10:39:05.782781 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:05Z","lastTransitionTime":"2025-11-22T10:39:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:05 crc kubenswrapper[4772]: I1122 10:39:05.885246 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:05 crc kubenswrapper[4772]: I1122 10:39:05.885285 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:05 crc kubenswrapper[4772]: I1122 10:39:05.885294 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:05 crc kubenswrapper[4772]: I1122 10:39:05.885311 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:05 crc kubenswrapper[4772]: I1122 10:39:05.885320 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:05Z","lastTransitionTime":"2025-11-22T10:39:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:05 crc kubenswrapper[4772]: I1122 10:39:05.987830 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:05 crc kubenswrapper[4772]: I1122 10:39:05.987866 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:05 crc kubenswrapper[4772]: I1122 10:39:05.987875 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:05 crc kubenswrapper[4772]: I1122 10:39:05.987890 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:05 crc kubenswrapper[4772]: I1122 10:39:05.987899 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:05Z","lastTransitionTime":"2025-11-22T10:39:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:06 crc kubenswrapper[4772]: I1122 10:39:06.090727 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:06 crc kubenswrapper[4772]: I1122 10:39:06.090795 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:06 crc kubenswrapper[4772]: I1122 10:39:06.090817 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:06 crc kubenswrapper[4772]: I1122 10:39:06.090844 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:06 crc kubenswrapper[4772]: I1122 10:39:06.090866 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:06Z","lastTransitionTime":"2025-11-22T10:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:06 crc kubenswrapper[4772]: I1122 10:39:06.193520 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:06 crc kubenswrapper[4772]: I1122 10:39:06.193558 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:06 crc kubenswrapper[4772]: I1122 10:39:06.193569 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:06 crc kubenswrapper[4772]: I1122 10:39:06.193583 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:06 crc kubenswrapper[4772]: I1122 10:39:06.193594 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:06Z","lastTransitionTime":"2025-11-22T10:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:06 crc kubenswrapper[4772]: I1122 10:39:06.296350 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:06 crc kubenswrapper[4772]: I1122 10:39:06.296389 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:06 crc kubenswrapper[4772]: I1122 10:39:06.296402 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:06 crc kubenswrapper[4772]: I1122 10:39:06.296418 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:06 crc kubenswrapper[4772]: I1122 10:39:06.296429 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:06Z","lastTransitionTime":"2025-11-22T10:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:06 crc kubenswrapper[4772]: I1122 10:39:06.399196 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:06 crc kubenswrapper[4772]: I1122 10:39:06.399235 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:06 crc kubenswrapper[4772]: I1122 10:39:06.399245 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:06 crc kubenswrapper[4772]: I1122 10:39:06.399263 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:06 crc kubenswrapper[4772]: I1122 10:39:06.399275 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:06Z","lastTransitionTime":"2025-11-22T10:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:06 crc kubenswrapper[4772]: I1122 10:39:06.412684 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 10:39:06 crc kubenswrapper[4772]: I1122 10:39:06.412712 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvsrl" Nov 22 10:39:06 crc kubenswrapper[4772]: E1122 10:39:06.412913 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 10:39:06 crc kubenswrapper[4772]: E1122 10:39:06.413476 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvsrl" podUID="c89edce7-fac8-4954-b2e9-420f0f2de6a8" Nov 22 10:39:06 crc kubenswrapper[4772]: I1122 10:39:06.414033 4772 scope.go:117] "RemoveContainer" containerID="f441beed4d46d40f9fb0368468d7aae7721c17a369b5edd3b5946438cb89c724" Nov 22 10:39:06 crc kubenswrapper[4772]: I1122 10:39:06.502889 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:06 crc kubenswrapper[4772]: I1122 10:39:06.503062 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:06 crc kubenswrapper[4772]: I1122 10:39:06.503074 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:06 crc kubenswrapper[4772]: I1122 10:39:06.503093 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:06 crc kubenswrapper[4772]: I1122 10:39:06.503104 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:06Z","lastTransitionTime":"2025-11-22T10:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:06 crc kubenswrapper[4772]: I1122 10:39:06.606077 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:06 crc kubenswrapper[4772]: I1122 10:39:06.606115 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:06 crc kubenswrapper[4772]: I1122 10:39:06.606154 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:06 crc kubenswrapper[4772]: I1122 10:39:06.606171 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:06 crc kubenswrapper[4772]: I1122 10:39:06.606181 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:06Z","lastTransitionTime":"2025-11-22T10:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:06 crc kubenswrapper[4772]: I1122 10:39:06.708634 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:06 crc kubenswrapper[4772]: I1122 10:39:06.708665 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:06 crc kubenswrapper[4772]: I1122 10:39:06.708673 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:06 crc kubenswrapper[4772]: I1122 10:39:06.708684 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:06 crc kubenswrapper[4772]: I1122 10:39:06.708709 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:06Z","lastTransitionTime":"2025-11-22T10:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:06 crc kubenswrapper[4772]: I1122 10:39:06.811497 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:06 crc kubenswrapper[4772]: I1122 10:39:06.811541 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:06 crc kubenswrapper[4772]: I1122 10:39:06.811553 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:06 crc kubenswrapper[4772]: I1122 10:39:06.811579 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:06 crc kubenswrapper[4772]: I1122 10:39:06.811592 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:06Z","lastTransitionTime":"2025-11-22T10:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:06 crc kubenswrapper[4772]: I1122 10:39:06.826855 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mfm49_fd84e05e-cfd6-46d5-bd23-30689addcd8b/ovnkube-controller/1.log" Nov 22 10:39:06 crc kubenswrapper[4772]: I1122 10:39:06.834389 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" event={"ID":"fd84e05e-cfd6-46d5-bd23-30689addcd8b","Type":"ContainerStarted","Data":"a3d7262920f7a31f254be2b721019ad96c62da015bcc54afa32884321e1bba1c"} Nov 22 10:39:06 crc kubenswrapper[4772]: I1122 10:39:06.835161 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" Nov 22 10:39:06 crc kubenswrapper[4772]: I1122 10:39:06.854438 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af3d235e676f9a66e16a76f30d940c652287b80507d2335b94aaad54d2aae071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b9e216df73a0c6e89da9a01b70ca7393dea61d59aa4237584add3411d3362d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\
",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:06Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:06 crc kubenswrapper[4772]: I1122 10:39:06.870455 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e026ddd84cdd62f9fa89b0c173fa464361017d3603d80e2b88a1b04af13487c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:06Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:06 crc kubenswrapper[4772]: I1122 10:39:06.884534 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n8qfx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0565ed-eb43-43a3-974c-45a23e9615a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73a731bf43107a62df11bd3a033a52f25d32911218822f9c5ac1f3d9b6f718ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98gmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a2e02f93671de603bc005b4f7561a62bb7681d678a3348b837656ae2af54f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98gmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-n8qfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:06Z is after 2025-08-24T17:21:41Z" Nov 22 
10:39:06 crc kubenswrapper[4772]: I1122 10:39:06.899848 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:06Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:06 crc kubenswrapper[4772]: I1122 10:39:06.915091 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:06 crc kubenswrapper[4772]: I1122 10:39:06.915175 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:06 crc kubenswrapper[4772]: I1122 10:39:06.915187 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:06 crc kubenswrapper[4772]: I1122 10:39:06.915208 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:06 crc kubenswrapper[4772]: I1122 10:39:06.915222 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:06Z","lastTransitionTime":"2025-11-22T10:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:06 crc kubenswrapper[4772]: I1122 10:39:06.921377 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd84e05e-cfd6-46d5-bd23-30689addcd8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://405215cb0867af5a98dd2c0948c12867783df5de69f3e68c0c420e94bd41fbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3372362ced27c1e5e6ed3271a6b0fd19c04ce1d12a6c0b45e60b4f19b9d9d9e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://7959ee06497c972cc6915e3ab426dc2bba73e58343902084895fcca45fc97a61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dfec56a475b9928ac7803e64d6fd305028a7f8d66647c8d06eeaa12cb73838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da55e361a3ec7dca7a00a9ccac8e72cf43be0ebb8973fcac40e3bf86dac5237e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc7ab7764e2d893152e965392c916dff7876acb81d134cdec2615b253dbee64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d7262920f7a31f254be2b721019ad96c62da015bcc54afa32884321e1bba1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f441beed4d46d40f9fb0368468d7aae7721c17a369b5edd3b5946438cb89c724\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T10:38:49Z\\\",\\\"message\\\":\\\"e]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.183:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5b85277d-d9b7-4a68-8e4e-2b80594d9347}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1122 10:38:48.813284 6257 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-operator]} name:Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.183:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5b85277d-d9b7-4a68-8e4e-2b80594d9347}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1122 10:38:48.813329 6257 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1122 10:38:48.813440 6257 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1122 10:38:48.813477 6257 ovnkube.go:599] Stopped ovnkube\\\\nI1122 10:38:48.813495 6257 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1122 10:38:48.813544 6257 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac96e66a7d3d7899a8196d07614bccdd741145d5dc57c26b30fc4e127c6a94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mfm49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:06Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:06 crc kubenswrapper[4772]: I1122 10:39:06.932106 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mbpk7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e39748b-4fa5-4a70-8921-dc3dc814f124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c334585be8f67849986e80c7a7ea777340e93f963ba58fa0fb7b36b16a73142b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m5ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mbpk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:06Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:06 crc kubenswrapper[4772]: I1122 10:39:06.942764 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fvsrl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c89edce7-fac8-4954-b2e9-420f0f2de6a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x76mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x76mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fvsrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:06Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:06 crc kubenswrapper[4772]: I1122 10:39:06.955762 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"888813e4-14b2-4bbc-badf-3fd7c315a740\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ade6ffb5c4bd3c19a2d85f21de1e0f198d6729b45df79233c8db3c73aff066f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7c687d1a3e98c09917692169294f8549b0f1ddeddcc97c073da4d8e5c17e012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1452c164ce569dfa4665a70113fb965905d1974744637904d6bfba2e35446f11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b331f41571928038bb597f1e94a67d24e726471c1a22082607dd26c11e8ea33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:06Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:06 crc kubenswrapper[4772]: I1122 10:39:06.970375 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9557fb28-d21d-45a3-b7b9-170936e83f56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a77d132d925d70472ff5456bf4521c0612c319296dd1f3fa31c68d3c84fb347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddce747c72c887bb7dff7503755d81d0e65305d2545f93dcb0d20bd6bf32b495\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187f5633fd42f380d4976965809b538530acef067adc7e02203f95df532dbbfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab2905e4976fc61f69872e9b37db10f5598fe3f96e0a124d22848215a7b5817\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 10:38:29.060158 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 10:38:29.060179 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 10:38:29.060386 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\"\\\\nI1122 10:38:29.060380 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763807893\\\\\\\\\\\\\\\" (2025-11-22 10:38:12 +0000 UTC to 2025-12-22 10:38:13 +0000 UTC (now=2025-11-22 10:38:29.060343851 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060557 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763807909\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763807908\\\\\\\\\\\\\\\" (2025-11-22 09:38:28 +0000 UTC to 2026-11-22 09:38:28 +0000 UTC (now=2025-11-22 10:38:29.060522446 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060582 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 10:38:29.060604 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 10:38:29.060636 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 10:38:29.062066 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062334 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062385 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 10:38:29.063702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42fc0afe421b18b7212836931c95487807ac0b32f6e87d92e847bd7826aedcd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:06Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:06 crc kubenswrapper[4772]: I1122 10:39:06.980946 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd883a9c-bd98-4ce0-9256-710c2311012f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab209a1a1db8448083e4994bbd6c236d67ddfd2ba6eff2bf3c05150f600ad698\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f447b1a4882759c19fd69e9acddae280d63009c5fc7d21b368b470160c53250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecffb7596767673ba91dbbee96f3fa4b32109c5d5145176e86618860187077de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92229718d5b39cbd9473102e7e569d8370d38e945387ff3f48ee9f4077d8d1fd\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92229718d5b39cbd9473102e7e569d8370d38e945387ff3f48ee9f4077d8d1fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:06Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:06 crc kubenswrapper[4772]: I1122 10:39:06.993294 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e05750665701a1973e974ba759b2a0ef54a89b24d63d33a9fce59e9aa7a21ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:06Z is after 
2025-08-24T17:21:41Z" Nov 22 10:39:07 crc kubenswrapper[4772]: I1122 10:39:07.007829 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:07Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:07 crc kubenswrapper[4772]: I1122 10:39:07.017967 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:07 crc kubenswrapper[4772]: I1122 10:39:07.017994 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:07 crc kubenswrapper[4772]: I1122 10:39:07.018002 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:07 crc kubenswrapper[4772]: I1122 10:39:07.018016 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:07 crc kubenswrapper[4772]: I1122 10:39:07.018026 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:07Z","lastTransitionTime":"2025-11-22T10:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:07 crc kubenswrapper[4772]: I1122 10:39:07.029897 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:07Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:07 crc kubenswrapper[4772]: I1122 10:39:07.044659 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9qv88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"751566de-1913-4dd6-9054-21febc661c27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902be5c4bb0f9aaca4a21ca0af13b2dc29d682c30e1e2d281beb6e4e15b6aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jckg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9qv88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:07Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:07 crc kubenswrapper[4772]: I1122 10:39:07.069124 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s4mvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d73fd58d-561a-4b16-9f9d-49ae966edb24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ae06ffdc2c0f4cebcbf28f2997026db4d45bcd0c95776e5a1d94527503041d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s4mvm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:07Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:07 crc kubenswrapper[4772]: I1122 10:39:07.084311 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2386c238-461f-4956-940f-ac3c26eb052e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b3e223067a57a7ae418e1de80dff3c7537e0506e040028c413225f25397f03d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4247d5ab0095ab6c7315fdfcde34f8407d870723059e61c3417c20f80dc06ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-wwshd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:07Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:07 crc kubenswrapper[4772]: I1122 10:39:07.100629 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z6xtb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e11c7f86-73db-4015-9fe5-c0b5047c19a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://901e96ef18a53b8231a138e235e8ed145d94f111dc515aaa5e5415c18535e457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d7a31c58b763dd30aa10f250fa654756dbb10166064c3adf6d313cc13b066f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11d7a31c58b763dd30aa10f250fa654756dbb10166064c3adf6d313cc13b066f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://533b405c883d98c14aca92e6a23bd1743dc0a76ed8256f2b892746195602fb03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://533b405c883d98c14aca92e6a23bd1743dc0a76ed8256f2b892746195602fb03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c508fde56a767c063afbb65f6272695d739d7d0c594efcafd7465397309ef931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c508fde56a767c063afbb65f6272695d739d7d0c594efcafd7465397309ef931\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dabd0e99ffb72fe38680ba04e6dc2cc342f2b21368c0646ac51924fdca817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://8dabd0e99ffb72fe38680ba04e6dc2cc342f2b21368c0646ac51924fdca817b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfe1b37351de69494e5f42516d1363a1acd97912fd754ddca224b61b7020b0e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfe1b37351de69494e5f42516d1363a1acd97912fd754ddca224b61b7020b0e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8442718fb97d911a608083bbbcee822108ea95c9896e42b0f37fa05c0b8d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df8442718fb97d911a608083bbbcee822108ea95c9896e42b0f37fa05c0b8d3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z6xtb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:07Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:07 crc kubenswrapper[4772]: I1122 10:39:07.120263 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:07 crc kubenswrapper[4772]: I1122 10:39:07.120318 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:07 crc kubenswrapper[4772]: I1122 10:39:07.120335 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:07 crc kubenswrapper[4772]: I1122 10:39:07.120355 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:07 crc kubenswrapper[4772]: I1122 10:39:07.120369 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:07Z","lastTransitionTime":"2025-11-22T10:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:07 crc kubenswrapper[4772]: I1122 10:39:07.223172 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:07 crc kubenswrapper[4772]: I1122 10:39:07.223223 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:07 crc kubenswrapper[4772]: I1122 10:39:07.223236 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:07 crc kubenswrapper[4772]: I1122 10:39:07.223255 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:07 crc kubenswrapper[4772]: I1122 10:39:07.223268 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:07Z","lastTransitionTime":"2025-11-22T10:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:07 crc kubenswrapper[4772]: I1122 10:39:07.325490 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:07 crc kubenswrapper[4772]: I1122 10:39:07.325539 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:07 crc kubenswrapper[4772]: I1122 10:39:07.325553 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:07 crc kubenswrapper[4772]: I1122 10:39:07.325571 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:07 crc kubenswrapper[4772]: I1122 10:39:07.325582 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:07Z","lastTransitionTime":"2025-11-22T10:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:07 crc kubenswrapper[4772]: I1122 10:39:07.413356 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 10:39:07 crc kubenswrapper[4772]: I1122 10:39:07.413686 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:39:07 crc kubenswrapper[4772]: E1122 10:39:07.413822 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 10:39:07 crc kubenswrapper[4772]: E1122 10:39:07.414010 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 10:39:07 crc kubenswrapper[4772]: I1122 10:39:07.427768 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:07 crc kubenswrapper[4772]: I1122 10:39:07.428021 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:07 crc kubenswrapper[4772]: I1122 10:39:07.428167 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:07 crc kubenswrapper[4772]: I1122 10:39:07.428286 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:07 crc kubenswrapper[4772]: I1122 10:39:07.428427 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:07Z","lastTransitionTime":"2025-11-22T10:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:07 crc kubenswrapper[4772]: I1122 10:39:07.531564 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:07 crc kubenswrapper[4772]: I1122 10:39:07.531607 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:07 crc kubenswrapper[4772]: I1122 10:39:07.531616 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:07 crc kubenswrapper[4772]: I1122 10:39:07.531631 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:07 crc kubenswrapper[4772]: I1122 10:39:07.531643 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:07Z","lastTransitionTime":"2025-11-22T10:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:07 crc kubenswrapper[4772]: I1122 10:39:07.633895 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:07 crc kubenswrapper[4772]: I1122 10:39:07.633944 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:07 crc kubenswrapper[4772]: I1122 10:39:07.633972 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:07 crc kubenswrapper[4772]: I1122 10:39:07.633992 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:07 crc kubenswrapper[4772]: I1122 10:39:07.634003 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:07Z","lastTransitionTime":"2025-11-22T10:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:07 crc kubenswrapper[4772]: I1122 10:39:07.736208 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:07 crc kubenswrapper[4772]: I1122 10:39:07.736258 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:07 crc kubenswrapper[4772]: I1122 10:39:07.736270 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:07 crc kubenswrapper[4772]: I1122 10:39:07.736288 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:07 crc kubenswrapper[4772]: I1122 10:39:07.736301 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:07Z","lastTransitionTime":"2025-11-22T10:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:07 crc kubenswrapper[4772]: I1122 10:39:07.838731 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:07 crc kubenswrapper[4772]: I1122 10:39:07.838779 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:07 crc kubenswrapper[4772]: I1122 10:39:07.838793 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:07 crc kubenswrapper[4772]: I1122 10:39:07.838814 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:07 crc kubenswrapper[4772]: I1122 10:39:07.838830 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:07Z","lastTransitionTime":"2025-11-22T10:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:07 crc kubenswrapper[4772]: I1122 10:39:07.838893 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mfm49_fd84e05e-cfd6-46d5-bd23-30689addcd8b/ovnkube-controller/2.log" Nov 22 10:39:07 crc kubenswrapper[4772]: I1122 10:39:07.839533 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mfm49_fd84e05e-cfd6-46d5-bd23-30689addcd8b/ovnkube-controller/1.log" Nov 22 10:39:07 crc kubenswrapper[4772]: I1122 10:39:07.842290 4772 generic.go:334] "Generic (PLEG): container finished" podID="fd84e05e-cfd6-46d5-bd23-30689addcd8b" containerID="a3d7262920f7a31f254be2b721019ad96c62da015bcc54afa32884321e1bba1c" exitCode=1 Nov 22 10:39:07 crc kubenswrapper[4772]: I1122 10:39:07.842320 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" event={"ID":"fd84e05e-cfd6-46d5-bd23-30689addcd8b","Type":"ContainerDied","Data":"a3d7262920f7a31f254be2b721019ad96c62da015bcc54afa32884321e1bba1c"} Nov 22 10:39:07 crc kubenswrapper[4772]: I1122 10:39:07.842352 4772 scope.go:117] "RemoveContainer" containerID="f441beed4d46d40f9fb0368468d7aae7721c17a369b5edd3b5946438cb89c724" Nov 22 10:39:07 crc kubenswrapper[4772]: I1122 10:39:07.843140 4772 scope.go:117] "RemoveContainer" containerID="a3d7262920f7a31f254be2b721019ad96c62da015bcc54afa32884321e1bba1c" Nov 22 10:39:07 crc kubenswrapper[4772]: E1122 10:39:07.843577 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-mfm49_openshift-ovn-kubernetes(fd84e05e-cfd6-46d5-bd23-30689addcd8b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" podUID="fd84e05e-cfd6-46d5-bd23-30689addcd8b" Nov 22 10:39:07 crc kubenswrapper[4772]: I1122 10:39:07.860980 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af3d235e676f9a66e16a76f30d940c652287b80507d2335b94aaad54d2aae071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b9e216df73a0c6e89da9a01b70ca7393dea61d59aa4237584add3411d3362d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:07Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:07 crc kubenswrapper[4772]: I1122 10:39:07.879094 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n8qfx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0565ed-eb43-43a3-974c-45a23e9615a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73a731bf43107a62df11bd3a033a52f25d32911218822f9c5ac1f3d9b6f718ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98gmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a2e02f93671de603bc005b4f7561a62bb7681d678a3348b837656ae2af54f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98gmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-n8qfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:07Z is after 2025-08-24T17:21:41Z" Nov 22 
10:39:07 crc kubenswrapper[4772]: I1122 10:39:07.891280 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e026ddd84cdd62f9fa89b0c173fa464361017d3603d80e2b88a1b04af13487c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:07Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:07 crc kubenswrapper[4772]: I1122 10:39:07.901883 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fvsrl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c89edce7-fac8-4954-b2e9-420f0f2de6a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x76mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x76mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fvsrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:07Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:07 crc kubenswrapper[4772]: I1122 10:39:07.913999 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:07Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:07 crc kubenswrapper[4772]: I1122 10:39:07.937131 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd84e05e-cfd6-46d5-bd23-30689addcd8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://405215cb0867af5a98dd2c0948c12867783df5de69f3e68c0c420e94bd41fbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3372362ced27c1e5e6ed3271a6b0fd19c04ce1d12a6c0b45e60b4f19b9d9d9e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7959ee06497c972cc6915e3ab426dc2bba73e58343902084895fcca45fc97a61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dfec56a475b9928ac7803e64d6fd305028a7f8d66647c8d06eeaa12cb73838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da55e361a3ec7dca7a00a9ccac8e72cf43be0ebb8973fcac40e3bf86dac5237e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc7ab7764e2d893152e965392c916dff7876acb81d134cdec2615b253dbee64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d7262920f7a31f254be2b721019ad96c62da01
5bcc54afa32884321e1bba1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f441beed4d46d40f9fb0368468d7aae7721c17a369b5edd3b5946438cb89c724\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T10:38:49Z\\\",\\\"message\\\":\\\"e]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.183:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5b85277d-d9b7-4a68-8e4e-2b80594d9347}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1122 10:38:48.813284 6257 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-operator]} name:Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.183:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5b85277d-d9b7-4a68-8e4e-2b80594d9347}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1122 10:38:48.813329 6257 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1122 10:38:48.813440 6257 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1122 10:38:48.813477 6257 ovnkube.go:599] Stopped ovnkube\\\\nI1122 10:38:48.813495 6257 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1122 10:38:48.813544 6257 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3d7262920f7a31f254be2b721019ad96c62da015bcc54afa32884321e1bba1c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T10:39:07Z\\\",\\\"message\\\":\\\"*v1.Namespace event handler 1 for removal\\\\nI1122 10:39:07.442002 6580 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1122 10:39:07.442031 6580 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1122 10:39:07.442073 6580 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1122 10:39:07.442069 6580 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1122 10:39:07.442090 6580 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1122 10:39:07.442093 6580 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1122 10:39:07.442097 6580 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1122 10:39:07.442120 6580 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1122 10:39:07.442140 6580 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1122 10:39:07.442142 6580 factory.go:656] Stopping watch factory\\\\nI1122 10:39:07.442151 6580 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1122 10:39:07.442163 6580 ovnkube.go:599] Stopped 
ovnkube\\\\nI1122 10:39:07.442164 6580 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1122 10:39:07.442160 6580 handler.go:208] Removed *v1.Node event handler 2\\\\nI1122 10:39:07.442210 6580 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1122 10:39:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac96e66a7d3d7899a8196d07614bccdd741145d5dc57c26b30fc4e127c6a94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP
\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mfm49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:07Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:07 crc kubenswrapper[4772]: I1122 10:39:07.941751 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:07 crc kubenswrapper[4772]: I1122 10:39:07.941786 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:07 crc kubenswrapper[4772]: I1122 10:39:07.941796 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:07 crc kubenswrapper[4772]: I1122 10:39:07.941810 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:07 crc kubenswrapper[4772]: I1122 10:39:07.941818 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:07Z","lastTransitionTime":"2025-11-22T10:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:07 crc kubenswrapper[4772]: I1122 10:39:07.948579 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mbpk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e39748b-4fa5-4a70-8921-dc3dc814f124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c334585be8f67849986e80c7a7ea777340e93f963ba58fa0fb7b36b16a73142b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m5ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mbpk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:07Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:07 crc kubenswrapper[4772]: I1122 10:39:07.964089 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e05750665701a1973e974ba759b2a0ef54a89b24d63d33a9fce59e9aa7a21ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:07Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:07 crc kubenswrapper[4772]: I1122 10:39:07.980609 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:07Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:07 crc kubenswrapper[4772]: I1122 10:39:07.996801 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:07Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.008254 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9qv88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"751566de-1913-4dd6-9054-21febc661c27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902be5c4bb0f9aaca4a21ca0af13b2dc29d682c30e1e2d281beb6e4e15b6aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jckg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9qv88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:08Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.022924 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s4mvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d73fd58d-561a-4b16-9f9d-49ae966edb24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ae06ffdc2c0f4cebcbf28f2997026db4d45bcd0c95776e5a1d94527503041d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s4mvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:08Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.036131 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"888813e4-14b2-4bbc-badf-3fd7c315a740\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ade6ffb5c4bd3c19a2d85f21de1e0f198d6729b45df79233c8db3c73aff066f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7c687d1a3e98c09917692169294f8549b0f1ddeddcc97c073da4d8e5c17e012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1452c164ce569dfa4665a70113fb965905d1974744637904d6bfba2e35446f11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha
256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b331f41571928038bb597f1e94a67d24e726471c1a22082607dd26c11e8ea33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:08Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.043405 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.043461 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.043473 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.043490 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.043500 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:08Z","lastTransitionTime":"2025-11-22T10:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.052336 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9557fb28-d21d-45a3-b7b9-170936e83f56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a77d132d925d70472ff5456bf4521c0612c319296dd1f3fa31c68d3c84fb347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddce747c72c887bb7dff7503755d81d0e65305d2545f93dcb0d20bd6bf32b495\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187f5633fd42f380d4976965809b538530acef067adc7e02203f95df532dbbfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab2905e4976fc61f69872e9b37db10f5598fe3f96e0a124d22848215a7b5817\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 10:38:29.060158 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 10:38:29.060179 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 10:38:29.060386 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\"\\\\nI1122 10:38:29.060380 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763807893\\\\\\\\\\\\\\\" (2025-11-22 10:38:12 +0000 UTC to 2025-12-22 10:38:13 +0000 UTC (now=2025-11-22 10:38:29.060343851 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060557 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763807909\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763807908\\\\\\\\\\\\\\\" (2025-11-22 09:38:28 +0000 UTC to 2026-11-22 09:38:28 +0000 UTC (now=2025-11-22 10:38:29.060522446 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060582 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 10:38:29.060604 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 10:38:29.060636 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 10:38:29.062066 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062334 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062385 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 10:38:29.063702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42fc0afe421b18b7212836931c95487807ac0b32f6e87d92e847bd7826aedcd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:08Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.065019 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd883a9c-bd98-4ce0-9256-710c2311012f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab209a1a1db8448083e4994bbd6c236d67ddfd2ba6eff2bf3c05150f600ad698\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f447b1a4882759c19fd69e9acddae280d63009c5fc7d21b368b470160c53250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecffb7596767673ba91dbbee96f3fa4b32109c5d5145176e86618860187077de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92229718d5b39cbd9473102e7e569d8370d38e945387ff3f48ee9f4077d8d1fd\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92229718d5b39cbd9473102e7e569d8370d38e945387ff3f48ee9f4077d8d1fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:08Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.079117 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2386c238-461f-4956-940f-ac3c26eb052e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b3e223067a57a7ae418e1de80dff3c7537e0506e040028c413225f25397f03d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4247d5ab0095ab6c7315fdfcde34f8407d870723059e61c3417c20f80dc06ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwshd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:08Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.093787 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z6xtb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e11c7f86-73db-4015-9fe5-c0b5047c19a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://901e96ef18a53b8231a138e235e8ed145d94f111dc515aaa5e5415c18535e457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d7a31c58b763dd30aa10f250fa654756dbb10166064c
3adf6d313cc13b066f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11d7a31c58b763dd30aa10f250fa654756dbb10166064c3adf6d313cc13b066f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://533b405c883d98c14aca92e6a23bd1743dc0a76ed8256f2b892746195602fb03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://533b405c883d98c14aca92e6a23bd1743dc0a76ed8256f2b892746195602fb03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c508fde56a767c063afbb65f6272695d739d7d0c594efcafd7465397309ef931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c508fde56a767c063afbb65f6272695d739d7d0c594efcafd7465397309ef931\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\
"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dabd0e99ffb72fe38680ba04e6dc2cc342f2b21368c0646ac51924fdca817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dabd0e99ffb72fe38680ba04e6dc2cc342f2b21368c0646ac51924fdca817b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfe1b37351de69494e5f42516d1363a1acd97912fd754ddca224b61b7020b0e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfe1b37351de69494e5f42516d1363a1acd97912fd754ddca224b61b7020b0e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8442718fb97d911a608083bbbcee822108ea95c9896e42b0f37fa05c0b8d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"
terminated\\\":{\\\"containerID\\\":\\\"cri-o://df8442718fb97d911a608083bbbcee822108ea95c9896e42b0f37fa05c0b8d3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z6xtb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:08Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.145824 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.145869 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.145884 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.145900 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.145910 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:08Z","lastTransitionTime":"2025-11-22T10:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.248948 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.248996 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.249008 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.249025 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.249038 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:08Z","lastTransitionTime":"2025-11-22T10:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.291453 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.303471 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mbpk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e39748b-4fa5-4a70-8921-dc3dc814f124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c334585be8f67849986e80c7a7ea777340e93f963ba58fa0fb7b36b16a73142b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m5ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mbpk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:08Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.314535 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fvsrl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c89edce7-fac8-4954-b2e9-420f0f2de6a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x76mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x76mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fvsrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:08Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.326475 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:08Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.344767 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd84e05e-cfd6-46d5-bd23-30689addcd8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://405215cb0867af5a98dd2c0948c12867783df5de69f3e68c0c420e94bd41fbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3372362ced27c1e5e6ed3271a6b0fd19c04ce1d12a6c0b45e60b4f19b9d9d9e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7959ee06497c972cc6915e3ab426dc2bba73e58343902084895fcca45fc97a61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dfec56a475b9928ac7803e64d6fd305028a7f8d66647c8d06eeaa12cb73838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da55e361a3ec7dca7a00a9ccac8e72cf43be0ebb8973fcac40e3bf86dac5237e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc7ab7764e2d893152e965392c916dff7876acb81d134cdec2615b253dbee64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d7262920f7a31f254be2b721019ad96c62da01
5bcc54afa32884321e1bba1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f441beed4d46d40f9fb0368468d7aae7721c17a369b5edd3b5946438cb89c724\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T10:38:49Z\\\",\\\"message\\\":\\\"e]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.183:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5b85277d-d9b7-4a68-8e4e-2b80594d9347}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1122 10:38:48.813284 6257 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-operator]} name:Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.183:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5b85277d-d9b7-4a68-8e4e-2b80594d9347}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1122 10:38:48.813329 6257 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1122 10:38:48.813440 6257 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1122 10:38:48.813477 6257 ovnkube.go:599] Stopped ovnkube\\\\nI1122 10:38:48.813495 6257 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1122 10:38:48.813544 6257 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3d7262920f7a31f254be2b721019ad96c62da015bcc54afa32884321e1bba1c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T10:39:07Z\\\",\\\"message\\\":\\\"*v1.Namespace event handler 1 for removal\\\\nI1122 10:39:07.442002 6580 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1122 10:39:07.442031 6580 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1122 10:39:07.442073 6580 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1122 10:39:07.442069 6580 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1122 10:39:07.442090 6580 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1122 10:39:07.442093 6580 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1122 10:39:07.442097 6580 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1122 10:39:07.442120 6580 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1122 10:39:07.442140 6580 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1122 10:39:07.442142 6580 factory.go:656] Stopping watch factory\\\\nI1122 10:39:07.442151 6580 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1122 10:39:07.442163 6580 ovnkube.go:599] Stopped 
ovnkube\\\\nI1122 10:39:07.442164 6580 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1122 10:39:07.442160 6580 handler.go:208] Removed *v1.Node event handler 2\\\\nI1122 10:39:07.442210 6580 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1122 10:39:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac96e66a7d3d7899a8196d07614bccdd741145d5dc57c26b30fc4e127c6a94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP
\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mfm49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:08Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.351590 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.351631 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.351642 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.351662 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.351673 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:08Z","lastTransitionTime":"2025-11-22T10:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.356132 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd883a9c-bd98-4ce0-9256-710c2311012f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab209a1a1db8448083e4994bbd6c236d67ddfd2ba6eff2bf3c05150f600ad698\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f447b1a4882759c19fd69e9acddae280d63009c5fc7d21b368b470160c53250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecffb7596767673ba91dbbee96f3fa4b32109c5d5145176e86618860187077de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92229718d5b39cbd9473102e7e569d8370d38e945387ff3f48ee9f4077d8d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92229718d5b39cbd9473102e7e569d8370d38e945387ff3f48ee9f4077d8d1fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:08Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.371400 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e05750665701a1973e974ba759b2a0ef54a89b24d63d33a9fce59e9aa7a21ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:08Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.384424 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:08Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.395728 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:08Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.404435 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9qv88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"751566de-1913-4dd6-9054-21febc661c27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902be5c4bb0f9aaca4a21ca0af13b2dc29d682c30e1e2d281beb6e4e15b6aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jckg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9qv88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:08Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.412795 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.412819 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvsrl" Nov 22 10:39:08 crc kubenswrapper[4772]: E1122 10:39:08.413024 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 10:39:08 crc kubenswrapper[4772]: E1122 10:39:08.413137 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvsrl" podUID="c89edce7-fac8-4954-b2e9-420f0f2de6a8" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.416998 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s4mvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d73fd58d-561a-4b16-9f9d-49ae966edb24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ae06ffdc2c0f4cebcbf28f2997026db4d45bcd0c95776e5a1d94527503041d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s4mvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:08Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.428952 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"888813e4-14b2-4bbc-badf-3fd7c315a740\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ade6ffb5c4bd3c19a2d85f21de1e0f198d6729b45df79233c8db3c73aff066f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7c687d1a3e98c09917692169294f8549b0f1ddeddcc97c073da4d8e5c17e012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountP
ath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1452c164ce569dfa4665a70113fb965905d1974744637904d6bfba2e35446f11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b331f41571928038bb597f1e94a67d24e726471c1a22082607dd26c11e8ea33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:08Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.441421 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9557fb28-d21d-45a3-b7b9-170936e83f56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a77d132d925d70472ff5456bf4521c0612c319296dd1f3fa31c68d3c84fb347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddce747c72c887bb7dff7503755d81d0e65305d2545f93dcb0d20bd6bf32b495\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187f5633fd42f380d4976965809b538530acef067adc7e02203f95df532dbbfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab2905e4976fc61f69872e9b37db10f5598fe3f96e0a124d22848215a7b5817\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 10:38:29.060158 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 10:38:29.060179 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 10:38:29.060386 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\"\\\\nI1122 10:38:29.060380 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763807893\\\\\\\\\\\\\\\" (2025-11-22 10:38:12 +0000 UTC to 2025-12-22 10:38:13 +0000 UTC (now=2025-11-22 10:38:29.060343851 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060557 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763807909\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763807908\\\\\\\\\\\\\\\" (2025-11-22 09:38:28 +0000 UTC to 2026-11-22 09:38:28 +0000 UTC (now=2025-11-22 10:38:29.060522446 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060582 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 10:38:29.060604 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 10:38:29.060636 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 10:38:29.062066 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062334 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062385 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 10:38:29.063702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42fc0afe421b18b7212836931c95487807ac0b32f6e87d92e847bd7826aedcd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:08Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.451550 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2386c238-461f-4956-940f-ac3c26eb052e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b3e223067a57a7ae418e1de80dff3c7537e0506e040028c413225f25397f03d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4247d5ab0095ab6c7315fdfcde34f8407d870723059e61c3417c20f80dc06ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwshd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:08Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.453987 4772 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.454027 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.454038 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.454071 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.454084 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:08Z","lastTransitionTime":"2025-11-22T10:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.466549 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z6xtb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e11c7f86-73db-4015-9fe5-c0b5047c19a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://901e96ef18a53b8231a138e235e8ed145d94f111dc515aaa5e5415c18535e457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d7a31c58b763dd30aa10f250fa654756dbb10166064c3adf6d313cc13b066f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2
c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11d7a31c58b763dd30aa10f250fa654756dbb10166064c3adf6d313cc13b066f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://533b405c883d98c14aca92e6a23bd1743dc0a76ed8256f2b892746195602fb03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://533b405c883d98c14aca92e6a23bd1743dc0a76ed8256f2b892746195602fb03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c508fde56a767c063afbb65f6272695d739d7d0c594efcafd7465397309ef931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c508fde56a767c063afbb65f6272695d739d7d0c594efcafd7465397309ef931\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dabd0e99ffb72fe38680ba04e6dc2cc342f2b21368c0646ac51924fdca817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dabd0e99ffb72fe38680ba04e6dc2cc342f2b21368c0646ac51924fdca817b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfe1b37351de69494e5f42516d1363a1acd97912fd754ddca224b61b7020b0e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfe1b37351de69494e5f42516d1363a1acd97912fd754ddca224b61b7020b0e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8442718fb97d911a608083bbbcee822108ea95c9896e42b0f37fa05c0b8d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df8442718fb97d911a608083bbbcee822108ea95c9896e42b0f37fa05c0b8d3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:45Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z6xtb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:08Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.481333 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af3d235e676f9a66e16a76f30d940c652287b80507d2335b94aaad54d2aae071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b9e216df73a0c6e89da9a01b70ca7393dea61d59aa4237584add3411d3362d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/
ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:08Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.494779 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e026ddd84cdd62f9fa89b0c173fa464361017d3603d80e2b88a1b04af13487c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:08Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.509178 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n8qfx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0565ed-eb43-43a3-974c-45a23e9615a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73a731bf43107a62df11bd3a033a52f25d32911218822f9c5ac1f3d9b6f718ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98gmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a2e02f93671de603bc005b4f7561a62bb7681d678a3348b837656ae2af54f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98gmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-n8qfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:08Z is after 2025-08-24T17:21:41Z" Nov 22 
10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.556752 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.556793 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.556802 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.556817 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.556827 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:08Z","lastTransitionTime":"2025-11-22T10:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.659079 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.659121 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.659130 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.659146 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.659156 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:08Z","lastTransitionTime":"2025-11-22T10:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.761991 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.762033 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.762067 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.762087 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.762101 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:08Z","lastTransitionTime":"2025-11-22T10:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.846806 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mfm49_fd84e05e-cfd6-46d5-bd23-30689addcd8b/ovnkube-controller/2.log" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.850148 4772 scope.go:117] "RemoveContainer" containerID="a3d7262920f7a31f254be2b721019ad96c62da015bcc54afa32884321e1bba1c" Nov 22 10:39:08 crc kubenswrapper[4772]: E1122 10:39:08.850365 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-mfm49_openshift-ovn-kubernetes(fd84e05e-cfd6-46d5-bd23-30689addcd8b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" podUID="fd84e05e-cfd6-46d5-bd23-30689addcd8b" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.863891 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.863943 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.863962 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.863988 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.864008 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:08Z","lastTransitionTime":"2025-11-22T10:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.874182 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9557fb28-d21d-45a3-b7b9-170936e83f56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a77d132d925d70472ff5456bf4521c0612c319296dd1f3fa31c68d3c84fb347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddce747c72c887bb7dff7503755d81d0e65305d2545f93dcb0d20bd6bf32b495\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187f5633fd42f380d4976965809b538530acef067adc7e02203f95df532dbbfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab2905e4976fc61f69872e9b37db10f5598fe3f96e0a124d22848215a7b5817\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 10:38:29.060158 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 10:38:29.060179 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 10:38:29.060386 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\"\\\\nI1122 10:38:29.060380 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763807893\\\\\\\\\\\\\\\" (2025-11-22 10:38:12 +0000 UTC to 2025-12-22 10:38:13 +0000 UTC (now=2025-11-22 10:38:29.060343851 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060557 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763807909\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763807908\\\\\\\\\\\\\\\" (2025-11-22 09:38:28 +0000 UTC to 2026-11-22 09:38:28 +0000 UTC (now=2025-11-22 10:38:29.060522446 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060582 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 10:38:29.060604 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 10:38:29.060636 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 10:38:29.062066 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062334 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062385 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 10:38:29.063702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42fc0afe421b18b7212836931c95487807ac0b32f6e87d92e847bd7826aedcd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:08Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.886104 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd883a9c-bd98-4ce0-9256-710c2311012f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab209a1a1db8448083e4994bbd6c236d67ddfd2ba6eff2bf3c05150f600ad698\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f447b1a4882759c19fd69e9acddae280d63009c5fc7d21b368b470160c53250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecffb7596767673ba91dbbee96f3fa4b32109c5d5145176e86618860187077de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92229718d5b39cbd9473102e7e569d8370d38e945387ff3f48ee9f4077d8d1fd\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92229718d5b39cbd9473102e7e569d8370d38e945387ff3f48ee9f4077d8d1fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:08Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.898659 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e05750665701a1973e974ba759b2a0ef54a89b24d63d33a9fce59e9aa7a21ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:08Z is after 
2025-08-24T17:21:41Z" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.910152 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:08Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.921140 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:08Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.930341 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9qv88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"751566de-1913-4dd6-9054-21febc661c27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902be5c4bb0f9aaca4a21ca0af13b2dc29d682c30e1e2d281beb6e4e15b6aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jckg9\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9qv88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:08Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.941547 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s4mvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d73fd58d-561a-4b16-9f9d-49ae966edb24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ae06ffdc2c0f4cebcbf28f2997026db4d45bcd0c95776e5a1d94527503041d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.
d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s4mvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:08Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.953157 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"888813e4-14b2-4bbc-badf-3fd7c315a740\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ade6ffb5c4bd3c19a2d85f21de1e0f198d6729b45df79233c8db3c73aff066f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7c687d1a3e98c09917692169294f8549b0f1ddeddcc97c073da4d8e5c17e012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":
{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1452c164ce569dfa4665a70113fb965905d1974744637904d6bfba2e35446f11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b331f41571928038bb597f1e94a67d24e726471c1a22082607dd26c11e8ea33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:08Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.966094 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.966127 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.966138 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.966154 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.966166 4772 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:08Z","lastTransitionTime":"2025-11-22T10:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.966689 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z6xtb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e11c7f86-73db-4015-9fe5-c0b5047c19a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://901e96ef18a53b8231a138e235e8ed145d94f111dc515aaa5e5415c18535e457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d7a31c58b763dd30aa10f250fa654756dbb10166064c3adf6d313cc13b066f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11d7a31c58b763dd30aa10f250fa654756dbb10166064c3adf6d313cc13b066f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://533b405c883d98c14aca92e6a23bd1743dc0a76ed8256f2b892746195602fb03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://533b405c883d98c14aca92e6a23bd1743dc0a76ed8256f2b892746195602fb03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c508fde56a767c063afbb65f6272695d739d7d0c594efcafd7465397309ef931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c508fde56a767c063afbb65f6272695d739d7d0c594efcafd7465397309ef931\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dabd0e99ffb72fe38680ba04e6dc2cc342f2b21368c0646ac51924fdca817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"co
ntainerID\\\":\\\"cri-o://8dabd0e99ffb72fe38680ba04e6dc2cc342f2b21368c0646ac51924fdca817b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfe1b37351de69494e5f42516d1363a1acd97912fd754ddca224b61b7020b0e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfe1b37351de69494e5f42516d1363a1acd97912fd754ddca224b61b7020b0e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8442718fb97d911a608083bbbcee822108ea95c9896e42b0f37fa05c0b8d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df8442718fb97d911a608083bbbcee822108ea95c9896e42b0f37fa05c0b8d3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z6xtb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:08Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.977513 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2386c238-461f-4956-940f-ac3c26eb052e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b3e223067a57a7ae418e1de80dff3c7537e0506e040028c413225f25397f03d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4247d5ab0095ab6c7315fdfcde34f8407d870723059e61c3417c20f80dc06ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-wwshd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:08Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.988565 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af3d235e676f9a66e16a76f30d940c652287b80507d2335b94aaad54d2aae071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b9e216df73a0c6e89da9a01b70ca7393dea61d59aa4237584add3411d3362d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:08Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:08 crc kubenswrapper[4772]: I1122 10:39:08.998760 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e026ddd84cdd62f9fa89b0c173fa464361017d3603d80e2b88a1b04af13487c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:08Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.008886 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n8qfx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0565ed-eb43-43a3-974c-45a23e9615a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73a731bf43107a62df11bd3a033a52f25d32911218822f9c5ac1f3d9b6f718ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98gmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a2e02f93671de603bc005b4f7561a62bb7681d678a3348b837656ae2af54f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98gmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-n8qfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:09Z is after 2025-08-24T17:21:41Z" Nov 22 
10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.024153 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd84e05e-cfd6-46d5-bd23-30689addcd8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://405215cb0867af5a98dd2c0948c12867783df5de69f3e68c0c420e94bd41fbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3372362ced27c1e5e6ed3271a6b0fd19c04ce1d12a6c0b45e60b4f19b9d9d9e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7959ee06497c972cc6915e3ab426dc
2bba73e58343902084895fcca45fc97a61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dfec56a475b9928ac7803e64d6fd305028a7f8d66647c8d06eeaa12cb73838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da55e361a3ec7dca7a00a9ccac8e72cf43be0ebb8973fcac40e3bf86dac5237e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc7ab7764e2d893152e965392c916dff7876acb81d134cdec2615b253dbee64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d7262920f7a31f254be2b721019ad96c62da015bcc54afa32884321e1bba1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3d7262920f7a31f254be2b721019ad96c62da015bcc54afa32884321e1bba1c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T10:39:07Z\\\",\\\"message\\\":\\\"*v1.Namespace event handler 1 for removal\\\\nI1122 10:39:07.442002 6580 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1122 10:39:07.442031 6580 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1122 10:39:07.442073 6580 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1122 10:39:07.442069 6580 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1122 10:39:07.442090 6580 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1122 10:39:07.442093 6580 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1122 10:39:07.442097 6580 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1122 10:39:07.442120 6580 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1122 10:39:07.442140 6580 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1122 10:39:07.442142 6580 factory.go:656] Stopping watch factory\\\\nI1122 10:39:07.442151 6580 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1122 10:39:07.442163 6580 ovnkube.go:599] Stopped ovnkube\\\\nI1122 10:39:07.442164 6580 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1122 10:39:07.442160 6580 handler.go:208] Removed *v1.Node event handler 2\\\\nI1122 10:39:07.442210 6580 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1122 10:39:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:39:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mfm49_openshift-ovn-kubernetes(fd84e05e-cfd6-46d5-bd23-30689addcd8b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac96e66a7d3d7899a8196d07614bccdd741145d5dc57c26b30fc4e127c6a94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mfm49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:09Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.033567 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mbpk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e39748b-4fa5-4a70-8921-dc3dc814f124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c334585be8f67849986e80c7a7ea777340e93f963ba58fa0fb7b36b16a73142b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m5ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mbpk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:09Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.042230 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fvsrl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c89edce7-fac8-4954-b2e9-420f0f2de6a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x76mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x76mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:44Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-fvsrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:09Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.051849 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:09Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.068944 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.068980 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.068991 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.069006 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.069018 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:09Z","lastTransitionTime":"2025-11-22T10:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.171466 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.171503 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.171513 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.171531 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.171542 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:09Z","lastTransitionTime":"2025-11-22T10:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.274731 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.274771 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.274779 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.274791 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.274800 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:09Z","lastTransitionTime":"2025-11-22T10:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.377765 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.378319 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.378381 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.378442 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.378521 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:09Z","lastTransitionTime":"2025-11-22T10:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.412420 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.412420 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:39:09 crc kubenswrapper[4772]: E1122 10:39:09.412725 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 10:39:09 crc kubenswrapper[4772]: E1122 10:39:09.412648 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.481378 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.481423 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.481434 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.481450 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.481483 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:09Z","lastTransitionTime":"2025-11-22T10:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.583226 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.583258 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.583266 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.583279 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.583287 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:09Z","lastTransitionTime":"2025-11-22T10:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.685934 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.685980 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.685991 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.686007 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.686019 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:09Z","lastTransitionTime":"2025-11-22T10:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.788611 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.788645 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.788656 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.788669 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.788677 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:09Z","lastTransitionTime":"2025-11-22T10:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.891431 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.891489 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.891500 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.891517 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.891529 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:09Z","lastTransitionTime":"2025-11-22T10:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.900375 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.900406 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.900418 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.900431 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.900443 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:09Z","lastTransitionTime":"2025-11-22T10:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:09 crc kubenswrapper[4772]: E1122 10:39:09.911771 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"902c1d4e-ffb4-4b1b-add7-8f7170ab4bc7\\\",\\\"systemUUID\\\":\\\"856a4d77-e4e0-4420-9e80-7c5223144311\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:09Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.915573 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.915610 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.915623 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.915638 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.915649 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:09Z","lastTransitionTime":"2025-11-22T10:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:09 crc kubenswrapper[4772]: E1122 10:39:09.925617 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"902c1d4e-ffb4-4b1b-add7-8f7170ab4bc7\\\",\\\"systemUUID\\\":\\\"856a4d77-e4e0-4420-9e80-7c5223144311\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:09Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.929118 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.929156 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.929170 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.929185 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.929197 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:09Z","lastTransitionTime":"2025-11-22T10:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:09 crc kubenswrapper[4772]: E1122 10:39:09.941036 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"902c1d4e-ffb4-4b1b-add7-8f7170ab4bc7\\\",\\\"systemUUID\\\":\\\"856a4d77-e4e0-4420-9e80-7c5223144311\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:09Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.944861 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.944888 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.944899 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.944914 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.944927 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:09Z","lastTransitionTime":"2025-11-22T10:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:09 crc kubenswrapper[4772]: E1122 10:39:09.956126 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"902c1d4e-ffb4-4b1b-add7-8f7170ab4bc7\\\",\\\"systemUUID\\\":\\\"856a4d77-e4e0-4420-9e80-7c5223144311\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:09Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.961792 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.961822 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.961831 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.961844 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.961854 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:09Z","lastTransitionTime":"2025-11-22T10:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:09 crc kubenswrapper[4772]: E1122 10:39:09.972767 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"902c1d4e-ffb4-4b1b-add7-8f7170ab4bc7\\\",\\\"systemUUID\\\":\\\"856a4d77-e4e0-4420-9e80-7c5223144311\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:09Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:09 crc kubenswrapper[4772]: E1122 10:39:09.972891 4772 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.994424 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.994454 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.994466 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.994481 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:09 crc kubenswrapper[4772]: I1122 10:39:09.994492 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:09Z","lastTransitionTime":"2025-11-22T10:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:10 crc kubenswrapper[4772]: I1122 10:39:10.097390 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:10 crc kubenswrapper[4772]: I1122 10:39:10.097426 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:10 crc kubenswrapper[4772]: I1122 10:39:10.097436 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:10 crc kubenswrapper[4772]: I1122 10:39:10.097452 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:10 crc kubenswrapper[4772]: I1122 10:39:10.097463 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:10Z","lastTransitionTime":"2025-11-22T10:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:10 crc kubenswrapper[4772]: I1122 10:39:10.199658 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:10 crc kubenswrapper[4772]: I1122 10:39:10.199704 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:10 crc kubenswrapper[4772]: I1122 10:39:10.199715 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:10 crc kubenswrapper[4772]: I1122 10:39:10.199729 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:10 crc kubenswrapper[4772]: I1122 10:39:10.199739 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:10Z","lastTransitionTime":"2025-11-22T10:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:10 crc kubenswrapper[4772]: I1122 10:39:10.302895 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:10 crc kubenswrapper[4772]: I1122 10:39:10.302940 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:10 crc kubenswrapper[4772]: I1122 10:39:10.302950 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:10 crc kubenswrapper[4772]: I1122 10:39:10.302968 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:10 crc kubenswrapper[4772]: I1122 10:39:10.302981 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:10Z","lastTransitionTime":"2025-11-22T10:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:10 crc kubenswrapper[4772]: I1122 10:39:10.406197 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:10 crc kubenswrapper[4772]: I1122 10:39:10.406239 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:10 crc kubenswrapper[4772]: I1122 10:39:10.406251 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:10 crc kubenswrapper[4772]: I1122 10:39:10.406267 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:10 crc kubenswrapper[4772]: I1122 10:39:10.406278 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:10Z","lastTransitionTime":"2025-11-22T10:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:10 crc kubenswrapper[4772]: I1122 10:39:10.412440 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvsrl" Nov 22 10:39:10 crc kubenswrapper[4772]: I1122 10:39:10.412475 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 10:39:10 crc kubenswrapper[4772]: E1122 10:39:10.412544 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fvsrl" podUID="c89edce7-fac8-4954-b2e9-420f0f2de6a8" Nov 22 10:39:10 crc kubenswrapper[4772]: E1122 10:39:10.412633 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 10:39:10 crc kubenswrapper[4772]: I1122 10:39:10.509201 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:10 crc kubenswrapper[4772]: I1122 10:39:10.509425 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:10 crc kubenswrapper[4772]: I1122 10:39:10.509485 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:10 crc kubenswrapper[4772]: I1122 10:39:10.509589 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:10 crc kubenswrapper[4772]: I1122 10:39:10.509653 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:10Z","lastTransitionTime":"2025-11-22T10:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:10 crc kubenswrapper[4772]: I1122 10:39:10.612115 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:10 crc kubenswrapper[4772]: I1122 10:39:10.612156 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:10 crc kubenswrapper[4772]: I1122 10:39:10.612164 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:10 crc kubenswrapper[4772]: I1122 10:39:10.612179 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:10 crc kubenswrapper[4772]: I1122 10:39:10.612188 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:10Z","lastTransitionTime":"2025-11-22T10:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:10 crc kubenswrapper[4772]: I1122 10:39:10.714299 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:10 crc kubenswrapper[4772]: I1122 10:39:10.714370 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:10 crc kubenswrapper[4772]: I1122 10:39:10.714393 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:10 crc kubenswrapper[4772]: I1122 10:39:10.714422 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:10 crc kubenswrapper[4772]: I1122 10:39:10.714443 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:10Z","lastTransitionTime":"2025-11-22T10:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:10 crc kubenswrapper[4772]: I1122 10:39:10.816547 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:10 crc kubenswrapper[4772]: I1122 10:39:10.816585 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:10 crc kubenswrapper[4772]: I1122 10:39:10.816595 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:10 crc kubenswrapper[4772]: I1122 10:39:10.816611 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:10 crc kubenswrapper[4772]: I1122 10:39:10.816620 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:10Z","lastTransitionTime":"2025-11-22T10:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:10 crc kubenswrapper[4772]: I1122 10:39:10.919022 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:10 crc kubenswrapper[4772]: I1122 10:39:10.919068 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:10 crc kubenswrapper[4772]: I1122 10:39:10.919077 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:10 crc kubenswrapper[4772]: I1122 10:39:10.919090 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:10 crc kubenswrapper[4772]: I1122 10:39:10.919099 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:10Z","lastTransitionTime":"2025-11-22T10:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:11 crc kubenswrapper[4772]: I1122 10:39:11.021617 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:11 crc kubenswrapper[4772]: I1122 10:39:11.021667 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:11 crc kubenswrapper[4772]: I1122 10:39:11.021678 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:11 crc kubenswrapper[4772]: I1122 10:39:11.021697 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:11 crc kubenswrapper[4772]: I1122 10:39:11.021709 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:11Z","lastTransitionTime":"2025-11-22T10:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:11 crc kubenswrapper[4772]: I1122 10:39:11.125290 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:11 crc kubenswrapper[4772]: I1122 10:39:11.125597 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:11 crc kubenswrapper[4772]: I1122 10:39:11.126137 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:11 crc kubenswrapper[4772]: I1122 10:39:11.126353 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:11 crc kubenswrapper[4772]: I1122 10:39:11.126558 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:11Z","lastTransitionTime":"2025-11-22T10:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:11 crc kubenswrapper[4772]: I1122 10:39:11.229434 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:11 crc kubenswrapper[4772]: I1122 10:39:11.229485 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:11 crc kubenswrapper[4772]: I1122 10:39:11.229495 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:11 crc kubenswrapper[4772]: I1122 10:39:11.229540 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:11 crc kubenswrapper[4772]: I1122 10:39:11.229553 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:11Z","lastTransitionTime":"2025-11-22T10:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:11 crc kubenswrapper[4772]: I1122 10:39:11.332216 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:11 crc kubenswrapper[4772]: I1122 10:39:11.332243 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:11 crc kubenswrapper[4772]: I1122 10:39:11.332251 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:11 crc kubenswrapper[4772]: I1122 10:39:11.332264 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:11 crc kubenswrapper[4772]: I1122 10:39:11.332273 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:11Z","lastTransitionTime":"2025-11-22T10:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:11 crc kubenswrapper[4772]: I1122 10:39:11.412495 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 10:39:11 crc kubenswrapper[4772]: E1122 10:39:11.412624 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 10:39:11 crc kubenswrapper[4772]: I1122 10:39:11.413311 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:39:11 crc kubenswrapper[4772]: E1122 10:39:11.413630 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 10:39:11 crc kubenswrapper[4772]: I1122 10:39:11.428579 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e026ddd84cdd62f9fa89b0c173fa464361017d3603d80e2b88a1b04af13487c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:11Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:11 crc kubenswrapper[4772]: I1122 10:39:11.434012 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:11 crc kubenswrapper[4772]: I1122 10:39:11.434063 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:11 crc kubenswrapper[4772]: I1122 10:39:11.434073 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:11 crc kubenswrapper[4772]: I1122 10:39:11.434088 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:11 crc kubenswrapper[4772]: I1122 10:39:11.434099 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:11Z","lastTransitionTime":"2025-11-22T10:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:11 crc kubenswrapper[4772]: I1122 10:39:11.440746 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n8qfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0565ed-eb43-43a3-974c-45a23e9615a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73a731bf43107a62df11bd3a033a52f25d32911218822f9c5ac1f3d9b6f718ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98gmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a2e02f93671de603bc005b4f7561a62bb7681d678a3348b837656ae2af54f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98gmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-n8qfx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:11Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:11 crc kubenswrapper[4772]: I1122 10:39:11.458308 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd84e05e-cfd6-46d5-bd23-30689addcd8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://405215cb0867af5a98dd2c0948c12867783df5de69f3e68c0c420e94bd41fbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3372362ced27c1e5e6ed3271a6b0fd19c04ce1d12a6c0b45e60b4f19b9d9d9e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7959ee06497c972cc6915e3ab426dc2bba73e58343902084895fcca45fc97a61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dfec56a475b9928ac7803e64d6fd305028a7f8d66647c8d06eeaa12cb73838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da55e361a3ec7dca7a00a9ccac8e72cf43be0ebb8973fcac40e3bf86dac5237e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://bc7ab7764e2d893152e965392c916dff7876acb81d134cdec2615b253dbee64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d7262920f7a31f254be2b721019ad96c62da015bcc54afa32884321e1bba1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3d7262920f7a31f254be2b721019ad96c62da015bcc54afa32884321e1bba1c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T10:39:07Z\\\",\\\"message\\\":\\\"*v1.Namespace event handler 1 for removal\\\\nI1122 10:39:07.442002 6580 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1122 10:39:07.442031 6580 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1122 10:39:07.442073 6580 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1122 10:39:07.442069 6580 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1122 10:39:07.442090 6580 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1122 10:39:07.442093 6580 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1122 10:39:07.442097 6580 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1122 10:39:07.442120 6580 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1122 10:39:07.442140 6580 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1122 10:39:07.442142 6580 factory.go:656] Stopping watch factory\\\\nI1122 10:39:07.442151 6580 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1122 10:39:07.442163 6580 ovnkube.go:599] Stopped ovnkube\\\\nI1122 10:39:07.442164 6580 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1122 10:39:07.442160 6580 handler.go:208] Removed *v1.Node event handler 2\\\\nI1122 10:39:07.442210 6580 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1122 
10:39:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:39:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-mfm49_openshift-ovn-kubernetes(fd84e05e-cfd6-46d5-bd23-30689addcd8b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac96e66a7d3d7899a8196d07614bccdd741145d5dc57c26b30fc4e127c6a94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mfm49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:11Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:11 crc kubenswrapper[4772]: I1122 10:39:11.469731 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mbpk7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e39748b-4fa5-4a70-8921-dc3dc814f124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c334585be8f67849986e80c7a7ea777340e93f963ba58fa0fb7b36b16a73142b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m5ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mbpk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:11Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:11 crc kubenswrapper[4772]: I1122 10:39:11.479986 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fvsrl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c89edce7-fac8-4954-b2e9-420f0f2de6a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x76mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x76mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fvsrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:11Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:11 crc kubenswrapper[4772]: I1122 10:39:11.493969 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:11Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:11 crc kubenswrapper[4772]: I1122 10:39:11.506728 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9557fb28-d21d-45a3-b7b9-170936e83f56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a77d132d925d70472ff5456bf4521c0612c319296dd1f3fa31c68d3c84fb347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddce747c72c887bb7dff7503755d81d0e65305d2545f93dcb0d20bd6bf32b495\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluste
r-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187f5633fd42f380d4976965809b538530acef067adc7e02203f95df532dbbfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab2905e4976fc61f69872e9b37db10f5598fe3f96e0a124d22848215a7b5817\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 10:38:29.060158 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 10:38:29.060179 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 10:38:29.060386 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\"\\\\nI1122 10:38:29.060380 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763807893\\\\\\\\\\\\\\\" (2025-11-22 10:38:12 +0000 UTC to 2025-12-22 10:38:13 +0000 UTC (now=2025-11-22 10:38:29.060343851 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060557 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763807909\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763807908\\\\\\\\\\\\\\\" (2025-11-22 09:38:28 +0000 UTC to 2026-11-22 09:38:28 
+0000 UTC (now=2025-11-22 10:38:29.060522446 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060582 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 10:38:29.060604 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 10:38:29.060636 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 10:38:29.062066 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062334 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062385 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 10:38:29.063702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42fc0afe421b18b7212836931c95487807ac0b32f6e87d92e847bd7826aedcd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:11Z is after 
2025-08-24T17:21:41Z" Nov 22 10:39:11 crc kubenswrapper[4772]: I1122 10:39:11.517297 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd883a9c-bd98-4ce0-9256-710c2311012f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab209a1a1db8448083e4994bbd6c236d67ddfd2ba6eff2bf3c05150f600ad698\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f447b1a4882759c19fd69e9acddae280d63009c5fc7d21b368b470160c53250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecffb7596767673ba91dbbee96f3fa4b32109c5d5145176e86618860187077de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}
],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92229718d5b39cbd9473102e7e569d8370d38e945387ff3f48ee9f4077d8d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92229718d5b39cbd9473102e7e569d8370d38e945387ff3f48ee9f4077d8d1fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:11Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:11 crc kubenswrapper[4772]: I1122 10:39:11.530433 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e05750665701a1973e974ba759b2a0ef54a89b24d63d33a9fce59e9aa7a21ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:11Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:11 crc kubenswrapper[4772]: I1122 10:39:11.536362 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:11 crc kubenswrapper[4772]: I1122 10:39:11.536385 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:11 crc kubenswrapper[4772]: I1122 10:39:11.536392 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:11 crc kubenswrapper[4772]: I1122 10:39:11.536404 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:11 crc kubenswrapper[4772]: I1122 10:39:11.536413 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:11Z","lastTransitionTime":"2025-11-22T10:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:11 crc kubenswrapper[4772]: I1122 10:39:11.543689 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:11Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:11 crc kubenswrapper[4772]: I1122 10:39:11.555297 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:11Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:11 crc kubenswrapper[4772]: I1122 10:39:11.570277 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9qv88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"751566de-1913-4dd6-9054-21febc661c27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902be5c4bb0f9aaca4a21ca0af13b2dc29d682c30e1e2d281beb6e4e15b6aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jckg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9qv88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:11Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:11 crc kubenswrapper[4772]: I1122 10:39:11.584661 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s4mvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d73fd58d-561a-4b16-9f9d-49ae966edb24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ae06ffdc2c0f4cebcbf28f2997026db4d45bcd0c95776e5a1d94527503041d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s4mvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:11Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:11 crc kubenswrapper[4772]: I1122 10:39:11.596303 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"888813e4-14b2-4bbc-badf-3fd7c315a740\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ade6ffb5c4bd3c19a2d85f21de1e0f198d6729b45df79233c8db3c73aff066f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7c687d1a3e98c09917692169294f8549b0f1ddeddcc97c073da4d8e5c17e012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1452c164ce569dfa4665a70113fb965905d1974744637904d6bfba2e35446f11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha
256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b331f41571928038bb597f1e94a67d24e726471c1a22082607dd26c11e8ea33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:11Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:11 crc kubenswrapper[4772]: I1122 10:39:11.611248 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z6xtb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e11c7f86-73db-4015-9fe5-c0b5047c19a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://901e96ef18a53b8231a138e235e8ed145d94f111dc515aaa5e5415c18535e457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d7a31c58b763dd30aa10f250fa654756dbb10166064c3adf6d313cc13b066f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11d7a31c58b763dd30aa10f250fa654756dbb10166064c3adf6d313cc13b066f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://533b405c883d98c14aca92e6a23bd1743dc0a76ed8256f2b892746195602fb03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://533b405c883d98c14aca92e6a23bd1743dc0a76ed8256f2b892746195602fb03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c508fde56a767c063afbb65f6272695d739d7d0c594efcafd7465397309ef931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c508fde56a767c063afbb65f6272695d739d7d0c594efcafd7465397309ef931\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dabd0e99ffb72fe38680ba04e6dc2cc342f2b21368c0646ac51924fdca817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dabd0e99ffb72fe38680ba04e6dc2cc342f2b21368c0646ac51924fdca817b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfe1b37351de69494e5f42516d1363a1acd97912fd754ddca224b61b7020b0e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfe1b37351de69494e5f42516d1363a1acd97912fd754ddca224b61b7020b0e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8442718fb97d911a608083bbbcee822108ea95c9896e42b0f37fa05c0b8d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df8442718fb97d911a608083bbbcee822108ea95c9896e42b0f37fa05c0b8d3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z6xtb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:11Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:11 crc kubenswrapper[4772]: I1122 10:39:11.624193 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2386c238-461f-4956-940f-ac3c26eb052e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b3e223067a57a7ae418e1de80dff3c7537e0506e040028c413225f25397f03d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4247d5ab0095ab6c7315fdfcde34f8407d870723059e61c3417c20f80dc06ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwshd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:11Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:11 crc kubenswrapper[4772]: I1122 10:39:11.636299 4772 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af3d235e676f9a66e16a76f30d940c652287b80507d2335b94aaad54d2aae071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b9e216df73a0c6e89da9a01b70ca7393dea61d59aa4237584add3411d3362d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:11Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:11 crc kubenswrapper[4772]: I1122 10:39:11.639386 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:11 crc kubenswrapper[4772]: I1122 10:39:11.639415 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:11 
crc kubenswrapper[4772]: I1122 10:39:11.639424 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:11 crc kubenswrapper[4772]: I1122 10:39:11.639440 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:11 crc kubenswrapper[4772]: I1122 10:39:11.639449 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:11Z","lastTransitionTime":"2025-11-22T10:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:11 crc kubenswrapper[4772]: I1122 10:39:11.741698 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:11 crc kubenswrapper[4772]: I1122 10:39:11.741745 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:11 crc kubenswrapper[4772]: I1122 10:39:11.741753 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:11 crc kubenswrapper[4772]: I1122 10:39:11.741768 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:11 crc kubenswrapper[4772]: I1122 10:39:11.741777 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:11Z","lastTransitionTime":"2025-11-22T10:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:11 crc kubenswrapper[4772]: I1122 10:39:11.844479 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:11 crc kubenswrapper[4772]: I1122 10:39:11.844756 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:11 crc kubenswrapper[4772]: I1122 10:39:11.844765 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:11 crc kubenswrapper[4772]: I1122 10:39:11.844779 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:11 crc kubenswrapper[4772]: I1122 10:39:11.844790 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:11Z","lastTransitionTime":"2025-11-22T10:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:11 crc kubenswrapper[4772]: I1122 10:39:11.947205 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:11 crc kubenswrapper[4772]: I1122 10:39:11.947265 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:11 crc kubenswrapper[4772]: I1122 10:39:11.947284 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:11 crc kubenswrapper[4772]: I1122 10:39:11.947311 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:11 crc kubenswrapper[4772]: I1122 10:39:11.947329 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:11Z","lastTransitionTime":"2025-11-22T10:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:12 crc kubenswrapper[4772]: I1122 10:39:12.050173 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:12 crc kubenswrapper[4772]: I1122 10:39:12.050212 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:12 crc kubenswrapper[4772]: I1122 10:39:12.050224 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:12 crc kubenswrapper[4772]: I1122 10:39:12.050240 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:12 crc kubenswrapper[4772]: I1122 10:39:12.050250 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:12Z","lastTransitionTime":"2025-11-22T10:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:12 crc kubenswrapper[4772]: I1122 10:39:12.153638 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:12 crc kubenswrapper[4772]: I1122 10:39:12.153688 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:12 crc kubenswrapper[4772]: I1122 10:39:12.153700 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:12 crc kubenswrapper[4772]: I1122 10:39:12.153721 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:12 crc kubenswrapper[4772]: I1122 10:39:12.153733 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:12Z","lastTransitionTime":"2025-11-22T10:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:12 crc kubenswrapper[4772]: I1122 10:39:12.256381 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:12 crc kubenswrapper[4772]: I1122 10:39:12.256434 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:12 crc kubenswrapper[4772]: I1122 10:39:12.256442 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:12 crc kubenswrapper[4772]: I1122 10:39:12.256458 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:12 crc kubenswrapper[4772]: I1122 10:39:12.256468 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:12Z","lastTransitionTime":"2025-11-22T10:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:12 crc kubenswrapper[4772]: I1122 10:39:12.360005 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:12 crc kubenswrapper[4772]: I1122 10:39:12.360094 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:12 crc kubenswrapper[4772]: I1122 10:39:12.360113 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:12 crc kubenswrapper[4772]: I1122 10:39:12.360132 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:12 crc kubenswrapper[4772]: I1122 10:39:12.360144 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:12Z","lastTransitionTime":"2025-11-22T10:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:12 crc kubenswrapper[4772]: I1122 10:39:12.412992 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvsrl" Nov 22 10:39:12 crc kubenswrapper[4772]: I1122 10:39:12.412992 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 10:39:12 crc kubenswrapper[4772]: E1122 10:39:12.413177 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fvsrl" podUID="c89edce7-fac8-4954-b2e9-420f0f2de6a8" Nov 22 10:39:12 crc kubenswrapper[4772]: E1122 10:39:12.413377 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 10:39:12 crc kubenswrapper[4772]: I1122 10:39:12.462569 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:12 crc kubenswrapper[4772]: I1122 10:39:12.462626 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:12 crc kubenswrapper[4772]: I1122 10:39:12.462638 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:12 crc kubenswrapper[4772]: I1122 10:39:12.462656 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:12 crc kubenswrapper[4772]: I1122 10:39:12.462668 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:12Z","lastTransitionTime":"2025-11-22T10:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:12 crc kubenswrapper[4772]: I1122 10:39:12.564659 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:12 crc kubenswrapper[4772]: I1122 10:39:12.564702 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:12 crc kubenswrapper[4772]: I1122 10:39:12.564714 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:12 crc kubenswrapper[4772]: I1122 10:39:12.564731 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:12 crc kubenswrapper[4772]: I1122 10:39:12.564745 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:12Z","lastTransitionTime":"2025-11-22T10:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:12 crc kubenswrapper[4772]: I1122 10:39:12.667635 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:12 crc kubenswrapper[4772]: I1122 10:39:12.667708 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:12 crc kubenswrapper[4772]: I1122 10:39:12.667723 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:12 crc kubenswrapper[4772]: I1122 10:39:12.667745 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:12 crc kubenswrapper[4772]: I1122 10:39:12.667759 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:12Z","lastTransitionTime":"2025-11-22T10:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:12 crc kubenswrapper[4772]: I1122 10:39:12.770891 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:12 crc kubenswrapper[4772]: I1122 10:39:12.770943 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:12 crc kubenswrapper[4772]: I1122 10:39:12.770956 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:12 crc kubenswrapper[4772]: I1122 10:39:12.770979 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:12 crc kubenswrapper[4772]: I1122 10:39:12.770993 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:12Z","lastTransitionTime":"2025-11-22T10:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:12 crc kubenswrapper[4772]: I1122 10:39:12.873495 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:12 crc kubenswrapper[4772]: I1122 10:39:12.873543 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:12 crc kubenswrapper[4772]: I1122 10:39:12.873553 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:12 crc kubenswrapper[4772]: I1122 10:39:12.873569 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:12 crc kubenswrapper[4772]: I1122 10:39:12.873580 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:12Z","lastTransitionTime":"2025-11-22T10:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:12 crc kubenswrapper[4772]: I1122 10:39:12.976569 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:12 crc kubenswrapper[4772]: I1122 10:39:12.976604 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:12 crc kubenswrapper[4772]: I1122 10:39:12.976615 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:12 crc kubenswrapper[4772]: I1122 10:39:12.976630 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:12 crc kubenswrapper[4772]: I1122 10:39:12.976640 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:12Z","lastTransitionTime":"2025-11-22T10:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:13 crc kubenswrapper[4772]: I1122 10:39:13.080472 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:13 crc kubenswrapper[4772]: I1122 10:39:13.080523 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:13 crc kubenswrapper[4772]: I1122 10:39:13.080536 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:13 crc kubenswrapper[4772]: I1122 10:39:13.080556 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:13 crc kubenswrapper[4772]: I1122 10:39:13.080570 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:13Z","lastTransitionTime":"2025-11-22T10:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:13 crc kubenswrapper[4772]: I1122 10:39:13.182777 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:13 crc kubenswrapper[4772]: I1122 10:39:13.182823 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:13 crc kubenswrapper[4772]: I1122 10:39:13.182835 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:13 crc kubenswrapper[4772]: I1122 10:39:13.182851 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:13 crc kubenswrapper[4772]: I1122 10:39:13.182862 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:13Z","lastTransitionTime":"2025-11-22T10:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:13 crc kubenswrapper[4772]: I1122 10:39:13.285827 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:13 crc kubenswrapper[4772]: I1122 10:39:13.285886 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:13 crc kubenswrapper[4772]: I1122 10:39:13.285907 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:13 crc kubenswrapper[4772]: I1122 10:39:13.285935 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:13 crc kubenswrapper[4772]: I1122 10:39:13.285957 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:13Z","lastTransitionTime":"2025-11-22T10:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:13 crc kubenswrapper[4772]: I1122 10:39:13.389811 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:13 crc kubenswrapper[4772]: I1122 10:39:13.389868 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:13 crc kubenswrapper[4772]: I1122 10:39:13.389885 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:13 crc kubenswrapper[4772]: I1122 10:39:13.389911 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:13 crc kubenswrapper[4772]: I1122 10:39:13.389926 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:13Z","lastTransitionTime":"2025-11-22T10:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:13 crc kubenswrapper[4772]: I1122 10:39:13.412863 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 10:39:13 crc kubenswrapper[4772]: I1122 10:39:13.412878 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:39:13 crc kubenswrapper[4772]: E1122 10:39:13.413187 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 10:39:13 crc kubenswrapper[4772]: E1122 10:39:13.413291 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 10:39:13 crc kubenswrapper[4772]: I1122 10:39:13.493015 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:13 crc kubenswrapper[4772]: I1122 10:39:13.493109 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:13 crc kubenswrapper[4772]: I1122 10:39:13.493124 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:13 crc kubenswrapper[4772]: I1122 10:39:13.493148 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:13 crc kubenswrapper[4772]: I1122 10:39:13.493164 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:13Z","lastTransitionTime":"2025-11-22T10:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:13 crc kubenswrapper[4772]: I1122 10:39:13.596615 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:13 crc kubenswrapper[4772]: I1122 10:39:13.596682 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:13 crc kubenswrapper[4772]: I1122 10:39:13.596699 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:13 crc kubenswrapper[4772]: I1122 10:39:13.596723 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:13 crc kubenswrapper[4772]: I1122 10:39:13.596741 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:13Z","lastTransitionTime":"2025-11-22T10:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:13 crc kubenswrapper[4772]: I1122 10:39:13.699956 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:13 crc kubenswrapper[4772]: I1122 10:39:13.700001 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:13 crc kubenswrapper[4772]: I1122 10:39:13.700014 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:13 crc kubenswrapper[4772]: I1122 10:39:13.700035 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:13 crc kubenswrapper[4772]: I1122 10:39:13.700066 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:13Z","lastTransitionTime":"2025-11-22T10:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:13 crc kubenswrapper[4772]: I1122 10:39:13.803687 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:13 crc kubenswrapper[4772]: I1122 10:39:13.803759 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:13 crc kubenswrapper[4772]: I1122 10:39:13.803780 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:13 crc kubenswrapper[4772]: I1122 10:39:13.803808 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:13 crc kubenswrapper[4772]: I1122 10:39:13.803861 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:13Z","lastTransitionTime":"2025-11-22T10:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:13 crc kubenswrapper[4772]: I1122 10:39:13.907689 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:13 crc kubenswrapper[4772]: I1122 10:39:13.907759 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:13 crc kubenswrapper[4772]: I1122 10:39:13.907778 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:13 crc kubenswrapper[4772]: I1122 10:39:13.907807 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:13 crc kubenswrapper[4772]: I1122 10:39:13.907826 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:13Z","lastTransitionTime":"2025-11-22T10:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:14 crc kubenswrapper[4772]: I1122 10:39:14.010897 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:14 crc kubenswrapper[4772]: I1122 10:39:14.010969 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:14 crc kubenswrapper[4772]: I1122 10:39:14.010989 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:14 crc kubenswrapper[4772]: I1122 10:39:14.011016 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:14 crc kubenswrapper[4772]: I1122 10:39:14.011034 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:14Z","lastTransitionTime":"2025-11-22T10:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:14 crc kubenswrapper[4772]: I1122 10:39:14.113946 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:14 crc kubenswrapper[4772]: I1122 10:39:14.114007 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:14 crc kubenswrapper[4772]: I1122 10:39:14.114030 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:14 crc kubenswrapper[4772]: I1122 10:39:14.114454 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:14 crc kubenswrapper[4772]: I1122 10:39:14.114509 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:14Z","lastTransitionTime":"2025-11-22T10:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:14 crc kubenswrapper[4772]: I1122 10:39:14.217609 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:14 crc kubenswrapper[4772]: I1122 10:39:14.217679 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:14 crc kubenswrapper[4772]: I1122 10:39:14.217715 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:14 crc kubenswrapper[4772]: I1122 10:39:14.217755 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:14 crc kubenswrapper[4772]: I1122 10:39:14.217777 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:14Z","lastTransitionTime":"2025-11-22T10:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:14 crc kubenswrapper[4772]: I1122 10:39:14.321599 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:14 crc kubenswrapper[4772]: I1122 10:39:14.321686 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:14 crc kubenswrapper[4772]: I1122 10:39:14.321700 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:14 crc kubenswrapper[4772]: I1122 10:39:14.321718 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:14 crc kubenswrapper[4772]: I1122 10:39:14.321730 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:14Z","lastTransitionTime":"2025-11-22T10:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:14 crc kubenswrapper[4772]: I1122 10:39:14.412916 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvsrl" Nov 22 10:39:14 crc kubenswrapper[4772]: I1122 10:39:14.413022 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 10:39:14 crc kubenswrapper[4772]: E1122 10:39:14.413146 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvsrl" podUID="c89edce7-fac8-4954-b2e9-420f0f2de6a8" Nov 22 10:39:14 crc kubenswrapper[4772]: E1122 10:39:14.413283 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 10:39:14 crc kubenswrapper[4772]: I1122 10:39:14.424086 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:14 crc kubenswrapper[4772]: I1122 10:39:14.424167 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:14 crc kubenswrapper[4772]: I1122 10:39:14.424216 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:14 crc kubenswrapper[4772]: I1122 10:39:14.424238 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:14 crc kubenswrapper[4772]: I1122 10:39:14.424252 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:14Z","lastTransitionTime":"2025-11-22T10:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:14 crc kubenswrapper[4772]: I1122 10:39:14.528090 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:14 crc kubenswrapper[4772]: I1122 10:39:14.528151 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:14 crc kubenswrapper[4772]: I1122 10:39:14.528205 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:14 crc kubenswrapper[4772]: I1122 10:39:14.528267 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:14 crc kubenswrapper[4772]: I1122 10:39:14.528286 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:14Z","lastTransitionTime":"2025-11-22T10:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:14 crc kubenswrapper[4772]: I1122 10:39:14.631645 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:14 crc kubenswrapper[4772]: I1122 10:39:14.631705 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:14 crc kubenswrapper[4772]: I1122 10:39:14.631717 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:14 crc kubenswrapper[4772]: I1122 10:39:14.631750 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:14 crc kubenswrapper[4772]: I1122 10:39:14.631766 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:14Z","lastTransitionTime":"2025-11-22T10:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:14 crc kubenswrapper[4772]: I1122 10:39:14.735985 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:14 crc kubenswrapper[4772]: I1122 10:39:14.736098 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:14 crc kubenswrapper[4772]: I1122 10:39:14.736124 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:14 crc kubenswrapper[4772]: I1122 10:39:14.736162 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:14 crc kubenswrapper[4772]: I1122 10:39:14.736190 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:14Z","lastTransitionTime":"2025-11-22T10:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:14 crc kubenswrapper[4772]: I1122 10:39:14.839482 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:14 crc kubenswrapper[4772]: I1122 10:39:14.839545 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:14 crc kubenswrapper[4772]: I1122 10:39:14.839560 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:14 crc kubenswrapper[4772]: I1122 10:39:14.839581 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:14 crc kubenswrapper[4772]: I1122 10:39:14.839596 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:14Z","lastTransitionTime":"2025-11-22T10:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:14 crc kubenswrapper[4772]: I1122 10:39:14.942719 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:14 crc kubenswrapper[4772]: I1122 10:39:14.942807 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:14 crc kubenswrapper[4772]: I1122 10:39:14.942831 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:14 crc kubenswrapper[4772]: I1122 10:39:14.942863 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:14 crc kubenswrapper[4772]: I1122 10:39:14.942888 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:14Z","lastTransitionTime":"2025-11-22T10:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:15 crc kubenswrapper[4772]: I1122 10:39:15.046239 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:15 crc kubenswrapper[4772]: I1122 10:39:15.046285 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:15 crc kubenswrapper[4772]: I1122 10:39:15.046296 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:15 crc kubenswrapper[4772]: I1122 10:39:15.046318 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:15 crc kubenswrapper[4772]: I1122 10:39:15.046331 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:15Z","lastTransitionTime":"2025-11-22T10:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:15 crc kubenswrapper[4772]: I1122 10:39:15.148468 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:15 crc kubenswrapper[4772]: I1122 10:39:15.148528 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:15 crc kubenswrapper[4772]: I1122 10:39:15.148538 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:15 crc kubenswrapper[4772]: I1122 10:39:15.148556 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:15 crc kubenswrapper[4772]: I1122 10:39:15.148605 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:15Z","lastTransitionTime":"2025-11-22T10:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:15 crc kubenswrapper[4772]: I1122 10:39:15.251202 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:15 crc kubenswrapper[4772]: I1122 10:39:15.251307 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:15 crc kubenswrapper[4772]: I1122 10:39:15.251335 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:15 crc kubenswrapper[4772]: I1122 10:39:15.251372 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:15 crc kubenswrapper[4772]: I1122 10:39:15.251395 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:15Z","lastTransitionTime":"2025-11-22T10:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:15 crc kubenswrapper[4772]: I1122 10:39:15.355873 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:15 crc kubenswrapper[4772]: I1122 10:39:15.355923 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:15 crc kubenswrapper[4772]: I1122 10:39:15.355941 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:15 crc kubenswrapper[4772]: I1122 10:39:15.355969 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:15 crc kubenswrapper[4772]: I1122 10:39:15.355991 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:15Z","lastTransitionTime":"2025-11-22T10:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:15 crc kubenswrapper[4772]: I1122 10:39:15.412516 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 10:39:15 crc kubenswrapper[4772]: I1122 10:39:15.412519 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:39:15 crc kubenswrapper[4772]: E1122 10:39:15.412767 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 10:39:15 crc kubenswrapper[4772]: E1122 10:39:15.412853 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 10:39:15 crc kubenswrapper[4772]: I1122 10:39:15.459035 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:15 crc kubenswrapper[4772]: I1122 10:39:15.459096 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:15 crc kubenswrapper[4772]: I1122 10:39:15.459106 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:15 crc kubenswrapper[4772]: I1122 10:39:15.459126 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:15 crc kubenswrapper[4772]: I1122 10:39:15.459137 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:15Z","lastTransitionTime":"2025-11-22T10:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:15 crc kubenswrapper[4772]: I1122 10:39:15.562513 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:15 crc kubenswrapper[4772]: I1122 10:39:15.562562 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:15 crc kubenswrapper[4772]: I1122 10:39:15.562573 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:15 crc kubenswrapper[4772]: I1122 10:39:15.562592 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:15 crc kubenswrapper[4772]: I1122 10:39:15.562604 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:15Z","lastTransitionTime":"2025-11-22T10:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:15 crc kubenswrapper[4772]: I1122 10:39:15.666386 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:15 crc kubenswrapper[4772]: I1122 10:39:15.666441 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:15 crc kubenswrapper[4772]: I1122 10:39:15.666454 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:15 crc kubenswrapper[4772]: I1122 10:39:15.666477 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:15 crc kubenswrapper[4772]: I1122 10:39:15.666488 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:15Z","lastTransitionTime":"2025-11-22T10:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:15 crc kubenswrapper[4772]: I1122 10:39:15.769132 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:15 crc kubenswrapper[4772]: I1122 10:39:15.769186 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:15 crc kubenswrapper[4772]: I1122 10:39:15.769198 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:15 crc kubenswrapper[4772]: I1122 10:39:15.769217 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:15 crc kubenswrapper[4772]: I1122 10:39:15.769230 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:15Z","lastTransitionTime":"2025-11-22T10:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:15 crc kubenswrapper[4772]: I1122 10:39:15.871827 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:15 crc kubenswrapper[4772]: I1122 10:39:15.871895 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:15 crc kubenswrapper[4772]: I1122 10:39:15.871919 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:15 crc kubenswrapper[4772]: I1122 10:39:15.871950 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:15 crc kubenswrapper[4772]: I1122 10:39:15.871975 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:15Z","lastTransitionTime":"2025-11-22T10:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:15 crc kubenswrapper[4772]: I1122 10:39:15.974760 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:15 crc kubenswrapper[4772]: I1122 10:39:15.974796 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:15 crc kubenswrapper[4772]: I1122 10:39:15.974804 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:15 crc kubenswrapper[4772]: I1122 10:39:15.974820 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:15 crc kubenswrapper[4772]: I1122 10:39:15.974829 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:15Z","lastTransitionTime":"2025-11-22T10:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:16 crc kubenswrapper[4772]: I1122 10:39:16.077589 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:16 crc kubenswrapper[4772]: I1122 10:39:16.077656 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:16 crc kubenswrapper[4772]: I1122 10:39:16.077666 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:16 crc kubenswrapper[4772]: I1122 10:39:16.077709 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:16 crc kubenswrapper[4772]: I1122 10:39:16.077718 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:16Z","lastTransitionTime":"2025-11-22T10:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:16 crc kubenswrapper[4772]: I1122 10:39:16.180656 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:16 crc kubenswrapper[4772]: I1122 10:39:16.180695 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:16 crc kubenswrapper[4772]: I1122 10:39:16.180703 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:16 crc kubenswrapper[4772]: I1122 10:39:16.180717 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:16 crc kubenswrapper[4772]: I1122 10:39:16.180727 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:16Z","lastTransitionTime":"2025-11-22T10:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:16 crc kubenswrapper[4772]: I1122 10:39:16.283617 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:16 crc kubenswrapper[4772]: I1122 10:39:16.283678 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:16 crc kubenswrapper[4772]: I1122 10:39:16.283694 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:16 crc kubenswrapper[4772]: I1122 10:39:16.283717 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:16 crc kubenswrapper[4772]: I1122 10:39:16.283734 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:16Z","lastTransitionTime":"2025-11-22T10:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:16 crc kubenswrapper[4772]: I1122 10:39:16.387361 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:16 crc kubenswrapper[4772]: I1122 10:39:16.387421 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:16 crc kubenswrapper[4772]: I1122 10:39:16.387441 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:16 crc kubenswrapper[4772]: I1122 10:39:16.387465 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:16 crc kubenswrapper[4772]: I1122 10:39:16.387481 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:16Z","lastTransitionTime":"2025-11-22T10:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:16 crc kubenswrapper[4772]: I1122 10:39:16.413441 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvsrl" Nov 22 10:39:16 crc kubenswrapper[4772]: I1122 10:39:16.413515 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 10:39:16 crc kubenswrapper[4772]: E1122 10:39:16.413630 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fvsrl" podUID="c89edce7-fac8-4954-b2e9-420f0f2de6a8" Nov 22 10:39:16 crc kubenswrapper[4772]: E1122 10:39:16.413759 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 10:39:16 crc kubenswrapper[4772]: I1122 10:39:16.474594 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c89edce7-fac8-4954-b2e9-420f0f2de6a8-metrics-certs\") pod \"network-metrics-daemon-fvsrl\" (UID: \"c89edce7-fac8-4954-b2e9-420f0f2de6a8\") " pod="openshift-multus/network-metrics-daemon-fvsrl" Nov 22 10:39:16 crc kubenswrapper[4772]: E1122 10:39:16.474777 4772 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 10:39:16 crc kubenswrapper[4772]: E1122 10:39:16.474860 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c89edce7-fac8-4954-b2e9-420f0f2de6a8-metrics-certs podName:c89edce7-fac8-4954-b2e9-420f0f2de6a8 nodeName:}" failed. No retries permitted until 2025-11-22 10:39:48.474835221 +0000 UTC m=+108.714279745 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c89edce7-fac8-4954-b2e9-420f0f2de6a8-metrics-certs") pod "network-metrics-daemon-fvsrl" (UID: "c89edce7-fac8-4954-b2e9-420f0f2de6a8") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 10:39:16 crc kubenswrapper[4772]: I1122 10:39:16.489981 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:16 crc kubenswrapper[4772]: I1122 10:39:16.490038 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:16 crc kubenswrapper[4772]: I1122 10:39:16.490115 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:16 crc kubenswrapper[4772]: I1122 10:39:16.490152 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:16 crc kubenswrapper[4772]: I1122 10:39:16.490177 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:16Z","lastTransitionTime":"2025-11-22T10:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:16 crc kubenswrapper[4772]: I1122 10:39:16.593001 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:16 crc kubenswrapper[4772]: I1122 10:39:16.593036 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:16 crc kubenswrapper[4772]: I1122 10:39:16.593070 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:16 crc kubenswrapper[4772]: I1122 10:39:16.593092 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:16 crc kubenswrapper[4772]: I1122 10:39:16.593108 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:16Z","lastTransitionTime":"2025-11-22T10:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:16 crc kubenswrapper[4772]: I1122 10:39:16.695668 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:16 crc kubenswrapper[4772]: I1122 10:39:16.695736 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:16 crc kubenswrapper[4772]: I1122 10:39:16.695754 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:16 crc kubenswrapper[4772]: I1122 10:39:16.695794 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:16 crc kubenswrapper[4772]: I1122 10:39:16.695812 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:16Z","lastTransitionTime":"2025-11-22T10:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:16 crc kubenswrapper[4772]: I1122 10:39:16.798550 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:16 crc kubenswrapper[4772]: I1122 10:39:16.798642 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:16 crc kubenswrapper[4772]: I1122 10:39:16.798705 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:16 crc kubenswrapper[4772]: I1122 10:39:16.798735 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:16 crc kubenswrapper[4772]: I1122 10:39:16.798775 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:16Z","lastTransitionTime":"2025-11-22T10:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:16 crc kubenswrapper[4772]: I1122 10:39:16.901626 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:16 crc kubenswrapper[4772]: I1122 10:39:16.901799 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:16 crc kubenswrapper[4772]: I1122 10:39:16.901820 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:16 crc kubenswrapper[4772]: I1122 10:39:16.901840 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:16 crc kubenswrapper[4772]: I1122 10:39:16.901855 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:16Z","lastTransitionTime":"2025-11-22T10:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:17 crc kubenswrapper[4772]: I1122 10:39:17.005296 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:17 crc kubenswrapper[4772]: I1122 10:39:17.005365 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:17 crc kubenswrapper[4772]: I1122 10:39:17.005381 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:17 crc kubenswrapper[4772]: I1122 10:39:17.005405 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:17 crc kubenswrapper[4772]: I1122 10:39:17.005423 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:17Z","lastTransitionTime":"2025-11-22T10:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:17 crc kubenswrapper[4772]: I1122 10:39:17.109475 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:17 crc kubenswrapper[4772]: I1122 10:39:17.109539 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:17 crc kubenswrapper[4772]: I1122 10:39:17.109562 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:17 crc kubenswrapper[4772]: I1122 10:39:17.109589 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:17 crc kubenswrapper[4772]: I1122 10:39:17.109607 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:17Z","lastTransitionTime":"2025-11-22T10:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:17 crc kubenswrapper[4772]: I1122 10:39:17.212531 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:17 crc kubenswrapper[4772]: I1122 10:39:17.212632 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:17 crc kubenswrapper[4772]: I1122 10:39:17.212655 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:17 crc kubenswrapper[4772]: I1122 10:39:17.212684 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:17 crc kubenswrapper[4772]: I1122 10:39:17.212708 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:17Z","lastTransitionTime":"2025-11-22T10:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:17 crc kubenswrapper[4772]: I1122 10:39:17.316459 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:17 crc kubenswrapper[4772]: I1122 10:39:17.316513 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:17 crc kubenswrapper[4772]: I1122 10:39:17.316525 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:17 crc kubenswrapper[4772]: I1122 10:39:17.316544 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:17 crc kubenswrapper[4772]: I1122 10:39:17.316559 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:17Z","lastTransitionTime":"2025-11-22T10:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:17 crc kubenswrapper[4772]: I1122 10:39:17.412535 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 10:39:17 crc kubenswrapper[4772]: E1122 10:39:17.412813 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 10:39:17 crc kubenswrapper[4772]: I1122 10:39:17.412928 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:39:17 crc kubenswrapper[4772]: E1122 10:39:17.413184 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 10:39:17 crc kubenswrapper[4772]: I1122 10:39:17.419508 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:17 crc kubenswrapper[4772]: I1122 10:39:17.419620 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:17 crc kubenswrapper[4772]: I1122 10:39:17.419677 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:17 crc kubenswrapper[4772]: I1122 10:39:17.419701 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:17 crc kubenswrapper[4772]: I1122 10:39:17.419717 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:17Z","lastTransitionTime":"2025-11-22T10:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:17 crc kubenswrapper[4772]: I1122 10:39:17.523075 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:17 crc kubenswrapper[4772]: I1122 10:39:17.523144 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:17 crc kubenswrapper[4772]: I1122 10:39:17.523163 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:17 crc kubenswrapper[4772]: I1122 10:39:17.523190 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:17 crc kubenswrapper[4772]: I1122 10:39:17.523207 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:17Z","lastTransitionTime":"2025-11-22T10:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:17 crc kubenswrapper[4772]: I1122 10:39:17.626152 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:17 crc kubenswrapper[4772]: I1122 10:39:17.626197 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:17 crc kubenswrapper[4772]: I1122 10:39:17.626208 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:17 crc kubenswrapper[4772]: I1122 10:39:17.626227 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:17 crc kubenswrapper[4772]: I1122 10:39:17.626239 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:17Z","lastTransitionTime":"2025-11-22T10:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:17 crc kubenswrapper[4772]: I1122 10:39:17.729724 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:17 crc kubenswrapper[4772]: I1122 10:39:17.729848 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:17 crc kubenswrapper[4772]: I1122 10:39:17.729868 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:17 crc kubenswrapper[4772]: I1122 10:39:17.729893 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:17 crc kubenswrapper[4772]: I1122 10:39:17.729911 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:17Z","lastTransitionTime":"2025-11-22T10:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:17 crc kubenswrapper[4772]: I1122 10:39:17.832284 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:17 crc kubenswrapper[4772]: I1122 10:39:17.832347 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:17 crc kubenswrapper[4772]: I1122 10:39:17.832365 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:17 crc kubenswrapper[4772]: I1122 10:39:17.832390 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:17 crc kubenswrapper[4772]: I1122 10:39:17.832407 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:17Z","lastTransitionTime":"2025-11-22T10:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:17 crc kubenswrapper[4772]: I1122 10:39:17.935814 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:17 crc kubenswrapper[4772]: I1122 10:39:17.935876 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:17 crc kubenswrapper[4772]: I1122 10:39:17.935893 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:17 crc kubenswrapper[4772]: I1122 10:39:17.935916 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:17 crc kubenswrapper[4772]: I1122 10:39:17.935935 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:17Z","lastTransitionTime":"2025-11-22T10:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:18 crc kubenswrapper[4772]: I1122 10:39:18.039213 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:18 crc kubenswrapper[4772]: I1122 10:39:18.039269 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:18 crc kubenswrapper[4772]: I1122 10:39:18.039284 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:18 crc kubenswrapper[4772]: I1122 10:39:18.039303 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:18 crc kubenswrapper[4772]: I1122 10:39:18.039316 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:18Z","lastTransitionTime":"2025-11-22T10:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:18 crc kubenswrapper[4772]: I1122 10:39:18.141997 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:18 crc kubenswrapper[4772]: I1122 10:39:18.142032 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:18 crc kubenswrapper[4772]: I1122 10:39:18.142040 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:18 crc kubenswrapper[4772]: I1122 10:39:18.142075 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:18 crc kubenswrapper[4772]: I1122 10:39:18.142090 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:18Z","lastTransitionTime":"2025-11-22T10:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:18 crc kubenswrapper[4772]: I1122 10:39:18.245034 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:18 crc kubenswrapper[4772]: I1122 10:39:18.245129 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:18 crc kubenswrapper[4772]: I1122 10:39:18.245145 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:18 crc kubenswrapper[4772]: I1122 10:39:18.245169 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:18 crc kubenswrapper[4772]: I1122 10:39:18.245188 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:18Z","lastTransitionTime":"2025-11-22T10:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:18 crc kubenswrapper[4772]: I1122 10:39:18.348437 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:18 crc kubenswrapper[4772]: I1122 10:39:18.348520 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:18 crc kubenswrapper[4772]: I1122 10:39:18.348546 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:18 crc kubenswrapper[4772]: I1122 10:39:18.348570 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:18 crc kubenswrapper[4772]: I1122 10:39:18.348590 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:18Z","lastTransitionTime":"2025-11-22T10:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:18 crc kubenswrapper[4772]: I1122 10:39:18.412726 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvsrl" Nov 22 10:39:18 crc kubenswrapper[4772]: I1122 10:39:18.412726 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 10:39:18 crc kubenswrapper[4772]: E1122 10:39:18.412911 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fvsrl" podUID="c89edce7-fac8-4954-b2e9-420f0f2de6a8" Nov 22 10:39:18 crc kubenswrapper[4772]: E1122 10:39:18.413123 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 10:39:18 crc kubenswrapper[4772]: I1122 10:39:18.451208 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:18 crc kubenswrapper[4772]: I1122 10:39:18.451278 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:18 crc kubenswrapper[4772]: I1122 10:39:18.451295 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:18 crc kubenswrapper[4772]: I1122 10:39:18.451318 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:18 crc kubenswrapper[4772]: I1122 10:39:18.451341 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:18Z","lastTransitionTime":"2025-11-22T10:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:18 crc kubenswrapper[4772]: I1122 10:39:18.553923 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:18 crc kubenswrapper[4772]: I1122 10:39:18.554006 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:18 crc kubenswrapper[4772]: I1122 10:39:18.554031 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:18 crc kubenswrapper[4772]: I1122 10:39:18.554102 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:18 crc kubenswrapper[4772]: I1122 10:39:18.554126 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:18Z","lastTransitionTime":"2025-11-22T10:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:18 crc kubenswrapper[4772]: I1122 10:39:18.657814 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:18 crc kubenswrapper[4772]: I1122 10:39:18.657872 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:18 crc kubenswrapper[4772]: I1122 10:39:18.657884 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:18 crc kubenswrapper[4772]: I1122 10:39:18.657902 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:18 crc kubenswrapper[4772]: I1122 10:39:18.657914 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:18Z","lastTransitionTime":"2025-11-22T10:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:18 crc kubenswrapper[4772]: I1122 10:39:18.761982 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:18 crc kubenswrapper[4772]: I1122 10:39:18.762110 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:18 crc kubenswrapper[4772]: I1122 10:39:18.762141 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:18 crc kubenswrapper[4772]: I1122 10:39:18.762165 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:18 crc kubenswrapper[4772]: I1122 10:39:18.762181 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:18Z","lastTransitionTime":"2025-11-22T10:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:18 crc kubenswrapper[4772]: I1122 10:39:18.865648 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:18 crc kubenswrapper[4772]: I1122 10:39:18.865721 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:18 crc kubenswrapper[4772]: I1122 10:39:18.865745 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:18 crc kubenswrapper[4772]: I1122 10:39:18.865779 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:18 crc kubenswrapper[4772]: I1122 10:39:18.865800 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:18Z","lastTransitionTime":"2025-11-22T10:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:18 crc kubenswrapper[4772]: I1122 10:39:18.969032 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:18 crc kubenswrapper[4772]: I1122 10:39:18.969147 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:18 crc kubenswrapper[4772]: I1122 10:39:18.969171 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:18 crc kubenswrapper[4772]: I1122 10:39:18.969199 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:18 crc kubenswrapper[4772]: I1122 10:39:18.969222 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:18Z","lastTransitionTime":"2025-11-22T10:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:19 crc kubenswrapper[4772]: I1122 10:39:19.072113 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:19 crc kubenswrapper[4772]: I1122 10:39:19.072179 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:19 crc kubenswrapper[4772]: I1122 10:39:19.072213 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:19 crc kubenswrapper[4772]: I1122 10:39:19.072263 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:19 crc kubenswrapper[4772]: I1122 10:39:19.072285 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:19Z","lastTransitionTime":"2025-11-22T10:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:19 crc kubenswrapper[4772]: I1122 10:39:19.175888 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:19 crc kubenswrapper[4772]: I1122 10:39:19.175959 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:19 crc kubenswrapper[4772]: I1122 10:39:19.175977 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:19 crc kubenswrapper[4772]: I1122 10:39:19.176002 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:19 crc kubenswrapper[4772]: I1122 10:39:19.176036 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:19Z","lastTransitionTime":"2025-11-22T10:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:19 crc kubenswrapper[4772]: I1122 10:39:19.279636 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:19 crc kubenswrapper[4772]: I1122 10:39:19.279721 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:19 crc kubenswrapper[4772]: I1122 10:39:19.279738 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:19 crc kubenswrapper[4772]: I1122 10:39:19.279763 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:19 crc kubenswrapper[4772]: I1122 10:39:19.279781 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:19Z","lastTransitionTime":"2025-11-22T10:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:19 crc kubenswrapper[4772]: I1122 10:39:19.383317 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:19 crc kubenswrapper[4772]: I1122 10:39:19.383415 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:19 crc kubenswrapper[4772]: I1122 10:39:19.383441 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:19 crc kubenswrapper[4772]: I1122 10:39:19.383471 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:19 crc kubenswrapper[4772]: I1122 10:39:19.383504 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:19Z","lastTransitionTime":"2025-11-22T10:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:19 crc kubenswrapper[4772]: I1122 10:39:19.413609 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:39:19 crc kubenswrapper[4772]: E1122 10:39:19.413807 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 10:39:19 crc kubenswrapper[4772]: I1122 10:39:19.413887 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 10:39:19 crc kubenswrapper[4772]: E1122 10:39:19.414013 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 10:39:19 crc kubenswrapper[4772]: I1122 10:39:19.486966 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:19 crc kubenswrapper[4772]: I1122 10:39:19.487030 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:19 crc kubenswrapper[4772]: I1122 10:39:19.487091 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:19 crc kubenswrapper[4772]: I1122 10:39:19.487122 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:19 crc kubenswrapper[4772]: I1122 10:39:19.487143 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:19Z","lastTransitionTime":"2025-11-22T10:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:19 crc kubenswrapper[4772]: I1122 10:39:19.590197 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:19 crc kubenswrapper[4772]: I1122 10:39:19.590250 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:19 crc kubenswrapper[4772]: I1122 10:39:19.590267 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:19 crc kubenswrapper[4772]: I1122 10:39:19.590291 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:19 crc kubenswrapper[4772]: I1122 10:39:19.590308 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:19Z","lastTransitionTime":"2025-11-22T10:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:19 crc kubenswrapper[4772]: I1122 10:39:19.694034 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:19 crc kubenswrapper[4772]: I1122 10:39:19.694119 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:19 crc kubenswrapper[4772]: I1122 10:39:19.694130 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:19 crc kubenswrapper[4772]: I1122 10:39:19.694149 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:19 crc kubenswrapper[4772]: I1122 10:39:19.694162 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:19Z","lastTransitionTime":"2025-11-22T10:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:19 crc kubenswrapper[4772]: I1122 10:39:19.797190 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:19 crc kubenswrapper[4772]: I1122 10:39:19.797236 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:19 crc kubenswrapper[4772]: I1122 10:39:19.797246 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:19 crc kubenswrapper[4772]: I1122 10:39:19.797264 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:19 crc kubenswrapper[4772]: I1122 10:39:19.797276 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:19Z","lastTransitionTime":"2025-11-22T10:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:19 crc kubenswrapper[4772]: I1122 10:39:19.899542 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:19 crc kubenswrapper[4772]: I1122 10:39:19.899592 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:19 crc kubenswrapper[4772]: I1122 10:39:19.899601 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:19 crc kubenswrapper[4772]: I1122 10:39:19.899617 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:19 crc kubenswrapper[4772]: I1122 10:39:19.899626 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:19Z","lastTransitionTime":"2025-11-22T10:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:20 crc kubenswrapper[4772]: I1122 10:39:20.002743 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:20 crc kubenswrapper[4772]: I1122 10:39:20.002797 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:20 crc kubenswrapper[4772]: I1122 10:39:20.002815 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:20 crc kubenswrapper[4772]: I1122 10:39:20.002840 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:20 crc kubenswrapper[4772]: I1122 10:39:20.002858 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:20Z","lastTransitionTime":"2025-11-22T10:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:20 crc kubenswrapper[4772]: I1122 10:39:20.105802 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:20 crc kubenswrapper[4772]: I1122 10:39:20.105846 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:20 crc kubenswrapper[4772]: I1122 10:39:20.105857 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:20 crc kubenswrapper[4772]: I1122 10:39:20.105878 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:20 crc kubenswrapper[4772]: I1122 10:39:20.105890 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:20Z","lastTransitionTime":"2025-11-22T10:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:20 crc kubenswrapper[4772]: I1122 10:39:20.209658 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:20 crc kubenswrapper[4772]: I1122 10:39:20.209721 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:20 crc kubenswrapper[4772]: I1122 10:39:20.209737 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:20 crc kubenswrapper[4772]: I1122 10:39:20.209760 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:20 crc kubenswrapper[4772]: I1122 10:39:20.209783 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:20Z","lastTransitionTime":"2025-11-22T10:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:20 crc kubenswrapper[4772]: I1122 10:39:20.307411 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:20 crc kubenswrapper[4772]: I1122 10:39:20.307478 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:20 crc kubenswrapper[4772]: I1122 10:39:20.307497 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:20 crc kubenswrapper[4772]: I1122 10:39:20.307523 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:20 crc kubenswrapper[4772]: I1122 10:39:20.307540 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:20Z","lastTransitionTime":"2025-11-22T10:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:20 crc kubenswrapper[4772]: E1122 10:39:20.324490 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"902c1d4e-ffb4-4b1b-add7-8f7170ab4bc7\\\",\\\"systemUUID\\\":\\\"856a4d77-e4e0-4420-9e80-7c5223144311\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:20Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:20 crc kubenswrapper[4772]: I1122 10:39:20.330030 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:20 crc kubenswrapper[4772]: I1122 10:39:20.330099 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 10:39:20 crc kubenswrapper[4772]: I1122 10:39:20.330116 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:20 crc kubenswrapper[4772]: I1122 10:39:20.330139 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:20 crc kubenswrapper[4772]: I1122 10:39:20.330156 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:20Z","lastTransitionTime":"2025-11-22T10:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:20 crc kubenswrapper[4772]: E1122 10:39:20.351137 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"902c1d4e-ffb4-4b1b-add7-8f7170ab4bc7\\\",\\\"systemUUID\\\":\\\"856a4d77-e4e0-4420-9e80-7c5223144311\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:20Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:20 crc kubenswrapper[4772]: I1122 10:39:20.356073 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:20 crc kubenswrapper[4772]: I1122 10:39:20.356102 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 10:39:20 crc kubenswrapper[4772]: I1122 10:39:20.356113 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:20 crc kubenswrapper[4772]: I1122 10:39:20.356129 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:20 crc kubenswrapper[4772]: I1122 10:39:20.356139 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:20Z","lastTransitionTime":"2025-11-22T10:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:20 crc kubenswrapper[4772]: E1122 10:39:20.373935 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"902c1d4e-ffb4-4b1b-add7-8f7170ab4bc7\\\",\\\"systemUUID\\\":\\\"856a4d77-e4e0-4420-9e80-7c5223144311\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:20Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:20 crc kubenswrapper[4772]: I1122 10:39:20.378076 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:20 crc kubenswrapper[4772]: I1122 10:39:20.378108 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 10:39:20 crc kubenswrapper[4772]: I1122 10:39:20.378119 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:20 crc kubenswrapper[4772]: I1122 10:39:20.378135 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:20 crc kubenswrapper[4772]: I1122 10:39:20.378147 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:20Z","lastTransitionTime":"2025-11-22T10:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:20 crc kubenswrapper[4772]: E1122 10:39:20.395880 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"902c1d4e-ffb4-4b1b-add7-8f7170ab4bc7\\\",\\\"systemUUID\\\":\\\"856a4d77-e4e0-4420-9e80-7c5223144311\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:20Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:20 crc kubenswrapper[4772]: I1122 10:39:20.401408 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:20 crc kubenswrapper[4772]: I1122 10:39:20.401459 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 10:39:20 crc kubenswrapper[4772]: I1122 10:39:20.401472 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:20 crc kubenswrapper[4772]: I1122 10:39:20.401494 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:20 crc kubenswrapper[4772]: I1122 10:39:20.401507 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:20Z","lastTransitionTime":"2025-11-22T10:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:20 crc kubenswrapper[4772]: I1122 10:39:20.413332 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 10:39:20 crc kubenswrapper[4772]: E1122 10:39:20.413738 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 10:39:20 crc kubenswrapper[4772]: I1122 10:39:20.413774 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvsrl" Nov 22 10:39:20 crc kubenswrapper[4772]: E1122 10:39:20.414520 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fvsrl" podUID="c89edce7-fac8-4954-b2e9-420f0f2de6a8" Nov 22 10:39:20 crc kubenswrapper[4772]: I1122 10:39:20.414895 4772 scope.go:117] "RemoveContainer" containerID="a3d7262920f7a31f254be2b721019ad96c62da015bcc54afa32884321e1bba1c" Nov 22 10:39:20 crc kubenswrapper[4772]: E1122 10:39:20.415211 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-mfm49_openshift-ovn-kubernetes(fd84e05e-cfd6-46d5-bd23-30689addcd8b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" podUID="fd84e05e-cfd6-46d5-bd23-30689addcd8b" Nov 22 10:39:20 crc kubenswrapper[4772]: E1122 10:39:20.418683 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"902c1d4e-ffb4-4b1b-add7-8f7170ab4bc7\\\",\\\"systemUUID\\\":\\\"856a4d77-e4e0-4420-9e80-7c5223144311\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:20Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:20 crc kubenswrapper[4772]: E1122 10:39:20.418826 4772 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 22 10:39:20 crc kubenswrapper[4772]: I1122 10:39:20.421037 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 22 10:39:20 crc kubenswrapper[4772]: I1122 10:39:20.421091 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:20 crc kubenswrapper[4772]: I1122 10:39:20.421106 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:20 crc kubenswrapper[4772]: I1122 10:39:20.421126 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:20 crc kubenswrapper[4772]: I1122 10:39:20.421141 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:20Z","lastTransitionTime":"2025-11-22T10:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:20 crc kubenswrapper[4772]: I1122 10:39:20.523343 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:20 crc kubenswrapper[4772]: I1122 10:39:20.523388 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:20 crc kubenswrapper[4772]: I1122 10:39:20.523399 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:20 crc kubenswrapper[4772]: I1122 10:39:20.523416 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:20 crc kubenswrapper[4772]: I1122 10:39:20.523426 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:20Z","lastTransitionTime":"2025-11-22T10:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:20 crc kubenswrapper[4772]: I1122 10:39:20.625997 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:20 crc kubenswrapper[4772]: I1122 10:39:20.626102 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:20 crc kubenswrapper[4772]: I1122 10:39:20.626122 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:20 crc kubenswrapper[4772]: I1122 10:39:20.626153 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:20 crc kubenswrapper[4772]: I1122 10:39:20.626174 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:20Z","lastTransitionTime":"2025-11-22T10:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:20 crc kubenswrapper[4772]: I1122 10:39:20.729081 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:20 crc kubenswrapper[4772]: I1122 10:39:20.729140 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:20 crc kubenswrapper[4772]: I1122 10:39:20.729156 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:20 crc kubenswrapper[4772]: I1122 10:39:20.729181 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:20 crc kubenswrapper[4772]: I1122 10:39:20.729198 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:20Z","lastTransitionTime":"2025-11-22T10:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:20 crc kubenswrapper[4772]: I1122 10:39:20.832779 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:20 crc kubenswrapper[4772]: I1122 10:39:20.833289 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:20 crc kubenswrapper[4772]: I1122 10:39:20.833512 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:20 crc kubenswrapper[4772]: I1122 10:39:20.833734 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:20 crc kubenswrapper[4772]: I1122 10:39:20.833932 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:20Z","lastTransitionTime":"2025-11-22T10:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:20 crc kubenswrapper[4772]: I1122 10:39:20.937551 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:20 crc kubenswrapper[4772]: I1122 10:39:20.937621 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:20 crc kubenswrapper[4772]: I1122 10:39:20.937634 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:20 crc kubenswrapper[4772]: I1122 10:39:20.937679 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:20 crc kubenswrapper[4772]: I1122 10:39:20.937693 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:20Z","lastTransitionTime":"2025-11-22T10:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:21 crc kubenswrapper[4772]: I1122 10:39:21.041194 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:21 crc kubenswrapper[4772]: I1122 10:39:21.041595 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:21 crc kubenswrapper[4772]: I1122 10:39:21.041745 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:21 crc kubenswrapper[4772]: I1122 10:39:21.041895 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:21 crc kubenswrapper[4772]: I1122 10:39:21.042034 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:21Z","lastTransitionTime":"2025-11-22T10:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:21 crc kubenswrapper[4772]: I1122 10:39:21.145397 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:21 crc kubenswrapper[4772]: I1122 10:39:21.145479 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:21 crc kubenswrapper[4772]: I1122 10:39:21.145502 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:21 crc kubenswrapper[4772]: I1122 10:39:21.145534 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:21 crc kubenswrapper[4772]: I1122 10:39:21.145564 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:21Z","lastTransitionTime":"2025-11-22T10:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:21 crc kubenswrapper[4772]: I1122 10:39:21.248981 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:21 crc kubenswrapper[4772]: I1122 10:39:21.249082 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:21 crc kubenswrapper[4772]: I1122 10:39:21.249104 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:21 crc kubenswrapper[4772]: I1122 10:39:21.249129 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:21 crc kubenswrapper[4772]: I1122 10:39:21.249148 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:21Z","lastTransitionTime":"2025-11-22T10:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:21 crc kubenswrapper[4772]: I1122 10:39:21.352506 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:21 crc kubenswrapper[4772]: I1122 10:39:21.352584 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:21 crc kubenswrapper[4772]: I1122 10:39:21.352900 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:21 crc kubenswrapper[4772]: I1122 10:39:21.352939 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:21 crc kubenswrapper[4772]: I1122 10:39:21.352958 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:21Z","lastTransitionTime":"2025-11-22T10:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:21 crc kubenswrapper[4772]: I1122 10:39:21.413973 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 10:39:21 crc kubenswrapper[4772]: E1122 10:39:21.414327 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 10:39:21 crc kubenswrapper[4772]: I1122 10:39:21.420111 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:39:21 crc kubenswrapper[4772]: E1122 10:39:21.422140 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 10:39:21 crc kubenswrapper[4772]: I1122 10:39:21.439839 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:21Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:21 crc kubenswrapper[4772]: I1122 10:39:21.457023 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:21 crc kubenswrapper[4772]: I1122 10:39:21.457133 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:21 crc kubenswrapper[4772]: I1122 10:39:21.457152 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:21 crc kubenswrapper[4772]: I1122 10:39:21.457183 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:21 crc kubenswrapper[4772]: I1122 10:39:21.457204 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:21Z","lastTransitionTime":"2025-11-22T10:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:21 crc kubenswrapper[4772]: I1122 10:39:21.476909 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd84e05e-cfd6-46d5-bd23-30689addcd8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://405215cb0867af5a98dd2c0948c12867783df5de69f3e68c0c420e94bd41fbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3372362ced27c1e5e6ed3271a6b0fd19c04ce1d12a6c0b45e60b4f19b9d9d9e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7959ee06497c972cc6915e3ab426dc2bba73e58343902084895fcca45fc97a61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dfec56a475b9928ac7803e64d6fd305028a7f8d66647c8d06eeaa12cb73838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da55e361a3ec7dca7a00a9ccac8e72cf43be0ebb8973fcac40e3bf86dac5237e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc7ab7764e2d893152e965392c916dff7876acb81d134cdec2615b253dbee64e\\\",\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d7262920f7a31f254be2b721019ad96c62da015bcc54afa32884321e1bba1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3d7262920f7a31f254be2b721019ad96c62da015bcc54afa32884321e1bba1c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T10:39:07Z\\\",\\\"message\\\":\\\"*v1.Namespace event handler 1 for removal\\\\nI1122 10:39:07.442002 6580 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1122 10:39:07.442031 6580 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1122 10:39:07.442073 6580 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1122 10:39:07.442069 6580 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1122 10:39:07.442090 6580 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1122 10:39:07.442093 6580 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1122 10:39:07.442097 6580 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1122 10:39:07.442120 6580 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1122 10:39:07.442140 6580 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1122 10:39:07.442142 6580 factory.go:656] Stopping watch factory\\\\nI1122 10:39:07.442151 6580 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1122 10:39:07.442163 6580 ovnkube.go:599] Stopped ovnkube\\\\nI1122 10:39:07.442164 6580 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1122 10:39:07.442160 6580 handler.go:208] Removed *v1.Node event handler 2\\\\nI1122 10:39:07.442210 6580 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1122 
10:39:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:39:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-mfm49_openshift-ovn-kubernetes(fd84e05e-cfd6-46d5-bd23-30689addcd8b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac96e66a7d3d7899a8196d07614bccdd741145d5dc57c26b30fc4e127c6a94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mfm49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:21Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:21 crc kubenswrapper[4772]: I1122 10:39:21.494351 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mbpk7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e39748b-4fa5-4a70-8921-dc3dc814f124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c334585be8f67849986e80c7a7ea777340e93f963ba58fa0fb7b36b16a73142b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m5ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mbpk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:21Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:21 crc kubenswrapper[4772]: I1122 10:39:21.515488 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fvsrl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c89edce7-fac8-4954-b2e9-420f0f2de6a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x76mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x76mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fvsrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:21Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:21 crc kubenswrapper[4772]: I1122 10:39:21.536111 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:21Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:21 crc kubenswrapper[4772]: I1122 10:39:21.553360 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9qv88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"751566de-1913-4dd6-9054-21febc661c27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902be5c4bb0f9aaca4a21ca0af13b2dc29d682c30e1e2d281beb6e4e15b6aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jckg9\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9qv88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:21Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:21 crc kubenswrapper[4772]: I1122 10:39:21.560428 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:21 crc kubenswrapper[4772]: I1122 10:39:21.560487 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:21 crc kubenswrapper[4772]: I1122 10:39:21.560542 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:21 crc kubenswrapper[4772]: I1122 10:39:21.560571 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:21 crc kubenswrapper[4772]: I1122 10:39:21.560589 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:21Z","lastTransitionTime":"2025-11-22T10:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:21 crc kubenswrapper[4772]: I1122 10:39:21.578390 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s4mvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d73fd58d-561a-4b16-9f9d-49ae966edb24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ae06ffdc2c0f4cebcbf28f2997026db4d45bcd0c95776e5a1d94527503041d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s4mvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:21Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:21 crc kubenswrapper[4772]: I1122 10:39:21.602036 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"888813e4-14b2-4bbc-badf-3fd7c315a740\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ade6ffb5c4bd3c19a2d85f21de1e0f198d6729b45df79233c8db3c73aff066f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7c687d1a3e98c09917692169294f8549b0f1ddeddcc97c073da4d8e5c17e012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1452c164ce569dfa4665a70113fb965905d1974744637904d6bfba2e35446f11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-oper
ator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b331f41571928038bb597f1e94a67d24e726471c1a22082607dd26c11e8ea33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:21Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:21 crc kubenswrapper[4772]: I1122 10:39:21.628130 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9557fb28-d21d-45a3-b7b9-170936e83f56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a77d132d925d70472ff5456bf4521c0612c319296dd1f3fa31c68d3c84fb347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddce747c72c887bb7dff7503755d81d0e65305d2545f93dcb0d20bd6bf32b495\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187f5633fd42f380d4976965809b538530acef067adc7e02203f95df532dbbfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab2905e4976fc61f69872e9b37db10f5598fe3f96e0a124d22848215a7b5817\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 10:38:29.060158 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 10:38:29.060179 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 10:38:29.060386 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\"\\\\nI1122 10:38:29.060380 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763807893\\\\\\\\\\\\\\\" (2025-11-22 10:38:12 +0000 UTC to 2025-12-22 10:38:13 +0000 UTC (now=2025-11-22 10:38:29.060343851 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060557 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763807909\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763807908\\\\\\\\\\\\\\\" (2025-11-22 09:38:28 +0000 UTC to 2026-11-22 09:38:28 +0000 UTC (now=2025-11-22 10:38:29.060522446 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060582 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 10:38:29.060604 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 10:38:29.060636 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 10:38:29.062066 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062334 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062385 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 10:38:29.063702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42fc0afe421b18b7212836931c95487807ac0b32f6e87d92e847bd7826aedcd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:21Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:21 crc kubenswrapper[4772]: I1122 10:39:21.651846 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd883a9c-bd98-4ce0-9256-710c2311012f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab209a1a1db8448083e4994bbd6c236d67ddfd2ba6eff2bf3c05150f600ad698\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f447b1a4882759c19fd69e9acddae280d63009c5fc7d21b368b470160c53250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecffb7596767673ba91dbbee96f3fa4b32109c5d5145176e86618860187077de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92229718d5b39cbd9473102e7e569d8370d38e945387ff3f48ee9f4077d8d1fd\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92229718d5b39cbd9473102e7e569d8370d38e945387ff3f48ee9f4077d8d1fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:21Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:21 crc kubenswrapper[4772]: I1122 10:39:21.665616 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:21 crc kubenswrapper[4772]: I1122 10:39:21.665690 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:21 crc kubenswrapper[4772]: I1122 10:39:21.665708 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:21 crc kubenswrapper[4772]: I1122 10:39:21.665738 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:21 crc kubenswrapper[4772]: I1122 10:39:21.665763 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:21Z","lastTransitionTime":"2025-11-22T10:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:21 crc kubenswrapper[4772]: I1122 10:39:21.677106 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e05750665701a1973e974ba759b2a0ef54a89b24d63d33a9fce59e9aa7a21ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:21Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:21 crc kubenswrapper[4772]: I1122 10:39:21.699278 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:21Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:21 crc kubenswrapper[4772]: I1122 10:39:21.718972 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2386c238-461f-4956-940f-ac3c26eb052e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b3e223067a57a7ae418e1de80dff3c7537e0506e040028c413225f25397f03d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"rec
ursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4247d5ab0095ab6c7315fdfcde34f8407d870723059e61c3417c20f80dc06ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwshd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:21Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:21 crc kubenswrapper[4772]: I1122 10:39:21.747628 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z6xtb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e11c7f86-73db-4015-9fe5-c0b5047c19a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://901e96ef18a53b8231a138e235e8ed145d94f111dc515aaa5e5415c18535e457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\
\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d7a31c58b763dd30aa10f250fa654756dbb10166064c3adf6d313cc13b066f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11d7a31c58b763dd30aa10f250fa654756dbb10166064c3adf6d313cc13b066f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://533b405c883d98c14aca92e6a23bd1743dc0a76ed8256f2b892746195602fb03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://533b405c883d98c14aca92e6a23bd1743dc0a76ed8256f2b892746195602fb03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c508fde56a767c063afbb65f6272695d739d7d0c594efcafd7465397309ef931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c508fde56a767c063afbb65f6272695d739d7d0c594efcafd7465397309ef931\\\",\\\"exitCode\\\":0,\\\"fin
ishedAt\\\":\\\"2025-11-22T10:38:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dabd0e99ffb72fe38680ba04e6dc2cc342f2b21368c0646ac51924fdca817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dabd0e99ffb72fe38680ba04e6dc2cc342f2b21368c0646ac51924fdca817b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfe1b37351de69494e5f42516d1363a1acd97912fd754ddca224b61b7020b0e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfe1b37351de69494e5f42516d1363a1acd97912fd754ddca224b61b7020b0e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8442718fb97d911a608083bbbcee822108ea95c9896e42b0f37fa05c0b8d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067461
6e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df8442718fb97d911a608083bbbcee822108ea95c9896e42b0f37fa05c0b8d3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z6xtb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:21Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:21 crc kubenswrapper[4772]: I1122 10:39:21.767368 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af3d235e676f9a66e16a76f30d940c652287b80507d2335b94aaad54d2aae071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b9e216df73a0c6e89da9a01b70ca7393dea61d59aa4237584add3411d3362d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:21Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:21 crc kubenswrapper[4772]: I1122 10:39:21.769364 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:21 crc kubenswrapper[4772]: I1122 10:39:21.769439 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:21 crc kubenswrapper[4772]: I1122 10:39:21.769464 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:21 crc kubenswrapper[4772]: I1122 10:39:21.769493 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:21 crc kubenswrapper[4772]: I1122 10:39:21.769516 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:21Z","lastTransitionTime":"2025-11-22T10:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:21 crc kubenswrapper[4772]: I1122 10:39:21.786475 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e026ddd84cdd62f9fa89b0c173fa464361017d3603d80e2b88a1b04af13487c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:21Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:21 crc kubenswrapper[4772]: I1122 10:39:21.807272 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n8qfx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0565ed-eb43-43a3-974c-45a23e9615a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73a731bf43107a62df11bd3a033a52f25d32911218822f9c5ac1f3d9b6f718ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98gmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a2e02f93671de603bc005b4f7561a62bb7681d678a3348b837656ae2af54f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98gmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-n8qfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:21Z is after 2025-08-24T17:21:41Z" Nov 22 
10:39:21 crc kubenswrapper[4772]: I1122 10:39:21.873608 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:21 crc kubenswrapper[4772]: I1122 10:39:21.873648 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:21 crc kubenswrapper[4772]: I1122 10:39:21.873661 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:21 crc kubenswrapper[4772]: I1122 10:39:21.873684 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:21 crc kubenswrapper[4772]: I1122 10:39:21.873700 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:21Z","lastTransitionTime":"2025-11-22T10:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:22 crc kubenswrapper[4772]: I1122 10:39:22.010426 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:22 crc kubenswrapper[4772]: I1122 10:39:22.010472 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:22 crc kubenswrapper[4772]: I1122 10:39:22.010485 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:22 crc kubenswrapper[4772]: I1122 10:39:22.010507 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:22 crc kubenswrapper[4772]: I1122 10:39:22.010525 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:22Z","lastTransitionTime":"2025-11-22T10:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:22 crc kubenswrapper[4772]: I1122 10:39:22.113473 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:22 crc kubenswrapper[4772]: I1122 10:39:22.113540 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:22 crc kubenswrapper[4772]: I1122 10:39:22.113563 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:22 crc kubenswrapper[4772]: I1122 10:39:22.113591 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:22 crc kubenswrapper[4772]: I1122 10:39:22.113609 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:22Z","lastTransitionTime":"2025-11-22T10:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:22 crc kubenswrapper[4772]: I1122 10:39:22.217544 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:22 crc kubenswrapper[4772]: I1122 10:39:22.217598 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:22 crc kubenswrapper[4772]: I1122 10:39:22.217610 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:22 crc kubenswrapper[4772]: I1122 10:39:22.217636 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:22 crc kubenswrapper[4772]: I1122 10:39:22.217650 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:22Z","lastTransitionTime":"2025-11-22T10:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:22 crc kubenswrapper[4772]: I1122 10:39:22.320696 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:22 crc kubenswrapper[4772]: I1122 10:39:22.320756 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:22 crc kubenswrapper[4772]: I1122 10:39:22.320773 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:22 crc kubenswrapper[4772]: I1122 10:39:22.320797 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:22 crc kubenswrapper[4772]: I1122 10:39:22.320815 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:22Z","lastTransitionTime":"2025-11-22T10:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:22 crc kubenswrapper[4772]: I1122 10:39:22.412878 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 10:39:22 crc kubenswrapper[4772]: I1122 10:39:22.412886 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvsrl" Nov 22 10:39:22 crc kubenswrapper[4772]: E1122 10:39:22.413146 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 10:39:22 crc kubenswrapper[4772]: E1122 10:39:22.413424 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvsrl" podUID="c89edce7-fac8-4954-b2e9-420f0f2de6a8" Nov 22 10:39:22 crc kubenswrapper[4772]: I1122 10:39:22.423353 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:22 crc kubenswrapper[4772]: I1122 10:39:22.423404 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:22 crc kubenswrapper[4772]: I1122 10:39:22.423422 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:22 crc kubenswrapper[4772]: I1122 10:39:22.423443 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:22 crc kubenswrapper[4772]: I1122 10:39:22.423460 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:22Z","lastTransitionTime":"2025-11-22T10:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:22 crc kubenswrapper[4772]: I1122 10:39:22.526145 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:22 crc kubenswrapper[4772]: I1122 10:39:22.526180 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:22 crc kubenswrapper[4772]: I1122 10:39:22.526189 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:22 crc kubenswrapper[4772]: I1122 10:39:22.526202 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:22 crc kubenswrapper[4772]: I1122 10:39:22.526211 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:22Z","lastTransitionTime":"2025-11-22T10:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:22 crc kubenswrapper[4772]: I1122 10:39:22.629318 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:22 crc kubenswrapper[4772]: I1122 10:39:22.629391 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:22 crc kubenswrapper[4772]: I1122 10:39:22.629409 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:22 crc kubenswrapper[4772]: I1122 10:39:22.629435 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:22 crc kubenswrapper[4772]: I1122 10:39:22.629461 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:22Z","lastTransitionTime":"2025-11-22T10:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:22 crc kubenswrapper[4772]: I1122 10:39:22.732096 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:22 crc kubenswrapper[4772]: I1122 10:39:22.732179 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:22 crc kubenswrapper[4772]: I1122 10:39:22.732200 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:22 crc kubenswrapper[4772]: I1122 10:39:22.732230 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:22 crc kubenswrapper[4772]: I1122 10:39:22.732252 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:22Z","lastTransitionTime":"2025-11-22T10:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:22 crc kubenswrapper[4772]: I1122 10:39:22.835853 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:22 crc kubenswrapper[4772]: I1122 10:39:22.835898 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:22 crc kubenswrapper[4772]: I1122 10:39:22.835908 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:22 crc kubenswrapper[4772]: I1122 10:39:22.835925 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:22 crc kubenswrapper[4772]: I1122 10:39:22.835934 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:22Z","lastTransitionTime":"2025-11-22T10:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:22 crc kubenswrapper[4772]: I1122 10:39:22.939123 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:22 crc kubenswrapper[4772]: I1122 10:39:22.939176 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:22 crc kubenswrapper[4772]: I1122 10:39:22.939188 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:22 crc kubenswrapper[4772]: I1122 10:39:22.939206 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:22 crc kubenswrapper[4772]: I1122 10:39:22.939215 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:22Z","lastTransitionTime":"2025-11-22T10:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:23 crc kubenswrapper[4772]: I1122 10:39:23.042926 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:23 crc kubenswrapper[4772]: I1122 10:39:23.042993 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:23 crc kubenswrapper[4772]: I1122 10:39:23.043012 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:23 crc kubenswrapper[4772]: I1122 10:39:23.043042 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:23 crc kubenswrapper[4772]: I1122 10:39:23.043132 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:23Z","lastTransitionTime":"2025-11-22T10:39:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:23 crc kubenswrapper[4772]: I1122 10:39:23.146962 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:23 crc kubenswrapper[4772]: I1122 10:39:23.147129 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:23 crc kubenswrapper[4772]: I1122 10:39:23.147162 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:23 crc kubenswrapper[4772]: I1122 10:39:23.147199 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:23 crc kubenswrapper[4772]: I1122 10:39:23.147226 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:23Z","lastTransitionTime":"2025-11-22T10:39:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:23 crc kubenswrapper[4772]: I1122 10:39:23.250664 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:23 crc kubenswrapper[4772]: I1122 10:39:23.250760 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:23 crc kubenswrapper[4772]: I1122 10:39:23.250788 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:23 crc kubenswrapper[4772]: I1122 10:39:23.250827 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:23 crc kubenswrapper[4772]: I1122 10:39:23.250854 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:23Z","lastTransitionTime":"2025-11-22T10:39:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:23 crc kubenswrapper[4772]: I1122 10:39:23.354617 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:23 crc kubenswrapper[4772]: I1122 10:39:23.354740 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:23 crc kubenswrapper[4772]: I1122 10:39:23.354795 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:23 crc kubenswrapper[4772]: I1122 10:39:23.354834 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:23 crc kubenswrapper[4772]: I1122 10:39:23.354863 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:23Z","lastTransitionTime":"2025-11-22T10:39:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:23 crc kubenswrapper[4772]: I1122 10:39:23.413283 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:39:23 crc kubenswrapper[4772]: I1122 10:39:23.413434 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 10:39:23 crc kubenswrapper[4772]: E1122 10:39:23.413514 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 10:39:23 crc kubenswrapper[4772]: E1122 10:39:23.413695 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 10:39:23 crc kubenswrapper[4772]: I1122 10:39:23.458401 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:23 crc kubenswrapper[4772]: I1122 10:39:23.458736 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:23 crc kubenswrapper[4772]: I1122 10:39:23.458878 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:23 crc kubenswrapper[4772]: I1122 10:39:23.459107 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:23 crc kubenswrapper[4772]: I1122 10:39:23.459337 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:23Z","lastTransitionTime":"2025-11-22T10:39:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:23 crc kubenswrapper[4772]: I1122 10:39:23.562980 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:23 crc kubenswrapper[4772]: I1122 10:39:23.563084 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:23 crc kubenswrapper[4772]: I1122 10:39:23.563104 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:23 crc kubenswrapper[4772]: I1122 10:39:23.563133 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:23 crc kubenswrapper[4772]: I1122 10:39:23.563154 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:23Z","lastTransitionTime":"2025-11-22T10:39:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:23 crc kubenswrapper[4772]: I1122 10:39:23.667040 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:23 crc kubenswrapper[4772]: I1122 10:39:23.667138 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:23 crc kubenswrapper[4772]: I1122 10:39:23.667156 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:23 crc kubenswrapper[4772]: I1122 10:39:23.667193 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:23 crc kubenswrapper[4772]: I1122 10:39:23.667216 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:23Z","lastTransitionTime":"2025-11-22T10:39:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:23 crc kubenswrapper[4772]: I1122 10:39:23.769937 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:23 crc kubenswrapper[4772]: I1122 10:39:23.770437 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:23 crc kubenswrapper[4772]: I1122 10:39:23.770641 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:23 crc kubenswrapper[4772]: I1122 10:39:23.770821 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:23 crc kubenswrapper[4772]: I1122 10:39:23.770973 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:23Z","lastTransitionTime":"2025-11-22T10:39:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:23 crc kubenswrapper[4772]: I1122 10:39:23.874892 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:23 crc kubenswrapper[4772]: I1122 10:39:23.875199 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:23 crc kubenswrapper[4772]: I1122 10:39:23.875363 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:23 crc kubenswrapper[4772]: I1122 10:39:23.875536 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:23 crc kubenswrapper[4772]: I1122 10:39:23.875679 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:23Z","lastTransitionTime":"2025-11-22T10:39:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:23 crc kubenswrapper[4772]: I1122 10:39:23.979897 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:23 crc kubenswrapper[4772]: I1122 10:39:23.979981 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:23 crc kubenswrapper[4772]: I1122 10:39:23.979999 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:23 crc kubenswrapper[4772]: I1122 10:39:23.980031 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:23 crc kubenswrapper[4772]: I1122 10:39:23.980090 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:23Z","lastTransitionTime":"2025-11-22T10:39:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:24 crc kubenswrapper[4772]: I1122 10:39:24.083170 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:24 crc kubenswrapper[4772]: I1122 10:39:24.083246 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:24 crc kubenswrapper[4772]: I1122 10:39:24.083269 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:24 crc kubenswrapper[4772]: I1122 10:39:24.083301 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:24 crc kubenswrapper[4772]: I1122 10:39:24.083323 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:24Z","lastTransitionTime":"2025-11-22T10:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:24 crc kubenswrapper[4772]: I1122 10:39:24.186512 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:24 crc kubenswrapper[4772]: I1122 10:39:24.187038 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:24 crc kubenswrapper[4772]: I1122 10:39:24.187227 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:24 crc kubenswrapper[4772]: I1122 10:39:24.187416 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:24 crc kubenswrapper[4772]: I1122 10:39:24.187578 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:24Z","lastTransitionTime":"2025-11-22T10:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:24 crc kubenswrapper[4772]: I1122 10:39:24.291026 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:24 crc kubenswrapper[4772]: I1122 10:39:24.291143 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:24 crc kubenswrapper[4772]: I1122 10:39:24.291166 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:24 crc kubenswrapper[4772]: I1122 10:39:24.291197 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:24 crc kubenswrapper[4772]: I1122 10:39:24.291218 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:24Z","lastTransitionTime":"2025-11-22T10:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:24 crc kubenswrapper[4772]: I1122 10:39:24.395343 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:24 crc kubenswrapper[4772]: I1122 10:39:24.395412 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:24 crc kubenswrapper[4772]: I1122 10:39:24.395431 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:24 crc kubenswrapper[4772]: I1122 10:39:24.395468 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:24 crc kubenswrapper[4772]: I1122 10:39:24.395511 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:24Z","lastTransitionTime":"2025-11-22T10:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:24 crc kubenswrapper[4772]: I1122 10:39:24.412994 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 10:39:24 crc kubenswrapper[4772]: E1122 10:39:24.413175 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 10:39:24 crc kubenswrapper[4772]: I1122 10:39:24.413406 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvsrl" Nov 22 10:39:24 crc kubenswrapper[4772]: E1122 10:39:24.413798 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvsrl" podUID="c89edce7-fac8-4954-b2e9-420f0f2de6a8" Nov 22 10:39:24 crc kubenswrapper[4772]: I1122 10:39:24.499258 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:24 crc kubenswrapper[4772]: I1122 10:39:24.499306 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:24 crc kubenswrapper[4772]: I1122 10:39:24.499317 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:24 crc kubenswrapper[4772]: I1122 10:39:24.499334 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:24 crc kubenswrapper[4772]: I1122 10:39:24.499345 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:24Z","lastTransitionTime":"2025-11-22T10:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:24 crc kubenswrapper[4772]: I1122 10:39:24.602800 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:24 crc kubenswrapper[4772]: I1122 10:39:24.602858 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:24 crc kubenswrapper[4772]: I1122 10:39:24.602875 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:24 crc kubenswrapper[4772]: I1122 10:39:24.602906 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:24 crc kubenswrapper[4772]: I1122 10:39:24.602930 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:24Z","lastTransitionTime":"2025-11-22T10:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:24 crc kubenswrapper[4772]: I1122 10:39:24.706722 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:24 crc kubenswrapper[4772]: I1122 10:39:24.706864 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:24 crc kubenswrapper[4772]: I1122 10:39:24.706888 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:24 crc kubenswrapper[4772]: I1122 10:39:24.706920 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:24 crc kubenswrapper[4772]: I1122 10:39:24.706953 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:24Z","lastTransitionTime":"2025-11-22T10:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:24 crc kubenswrapper[4772]: I1122 10:39:24.810969 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:24 crc kubenswrapper[4772]: I1122 10:39:24.811078 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:24 crc kubenswrapper[4772]: I1122 10:39:24.811106 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:24 crc kubenswrapper[4772]: I1122 10:39:24.811142 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:24 crc kubenswrapper[4772]: I1122 10:39:24.811170 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:24Z","lastTransitionTime":"2025-11-22T10:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:24 crc kubenswrapper[4772]: I1122 10:39:24.914519 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:24 crc kubenswrapper[4772]: I1122 10:39:24.914605 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:24 crc kubenswrapper[4772]: I1122 10:39:24.914630 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:24 crc kubenswrapper[4772]: I1122 10:39:24.914663 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:24 crc kubenswrapper[4772]: I1122 10:39:24.914687 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:24Z","lastTransitionTime":"2025-11-22T10:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:25 crc kubenswrapper[4772]: I1122 10:39:25.018678 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:25 crc kubenswrapper[4772]: I1122 10:39:25.018758 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:25 crc kubenswrapper[4772]: I1122 10:39:25.018778 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:25 crc kubenswrapper[4772]: I1122 10:39:25.018811 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:25 crc kubenswrapper[4772]: I1122 10:39:25.018836 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:25Z","lastTransitionTime":"2025-11-22T10:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:25 crc kubenswrapper[4772]: I1122 10:39:25.122405 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:25 crc kubenswrapper[4772]: I1122 10:39:25.122461 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:25 crc kubenswrapper[4772]: I1122 10:39:25.122472 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:25 crc kubenswrapper[4772]: I1122 10:39:25.122490 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:25 crc kubenswrapper[4772]: I1122 10:39:25.122504 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:25Z","lastTransitionTime":"2025-11-22T10:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:25 crc kubenswrapper[4772]: I1122 10:39:25.225678 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:25 crc kubenswrapper[4772]: I1122 10:39:25.225746 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:25 crc kubenswrapper[4772]: I1122 10:39:25.225758 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:25 crc kubenswrapper[4772]: I1122 10:39:25.225787 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:25 crc kubenswrapper[4772]: I1122 10:39:25.225808 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:25Z","lastTransitionTime":"2025-11-22T10:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:25 crc kubenswrapper[4772]: I1122 10:39:25.329667 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:25 crc kubenswrapper[4772]: I1122 10:39:25.329740 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:25 crc kubenswrapper[4772]: I1122 10:39:25.329757 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:25 crc kubenswrapper[4772]: I1122 10:39:25.329787 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:25 crc kubenswrapper[4772]: I1122 10:39:25.329811 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:25Z","lastTransitionTime":"2025-11-22T10:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:25 crc kubenswrapper[4772]: I1122 10:39:25.413255 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 10:39:25 crc kubenswrapper[4772]: E1122 10:39:25.413452 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 10:39:25 crc kubenswrapper[4772]: I1122 10:39:25.413554 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:39:25 crc kubenswrapper[4772]: E1122 10:39:25.413799 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 10:39:25 crc kubenswrapper[4772]: I1122 10:39:25.433713 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:25 crc kubenswrapper[4772]: I1122 10:39:25.433784 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:25 crc kubenswrapper[4772]: I1122 10:39:25.433804 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:25 crc kubenswrapper[4772]: I1122 10:39:25.433830 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:25 crc kubenswrapper[4772]: I1122 10:39:25.433851 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:25Z","lastTransitionTime":"2025-11-22T10:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:25 crc kubenswrapper[4772]: I1122 10:39:25.538492 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:25 crc kubenswrapper[4772]: I1122 10:39:25.538577 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:25 crc kubenswrapper[4772]: I1122 10:39:25.538605 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:25 crc kubenswrapper[4772]: I1122 10:39:25.538648 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:25 crc kubenswrapper[4772]: I1122 10:39:25.538678 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:25Z","lastTransitionTime":"2025-11-22T10:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:25 crc kubenswrapper[4772]: I1122 10:39:25.643185 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:25 crc kubenswrapper[4772]: I1122 10:39:25.643264 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:25 crc kubenswrapper[4772]: I1122 10:39:25.643277 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:25 crc kubenswrapper[4772]: I1122 10:39:25.643301 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:25 crc kubenswrapper[4772]: I1122 10:39:25.643319 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:25Z","lastTransitionTime":"2025-11-22T10:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:25 crc kubenswrapper[4772]: I1122 10:39:25.746444 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:25 crc kubenswrapper[4772]: I1122 10:39:25.746525 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:25 crc kubenswrapper[4772]: I1122 10:39:25.746544 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:25 crc kubenswrapper[4772]: I1122 10:39:25.746574 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:25 crc kubenswrapper[4772]: I1122 10:39:25.746596 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:25Z","lastTransitionTime":"2025-11-22T10:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:25 crc kubenswrapper[4772]: I1122 10:39:25.849966 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:25 crc kubenswrapper[4772]: I1122 10:39:25.850146 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:25 crc kubenswrapper[4772]: I1122 10:39:25.850236 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:25 crc kubenswrapper[4772]: I1122 10:39:25.850316 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:25 crc kubenswrapper[4772]: I1122 10:39:25.850341 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:25Z","lastTransitionTime":"2025-11-22T10:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:25 crc kubenswrapper[4772]: I1122 10:39:25.913714 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-s4mvm_d73fd58d-561a-4b16-9f9d-49ae966edb24/kube-multus/0.log" Nov 22 10:39:25 crc kubenswrapper[4772]: I1122 10:39:25.913809 4772 generic.go:334] "Generic (PLEG): container finished" podID="d73fd58d-561a-4b16-9f9d-49ae966edb24" containerID="4ae06ffdc2c0f4cebcbf28f2997026db4d45bcd0c95776e5a1d94527503041d0" exitCode=1 Nov 22 10:39:25 crc kubenswrapper[4772]: I1122 10:39:25.913865 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-s4mvm" event={"ID":"d73fd58d-561a-4b16-9f9d-49ae966edb24","Type":"ContainerDied","Data":"4ae06ffdc2c0f4cebcbf28f2997026db4d45bcd0c95776e5a1d94527503041d0"} Nov 22 10:39:25 crc kubenswrapper[4772]: I1122 10:39:25.914604 4772 scope.go:117] "RemoveContainer" containerID="4ae06ffdc2c0f4cebcbf28f2997026db4d45bcd0c95776e5a1d94527503041d0" Nov 22 10:39:25 crc kubenswrapper[4772]: I1122 10:39:25.941105 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:25Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:25 crc kubenswrapper[4772]: I1122 10:39:25.955355 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:25 crc kubenswrapper[4772]: I1122 10:39:25.956156 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:25 crc kubenswrapper[4772]: I1122 10:39:25.956324 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:25 crc kubenswrapper[4772]: I1122 10:39:25.956474 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:25 crc kubenswrapper[4772]: I1122 10:39:25.956636 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:25Z","lastTransitionTime":"2025-11-22T10:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:25 crc kubenswrapper[4772]: I1122 10:39:25.967501 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:25Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:25 crc kubenswrapper[4772]: I1122 10:39:25.982936 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9qv88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"751566de-1913-4dd6-9054-21febc661c27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902be5c4bb0f9aaca4a21ca0af13b2dc29d682c30e1e2d281beb6e4e15b6aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jckg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9qv88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:25Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:26 crc kubenswrapper[4772]: I1122 10:39:26.005893 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s4mvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d73fd58d-561a-4b16-9f9d-49ae966edb24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ae06ffdc2c0f4cebcbf28f2997026db4d45bcd0c95776e5a1d94527503041d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ae06ffdc2c0f4cebcbf28f2997026db4d45bcd0c95776e5a1d94527503041d0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T10:39:25Z\\\",\\\"message\\\":\\\"2025-11-22T10:38:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f2369e6a-e365-4b69-af4f-a38ee6fb8bee\\\\n2025-11-22T10:38:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f2369e6a-e365-4b69-af4f-a38ee6fb8bee to /host/opt/cni/bin/\\\\n2025-11-22T10:38:40Z [verbose] multus-daemon started\\\\n2025-11-22T10:38:40Z [verbose] Readiness Indicator file check\\\\n2025-11-22T10:39:25Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the 
condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s4mvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:26Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:26 crc kubenswrapper[4772]: I1122 10:39:26.028244 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"888813e4-14b2-4bbc-badf-3fd7c315a740\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ade6ffb5c4bd3c19a2d85f21de1e0f198d6729b45df79233c8db3c73aff066f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7c687d1a3e98c09917692169294f8549b0f1ddeddcc97c073da4d8e5c17e012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1452c164ce569dfa4665a70113fb965905d1974744637904d6bfba2e35446f11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b331f41571928038bb597f1e94a67d24e726471c1a22082607dd26c11e8ea33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:26Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:26 crc kubenswrapper[4772]: I1122 10:39:26.049521 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9557fb28-d21d-45a3-b7b9-170936e83f56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a77d132d925d70472ff5456bf4521c0612c319296dd1f3fa31c68d3c84fb347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddce747c72c887bb7dff7503755d81d0e65305d2545f93dcb0d20bd6bf32b495\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c
987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187f5633fd42f380d4976965809b538530acef067adc7e02203f95df532dbbfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab2905e4976fc61f69872e9b37db10f5598fe3f96e0a124d22848215a7b5817\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 10:38:29.060158 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 10:38:29.060179 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 10:38:29.060386 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\"\\\\nI1122 10:38:29.060380 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763807893\\\\\\\\\\\\\\\" (2025-11-22 10:38:12 +0000 UTC to 2025-12-22 10:38:13 +0000 UTC (now=2025-11-22 10:38:29.060343851 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060557 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763807909\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763807908\\\\\\\\\\\\\\\" (2025-11-22 09:38:28 +0000 UTC to 2026-11-22 09:38:28 +0000 UTC (now=2025-11-22 10:38:29.060522446 +0000 UTC))\\\\\\\"\\\\nI1122 
10:38:29.060582 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 10:38:29.060604 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 10:38:29.060636 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 10:38:29.062066 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062334 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062385 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 10:38:29.063702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42fc0afe421b18b7212836931c95487807ac0b32f6e87d92e847bd7826aedcd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:26Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:26 crc kubenswrapper[4772]: I1122 
10:39:26.060105 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:26 crc kubenswrapper[4772]: I1122 10:39:26.060208 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:26 crc kubenswrapper[4772]: I1122 10:39:26.060237 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:26 crc kubenswrapper[4772]: I1122 10:39:26.060273 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:26 crc kubenswrapper[4772]: I1122 10:39:26.060293 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:26Z","lastTransitionTime":"2025-11-22T10:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:26 crc kubenswrapper[4772]: I1122 10:39:26.065075 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd883a9c-bd98-4ce0-9256-710c2311012f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab209a1a1db8448083e4994bbd6c236d67ddfd2ba6eff2bf3c05150f600ad698\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f447b1a4882759c19fd69e9acddae280d63009c5fc7d21b368b470160c53250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,
\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecffb7596767673ba91dbbee96f3fa4b32109c5d5145176e86618860187077de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92229718d5b39cbd9473102e7e569d8370d38e945387ff3f48ee9f4077d8d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92229718d5b39cbd9473102e7e569d8370d38e945387ff3f48ee9f4077d8d1fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:26Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:26 crc kubenswrapper[4772]: I1122 10:39:26.087393 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e05750665701a1973e974ba759b2a0ef54a89b24d63d33a9fce59e9aa7a21ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:26Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:26 crc kubenswrapper[4772]: I1122 10:39:26.102559 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2386c238-461f-4956-940f-ac3c26eb052e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b3e223067a57a7ae418e1de80dff3c7537e0506e040028c413225f25397f03d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4247d5ab0095ab6c7315fdfcde34f8407d870723059e61c3417c20f80dc06ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwshd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:26Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:26 crc kubenswrapper[4772]: I1122 10:39:26.125938 4772 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z6xtb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e11c7f86-73db-4015-9fe5-c0b5047c19a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://901e96ef18a53b8231a138e235e8ed145d94f111dc515aaa5e5415c18535e457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d7a31c58b763dd30aa10f250fa654756dbb10166064c3adf6d313cc13b066f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11d7a31c58b763dd30aa10f250fa654756dbb10166064c3adf6d313cc13b066f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://533b405c883d98c14aca92e6a23bd1743dc0a76ed8256f2b892746195602fb03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://533b405c883d98c14aca92e6a23bd1743dc0a76ed8256f2b892746195602fb03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c508fde56a767c063afbb65f6272695d739d7d0c594efcafd7465397309ef931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c508fde56a767c063afbb65f6272695d739d7d0c594efcafd7465397309ef931\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dabd0e99ffb72fe38680ba04e6dc2cc342f2b21368c0646ac51924fdca817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dabd0e99ffb72fe38680ba04e6dc2cc342f2b21368c0646ac51924fdca817b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfe1b37351de69494e5f42516d1363a1acd97912fd754ddca224b61b7020b0e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfe1b37351de69494e5f42516d1363a1acd97912fd754ddca224b61b7020b0e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8442718fb97d911a608083bbbcee822108ea95c9896e42b0f37fa05c0b8d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df8442718fb97d911a608083bbbcee822108ea95c9896e42b0f37fa05c0b8d3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z6xtb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:26Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:26 crc kubenswrapper[4772]: I1122 10:39:26.145395 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af3d235e676f9a66e16a76f30d940c652287b80507d2335b94aaad54d2aae071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b9e216df73a0c6e89da9a01b70ca7393dea61d59aa4237584add3411d3362d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:26Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:26 crc kubenswrapper[4772]: I1122 10:39:26.163560 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:26 crc kubenswrapper[4772]: I1122 10:39:26.163609 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:26 crc kubenswrapper[4772]: I1122 10:39:26.163622 4772 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 22 10:39:26 crc kubenswrapper[4772]: I1122 10:39:26.163642 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:26 crc kubenswrapper[4772]: I1122 10:39:26.163658 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:26Z","lastTransitionTime":"2025-11-22T10:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:26 crc kubenswrapper[4772]: I1122 10:39:26.164404 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e026ddd84cdd62f9fa89b0c173fa464361017d3603d80e2b88a1b04af13487c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:26Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:26 crc kubenswrapper[4772]: I1122 10:39:26.178353 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n8qfx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0565ed-eb43-43a3-974c-45a23e9615a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73a731bf43107a62df11bd3a033a52f25d32911218822f9c5ac1f3d9b6f718ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98gmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a2e02f93671de603bc005b4f7561a62bb7681d678a3348b837656ae2af54f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98gmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-n8qfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:26Z is after 2025-08-24T17:21:41Z" Nov 22 
10:39:26 crc kubenswrapper[4772]: I1122 10:39:26.196923 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:26Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:26 crc kubenswrapper[4772]: I1122 10:39:26.222588 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd84e05e-cfd6-46d5-bd23-30689addcd8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://405215cb0867af5a98dd2c0948c12867783df5de69f3e68c0c420e94bd41fbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3372362ced27c1e5e6ed3271a6b0fd19c04ce1d12a6c0b45e60b4f19b9d9d9e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7959ee06497c972cc6915e3ab426dc2bba73e58343902084895fcca45fc97a61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dfec56a475b9928ac7803e64d6fd305028a7f8d66647c8d06eeaa12cb73838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da55e361a3ec7dca7a00a9ccac8e72cf43be0ebb8973fcac40e3bf86dac5237e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc7ab7764e2d893152e965392c916dff7876acb81d134cdec2615b253dbee64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d7262920f7a31f254be2b721019ad96c62da015bcc54afa32884321e1bba1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3d7262920f7a31f254be2b721019ad96c62da015bcc54afa32884321e1bba1c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T10:39:07Z\\\",\\\"message\\\":\\\"*v1.Namespace event handler 1 for removal\\\\nI1122 10:39:07.442002 6580 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1122 10:39:07.442031 6580 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1122 10:39:07.442073 6580 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1122 10:39:07.442069 6580 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1122 10:39:07.442090 6580 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1122 10:39:07.442093 6580 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1122 10:39:07.442097 6580 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1122 10:39:07.442120 6580 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1122 10:39:07.442140 6580 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1122 10:39:07.442142 6580 factory.go:656] Stopping watch factory\\\\nI1122 10:39:07.442151 6580 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1122 10:39:07.442163 6580 ovnkube.go:599] Stopped ovnkube\\\\nI1122 10:39:07.442164 6580 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1122 10:39:07.442160 6580 handler.go:208] Removed *v1.Node event handler 2\\\\nI1122 10:39:07.442210 6580 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1122 10:39:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:39:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mfm49_openshift-ovn-kubernetes(fd84e05e-cfd6-46d5-bd23-30689addcd8b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac96e66a7d3d7899a8196d07614bccdd741145d5dc57c26b30fc4e127c6a94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mfm49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:26Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:26 crc kubenswrapper[4772]: I1122 10:39:26.235574 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mbpk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e39748b-4fa5-4a70-8921-dc3dc814f124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c334585be8f67849986e80c7a7ea777340e93f963ba58fa0fb7b36b16a73142b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m5ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mbpk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:26Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:26 crc kubenswrapper[4772]: I1122 10:39:26.247695 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fvsrl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c89edce7-fac8-4954-b2e9-420f0f2de6a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x76mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x76mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:44Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-fvsrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:26Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:26 crc kubenswrapper[4772]: I1122 10:39:26.267647 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:26 crc kubenswrapper[4772]: I1122 10:39:26.267708 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:26 crc kubenswrapper[4772]: I1122 10:39:26.267725 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:26 crc kubenswrapper[4772]: I1122 10:39:26.267752 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:26 crc kubenswrapper[4772]: I1122 10:39:26.267774 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:26Z","lastTransitionTime":"2025-11-22T10:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:26 crc kubenswrapper[4772]: I1122 10:39:26.370928 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:26 crc kubenswrapper[4772]: I1122 10:39:26.370997 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:26 crc kubenswrapper[4772]: I1122 10:39:26.371021 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:26 crc kubenswrapper[4772]: I1122 10:39:26.371082 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:26 crc kubenswrapper[4772]: I1122 10:39:26.371111 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:26Z","lastTransitionTime":"2025-11-22T10:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:26 crc kubenswrapper[4772]: I1122 10:39:26.413374 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 10:39:26 crc kubenswrapper[4772]: I1122 10:39:26.413405 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvsrl" Nov 22 10:39:26 crc kubenswrapper[4772]: E1122 10:39:26.414086 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 10:39:26 crc kubenswrapper[4772]: E1122 10:39:26.414083 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvsrl" podUID="c89edce7-fac8-4954-b2e9-420f0f2de6a8" Nov 22 10:39:26 crc kubenswrapper[4772]: I1122 10:39:26.474784 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:26 crc kubenswrapper[4772]: I1122 10:39:26.474863 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:26 crc kubenswrapper[4772]: I1122 10:39:26.474881 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:26 crc kubenswrapper[4772]: I1122 10:39:26.474908 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:26 crc kubenswrapper[4772]: I1122 10:39:26.474929 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:26Z","lastTransitionTime":"2025-11-22T10:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:26 crc kubenswrapper[4772]: I1122 10:39:26.581004 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:26 crc kubenswrapper[4772]: I1122 10:39:26.581569 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:26 crc kubenswrapper[4772]: I1122 10:39:26.581754 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:26 crc kubenswrapper[4772]: I1122 10:39:26.581906 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:26 crc kubenswrapper[4772]: I1122 10:39:26.582082 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:26Z","lastTransitionTime":"2025-11-22T10:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:26 crc kubenswrapper[4772]: I1122 10:39:26.685752 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:26 crc kubenswrapper[4772]: I1122 10:39:26.685825 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:26 crc kubenswrapper[4772]: I1122 10:39:26.685843 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:26 crc kubenswrapper[4772]: I1122 10:39:26.685880 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:26 crc kubenswrapper[4772]: I1122 10:39:26.685923 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:26Z","lastTransitionTime":"2025-11-22T10:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:26 crc kubenswrapper[4772]: I1122 10:39:26.790035 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:26 crc kubenswrapper[4772]: I1122 10:39:26.790186 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:26 crc kubenswrapper[4772]: I1122 10:39:26.790209 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:26 crc kubenswrapper[4772]: I1122 10:39:26.790242 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:26 crc kubenswrapper[4772]: I1122 10:39:26.790266 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:26Z","lastTransitionTime":"2025-11-22T10:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:26 crc kubenswrapper[4772]: I1122 10:39:26.894265 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:26 crc kubenswrapper[4772]: I1122 10:39:26.894339 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:26 crc kubenswrapper[4772]: I1122 10:39:26.894357 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:26 crc kubenswrapper[4772]: I1122 10:39:26.894401 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:26 crc kubenswrapper[4772]: I1122 10:39:26.894425 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:26Z","lastTransitionTime":"2025-11-22T10:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:26 crc kubenswrapper[4772]: I1122 10:39:26.921470 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-s4mvm_d73fd58d-561a-4b16-9f9d-49ae966edb24/kube-multus/0.log" Nov 22 10:39:26 crc kubenswrapper[4772]: I1122 10:39:26.921587 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-s4mvm" event={"ID":"d73fd58d-561a-4b16-9f9d-49ae966edb24","Type":"ContainerStarted","Data":"3325c0083f66336c643bd867f49df279967efccdbeadef4c588e1e43fbe1c13c"} Nov 22 10:39:26 crc kubenswrapper[4772]: I1122 10:39:26.945606 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n8qfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0565ed-eb43-43a3-974c-45a23e9615a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73a731bf43107a62df11bd3a033a52f25d32911218822f9c5ac1f3d9b6f718ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98gmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a2e02f93671de603bc005b4f7561a62bb7681d678a3348b837656ae2af54f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\
\"name\\\":\\\"kube-api-access-98gmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-n8qfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:26Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:26 crc kubenswrapper[4772]: I1122 10:39:26.967217 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e026ddd84cdd62f9fa89b0c173fa464361017d3603d80e2b88a1b04af13487c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:26Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:26 crc kubenswrapper[4772]: I1122 10:39:26.987322 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fvsrl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c89edce7-fac8-4954-b2e9-420f0f2de6a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x76mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x76mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fvsrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:26Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:26 crc kubenswrapper[4772]: I1122 10:39:26.998903 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:26 crc kubenswrapper[4772]: I1122 10:39:26.998956 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:26 crc kubenswrapper[4772]: I1122 10:39:26.998988 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Nov 22 10:39:26 crc kubenswrapper[4772]: I1122 10:39:26.999008 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:26 crc kubenswrapper[4772]: I1122 10:39:26.999022 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:26Z","lastTransitionTime":"2025-11-22T10:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:27 crc kubenswrapper[4772]: I1122 10:39:27.011221 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:27Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:27 crc kubenswrapper[4772]: I1122 10:39:27.045273 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd84e05e-cfd6-46d5-bd23-30689addcd8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://405215cb0867af5a98dd2c0948c12867783df5de69f3e68c0c420e94bd41fbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3372362ced27c1e5e6ed3271a6b0fd19c04ce1d12a6c0b45e60b4f19b9d9d9e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7959ee06497c972cc6915e3ab426dc2bba73e58343902084895fcca45fc97a61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dfec56a475b9928ac7803e64d6fd305028a7f8d66647c8d06eeaa12cb73838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da55e361a3ec7dca7a00a9ccac8e72cf43be0ebb8973fcac40e3bf86dac5237e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc7ab7764e2d893152e965392c916dff7876acb81d134cdec2615b253dbee64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d7262920f7a31f254be2b721019ad96c62da01
5bcc54afa32884321e1bba1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3d7262920f7a31f254be2b721019ad96c62da015bcc54afa32884321e1bba1c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T10:39:07Z\\\",\\\"message\\\":\\\"*v1.Namespace event handler 1 for removal\\\\nI1122 10:39:07.442002 6580 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1122 10:39:07.442031 6580 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1122 10:39:07.442073 6580 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1122 10:39:07.442069 6580 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1122 10:39:07.442090 6580 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1122 10:39:07.442093 6580 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1122 10:39:07.442097 6580 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1122 10:39:07.442120 6580 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1122 10:39:07.442140 6580 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1122 10:39:07.442142 6580 factory.go:656] Stopping watch factory\\\\nI1122 10:39:07.442151 6580 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1122 10:39:07.442163 6580 ovnkube.go:599] Stopped ovnkube\\\\nI1122 10:39:07.442164 6580 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1122 10:39:07.442160 6580 handler.go:208] Removed *v1.Node event handler 2\\\\nI1122 10:39:07.442210 6580 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1122 10:39:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:39:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mfm49_openshift-ovn-kubernetes(fd84e05e-cfd6-46d5-bd23-30689addcd8b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac96e66a7d3d7899a8196d07614bccdd741145d5dc57c26b30fc4e127c6a94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mfm49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:27Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:27 crc kubenswrapper[4772]: I1122 10:39:27.061749 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mbpk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e39748b-4fa5-4a70-8921-dc3dc814f124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c334585be8f67849986e80c7a7ea777340e93f963ba58fa0fb7b36b16a73142b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m5ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mbpk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:27Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:27 crc kubenswrapper[4772]: I1122 10:39:27.079937 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e05750665701a1973e974ba759b2a0ef54a89b24d63d33a9fce59e9aa7a21ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:27Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:27 crc kubenswrapper[4772]: I1122 10:39:27.101942 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:27Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:27 crc kubenswrapper[4772]: I1122 10:39:27.102588 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:27 crc kubenswrapper[4772]: I1122 10:39:27.102711 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:27 crc kubenswrapper[4772]: I1122 10:39:27.102747 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:27 crc kubenswrapper[4772]: I1122 10:39:27.102784 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:27 crc kubenswrapper[4772]: I1122 10:39:27.102813 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:27Z","lastTransitionTime":"2025-11-22T10:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:27 crc kubenswrapper[4772]: I1122 10:39:27.122032 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:27Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:27 crc kubenswrapper[4772]: I1122 10:39:27.140101 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9qv88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"751566de-1913-4dd6-9054-21febc661c27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902be5c4bb0f9aaca4a21ca0af13b2dc29d682c30e1e2d281beb6e4e15b6aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jckg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9qv88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:27Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:27 crc kubenswrapper[4772]: I1122 10:39:27.158782 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s4mvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d73fd58d-561a-4b16-9f9d-49ae966edb24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3325c0083f66336c643bd867f49df279967efccdbeadef4c588e1e43fbe1c13c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ae06ffdc2c0f4cebcbf28f2997026db4d45bcd0c95776e5a1d94527503041d0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T10:39:25Z\\\",\\\"message\\\":\\\"2025-11-22T10:38:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f2369e6a-e365-4b69-af4f-a38ee6fb8bee\\\\n2025-11-22T10:38:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f2369e6a-e365-4b69-af4f-a38ee6fb8bee to /host/opt/cni/bin/\\\\n2025-11-22T10:38:40Z [verbose] multus-daemon started\\\\n2025-11-22T10:38:40Z [verbose] Readiness Indicator file check\\\\n2025-11-22T10:39:25Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:39:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s4mvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:27Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:27 crc kubenswrapper[4772]: I1122 10:39:27.181110 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"888813e4-14b2-4bbc-badf-3fd7c315a740\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ade6ffb5c4bd3c19a2d85f21de1e0f198d6729b45df79233c8db3c73aff066f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7c687d1a3e98c09917692169294f8549b0f1ddeddcc97c073da4d8e5c17e012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1452c164ce569dfa4665a70113fb965905d1974744637904d6bfba2e35446f11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b331f41571928038bb597f1e94a67d24e726471c1a22082607dd26c11e8ea33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:27Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:27 crc kubenswrapper[4772]: I1122 10:39:27.204508 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9557fb28-d21d-45a3-b7b9-170936e83f56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a77d132d925d70472ff5456bf4521c0612c319296dd1f3fa31c68d3c84fb347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddce747c72c887bb7dff7503755d81d0e65305d2545f93dcb0d20bd6bf32b495\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c
987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187f5633fd42f380d4976965809b538530acef067adc7e02203f95df532dbbfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab2905e4976fc61f69872e9b37db10f5598fe3f96e0a124d22848215a7b5817\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 10:38:29.060158 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 10:38:29.060179 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 10:38:29.060386 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\"\\\\nI1122 10:38:29.060380 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763807893\\\\\\\\\\\\\\\" (2025-11-22 10:38:12 +0000 UTC to 2025-12-22 10:38:13 +0000 UTC (now=2025-11-22 10:38:29.060343851 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060557 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763807909\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763807908\\\\\\\\\\\\\\\" (2025-11-22 09:38:28 +0000 UTC to 2026-11-22 09:38:28 +0000 UTC (now=2025-11-22 10:38:29.060522446 +0000 UTC))\\\\\\\"\\\\nI1122 
10:38:29.060582 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 10:38:29.060604 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 10:38:29.060636 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 10:38:29.062066 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062334 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062385 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 10:38:29.063702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42fc0afe421b18b7212836931c95487807ac0b32f6e87d92e847bd7826aedcd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:27Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:27 crc kubenswrapper[4772]: I1122 
10:39:27.206828 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:27 crc kubenswrapper[4772]: I1122 10:39:27.206864 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:27 crc kubenswrapper[4772]: I1122 10:39:27.206878 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:27 crc kubenswrapper[4772]: I1122 10:39:27.206902 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:27 crc kubenswrapper[4772]: I1122 10:39:27.206918 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:27Z","lastTransitionTime":"2025-11-22T10:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:27 crc kubenswrapper[4772]: I1122 10:39:27.221416 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd883a9c-bd98-4ce0-9256-710c2311012f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab209a1a1db8448083e4994bbd6c236d67ddfd2ba6eff2bf3c05150f600ad698\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f447b1a4882759c19fd69e9acddae280d63009c5fc7d21b368b470160c53250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,
\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecffb7596767673ba91dbbee96f3fa4b32109c5d5145176e86618860187077de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92229718d5b39cbd9473102e7e569d8370d38e945387ff3f48ee9f4077d8d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92229718d5b39cbd9473102e7e569d8370d38e945387ff3f48ee9f4077d8d1fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:27Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:27 crc kubenswrapper[4772]: I1122 10:39:27.237210 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2386c238-461f-4956-940f-ac3c26eb052e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b3e223067a57a7ae418e1de80dff3c7537e0506e040028c413225f25397f03d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4247d5ab0095ab6c7315fdfcde34f8407d870723059e61c3417c20f80dc06ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwshd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:27Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:27 crc kubenswrapper[4772]: I1122 10:39:27.257604 4772 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z6xtb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e11c7f86-73db-4015-9fe5-c0b5047c19a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://901e96ef18a53b8231a138e235e8ed145d94f111dc515aaa5e5415c18535e457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d7a31c58b763dd30aa10f250fa654756dbb10166064c3adf6d313cc13b066f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11d7a31c58b763dd30aa10f250fa654756dbb10166064c3adf6d313cc13b066f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://533b405c883d98c14aca92e6a23bd1743dc0a76ed8256f2b892746195602fb03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://533b405c883d98c14aca92e6a23bd1743dc0a76ed8256f2b892746195602fb03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c508fde56a767c063afbb65f6272695d739d7d0c594efcafd7465397309ef931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c508fde56a767c063afbb65f6272695d739d7d0c594efcafd7465397309ef931\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dabd0e99ffb72fe38680ba04e6dc2cc342f2b21368c0646ac51924fdca817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dabd0e99ffb72fe38680ba04e6dc2cc342f2b21368c0646ac51924fdca817b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfe1b37351de69494e5f42516d1363a1acd97912fd754ddca224b61b7020b0e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfe1b37351de69494e5f42516d1363a1acd97912fd754ddca224b61b7020b0e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8442718fb97d911a608083bbbcee822108ea95c9896e42b0f37fa05c0b8d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df8442718fb97d911a608083bbbcee822108ea95c9896e42b0f37fa05c0b8d3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z6xtb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:27Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:27 crc kubenswrapper[4772]: I1122 10:39:27.273905 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af3d235e676f9a66e16a76f30d940c652287b80507d2335b94aaad54d2aae071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b9e216df73a0c6e89da9a01b70ca7393dea61d59aa4237584add3411d3362d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:27Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:27 crc kubenswrapper[4772]: I1122 10:39:27.310984 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:27 crc kubenswrapper[4772]: I1122 10:39:27.311083 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:27 crc kubenswrapper[4772]: I1122 10:39:27.311105 4772 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 22 10:39:27 crc kubenswrapper[4772]: I1122 10:39:27.311178 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:27 crc kubenswrapper[4772]: I1122 10:39:27.311212 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:27Z","lastTransitionTime":"2025-11-22T10:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:27 crc kubenswrapper[4772]: I1122 10:39:27.412828 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 10:39:27 crc kubenswrapper[4772]: I1122 10:39:27.412983 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:39:27 crc kubenswrapper[4772]: E1122 10:39:27.413184 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 10:39:27 crc kubenswrapper[4772]: E1122 10:39:27.413343 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 10:39:27 crc kubenswrapper[4772]: I1122 10:39:27.415849 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:27 crc kubenswrapper[4772]: I1122 10:39:27.415984 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:27 crc kubenswrapper[4772]: I1122 10:39:27.416092 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:27 crc kubenswrapper[4772]: I1122 10:39:27.416134 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:27 crc kubenswrapper[4772]: I1122 10:39:27.416190 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:27Z","lastTransitionTime":"2025-11-22T10:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:27 crc kubenswrapper[4772]: I1122 10:39:27.433126 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Nov 22 10:39:27 crc kubenswrapper[4772]: I1122 10:39:27.520434 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:27 crc kubenswrapper[4772]: I1122 10:39:27.520490 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:27 crc kubenswrapper[4772]: I1122 10:39:27.520508 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:27 crc kubenswrapper[4772]: I1122 10:39:27.520534 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:27 crc kubenswrapper[4772]: I1122 10:39:27.520553 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:27Z","lastTransitionTime":"2025-11-22T10:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:27 crc kubenswrapper[4772]: I1122 10:39:27.623991 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:27 crc kubenswrapper[4772]: I1122 10:39:27.624168 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:27 crc kubenswrapper[4772]: I1122 10:39:27.624191 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:27 crc kubenswrapper[4772]: I1122 10:39:27.624218 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:27 crc kubenswrapper[4772]: I1122 10:39:27.624238 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:27Z","lastTransitionTime":"2025-11-22T10:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:27 crc kubenswrapper[4772]: I1122 10:39:27.728179 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:27 crc kubenswrapper[4772]: I1122 10:39:27.728258 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:27 crc kubenswrapper[4772]: I1122 10:39:27.728277 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:27 crc kubenswrapper[4772]: I1122 10:39:27.728311 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:27 crc kubenswrapper[4772]: I1122 10:39:27.728331 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:27Z","lastTransitionTime":"2025-11-22T10:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:27 crc kubenswrapper[4772]: I1122 10:39:27.832470 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:27 crc kubenswrapper[4772]: I1122 10:39:27.832571 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:27 crc kubenswrapper[4772]: I1122 10:39:27.832599 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:27 crc kubenswrapper[4772]: I1122 10:39:27.832643 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:27 crc kubenswrapper[4772]: I1122 10:39:27.832667 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:27Z","lastTransitionTime":"2025-11-22T10:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:27 crc kubenswrapper[4772]: I1122 10:39:27.935307 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:27 crc kubenswrapper[4772]: I1122 10:39:27.935348 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:27 crc kubenswrapper[4772]: I1122 10:39:27.935362 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:27 crc kubenswrapper[4772]: I1122 10:39:27.935385 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:27 crc kubenswrapper[4772]: I1122 10:39:27.935400 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:27Z","lastTransitionTime":"2025-11-22T10:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:28 crc kubenswrapper[4772]: I1122 10:39:28.039443 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:28 crc kubenswrapper[4772]: I1122 10:39:28.039493 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:28 crc kubenswrapper[4772]: I1122 10:39:28.039505 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:28 crc kubenswrapper[4772]: I1122 10:39:28.039548 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:28 crc kubenswrapper[4772]: I1122 10:39:28.039560 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:28Z","lastTransitionTime":"2025-11-22T10:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:28 crc kubenswrapper[4772]: I1122 10:39:28.143299 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:28 crc kubenswrapper[4772]: I1122 10:39:28.143366 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:28 crc kubenswrapper[4772]: I1122 10:39:28.143389 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:28 crc kubenswrapper[4772]: I1122 10:39:28.143421 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:28 crc kubenswrapper[4772]: I1122 10:39:28.143443 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:28Z","lastTransitionTime":"2025-11-22T10:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:28 crc kubenswrapper[4772]: I1122 10:39:28.246580 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:28 crc kubenswrapper[4772]: I1122 10:39:28.246660 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:28 crc kubenswrapper[4772]: I1122 10:39:28.246675 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:28 crc kubenswrapper[4772]: I1122 10:39:28.246698 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:28 crc kubenswrapper[4772]: I1122 10:39:28.246717 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:28Z","lastTransitionTime":"2025-11-22T10:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:28 crc kubenswrapper[4772]: I1122 10:39:28.350076 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:28 crc kubenswrapper[4772]: I1122 10:39:28.350150 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:28 crc kubenswrapper[4772]: I1122 10:39:28.350167 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:28 crc kubenswrapper[4772]: I1122 10:39:28.350195 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:28 crc kubenswrapper[4772]: I1122 10:39:28.350216 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:28Z","lastTransitionTime":"2025-11-22T10:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:28 crc kubenswrapper[4772]: I1122 10:39:28.413249 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 10:39:28 crc kubenswrapper[4772]: I1122 10:39:28.413380 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvsrl" Nov 22 10:39:28 crc kubenswrapper[4772]: E1122 10:39:28.413450 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 10:39:28 crc kubenswrapper[4772]: E1122 10:39:28.413695 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fvsrl" podUID="c89edce7-fac8-4954-b2e9-420f0f2de6a8" Nov 22 10:39:28 crc kubenswrapper[4772]: I1122 10:39:28.453530 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:28 crc kubenswrapper[4772]: I1122 10:39:28.453595 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:28 crc kubenswrapper[4772]: I1122 10:39:28.453613 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:28 crc kubenswrapper[4772]: I1122 10:39:28.453641 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:28 crc kubenswrapper[4772]: I1122 10:39:28.453660 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:28Z","lastTransitionTime":"2025-11-22T10:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:28 crc kubenswrapper[4772]: I1122 10:39:28.556739 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:28 crc kubenswrapper[4772]: I1122 10:39:28.556823 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:28 crc kubenswrapper[4772]: I1122 10:39:28.556842 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:28 crc kubenswrapper[4772]: I1122 10:39:28.556872 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:28 crc kubenswrapper[4772]: I1122 10:39:28.556897 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:28Z","lastTransitionTime":"2025-11-22T10:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:28 crc kubenswrapper[4772]: I1122 10:39:28.660316 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:28 crc kubenswrapper[4772]: I1122 10:39:28.660389 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:28 crc kubenswrapper[4772]: I1122 10:39:28.660415 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:28 crc kubenswrapper[4772]: I1122 10:39:28.660444 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:28 crc kubenswrapper[4772]: I1122 10:39:28.660538 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:28Z","lastTransitionTime":"2025-11-22T10:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:28 crc kubenswrapper[4772]: I1122 10:39:28.764439 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:28 crc kubenswrapper[4772]: I1122 10:39:28.764507 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:28 crc kubenswrapper[4772]: I1122 10:39:28.764524 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:28 crc kubenswrapper[4772]: I1122 10:39:28.764550 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:28 crc kubenswrapper[4772]: I1122 10:39:28.764567 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:28Z","lastTransitionTime":"2025-11-22T10:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:28 crc kubenswrapper[4772]: I1122 10:39:28.868712 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:28 crc kubenswrapper[4772]: I1122 10:39:28.868797 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:28 crc kubenswrapper[4772]: I1122 10:39:28.868813 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:28 crc kubenswrapper[4772]: I1122 10:39:28.868839 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:28 crc kubenswrapper[4772]: I1122 10:39:28.868856 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:28Z","lastTransitionTime":"2025-11-22T10:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:28 crc kubenswrapper[4772]: I1122 10:39:28.972311 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:28 crc kubenswrapper[4772]: I1122 10:39:28.972379 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:28 crc kubenswrapper[4772]: I1122 10:39:28.972392 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:28 crc kubenswrapper[4772]: I1122 10:39:28.972413 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:28 crc kubenswrapper[4772]: I1122 10:39:28.972427 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:28Z","lastTransitionTime":"2025-11-22T10:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:29 crc kubenswrapper[4772]: I1122 10:39:29.075500 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:29 crc kubenswrapper[4772]: I1122 10:39:29.075590 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:29 crc kubenswrapper[4772]: I1122 10:39:29.075607 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:29 crc kubenswrapper[4772]: I1122 10:39:29.075637 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:29 crc kubenswrapper[4772]: I1122 10:39:29.075661 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:29Z","lastTransitionTime":"2025-11-22T10:39:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:29 crc kubenswrapper[4772]: I1122 10:39:29.179244 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:29 crc kubenswrapper[4772]: I1122 10:39:29.179303 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:29 crc kubenswrapper[4772]: I1122 10:39:29.179323 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:29 crc kubenswrapper[4772]: I1122 10:39:29.179350 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:29 crc kubenswrapper[4772]: I1122 10:39:29.179374 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:29Z","lastTransitionTime":"2025-11-22T10:39:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:29 crc kubenswrapper[4772]: I1122 10:39:29.282253 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:29 crc kubenswrapper[4772]: I1122 10:39:29.282321 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:29 crc kubenswrapper[4772]: I1122 10:39:29.282343 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:29 crc kubenswrapper[4772]: I1122 10:39:29.282367 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:29 crc kubenswrapper[4772]: I1122 10:39:29.282388 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:29Z","lastTransitionTime":"2025-11-22T10:39:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:29 crc kubenswrapper[4772]: I1122 10:39:29.385712 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:29 crc kubenswrapper[4772]: I1122 10:39:29.385793 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:29 crc kubenswrapper[4772]: I1122 10:39:29.385813 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:29 crc kubenswrapper[4772]: I1122 10:39:29.385830 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:29 crc kubenswrapper[4772]: I1122 10:39:29.385841 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:29Z","lastTransitionTime":"2025-11-22T10:39:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:29 crc kubenswrapper[4772]: I1122 10:39:29.413107 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 10:39:29 crc kubenswrapper[4772]: I1122 10:39:29.413344 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:39:29 crc kubenswrapper[4772]: E1122 10:39:29.413436 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 10:39:29 crc kubenswrapper[4772]: E1122 10:39:29.413541 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 10:39:29 crc kubenswrapper[4772]: I1122 10:39:29.489286 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:29 crc kubenswrapper[4772]: I1122 10:39:29.489343 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:29 crc kubenswrapper[4772]: I1122 10:39:29.489359 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:29 crc kubenswrapper[4772]: I1122 10:39:29.489384 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:29 crc kubenswrapper[4772]: I1122 10:39:29.489402 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:29Z","lastTransitionTime":"2025-11-22T10:39:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:29 crc kubenswrapper[4772]: I1122 10:39:29.593113 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:29 crc kubenswrapper[4772]: I1122 10:39:29.593177 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:29 crc kubenswrapper[4772]: I1122 10:39:29.593193 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:29 crc kubenswrapper[4772]: I1122 10:39:29.593225 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:29 crc kubenswrapper[4772]: I1122 10:39:29.593243 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:29Z","lastTransitionTime":"2025-11-22T10:39:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:29 crc kubenswrapper[4772]: I1122 10:39:29.696480 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:29 crc kubenswrapper[4772]: I1122 10:39:29.696533 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:29 crc kubenswrapper[4772]: I1122 10:39:29.696552 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:29 crc kubenswrapper[4772]: I1122 10:39:29.696579 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:29 crc kubenswrapper[4772]: I1122 10:39:29.696598 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:29Z","lastTransitionTime":"2025-11-22T10:39:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:29 crc kubenswrapper[4772]: I1122 10:39:29.799462 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:29 crc kubenswrapper[4772]: I1122 10:39:29.799817 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:29 crc kubenswrapper[4772]: I1122 10:39:29.799957 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:29 crc kubenswrapper[4772]: I1122 10:39:29.800128 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:29 crc kubenswrapper[4772]: I1122 10:39:29.800260 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:29Z","lastTransitionTime":"2025-11-22T10:39:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:29 crc kubenswrapper[4772]: I1122 10:39:29.903531 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:29 crc kubenswrapper[4772]: I1122 10:39:29.903597 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:29 crc kubenswrapper[4772]: I1122 10:39:29.903615 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:29 crc kubenswrapper[4772]: I1122 10:39:29.903642 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:29 crc kubenswrapper[4772]: I1122 10:39:29.903663 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:29Z","lastTransitionTime":"2025-11-22T10:39:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:30 crc kubenswrapper[4772]: I1122 10:39:30.008315 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:30 crc kubenswrapper[4772]: I1122 10:39:30.008945 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:30 crc kubenswrapper[4772]: I1122 10:39:30.009142 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:30 crc kubenswrapper[4772]: I1122 10:39:30.009313 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:30 crc kubenswrapper[4772]: I1122 10:39:30.009459 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:30Z","lastTransitionTime":"2025-11-22T10:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:30 crc kubenswrapper[4772]: I1122 10:39:30.112878 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:30 crc kubenswrapper[4772]: I1122 10:39:30.113356 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:30 crc kubenswrapper[4772]: I1122 10:39:30.113505 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:30 crc kubenswrapper[4772]: I1122 10:39:30.113730 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:30 crc kubenswrapper[4772]: I1122 10:39:30.113922 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:30Z","lastTransitionTime":"2025-11-22T10:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:30 crc kubenswrapper[4772]: I1122 10:39:30.218471 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:30 crc kubenswrapper[4772]: I1122 10:39:30.218560 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:30 crc kubenswrapper[4772]: I1122 10:39:30.218582 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:30 crc kubenswrapper[4772]: I1122 10:39:30.218619 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:30 crc kubenswrapper[4772]: I1122 10:39:30.218645 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:30Z","lastTransitionTime":"2025-11-22T10:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:30 crc kubenswrapper[4772]: I1122 10:39:30.322472 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:30 crc kubenswrapper[4772]: I1122 10:39:30.322537 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:30 crc kubenswrapper[4772]: I1122 10:39:30.322556 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:30 crc kubenswrapper[4772]: I1122 10:39:30.322586 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:30 crc kubenswrapper[4772]: I1122 10:39:30.322607 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:30Z","lastTransitionTime":"2025-11-22T10:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:30 crc kubenswrapper[4772]: I1122 10:39:30.413431 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 10:39:30 crc kubenswrapper[4772]: I1122 10:39:30.413526 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvsrl" Nov 22 10:39:30 crc kubenswrapper[4772]: E1122 10:39:30.413688 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 10:39:30 crc kubenswrapper[4772]: E1122 10:39:30.413939 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fvsrl" podUID="c89edce7-fac8-4954-b2e9-420f0f2de6a8" Nov 22 10:39:30 crc kubenswrapper[4772]: I1122 10:39:30.425999 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:30 crc kubenswrapper[4772]: I1122 10:39:30.426038 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:30 crc kubenswrapper[4772]: I1122 10:39:30.426074 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:30 crc kubenswrapper[4772]: I1122 10:39:30.426095 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:30 crc kubenswrapper[4772]: I1122 10:39:30.426109 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:30Z","lastTransitionTime":"2025-11-22T10:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:30 crc kubenswrapper[4772]: I1122 10:39:30.530243 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:30 crc kubenswrapper[4772]: I1122 10:39:30.530299 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:30 crc kubenswrapper[4772]: I1122 10:39:30.530312 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:30 crc kubenswrapper[4772]: I1122 10:39:30.530334 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:30 crc kubenswrapper[4772]: I1122 10:39:30.530347 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:30Z","lastTransitionTime":"2025-11-22T10:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:30 crc kubenswrapper[4772]: I1122 10:39:30.633703 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:30 crc kubenswrapper[4772]: I1122 10:39:30.634167 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:30 crc kubenswrapper[4772]: I1122 10:39:30.634322 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:30 crc kubenswrapper[4772]: I1122 10:39:30.634482 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:30 crc kubenswrapper[4772]: I1122 10:39:30.634619 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:30Z","lastTransitionTime":"2025-11-22T10:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:30 crc kubenswrapper[4772]: I1122 10:39:30.723964 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:30 crc kubenswrapper[4772]: I1122 10:39:30.724097 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:30 crc kubenswrapper[4772]: I1122 10:39:30.724126 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:30 crc kubenswrapper[4772]: I1122 10:39:30.724164 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:30 crc kubenswrapper[4772]: I1122 10:39:30.724195 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:30Z","lastTransitionTime":"2025-11-22T10:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:30 crc kubenswrapper[4772]: E1122 10:39:30.747290 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"902c1d4e-ffb4-4b1b-add7-8f7170ab4bc7\\\",\\\"systemUUID\\\":\\\"856a4d77-e4e0-4420-9e80-7c5223144311\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:30Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:30 crc kubenswrapper[4772]: I1122 10:39:30.763826 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:30 crc kubenswrapper[4772]: I1122 10:39:30.763914 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 10:39:30 crc kubenswrapper[4772]: I1122 10:39:30.763937 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:30 crc kubenswrapper[4772]: I1122 10:39:30.763973 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:30 crc kubenswrapper[4772]: I1122 10:39:30.763999 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:30Z","lastTransitionTime":"2025-11-22T10:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:30 crc kubenswrapper[4772]: E1122 10:39:30.786688 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"902c1d4e-ffb4-4b1b-add7-8f7170ab4bc7\\\",\\\"systemUUID\\\":\\\"856a4d77-e4e0-4420-9e80-7c5223144311\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:30Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:30 crc kubenswrapper[4772]: I1122 10:39:30.793172 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:30 crc kubenswrapper[4772]: I1122 10:39:30.793241 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 10:39:30 crc kubenswrapper[4772]: I1122 10:39:30.793264 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:30 crc kubenswrapper[4772]: I1122 10:39:30.793297 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:30 crc kubenswrapper[4772]: I1122 10:39:30.793322 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:30Z","lastTransitionTime":"2025-11-22T10:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:30 crc kubenswrapper[4772]: E1122 10:39:30.812228 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"902c1d4e-ffb4-4b1b-add7-8f7170ab4bc7\\\",\\\"systemUUID\\\":\\\"856a4d77-e4e0-4420-9e80-7c5223144311\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:30Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:30 crc kubenswrapper[4772]: I1122 10:39:30.818463 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:30 crc kubenswrapper[4772]: I1122 10:39:30.818520 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 10:39:30 crc kubenswrapper[4772]: I1122 10:39:30.818539 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:30 crc kubenswrapper[4772]: I1122 10:39:30.818567 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:30 crc kubenswrapper[4772]: I1122 10:39:30.818587 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:30Z","lastTransitionTime":"2025-11-22T10:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:30 crc kubenswrapper[4772]: E1122 10:39:30.837859 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"902c1d4e-ffb4-4b1b-add7-8f7170ab4bc7\\\",\\\"systemUUID\\\":\\\"856a4d77-e4e0-4420-9e80-7c5223144311\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:30Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:30 crc kubenswrapper[4772]: I1122 10:39:30.843810 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:30 crc kubenswrapper[4772]: I1122 10:39:30.843883 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 10:39:30 crc kubenswrapper[4772]: I1122 10:39:30.843893 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:30 crc kubenswrapper[4772]: I1122 10:39:30.843913 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:30 crc kubenswrapper[4772]: I1122 10:39:30.843925 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:30Z","lastTransitionTime":"2025-11-22T10:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:30 crc kubenswrapper[4772]: E1122 10:39:30.868470 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"902c1d4e-ffb4-4b1b-add7-8f7170ab4bc7\\\",\\\"systemUUID\\\":\\\"856a4d77-e4e0-4420-9e80-7c5223144311\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:30Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:30 crc kubenswrapper[4772]: E1122 10:39:30.868737 4772 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 22 10:39:30 crc kubenswrapper[4772]: I1122 10:39:30.871398 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 22 10:39:30 crc kubenswrapper[4772]: I1122 10:39:30.871469 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:30 crc kubenswrapper[4772]: I1122 10:39:30.871489 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:30 crc kubenswrapper[4772]: I1122 10:39:30.871520 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:30 crc kubenswrapper[4772]: I1122 10:39:30.871538 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:30Z","lastTransitionTime":"2025-11-22T10:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:30 crc kubenswrapper[4772]: I1122 10:39:30.975858 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:30 crc kubenswrapper[4772]: I1122 10:39:30.975938 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:30 crc kubenswrapper[4772]: I1122 10:39:30.975963 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:30 crc kubenswrapper[4772]: I1122 10:39:30.975996 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:30 crc kubenswrapper[4772]: I1122 10:39:30.976020 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:30Z","lastTransitionTime":"2025-11-22T10:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:31 crc kubenswrapper[4772]: I1122 10:39:31.079815 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:31 crc kubenswrapper[4772]: I1122 10:39:31.079908 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:31 crc kubenswrapper[4772]: I1122 10:39:31.079933 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:31 crc kubenswrapper[4772]: I1122 10:39:31.079981 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:31 crc kubenswrapper[4772]: I1122 10:39:31.080013 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:31Z","lastTransitionTime":"2025-11-22T10:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:31 crc kubenswrapper[4772]: I1122 10:39:31.183966 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:31 crc kubenswrapper[4772]: I1122 10:39:31.184108 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:31 crc kubenswrapper[4772]: I1122 10:39:31.184136 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:31 crc kubenswrapper[4772]: I1122 10:39:31.184165 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:31 crc kubenswrapper[4772]: I1122 10:39:31.184183 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:31Z","lastTransitionTime":"2025-11-22T10:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:31 crc kubenswrapper[4772]: I1122 10:39:31.287725 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:31 crc kubenswrapper[4772]: I1122 10:39:31.287791 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:31 crc kubenswrapper[4772]: I1122 10:39:31.287812 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:31 crc kubenswrapper[4772]: I1122 10:39:31.287842 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:31 crc kubenswrapper[4772]: I1122 10:39:31.287861 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:31Z","lastTransitionTime":"2025-11-22T10:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:31 crc kubenswrapper[4772]: I1122 10:39:31.390683 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:31 crc kubenswrapper[4772]: I1122 10:39:31.391035 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:31 crc kubenswrapper[4772]: I1122 10:39:31.391188 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:31 crc kubenswrapper[4772]: I1122 10:39:31.391310 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:31 crc kubenswrapper[4772]: I1122 10:39:31.391405 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:31Z","lastTransitionTime":"2025-11-22T10:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:31 crc kubenswrapper[4772]: I1122 10:39:31.412500 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 10:39:31 crc kubenswrapper[4772]: I1122 10:39:31.412713 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:39:31 crc kubenswrapper[4772]: E1122 10:39:31.412981 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 10:39:31 crc kubenswrapper[4772]: E1122 10:39:31.413300 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 10:39:31 crc kubenswrapper[4772]: I1122 10:39:31.432515 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fvsrl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c89edce7-fac8-4954-b2e9-420f0f2de6a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x76mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x76mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fvsrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:31Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:31 crc kubenswrapper[4772]: I1122 10:39:31.453013 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:31Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:31 crc kubenswrapper[4772]: I1122 10:39:31.478154 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd84e05e-cfd6-46d5-bd23-30689addcd8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://405215cb0867af5a98dd2c0948c12867783df5de69f3e68c0c420e94bd41fbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3372362ced27c1e5e6ed3271a6b0fd19c04ce1d12a6c0b45e60b4f19b9d9d9e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7959ee06497c972cc6915e3ab426dc2bba73e58343902084895fcca45fc97a61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dfec56a475b9928ac7803e64d6fd305028a7f8d66647c8d06eeaa12cb73838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da55e361a3ec7dca7a00a9ccac8e72cf43be0ebb8973fcac40e3bf86dac5237e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc7ab7764e2d893152e965392c916dff7876acb81d134cdec2615b253dbee64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d7262920f7a31f254be2b721019ad96c62da01
5bcc54afa32884321e1bba1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3d7262920f7a31f254be2b721019ad96c62da015bcc54afa32884321e1bba1c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T10:39:07Z\\\",\\\"message\\\":\\\"*v1.Namespace event handler 1 for removal\\\\nI1122 10:39:07.442002 6580 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1122 10:39:07.442031 6580 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1122 10:39:07.442073 6580 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1122 10:39:07.442069 6580 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1122 10:39:07.442090 6580 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1122 10:39:07.442093 6580 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1122 10:39:07.442097 6580 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1122 10:39:07.442120 6580 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1122 10:39:07.442140 6580 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1122 10:39:07.442142 6580 factory.go:656] Stopping watch factory\\\\nI1122 10:39:07.442151 6580 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1122 10:39:07.442163 6580 ovnkube.go:599] Stopped ovnkube\\\\nI1122 10:39:07.442164 6580 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1122 10:39:07.442160 6580 handler.go:208] Removed *v1.Node event handler 2\\\\nI1122 10:39:07.442210 6580 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1122 10:39:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:39:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mfm49_openshift-ovn-kubernetes(fd84e05e-cfd6-46d5-bd23-30689addcd8b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac96e66a7d3d7899a8196d07614bccdd741145d5dc57c26b30fc4e127c6a94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mfm49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:31Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:31 crc kubenswrapper[4772]: I1122 10:39:31.494196 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:31 crc kubenswrapper[4772]: I1122 10:39:31.494836 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:31 crc kubenswrapper[4772]: I1122 10:39:31.495094 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:31 crc kubenswrapper[4772]: I1122 10:39:31.495257 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mbpk7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e39748b-4fa5-4a70-8921-dc3dc814f124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c334585be8f67849986e80c7a7ea777340e93f963ba58fa0fb7b36b16a73142b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m5ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mbpk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:31Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:31 crc kubenswrapper[4772]: I1122 10:39:31.495300 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:31 crc kubenswrapper[4772]: I1122 10:39:31.495763 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:31Z","lastTransitionTime":"2025-11-22T10:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:31 crc kubenswrapper[4772]: I1122 10:39:31.519405 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e05750665701a1973e974ba759b2a0ef54a89b24d63d33a9fce59e9aa7a21ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:31Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:31 crc kubenswrapper[4772]: I1122 10:39:31.541286 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:31Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:31 crc kubenswrapper[4772]: I1122 10:39:31.565643 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:31Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:31 crc kubenswrapper[4772]: I1122 10:39:31.588202 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9qv88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"751566de-1913-4dd6-9054-21febc661c27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902be5c4bb0f9aaca4a21ca0af13b2dc29d682c30e1e2d281beb6e4e15b6aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jckg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9qv88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:31Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:31 crc kubenswrapper[4772]: I1122 10:39:31.599118 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:31 crc kubenswrapper[4772]: I1122 10:39:31.599342 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:31 crc kubenswrapper[4772]: I1122 10:39:31.599468 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:31 crc kubenswrapper[4772]: I1122 10:39:31.599579 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:31 crc kubenswrapper[4772]: I1122 10:39:31.599680 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:31Z","lastTransitionTime":"2025-11-22T10:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:31 crc kubenswrapper[4772]: I1122 10:39:31.616349 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s4mvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d73fd58d-561a-4b16-9f9d-49ae966edb24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3325c0083f66336c643bd867f49df279967efccdbeadef4c588e1e43fbe1c13c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ae06ffdc2c0f4cebcbf28f2997026db4d45bcd0c95776e5a1d94527503041d0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T10:39:25Z\\\",\\\"message\\\":\\\"2025-11-22T10:38:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f2369e6a-e365-4b69-af4f-a38ee6fb8bee\\\\n2025-11-22T10:38:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f2369e6a-e365-4b69-af4f-a38ee6fb8bee to /host/opt/cni/bin/\\\\n2025-11-22T10:38:40Z [verbose] multus-daemon started\\\\n2025-11-22T10:38:40Z 
[verbose] Readiness Indicator file check\\\\n2025-11-22T10:39:25Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:39:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s4mvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:31Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:31 crc kubenswrapper[4772]: I1122 10:39:31.637469 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"888813e4-14b2-4bbc-badf-3fd7c315a740\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ade6ffb5c4bd3c19a2d85f21de1e0f198d6729b45df79233c8db3c73aff066f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7c687d1a3e98c09917692169294f8549b0f1ddeddcc97c073da4d8e5c17e012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1452c164ce569dfa4665a70113fb965905d1974744637904d6bfba2e35446f11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b331f41571928038bb597f1e94a67d24e726471c1a22082607dd26c11e8ea33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:31Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:31 crc kubenswrapper[4772]: I1122 10:39:31.663284 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9557fb28-d21d-45a3-b7b9-170936e83f56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a77d132d925d70472ff5456bf4521c0612c319296dd1f3fa31c68d3c84fb347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddce747c72c887bb7dff7503755d81d0e65305d2545f93dcb0d20bd6bf32b495\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c
987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187f5633fd42f380d4976965809b538530acef067adc7e02203f95df532dbbfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab2905e4976fc61f69872e9b37db10f5598fe3f96e0a124d22848215a7b5817\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 10:38:29.060158 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 10:38:29.060179 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 10:38:29.060386 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\"\\\\nI1122 10:38:29.060380 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763807893\\\\\\\\\\\\\\\" (2025-11-22 10:38:12 +0000 UTC to 2025-12-22 10:38:13 +0000 UTC (now=2025-11-22 10:38:29.060343851 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060557 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763807909\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763807908\\\\\\\\\\\\\\\" (2025-11-22 09:38:28 +0000 UTC to 2026-11-22 09:38:28 +0000 UTC (now=2025-11-22 10:38:29.060522446 +0000 UTC))\\\\\\\"\\\\nI1122 
10:38:29.060582 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 10:38:29.060604 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 10:38:29.060636 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 10:38:29.062066 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062334 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062385 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 10:38:29.063702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42fc0afe421b18b7212836931c95487807ac0b32f6e87d92e847bd7826aedcd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:31Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:31 crc kubenswrapper[4772]: I1122 
10:39:31.682772 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd883a9c-bd98-4ce0-9256-710c2311012f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab209a1a1db8448083e4994bbd6c236d67ddfd2ba6eff2bf3c05150f600ad698\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f447b1a4882759c19fd69e9acddae280d63009c5fc7d21b368b470160c53250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecffb7596767673ba91dbbee96f3fa4b32109c5d5145176e86618860187077de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92229718d5b39cbd9473102e7e569d8370d38e945387ff3f48ee9f4077d8d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92229718d5b39cbd9473102e7e569d8370d38e945387ff3f48ee9f4077d8d1fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:31Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:31 crc kubenswrapper[4772]: I1122 10:39:31.703112 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:31 crc kubenswrapper[4772]: I1122 10:39:31.707154 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:31 crc kubenswrapper[4772]: I1122 10:39:31.707305 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:31 crc kubenswrapper[4772]: I1122 10:39:31.707427 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:31 crc kubenswrapper[4772]: I1122 10:39:31.707512 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:31Z","lastTransitionTime":"2025-11-22T10:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:31 crc kubenswrapper[4772]: I1122 10:39:31.708819 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2386c238-461f-4956-940f-ac3c26eb052e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b3e223067a57a7ae418e1de80dff3c7537e0506e040028c413225f25397f03d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4247d5ab0095ab6c7315fdfcde34f8407d870723059e61c3417c20f80dc06ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwshd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:31Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:31 crc kubenswrapper[4772]: I1122 10:39:31.733836 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z6xtb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e11c7f86-73db-4015-9fe5-c0b5047c19a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://901e96ef18a53b8231a138e235e8ed145d94f111dc515aaa5e5415c18535e457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d7a31c58b763dd30aa10f250fa654756dbb10166064c3adf6d313cc13b066f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11d7a31c58b763dd30aa10f250fa654756dbb10166064c3adf6d313cc13b066f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://533b405c883d98c14aca92e6a23bd1743dc0a76ed8256f2b892746195602fb03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://533b405c883d98c14aca92e6a23bd1743dc0a76ed8256f2b892746195602fb03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c508fde56a767c063afbb65f6272695d739d7d0c594efcafd7465397309ef931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c508fde56a767c063afbb65f6272695d739d7d0c594efcafd7465397309ef931\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dabd0e99ffb72fe38680ba04e6dc2cc342f2b21368c0646ac51924fdca817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dabd0e99ffb72fe38680ba04e6dc2cc342f2b21368c0646ac51924fdca817b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:42Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfe1b37351de69494e5f42516d1363a1acd97912fd754ddca224b61b7020b0e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfe1b37351de69494e5f42516d1363a1acd97912fd754ddca224b61b7020b0e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8442718fb97d911a608083bbbcee822108ea95c9896e42b0f37fa05c0b8d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df8442718fb97d911a608083bbbcee822108ea95c9896e42b0f37fa05c0b8d3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z6xtb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:31Z is after 
2025-08-24T17:21:41Z" Nov 22 10:39:31 crc kubenswrapper[4772]: I1122 10:39:31.751156 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af3d235e676f9a66e16a76f30d940c652287b80507d2335b94aaad54d2aae071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b9e216df73a0c6e89da9a01b70ca7393dea61d59aa4237584add3411d3362d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:31Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:31 crc kubenswrapper[4772]: I1122 10:39:31.771917 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n8qfx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0565ed-eb43-43a3-974c-45a23e9615a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73a731bf43107a62df11bd3a033a52f25d32911218822f9c5ac1f3d9b6f718ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98gmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a2e02f93671de603bc005b4f7561a62bb7681d678a3348b837656ae2af54f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98gmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-n8qfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:31Z is after 2025-08-24T17:21:41Z" Nov 22 
10:39:31 crc kubenswrapper[4772]: I1122 10:39:31.792753 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6478bd29-b5dd-47d1-98bb-5c27a05ebe1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b77dd244ddad5f9ae9f977382b910123d1cfd80687600c51b036a8090eb14551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://178c7dfd1ee6d37ff147a01b6b673a6d30d4604bd9ec4484437888bf3ef5e689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://178c7dfd1ee6d37ff147a01b6b673a6d30d4604bd9ec4484437888bf3ef5e689\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:31Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:31 crc kubenswrapper[4772]: I1122 10:39:31.811479 4772 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:31 crc kubenswrapper[4772]: I1122 10:39:31.811585 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:31 crc kubenswrapper[4772]: I1122 10:39:31.811606 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:31 crc kubenswrapper[4772]: I1122 10:39:31.811638 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:31 crc kubenswrapper[4772]: I1122 10:39:31.811663 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:31Z","lastTransitionTime":"2025-11-22T10:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:31 crc kubenswrapper[4772]: I1122 10:39:31.815039 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e026ddd84cdd62f9fa89b0c173fa464361017d3603d80e2b88a1b04af13487c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:31Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:31 crc kubenswrapper[4772]: I1122 10:39:31.915299 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 
10:39:31 crc kubenswrapper[4772]: I1122 10:39:31.915355 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:31 crc kubenswrapper[4772]: I1122 10:39:31.915372 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:31 crc kubenswrapper[4772]: I1122 10:39:31.915395 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:31 crc kubenswrapper[4772]: I1122 10:39:31.915411 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:31Z","lastTransitionTime":"2025-11-22T10:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:32 crc kubenswrapper[4772]: I1122 10:39:32.018490 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:32 crc kubenswrapper[4772]: I1122 10:39:32.018535 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:32 crc kubenswrapper[4772]: I1122 10:39:32.018546 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:32 crc kubenswrapper[4772]: I1122 10:39:32.018567 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:32 crc kubenswrapper[4772]: I1122 10:39:32.018581 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:32Z","lastTransitionTime":"2025-11-22T10:39:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:32 crc kubenswrapper[4772]: I1122 10:39:32.122012 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:32 crc kubenswrapper[4772]: I1122 10:39:32.122120 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:32 crc kubenswrapper[4772]: I1122 10:39:32.122131 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:32 crc kubenswrapper[4772]: I1122 10:39:32.122155 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:32 crc kubenswrapper[4772]: I1122 10:39:32.122171 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:32Z","lastTransitionTime":"2025-11-22T10:39:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:32 crc kubenswrapper[4772]: I1122 10:39:32.226119 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:32 crc kubenswrapper[4772]: I1122 10:39:32.226179 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:32 crc kubenswrapper[4772]: I1122 10:39:32.226195 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:32 crc kubenswrapper[4772]: I1122 10:39:32.226225 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:32 crc kubenswrapper[4772]: I1122 10:39:32.226244 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:32Z","lastTransitionTime":"2025-11-22T10:39:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:32 crc kubenswrapper[4772]: I1122 10:39:32.329712 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:32 crc kubenswrapper[4772]: I1122 10:39:32.329771 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:32 crc kubenswrapper[4772]: I1122 10:39:32.329787 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:32 crc kubenswrapper[4772]: I1122 10:39:32.329832 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:32 crc kubenswrapper[4772]: I1122 10:39:32.329851 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:32Z","lastTransitionTime":"2025-11-22T10:39:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:32 crc kubenswrapper[4772]: I1122 10:39:32.413645 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvsrl" Nov 22 10:39:32 crc kubenswrapper[4772]: I1122 10:39:32.413756 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 10:39:32 crc kubenswrapper[4772]: E1122 10:39:32.413831 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fvsrl" podUID="c89edce7-fac8-4954-b2e9-420f0f2de6a8" Nov 22 10:39:32 crc kubenswrapper[4772]: E1122 10:39:32.414123 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 10:39:32 crc kubenswrapper[4772]: I1122 10:39:32.433907 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:32 crc kubenswrapper[4772]: I1122 10:39:32.433998 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:32 crc kubenswrapper[4772]: I1122 10:39:32.434019 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:32 crc kubenswrapper[4772]: I1122 10:39:32.434096 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:32 crc kubenswrapper[4772]: I1122 10:39:32.434118 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:32Z","lastTransitionTime":"2025-11-22T10:39:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:32 crc kubenswrapper[4772]: I1122 10:39:32.537770 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:32 crc kubenswrapper[4772]: I1122 10:39:32.537835 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:32 crc kubenswrapper[4772]: I1122 10:39:32.537853 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:32 crc kubenswrapper[4772]: I1122 10:39:32.537877 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:32 crc kubenswrapper[4772]: I1122 10:39:32.537895 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:32Z","lastTransitionTime":"2025-11-22T10:39:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:32 crc kubenswrapper[4772]: I1122 10:39:32.641153 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:32 crc kubenswrapper[4772]: I1122 10:39:32.641246 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:32 crc kubenswrapper[4772]: I1122 10:39:32.641271 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:32 crc kubenswrapper[4772]: I1122 10:39:32.641307 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:32 crc kubenswrapper[4772]: I1122 10:39:32.641335 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:32Z","lastTransitionTime":"2025-11-22T10:39:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:32 crc kubenswrapper[4772]: I1122 10:39:32.744970 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:32 crc kubenswrapper[4772]: I1122 10:39:32.745111 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:32 crc kubenswrapper[4772]: I1122 10:39:32.745137 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:32 crc kubenswrapper[4772]: I1122 10:39:32.745185 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:32 crc kubenswrapper[4772]: I1122 10:39:32.745212 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:32Z","lastTransitionTime":"2025-11-22T10:39:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:32 crc kubenswrapper[4772]: I1122 10:39:32.849795 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:32 crc kubenswrapper[4772]: I1122 10:39:32.849862 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:32 crc kubenswrapper[4772]: I1122 10:39:32.849882 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:32 crc kubenswrapper[4772]: I1122 10:39:32.849911 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:32 crc kubenswrapper[4772]: I1122 10:39:32.849934 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:32Z","lastTransitionTime":"2025-11-22T10:39:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:32 crc kubenswrapper[4772]: I1122 10:39:32.953370 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:32 crc kubenswrapper[4772]: I1122 10:39:32.953442 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:32 crc kubenswrapper[4772]: I1122 10:39:32.953474 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:32 crc kubenswrapper[4772]: I1122 10:39:32.953506 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:32 crc kubenswrapper[4772]: I1122 10:39:32.953530 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:32Z","lastTransitionTime":"2025-11-22T10:39:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:33 crc kubenswrapper[4772]: I1122 10:39:33.056749 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:33 crc kubenswrapper[4772]: I1122 10:39:33.056816 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:33 crc kubenswrapper[4772]: I1122 10:39:33.056834 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:33 crc kubenswrapper[4772]: I1122 10:39:33.056863 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:33 crc kubenswrapper[4772]: I1122 10:39:33.056885 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:33Z","lastTransitionTime":"2025-11-22T10:39:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:33 crc kubenswrapper[4772]: I1122 10:39:33.161307 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:33 crc kubenswrapper[4772]: I1122 10:39:33.161417 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:33 crc kubenswrapper[4772]: I1122 10:39:33.161435 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:33 crc kubenswrapper[4772]: I1122 10:39:33.161465 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:33 crc kubenswrapper[4772]: I1122 10:39:33.161486 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:33Z","lastTransitionTime":"2025-11-22T10:39:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:33 crc kubenswrapper[4772]: I1122 10:39:33.265174 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:33 crc kubenswrapper[4772]: I1122 10:39:33.265265 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:33 crc kubenswrapper[4772]: I1122 10:39:33.265284 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:33 crc kubenswrapper[4772]: I1122 10:39:33.265828 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:33 crc kubenswrapper[4772]: I1122 10:39:33.265890 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:33Z","lastTransitionTime":"2025-11-22T10:39:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:33 crc kubenswrapper[4772]: I1122 10:39:33.289792 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:39:33 crc kubenswrapper[4772]: E1122 10:39:33.290123 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:37.290042284 +0000 UTC m=+157.529486818 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:39:33 crc kubenswrapper[4772]: I1122 10:39:33.370974 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:33 crc kubenswrapper[4772]: I1122 10:39:33.371012 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:33 crc kubenswrapper[4772]: I1122 10:39:33.371022 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:33 crc kubenswrapper[4772]: I1122 10:39:33.371039 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:33 crc kubenswrapper[4772]: I1122 10:39:33.371071 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:33Z","lastTransitionTime":"2025-11-22T10:39:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:33 crc kubenswrapper[4772]: I1122 10:39:33.391871 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 10:39:33 crc kubenswrapper[4772]: I1122 10:39:33.391954 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:39:33 crc kubenswrapper[4772]: I1122 10:39:33.391982 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 10:39:33 crc kubenswrapper[4772]: I1122 10:39:33.392031 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:39:33 crc kubenswrapper[4772]: E1122 10:39:33.392212 4772 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 10:39:33 crc kubenswrapper[4772]: E1122 10:39:33.392275 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 10:40:37.392254972 +0000 UTC m=+157.631699476 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 10:39:33 crc kubenswrapper[4772]: E1122 10:39:33.392358 4772 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 10:39:33 crc kubenswrapper[4772]: E1122 10:39:33.392495 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 10:40:37.392458857 +0000 UTC m=+157.631903391 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 10:39:33 crc kubenswrapper[4772]: E1122 10:39:33.392536 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 10:39:33 crc kubenswrapper[4772]: E1122 10:39:33.392585 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 10:39:33 crc kubenswrapper[4772]: E1122 10:39:33.392657 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 10:39:33 crc kubenswrapper[4772]: E1122 10:39:33.392691 4772 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 10:39:33 crc kubenswrapper[4772]: E1122 10:39:33.392595 4772 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 10:39:33 crc kubenswrapper[4772]: E1122 10:39:33.392786 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-22 10:40:37.392760465 +0000 UTC m=+157.632205009 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 10:39:33 crc kubenswrapper[4772]: E1122 10:39:33.392797 4772 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 10:39:33 crc kubenswrapper[4772]: E1122 10:39:33.392902 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-22 10:40:37.392869508 +0000 UTC m=+157.632314042 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 10:39:33 crc kubenswrapper[4772]: I1122 10:39:33.413537 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 10:39:33 crc kubenswrapper[4772]: I1122 10:39:33.413632 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:39:33 crc kubenswrapper[4772]: E1122 10:39:33.413781 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 10:39:33 crc kubenswrapper[4772]: E1122 10:39:33.413942 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 10:39:33 crc kubenswrapper[4772]: I1122 10:39:33.474374 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:33 crc kubenswrapper[4772]: I1122 10:39:33.474498 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:33 crc kubenswrapper[4772]: I1122 10:39:33.474538 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:33 crc kubenswrapper[4772]: I1122 10:39:33.474574 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:33 crc kubenswrapper[4772]: I1122 10:39:33.474606 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:33Z","lastTransitionTime":"2025-11-22T10:39:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:33 crc kubenswrapper[4772]: I1122 10:39:33.577867 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:33 crc kubenswrapper[4772]: I1122 10:39:33.577938 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:33 crc kubenswrapper[4772]: I1122 10:39:33.577962 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:33 crc kubenswrapper[4772]: I1122 10:39:33.577993 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:33 crc kubenswrapper[4772]: I1122 10:39:33.578015 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:33Z","lastTransitionTime":"2025-11-22T10:39:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:33 crc kubenswrapper[4772]: I1122 10:39:33.681342 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:33 crc kubenswrapper[4772]: I1122 10:39:33.681694 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:33 crc kubenswrapper[4772]: I1122 10:39:33.681784 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:33 crc kubenswrapper[4772]: I1122 10:39:33.681873 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:33 crc kubenswrapper[4772]: I1122 10:39:33.681962 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:33Z","lastTransitionTime":"2025-11-22T10:39:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:33 crc kubenswrapper[4772]: I1122 10:39:33.784593 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:33 crc kubenswrapper[4772]: I1122 10:39:33.784680 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:33 crc kubenswrapper[4772]: I1122 10:39:33.784704 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:33 crc kubenswrapper[4772]: I1122 10:39:33.784735 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:33 crc kubenswrapper[4772]: I1122 10:39:33.784762 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:33Z","lastTransitionTime":"2025-11-22T10:39:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:33 crc kubenswrapper[4772]: I1122 10:39:33.888792 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:33 crc kubenswrapper[4772]: I1122 10:39:33.888844 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:33 crc kubenswrapper[4772]: I1122 10:39:33.888856 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:33 crc kubenswrapper[4772]: I1122 10:39:33.888873 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:33 crc kubenswrapper[4772]: I1122 10:39:33.888882 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:33Z","lastTransitionTime":"2025-11-22T10:39:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:33 crc kubenswrapper[4772]: I1122 10:39:33.991324 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:33 crc kubenswrapper[4772]: I1122 10:39:33.991372 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:33 crc kubenswrapper[4772]: I1122 10:39:33.991384 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:33 crc kubenswrapper[4772]: I1122 10:39:33.991401 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:33 crc kubenswrapper[4772]: I1122 10:39:33.991413 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:33Z","lastTransitionTime":"2025-11-22T10:39:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:34 crc kubenswrapper[4772]: I1122 10:39:34.094908 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:34 crc kubenswrapper[4772]: I1122 10:39:34.094951 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:34 crc kubenswrapper[4772]: I1122 10:39:34.094964 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:34 crc kubenswrapper[4772]: I1122 10:39:34.094983 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:34 crc kubenswrapper[4772]: I1122 10:39:34.094996 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:34Z","lastTransitionTime":"2025-11-22T10:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:34 crc kubenswrapper[4772]: I1122 10:39:34.198013 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:34 crc kubenswrapper[4772]: I1122 10:39:34.198378 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:34 crc kubenswrapper[4772]: I1122 10:39:34.198440 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:34 crc kubenswrapper[4772]: I1122 10:39:34.198507 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:34 crc kubenswrapper[4772]: I1122 10:39:34.198582 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:34Z","lastTransitionTime":"2025-11-22T10:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:34 crc kubenswrapper[4772]: I1122 10:39:34.301452 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:34 crc kubenswrapper[4772]: I1122 10:39:34.301490 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:34 crc kubenswrapper[4772]: I1122 10:39:34.301501 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:34 crc kubenswrapper[4772]: I1122 10:39:34.301516 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:34 crc kubenswrapper[4772]: I1122 10:39:34.301530 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:34Z","lastTransitionTime":"2025-11-22T10:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:34 crc kubenswrapper[4772]: I1122 10:39:34.404942 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:34 crc kubenswrapper[4772]: I1122 10:39:34.404999 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:34 crc kubenswrapper[4772]: I1122 10:39:34.405018 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:34 crc kubenswrapper[4772]: I1122 10:39:34.405040 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:34 crc kubenswrapper[4772]: I1122 10:39:34.405095 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:34Z","lastTransitionTime":"2025-11-22T10:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:34 crc kubenswrapper[4772]: I1122 10:39:34.413493 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 10:39:34 crc kubenswrapper[4772]: I1122 10:39:34.413665 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvsrl" Nov 22 10:39:34 crc kubenswrapper[4772]: E1122 10:39:34.413817 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 10:39:34 crc kubenswrapper[4772]: E1122 10:39:34.413960 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvsrl" podUID="c89edce7-fac8-4954-b2e9-420f0f2de6a8" Nov 22 10:39:34 crc kubenswrapper[4772]: I1122 10:39:34.507607 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:34 crc kubenswrapper[4772]: I1122 10:39:34.507679 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:34 crc kubenswrapper[4772]: I1122 10:39:34.507701 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:34 crc kubenswrapper[4772]: I1122 10:39:34.507729 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:34 crc kubenswrapper[4772]: I1122 10:39:34.507752 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:34Z","lastTransitionTime":"2025-11-22T10:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:34 crc kubenswrapper[4772]: I1122 10:39:34.611233 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:34 crc kubenswrapper[4772]: I1122 10:39:34.611310 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:34 crc kubenswrapper[4772]: I1122 10:39:34.611328 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:34 crc kubenswrapper[4772]: I1122 10:39:34.611366 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:34 crc kubenswrapper[4772]: I1122 10:39:34.611382 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:34Z","lastTransitionTime":"2025-11-22T10:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:34 crc kubenswrapper[4772]: I1122 10:39:34.714804 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:34 crc kubenswrapper[4772]: I1122 10:39:34.714884 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:34 crc kubenswrapper[4772]: I1122 10:39:34.714901 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:34 crc kubenswrapper[4772]: I1122 10:39:34.714930 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:34 crc kubenswrapper[4772]: I1122 10:39:34.714950 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:34Z","lastTransitionTime":"2025-11-22T10:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:34 crc kubenswrapper[4772]: I1122 10:39:34.818961 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:34 crc kubenswrapper[4772]: I1122 10:39:34.819096 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:34 crc kubenswrapper[4772]: I1122 10:39:34.819121 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:34 crc kubenswrapper[4772]: I1122 10:39:34.819202 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:34 crc kubenswrapper[4772]: I1122 10:39:34.819227 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:34Z","lastTransitionTime":"2025-11-22T10:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:34 crc kubenswrapper[4772]: I1122 10:39:34.923521 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:34 crc kubenswrapper[4772]: I1122 10:39:34.924133 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:34 crc kubenswrapper[4772]: I1122 10:39:34.924317 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:34 crc kubenswrapper[4772]: I1122 10:39:34.924483 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:34 crc kubenswrapper[4772]: I1122 10:39:34.924623 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:34Z","lastTransitionTime":"2025-11-22T10:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:35 crc kubenswrapper[4772]: I1122 10:39:35.029106 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:35 crc kubenswrapper[4772]: I1122 10:39:35.029150 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:35 crc kubenswrapper[4772]: I1122 10:39:35.029161 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:35 crc kubenswrapper[4772]: I1122 10:39:35.029181 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:35 crc kubenswrapper[4772]: I1122 10:39:35.029190 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:35Z","lastTransitionTime":"2025-11-22T10:39:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:35 crc kubenswrapper[4772]: I1122 10:39:35.132500 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:35 crc kubenswrapper[4772]: I1122 10:39:35.132553 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:35 crc kubenswrapper[4772]: I1122 10:39:35.132567 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:35 crc kubenswrapper[4772]: I1122 10:39:35.132587 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:35 crc kubenswrapper[4772]: I1122 10:39:35.132599 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:35Z","lastTransitionTime":"2025-11-22T10:39:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:35 crc kubenswrapper[4772]: I1122 10:39:35.235435 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:35 crc kubenswrapper[4772]: I1122 10:39:35.235474 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:35 crc kubenswrapper[4772]: I1122 10:39:35.235482 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:35 crc kubenswrapper[4772]: I1122 10:39:35.235500 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:35 crc kubenswrapper[4772]: I1122 10:39:35.235509 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:35Z","lastTransitionTime":"2025-11-22T10:39:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:35 crc kubenswrapper[4772]: I1122 10:39:35.338498 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:35 crc kubenswrapper[4772]: I1122 10:39:35.338578 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:35 crc kubenswrapper[4772]: I1122 10:39:35.338592 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:35 crc kubenswrapper[4772]: I1122 10:39:35.338610 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:35 crc kubenswrapper[4772]: I1122 10:39:35.338621 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:35Z","lastTransitionTime":"2025-11-22T10:39:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:35 crc kubenswrapper[4772]: I1122 10:39:35.412886 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 10:39:35 crc kubenswrapper[4772]: I1122 10:39:35.413030 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:39:35 crc kubenswrapper[4772]: E1122 10:39:35.413102 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 10:39:35 crc kubenswrapper[4772]: E1122 10:39:35.413316 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 10:39:35 crc kubenswrapper[4772]: I1122 10:39:35.414031 4772 scope.go:117] "RemoveContainer" containerID="a3d7262920f7a31f254be2b721019ad96c62da015bcc54afa32884321e1bba1c" Nov 22 10:39:35 crc kubenswrapper[4772]: I1122 10:39:35.442391 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:35 crc kubenswrapper[4772]: I1122 10:39:35.442449 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:35 crc kubenswrapper[4772]: I1122 10:39:35.442459 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:35 crc kubenswrapper[4772]: I1122 10:39:35.442477 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:35 crc kubenswrapper[4772]: I1122 10:39:35.442487 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:35Z","lastTransitionTime":"2025-11-22T10:39:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:35 crc kubenswrapper[4772]: I1122 10:39:35.545125 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:35 crc kubenswrapper[4772]: I1122 10:39:35.545184 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:35 crc kubenswrapper[4772]: I1122 10:39:35.545198 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:35 crc kubenswrapper[4772]: I1122 10:39:35.545223 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:35 crc kubenswrapper[4772]: I1122 10:39:35.545240 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:35Z","lastTransitionTime":"2025-11-22T10:39:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:35 crc kubenswrapper[4772]: I1122 10:39:35.648432 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:35 crc kubenswrapper[4772]: I1122 10:39:35.648509 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:35 crc kubenswrapper[4772]: I1122 10:39:35.648539 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:35 crc kubenswrapper[4772]: I1122 10:39:35.648579 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:35 crc kubenswrapper[4772]: I1122 10:39:35.648607 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:35Z","lastTransitionTime":"2025-11-22T10:39:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:35 crc kubenswrapper[4772]: I1122 10:39:35.751878 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:35 crc kubenswrapper[4772]: I1122 10:39:35.751930 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:35 crc kubenswrapper[4772]: I1122 10:39:35.751941 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:35 crc kubenswrapper[4772]: I1122 10:39:35.751962 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:35 crc kubenswrapper[4772]: I1122 10:39:35.751973 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:35Z","lastTransitionTime":"2025-11-22T10:39:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:35 crc kubenswrapper[4772]: I1122 10:39:35.859545 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:35 crc kubenswrapper[4772]: I1122 10:39:35.859600 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:35 crc kubenswrapper[4772]: I1122 10:39:35.859619 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:35 crc kubenswrapper[4772]: I1122 10:39:35.859640 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:35 crc kubenswrapper[4772]: I1122 10:39:35.859657 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:35Z","lastTransitionTime":"2025-11-22T10:39:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:35 crc kubenswrapper[4772]: I1122 10:39:35.963316 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:35 crc kubenswrapper[4772]: I1122 10:39:35.963359 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:35 crc kubenswrapper[4772]: I1122 10:39:35.963368 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:35 crc kubenswrapper[4772]: I1122 10:39:35.963383 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:35 crc kubenswrapper[4772]: I1122 10:39:35.963393 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:35Z","lastTransitionTime":"2025-11-22T10:39:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:35 crc kubenswrapper[4772]: I1122 10:39:35.963883 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mfm49_fd84e05e-cfd6-46d5-bd23-30689addcd8b/ovnkube-controller/2.log" Nov 22 10:39:35 crc kubenswrapper[4772]: I1122 10:39:35.968695 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" event={"ID":"fd84e05e-cfd6-46d5-bd23-30689addcd8b","Type":"ContainerStarted","Data":"4b3f10af3e6e7fe8f38c9d6363a62d02d35e3cd0e0365a3028acfd58dc287565"} Nov 22 10:39:35 crc kubenswrapper[4772]: I1122 10:39:35.969298 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" Nov 22 10:39:35 crc kubenswrapper[4772]: I1122 10:39:35.988513 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:35Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:36 crc kubenswrapper[4772]: I1122 10:39:36.043762 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd84e05e-cfd6-46d5-bd23-30689addcd8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://405215cb0867af5a98dd2c0948c12867783df5de69f3e68c0c420e94bd41fbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3372362ced27c1e5e6ed3271a6b0fd19c04ce1d12a6c0b45e60b4f19b9d9d9e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7959ee06497c972cc6915e3ab426dc2bba73e58343902084895fcca45fc97a61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dfec56a475b9928ac7803e64d6fd305028a7f8d66647c8d06eeaa12cb73838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da55e361a3ec7dca7a00a9ccac8e72cf43be0ebb8973fcac40e3bf86dac5237e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc7ab7764e2d893152e965392c916dff7876acb81d134cdec2615b253dbee64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b3f10af3e6e7fe8f38c9d6363a62d02d35e3cd0
e0365a3028acfd58dc287565\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3d7262920f7a31f254be2b721019ad96c62da015bcc54afa32884321e1bba1c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T10:39:07Z\\\",\\\"message\\\":\\\"*v1.Namespace event handler 1 for removal\\\\nI1122 10:39:07.442002 6580 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1122 10:39:07.442031 6580 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1122 10:39:07.442073 6580 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1122 10:39:07.442069 6580 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1122 10:39:07.442090 6580 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1122 10:39:07.442093 6580 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1122 10:39:07.442097 6580 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1122 10:39:07.442120 6580 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1122 10:39:07.442140 6580 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1122 10:39:07.442142 6580 factory.go:656] Stopping watch factory\\\\nI1122 10:39:07.442151 6580 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1122 10:39:07.442163 6580 ovnkube.go:599] Stopped ovnkube\\\\nI1122 10:39:07.442164 6580 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1122 10:39:07.442160 6580 handler.go:208] Removed *v1.Node event handler 2\\\\nI1122 10:39:07.442210 6580 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1122 
10:39:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:39:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:39:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac96e66a7d3d7899a8196d07614bccdd741145d5dc57c26b30fc4e127c6a94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mfm49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:36Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:36 crc kubenswrapper[4772]: I1122 10:39:36.065909 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mbpk7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e39748b-4fa5-4a70-8921-dc3dc814f124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c334585be8f67849986e80c7a7ea777340e93f963ba58fa0fb7b36b16a73142b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m5ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mbpk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:36Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:36 crc kubenswrapper[4772]: I1122 10:39:36.066761 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:36 crc kubenswrapper[4772]: I1122 10:39:36.066804 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:36 crc kubenswrapper[4772]: I1122 10:39:36.066813 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:36 crc kubenswrapper[4772]: I1122 10:39:36.066829 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:36 crc kubenswrapper[4772]: I1122 10:39:36.066840 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:36Z","lastTransitionTime":"2025-11-22T10:39:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:36 crc kubenswrapper[4772]: I1122 10:39:36.084950 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fvsrl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c89edce7-fac8-4954-b2e9-420f0f2de6a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x76mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x76mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fvsrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:36Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:36 crc kubenswrapper[4772]: I1122 10:39:36.142366 4772 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:36Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:36 crc kubenswrapper[4772]: I1122 10:39:36.161078 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9qv88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"751566de-1913-4dd6-9054-21febc661c27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902be5c4bb0f9aaca4a21ca0af13b2dc29d682c30e1e2d281beb6e4e15b6aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jckg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9qv88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:36Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:36 crc kubenswrapper[4772]: I1122 10:39:36.170306 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:36 crc kubenswrapper[4772]: I1122 10:39:36.170367 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:36 crc kubenswrapper[4772]: I1122 10:39:36.170383 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:36 crc kubenswrapper[4772]: I1122 10:39:36.170408 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:36 crc kubenswrapper[4772]: I1122 10:39:36.170425 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:36Z","lastTransitionTime":"2025-11-22T10:39:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:36 crc kubenswrapper[4772]: I1122 10:39:36.180755 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s4mvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d73fd58d-561a-4b16-9f9d-49ae966edb24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3325c0083f66336c643bd867f49df279967efccdbeadef4c588e1e43fbe1c13c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ae06ffdc2c0f4cebcbf28f2997026db4d45bcd0c95776e5a1d94527503041d0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T10:39:25Z\\\",\\\"message\\\":\\\"2025-11-22T10:38:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f2369e6a-e365-4b69-af4f-a38ee6fb8bee\\\\n2025-11-22T10:38:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f2369e6a-e365-4b69-af4f-a38ee6fb8bee to /host/opt/cni/bin/\\\\n2025-11-22T10:38:40Z [verbose] multus-daemon started\\\\n2025-11-22T10:38:40Z [verbose] Readiness Indicator file check\\\\n2025-11-22T10:39:25Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:39:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s4mvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:36Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:36 crc kubenswrapper[4772]: I1122 10:39:36.193383 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"888813e4-14b2-4bbc-badf-3fd7c315a740\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ade6ffb5c4bd3c19a2d85f21de1e0f198d6729b45df79233c8db3c73aff066f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7c687d1a3e98c09917692169294f8549b0f1ddeddcc97c073da4d8e5c17e012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1452c164ce569dfa4665a70113fb965905d1974744637904d6bfba2e35446f11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b331f41571928038bb597f1e94a67d24e726471c1a22082607dd26c11e8ea33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:36Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:36 crc kubenswrapper[4772]: I1122 10:39:36.209015 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9557fb28-d21d-45a3-b7b9-170936e83f56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a77d132d925d70472ff5456bf4521c0612c319296dd1f3fa31c68d3c84fb347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddce747c72c887bb7dff7503755d81d0e65305d2545f93dcb0d20bd6bf32b495\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c
987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187f5633fd42f380d4976965809b538530acef067adc7e02203f95df532dbbfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab2905e4976fc61f69872e9b37db10f5598fe3f96e0a124d22848215a7b5817\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 10:38:29.060158 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 10:38:29.060179 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 10:38:29.060386 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\"\\\\nI1122 10:38:29.060380 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763807893\\\\\\\\\\\\\\\" (2025-11-22 10:38:12 +0000 UTC to 2025-12-22 10:38:13 +0000 UTC (now=2025-11-22 10:38:29.060343851 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060557 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763807909\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763807908\\\\\\\\\\\\\\\" (2025-11-22 09:38:28 +0000 UTC to 2026-11-22 09:38:28 +0000 UTC (now=2025-11-22 10:38:29.060522446 +0000 UTC))\\\\\\\"\\\\nI1122 
10:38:29.060582 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 10:38:29.060604 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 10:38:29.060636 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 10:38:29.062066 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062334 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062385 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 10:38:29.063702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42fc0afe421b18b7212836931c95487807ac0b32f6e87d92e847bd7826aedcd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:36Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:36 crc kubenswrapper[4772]: I1122 
10:39:36.222383 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd883a9c-bd98-4ce0-9256-710c2311012f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab209a1a1db8448083e4994bbd6c236d67ddfd2ba6eff2bf3c05150f600ad698\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f447b1a4882759c19fd69e9acddae280d63009c5fc7d21b368b470160c53250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecffb7596767673ba91dbbee96f3fa4b32109c5d5145176e86618860187077de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92229718d5b39cbd9473102e7e569d8370d38e945387ff3f48ee9f4077d8d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92229718d5b39cbd9473102e7e569d8370d38e945387ff3f48ee9f4077d8d1fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:36Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:36 crc kubenswrapper[4772]: I1122 10:39:36.237327 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e05750665701a1973e974ba759b2a0ef54a89b24d63d33a9fce59e9aa7a21ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:36Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:36 crc kubenswrapper[4772]: I1122 10:39:36.251031 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:36Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:36 crc kubenswrapper[4772]: I1122 10:39:36.264414 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2386c238-461f-4956-940f-ac3c26eb052e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b3e223067a57a7ae418e1de80dff3c7537e0506e040028c413225f25397f03d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4247d5ab0095ab6c7315fdfcde34f8407d870723059e61c3417c20f80dc06ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwshd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:36Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:36 crc kubenswrapper[4772]: I1122 10:39:36.273433 4772 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:36 crc kubenswrapper[4772]: I1122 10:39:36.273478 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:36 crc kubenswrapper[4772]: I1122 10:39:36.273492 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:36 crc kubenswrapper[4772]: I1122 10:39:36.273515 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:36 crc kubenswrapper[4772]: I1122 10:39:36.273534 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:36Z","lastTransitionTime":"2025-11-22T10:39:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:36 crc kubenswrapper[4772]: I1122 10:39:36.282853 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z6xtb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e11c7f86-73db-4015-9fe5-c0b5047c19a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://901e96ef18a53b8231a138e235e8ed145d94f111dc515aaa5e5415c18535e457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d7a31c58b763dd30aa10f250fa654756dbb10166064c3adf6d313cc13b066f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2
c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11d7a31c58b763dd30aa10f250fa654756dbb10166064c3adf6d313cc13b066f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://533b405c883d98c14aca92e6a23bd1743dc0a76ed8256f2b892746195602fb03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://533b405c883d98c14aca92e6a23bd1743dc0a76ed8256f2b892746195602fb03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c508fde56a767c063afbb65f6272695d739d7d0c594efcafd7465397309ef931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c508fde56a767c063afbb65f6272695d739d7d0c594efcafd7465397309ef931\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dabd0e99ffb72fe38680ba04e6dc2cc342f2b21368c0646ac51924fdca817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dabd0e99ffb72fe38680ba04e6dc2cc342f2b21368c0646ac51924fdca817b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfe1b37351de69494e5f42516d1363a1acd97912fd754ddca224b61b7020b0e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfe1b37351de69494e5f42516d1363a1acd97912fd754ddca224b61b7020b0e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8442718fb97d911a608083bbbcee822108ea95c9896e42b0f37fa05c0b8d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df8442718fb97d911a608083bbbcee822108ea95c9896e42b0f37fa05c0b8d3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:45Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z6xtb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:36Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:36 crc kubenswrapper[4772]: I1122 10:39:36.297426 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af3d235e676f9a66e16a76f30d940c652287b80507d2335b94aaad54d2aae071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b9e216df73a0c6e89da9a01b70ca7393dea61d59aa4237584add3411d3362d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/
ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:36Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:36 crc kubenswrapper[4772]: I1122 10:39:36.312437 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6478bd29-b5dd-47d1-98bb-5c27a05ebe1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b77dd244ddad5f9ae9f977382b910123d1cfd80687600c51b036a8090eb14551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://178c7dfd1ee6d37ff147a01b6b673a6d30d4604bd9ec4484437888bf3ef5e689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://178c7dfd1ee6d37ff147a01b6b673a6d30d4604bd9ec4484437888bf3ef5e689\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Runnin
g\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:36Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:36 crc kubenswrapper[4772]: I1122 10:39:36.325673 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e026ddd84cdd62f9fa89b0c173fa464361017d3603d80e2b88a1b04af13487c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:36Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:36 crc kubenswrapper[4772]: I1122 10:39:36.340624 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n8qfx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0565ed-eb43-43a3-974c-45a23e9615a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73a731bf43107a62df11bd3a033a52f25d32911218822f9c5ac1f3d9b6f718ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98gmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a2e02f93671de603bc005b4f7561a62bb7681d678a3348b837656ae2af54f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98gmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-n8qfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:36Z is after 2025-08-24T17:21:41Z" Nov 22 
10:39:36 crc kubenswrapper[4772]: I1122 10:39:36.375747 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:36 crc kubenswrapper[4772]: I1122 10:39:36.375797 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:36 crc kubenswrapper[4772]: I1122 10:39:36.375808 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:36 crc kubenswrapper[4772]: I1122 10:39:36.375829 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:36 crc kubenswrapper[4772]: I1122 10:39:36.375842 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:36Z","lastTransitionTime":"2025-11-22T10:39:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:36 crc kubenswrapper[4772]: I1122 10:39:36.413271 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 10:39:36 crc kubenswrapper[4772]: I1122 10:39:36.413303 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvsrl" Nov 22 10:39:36 crc kubenswrapper[4772]: E1122 10:39:36.413598 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 10:39:36 crc kubenswrapper[4772]: E1122 10:39:36.413738 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fvsrl" podUID="c89edce7-fac8-4954-b2e9-420f0f2de6a8" Nov 22 10:39:36 crc kubenswrapper[4772]: I1122 10:39:36.479275 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:36 crc kubenswrapper[4772]: I1122 10:39:36.479378 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:36 crc kubenswrapper[4772]: I1122 10:39:36.479430 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:36 crc kubenswrapper[4772]: I1122 10:39:36.479476 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:36 crc kubenswrapper[4772]: I1122 10:39:36.479504 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:36Z","lastTransitionTime":"2025-11-22T10:39:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:36 crc kubenswrapper[4772]: I1122 10:39:36.581435 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:36 crc kubenswrapper[4772]: I1122 10:39:36.581468 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:36 crc kubenswrapper[4772]: I1122 10:39:36.581476 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:36 crc kubenswrapper[4772]: I1122 10:39:36.581490 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:36 crc kubenswrapper[4772]: I1122 10:39:36.581499 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:36Z","lastTransitionTime":"2025-11-22T10:39:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:36 crc kubenswrapper[4772]: I1122 10:39:36.683762 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:36 crc kubenswrapper[4772]: I1122 10:39:36.683804 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:36 crc kubenswrapper[4772]: I1122 10:39:36.683814 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:36 crc kubenswrapper[4772]: I1122 10:39:36.683830 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:36 crc kubenswrapper[4772]: I1122 10:39:36.683840 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:36Z","lastTransitionTime":"2025-11-22T10:39:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:36 crc kubenswrapper[4772]: I1122 10:39:36.786404 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:36 crc kubenswrapper[4772]: I1122 10:39:36.786446 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:36 crc kubenswrapper[4772]: I1122 10:39:36.786471 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:36 crc kubenswrapper[4772]: I1122 10:39:36.786485 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:36 crc kubenswrapper[4772]: I1122 10:39:36.786495 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:36Z","lastTransitionTime":"2025-11-22T10:39:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:36 crc kubenswrapper[4772]: I1122 10:39:36.889357 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:36 crc kubenswrapper[4772]: I1122 10:39:36.889433 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:36 crc kubenswrapper[4772]: I1122 10:39:36.889453 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:36 crc kubenswrapper[4772]: I1122 10:39:36.889487 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:36 crc kubenswrapper[4772]: I1122 10:39:36.889503 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:36Z","lastTransitionTime":"2025-11-22T10:39:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:37 crc kubenswrapper[4772]: I1122 10:39:37.105548 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:37 crc kubenswrapper[4772]: I1122 10:39:37.105828 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:37 crc kubenswrapper[4772]: I1122 10:39:37.105836 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:37 crc kubenswrapper[4772]: I1122 10:39:37.105850 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:37 crc kubenswrapper[4772]: I1122 10:39:37.105861 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:37Z","lastTransitionTime":"2025-11-22T10:39:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:37 crc kubenswrapper[4772]: I1122 10:39:37.108884 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mfm49_fd84e05e-cfd6-46d5-bd23-30689addcd8b/ovnkube-controller/3.log" Nov 22 10:39:37 crc kubenswrapper[4772]: I1122 10:39:37.109701 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mfm49_fd84e05e-cfd6-46d5-bd23-30689addcd8b/ovnkube-controller/2.log" Nov 22 10:39:37 crc kubenswrapper[4772]: I1122 10:39:37.112474 4772 generic.go:334] "Generic (PLEG): container finished" podID="fd84e05e-cfd6-46d5-bd23-30689addcd8b" containerID="4b3f10af3e6e7fe8f38c9d6363a62d02d35e3cd0e0365a3028acfd58dc287565" exitCode=1 Nov 22 10:39:37 crc kubenswrapper[4772]: I1122 10:39:37.112509 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" event={"ID":"fd84e05e-cfd6-46d5-bd23-30689addcd8b","Type":"ContainerDied","Data":"4b3f10af3e6e7fe8f38c9d6363a62d02d35e3cd0e0365a3028acfd58dc287565"} Nov 22 10:39:37 crc kubenswrapper[4772]: I1122 10:39:37.112539 4772 scope.go:117] "RemoveContainer" containerID="a3d7262920f7a31f254be2b721019ad96c62da015bcc54afa32884321e1bba1c" Nov 22 10:39:37 crc kubenswrapper[4772]: I1122 10:39:37.113890 4772 scope.go:117] "RemoveContainer" containerID="4b3f10af3e6e7fe8f38c9d6363a62d02d35e3cd0e0365a3028acfd58dc287565" Nov 22 10:39:37 crc kubenswrapper[4772]: E1122 10:39:37.114308 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-mfm49_openshift-ovn-kubernetes(fd84e05e-cfd6-46d5-bd23-30689addcd8b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" podUID="fd84e05e-cfd6-46d5-bd23-30689addcd8b" Nov 22 10:39:37 crc kubenswrapper[4772]: I1122 10:39:37.137986 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6478bd29-b5dd-47d1-98bb-5c27a05ebe1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b77dd244ddad5f9ae9f977382b910123d1cfd80687600c51b036a8090eb14551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://178c7dfd1ee6d37ff147a01b6b673a6d30d4604bd9ec4484437888bf3ef5e689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://178c7dfd1ee6d37ff147a01b6b673a6d30d4604bd9ec4484437888bf3ef5e689\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:37Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:37 crc kubenswrapper[4772]: I1122 10:39:37.151339 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e026ddd84cdd62f9fa89b0c173fa464361017d3603d80e2b88a1b04af13487c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:37Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:37 crc kubenswrapper[4772]: I1122 10:39:37.163345 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n8qfx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0565ed-eb43-43a3-974c-45a23e9615a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73a731bf43107a62df11bd3a033a52f25d32911218822f9c5ac1f3d9b6f718ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98gmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a2e02f93671de603bc005b4f7561a62bb7681d678a3348b837656ae2af54f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98gmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-n8qfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:37Z is after 2025-08-24T17:21:41Z" Nov 22 
10:39:37 crc kubenswrapper[4772]: I1122 10:39:37.177422 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:37Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:37 crc kubenswrapper[4772]: I1122 10:39:37.195996 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd84e05e-cfd6-46d5-bd23-30689addcd8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://405215cb0867af5a98dd2c0948c12867783df5de69f3e68c0c420e94bd41fbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3372362ced27c1e5e6ed3271a6b0fd19c04ce1d12a6c0b45e60b4f19b9d9d9e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7959ee06497c972cc6915e3ab426dc2bba73e58343902084895fcca45fc97a61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dfec56a475b9928ac7803e64d6fd305028a7f8d66647c8d06eeaa12cb73838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da55e361a3ec7dca7a00a9ccac8e72cf43be0ebb8973fcac40e3bf86dac5237e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc7ab7764e2d893152e965392c916dff7876acb81d134cdec2615b253dbee64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b3f10af3e6e7fe8f38c9d6363a62d02d35e3cd0e0365a3028acfd58dc287565\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3d7262920f7a31f254be2b721019ad96c62da015bcc54afa32884321e1bba1c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T10:39:07Z\\\",\\\"message\\\":\\\"*v1.Namespace event handler 1 for removal\\\\nI1122 10:39:07.442002 6580 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1122 10:39:07.442031 6580 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1122 10:39:07.442073 6580 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1122 10:39:07.442069 6580 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1122 10:39:07.442090 6580 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1122 10:39:07.442093 6580 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1122 10:39:07.442097 6580 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1122 10:39:07.442120 6580 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1122 10:39:07.442140 6580 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1122 10:39:07.442142 6580 factory.go:656] Stopping watch factory\\\\nI1122 10:39:07.442151 6580 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1122 10:39:07.442163 6580 ovnkube.go:599] Stopped ovnkube\\\\nI1122 10:39:07.442164 6580 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1122 10:39:07.442160 6580 handler.go:208] Removed *v1.Node event handler 2\\\\nI1122 10:39:07.442210 6580 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1122 10:39:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:39:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b3f10af3e6e7fe8f38c9d6363a62d02d35e3cd0e0365a3028acfd58dc287565\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T10:39:36Z\\\",\\\"message\\\":\\\" certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:36Z is after 2025-08-24T17:21:41Z]\\\\nI1122 10:39:36.530678 6882 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-mbpk7\\\\nI1122 10:39:36.530685 6882 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/node-ca-mbpk7 in node crc\\\\nI1122 10:39:36.530261 6882 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1122 10:39:36.530693 6882 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-mbpk7 after 0 failed attempt(s)\\\\nI1122 10:39:36.530701 6882 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-mbpk7\\\\nI1122 10:39:36.530671 6882 services_controller.go:451] Built service openshift-apiserver/api cluster-wide LB for network=default: 
[]services.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver/api_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/api\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.37\\\\\\\", Port:443, Template:(*services.Te\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:39:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac96e66a7d3d7899a8196d07614bccdd741145d5dc57c26b30fc4e127c6a94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"
},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mfm49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:37Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:37 crc kubenswrapper[4772]: I1122 10:39:37.207280 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mbpk7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e39748b-4fa5-4a70-8921-dc3dc814f124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c334585be8f67849986e80c7a7ea777340e93f963ba58fa0fb7b36b16a73142b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m5ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mbpk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:37Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:37 crc kubenswrapper[4772]: I1122 10:39:37.211718 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:37 crc kubenswrapper[4772]: I1122 10:39:37.211764 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:37 crc kubenswrapper[4772]: I1122 10:39:37.211777 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:37 crc kubenswrapper[4772]: I1122 10:39:37.211803 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:37 crc kubenswrapper[4772]: I1122 10:39:37.211816 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:37Z","lastTransitionTime":"2025-11-22T10:39:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:37 crc kubenswrapper[4772]: I1122 10:39:37.220978 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fvsrl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c89edce7-fac8-4954-b2e9-420f0f2de6a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x76mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x76mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fvsrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:37Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:37 crc kubenswrapper[4772]: I1122 10:39:37.231994 4772 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-dns/node-resolver-9qv88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"751566de-1913-4dd6-9054-21febc661c27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902be5c4bb0f9aaca4a21ca0af13b2dc29d682c30e1e2d281beb6e4e15b6aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jckg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9qv88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:37Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:37 crc kubenswrapper[4772]: I1122 10:39:37.246174 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s4mvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d73fd58d-561a-4b16-9f9d-49ae966edb24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3325c0083f66336c643bd867f49df279967efccdbeadef4c588e1e43fbe1c13c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ae06ffdc2c0f4cebcbf28f2997026db4d45bcd0c95776e5a1d94527503041d0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T10:39:25Z\\\",\\\"message\\\":\\\"2025-11-22T10:38:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f2369e6a-e365-4b69-af4f-a38ee6fb8bee\\\\n2025-11-22T10:38:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f2369e6a-e365-4b69-af4f-a38ee6fb8bee to /host/opt/cni/bin/\\\\n2025-11-22T10:38:40Z [verbose] multus-daemon started\\\\n2025-11-22T10:38:40Z [verbose] Readiness Indicator file check\\\\n2025-11-22T10:39:25Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:39:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s4mvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:37Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:37 crc kubenswrapper[4772]: I1122 10:39:37.261304 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"888813e4-14b2-4bbc-badf-3fd7c315a740\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ade6ffb5c4bd3c19a2d85f21de1e0f198d6729b45df79233c8db3c73aff066f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7c687d1a3e98c09917692169294f8549b0f1ddeddcc97c073da4d8e5c17e012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1452c164ce569dfa4665a70113fb965905d1974744637904d6bfba2e35446f11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b331f41571928038bb597f1e94a67d24e726471c1a22082607dd26c11e8ea33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:37Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:37 crc kubenswrapper[4772]: I1122 10:39:37.276705 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9557fb28-d21d-45a3-b7b9-170936e83f56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a77d132d925d70472ff5456bf4521c0612c319296dd1f3fa31c68d3c84fb347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddce747c72c887bb7dff7503755d81d0e65305d2545f93dcb0d20bd6bf32b495\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c
987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187f5633fd42f380d4976965809b538530acef067adc7e02203f95df532dbbfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab2905e4976fc61f69872e9b37db10f5598fe3f96e0a124d22848215a7b5817\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 10:38:29.060158 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 10:38:29.060179 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 10:38:29.060386 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\"\\\\nI1122 10:38:29.060380 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763807893\\\\\\\\\\\\\\\" (2025-11-22 10:38:12 +0000 UTC to 2025-12-22 10:38:13 +0000 UTC (now=2025-11-22 10:38:29.060343851 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060557 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763807909\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763807908\\\\\\\\\\\\\\\" (2025-11-22 09:38:28 +0000 UTC to 2026-11-22 09:38:28 +0000 UTC (now=2025-11-22 10:38:29.060522446 +0000 UTC))\\\\\\\"\\\\nI1122 
10:38:29.060582 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 10:38:29.060604 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 10:38:29.060636 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 10:38:29.062066 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062334 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062385 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 10:38:29.063702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42fc0afe421b18b7212836931c95487807ac0b32f6e87d92e847bd7826aedcd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:37Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:37 crc kubenswrapper[4772]: I1122 
10:39:37.289680 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd883a9c-bd98-4ce0-9256-710c2311012f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab209a1a1db8448083e4994bbd6c236d67ddfd2ba6eff2bf3c05150f600ad698\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f447b1a4882759c19fd69e9acddae280d63009c5fc7d21b368b470160c53250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecffb7596767673ba91dbbee96f3fa4b32109c5d5145176e86618860187077de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92229718d5b39cbd9473102e7e569d8370d38e945387ff3f48ee9f4077d8d1fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92229718d5b39cbd9473102e7e569d8370d38e945387ff3f48ee9f4077d8d1fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:37Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:37 crc kubenswrapper[4772]: I1122 10:39:37.303577 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e05750665701a1973e974ba759b2a0ef54a89b24d63d33a9fce59e9aa7a21ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:37Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:37 crc kubenswrapper[4772]: I1122 10:39:37.315594 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:37 crc kubenswrapper[4772]: I1122 10:39:37.315659 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:37 crc kubenswrapper[4772]: I1122 10:39:37.315675 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:37 crc kubenswrapper[4772]: I1122 10:39:37.315701 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:37 crc kubenswrapper[4772]: I1122 10:39:37.315717 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:37Z","lastTransitionTime":"2025-11-22T10:39:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:37 crc kubenswrapper[4772]: I1122 10:39:37.321334 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:37Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:37 crc kubenswrapper[4772]: I1122 10:39:37.333998 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:37Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:37 crc kubenswrapper[4772]: I1122 10:39:37.346607 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2386c238-461f-4956-940f-ac3c26eb052e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b3e223067a57a7ae418e1de80dff3c7537e0506e040028c413225f25397f03d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4247d5ab0095ab6c7315fdfcde34f8407d870723059e61c3417c20f80dc06ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwshd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:37Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:37 crc kubenswrapper[4772]: I1122 10:39:37.362934 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z6xtb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e11c7f86-73db-4015-9fe5-c0b5047c19a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://901e96ef18a53b8231a138e235e8ed145d94f111dc515aaa5e5415c18535e457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d7a31c58b763dd30aa10f250fa654756dbb10166064c3adf6d313cc13b066f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11d7a31c58b763dd30aa10f250fa654756dbb10166064c3adf6d313cc13b066f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://533b405c883d98c14aca92e6a23bd1743dc0a76ed8256f2b892746195602fb03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://533b405c883d98c14aca92e6a23bd1743dc0a76ed8256f2b892746195602fb03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c508fde56a767c063afbb65f6272695d739d7d0c594efcafd7465397309ef931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c508fde56a767c063afbb65f6272695d739d7d0c594efcafd7465397309ef931\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dabd0e99ffb72fe38680ba04e6dc2cc342f2b21368c0646ac51924fdca817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dabd0e99ffb72fe38680ba04e6dc2cc342f2b21368c0646ac51924fdca817b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfe1b37351de69494e5f42516d1363a1acd97912fd754ddca224b61b7020b0e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfe1b37351de69494e5f42516d1363a1acd97912fd754ddca224b61b7020b0e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8442718fb97d911a608083bbbcee822108ea95c9896e42b0f37fa05c0b8d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df8442718fb97d911a608083bbbcee822108ea95c9896e42b0f37fa05c0b8d3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:46Z
\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z6xtb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:37Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:37 crc kubenswrapper[4772]: I1122 10:39:37.378386 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af3d235e676f9a66e16a76f30d940c652287b80507d2335b94aaad54d2aae071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b9e216df73a0c6e89da9a01b70ca7393dea61d59aa4237584add3411d3362d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"
mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:37Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:37 crc kubenswrapper[4772]: I1122 10:39:37.412641 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:39:37 crc kubenswrapper[4772]: E1122 10:39:37.412854 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 10:39:37 crc kubenswrapper[4772]: I1122 10:39:37.412641 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 10:39:37 crc kubenswrapper[4772]: E1122 10:39:37.413167 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 10:39:37 crc kubenswrapper[4772]: I1122 10:39:37.418347 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:37 crc kubenswrapper[4772]: I1122 10:39:37.418401 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:37 crc kubenswrapper[4772]: I1122 10:39:37.418415 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:37 crc kubenswrapper[4772]: I1122 10:39:37.418432 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:37 crc kubenswrapper[4772]: I1122 10:39:37.418449 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:37Z","lastTransitionTime":"2025-11-22T10:39:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:37 crc kubenswrapper[4772]: I1122 10:39:37.521744 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:37 crc kubenswrapper[4772]: I1122 10:39:37.521792 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:37 crc kubenswrapper[4772]: I1122 10:39:37.521807 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:37 crc kubenswrapper[4772]: I1122 10:39:37.521824 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:37 crc kubenswrapper[4772]: I1122 10:39:37.521838 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:37Z","lastTransitionTime":"2025-11-22T10:39:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:37 crc kubenswrapper[4772]: I1122 10:39:37.625826 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:37 crc kubenswrapper[4772]: I1122 10:39:37.625902 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:37 crc kubenswrapper[4772]: I1122 10:39:37.625924 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:37 crc kubenswrapper[4772]: I1122 10:39:37.625955 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:37 crc kubenswrapper[4772]: I1122 10:39:37.625978 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:37Z","lastTransitionTime":"2025-11-22T10:39:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:37 crc kubenswrapper[4772]: I1122 10:39:37.728454 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:37 crc kubenswrapper[4772]: I1122 10:39:37.728489 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:37 crc kubenswrapper[4772]: I1122 10:39:37.728508 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:37 crc kubenswrapper[4772]: I1122 10:39:37.728532 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:37 crc kubenswrapper[4772]: I1122 10:39:37.728544 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:37Z","lastTransitionTime":"2025-11-22T10:39:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:37 crc kubenswrapper[4772]: I1122 10:39:37.832591 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:37 crc kubenswrapper[4772]: I1122 10:39:37.832658 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:37 crc kubenswrapper[4772]: I1122 10:39:37.832676 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:37 crc kubenswrapper[4772]: I1122 10:39:37.832702 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:37 crc kubenswrapper[4772]: I1122 10:39:37.832724 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:37Z","lastTransitionTime":"2025-11-22T10:39:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:37 crc kubenswrapper[4772]: I1122 10:39:37.936633 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:37 crc kubenswrapper[4772]: I1122 10:39:37.936703 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:37 crc kubenswrapper[4772]: I1122 10:39:37.936716 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:37 crc kubenswrapper[4772]: I1122 10:39:37.936738 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:37 crc kubenswrapper[4772]: I1122 10:39:37.936751 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:37Z","lastTransitionTime":"2025-11-22T10:39:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:38 crc kubenswrapper[4772]: I1122 10:39:38.040579 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:38 crc kubenswrapper[4772]: I1122 10:39:38.040621 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:38 crc kubenswrapper[4772]: I1122 10:39:38.040631 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:38 crc kubenswrapper[4772]: I1122 10:39:38.040651 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:38 crc kubenswrapper[4772]: I1122 10:39:38.040668 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:38Z","lastTransitionTime":"2025-11-22T10:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:38 crc kubenswrapper[4772]: I1122 10:39:38.120019 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mfm49_fd84e05e-cfd6-46d5-bd23-30689addcd8b/ovnkube-controller/3.log" Nov 22 10:39:38 crc kubenswrapper[4772]: I1122 10:39:38.125289 4772 scope.go:117] "RemoveContainer" containerID="4b3f10af3e6e7fe8f38c9d6363a62d02d35e3cd0e0365a3028acfd58dc287565" Nov 22 10:39:38 crc kubenswrapper[4772]: E1122 10:39:38.125608 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-mfm49_openshift-ovn-kubernetes(fd84e05e-cfd6-46d5-bd23-30689addcd8b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" podUID="fd84e05e-cfd6-46d5-bd23-30689addcd8b" Nov 22 10:39:38 crc kubenswrapper[4772]: I1122 10:39:38.143429 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:38 crc kubenswrapper[4772]: I1122 10:39:38.143482 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:38 crc kubenswrapper[4772]: I1122 10:39:38.143493 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:38 crc kubenswrapper[4772]: I1122 10:39:38.143512 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:38 crc kubenswrapper[4772]: I1122 10:39:38.143524 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:38Z","lastTransitionTime":"2025-11-22T10:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:38 crc kubenswrapper[4772]: I1122 10:39:38.147638 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9557fb28-d21d-45a3-b7b9-170936e83f56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a77d132d925d70472ff5456bf4521c0612c319296dd1f3fa31c68d3c84fb347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddce747c72c887bb7dff7503755d81d0e65305d2545f93dcb0d20bd6bf32b495\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187f5633fd42f380d4976965809b538530acef067adc7e02203f95df532dbbfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab2905e4976fc61f69872e9b37db10f5598fe3f96e0a124d22848215a7b5817\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 10:38:29.060158 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 10:38:29.060179 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 10:38:29.060386 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\"\\\\nI1122 10:38:29.060380 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763807893\\\\\\\\\\\\\\\" (2025-11-22 10:38:12 +0000 UTC to 2025-12-22 10:38:13 +0000 UTC (now=2025-11-22 10:38:29.060343851 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060557 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763807909\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763807908\\\\\\\\\\\\\\\" (2025-11-22 09:38:28 +0000 UTC to 2026-11-22 09:38:28 +0000 UTC (now=2025-11-22 10:38:29.060522446 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060582 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 10:38:29.060604 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 10:38:29.060636 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 10:38:29.062066 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062334 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062385 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 10:38:29.063702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42fc0afe421b18b7212836931c95487807ac0b32f6e87d92e847bd7826aedcd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:38Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:38 crc kubenswrapper[4772]: I1122 10:39:38.165672 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd883a9c-bd98-4ce0-9256-710c2311012f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab209a1a1db8448083e4994bbd6c236d67ddfd2ba6eff2bf3c05150f600ad698\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f447b1a4882759c19fd69e9acddae280d63009c5fc7d21b368b470160c53250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecffb7596767673ba91dbbee96f3fa4b32109c5d5145176e86618860187077de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92229718d5b39cbd9473102e7e569d8370d38e945387ff3f48ee9f4077d8d1fd\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92229718d5b39cbd9473102e7e569d8370d38e945387ff3f48ee9f4077d8d1fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:38Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:38 crc kubenswrapper[4772]: I1122 10:39:38.180183 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e05750665701a1973e974ba759b2a0ef54a89b24d63d33a9fce59e9aa7a21ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:38Z is after 
2025-08-24T17:21:41Z" Nov 22 10:39:38 crc kubenswrapper[4772]: I1122 10:39:38.196645 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:38Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:38 crc kubenswrapper[4772]: I1122 10:39:38.214925 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:38Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:38 crc kubenswrapper[4772]: I1122 10:39:38.229400 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9qv88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"751566de-1913-4dd6-9054-21febc661c27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902be5c4bb0f9aaca4a21ca0af13b2dc29d682c30e1e2d281beb6e4e15b6aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jckg9\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9qv88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:38Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:38 crc kubenswrapper[4772]: I1122 10:39:38.243388 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s4mvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d73fd58d-561a-4b16-9f9d-49ae966edb24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3325c0083f66336c643bd867f49df279967efccdbeadef4c588e1e43fbe1c13c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ae06ffdc2c0f4cebcbf28f2997026db4d45bcd0c95776e5a1d94527503041d0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T10:39:25Z\\\",\\\"message\\\":\\\"2025-11-22T10:38:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f2369e6a-e365-4b69-af4f-a38ee6fb8bee\\\\n2025-11-22T10:38:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f2369e6a-e365-4b69-af4f-a38ee6fb8bee to /host/opt/cni/bin/\\\\n2025-11-22T10:38:40Z [verbose] multus-daemon started\\\\n2025-11-22T10:38:40Z [verbose] Readiness Indicator file check\\\\n2025-11-22T10:39:25Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:39:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s4mvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:38Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:38 crc kubenswrapper[4772]: I1122 10:39:38.245770 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:38 crc kubenswrapper[4772]: I1122 10:39:38.245895 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:38 crc kubenswrapper[4772]: I1122 10:39:38.246015 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:38 crc kubenswrapper[4772]: I1122 10:39:38.246125 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:38 crc kubenswrapper[4772]: I1122 10:39:38.246209 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:38Z","lastTransitionTime":"2025-11-22T10:39:38Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:38 crc kubenswrapper[4772]: I1122 10:39:38.263886 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"888813e4-14b2-4bbc-badf-3fd7c315a740\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ade6ffb5c4bd3c19a2d85f21de1e0f198d6729b45df79233c8db3c73aff066f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7c687d1a3e98c09917692169294f8549b0f1ddeddcc97c073da4d8e5c17e012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1452c164ce569dfa4665a70113fb965905d1974744637904d6bfba2e35446f11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\
\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b331f41571928038bb597f1e94a67d24e726471c1a22082607dd26c11e8ea33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:38Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:38 crc kubenswrapper[4772]: I1122 10:39:38.279364 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z6xtb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e11c7f86-73db-4015-9fe5-c0b5047c19a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://901e96ef18a53b8231a138e235e8ed145d94f111dc515aaa5e5415c18535e457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d7a31c58b763dd30aa10f250fa654756dbb10166064c3adf6d313cc13b066f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11d7a31c58b763dd30aa10f250fa654756dbb10166064c3adf6d313cc13b066f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://533b405c883d98c14aca92e6a23bd1743dc0a76ed8256f2b892746195602fb03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://533b405c883d98c14aca92e6a23bd1743dc0a76ed8256f2b892746195602fb03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c508fde56a767c063afbb65f6272695d739d7d0c594efcafd7465397309ef931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c508fde56a767c063afbb65f6272695d739d7d0c594efcafd7465397309ef931\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dabd0e99ffb72fe38680ba04e6dc2cc342f2b21368c0646ac51924fdca817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dabd0e99ffb72fe38680ba04e6dc2cc342f2b21368c0646ac51924fdca817b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfe1b37351de69494e5f42516d1363a1acd97912fd754ddca224b61b7020b0e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfe1b37351de69494e5f42516d1363a1acd97912fd754ddca224b61b7020b0e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8442718fb97d911a608083bbbcee822108ea95c9896e42b0f37fa05c0b8d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df8442718fb97d911a608083bbbcee822108ea95c9896e42b0f37fa05c0b8d3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z6xtb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:38Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:38 crc kubenswrapper[4772]: I1122 10:39:38.296241 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2386c238-461f-4956-940f-ac3c26eb052e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b3e223067a57a7ae418e1de80dff3c7537e0506e040028c413225f25397f03d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4247d5ab0095ab6c7315fdfcde34f8407d870723059e61c3417c20f80dc06ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwshd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:38Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:38 crc kubenswrapper[4772]: I1122 10:39:38.313806 4772 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af3d235e676f9a66e16a76f30d940c652287b80507d2335b94aaad54d2aae071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b9e216df73a0c6e89da9a01b70ca7393dea61d59aa4237584add3411d3362d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:38Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:38 crc kubenswrapper[4772]: I1122 10:39:38.325038 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6478bd29-b5dd-47d1-98bb-5c27a05ebe1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b77dd244ddad5f9ae9f977382b910123d1cfd80687600c51b036a8090eb14551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://178c7dfd1ee6d37ff147a01b6b673a6d30d4604bd9ec4484437888bf3ef5e689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://178c7dfd1ee6d37ff147a01b6b673a6d30d4604bd9ec4484437888bf3ef5e689\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:38Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:38 crc kubenswrapper[4772]: I1122 10:39:38.342512 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e026ddd84cdd62f9fa89b0c173fa464361017d3603d80e2b88a1b04af13487c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:38Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:38 crc kubenswrapper[4772]: I1122 10:39:38.349709 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:38 crc kubenswrapper[4772]: I1122 10:39:38.349907 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:38 crc kubenswrapper[4772]: I1122 10:39:38.350022 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:38 crc kubenswrapper[4772]: I1122 10:39:38.350140 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:38 crc kubenswrapper[4772]: I1122 10:39:38.350213 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:38Z","lastTransitionTime":"2025-11-22T10:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:38 crc kubenswrapper[4772]: I1122 10:39:38.355228 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n8qfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0565ed-eb43-43a3-974c-45a23e9615a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73a731bf43107a62df11bd3a033a52f25d32911218822f9c5ac1f3d9b6f718ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98gmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a2e02f93671de603bc005b4f7561a62bb7681d678a3348b837656ae2af54f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98gmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-n8qfx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:38Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:38 crc kubenswrapper[4772]: I1122 10:39:38.377682 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd84e05e-cfd6-46d5-bd23-30689addcd8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://405215cb0867af5a98dd2c0948c12867783df5de69f3e68c0c420e94bd41fbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3372362ced27c1e5e6ed3271a6b0fd19c04ce1d12a6c0b45e60b4f19b9d9d9e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7959ee06497c972cc6915e3ab426dc2bba73e58343902084895fcca45fc97a61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dfec56a475b9928ac7803e64d6fd305028a7f8d66647c8d06eeaa12cb73838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da55e361a3ec7dca7a00a9ccac8e72cf43be0ebb8973fcac40e3bf86dac5237e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc7ab
7764e2d893152e965392c916dff7876acb81d134cdec2615b253dbee64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b3f10af3e6e7fe8f38c9d6363a62d02d35e3cd0e0365a3028acfd58dc287565\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b3f10af3e6e7fe8f38c9d6363a62d02d35e3cd0e0365a3028acfd58dc287565\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T10:39:36Z\\\",\\\"message\\\":\\\" certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:36Z is after 2025-08-24T17:21:41Z]\\\\nI1122 10:39:36.530678 6882 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-mbpk7\\\\nI1122 10:39:36.530685 6882 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/node-ca-mbpk7 in node crc\\\\nI1122 10:39:36.530261 6882 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1122 10:39:36.530693 6882 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-mbpk7 after 0 failed attempt(s)\\\\nI1122 10:39:36.530701 6882 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-mbpk7\\\\nI1122 10:39:36.530671 6882 services_controller.go:451] Built service openshift-apiserver/api cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver/api_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/api\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.37\\\\\\\", Port:443, 
Template:(*services.Te\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:39:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-mfm49_openshift-ovn-kubernetes(fd84e05e-cfd6-46d5-bd23-30689addcd8b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac96e66a7d3d7899a8196d07614bccdd741145d5dc57c26b30fc4e127c6a94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\"
:true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mfm49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:38Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:38 crc kubenswrapper[4772]: I1122 10:39:38.386904 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mbpk7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e39748b-4fa5-4a70-8921-dc3dc814f124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c334585be8f67849986e80c7a7ea777340e93f963ba58fa0fb7b36b16a73142b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m5ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mbpk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:38Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:38 crc kubenswrapper[4772]: I1122 10:39:38.400366 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fvsrl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c89edce7-fac8-4954-b2e9-420f0f2de6a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x76mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x76mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fvsrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:38Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:38 crc kubenswrapper[4772]: I1122 10:39:38.413313 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 10:39:38 crc kubenswrapper[4772]: I1122 10:39:38.413663 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:38Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:38 crc kubenswrapper[4772]: I1122 10:39:38.413396 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvsrl" Nov 22 10:39:38 crc kubenswrapper[4772]: E1122 10:39:38.413920 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvsrl" podUID="c89edce7-fac8-4954-b2e9-420f0f2de6a8" Nov 22 10:39:38 crc kubenswrapper[4772]: E1122 10:39:38.414200 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 10:39:38 crc kubenswrapper[4772]: I1122 10:39:38.453271 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:38 crc kubenswrapper[4772]: I1122 10:39:38.453599 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:38 crc kubenswrapper[4772]: I1122 10:39:38.453665 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:38 crc kubenswrapper[4772]: I1122 10:39:38.453769 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:38 crc kubenswrapper[4772]: I1122 10:39:38.453841 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:38Z","lastTransitionTime":"2025-11-22T10:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:38 crc kubenswrapper[4772]: I1122 10:39:38.556851 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:38 crc kubenswrapper[4772]: I1122 10:39:38.556906 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:38 crc kubenswrapper[4772]: I1122 10:39:38.556918 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:38 crc kubenswrapper[4772]: I1122 10:39:38.556938 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:38 crc kubenswrapper[4772]: I1122 10:39:38.556950 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:38Z","lastTransitionTime":"2025-11-22T10:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:38 crc kubenswrapper[4772]: I1122 10:39:38.660399 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:38 crc kubenswrapper[4772]: I1122 10:39:38.660490 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:38 crc kubenswrapper[4772]: I1122 10:39:38.660516 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:38 crc kubenswrapper[4772]: I1122 10:39:38.660552 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:38 crc kubenswrapper[4772]: I1122 10:39:38.660577 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:38Z","lastTransitionTime":"2025-11-22T10:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:38 crc kubenswrapper[4772]: I1122 10:39:38.763853 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:38 crc kubenswrapper[4772]: I1122 10:39:38.763909 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:38 crc kubenswrapper[4772]: I1122 10:39:38.763922 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:38 crc kubenswrapper[4772]: I1122 10:39:38.763943 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:38 crc kubenswrapper[4772]: I1122 10:39:38.763956 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:38Z","lastTransitionTime":"2025-11-22T10:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:38 crc kubenswrapper[4772]: I1122 10:39:38.866296 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:38 crc kubenswrapper[4772]: I1122 10:39:38.866330 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:38 crc kubenswrapper[4772]: I1122 10:39:38.866339 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:38 crc kubenswrapper[4772]: I1122 10:39:38.866354 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:38 crc kubenswrapper[4772]: I1122 10:39:38.866365 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:38Z","lastTransitionTime":"2025-11-22T10:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:38 crc kubenswrapper[4772]: I1122 10:39:38.968855 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:38 crc kubenswrapper[4772]: I1122 10:39:38.969233 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:38 crc kubenswrapper[4772]: I1122 10:39:38.969452 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:38 crc kubenswrapper[4772]: I1122 10:39:38.969579 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:38 crc kubenswrapper[4772]: I1122 10:39:38.969685 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:38Z","lastTransitionTime":"2025-11-22T10:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:39 crc kubenswrapper[4772]: I1122 10:39:39.072662 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:39 crc kubenswrapper[4772]: I1122 10:39:39.072920 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:39 crc kubenswrapper[4772]: I1122 10:39:39.073077 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:39 crc kubenswrapper[4772]: I1122 10:39:39.073184 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:39 crc kubenswrapper[4772]: I1122 10:39:39.073286 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:39Z","lastTransitionTime":"2025-11-22T10:39:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:39 crc kubenswrapper[4772]: I1122 10:39:39.175767 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:39 crc kubenswrapper[4772]: I1122 10:39:39.175838 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:39 crc kubenswrapper[4772]: I1122 10:39:39.175862 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:39 crc kubenswrapper[4772]: I1122 10:39:39.175892 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:39 crc kubenswrapper[4772]: I1122 10:39:39.175913 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:39Z","lastTransitionTime":"2025-11-22T10:39:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:39 crc kubenswrapper[4772]: I1122 10:39:39.279730 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:39 crc kubenswrapper[4772]: I1122 10:39:39.279773 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:39 crc kubenswrapper[4772]: I1122 10:39:39.279784 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:39 crc kubenswrapper[4772]: I1122 10:39:39.279802 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:39 crc kubenswrapper[4772]: I1122 10:39:39.279814 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:39Z","lastTransitionTime":"2025-11-22T10:39:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:39 crc kubenswrapper[4772]: I1122 10:39:39.382450 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:39 crc kubenswrapper[4772]: I1122 10:39:39.382512 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:39 crc kubenswrapper[4772]: I1122 10:39:39.382531 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:39 crc kubenswrapper[4772]: I1122 10:39:39.382561 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:39 crc kubenswrapper[4772]: I1122 10:39:39.382577 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:39Z","lastTransitionTime":"2025-11-22T10:39:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:39 crc kubenswrapper[4772]: I1122 10:39:39.413107 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 10:39:39 crc kubenswrapper[4772]: E1122 10:39:39.413296 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 10:39:39 crc kubenswrapper[4772]: I1122 10:39:39.413106 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:39:39 crc kubenswrapper[4772]: E1122 10:39:39.413526 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 10:39:39 crc kubenswrapper[4772]: I1122 10:39:39.485855 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:39 crc kubenswrapper[4772]: I1122 10:39:39.485939 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:39 crc kubenswrapper[4772]: I1122 10:39:39.485962 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:39 crc kubenswrapper[4772]: I1122 10:39:39.485994 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:39 crc kubenswrapper[4772]: I1122 10:39:39.486017 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:39Z","lastTransitionTime":"2025-11-22T10:39:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:39 crc kubenswrapper[4772]: I1122 10:39:39.589783 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:39 crc kubenswrapper[4772]: I1122 10:39:39.589832 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:39 crc kubenswrapper[4772]: I1122 10:39:39.589846 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:39 crc kubenswrapper[4772]: I1122 10:39:39.589865 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:39 crc kubenswrapper[4772]: I1122 10:39:39.589925 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:39Z","lastTransitionTime":"2025-11-22T10:39:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:39 crc kubenswrapper[4772]: I1122 10:39:39.693530 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:39 crc kubenswrapper[4772]: I1122 10:39:39.693609 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:39 crc kubenswrapper[4772]: I1122 10:39:39.693628 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:39 crc kubenswrapper[4772]: I1122 10:39:39.693660 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:39 crc kubenswrapper[4772]: I1122 10:39:39.693682 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:39Z","lastTransitionTime":"2025-11-22T10:39:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:39 crc kubenswrapper[4772]: I1122 10:39:39.797894 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:39 crc kubenswrapper[4772]: I1122 10:39:39.797966 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:39 crc kubenswrapper[4772]: I1122 10:39:39.797984 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:39 crc kubenswrapper[4772]: I1122 10:39:39.798014 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:39 crc kubenswrapper[4772]: I1122 10:39:39.798035 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:39Z","lastTransitionTime":"2025-11-22T10:39:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:39 crc kubenswrapper[4772]: I1122 10:39:39.901749 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:39 crc kubenswrapper[4772]: I1122 10:39:39.901811 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:39 crc kubenswrapper[4772]: I1122 10:39:39.901872 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:39 crc kubenswrapper[4772]: I1122 10:39:39.901905 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:39 crc kubenswrapper[4772]: I1122 10:39:39.901925 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:39Z","lastTransitionTime":"2025-11-22T10:39:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:40 crc kubenswrapper[4772]: I1122 10:39:40.005854 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:40 crc kubenswrapper[4772]: I1122 10:39:40.005991 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:40 crc kubenswrapper[4772]: I1122 10:39:40.006021 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:40 crc kubenswrapper[4772]: I1122 10:39:40.006103 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:40 crc kubenswrapper[4772]: I1122 10:39:40.006133 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:40Z","lastTransitionTime":"2025-11-22T10:39:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:40 crc kubenswrapper[4772]: I1122 10:39:40.108981 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:40 crc kubenswrapper[4772]: I1122 10:39:40.109072 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:40 crc kubenswrapper[4772]: I1122 10:39:40.109094 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:40 crc kubenswrapper[4772]: I1122 10:39:40.109123 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:40 crc kubenswrapper[4772]: I1122 10:39:40.109143 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:40Z","lastTransitionTime":"2025-11-22T10:39:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:40 crc kubenswrapper[4772]: I1122 10:39:40.212966 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:40 crc kubenswrapper[4772]: I1122 10:39:40.213076 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:40 crc kubenswrapper[4772]: I1122 10:39:40.213097 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:40 crc kubenswrapper[4772]: I1122 10:39:40.213127 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:40 crc kubenswrapper[4772]: I1122 10:39:40.213155 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:40Z","lastTransitionTime":"2025-11-22T10:39:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:40 crc kubenswrapper[4772]: I1122 10:39:40.318474 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:40 crc kubenswrapper[4772]: I1122 10:39:40.318604 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:40 crc kubenswrapper[4772]: I1122 10:39:40.318624 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:40 crc kubenswrapper[4772]: I1122 10:39:40.318653 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:40 crc kubenswrapper[4772]: I1122 10:39:40.318707 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:40Z","lastTransitionTime":"2025-11-22T10:39:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:40 crc kubenswrapper[4772]: I1122 10:39:40.413274 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 10:39:40 crc kubenswrapper[4772]: I1122 10:39:40.413417 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvsrl" Nov 22 10:39:40 crc kubenswrapper[4772]: E1122 10:39:40.413458 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 10:39:40 crc kubenswrapper[4772]: E1122 10:39:40.413719 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fvsrl" podUID="c89edce7-fac8-4954-b2e9-420f0f2de6a8" Nov 22 10:39:40 crc kubenswrapper[4772]: I1122 10:39:40.423377 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:40 crc kubenswrapper[4772]: I1122 10:39:40.423437 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:40 crc kubenswrapper[4772]: I1122 10:39:40.423457 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:40 crc kubenswrapper[4772]: I1122 10:39:40.423494 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:40 crc kubenswrapper[4772]: I1122 10:39:40.423512 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:40Z","lastTransitionTime":"2025-11-22T10:39:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:40 crc kubenswrapper[4772]: I1122 10:39:40.526889 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:40 crc kubenswrapper[4772]: I1122 10:39:40.526938 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:40 crc kubenswrapper[4772]: I1122 10:39:40.526951 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:40 crc kubenswrapper[4772]: I1122 10:39:40.526972 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:40 crc kubenswrapper[4772]: I1122 10:39:40.526986 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:40Z","lastTransitionTime":"2025-11-22T10:39:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:40 crc kubenswrapper[4772]: I1122 10:39:40.630620 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:40 crc kubenswrapper[4772]: I1122 10:39:40.630702 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:40 crc kubenswrapper[4772]: I1122 10:39:40.630726 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:40 crc kubenswrapper[4772]: I1122 10:39:40.630757 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:40 crc kubenswrapper[4772]: I1122 10:39:40.630779 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:40Z","lastTransitionTime":"2025-11-22T10:39:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:40 crc kubenswrapper[4772]: I1122 10:39:40.734660 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:40 crc kubenswrapper[4772]: I1122 10:39:40.734727 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:40 crc kubenswrapper[4772]: I1122 10:39:40.734749 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:40 crc kubenswrapper[4772]: I1122 10:39:40.734783 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:40 crc kubenswrapper[4772]: I1122 10:39:40.734806 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:40Z","lastTransitionTime":"2025-11-22T10:39:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:40 crc kubenswrapper[4772]: I1122 10:39:40.839191 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:40 crc kubenswrapper[4772]: I1122 10:39:40.839265 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:40 crc kubenswrapper[4772]: I1122 10:39:40.839285 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:40 crc kubenswrapper[4772]: I1122 10:39:40.839316 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:40 crc kubenswrapper[4772]: I1122 10:39:40.839337 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:40Z","lastTransitionTime":"2025-11-22T10:39:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:40 crc kubenswrapper[4772]: I1122 10:39:40.942735 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:40 crc kubenswrapper[4772]: I1122 10:39:40.943350 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:40 crc kubenswrapper[4772]: I1122 10:39:40.943659 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:40 crc kubenswrapper[4772]: I1122 10:39:40.943928 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:40 crc kubenswrapper[4772]: I1122 10:39:40.944199 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:40Z","lastTransitionTime":"2025-11-22T10:39:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.048246 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.048670 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.048878 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.049119 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.049379 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:41Z","lastTransitionTime":"2025-11-22T10:39:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.152624 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.153033 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.153308 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.153477 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.153678 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:41Z","lastTransitionTime":"2025-11-22T10:39:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.164707 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.164781 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.164802 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.164834 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.164869 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:41Z","lastTransitionTime":"2025-11-22T10:39:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:41 crc kubenswrapper[4772]: E1122 10:39:41.187227 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"902c1d4e-ffb4-4b1b-add7-8f7170ab4bc7\\\",\\\"systemUUID\\\":\\\"856a4d77-e4e0-4420-9e80-7c5223144311\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:41Z is after 
2025-08-24T17:21:41Z" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.194685 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.194793 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.194817 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.194891 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.194917 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:41Z","lastTransitionTime":"2025-11-22T10:39:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:41 crc kubenswrapper[4772]: E1122 10:39:41.219470 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"902c1d4e-ffb4-4b1b-add7-8f7170ab4bc7\\\",\\\"systemUUID\\\":\\\"856a4d77-e4e0-4420-9e80-7c5223144311\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:41Z is after 
2025-08-24T17:21:41Z" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.225796 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.225996 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.226153 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.226267 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.226338 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:41Z","lastTransitionTime":"2025-11-22T10:39:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:41 crc kubenswrapper[4772]: E1122 10:39:41.246804 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"902c1d4e-ffb4-4b1b-add7-8f7170ab4bc7\\\",\\\"systemUUID\\\":\\\"856a4d77-e4e0-4420-9e80-7c5223144311\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:41Z is after 
2025-08-24T17:21:41Z" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.252331 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.252419 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.252443 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.252473 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.252491 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:41Z","lastTransitionTime":"2025-11-22T10:39:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:41 crc kubenswrapper[4772]: E1122 10:39:41.275299 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"902c1d4e-ffb4-4b1b-add7-8f7170ab4bc7\\\",\\\"systemUUID\\\":\\\"856a4d77-e4e0-4420-9e80-7c5223144311\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:41Z is after 
2025-08-24T17:21:41Z" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.279890 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.280010 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.280125 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.280221 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.280354 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:41Z","lastTransitionTime":"2025-11-22T10:39:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:41 crc kubenswrapper[4772]: E1122 10:39:41.300491 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"902c1d4e-ffb4-4b1b-add7-8f7170ab4bc7\\\",\\\"systemUUID\\\":\\\"856a4d77-e4e0-4420-9e80-7c5223144311\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:41Z is after 
2025-08-24T17:21:41Z" Nov 22 10:39:41 crc kubenswrapper[4772]: E1122 10:39:41.300753 4772 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.304177 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.304383 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.304494 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.304618 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.304733 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:41Z","lastTransitionTime":"2025-11-22T10:39:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.408427 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.408489 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.408512 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.408540 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.408560 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:41Z","lastTransitionTime":"2025-11-22T10:39:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.413169 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 10:39:41 crc kubenswrapper[4772]: E1122 10:39:41.413370 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.413452 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:39:41 crc kubenswrapper[4772]: E1122 10:39:41.413705 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.430378 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6478bd29-b5dd-47d1-98bb-5c27a05ebe1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b77dd244ddad5f9ae9f977382b910123d1cfd80687600c51b036a8090eb14551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://178c7dfd1ee6d37ff147a01b6b673a6d30d4604bd9ec4484437888bf3ef5e689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://178c7dfd1ee6d37ff147a01b6b673a6d30d4604bd9ec4484437888bf3ef5e689\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\
":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:41Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.450138 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e026ddd84cdd62f9fa89b0c173fa464361017d3603d80e2b88a1b04af13487c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:41Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.473385 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n8qfx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0565ed-eb43-43a3-974c-45a23e9615a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73a731bf43107a62df11bd3a033a52f25d32911218822f9c5ac1f3d9b6f718ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98gmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a2e02f93671de603bc005b4f7561a62bb7681d678a3348b837656ae2af54f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98gmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-n8qfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:41Z is after 2025-08-24T17:21:41Z" Nov 22 
10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.496126 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:41Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.512279 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.512347 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.512363 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.512390 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.512409 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:41Z","lastTransitionTime":"2025-11-22T10:39:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.531690 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd84e05e-cfd6-46d5-bd23-30689addcd8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://405215cb0867af5a98dd2c0948c12867783df5de69f3e68c0c420e94bd41fbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3372362ced27c1e5e6ed3271a6b0fd19c04ce1d12a6c0b45e60b4f19b9d9d9e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://7959ee06497c972cc6915e3ab426dc2bba73e58343902084895fcca45fc97a61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dfec56a475b9928ac7803e64d6fd305028a7f8d66647c8d06eeaa12cb73838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da55e361a3ec7dca7a00a9ccac8e72cf43be0ebb8973fcac40e3bf86dac5237e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc7ab7764e2d893152e965392c916dff7876acb81d134cdec2615b253dbee64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b3f10af3e6e7fe8f38c9d6363a62d02d35e3cd0e0365a3028acfd58dc287565\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b3f10af3e6e7fe8f38c9d6363a62d02d35e3cd0e0365a3028acfd58dc287565\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T10:39:36Z\\\",\\\"message\\\":\\\" certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:36Z is after 2025-08-24T17:21:41Z]\\\\nI1122 10:39:36.530678 6882 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-mbpk7\\\\nI1122 10:39:36.530685 6882 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/node-ca-mbpk7 in node crc\\\\nI1122 10:39:36.530261 6882 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1122 10:39:36.530693 6882 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-mbpk7 after 0 failed attempt(s)\\\\nI1122 10:39:36.530701 6882 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-mbpk7\\\\nI1122 10:39:36.530671 6882 services_controller.go:451] Built service openshift-apiserver/api cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver/api_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/api\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.37\\\\\\\", Port:443, 
Template:(*services.Te\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:39:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-mfm49_openshift-ovn-kubernetes(fd84e05e-cfd6-46d5-bd23-30689addcd8b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac96e66a7d3d7899a8196d07614bccdd741145d5dc57c26b30fc4e127c6a94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\"
:true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mfm49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:41Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.552287 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mbpk7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e39748b-4fa5-4a70-8921-dc3dc814f124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c334585be8f67849986e80c7a7ea777340e93f963ba58fa0fb7b36b16a73142b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m5ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mbpk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:41Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.572382 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fvsrl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c89edce7-fac8-4954-b2e9-420f0f2de6a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x76mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x76mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fvsrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:41Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.600238 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"888813e4-14b2-4bbc-badf-3fd7c315a740\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ade6ffb5c4bd3c19a2d85f21de1e0f198d6729b45df79233c8db3c73aff066f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7c687d1a3e98c09917692169294f8549b0f1ddeddcc97c073da4d8e5c17e012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1452c164ce569dfa4665a70113fb965905d1974744637904d6bfba2e35446f11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b331f41571928038bb597f1e94a67d24e726471c1a22082607dd26c11e8ea33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:41Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.614954 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.614995 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.615004 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.615022 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.615033 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:41Z","lastTransitionTime":"2025-11-22T10:39:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.626373 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9557fb28-d21d-45a3-b7b9-170936e83f56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a77d132d925d70472ff5456bf4521c0612c319296dd1f3fa31c68d3c84fb347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddce747c72c887bb7dff7503755d81d0e65305d2545f93dcb0d20bd6bf32b495\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187f5633fd42f380d4976965809b538530acef067adc7e02203f95df532dbbfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab2905e4976fc61f69872e9b37db10f5598fe3f96e0a124d22848215a7b5817\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 10:38:29.060158 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 10:38:29.060179 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 10:38:29.060386 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\"\\\\nI1122 10:38:29.060380 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763807893\\\\\\\\\\\\\\\" (2025-11-22 10:38:12 +0000 UTC to 2025-12-22 10:38:13 +0000 UTC (now=2025-11-22 10:38:29.060343851 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060557 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763807909\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763807908\\\\\\\\\\\\\\\" (2025-11-22 09:38:28 +0000 UTC to 2026-11-22 09:38:28 +0000 UTC (now=2025-11-22 10:38:29.060522446 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060582 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 10:38:29.060604 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 10:38:29.060636 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 10:38:29.062066 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062334 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062385 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 10:38:29.063702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42fc0afe421b18b7212836931c95487807ac0b32f6e87d92e847bd7826aedcd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:41Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.646657 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd883a9c-bd98-4ce0-9256-710c2311012f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab209a1a1db8448083e4994bbd6c236d67ddfd2ba6eff2bf3c05150f600ad698\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f447b1a4882759c19fd69e9acddae280d63009c5fc7d21b368b470160c53250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecffb7596767673ba91dbbee96f3fa4b32109c5d5145176e86618860187077de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92229718d5b39cbd9473102e7e569d8370d38e945387ff3f48ee9f4077d8d1fd\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92229718d5b39cbd9473102e7e569d8370d38e945387ff3f48ee9f4077d8d1fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:41Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.668085 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e05750665701a1973e974ba759b2a0ef54a89b24d63d33a9fce59e9aa7a21ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:41Z is after 
2025-08-24T17:21:41Z" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.686854 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:41Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.709686 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:41Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.718619 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.718675 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.718693 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.718723 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.718743 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:41Z","lastTransitionTime":"2025-11-22T10:39:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.727036 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9qv88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"751566de-1913-4dd6-9054-21febc661c27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902be5c4bb0f9aaca4a21ca0af13b2dc29d682c30e1e2d281beb6e4e15b6aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jckg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9qv88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:41Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.748478 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s4mvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d73fd58d-561a-4b16-9f9d-49ae966edb24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3325c0083f66336c643bd867f49df279967efccdbeadef4c588e1e43fbe1c13c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ae06ffdc2c0f4cebcbf28f2997026db4d45bcd0c95776e5a1d94527503041d0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T10:39:25Z\\\",\\\"message\\\":\\\"2025-11-22T10:38:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f2369e6a-e365-4b69-af4f-a38ee6fb8bee\\\\n2025-11-22T10:38:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f2369e6a-e365-4b69-af4f-a38ee6fb8bee to /host/opt/cni/bin/\\\\n2025-11-22T10:38:40Z [verbose] multus-daemon started\\\\n2025-11-22T10:38:40Z [verbose] Readiness Indicator file check\\\\n2025-11-22T10:39:25Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:39:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s4mvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:41Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.765892 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2386c238-461f-4956-940f-ac3c26eb052e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b3e223067a57a7ae418e1de80dff3c7537e0506e040028c413225f25397f03d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4247d5ab0095ab6c7315fdfcde34f8407d870723059e61c3417c20f80dc06ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwshd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:41Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.790539 4772 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z6xtb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e11c7f86-73db-4015-9fe5-c0b5047c19a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://901e96ef18a53b8231a138e235e8ed145d94f111dc515aaa5e5415c18535e457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d7a31c58b763dd30aa10f250fa654756dbb10166064c3adf6d313cc13b066f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11d7a31c58b763dd30aa10f250fa654756dbb10166064c3adf6d313cc13b066f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://533b405c883d98c14aca92e6a23bd1743dc0a76ed8256f2b892746195602fb03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://533b405c883d98c14aca92e6a23bd1743dc0a76ed8256f2b892746195602fb03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c508fde56a767c063afbb65f6272695d739d7d0c594efcafd7465397309ef931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c508fde56a767c063afbb65f6272695d739d7d0c594efcafd7465397309ef931\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dabd0e99ffb72fe38680ba04e6dc2cc342f2b21368c0646ac51924fdca817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dabd0e99ffb72fe38680ba04e6dc2cc342f2b21368c0646ac51924fdca817b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfe1b37351de69494e5f42516d1363a1acd97912fd754ddca224b61b7020b0e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfe1b37351de69494e5f42516d1363a1acd97912fd754ddca224b61b7020b0e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8442718fb97d911a608083bbbcee822108ea95c9896e42b0f37fa05c0b8d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df8442718fb97d911a608083bbbcee822108ea95c9896e42b0f37fa05c0b8d3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z6xtb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:41Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.813136 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af3d235e676f9a66e16a76f30d940c652287b80507d2335b94aaad54d2aae071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b9e216df73a0c6e89da9a01b70ca7393dea61d59aa4237584add3411d3362d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:41Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.827506 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.827584 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.827611 4772 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.827652 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.827703 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:41Z","lastTransitionTime":"2025-11-22T10:39:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.931750 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.931859 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.931912 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.931943 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:41 crc kubenswrapper[4772]: I1122 10:39:41.932018 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:41Z","lastTransitionTime":"2025-11-22T10:39:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:42 crc kubenswrapper[4772]: I1122 10:39:42.034845 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:42 crc kubenswrapper[4772]: I1122 10:39:42.035275 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:42 crc kubenswrapper[4772]: I1122 10:39:42.035288 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:42 crc kubenswrapper[4772]: I1122 10:39:42.035305 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:42 crc kubenswrapper[4772]: I1122 10:39:42.035316 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:42Z","lastTransitionTime":"2025-11-22T10:39:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:42 crc kubenswrapper[4772]: I1122 10:39:42.138371 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:42 crc kubenswrapper[4772]: I1122 10:39:42.138472 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:42 crc kubenswrapper[4772]: I1122 10:39:42.138497 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:42 crc kubenswrapper[4772]: I1122 10:39:42.138530 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:42 crc kubenswrapper[4772]: I1122 10:39:42.138554 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:42Z","lastTransitionTime":"2025-11-22T10:39:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:42 crc kubenswrapper[4772]: I1122 10:39:42.241427 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:42 crc kubenswrapper[4772]: I1122 10:39:42.241487 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:42 crc kubenswrapper[4772]: I1122 10:39:42.241511 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:42 crc kubenswrapper[4772]: I1122 10:39:42.241536 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:42 crc kubenswrapper[4772]: I1122 10:39:42.241550 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:42Z","lastTransitionTime":"2025-11-22T10:39:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:42 crc kubenswrapper[4772]: I1122 10:39:42.344793 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:42 crc kubenswrapper[4772]: I1122 10:39:42.344846 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:42 crc kubenswrapper[4772]: I1122 10:39:42.344857 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:42 crc kubenswrapper[4772]: I1122 10:39:42.344878 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:42 crc kubenswrapper[4772]: I1122 10:39:42.344893 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:42Z","lastTransitionTime":"2025-11-22T10:39:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:42 crc kubenswrapper[4772]: I1122 10:39:42.412724 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvsrl" Nov 22 10:39:42 crc kubenswrapper[4772]: E1122 10:39:42.412921 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvsrl" podUID="c89edce7-fac8-4954-b2e9-420f0f2de6a8" Nov 22 10:39:42 crc kubenswrapper[4772]: I1122 10:39:42.413003 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 10:39:42 crc kubenswrapper[4772]: E1122 10:39:42.413292 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 10:39:42 crc kubenswrapper[4772]: I1122 10:39:42.431989 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Nov 22 10:39:42 crc kubenswrapper[4772]: I1122 10:39:42.448360 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:42 crc kubenswrapper[4772]: I1122 10:39:42.448404 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:42 crc kubenswrapper[4772]: I1122 10:39:42.448417 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:42 crc kubenswrapper[4772]: I1122 10:39:42.448438 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:42 crc kubenswrapper[4772]: I1122 10:39:42.448451 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:42Z","lastTransitionTime":"2025-11-22T10:39:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:42 crc kubenswrapper[4772]: I1122 10:39:42.551894 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:42 crc kubenswrapper[4772]: I1122 10:39:42.551954 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:42 crc kubenswrapper[4772]: I1122 10:39:42.551975 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:42 crc kubenswrapper[4772]: I1122 10:39:42.552004 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:42 crc kubenswrapper[4772]: I1122 10:39:42.552025 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:42Z","lastTransitionTime":"2025-11-22T10:39:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:42 crc kubenswrapper[4772]: I1122 10:39:42.655436 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:42 crc kubenswrapper[4772]: I1122 10:39:42.655507 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:42 crc kubenswrapper[4772]: I1122 10:39:42.655517 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:42 crc kubenswrapper[4772]: I1122 10:39:42.655535 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:42 crc kubenswrapper[4772]: I1122 10:39:42.655546 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:42Z","lastTransitionTime":"2025-11-22T10:39:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:42 crc kubenswrapper[4772]: I1122 10:39:42.759039 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:42 crc kubenswrapper[4772]: I1122 10:39:42.759109 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:42 crc kubenswrapper[4772]: I1122 10:39:42.759122 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:42 crc kubenswrapper[4772]: I1122 10:39:42.759141 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:42 crc kubenswrapper[4772]: I1122 10:39:42.759153 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:42Z","lastTransitionTime":"2025-11-22T10:39:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:42 crc kubenswrapper[4772]: I1122 10:39:42.862116 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:42 crc kubenswrapper[4772]: I1122 10:39:42.862168 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:42 crc kubenswrapper[4772]: I1122 10:39:42.862177 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:42 crc kubenswrapper[4772]: I1122 10:39:42.862198 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:42 crc kubenswrapper[4772]: I1122 10:39:42.862208 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:42Z","lastTransitionTime":"2025-11-22T10:39:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:42 crc kubenswrapper[4772]: I1122 10:39:42.965530 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:42 crc kubenswrapper[4772]: I1122 10:39:42.965572 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:42 crc kubenswrapper[4772]: I1122 10:39:42.965582 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:42 crc kubenswrapper[4772]: I1122 10:39:42.965601 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:42 crc kubenswrapper[4772]: I1122 10:39:42.965614 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:42Z","lastTransitionTime":"2025-11-22T10:39:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:43 crc kubenswrapper[4772]: I1122 10:39:43.068662 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:43 crc kubenswrapper[4772]: I1122 10:39:43.068717 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:43 crc kubenswrapper[4772]: I1122 10:39:43.068728 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:43 crc kubenswrapper[4772]: I1122 10:39:43.068746 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:43 crc kubenswrapper[4772]: I1122 10:39:43.068761 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:43Z","lastTransitionTime":"2025-11-22T10:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:43 crc kubenswrapper[4772]: I1122 10:39:43.171374 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:43 crc kubenswrapper[4772]: I1122 10:39:43.171429 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:43 crc kubenswrapper[4772]: I1122 10:39:43.171440 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:43 crc kubenswrapper[4772]: I1122 10:39:43.171459 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:43 crc kubenswrapper[4772]: I1122 10:39:43.171472 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:43Z","lastTransitionTime":"2025-11-22T10:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:43 crc kubenswrapper[4772]: I1122 10:39:43.274110 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:43 crc kubenswrapper[4772]: I1122 10:39:43.274156 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:43 crc kubenswrapper[4772]: I1122 10:39:43.274168 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:43 crc kubenswrapper[4772]: I1122 10:39:43.274187 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:43 crc kubenswrapper[4772]: I1122 10:39:43.274199 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:43Z","lastTransitionTime":"2025-11-22T10:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:43 crc kubenswrapper[4772]: I1122 10:39:43.378927 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:43 crc kubenswrapper[4772]: I1122 10:39:43.378974 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:43 crc kubenswrapper[4772]: I1122 10:39:43.378988 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:43 crc kubenswrapper[4772]: I1122 10:39:43.379011 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:43 crc kubenswrapper[4772]: I1122 10:39:43.379027 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:43Z","lastTransitionTime":"2025-11-22T10:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:43 crc kubenswrapper[4772]: I1122 10:39:43.413510 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:39:43 crc kubenswrapper[4772]: I1122 10:39:43.413510 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 10:39:43 crc kubenswrapper[4772]: E1122 10:39:43.413714 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 10:39:43 crc kubenswrapper[4772]: E1122 10:39:43.413779 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 10:39:43 crc kubenswrapper[4772]: I1122 10:39:43.482171 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:43 crc kubenswrapper[4772]: I1122 10:39:43.482227 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:43 crc kubenswrapper[4772]: I1122 10:39:43.482238 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:43 crc kubenswrapper[4772]: I1122 10:39:43.482263 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:43 crc kubenswrapper[4772]: I1122 10:39:43.482279 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:43Z","lastTransitionTime":"2025-11-22T10:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:43 crc kubenswrapper[4772]: I1122 10:39:43.585607 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:43 crc kubenswrapper[4772]: I1122 10:39:43.585673 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:43 crc kubenswrapper[4772]: I1122 10:39:43.585699 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:43 crc kubenswrapper[4772]: I1122 10:39:43.585721 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:43 crc kubenswrapper[4772]: I1122 10:39:43.585734 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:43Z","lastTransitionTime":"2025-11-22T10:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:43 crc kubenswrapper[4772]: I1122 10:39:43.688970 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:43 crc kubenswrapper[4772]: I1122 10:39:43.689080 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:43 crc kubenswrapper[4772]: I1122 10:39:43.689101 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:43 crc kubenswrapper[4772]: I1122 10:39:43.689128 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:43 crc kubenswrapper[4772]: I1122 10:39:43.689150 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:43Z","lastTransitionTime":"2025-11-22T10:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:43 crc kubenswrapper[4772]: I1122 10:39:43.792439 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:43 crc kubenswrapper[4772]: I1122 10:39:43.792512 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:43 crc kubenswrapper[4772]: I1122 10:39:43.792532 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:43 crc kubenswrapper[4772]: I1122 10:39:43.792564 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:43 crc kubenswrapper[4772]: I1122 10:39:43.792581 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:43Z","lastTransitionTime":"2025-11-22T10:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:43 crc kubenswrapper[4772]: I1122 10:39:43.895339 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:43 crc kubenswrapper[4772]: I1122 10:39:43.895414 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:43 crc kubenswrapper[4772]: I1122 10:39:43.895434 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:43 crc kubenswrapper[4772]: I1122 10:39:43.895466 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:43 crc kubenswrapper[4772]: I1122 10:39:43.895485 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:43Z","lastTransitionTime":"2025-11-22T10:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:44 crc kubenswrapper[4772]: I1122 10:39:44.001556 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:44 crc kubenswrapper[4772]: I1122 10:39:44.001643 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:44 crc kubenswrapper[4772]: I1122 10:39:44.001680 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:44 crc kubenswrapper[4772]: I1122 10:39:44.001719 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:44 crc kubenswrapper[4772]: I1122 10:39:44.001744 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:44Z","lastTransitionTime":"2025-11-22T10:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:44 crc kubenswrapper[4772]: I1122 10:39:44.108254 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:44 crc kubenswrapper[4772]: I1122 10:39:44.108343 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:44 crc kubenswrapper[4772]: I1122 10:39:44.108381 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:44 crc kubenswrapper[4772]: I1122 10:39:44.108417 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:44 crc kubenswrapper[4772]: I1122 10:39:44.108445 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:44Z","lastTransitionTime":"2025-11-22T10:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:44 crc kubenswrapper[4772]: I1122 10:39:44.212910 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:44 crc kubenswrapper[4772]: I1122 10:39:44.212995 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:44 crc kubenswrapper[4772]: I1122 10:39:44.213024 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:44 crc kubenswrapper[4772]: I1122 10:39:44.213095 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:44 crc kubenswrapper[4772]: I1122 10:39:44.213126 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:44Z","lastTransitionTime":"2025-11-22T10:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:44 crc kubenswrapper[4772]: I1122 10:39:44.317629 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:44 crc kubenswrapper[4772]: I1122 10:39:44.317698 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:44 crc kubenswrapper[4772]: I1122 10:39:44.317711 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:44 crc kubenswrapper[4772]: I1122 10:39:44.317751 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:44 crc kubenswrapper[4772]: I1122 10:39:44.317768 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:44Z","lastTransitionTime":"2025-11-22T10:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:44 crc kubenswrapper[4772]: I1122 10:39:44.413218 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 10:39:44 crc kubenswrapper[4772]: I1122 10:39:44.413261 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvsrl" Nov 22 10:39:44 crc kubenswrapper[4772]: E1122 10:39:44.413438 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 10:39:44 crc kubenswrapper[4772]: E1122 10:39:44.413511 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvsrl" podUID="c89edce7-fac8-4954-b2e9-420f0f2de6a8" Nov 22 10:39:44 crc kubenswrapper[4772]: I1122 10:39:44.420963 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:44 crc kubenswrapper[4772]: I1122 10:39:44.421147 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:44 crc kubenswrapper[4772]: I1122 10:39:44.421167 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:44 crc kubenswrapper[4772]: I1122 10:39:44.421195 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:44 crc kubenswrapper[4772]: I1122 10:39:44.421217 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:44Z","lastTransitionTime":"2025-11-22T10:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:44 crc kubenswrapper[4772]: I1122 10:39:44.524140 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:44 crc kubenswrapper[4772]: I1122 10:39:44.524189 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:44 crc kubenswrapper[4772]: I1122 10:39:44.524204 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:44 crc kubenswrapper[4772]: I1122 10:39:44.524231 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:44 crc kubenswrapper[4772]: I1122 10:39:44.524248 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:44Z","lastTransitionTime":"2025-11-22T10:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:44 crc kubenswrapper[4772]: I1122 10:39:44.627533 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:44 crc kubenswrapper[4772]: I1122 10:39:44.627610 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:44 crc kubenswrapper[4772]: I1122 10:39:44.627622 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:44 crc kubenswrapper[4772]: I1122 10:39:44.627640 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:44 crc kubenswrapper[4772]: I1122 10:39:44.627651 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:44Z","lastTransitionTime":"2025-11-22T10:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:44 crc kubenswrapper[4772]: I1122 10:39:44.730960 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:44 crc kubenswrapper[4772]: I1122 10:39:44.731027 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:44 crc kubenswrapper[4772]: I1122 10:39:44.731038 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:44 crc kubenswrapper[4772]: I1122 10:39:44.731085 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:44 crc kubenswrapper[4772]: I1122 10:39:44.731103 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:44Z","lastTransitionTime":"2025-11-22T10:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:44 crc kubenswrapper[4772]: I1122 10:39:44.834367 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:44 crc kubenswrapper[4772]: I1122 10:39:44.834489 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:44 crc kubenswrapper[4772]: I1122 10:39:44.834517 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:44 crc kubenswrapper[4772]: I1122 10:39:44.834549 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:44 crc kubenswrapper[4772]: I1122 10:39:44.834570 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:44Z","lastTransitionTime":"2025-11-22T10:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:44 crc kubenswrapper[4772]: I1122 10:39:44.938434 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:44 crc kubenswrapper[4772]: I1122 10:39:44.938502 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:44 crc kubenswrapper[4772]: I1122 10:39:44.938520 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:44 crc kubenswrapper[4772]: I1122 10:39:44.938548 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:44 crc kubenswrapper[4772]: I1122 10:39:44.938605 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:44Z","lastTransitionTime":"2025-11-22T10:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:45 crc kubenswrapper[4772]: I1122 10:39:45.041543 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:45 crc kubenswrapper[4772]: I1122 10:39:45.041645 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:45 crc kubenswrapper[4772]: I1122 10:39:45.041673 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:45 crc kubenswrapper[4772]: I1122 10:39:45.041703 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:45 crc kubenswrapper[4772]: I1122 10:39:45.041723 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:45Z","lastTransitionTime":"2025-11-22T10:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:45 crc kubenswrapper[4772]: I1122 10:39:45.145761 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:45 crc kubenswrapper[4772]: I1122 10:39:45.145824 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:45 crc kubenswrapper[4772]: I1122 10:39:45.145835 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:45 crc kubenswrapper[4772]: I1122 10:39:45.145857 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:45 crc kubenswrapper[4772]: I1122 10:39:45.145872 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:45Z","lastTransitionTime":"2025-11-22T10:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:45 crc kubenswrapper[4772]: I1122 10:39:45.249326 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:45 crc kubenswrapper[4772]: I1122 10:39:45.249402 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:45 crc kubenswrapper[4772]: I1122 10:39:45.249420 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:45 crc kubenswrapper[4772]: I1122 10:39:45.249446 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:45 crc kubenswrapper[4772]: I1122 10:39:45.249462 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:45Z","lastTransitionTime":"2025-11-22T10:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:45 crc kubenswrapper[4772]: I1122 10:39:45.353193 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:45 crc kubenswrapper[4772]: I1122 10:39:45.353286 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:45 crc kubenswrapper[4772]: I1122 10:39:45.353306 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:45 crc kubenswrapper[4772]: I1122 10:39:45.353336 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:45 crc kubenswrapper[4772]: I1122 10:39:45.353366 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:45Z","lastTransitionTime":"2025-11-22T10:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:45 crc kubenswrapper[4772]: I1122 10:39:45.413033 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 10:39:45 crc kubenswrapper[4772]: I1122 10:39:45.413114 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:39:45 crc kubenswrapper[4772]: E1122 10:39:45.413195 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 10:39:45 crc kubenswrapper[4772]: E1122 10:39:45.413459 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 10:39:45 crc kubenswrapper[4772]: I1122 10:39:45.455903 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:45 crc kubenswrapper[4772]: I1122 10:39:45.455980 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:45 crc kubenswrapper[4772]: I1122 10:39:45.456000 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:45 crc kubenswrapper[4772]: I1122 10:39:45.456033 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:45 crc kubenswrapper[4772]: I1122 10:39:45.456096 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:45Z","lastTransitionTime":"2025-11-22T10:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:45 crc kubenswrapper[4772]: I1122 10:39:45.558948 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:45 crc kubenswrapper[4772]: I1122 10:39:45.559009 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:45 crc kubenswrapper[4772]: I1122 10:39:45.559020 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:45 crc kubenswrapper[4772]: I1122 10:39:45.559064 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:45 crc kubenswrapper[4772]: I1122 10:39:45.559078 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:45Z","lastTransitionTime":"2025-11-22T10:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:45 crc kubenswrapper[4772]: I1122 10:39:45.661868 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:45 crc kubenswrapper[4772]: I1122 10:39:45.661937 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:45 crc kubenswrapper[4772]: I1122 10:39:45.661950 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:45 crc kubenswrapper[4772]: I1122 10:39:45.661972 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:45 crc kubenswrapper[4772]: I1122 10:39:45.661986 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:45Z","lastTransitionTime":"2025-11-22T10:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:45 crc kubenswrapper[4772]: I1122 10:39:45.764404 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:45 crc kubenswrapper[4772]: I1122 10:39:45.764460 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:45 crc kubenswrapper[4772]: I1122 10:39:45.764478 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:45 crc kubenswrapper[4772]: I1122 10:39:45.764507 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:45 crc kubenswrapper[4772]: I1122 10:39:45.764529 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:45Z","lastTransitionTime":"2025-11-22T10:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:45 crc kubenswrapper[4772]: I1122 10:39:45.867851 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:45 crc kubenswrapper[4772]: I1122 10:39:45.867924 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:45 crc kubenswrapper[4772]: I1122 10:39:45.867945 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:45 crc kubenswrapper[4772]: I1122 10:39:45.867974 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:45 crc kubenswrapper[4772]: I1122 10:39:45.867994 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:45Z","lastTransitionTime":"2025-11-22T10:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:45 crc kubenswrapper[4772]: I1122 10:39:45.971682 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:45 crc kubenswrapper[4772]: I1122 10:39:45.971725 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:45 crc kubenswrapper[4772]: I1122 10:39:45.971743 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:45 crc kubenswrapper[4772]: I1122 10:39:45.971763 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:45 crc kubenswrapper[4772]: I1122 10:39:45.971775 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:45Z","lastTransitionTime":"2025-11-22T10:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:46 crc kubenswrapper[4772]: I1122 10:39:46.074827 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:46 crc kubenswrapper[4772]: I1122 10:39:46.074935 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:46 crc kubenswrapper[4772]: I1122 10:39:46.074957 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:46 crc kubenswrapper[4772]: I1122 10:39:46.074987 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:46 crc kubenswrapper[4772]: I1122 10:39:46.075007 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:46Z","lastTransitionTime":"2025-11-22T10:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:46 crc kubenswrapper[4772]: I1122 10:39:46.178325 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:46 crc kubenswrapper[4772]: I1122 10:39:46.178370 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:46 crc kubenswrapper[4772]: I1122 10:39:46.178380 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:46 crc kubenswrapper[4772]: I1122 10:39:46.178398 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:46 crc kubenswrapper[4772]: I1122 10:39:46.178415 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:46Z","lastTransitionTime":"2025-11-22T10:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:46 crc kubenswrapper[4772]: I1122 10:39:46.281308 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:46 crc kubenswrapper[4772]: I1122 10:39:46.281429 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:46 crc kubenswrapper[4772]: I1122 10:39:46.281494 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:46 crc kubenswrapper[4772]: I1122 10:39:46.281524 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:46 crc kubenswrapper[4772]: I1122 10:39:46.281545 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:46Z","lastTransitionTime":"2025-11-22T10:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:46 crc kubenswrapper[4772]: I1122 10:39:46.385462 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:46 crc kubenswrapper[4772]: I1122 10:39:46.385526 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:46 crc kubenswrapper[4772]: I1122 10:39:46.385544 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:46 crc kubenswrapper[4772]: I1122 10:39:46.385573 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:46 crc kubenswrapper[4772]: I1122 10:39:46.385592 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:46Z","lastTransitionTime":"2025-11-22T10:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:46 crc kubenswrapper[4772]: I1122 10:39:46.413109 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvsrl" Nov 22 10:39:46 crc kubenswrapper[4772]: I1122 10:39:46.413110 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 10:39:46 crc kubenswrapper[4772]: E1122 10:39:46.413754 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fvsrl" podUID="c89edce7-fac8-4954-b2e9-420f0f2de6a8" Nov 22 10:39:46 crc kubenswrapper[4772]: E1122 10:39:46.413842 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 10:39:46 crc kubenswrapper[4772]: I1122 10:39:46.488722 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:46 crc kubenswrapper[4772]: I1122 10:39:46.488791 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:46 crc kubenswrapper[4772]: I1122 10:39:46.488803 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:46 crc kubenswrapper[4772]: I1122 10:39:46.488826 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:46 crc kubenswrapper[4772]: I1122 10:39:46.488839 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:46Z","lastTransitionTime":"2025-11-22T10:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:46 crc kubenswrapper[4772]: I1122 10:39:46.591869 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:46 crc kubenswrapper[4772]: I1122 10:39:46.591961 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:46 crc kubenswrapper[4772]: I1122 10:39:46.591980 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:46 crc kubenswrapper[4772]: I1122 10:39:46.592009 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:46 crc kubenswrapper[4772]: I1122 10:39:46.592033 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:46Z","lastTransitionTime":"2025-11-22T10:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:46 crc kubenswrapper[4772]: I1122 10:39:46.695859 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:46 crc kubenswrapper[4772]: I1122 10:39:46.695944 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:46 crc kubenswrapper[4772]: I1122 10:39:46.695962 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:46 crc kubenswrapper[4772]: I1122 10:39:46.696001 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:46 crc kubenswrapper[4772]: I1122 10:39:46.696041 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:46Z","lastTransitionTime":"2025-11-22T10:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:46 crc kubenswrapper[4772]: I1122 10:39:46.798498 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:46 crc kubenswrapper[4772]: I1122 10:39:46.798559 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:46 crc kubenswrapper[4772]: I1122 10:39:46.798573 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:46 crc kubenswrapper[4772]: I1122 10:39:46.798597 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:46 crc kubenswrapper[4772]: I1122 10:39:46.798611 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:46Z","lastTransitionTime":"2025-11-22T10:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:46 crc kubenswrapper[4772]: I1122 10:39:46.901658 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:46 crc kubenswrapper[4772]: I1122 10:39:46.901737 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:46 crc kubenswrapper[4772]: I1122 10:39:46.901758 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:46 crc kubenswrapper[4772]: I1122 10:39:46.901789 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:46 crc kubenswrapper[4772]: I1122 10:39:46.901812 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:46Z","lastTransitionTime":"2025-11-22T10:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:47 crc kubenswrapper[4772]: I1122 10:39:47.005375 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:47 crc kubenswrapper[4772]: I1122 10:39:47.005449 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:47 crc kubenswrapper[4772]: I1122 10:39:47.005469 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:47 crc kubenswrapper[4772]: I1122 10:39:47.005497 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:47 crc kubenswrapper[4772]: I1122 10:39:47.005520 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:47Z","lastTransitionTime":"2025-11-22T10:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:47 crc kubenswrapper[4772]: I1122 10:39:47.110042 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:47 crc kubenswrapper[4772]: I1122 10:39:47.110310 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:47 crc kubenswrapper[4772]: I1122 10:39:47.110372 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:47 crc kubenswrapper[4772]: I1122 10:39:47.110437 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:47 crc kubenswrapper[4772]: I1122 10:39:47.110452 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:47Z","lastTransitionTime":"2025-11-22T10:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:47 crc kubenswrapper[4772]: I1122 10:39:47.213937 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:47 crc kubenswrapper[4772]: I1122 10:39:47.214026 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:47 crc kubenswrapper[4772]: I1122 10:39:47.214070 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:47 crc kubenswrapper[4772]: I1122 10:39:47.214097 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:47 crc kubenswrapper[4772]: I1122 10:39:47.214118 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:47Z","lastTransitionTime":"2025-11-22T10:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:47 crc kubenswrapper[4772]: I1122 10:39:47.317106 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:47 crc kubenswrapper[4772]: I1122 10:39:47.317161 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:47 crc kubenswrapper[4772]: I1122 10:39:47.317175 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:47 crc kubenswrapper[4772]: I1122 10:39:47.317198 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:47 crc kubenswrapper[4772]: I1122 10:39:47.317216 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:47Z","lastTransitionTime":"2025-11-22T10:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:47 crc kubenswrapper[4772]: I1122 10:39:47.413176 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 10:39:47 crc kubenswrapper[4772]: I1122 10:39:47.413223 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:39:47 crc kubenswrapper[4772]: E1122 10:39:47.413425 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 10:39:47 crc kubenswrapper[4772]: E1122 10:39:47.413822 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 10:39:47 crc kubenswrapper[4772]: I1122 10:39:47.420599 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:47 crc kubenswrapper[4772]: I1122 10:39:47.420648 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:47 crc kubenswrapper[4772]: I1122 10:39:47.420672 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:47 crc kubenswrapper[4772]: I1122 10:39:47.420704 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:47 crc kubenswrapper[4772]: I1122 10:39:47.420729 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:47Z","lastTransitionTime":"2025-11-22T10:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:47 crc kubenswrapper[4772]: I1122 10:39:47.524003 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:47 crc kubenswrapper[4772]: I1122 10:39:47.524111 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:47 crc kubenswrapper[4772]: I1122 10:39:47.524135 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:47 crc kubenswrapper[4772]: I1122 10:39:47.524168 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:47 crc kubenswrapper[4772]: I1122 10:39:47.524192 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:47Z","lastTransitionTime":"2025-11-22T10:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:47 crc kubenswrapper[4772]: I1122 10:39:47.627849 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:47 crc kubenswrapper[4772]: I1122 10:39:47.628402 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:47 crc kubenswrapper[4772]: I1122 10:39:47.628504 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:47 crc kubenswrapper[4772]: I1122 10:39:47.628592 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:47 crc kubenswrapper[4772]: I1122 10:39:47.628676 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:47Z","lastTransitionTime":"2025-11-22T10:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:47 crc kubenswrapper[4772]: I1122 10:39:47.732642 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:47 crc kubenswrapper[4772]: I1122 10:39:47.733230 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:47 crc kubenswrapper[4772]: I1122 10:39:47.733475 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:47 crc kubenswrapper[4772]: I1122 10:39:47.733691 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:47 crc kubenswrapper[4772]: I1122 10:39:47.733892 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:47Z","lastTransitionTime":"2025-11-22T10:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:47 crc kubenswrapper[4772]: I1122 10:39:47.837018 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:47 crc kubenswrapper[4772]: I1122 10:39:47.837078 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:47 crc kubenswrapper[4772]: I1122 10:39:47.837086 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:47 crc kubenswrapper[4772]: I1122 10:39:47.837105 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:47 crc kubenswrapper[4772]: I1122 10:39:47.837117 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:47Z","lastTransitionTime":"2025-11-22T10:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:47 crc kubenswrapper[4772]: I1122 10:39:47.941019 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:47 crc kubenswrapper[4772]: I1122 10:39:47.941113 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:47 crc kubenswrapper[4772]: I1122 10:39:47.941131 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:47 crc kubenswrapper[4772]: I1122 10:39:47.941156 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:47 crc kubenswrapper[4772]: I1122 10:39:47.941175 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:47Z","lastTransitionTime":"2025-11-22T10:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:48 crc kubenswrapper[4772]: I1122 10:39:48.044439 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:48 crc kubenswrapper[4772]: I1122 10:39:48.044507 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:48 crc kubenswrapper[4772]: I1122 10:39:48.044565 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:48 crc kubenswrapper[4772]: I1122 10:39:48.044598 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:48 crc kubenswrapper[4772]: I1122 10:39:48.044649 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:48Z","lastTransitionTime":"2025-11-22T10:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:48 crc kubenswrapper[4772]: I1122 10:39:48.146991 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:48 crc kubenswrapper[4772]: I1122 10:39:48.147036 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:48 crc kubenswrapper[4772]: I1122 10:39:48.147064 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:48 crc kubenswrapper[4772]: I1122 10:39:48.147080 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:48 crc kubenswrapper[4772]: I1122 10:39:48.147093 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:48Z","lastTransitionTime":"2025-11-22T10:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:48 crc kubenswrapper[4772]: I1122 10:39:48.250392 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:48 crc kubenswrapper[4772]: I1122 10:39:48.250487 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:48 crc kubenswrapper[4772]: I1122 10:39:48.250507 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:48 crc kubenswrapper[4772]: I1122 10:39:48.250536 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:48 crc kubenswrapper[4772]: I1122 10:39:48.250557 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:48Z","lastTransitionTime":"2025-11-22T10:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:48 crc kubenswrapper[4772]: I1122 10:39:48.353758 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:48 crc kubenswrapper[4772]: I1122 10:39:48.353833 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:48 crc kubenswrapper[4772]: I1122 10:39:48.353852 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:48 crc kubenswrapper[4772]: I1122 10:39:48.353884 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:48 crc kubenswrapper[4772]: I1122 10:39:48.353909 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:48Z","lastTransitionTime":"2025-11-22T10:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:48 crc kubenswrapper[4772]: I1122 10:39:48.413852 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 10:39:48 crc kubenswrapper[4772]: I1122 10:39:48.413858 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvsrl" Nov 22 10:39:48 crc kubenswrapper[4772]: E1122 10:39:48.414272 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 10:39:48 crc kubenswrapper[4772]: E1122 10:39:48.414426 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvsrl" podUID="c89edce7-fac8-4954-b2e9-420f0f2de6a8" Nov 22 10:39:48 crc kubenswrapper[4772]: I1122 10:39:48.456979 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:48 crc kubenswrapper[4772]: I1122 10:39:48.457064 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:48 crc kubenswrapper[4772]: I1122 10:39:48.457081 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:48 crc kubenswrapper[4772]: I1122 10:39:48.457106 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:48 crc kubenswrapper[4772]: I1122 10:39:48.457125 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:48Z","lastTransitionTime":"2025-11-22T10:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:48 crc kubenswrapper[4772]: I1122 10:39:48.549433 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c89edce7-fac8-4954-b2e9-420f0f2de6a8-metrics-certs\") pod \"network-metrics-daemon-fvsrl\" (UID: \"c89edce7-fac8-4954-b2e9-420f0f2de6a8\") " pod="openshift-multus/network-metrics-daemon-fvsrl" Nov 22 10:39:48 crc kubenswrapper[4772]: E1122 10:39:48.549667 4772 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 10:39:48 crc kubenswrapper[4772]: E1122 10:39:48.549779 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c89edce7-fac8-4954-b2e9-420f0f2de6a8-metrics-certs podName:c89edce7-fac8-4954-b2e9-420f0f2de6a8 nodeName:}" failed. No retries permitted until 2025-11-22 10:40:52.549749672 +0000 UTC m=+172.789194206 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c89edce7-fac8-4954-b2e9-420f0f2de6a8-metrics-certs") pod "network-metrics-daemon-fvsrl" (UID: "c89edce7-fac8-4954-b2e9-420f0f2de6a8") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 10:39:48 crc kubenswrapper[4772]: I1122 10:39:48.560893 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:48 crc kubenswrapper[4772]: I1122 10:39:48.560973 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:48 crc kubenswrapper[4772]: I1122 10:39:48.560992 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:48 crc kubenswrapper[4772]: I1122 10:39:48.561021 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:48 crc kubenswrapper[4772]: I1122 10:39:48.561040 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:48Z","lastTransitionTime":"2025-11-22T10:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:48 crc kubenswrapper[4772]: I1122 10:39:48.664182 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:48 crc kubenswrapper[4772]: I1122 10:39:48.664230 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:48 crc kubenswrapper[4772]: I1122 10:39:48.664240 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:48 crc kubenswrapper[4772]: I1122 10:39:48.664260 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:48 crc kubenswrapper[4772]: I1122 10:39:48.664274 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:48Z","lastTransitionTime":"2025-11-22T10:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:48 crc kubenswrapper[4772]: I1122 10:39:48.768200 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:48 crc kubenswrapper[4772]: I1122 10:39:48.768266 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:48 crc kubenswrapper[4772]: I1122 10:39:48.768287 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:48 crc kubenswrapper[4772]: I1122 10:39:48.768317 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:48 crc kubenswrapper[4772]: I1122 10:39:48.768339 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:48Z","lastTransitionTime":"2025-11-22T10:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:48 crc kubenswrapper[4772]: I1122 10:39:48.871499 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:48 crc kubenswrapper[4772]: I1122 10:39:48.871556 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:48 crc kubenswrapper[4772]: I1122 10:39:48.871567 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:48 crc kubenswrapper[4772]: I1122 10:39:48.871594 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:48 crc kubenswrapper[4772]: I1122 10:39:48.871606 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:48Z","lastTransitionTime":"2025-11-22T10:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:48 crc kubenswrapper[4772]: I1122 10:39:48.975325 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:48 crc kubenswrapper[4772]: I1122 10:39:48.975431 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:48 crc kubenswrapper[4772]: I1122 10:39:48.975460 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:48 crc kubenswrapper[4772]: I1122 10:39:48.975499 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:48 crc kubenswrapper[4772]: I1122 10:39:48.975543 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:48Z","lastTransitionTime":"2025-11-22T10:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:49 crc kubenswrapper[4772]: I1122 10:39:49.078462 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:49 crc kubenswrapper[4772]: I1122 10:39:49.078499 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:49 crc kubenswrapper[4772]: I1122 10:39:49.078510 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:49 crc kubenswrapper[4772]: I1122 10:39:49.078524 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:49 crc kubenswrapper[4772]: I1122 10:39:49.078533 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:49Z","lastTransitionTime":"2025-11-22T10:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:49 crc kubenswrapper[4772]: I1122 10:39:49.181419 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:49 crc kubenswrapper[4772]: I1122 10:39:49.181485 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:49 crc kubenswrapper[4772]: I1122 10:39:49.181504 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:49 crc kubenswrapper[4772]: I1122 10:39:49.181530 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:49 crc kubenswrapper[4772]: I1122 10:39:49.181550 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:49Z","lastTransitionTime":"2025-11-22T10:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:49 crc kubenswrapper[4772]: I1122 10:39:49.285331 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:49 crc kubenswrapper[4772]: I1122 10:39:49.285387 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:49 crc kubenswrapper[4772]: I1122 10:39:49.285404 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:49 crc kubenswrapper[4772]: I1122 10:39:49.285429 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:49 crc kubenswrapper[4772]: I1122 10:39:49.285448 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:49Z","lastTransitionTime":"2025-11-22T10:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:49 crc kubenswrapper[4772]: I1122 10:39:49.389018 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:49 crc kubenswrapper[4772]: I1122 10:39:49.389154 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:49 crc kubenswrapper[4772]: I1122 10:39:49.389177 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:49 crc kubenswrapper[4772]: I1122 10:39:49.389211 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:49 crc kubenswrapper[4772]: I1122 10:39:49.389251 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:49Z","lastTransitionTime":"2025-11-22T10:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:49 crc kubenswrapper[4772]: I1122 10:39:49.413369 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:39:49 crc kubenswrapper[4772]: I1122 10:39:49.413369 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 10:39:49 crc kubenswrapper[4772]: E1122 10:39:49.413863 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 10:39:49 crc kubenswrapper[4772]: E1122 10:39:49.413977 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 10:39:49 crc kubenswrapper[4772]: I1122 10:39:49.414155 4772 scope.go:117] "RemoveContainer" containerID="4b3f10af3e6e7fe8f38c9d6363a62d02d35e3cd0e0365a3028acfd58dc287565" Nov 22 10:39:49 crc kubenswrapper[4772]: E1122 10:39:49.414368 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-mfm49_openshift-ovn-kubernetes(fd84e05e-cfd6-46d5-bd23-30689addcd8b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" podUID="fd84e05e-cfd6-46d5-bd23-30689addcd8b" Nov 22 10:39:49 crc kubenswrapper[4772]: I1122 10:39:49.492484 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:49 crc kubenswrapper[4772]: I1122 10:39:49.492538 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:49 crc kubenswrapper[4772]: I1122 10:39:49.492553 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:49 crc kubenswrapper[4772]: I1122 10:39:49.492576 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:49 crc kubenswrapper[4772]: I1122 10:39:49.492592 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:49Z","lastTransitionTime":"2025-11-22T10:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:49 crc kubenswrapper[4772]: I1122 10:39:49.596454 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:49 crc kubenswrapper[4772]: I1122 10:39:49.596492 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:49 crc kubenswrapper[4772]: I1122 10:39:49.596505 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:49 crc kubenswrapper[4772]: I1122 10:39:49.596523 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:49 crc kubenswrapper[4772]: I1122 10:39:49.596535 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:49Z","lastTransitionTime":"2025-11-22T10:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:49 crc kubenswrapper[4772]: I1122 10:39:49.700614 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:49 crc kubenswrapper[4772]: I1122 10:39:49.700720 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:49 crc kubenswrapper[4772]: I1122 10:39:49.700741 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:49 crc kubenswrapper[4772]: I1122 10:39:49.700765 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:49 crc kubenswrapper[4772]: I1122 10:39:49.700789 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:49Z","lastTransitionTime":"2025-11-22T10:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:49 crc kubenswrapper[4772]: I1122 10:39:49.803444 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:49 crc kubenswrapper[4772]: I1122 10:39:49.803486 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:49 crc kubenswrapper[4772]: I1122 10:39:49.803498 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:49 crc kubenswrapper[4772]: I1122 10:39:49.803515 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:49 crc kubenswrapper[4772]: I1122 10:39:49.803529 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:49Z","lastTransitionTime":"2025-11-22T10:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:49 crc kubenswrapper[4772]: I1122 10:39:49.907121 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:49 crc kubenswrapper[4772]: I1122 10:39:49.907192 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:49 crc kubenswrapper[4772]: I1122 10:39:49.907211 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:49 crc kubenswrapper[4772]: I1122 10:39:49.907244 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:49 crc kubenswrapper[4772]: I1122 10:39:49.907270 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:49Z","lastTransitionTime":"2025-11-22T10:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:50 crc kubenswrapper[4772]: I1122 10:39:50.010224 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:50 crc kubenswrapper[4772]: I1122 10:39:50.010285 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:50 crc kubenswrapper[4772]: I1122 10:39:50.010303 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:50 crc kubenswrapper[4772]: I1122 10:39:50.010327 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:50 crc kubenswrapper[4772]: I1122 10:39:50.010348 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:50Z","lastTransitionTime":"2025-11-22T10:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:50 crc kubenswrapper[4772]: I1122 10:39:50.113441 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:50 crc kubenswrapper[4772]: I1122 10:39:50.113488 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:50 crc kubenswrapper[4772]: I1122 10:39:50.113500 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:50 crc kubenswrapper[4772]: I1122 10:39:50.113522 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:50 crc kubenswrapper[4772]: I1122 10:39:50.113534 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:50Z","lastTransitionTime":"2025-11-22T10:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:50 crc kubenswrapper[4772]: I1122 10:39:50.216641 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:50 crc kubenswrapper[4772]: I1122 10:39:50.216688 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:50 crc kubenswrapper[4772]: I1122 10:39:50.216700 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:50 crc kubenswrapper[4772]: I1122 10:39:50.216721 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:50 crc kubenswrapper[4772]: I1122 10:39:50.216735 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:50Z","lastTransitionTime":"2025-11-22T10:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:50 crc kubenswrapper[4772]: I1122 10:39:50.319987 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:50 crc kubenswrapper[4772]: I1122 10:39:50.320040 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:50 crc kubenswrapper[4772]: I1122 10:39:50.320093 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:50 crc kubenswrapper[4772]: I1122 10:39:50.320111 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:50 crc kubenswrapper[4772]: I1122 10:39:50.320122 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:50Z","lastTransitionTime":"2025-11-22T10:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:50 crc kubenswrapper[4772]: I1122 10:39:50.412802 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvsrl" Nov 22 10:39:50 crc kubenswrapper[4772]: E1122 10:39:50.413314 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvsrl" podUID="c89edce7-fac8-4954-b2e9-420f0f2de6a8" Nov 22 10:39:50 crc kubenswrapper[4772]: I1122 10:39:50.414726 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 10:39:50 crc kubenswrapper[4772]: E1122 10:39:50.414962 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 10:39:50 crc kubenswrapper[4772]: I1122 10:39:50.423168 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:50 crc kubenswrapper[4772]: I1122 10:39:50.423226 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:50 crc kubenswrapper[4772]: I1122 10:39:50.423244 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:50 crc kubenswrapper[4772]: I1122 10:39:50.423267 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:50 crc kubenswrapper[4772]: I1122 10:39:50.423287 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:50Z","lastTransitionTime":"2025-11-22T10:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:50 crc kubenswrapper[4772]: I1122 10:39:50.526844 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:50 crc kubenswrapper[4772]: I1122 10:39:50.526904 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:50 crc kubenswrapper[4772]: I1122 10:39:50.526923 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:50 crc kubenswrapper[4772]: I1122 10:39:50.526951 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:50 crc kubenswrapper[4772]: I1122 10:39:50.526974 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:50Z","lastTransitionTime":"2025-11-22T10:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:50 crc kubenswrapper[4772]: I1122 10:39:50.630357 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:50 crc kubenswrapper[4772]: I1122 10:39:50.630430 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:50 crc kubenswrapper[4772]: I1122 10:39:50.630441 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:50 crc kubenswrapper[4772]: I1122 10:39:50.630476 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:50 crc kubenswrapper[4772]: I1122 10:39:50.630495 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:50Z","lastTransitionTime":"2025-11-22T10:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:50 crc kubenswrapper[4772]: I1122 10:39:50.734201 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:50 crc kubenswrapper[4772]: I1122 10:39:50.734272 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:50 crc kubenswrapper[4772]: I1122 10:39:50.734291 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:50 crc kubenswrapper[4772]: I1122 10:39:50.734321 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:50 crc kubenswrapper[4772]: I1122 10:39:50.734341 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:50Z","lastTransitionTime":"2025-11-22T10:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:50 crc kubenswrapper[4772]: I1122 10:39:50.838575 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:50 crc kubenswrapper[4772]: I1122 10:39:50.838633 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:50 crc kubenswrapper[4772]: I1122 10:39:50.838650 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:50 crc kubenswrapper[4772]: I1122 10:39:50.838673 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:50 crc kubenswrapper[4772]: I1122 10:39:50.838691 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:50Z","lastTransitionTime":"2025-11-22T10:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:50 crc kubenswrapper[4772]: I1122 10:39:50.941813 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:50 crc kubenswrapper[4772]: I1122 10:39:50.941989 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:50 crc kubenswrapper[4772]: I1122 10:39:50.942023 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:50 crc kubenswrapper[4772]: I1122 10:39:50.942130 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:50 crc kubenswrapper[4772]: I1122 10:39:50.942232 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:50Z","lastTransitionTime":"2025-11-22T10:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.045268 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.045354 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.045382 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.045407 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.045422 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:51Z","lastTransitionTime":"2025-11-22T10:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.148388 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.148444 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.148458 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.148476 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.148488 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:51Z","lastTransitionTime":"2025-11-22T10:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.251476 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.251920 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.252010 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.252133 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.252236 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:51Z","lastTransitionTime":"2025-11-22T10:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.355268 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.355740 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.355828 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.355929 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.356022 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:51Z","lastTransitionTime":"2025-11-22T10:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.412981 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 10:39:51 crc kubenswrapper[4772]: E1122 10:39:51.413308 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.413481 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:39:51 crc kubenswrapper[4772]: E1122 10:39:51.413790 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.434673 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6478bd29-b5dd-47d1-98bb-5c27a05ebe1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b77dd244ddad5f9ae9f977382b910123d1cfd80687600c51b036a8090eb14551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://178c7dfd1ee6d37ff147a01b6b673a6d30d4604bd9ec4484437888bf3ef5e689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://178c7dfd1ee6d37ff147a01b6b673a6d30d4604bd9ec4484437888bf3ef5e689\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\
":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:51Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.452875 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e026ddd84cdd62f9fa89b0c173fa464361017d3603d80e2b88a1b04af13487c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:51Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.458093 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.458131 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.458143 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.458163 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.458175 4772 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:51Z","lastTransitionTime":"2025-11-22T10:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.470948 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n8qfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd0565ed-eb43-43a3-974c-45a23e9615a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73a731bf43107a62df11bd3a033a52f25d32911218822f9c5ac1f3d9b6f718ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98gmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a2e02f93671de603bc005b4f7561a62bb7681d678a3348b837656ae2af54f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98gmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-n8qfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:51Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.512724 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f540f95-ea3a-4088-a940-e4f16543378e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb1b92ae002effbfc025d22c10082116ba7137912cd2ba09c0753defd5c50343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://950d9a7b382476d6b5d6e463560635a3871de005dbf974cba3094a3a758d042b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b438ca04c6fabe14
85816b42146cb3187c37a93a06ce564cb7a50100e44288b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f8870ad28ab8562db15a8048e96cd5ec5b3eef59c69b5dcfadb89210723afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91944314de5fb702efd0ac83da330d5a2a494afc70ae98e4a0b45bd095d22281\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ff2ca46da5da5de5605b38081b6b04f2104135e99bf5261f42492280d96fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd3ff2ca46da5da5de5605b38081b6b04f2104135e99bf5261f42492280d96fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82af55d2dab893ddab5c953bf4ee9c4491d79b9a683a7e6e983eed714972ff07\\\",\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82af55d2dab893ddab5c953bf4ee9c4491d79b9a683a7e6e983eed714972ff07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ad8c93bdef8b86bcb77687cd0ef089a953e6814981a7268418200198d5df941c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad8c93bdef8b86bcb77687cd0ef089a953e6814981a7268418200198d5df941c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:51Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.538945 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd84e05e-cfd6-46d5-bd23-30689addcd8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://405215cb0867af5a98dd2c0948c12867783df5de69f3e68c0c420e94bd41fbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3372362ced27c1e5e6ed3271a6b0fd19c04ce1d12a6c0b45e60b4f19b9d9d9e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7959ee06497c972cc6915e3ab426dc2bba73e58343902084895fcca45fc97a61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34dfec56a475b9928ac7803e64d6fd305028a7f8d66647c8d06eeaa12cb73838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da55e361a3ec7dca7a00a9ccac8e72cf43be0ebb8973fcac40e3bf86dac5237e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc7ab7764e2d893152e965392c916dff7876acb81d134cdec2615b253dbee64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b3f10af3e6e7fe8f38c9d6363a62d02
d35e3cd0e0365a3028acfd58dc287565\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b3f10af3e6e7fe8f38c9d6363a62d02d35e3cd0e0365a3028acfd58dc287565\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T10:39:36Z\\\",\\\"message\\\":\\\" certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:36Z is after 2025-08-24T17:21:41Z]\\\\nI1122 10:39:36.530678 6882 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-mbpk7\\\\nI1122 10:39:36.530685 6882 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/node-ca-mbpk7 in node crc\\\\nI1122 10:39:36.530261 6882 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1122 10:39:36.530693 6882 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-mbpk7 after 0 failed attempt(s)\\\\nI1122 10:39:36.530701 6882 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-mbpk7\\\\nI1122 10:39:36.530671 6882 services_controller.go:451] Built service openshift-apiserver/api cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver/api_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/api\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.37\\\\\\\", Port:443, Template:(*services.Te\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:39:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mfm49_openshift-ovn-kubernetes(fd84e05e-cfd6-46d5-bd23-30689addcd8b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac96e66a7d3d7899a8196d07614bccdd741145d5dc57c26b30fc4e127c6a94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkcfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mfm49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:51Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.552873 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mbpk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e39748b-4fa5-4a70-8921-dc3dc814f124\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c334585be8f67849986e80c7a7ea777340e93f963ba58fa0fb7b36b16a73142b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6m5ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mbpk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:51Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.561228 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.561285 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.561297 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.561320 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.561336 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:51Z","lastTransitionTime":"2025-11-22T10:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.564331 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fvsrl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c89edce7-fac8-4954-b2e9-420f0f2de6a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x76mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x76mr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fvsrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:51Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.583291 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:51Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.601181 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9557fb28-d21d-45a3-b7b9-170936e83f56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a77d132d925d70472ff5456bf4521c0612c319296dd1f3fa31c68d3c84fb347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddce747c72c887bb7dff7503755d81d0e65305d2545f93dcb0d20bd6bf32b495\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://187f5633fd42f380d4976965809b538530acef067adc7e02203f95df532dbbfa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab2905e4976fc61f69872e9b37db10f5598fe3f96e0a124d22848215a7b5817\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da3ec012670266d907f1fdeee236d494f45f5a2f0df4981c258daf301648c216\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 10:38:29.060158 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 10:38:29.060179 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 10:38:29.060386 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\"\\\\nI1122 10:38:29.060380 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3558896242/tls.crt::/tmp/serving-cert-3558896242/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763807893\\\\\\\\\\\\\\\" (2025-11-22 10:38:12 +0000 UTC to 2025-12-22 10:38:13 +0000 UTC (now=2025-11-22 10:38:29.060343851 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060557 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763807909\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763807908\\\\\\\\\\\\\\\" (2025-11-22 09:38:28 +0000 UTC to 2026-11-22 09:38:28 +0000 UTC (now=2025-11-22 10:38:29.060522446 +0000 UTC))\\\\\\\"\\\\nI1122 10:38:29.060582 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 10:38:29.060604 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 10:38:29.060636 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 10:38:29.062066 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062334 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 10:38:29.062385 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 10:38:29.063702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42fc0afe421b18b7212836931c95487807ac0b32f6e87d92e847bd7826aedcd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f61ad7a1404844a08322b8079c4dc634f3ddb35681edbb66f308cb9011696abf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:51Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.618098 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd883a9c-bd98-4ce0-9256-710c2311012f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab209a1a1db8448083e4994bbd6c236d67ddfd2ba6eff2bf3c05150f600ad698\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f447b1a4882759c19fd69e9acddae280d63009c5fc7d21b368b470160c53250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecffb7596767673ba91dbbee96f3fa4b32109c5d5145176e86618860187077de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92229718d5b39cbd9473102e7e569d8370d38e945387ff3f48ee9f4077d8d1fd\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92229718d5b39cbd9473102e7e569d8370d38e945387ff3f48ee9f4077d8d1fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:51Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.634703 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e05750665701a1973e974ba759b2a0ef54a89b24d63d33a9fce59e9aa7a21ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:51Z is after 
2025-08-24T17:21:41Z" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.651863 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:51Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.664534 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.664613 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.664644 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.664667 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.664679 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:51Z","lastTransitionTime":"2025-11-22T10:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.667125 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:51Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.680602 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.680742 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.681432 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.681467 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.681488 4772 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:51Z","lastTransitionTime":"2025-11-22T10:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.682410 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9qv88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"751566de-1913-4dd6-9054-21febc661c27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902be5c4bb0f9aaca4a21ca0af13b2dc29d682c30e1e2d281beb6e4e15b6aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jckg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9qv88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:51Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:51 crc kubenswrapper[4772]: E1122 10:39:51.696759 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"902c1d4e-ffb4-4b1b-add7-8f7170ab4bc7\\\",\\\"systemUUID\\\":\\\"856a4d77-e4e0-4420-9e80-7c5223144311\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:51Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.698852 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s4mvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d73fd58d-561a-4b16-9f9d-49ae966edb24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3325c0083f66336c643bd867f49df279967efccdbeadef4c588e1e43fbe1c13c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ae06ffdc2c0f4cebcbf28f2997026db4d45bcd0c95776e5a1d94527503041d0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T10:39:25Z\\\",\\\"message\\\":\\\"2025-11-22T10:38:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f2369e6a-e365-4b69-af4f-a38ee6fb8bee\\\\n2025-11-22T10:38:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f2369e6a-e365-4b69-af4f-a38ee6fb8bee to /host/opt/cni/bin/\\\\n2025-11-22T10:38:40Z [verbose] multus-daemon started\\\\n2025-11-22T10:38:40Z [verbose] Readiness Indicator file check\\\\n2025-11-22T10:39:25Z [error] have you checked 
that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:39:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vk9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s4mvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:51Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.700980 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.701097 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.701181 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.701260 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.701331 4772 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:51Z","lastTransitionTime":"2025-11-22T10:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.714961 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"888813e4-14b2-4bbc-badf-3fd7c315a740\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ade6ffb5c4bd3c19a2d85f21de1e0f198d6729b45df79233c8db3c73aff066f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7c687d1a3e98c09917692169294f8549b0f1ddeddcc97c073da4d8e5c17e012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1452c164ce569dfa4665a70113fb965905d1974744637904d6bfba2e35446f11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b331f41571928038bb597f1e94a67d24e726471c1a22082607dd26c11e8ea33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:51Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:51 crc kubenswrapper[4772]: E1122 10:39:51.718725 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056
b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951
},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"902c1d4e-ffb4-4b1b-add7-8f7170ab4bc7\\\",\\\"systemUUID\\\":\\\"856a4d77-e4e0-4420-9e80-7c5223144311\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-11-22T10:39:51Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.724552 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.724609 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.724630 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.724660 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.724681 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:51Z","lastTransitionTime":"2025-11-22T10:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.735328 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z6xtb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e11c7f86-73db-4015-9fe5-c0b5047c19a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://901e96ef18a53b8231a138e235e8ed145d94f111dc515aaa5e5415c18535e457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d7a31c58b763dd30aa10f250fa654756dbb10166064c3adf6d313cc13b066f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11d7a31c58b763dd30aa10f250fa654756dbb10166064c3adf6d313cc13b066f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://533b405c883d98c14aca92e6a23bd1743dc0a76ed8256f2b892746195602fb03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://533b405c883d98c14aca92e6a23bd1743dc0a76ed8256f2b892746195602fb03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c508fde56a767c063afbb65f6272695d739d7d0c594efcafd7465397309ef931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c508fde56a767c063afbb65f6272695d739d7d0c594efcafd7465397309ef931\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":
\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dabd0e99ffb72fe38680ba04e6dc2cc342f2b21368c0646ac51924fdca817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8dabd0e99ffb72fe38680ba04e6dc2cc342f2b21368c0646ac51924fdca817b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfe1b37351de69494e5f42516d1363a1acd97912fd754ddca224b61b7020b0e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfe1b37351de69494e5f42516d1363a1acd97912fd754ddca224b61b7020b0e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8442718fb97d911a608083bbbcee822108ea95c9896e42b0f37fa05c0b8d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df8442718fb97d911a608083bbbcee82
2108ea95c9896e42b0f37fa05c0b8d3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T10:38:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9vq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z6xtb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:51Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:51 crc kubenswrapper[4772]: E1122 10:39:51.743863 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"902c1d4e-ffb4-4b1b-add7-8f7170ab4bc7\\\",\\\"systemUUID\\\":\\\"856a4d77-e4e0-4420-9e80-7c5223144311\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:51Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.749282 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.749386 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.749413 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.749491 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.749558 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:51Z","lastTransitionTime":"2025-11-22T10:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.751145 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2386c238-461f-4956-940f-ac3c26eb052e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b3e223067a57a7ae418e1de80dff3c7537e0506e040028c413225f25397f03d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4247d5ab0095ab6c7315fdfcde34f8407d870723059e61c3417c20f80dc06ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running
\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T10:38:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wwshd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:51Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:51 crc kubenswrapper[4772]: E1122 10:39:51.765863 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"902c1d4e-ffb4-4b1b-add7-8f7170ab4bc7\\\",\\\"systemUUID\\\":\\\"856a4d77-e4e0-4420-9e80-7c5223144311\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:51Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.766411 4772 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T10:38:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af3d235e676f9a66e16a76f30d940c652287b80507d2335b94aaad54d2aae071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2b9e216df73a0c6e89da9a01b70ca7393dea61d59aa4237584add3411d3362d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T10:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:51Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.771373 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.771456 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.771482 4772 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.771513 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.771538 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:51Z","lastTransitionTime":"2025-11-22T10:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:51 crc kubenswrapper[4772]: E1122 10:39:51.791605 4772 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T10:39:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T10:39:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"902c1d4e-ffb4-4b1b-add7-8f7170ab4bc7\\\",\\\"systemUUID\\\":\\\"856a4d77-e4e0-4420-9e80-7c5223144311\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T10:39:51Z is after 2025-08-24T17:21:41Z" Nov 22 10:39:51 crc kubenswrapper[4772]: E1122 10:39:51.791963 4772 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.794184 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.794267 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.794290 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.794320 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.794340 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:51Z","lastTransitionTime":"2025-11-22T10:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.897665 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.898023 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.898128 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.898206 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:51 crc kubenswrapper[4772]: I1122 10:39:51.898267 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:51Z","lastTransitionTime":"2025-11-22T10:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:52 crc kubenswrapper[4772]: I1122 10:39:52.000661 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:52 crc kubenswrapper[4772]: I1122 10:39:52.000698 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:52 crc kubenswrapper[4772]: I1122 10:39:52.000712 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:52 crc kubenswrapper[4772]: I1122 10:39:52.000733 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:52 crc kubenswrapper[4772]: I1122 10:39:52.000745 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:52Z","lastTransitionTime":"2025-11-22T10:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:52 crc kubenswrapper[4772]: I1122 10:39:52.112828 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:52 crc kubenswrapper[4772]: I1122 10:39:52.112940 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:52 crc kubenswrapper[4772]: I1122 10:39:52.112956 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:52 crc kubenswrapper[4772]: I1122 10:39:52.112978 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:52 crc kubenswrapper[4772]: I1122 10:39:52.112993 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:52Z","lastTransitionTime":"2025-11-22T10:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:52 crc kubenswrapper[4772]: I1122 10:39:52.217901 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:52 crc kubenswrapper[4772]: I1122 10:39:52.218340 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:52 crc kubenswrapper[4772]: I1122 10:39:52.218434 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:52 crc kubenswrapper[4772]: I1122 10:39:52.218522 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:52 crc kubenswrapper[4772]: I1122 10:39:52.218586 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:52Z","lastTransitionTime":"2025-11-22T10:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:52 crc kubenswrapper[4772]: I1122 10:39:52.322364 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:52 crc kubenswrapper[4772]: I1122 10:39:52.322785 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:52 crc kubenswrapper[4772]: I1122 10:39:52.322973 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:52 crc kubenswrapper[4772]: I1122 10:39:52.323164 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:52 crc kubenswrapper[4772]: I1122 10:39:52.323299 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:52Z","lastTransitionTime":"2025-11-22T10:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:52 crc kubenswrapper[4772]: I1122 10:39:52.413345 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 10:39:52 crc kubenswrapper[4772]: I1122 10:39:52.413345 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvsrl" Nov 22 10:39:52 crc kubenswrapper[4772]: E1122 10:39:52.413620 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 10:39:52 crc kubenswrapper[4772]: E1122 10:39:52.413663 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvsrl" podUID="c89edce7-fac8-4954-b2e9-420f0f2de6a8" Nov 22 10:39:52 crc kubenswrapper[4772]: I1122 10:39:52.427218 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:52 crc kubenswrapper[4772]: I1122 10:39:52.427250 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:52 crc kubenswrapper[4772]: I1122 10:39:52.427264 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:52 crc kubenswrapper[4772]: I1122 10:39:52.427279 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:52 crc kubenswrapper[4772]: I1122 10:39:52.427292 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:52Z","lastTransitionTime":"2025-11-22T10:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:52 crc kubenswrapper[4772]: I1122 10:39:52.530812 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:52 crc kubenswrapper[4772]: I1122 10:39:52.530882 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:52 crc kubenswrapper[4772]: I1122 10:39:52.530894 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:52 crc kubenswrapper[4772]: I1122 10:39:52.530915 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:52 crc kubenswrapper[4772]: I1122 10:39:52.530929 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:52Z","lastTransitionTime":"2025-11-22T10:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:52 crc kubenswrapper[4772]: I1122 10:39:52.634351 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:52 crc kubenswrapper[4772]: I1122 10:39:52.634430 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:52 crc kubenswrapper[4772]: I1122 10:39:52.634453 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:52 crc kubenswrapper[4772]: I1122 10:39:52.634489 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:52 crc kubenswrapper[4772]: I1122 10:39:52.634510 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:52Z","lastTransitionTime":"2025-11-22T10:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:52 crc kubenswrapper[4772]: I1122 10:39:52.738287 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:52 crc kubenswrapper[4772]: I1122 10:39:52.738360 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:52 crc kubenswrapper[4772]: I1122 10:39:52.738381 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:52 crc kubenswrapper[4772]: I1122 10:39:52.738408 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:52 crc kubenswrapper[4772]: I1122 10:39:52.738430 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:52Z","lastTransitionTime":"2025-11-22T10:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:52 crc kubenswrapper[4772]: I1122 10:39:52.841626 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:52 crc kubenswrapper[4772]: I1122 10:39:52.841702 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:52 crc kubenswrapper[4772]: I1122 10:39:52.841722 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:52 crc kubenswrapper[4772]: I1122 10:39:52.841752 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:52 crc kubenswrapper[4772]: I1122 10:39:52.841772 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:52Z","lastTransitionTime":"2025-11-22T10:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:52 crc kubenswrapper[4772]: I1122 10:39:52.944974 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:52 crc kubenswrapper[4772]: I1122 10:39:52.945062 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:52 crc kubenswrapper[4772]: I1122 10:39:52.945082 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:52 crc kubenswrapper[4772]: I1122 10:39:52.945111 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:52 crc kubenswrapper[4772]: I1122 10:39:52.945133 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:52Z","lastTransitionTime":"2025-11-22T10:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:53 crc kubenswrapper[4772]: I1122 10:39:53.048504 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:53 crc kubenswrapper[4772]: I1122 10:39:53.048557 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:53 crc kubenswrapper[4772]: I1122 10:39:53.048575 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:53 crc kubenswrapper[4772]: I1122 10:39:53.048600 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:53 crc kubenswrapper[4772]: I1122 10:39:53.048620 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:53Z","lastTransitionTime":"2025-11-22T10:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:53 crc kubenswrapper[4772]: I1122 10:39:53.152522 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:53 crc kubenswrapper[4772]: I1122 10:39:53.152581 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:53 crc kubenswrapper[4772]: I1122 10:39:53.152591 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:53 crc kubenswrapper[4772]: I1122 10:39:53.152614 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:53 crc kubenswrapper[4772]: I1122 10:39:53.152626 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:53Z","lastTransitionTime":"2025-11-22T10:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:53 crc kubenswrapper[4772]: I1122 10:39:53.255723 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:53 crc kubenswrapper[4772]: I1122 10:39:53.255782 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:53 crc kubenswrapper[4772]: I1122 10:39:53.255798 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:53 crc kubenswrapper[4772]: I1122 10:39:53.255817 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:53 crc kubenswrapper[4772]: I1122 10:39:53.255830 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:53Z","lastTransitionTime":"2025-11-22T10:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:53 crc kubenswrapper[4772]: I1122 10:39:53.359782 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:53 crc kubenswrapper[4772]: I1122 10:39:53.359872 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:53 crc kubenswrapper[4772]: I1122 10:39:53.359903 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:53 crc kubenswrapper[4772]: I1122 10:39:53.359942 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:53 crc kubenswrapper[4772]: I1122 10:39:53.359971 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:53Z","lastTransitionTime":"2025-11-22T10:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:53 crc kubenswrapper[4772]: I1122 10:39:53.413120 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:39:53 crc kubenswrapper[4772]: I1122 10:39:53.413167 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 10:39:53 crc kubenswrapper[4772]: E1122 10:39:53.413370 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 10:39:53 crc kubenswrapper[4772]: E1122 10:39:53.413568 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 10:39:53 crc kubenswrapper[4772]: I1122 10:39:53.463488 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:53 crc kubenswrapper[4772]: I1122 10:39:53.463541 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:53 crc kubenswrapper[4772]: I1122 10:39:53.463557 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:53 crc kubenswrapper[4772]: I1122 10:39:53.463583 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:53 crc kubenswrapper[4772]: I1122 10:39:53.463598 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:53Z","lastTransitionTime":"2025-11-22T10:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:53 crc kubenswrapper[4772]: I1122 10:39:53.566532 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:53 crc kubenswrapper[4772]: I1122 10:39:53.566575 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:53 crc kubenswrapper[4772]: I1122 10:39:53.566590 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:53 crc kubenswrapper[4772]: I1122 10:39:53.566607 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:53 crc kubenswrapper[4772]: I1122 10:39:53.566618 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:53Z","lastTransitionTime":"2025-11-22T10:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:53 crc kubenswrapper[4772]: I1122 10:39:53.669982 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:53 crc kubenswrapper[4772]: I1122 10:39:53.670020 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:53 crc kubenswrapper[4772]: I1122 10:39:53.670031 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:53 crc kubenswrapper[4772]: I1122 10:39:53.670068 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:53 crc kubenswrapper[4772]: I1122 10:39:53.670080 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:53Z","lastTransitionTime":"2025-11-22T10:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:53 crc kubenswrapper[4772]: I1122 10:39:53.773260 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:53 crc kubenswrapper[4772]: I1122 10:39:53.773304 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:53 crc kubenswrapper[4772]: I1122 10:39:53.773316 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:53 crc kubenswrapper[4772]: I1122 10:39:53.773337 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:53 crc kubenswrapper[4772]: I1122 10:39:53.773351 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:53Z","lastTransitionTime":"2025-11-22T10:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:53 crc kubenswrapper[4772]: I1122 10:39:53.876652 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:53 crc kubenswrapper[4772]: I1122 10:39:53.876723 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:53 crc kubenswrapper[4772]: I1122 10:39:53.876740 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:53 crc kubenswrapper[4772]: I1122 10:39:53.876773 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:53 crc kubenswrapper[4772]: I1122 10:39:53.876792 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:53Z","lastTransitionTime":"2025-11-22T10:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:53 crc kubenswrapper[4772]: I1122 10:39:53.980489 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:53 crc kubenswrapper[4772]: I1122 10:39:53.980554 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:53 crc kubenswrapper[4772]: I1122 10:39:53.980571 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:53 crc kubenswrapper[4772]: I1122 10:39:53.980608 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:53 crc kubenswrapper[4772]: I1122 10:39:53.980625 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:53Z","lastTransitionTime":"2025-11-22T10:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:54 crc kubenswrapper[4772]: I1122 10:39:54.084311 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:54 crc kubenswrapper[4772]: I1122 10:39:54.084400 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:54 crc kubenswrapper[4772]: I1122 10:39:54.084418 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:54 crc kubenswrapper[4772]: I1122 10:39:54.084451 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:54 crc kubenswrapper[4772]: I1122 10:39:54.084473 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:54Z","lastTransitionTime":"2025-11-22T10:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:54 crc kubenswrapper[4772]: I1122 10:39:54.187609 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:54 crc kubenswrapper[4772]: I1122 10:39:54.187669 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:54 crc kubenswrapper[4772]: I1122 10:39:54.187682 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:54 crc kubenswrapper[4772]: I1122 10:39:54.187700 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:54 crc kubenswrapper[4772]: I1122 10:39:54.187711 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:54Z","lastTransitionTime":"2025-11-22T10:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:54 crc kubenswrapper[4772]: I1122 10:39:54.291560 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:54 crc kubenswrapper[4772]: I1122 10:39:54.291636 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:54 crc kubenswrapper[4772]: I1122 10:39:54.291651 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:54 crc kubenswrapper[4772]: I1122 10:39:54.291675 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:54 crc kubenswrapper[4772]: I1122 10:39:54.291721 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:54Z","lastTransitionTime":"2025-11-22T10:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:54 crc kubenswrapper[4772]: I1122 10:39:54.395222 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:54 crc kubenswrapper[4772]: I1122 10:39:54.395279 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:54 crc kubenswrapper[4772]: I1122 10:39:54.395291 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:54 crc kubenswrapper[4772]: I1122 10:39:54.395314 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:54 crc kubenswrapper[4772]: I1122 10:39:54.395330 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:54Z","lastTransitionTime":"2025-11-22T10:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:54 crc kubenswrapper[4772]: I1122 10:39:54.412906 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvsrl" Nov 22 10:39:54 crc kubenswrapper[4772]: I1122 10:39:54.413251 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 10:39:54 crc kubenswrapper[4772]: E1122 10:39:54.413432 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvsrl" podUID="c89edce7-fac8-4954-b2e9-420f0f2de6a8" Nov 22 10:39:54 crc kubenswrapper[4772]: E1122 10:39:54.413828 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 10:39:54 crc kubenswrapper[4772]: I1122 10:39:54.498778 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:54 crc kubenswrapper[4772]: I1122 10:39:54.498827 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:54 crc kubenswrapper[4772]: I1122 10:39:54.498837 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:54 crc kubenswrapper[4772]: I1122 10:39:54.498856 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:54 crc kubenswrapper[4772]: I1122 10:39:54.498868 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:54Z","lastTransitionTime":"2025-11-22T10:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:54 crc kubenswrapper[4772]: I1122 10:39:54.603147 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:54 crc kubenswrapper[4772]: I1122 10:39:54.603195 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:54 crc kubenswrapper[4772]: I1122 10:39:54.603207 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:54 crc kubenswrapper[4772]: I1122 10:39:54.603226 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:54 crc kubenswrapper[4772]: I1122 10:39:54.603236 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:54Z","lastTransitionTime":"2025-11-22T10:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:54 crc kubenswrapper[4772]: I1122 10:39:54.707161 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:54 crc kubenswrapper[4772]: I1122 10:39:54.707250 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:54 crc kubenswrapper[4772]: I1122 10:39:54.707273 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:54 crc kubenswrapper[4772]: I1122 10:39:54.707307 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:54 crc kubenswrapper[4772]: I1122 10:39:54.707331 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:54Z","lastTransitionTime":"2025-11-22T10:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:54 crc kubenswrapper[4772]: I1122 10:39:54.815026 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:54 crc kubenswrapper[4772]: I1122 10:39:54.815097 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:54 crc kubenswrapper[4772]: I1122 10:39:54.815115 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:54 crc kubenswrapper[4772]: I1122 10:39:54.815133 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:54 crc kubenswrapper[4772]: I1122 10:39:54.815146 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:54Z","lastTransitionTime":"2025-11-22T10:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:54 crc kubenswrapper[4772]: I1122 10:39:54.918747 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:54 crc kubenswrapper[4772]: I1122 10:39:54.918819 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:54 crc kubenswrapper[4772]: I1122 10:39:54.918836 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:54 crc kubenswrapper[4772]: I1122 10:39:54.918866 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:54 crc kubenswrapper[4772]: I1122 10:39:54.918884 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:54Z","lastTransitionTime":"2025-11-22T10:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:55 crc kubenswrapper[4772]: I1122 10:39:55.020992 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:55 crc kubenswrapper[4772]: I1122 10:39:55.021332 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:55 crc kubenswrapper[4772]: I1122 10:39:55.021439 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:55 crc kubenswrapper[4772]: I1122 10:39:55.021541 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:55 crc kubenswrapper[4772]: I1122 10:39:55.021630 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:55Z","lastTransitionTime":"2025-11-22T10:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:55 crc kubenswrapper[4772]: I1122 10:39:55.124237 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:55 crc kubenswrapper[4772]: I1122 10:39:55.124461 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:55 crc kubenswrapper[4772]: I1122 10:39:55.124563 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:55 crc kubenswrapper[4772]: I1122 10:39:55.124641 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:55 crc kubenswrapper[4772]: I1122 10:39:55.124713 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:55Z","lastTransitionTime":"2025-11-22T10:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:55 crc kubenswrapper[4772]: I1122 10:39:55.227198 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:55 crc kubenswrapper[4772]: I1122 10:39:55.227293 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:55 crc kubenswrapper[4772]: I1122 10:39:55.227310 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:55 crc kubenswrapper[4772]: I1122 10:39:55.227331 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:55 crc kubenswrapper[4772]: I1122 10:39:55.227348 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:55Z","lastTransitionTime":"2025-11-22T10:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:55 crc kubenswrapper[4772]: I1122 10:39:55.330276 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:55 crc kubenswrapper[4772]: I1122 10:39:55.330322 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:55 crc kubenswrapper[4772]: I1122 10:39:55.330335 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:55 crc kubenswrapper[4772]: I1122 10:39:55.330354 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:55 crc kubenswrapper[4772]: I1122 10:39:55.330368 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:55Z","lastTransitionTime":"2025-11-22T10:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:55 crc kubenswrapper[4772]: I1122 10:39:55.413425 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 10:39:55 crc kubenswrapper[4772]: E1122 10:39:55.413544 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 10:39:55 crc kubenswrapper[4772]: I1122 10:39:55.413425 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:39:55 crc kubenswrapper[4772]: E1122 10:39:55.413616 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 10:39:55 crc kubenswrapper[4772]: I1122 10:39:55.432560 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:55 crc kubenswrapper[4772]: I1122 10:39:55.432617 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:55 crc kubenswrapper[4772]: I1122 10:39:55.432627 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:55 crc kubenswrapper[4772]: I1122 10:39:55.432646 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:55 crc kubenswrapper[4772]: I1122 10:39:55.432660 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:55Z","lastTransitionTime":"2025-11-22T10:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:55 crc kubenswrapper[4772]: I1122 10:39:55.536459 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:55 crc kubenswrapper[4772]: I1122 10:39:55.536533 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:55 crc kubenswrapper[4772]: I1122 10:39:55.536550 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:55 crc kubenswrapper[4772]: I1122 10:39:55.536580 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:55 crc kubenswrapper[4772]: I1122 10:39:55.536626 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:55Z","lastTransitionTime":"2025-11-22T10:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:55 crc kubenswrapper[4772]: I1122 10:39:55.638620 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:55 crc kubenswrapper[4772]: I1122 10:39:55.638659 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:55 crc kubenswrapper[4772]: I1122 10:39:55.638669 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:55 crc kubenswrapper[4772]: I1122 10:39:55.638685 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:55 crc kubenswrapper[4772]: I1122 10:39:55.638694 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:55Z","lastTransitionTime":"2025-11-22T10:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:55 crc kubenswrapper[4772]: I1122 10:39:55.740931 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:55 crc kubenswrapper[4772]: I1122 10:39:55.740974 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:55 crc kubenswrapper[4772]: I1122 10:39:55.740982 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:55 crc kubenswrapper[4772]: I1122 10:39:55.740997 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:55 crc kubenswrapper[4772]: I1122 10:39:55.741008 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:55Z","lastTransitionTime":"2025-11-22T10:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:55 crc kubenswrapper[4772]: I1122 10:39:55.843581 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:55 crc kubenswrapper[4772]: I1122 10:39:55.843625 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:55 crc kubenswrapper[4772]: I1122 10:39:55.843636 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:55 crc kubenswrapper[4772]: I1122 10:39:55.843653 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:55 crc kubenswrapper[4772]: I1122 10:39:55.843663 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:55Z","lastTransitionTime":"2025-11-22T10:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:55 crc kubenswrapper[4772]: I1122 10:39:55.946450 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:55 crc kubenswrapper[4772]: I1122 10:39:55.946489 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:55 crc kubenswrapper[4772]: I1122 10:39:55.946498 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:55 crc kubenswrapper[4772]: I1122 10:39:55.946516 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:55 crc kubenswrapper[4772]: I1122 10:39:55.946525 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:55Z","lastTransitionTime":"2025-11-22T10:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:56 crc kubenswrapper[4772]: I1122 10:39:56.049184 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:56 crc kubenswrapper[4772]: I1122 10:39:56.049268 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:56 crc kubenswrapper[4772]: I1122 10:39:56.049290 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:56 crc kubenswrapper[4772]: I1122 10:39:56.049324 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:56 crc kubenswrapper[4772]: I1122 10:39:56.049347 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:56Z","lastTransitionTime":"2025-11-22T10:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:56 crc kubenswrapper[4772]: I1122 10:39:56.152334 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:56 crc kubenswrapper[4772]: I1122 10:39:56.152377 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:56 crc kubenswrapper[4772]: I1122 10:39:56.152388 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:56 crc kubenswrapper[4772]: I1122 10:39:56.152404 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:56 crc kubenswrapper[4772]: I1122 10:39:56.152413 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:56Z","lastTransitionTime":"2025-11-22T10:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:56 crc kubenswrapper[4772]: I1122 10:39:56.255171 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:56 crc kubenswrapper[4772]: I1122 10:39:56.255215 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:56 crc kubenswrapper[4772]: I1122 10:39:56.255224 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:56 crc kubenswrapper[4772]: I1122 10:39:56.255238 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:56 crc kubenswrapper[4772]: I1122 10:39:56.255248 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:56Z","lastTransitionTime":"2025-11-22T10:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:56 crc kubenswrapper[4772]: I1122 10:39:56.358182 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:56 crc kubenswrapper[4772]: I1122 10:39:56.358229 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:56 crc kubenswrapper[4772]: I1122 10:39:56.358242 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:56 crc kubenswrapper[4772]: I1122 10:39:56.358258 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:56 crc kubenswrapper[4772]: I1122 10:39:56.358271 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:56Z","lastTransitionTime":"2025-11-22T10:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:56 crc kubenswrapper[4772]: I1122 10:39:56.413132 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 10:39:56 crc kubenswrapper[4772]: I1122 10:39:56.413176 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvsrl" Nov 22 10:39:56 crc kubenswrapper[4772]: E1122 10:39:56.413256 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 10:39:56 crc kubenswrapper[4772]: E1122 10:39:56.413411 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvsrl" podUID="c89edce7-fac8-4954-b2e9-420f0f2de6a8" Nov 22 10:39:56 crc kubenswrapper[4772]: I1122 10:39:56.461242 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:56 crc kubenswrapper[4772]: I1122 10:39:56.461346 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:56 crc kubenswrapper[4772]: I1122 10:39:56.461367 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:56 crc kubenswrapper[4772]: I1122 10:39:56.461393 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:56 crc kubenswrapper[4772]: I1122 10:39:56.461412 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:56Z","lastTransitionTime":"2025-11-22T10:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:56 crc kubenswrapper[4772]: I1122 10:39:56.564411 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:56 crc kubenswrapper[4772]: I1122 10:39:56.564489 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:56 crc kubenswrapper[4772]: I1122 10:39:56.564513 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:56 crc kubenswrapper[4772]: I1122 10:39:56.564546 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:56 crc kubenswrapper[4772]: I1122 10:39:56.564569 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:56Z","lastTransitionTime":"2025-11-22T10:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:56 crc kubenswrapper[4772]: I1122 10:39:56.667176 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:56 crc kubenswrapper[4772]: I1122 10:39:56.667235 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:56 crc kubenswrapper[4772]: I1122 10:39:56.667253 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:56 crc kubenswrapper[4772]: I1122 10:39:56.667278 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:56 crc kubenswrapper[4772]: I1122 10:39:56.667296 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:56Z","lastTransitionTime":"2025-11-22T10:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:56 crc kubenswrapper[4772]: I1122 10:39:56.770735 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:56 crc kubenswrapper[4772]: I1122 10:39:56.770810 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:56 crc kubenswrapper[4772]: I1122 10:39:56.770830 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:56 crc kubenswrapper[4772]: I1122 10:39:56.770856 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:56 crc kubenswrapper[4772]: I1122 10:39:56.770873 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:56Z","lastTransitionTime":"2025-11-22T10:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:56 crc kubenswrapper[4772]: I1122 10:39:56.874494 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:56 crc kubenswrapper[4772]: I1122 10:39:56.874537 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:56 crc kubenswrapper[4772]: I1122 10:39:56.874546 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:56 crc kubenswrapper[4772]: I1122 10:39:56.874564 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:56 crc kubenswrapper[4772]: I1122 10:39:56.874576 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:56Z","lastTransitionTime":"2025-11-22T10:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:56 crc kubenswrapper[4772]: I1122 10:39:56.977223 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:56 crc kubenswrapper[4772]: I1122 10:39:56.977286 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:56 crc kubenswrapper[4772]: I1122 10:39:56.977303 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:56 crc kubenswrapper[4772]: I1122 10:39:56.977337 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:56 crc kubenswrapper[4772]: I1122 10:39:56.977355 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:56Z","lastTransitionTime":"2025-11-22T10:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:57 crc kubenswrapper[4772]: I1122 10:39:57.079777 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:57 crc kubenswrapper[4772]: I1122 10:39:57.079839 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:57 crc kubenswrapper[4772]: I1122 10:39:57.079856 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:57 crc kubenswrapper[4772]: I1122 10:39:57.079884 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:57 crc kubenswrapper[4772]: I1122 10:39:57.079904 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:57Z","lastTransitionTime":"2025-11-22T10:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:57 crc kubenswrapper[4772]: I1122 10:39:57.182717 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:57 crc kubenswrapper[4772]: I1122 10:39:57.183024 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:57 crc kubenswrapper[4772]: I1122 10:39:57.183145 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:57 crc kubenswrapper[4772]: I1122 10:39:57.183254 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:57 crc kubenswrapper[4772]: I1122 10:39:57.183362 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:57Z","lastTransitionTime":"2025-11-22T10:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:57 crc kubenswrapper[4772]: I1122 10:39:57.286532 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:57 crc kubenswrapper[4772]: I1122 10:39:57.286583 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:57 crc kubenswrapper[4772]: I1122 10:39:57.286597 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:57 crc kubenswrapper[4772]: I1122 10:39:57.286619 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:57 crc kubenswrapper[4772]: I1122 10:39:57.286636 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:57Z","lastTransitionTime":"2025-11-22T10:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:57 crc kubenswrapper[4772]: I1122 10:39:57.389790 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:57 crc kubenswrapper[4772]: I1122 10:39:57.389840 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:57 crc kubenswrapper[4772]: I1122 10:39:57.389863 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:57 crc kubenswrapper[4772]: I1122 10:39:57.389892 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:57 crc kubenswrapper[4772]: I1122 10:39:57.389907 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:57Z","lastTransitionTime":"2025-11-22T10:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:57 crc kubenswrapper[4772]: I1122 10:39:57.413307 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 10:39:57 crc kubenswrapper[4772]: E1122 10:39:57.413491 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 10:39:57 crc kubenswrapper[4772]: I1122 10:39:57.413673 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:39:57 crc kubenswrapper[4772]: E1122 10:39:57.413905 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 10:39:57 crc kubenswrapper[4772]: I1122 10:39:57.492949 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:57 crc kubenswrapper[4772]: I1122 10:39:57.492993 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:57 crc kubenswrapper[4772]: I1122 10:39:57.493002 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:57 crc kubenswrapper[4772]: I1122 10:39:57.493017 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:57 crc kubenswrapper[4772]: I1122 10:39:57.493028 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:57Z","lastTransitionTime":"2025-11-22T10:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:57 crc kubenswrapper[4772]: I1122 10:39:57.595703 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:57 crc kubenswrapper[4772]: I1122 10:39:57.595743 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:57 crc kubenswrapper[4772]: I1122 10:39:57.595756 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:57 crc kubenswrapper[4772]: I1122 10:39:57.595772 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:57 crc kubenswrapper[4772]: I1122 10:39:57.595784 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:57Z","lastTransitionTime":"2025-11-22T10:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:57 crc kubenswrapper[4772]: I1122 10:39:57.699563 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:57 crc kubenswrapper[4772]: I1122 10:39:57.699640 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:57 crc kubenswrapper[4772]: I1122 10:39:57.699681 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:57 crc kubenswrapper[4772]: I1122 10:39:57.699720 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:57 crc kubenswrapper[4772]: I1122 10:39:57.699764 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:57Z","lastTransitionTime":"2025-11-22T10:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:57 crc kubenswrapper[4772]: I1122 10:39:57.802421 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:57 crc kubenswrapper[4772]: I1122 10:39:57.802498 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:57 crc kubenswrapper[4772]: I1122 10:39:57.802535 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:57 crc kubenswrapper[4772]: I1122 10:39:57.802556 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:57 crc kubenswrapper[4772]: I1122 10:39:57.802569 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:57Z","lastTransitionTime":"2025-11-22T10:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:57 crc kubenswrapper[4772]: I1122 10:39:57.905796 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:57 crc kubenswrapper[4772]: I1122 10:39:57.905875 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:57 crc kubenswrapper[4772]: I1122 10:39:57.905892 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:57 crc kubenswrapper[4772]: I1122 10:39:57.905912 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:57 crc kubenswrapper[4772]: I1122 10:39:57.905925 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:57Z","lastTransitionTime":"2025-11-22T10:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:58 crc kubenswrapper[4772]: I1122 10:39:58.008784 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:58 crc kubenswrapper[4772]: I1122 10:39:58.008875 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:58 crc kubenswrapper[4772]: I1122 10:39:58.008897 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:58 crc kubenswrapper[4772]: I1122 10:39:58.008924 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:58 crc kubenswrapper[4772]: I1122 10:39:58.008941 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:58Z","lastTransitionTime":"2025-11-22T10:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:58 crc kubenswrapper[4772]: I1122 10:39:58.113007 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:58 crc kubenswrapper[4772]: I1122 10:39:58.113071 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:58 crc kubenswrapper[4772]: I1122 10:39:58.113089 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:58 crc kubenswrapper[4772]: I1122 10:39:58.113107 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:58 crc kubenswrapper[4772]: I1122 10:39:58.113119 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:58Z","lastTransitionTime":"2025-11-22T10:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:58 crc kubenswrapper[4772]: I1122 10:39:58.216804 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:58 crc kubenswrapper[4772]: I1122 10:39:58.216953 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:58 crc kubenswrapper[4772]: I1122 10:39:58.216967 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:58 crc kubenswrapper[4772]: I1122 10:39:58.216990 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:58 crc kubenswrapper[4772]: I1122 10:39:58.217007 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:58Z","lastTransitionTime":"2025-11-22T10:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:58 crc kubenswrapper[4772]: I1122 10:39:58.320569 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:58 crc kubenswrapper[4772]: I1122 10:39:58.320627 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:58 crc kubenswrapper[4772]: I1122 10:39:58.320641 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:58 crc kubenswrapper[4772]: I1122 10:39:58.320665 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:58 crc kubenswrapper[4772]: I1122 10:39:58.320678 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:58Z","lastTransitionTime":"2025-11-22T10:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:58 crc kubenswrapper[4772]: I1122 10:39:58.413766 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 10:39:58 crc kubenswrapper[4772]: I1122 10:39:58.413847 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvsrl" Nov 22 10:39:58 crc kubenswrapper[4772]: E1122 10:39:58.414010 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 10:39:58 crc kubenswrapper[4772]: E1122 10:39:58.414448 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fvsrl" podUID="c89edce7-fac8-4954-b2e9-420f0f2de6a8" Nov 22 10:39:58 crc kubenswrapper[4772]: I1122 10:39:58.424820 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:58 crc kubenswrapper[4772]: I1122 10:39:58.424854 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:58 crc kubenswrapper[4772]: I1122 10:39:58.424865 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:58 crc kubenswrapper[4772]: I1122 10:39:58.424884 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:58 crc kubenswrapper[4772]: I1122 10:39:58.424900 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:58Z","lastTransitionTime":"2025-11-22T10:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:58 crc kubenswrapper[4772]: I1122 10:39:58.528671 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:58 crc kubenswrapper[4772]: I1122 10:39:58.528729 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:58 crc kubenswrapper[4772]: I1122 10:39:58.528742 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:58 crc kubenswrapper[4772]: I1122 10:39:58.528765 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:58 crc kubenswrapper[4772]: I1122 10:39:58.528779 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:58Z","lastTransitionTime":"2025-11-22T10:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:58 crc kubenswrapper[4772]: I1122 10:39:58.632657 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:58 crc kubenswrapper[4772]: I1122 10:39:58.632747 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:58 crc kubenswrapper[4772]: I1122 10:39:58.632772 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:58 crc kubenswrapper[4772]: I1122 10:39:58.632814 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:58 crc kubenswrapper[4772]: I1122 10:39:58.632841 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:58Z","lastTransitionTime":"2025-11-22T10:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:58 crc kubenswrapper[4772]: I1122 10:39:58.736985 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:58 crc kubenswrapper[4772]: I1122 10:39:58.737037 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:58 crc kubenswrapper[4772]: I1122 10:39:58.737082 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:58 crc kubenswrapper[4772]: I1122 10:39:58.737102 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:58 crc kubenswrapper[4772]: I1122 10:39:58.737118 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:58Z","lastTransitionTime":"2025-11-22T10:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:58 crc kubenswrapper[4772]: I1122 10:39:58.840110 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:58 crc kubenswrapper[4772]: I1122 10:39:58.840154 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:58 crc kubenswrapper[4772]: I1122 10:39:58.840168 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:58 crc kubenswrapper[4772]: I1122 10:39:58.840189 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:58 crc kubenswrapper[4772]: I1122 10:39:58.840206 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:58Z","lastTransitionTime":"2025-11-22T10:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:58 crc kubenswrapper[4772]: I1122 10:39:58.943399 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:58 crc kubenswrapper[4772]: I1122 10:39:58.943460 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:58 crc kubenswrapper[4772]: I1122 10:39:58.943478 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:58 crc kubenswrapper[4772]: I1122 10:39:58.943504 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:58 crc kubenswrapper[4772]: I1122 10:39:58.943523 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:58Z","lastTransitionTime":"2025-11-22T10:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:59 crc kubenswrapper[4772]: I1122 10:39:59.045974 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:59 crc kubenswrapper[4772]: I1122 10:39:59.046041 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:59 crc kubenswrapper[4772]: I1122 10:39:59.046076 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:59 crc kubenswrapper[4772]: I1122 10:39:59.046100 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:59 crc kubenswrapper[4772]: I1122 10:39:59.046115 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:59Z","lastTransitionTime":"2025-11-22T10:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:59 crc kubenswrapper[4772]: I1122 10:39:59.150024 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:59 crc kubenswrapper[4772]: I1122 10:39:59.150106 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:59 crc kubenswrapper[4772]: I1122 10:39:59.150119 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:59 crc kubenswrapper[4772]: I1122 10:39:59.150141 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:59 crc kubenswrapper[4772]: I1122 10:39:59.150158 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:59Z","lastTransitionTime":"2025-11-22T10:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:59 crc kubenswrapper[4772]: I1122 10:39:59.253797 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:59 crc kubenswrapper[4772]: I1122 10:39:59.253857 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:59 crc kubenswrapper[4772]: I1122 10:39:59.253871 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:59 crc kubenswrapper[4772]: I1122 10:39:59.253893 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:59 crc kubenswrapper[4772]: I1122 10:39:59.253907 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:59Z","lastTransitionTime":"2025-11-22T10:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:59 crc kubenswrapper[4772]: I1122 10:39:59.357800 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:59 crc kubenswrapper[4772]: I1122 10:39:59.357864 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:59 crc kubenswrapper[4772]: I1122 10:39:59.357881 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:59 crc kubenswrapper[4772]: I1122 10:39:59.357907 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:59 crc kubenswrapper[4772]: I1122 10:39:59.357926 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:59Z","lastTransitionTime":"2025-11-22T10:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:59 crc kubenswrapper[4772]: I1122 10:39:59.412891 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 10:39:59 crc kubenswrapper[4772]: I1122 10:39:59.413164 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:39:59 crc kubenswrapper[4772]: E1122 10:39:59.413508 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 10:39:59 crc kubenswrapper[4772]: E1122 10:39:59.413694 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 10:39:59 crc kubenswrapper[4772]: I1122 10:39:59.461004 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:59 crc kubenswrapper[4772]: I1122 10:39:59.461128 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:59 crc kubenswrapper[4772]: I1122 10:39:59.461147 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:59 crc kubenswrapper[4772]: I1122 10:39:59.461176 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:59 crc kubenswrapper[4772]: I1122 10:39:59.461196 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:59Z","lastTransitionTime":"2025-11-22T10:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:59 crc kubenswrapper[4772]: I1122 10:39:59.564421 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:59 crc kubenswrapper[4772]: I1122 10:39:59.564849 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:59 crc kubenswrapper[4772]: I1122 10:39:59.565159 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:59 crc kubenswrapper[4772]: I1122 10:39:59.565387 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:59 crc kubenswrapper[4772]: I1122 10:39:59.565582 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:59Z","lastTransitionTime":"2025-11-22T10:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:59 crc kubenswrapper[4772]: I1122 10:39:59.669411 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:59 crc kubenswrapper[4772]: I1122 10:39:59.669451 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:59 crc kubenswrapper[4772]: I1122 10:39:59.669460 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:59 crc kubenswrapper[4772]: I1122 10:39:59.669480 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:59 crc kubenswrapper[4772]: I1122 10:39:59.669491 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:59Z","lastTransitionTime":"2025-11-22T10:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:59 crc kubenswrapper[4772]: I1122 10:39:59.773212 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:59 crc kubenswrapper[4772]: I1122 10:39:59.774141 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:59 crc kubenswrapper[4772]: I1122 10:39:59.774415 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:59 crc kubenswrapper[4772]: I1122 10:39:59.774448 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:59 crc kubenswrapper[4772]: I1122 10:39:59.774469 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:59Z","lastTransitionTime":"2025-11-22T10:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:39:59 crc kubenswrapper[4772]: I1122 10:39:59.878440 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:59 crc kubenswrapper[4772]: I1122 10:39:59.878510 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:59 crc kubenswrapper[4772]: I1122 10:39:59.878538 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:59 crc kubenswrapper[4772]: I1122 10:39:59.878570 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:59 crc kubenswrapper[4772]: I1122 10:39:59.878591 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:59Z","lastTransitionTime":"2025-11-22T10:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:39:59 crc kubenswrapper[4772]: I1122 10:39:59.990234 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:39:59 crc kubenswrapper[4772]: I1122 10:39:59.990310 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:39:59 crc kubenswrapper[4772]: I1122 10:39:59.990516 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:39:59 crc kubenswrapper[4772]: I1122 10:39:59.990544 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:39:59 crc kubenswrapper[4772]: I1122 10:39:59.990565 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:39:59Z","lastTransitionTime":"2025-11-22T10:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:40:00 crc kubenswrapper[4772]: I1122 10:40:00.093098 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:40:00 crc kubenswrapper[4772]: I1122 10:40:00.093562 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:40:00 crc kubenswrapper[4772]: I1122 10:40:00.093750 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:40:00 crc kubenswrapper[4772]: I1122 10:40:00.093895 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:40:00 crc kubenswrapper[4772]: I1122 10:40:00.094166 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:40:00Z","lastTransitionTime":"2025-11-22T10:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:40:00 crc kubenswrapper[4772]: I1122 10:40:00.197877 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:40:00 crc kubenswrapper[4772]: I1122 10:40:00.198360 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:40:00 crc kubenswrapper[4772]: I1122 10:40:00.198653 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:40:00 crc kubenswrapper[4772]: I1122 10:40:00.198866 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:40:00 crc kubenswrapper[4772]: I1122 10:40:00.199076 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:40:00Z","lastTransitionTime":"2025-11-22T10:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:40:00 crc kubenswrapper[4772]: I1122 10:40:00.302796 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:40:00 crc kubenswrapper[4772]: I1122 10:40:00.302865 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:40:00 crc kubenswrapper[4772]: I1122 10:40:00.302908 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:40:00 crc kubenswrapper[4772]: I1122 10:40:00.302944 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:40:00 crc kubenswrapper[4772]: I1122 10:40:00.302967 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:40:00Z","lastTransitionTime":"2025-11-22T10:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:40:00 crc kubenswrapper[4772]: I1122 10:40:00.406265 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:40:00 crc kubenswrapper[4772]: I1122 10:40:00.406325 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:40:00 crc kubenswrapper[4772]: I1122 10:40:00.406339 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:40:00 crc kubenswrapper[4772]: I1122 10:40:00.406358 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:40:00 crc kubenswrapper[4772]: I1122 10:40:00.406369 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:40:00Z","lastTransitionTime":"2025-11-22T10:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:40:00 crc kubenswrapper[4772]: I1122 10:40:00.413540 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 10:40:00 crc kubenswrapper[4772]: E1122 10:40:00.413651 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 10:40:00 crc kubenswrapper[4772]: I1122 10:40:00.413559 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvsrl" Nov 22 10:40:00 crc kubenswrapper[4772]: E1122 10:40:00.414310 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvsrl" podUID="c89edce7-fac8-4954-b2e9-420f0f2de6a8" Nov 22 10:40:00 crc kubenswrapper[4772]: I1122 10:40:00.510282 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:40:00 crc kubenswrapper[4772]: I1122 10:40:00.510742 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:40:00 crc kubenswrapper[4772]: I1122 10:40:00.510903 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:40:00 crc kubenswrapper[4772]: I1122 10:40:00.511084 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:40:00 crc kubenswrapper[4772]: I1122 10:40:00.511223 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:40:00Z","lastTransitionTime":"2025-11-22T10:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:40:00 crc kubenswrapper[4772]: I1122 10:40:00.613868 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:40:00 crc kubenswrapper[4772]: I1122 10:40:00.613932 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:40:00 crc kubenswrapper[4772]: I1122 10:40:00.613948 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:40:00 crc kubenswrapper[4772]: I1122 10:40:00.613976 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:40:00 crc kubenswrapper[4772]: I1122 10:40:00.613991 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:40:00Z","lastTransitionTime":"2025-11-22T10:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:40:00 crc kubenswrapper[4772]: I1122 10:40:00.716661 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:40:00 crc kubenswrapper[4772]: I1122 10:40:00.716702 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:40:00 crc kubenswrapper[4772]: I1122 10:40:00.716714 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:40:00 crc kubenswrapper[4772]: I1122 10:40:00.716734 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:40:00 crc kubenswrapper[4772]: I1122 10:40:00.716752 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:40:00Z","lastTransitionTime":"2025-11-22T10:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:40:00 crc kubenswrapper[4772]: I1122 10:40:00.819929 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:40:00 crc kubenswrapper[4772]: I1122 10:40:00.819999 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:40:00 crc kubenswrapper[4772]: I1122 10:40:00.820026 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:40:00 crc kubenswrapper[4772]: I1122 10:40:00.820068 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:40:00 crc kubenswrapper[4772]: I1122 10:40:00.820083 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:40:00Z","lastTransitionTime":"2025-11-22T10:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:40:00 crc kubenswrapper[4772]: I1122 10:40:00.922681 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:40:00 crc kubenswrapper[4772]: I1122 10:40:00.922718 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:40:00 crc kubenswrapper[4772]: I1122 10:40:00.922729 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:40:00 crc kubenswrapper[4772]: I1122 10:40:00.922743 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:40:00 crc kubenswrapper[4772]: I1122 10:40:00.922753 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:40:00Z","lastTransitionTime":"2025-11-22T10:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:40:01 crc kubenswrapper[4772]: I1122 10:40:01.025912 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:40:01 crc kubenswrapper[4772]: I1122 10:40:01.026422 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:40:01 crc kubenswrapper[4772]: I1122 10:40:01.026634 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:40:01 crc kubenswrapper[4772]: I1122 10:40:01.026775 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:40:01 crc kubenswrapper[4772]: I1122 10:40:01.026925 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:40:01Z","lastTransitionTime":"2025-11-22T10:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:40:01 crc kubenswrapper[4772]: I1122 10:40:01.130182 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:40:01 crc kubenswrapper[4772]: I1122 10:40:01.130257 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:40:01 crc kubenswrapper[4772]: I1122 10:40:01.130267 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:40:01 crc kubenswrapper[4772]: I1122 10:40:01.130283 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:40:01 crc kubenswrapper[4772]: I1122 10:40:01.130297 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:40:01Z","lastTransitionTime":"2025-11-22T10:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:40:01 crc kubenswrapper[4772]: I1122 10:40:01.233494 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:40:01 crc kubenswrapper[4772]: I1122 10:40:01.233557 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:40:01 crc kubenswrapper[4772]: I1122 10:40:01.233571 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:40:01 crc kubenswrapper[4772]: I1122 10:40:01.233597 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:40:01 crc kubenswrapper[4772]: I1122 10:40:01.233613 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:40:01Z","lastTransitionTime":"2025-11-22T10:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 10:40:01 crc kubenswrapper[4772]: I1122 10:40:01.336898 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:40:01 crc kubenswrapper[4772]: I1122 10:40:01.336986 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:40:01 crc kubenswrapper[4772]: I1122 10:40:01.337005 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:40:01 crc kubenswrapper[4772]: I1122 10:40:01.337037 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:40:01 crc kubenswrapper[4772]: I1122 10:40:01.337091 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:40:01Z","lastTransitionTime":"2025-11-22T10:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:40:01 crc kubenswrapper[4772]: I1122 10:40:01.412601 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 10:40:01 crc kubenswrapper[4772]: E1122 10:40:01.412817 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 10:40:01 crc kubenswrapper[4772]: I1122 10:40:01.413200 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:40:01 crc kubenswrapper[4772]: E1122 10:40:01.413392 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 10:40:01 crc kubenswrapper[4772]: E1122 10:40:01.437944 4772 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Nov 22 10:40:01 crc kubenswrapper[4772]: I1122 10:40:01.498306 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=19.498281576 podStartE2EDuration="19.498281576s" podCreationTimestamp="2025-11-22 10:39:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:40:01.497334401 +0000 UTC m=+121.736778985" watchObservedRunningTime="2025-11-22 10:40:01.498281576 +0000 UTC m=+121.737726070" Nov 22 10:40:01 crc kubenswrapper[4772]: I1122 10:40:01.511673 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=34.511649396 podStartE2EDuration="34.511649396s" podCreationTimestamp="2025-11-22 10:39:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:40:01.511470501 +0000 UTC m=+121.750915005" watchObservedRunningTime="2025-11-22 10:40:01.511649396 +0000 UTC m=+121.751093890" Nov 22 10:40:01 crc kubenswrapper[4772]: I1122 10:40:01.544003 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-n8qfx" podStartSLOduration=90.543979163 podStartE2EDuration="1m30.543979163s" podCreationTimestamp="2025-11-22 10:38:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:40:01.543757377 +0000 UTC m=+121.783201911" watchObservedRunningTime="2025-11-22 10:40:01.543979163 +0000 UTC m=+121.783423657" Nov 22 10:40:01 crc kubenswrapper[4772]: E1122 10:40:01.576852 4772 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Nov 22 10:40:01 crc kubenswrapper[4772]: I1122 10:40:01.615028 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-mbpk7" podStartSLOduration=91.615000974 podStartE2EDuration="1m31.615000974s" podCreationTimestamp="2025-11-22 10:38:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:40:01.614508191 +0000 UTC m=+121.853952685" watchObservedRunningTime="2025-11-22 10:40:01.615000974 +0000 UTC m=+121.854445468" Nov 22 10:40:01 crc kubenswrapper[4772]: I1122 10:40:01.670293 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-9qv88" podStartSLOduration=93.670260851 podStartE2EDuration="1m33.670260851s" podCreationTimestamp="2025-11-22 10:38:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:40:01.669205833 +0000 UTC m=+121.908650327" watchObservedRunningTime="2025-11-22 10:40:01.670260851 +0000 UTC m=+121.909705365" Nov 22 10:40:01 crc kubenswrapper[4772]: I1122 10:40:01.700206 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=86.700174415 podStartE2EDuration="1m26.700174415s" podCreationTimestamp="2025-11-22 10:38:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:40:01.699895578 +0000 UTC m=+121.939340082" watchObservedRunningTime="2025-11-22 10:40:01.700174415 +0000 UTC m=+121.939618909" Nov 22 10:40:01 crc kubenswrapper[4772]: I1122 10:40:01.700618 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-s4mvm" podStartSLOduration=91.700611786 podStartE2EDuration="1m31.700611786s" podCreationTimestamp="2025-11-22 10:38:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:40:01.684077413 +0000 UTC m=+121.923521907" watchObservedRunningTime="2025-11-22 10:40:01.700611786 +0000 UTC m=+121.940056280" Nov 22 10:40:01 crc kubenswrapper[4772]: I1122 10:40:01.737984 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=92.737954465 podStartE2EDuration="1m32.737954465s" podCreationTimestamp="2025-11-22 10:38:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:40:01.722169841 +0000 UTC m=+121.961614335" watchObservedRunningTime="2025-11-22 10:40:01.737954465 +0000 UTC m=+121.977398979" Nov 22 10:40:01 crc kubenswrapper[4772]: I1122 10:40:01.738281 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=71.738274273 podStartE2EDuration="1m11.738274273s" podCreationTimestamp="2025-11-22 10:38:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:40:01.737081432 +0000 UTC m=+121.976525926" watchObservedRunningTime="2025-11-22 10:40:01.738274273 +0000 UTC m=+121.977718777" Nov 22 10:40:01 crc kubenswrapper[4772]: I1122 10:40:01.765748 4772 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podStartSLOduration=91.765719432 podStartE2EDuration="1m31.765719432s" podCreationTimestamp="2025-11-22 10:38:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:40:01.765029834 +0000 UTC m=+122.004474348" watchObservedRunningTime="2025-11-22 10:40:01.765719432 +0000 UTC m=+122.005163936" Nov 22 10:40:01 crc kubenswrapper[4772]: I1122 10:40:01.789105 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-z6xtb" podStartSLOduration=91.789078454 podStartE2EDuration="1m31.789078454s" podCreationTimestamp="2025-11-22 10:38:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:40:01.788266123 +0000 UTC m=+122.027710617" watchObservedRunningTime="2025-11-22 10:40:01.789078454 +0000 UTC m=+122.028522958" Nov 22 10:40:01 crc kubenswrapper[4772]: I1122 10:40:01.960706 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 10:40:01 crc kubenswrapper[4772]: I1122 10:40:01.960748 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 10:40:01 crc kubenswrapper[4772]: I1122 10:40:01.960761 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 10:40:01 crc kubenswrapper[4772]: I1122 10:40:01.960781 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 10:40:01 crc kubenswrapper[4772]: I1122 10:40:01.960794 4772 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T10:40:01Z","lastTransitionTime":"2025-11-22T10:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 10:40:02 crc kubenswrapper[4772]: I1122 10:40:02.016639 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-5qpv9"] Nov 22 10:40:02 crc kubenswrapper[4772]: I1122 10:40:02.017715 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5qpv9" Nov 22 10:40:02 crc kubenswrapper[4772]: I1122 10:40:02.021101 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Nov 22 10:40:02 crc kubenswrapper[4772]: I1122 10:40:02.021314 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Nov 22 10:40:02 crc kubenswrapper[4772]: I1122 10:40:02.021448 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Nov 22 10:40:02 crc kubenswrapper[4772]: I1122 10:40:02.021795 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Nov 22 10:40:02 crc kubenswrapper[4772]: I1122 10:40:02.117721 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4b75d35b-fcb3-44f2-90ce-638956ad29ef-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-5qpv9\" (UID: \"4b75d35b-fcb3-44f2-90ce-638956ad29ef\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5qpv9" Nov 22 10:40:02 crc kubenswrapper[4772]: I1122 10:40:02.117799 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b75d35b-fcb3-44f2-90ce-638956ad29ef-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-5qpv9\" (UID: \"4b75d35b-fcb3-44f2-90ce-638956ad29ef\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5qpv9" Nov 22 10:40:02 crc kubenswrapper[4772]: I1122 10:40:02.117885 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4b75d35b-fcb3-44f2-90ce-638956ad29ef-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-5qpv9\" (UID: \"4b75d35b-fcb3-44f2-90ce-638956ad29ef\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5qpv9" Nov 22 10:40:02 crc kubenswrapper[4772]: I1122 10:40:02.117919 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4b75d35b-fcb3-44f2-90ce-638956ad29ef-service-ca\") pod \"cluster-version-operator-5c965bbfc6-5qpv9\" (UID: \"4b75d35b-fcb3-44f2-90ce-638956ad29ef\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5qpv9" Nov 22 10:40:02 crc kubenswrapper[4772]: I1122 10:40:02.117968 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4b75d35b-fcb3-44f2-90ce-638956ad29ef-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-5qpv9\" (UID: \"4b75d35b-fcb3-44f2-90ce-638956ad29ef\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5qpv9" Nov 22 10:40:02 crc kubenswrapper[4772]: I1122 10:40:02.218908 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4b75d35b-fcb3-44f2-90ce-638956ad29ef-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-5qpv9\" (UID: \"4b75d35b-fcb3-44f2-90ce-638956ad29ef\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5qpv9" Nov 22 
10:40:02 crc kubenswrapper[4772]: I1122 10:40:02.218994 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b75d35b-fcb3-44f2-90ce-638956ad29ef-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-5qpv9\" (UID: \"4b75d35b-fcb3-44f2-90ce-638956ad29ef\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5qpv9" Nov 22 10:40:02 crc kubenswrapper[4772]: I1122 10:40:02.219096 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4b75d35b-fcb3-44f2-90ce-638956ad29ef-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-5qpv9\" (UID: \"4b75d35b-fcb3-44f2-90ce-638956ad29ef\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5qpv9" Nov 22 10:40:02 crc kubenswrapper[4772]: I1122 10:40:02.219189 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4b75d35b-fcb3-44f2-90ce-638956ad29ef-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-5qpv9\" (UID: \"4b75d35b-fcb3-44f2-90ce-638956ad29ef\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5qpv9" Nov 22 10:40:02 crc kubenswrapper[4772]: I1122 10:40:02.219250 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4b75d35b-fcb3-44f2-90ce-638956ad29ef-service-ca\") pod \"cluster-version-operator-5c965bbfc6-5qpv9\" (UID: \"4b75d35b-fcb3-44f2-90ce-638956ad29ef\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5qpv9" Nov 22 10:40:02 crc kubenswrapper[4772]: I1122 10:40:02.219301 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4b75d35b-fcb3-44f2-90ce-638956ad29ef-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-5qpv9\" (UID: \"4b75d35b-fcb3-44f2-90ce-638956ad29ef\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5qpv9" Nov 22 10:40:02 crc kubenswrapper[4772]: I1122 10:40:02.219310 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4b75d35b-fcb3-44f2-90ce-638956ad29ef-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-5qpv9\" (UID: \"4b75d35b-fcb3-44f2-90ce-638956ad29ef\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5qpv9" Nov 22 10:40:02 crc kubenswrapper[4772]: I1122 10:40:02.220681 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4b75d35b-fcb3-44f2-90ce-638956ad29ef-service-ca\") pod \"cluster-version-operator-5c965bbfc6-5qpv9\" (UID: \"4b75d35b-fcb3-44f2-90ce-638956ad29ef\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5qpv9" Nov 22 10:40:02 crc kubenswrapper[4772]: I1122 10:40:02.229361 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b75d35b-fcb3-44f2-90ce-638956ad29ef-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-5qpv9\" (UID: \"4b75d35b-fcb3-44f2-90ce-638956ad29ef\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5qpv9" Nov 22 10:40:02 crc kubenswrapper[4772]: I1122 10:40:02.250307 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/4b75d35b-fcb3-44f2-90ce-638956ad29ef-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-5qpv9\" (UID: \"4b75d35b-fcb3-44f2-90ce-638956ad29ef\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5qpv9" Nov 22 10:40:02 crc kubenswrapper[4772]: I1122 10:40:02.338802 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5qpv9" Nov 22 10:40:02 crc kubenswrapper[4772]: I1122 10:40:02.412732 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 10:40:02 crc kubenswrapper[4772]: I1122 10:40:02.412742 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvsrl" Nov 22 10:40:02 crc kubenswrapper[4772]: E1122 10:40:02.413344 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 10:40:02 crc kubenswrapper[4772]: E1122 10:40:02.413496 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvsrl" podUID="c89edce7-fac8-4954-b2e9-420f0f2de6a8" Nov 22 10:40:03 crc kubenswrapper[4772]: I1122 10:40:03.216463 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5qpv9" event={"ID":"4b75d35b-fcb3-44f2-90ce-638956ad29ef","Type":"ContainerStarted","Data":"74e530f4518888ece3aa632b5714ecffabfd1f4b779e47f421e698375afb4aa5"} Nov 22 10:40:03 crc kubenswrapper[4772]: I1122 10:40:03.216522 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5qpv9" event={"ID":"4b75d35b-fcb3-44f2-90ce-638956ad29ef","Type":"ContainerStarted","Data":"ecf069e6da04fea078213e89905928a47a9e008d741461fd1cdd526aca7f6da3"} Nov 22 10:40:03 crc kubenswrapper[4772]: I1122 10:40:03.229202 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5qpv9" podStartSLOduration=93.229183339 podStartE2EDuration="1m33.229183339s" podCreationTimestamp="2025-11-22 10:38:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:40:03.228624285 +0000 UTC m=+123.468068779" watchObservedRunningTime="2025-11-22 10:40:03.229183339 +0000 UTC m=+123.468627833" Nov 22 10:40:03 crc kubenswrapper[4772]: I1122 10:40:03.413181 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 10:40:03 crc kubenswrapper[4772]: I1122 10:40:03.413258 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:40:03 crc kubenswrapper[4772]: E1122 10:40:03.413338 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 10:40:03 crc kubenswrapper[4772]: E1122 10:40:03.413428 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 10:40:04 crc kubenswrapper[4772]: I1122 10:40:04.413167 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 10:40:04 crc kubenswrapper[4772]: I1122 10:40:04.413167 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvsrl" Nov 22 10:40:04 crc kubenswrapper[4772]: E1122 10:40:04.413836 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 10:40:04 crc kubenswrapper[4772]: E1122 10:40:04.413917 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvsrl" podUID="c89edce7-fac8-4954-b2e9-420f0f2de6a8" Nov 22 10:40:04 crc kubenswrapper[4772]: I1122 10:40:04.414308 4772 scope.go:117] "RemoveContainer" containerID="4b3f10af3e6e7fe8f38c9d6363a62d02d35e3cd0e0365a3028acfd58dc287565" Nov 22 10:40:04 crc kubenswrapper[4772]: E1122 10:40:04.414577 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-mfm49_openshift-ovn-kubernetes(fd84e05e-cfd6-46d5-bd23-30689addcd8b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" podUID="fd84e05e-cfd6-46d5-bd23-30689addcd8b" Nov 22 10:40:05 crc kubenswrapper[4772]: I1122 10:40:05.413625 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:40:05 crc kubenswrapper[4772]: I1122 10:40:05.413646 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 10:40:05 crc kubenswrapper[4772]: E1122 10:40:05.413772 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 10:40:05 crc kubenswrapper[4772]: E1122 10:40:05.413954 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 10:40:06 crc kubenswrapper[4772]: I1122 10:40:06.413437 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 10:40:06 crc kubenswrapper[4772]: I1122 10:40:06.413437 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvsrl" Nov 22 10:40:06 crc kubenswrapper[4772]: E1122 10:40:06.413596 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 10:40:06 crc kubenswrapper[4772]: E1122 10:40:06.413665 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvsrl" podUID="c89edce7-fac8-4954-b2e9-420f0f2de6a8" Nov 22 10:40:06 crc kubenswrapper[4772]: E1122 10:40:06.578332 4772 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 22 10:40:07 crc kubenswrapper[4772]: I1122 10:40:07.412786 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 10:40:07 crc kubenswrapper[4772]: I1122 10:40:07.412847 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:40:07 crc kubenswrapper[4772]: E1122 10:40:07.412953 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 10:40:07 crc kubenswrapper[4772]: E1122 10:40:07.413024 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 10:40:08 crc kubenswrapper[4772]: I1122 10:40:08.413442 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 10:40:08 crc kubenswrapper[4772]: I1122 10:40:08.413561 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvsrl" Nov 22 10:40:08 crc kubenswrapper[4772]: E1122 10:40:08.413676 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 10:40:08 crc kubenswrapper[4772]: E1122 10:40:08.413785 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvsrl" podUID="c89edce7-fac8-4954-b2e9-420f0f2de6a8" Nov 22 10:40:09 crc kubenswrapper[4772]: I1122 10:40:09.413203 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:40:09 crc kubenswrapper[4772]: E1122 10:40:09.413324 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 10:40:09 crc kubenswrapper[4772]: I1122 10:40:09.413203 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 10:40:09 crc kubenswrapper[4772]: E1122 10:40:09.413696 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 10:40:10 crc kubenswrapper[4772]: I1122 10:40:10.413096 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvsrl" Nov 22 10:40:10 crc kubenswrapper[4772]: E1122 10:40:10.413212 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvsrl" podUID="c89edce7-fac8-4954-b2e9-420f0f2de6a8" Nov 22 10:40:10 crc kubenswrapper[4772]: I1122 10:40:10.413096 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 10:40:10 crc kubenswrapper[4772]: E1122 10:40:10.413277 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 10:40:11 crc kubenswrapper[4772]: I1122 10:40:11.413305 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:40:11 crc kubenswrapper[4772]: E1122 10:40:11.413871 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 10:40:11 crc kubenswrapper[4772]: I1122 10:40:11.413975 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 10:40:11 crc kubenswrapper[4772]: E1122 10:40:11.414156 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 10:40:11 crc kubenswrapper[4772]: E1122 10:40:11.578893 4772 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Nov 22 10:40:12 crc kubenswrapper[4772]: I1122 10:40:12.249435 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-s4mvm_d73fd58d-561a-4b16-9f9d-49ae966edb24/kube-multus/1.log" Nov 22 10:40:12 crc kubenswrapper[4772]: I1122 10:40:12.250645 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-s4mvm_d73fd58d-561a-4b16-9f9d-49ae966edb24/kube-multus/0.log" Nov 22 10:40:12 crc kubenswrapper[4772]: I1122 10:40:12.250737 4772 generic.go:334] "Generic (PLEG): container finished" podID="d73fd58d-561a-4b16-9f9d-49ae966edb24" containerID="3325c0083f66336c643bd867f49df279967efccdbeadef4c588e1e43fbe1c13c" exitCode=1 Nov 22 10:40:12 crc kubenswrapper[4772]: I1122 10:40:12.250795 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-s4mvm" event={"ID":"d73fd58d-561a-4b16-9f9d-49ae966edb24","Type":"ContainerDied","Data":"3325c0083f66336c643bd867f49df279967efccdbeadef4c588e1e43fbe1c13c"} Nov 22 10:40:12 crc kubenswrapper[4772]: I1122 10:40:12.250860 4772 scope.go:117] "RemoveContainer" containerID="4ae06ffdc2c0f4cebcbf28f2997026db4d45bcd0c95776e5a1d94527503041d0" Nov 22 10:40:12 crc kubenswrapper[4772]: I1122 10:40:12.251623 4772 scope.go:117] "RemoveContainer" containerID="3325c0083f66336c643bd867f49df279967efccdbeadef4c588e1e43fbe1c13c" Nov 22 10:40:12 crc kubenswrapper[4772]: E1122 10:40:12.251872 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-s4mvm_openshift-multus(d73fd58d-561a-4b16-9f9d-49ae966edb24)\"" pod="openshift-multus/multus-s4mvm" podUID="d73fd58d-561a-4b16-9f9d-49ae966edb24" Nov 22 10:40:12 crc kubenswrapper[4772]: I1122 10:40:12.413029 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 10:40:12 crc kubenswrapper[4772]: I1122 10:40:12.413161 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvsrl" Nov 22 10:40:12 crc kubenswrapper[4772]: E1122 10:40:12.413357 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 10:40:12 crc kubenswrapper[4772]: E1122 10:40:12.413579 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvsrl" podUID="c89edce7-fac8-4954-b2e9-420f0f2de6a8" Nov 22 10:40:13 crc kubenswrapper[4772]: I1122 10:40:13.256854 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-s4mvm_d73fd58d-561a-4b16-9f9d-49ae966edb24/kube-multus/1.log" Nov 22 10:40:13 crc kubenswrapper[4772]: I1122 10:40:13.413390 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 10:40:13 crc kubenswrapper[4772]: I1122 10:40:13.413427 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:40:13 crc kubenswrapper[4772]: E1122 10:40:13.413596 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 10:40:13 crc kubenswrapper[4772]: E1122 10:40:13.413693 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 10:40:14 crc kubenswrapper[4772]: I1122 10:40:14.413737 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 10:40:14 crc kubenswrapper[4772]: E1122 10:40:14.413994 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 10:40:14 crc kubenswrapper[4772]: I1122 10:40:14.418774 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvsrl" Nov 22 10:40:14 crc kubenswrapper[4772]: E1122 10:40:14.419994 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvsrl" podUID="c89edce7-fac8-4954-b2e9-420f0f2de6a8" Nov 22 10:40:15 crc kubenswrapper[4772]: I1122 10:40:15.412815 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 10:40:15 crc kubenswrapper[4772]: I1122 10:40:15.412850 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:40:15 crc kubenswrapper[4772]: E1122 10:40:15.413397 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 10:40:15 crc kubenswrapper[4772]: E1122 10:40:15.413509 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 10:40:16 crc kubenswrapper[4772]: I1122 10:40:16.412778 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 10:40:16 crc kubenswrapper[4772]: I1122 10:40:16.412983 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvsrl" Nov 22 10:40:16 crc kubenswrapper[4772]: E1122 10:40:16.413183 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 10:40:16 crc kubenswrapper[4772]: E1122 10:40:16.413365 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvsrl" podUID="c89edce7-fac8-4954-b2e9-420f0f2de6a8" Nov 22 10:40:16 crc kubenswrapper[4772]: E1122 10:40:16.579931 4772 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 22 10:40:17 crc kubenswrapper[4772]: I1122 10:40:17.413193 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 10:40:17 crc kubenswrapper[4772]: I1122 10:40:17.413200 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:40:17 crc kubenswrapper[4772]: E1122 10:40:17.413381 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 10:40:17 crc kubenswrapper[4772]: E1122 10:40:17.413690 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 10:40:17 crc kubenswrapper[4772]: I1122 10:40:17.414029 4772 scope.go:117] "RemoveContainer" containerID="4b3f10af3e6e7fe8f38c9d6363a62d02d35e3cd0e0365a3028acfd58dc287565" Nov 22 10:40:18 crc kubenswrapper[4772]: I1122 10:40:18.270428 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-fvsrl"] Nov 22 10:40:18 crc kubenswrapper[4772]: I1122 10:40:18.270537 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvsrl" Nov 22 10:40:18 crc kubenswrapper[4772]: E1122 10:40:18.270644 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvsrl" podUID="c89edce7-fac8-4954-b2e9-420f0f2de6a8" Nov 22 10:40:18 crc kubenswrapper[4772]: I1122 10:40:18.279383 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mfm49_fd84e05e-cfd6-46d5-bd23-30689addcd8b/ovnkube-controller/3.log" Nov 22 10:40:18 crc kubenswrapper[4772]: I1122 10:40:18.282844 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" event={"ID":"fd84e05e-cfd6-46d5-bd23-30689addcd8b","Type":"ContainerStarted","Data":"b357f837243332636515052b5f466b88738ee0abcf1012fcab2f395262022b16"} Nov 22 10:40:18 crc kubenswrapper[4772]: I1122 10:40:18.283227 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" Nov 22 10:40:18 crc kubenswrapper[4772]: I1122 10:40:18.319925 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" podStartSLOduration=107.31990564 podStartE2EDuration="1m47.31990564s" podCreationTimestamp="2025-11-22 10:38:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:40:18.31837047 +0000 UTC m=+138.557814964" watchObservedRunningTime="2025-11-22 10:40:18.31990564 +0000 UTC m=+138.559350134" Nov 22 10:40:18 crc kubenswrapper[4772]: I1122 10:40:18.413530 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 10:40:18 crc kubenswrapper[4772]: E1122 10:40:18.413675 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 10:40:19 crc kubenswrapper[4772]: I1122 10:40:19.413272 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 10:40:19 crc kubenswrapper[4772]: I1122 10:40:19.413380 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:40:19 crc kubenswrapper[4772]: E1122 10:40:19.413460 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 10:40:19 crc kubenswrapper[4772]: E1122 10:40:19.413567 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 10:40:20 crc kubenswrapper[4772]: I1122 10:40:20.412870 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 10:40:20 crc kubenswrapper[4772]: I1122 10:40:20.413003 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvsrl" Nov 22 10:40:20 crc kubenswrapper[4772]: E1122 10:40:20.413100 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 10:40:20 crc kubenswrapper[4772]: E1122 10:40:20.413229 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvsrl" podUID="c89edce7-fac8-4954-b2e9-420f0f2de6a8" Nov 22 10:40:21 crc kubenswrapper[4772]: I1122 10:40:21.413286 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:40:21 crc kubenswrapper[4772]: I1122 10:40:21.413313 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 10:40:21 crc kubenswrapper[4772]: E1122 10:40:21.415149 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 10:40:21 crc kubenswrapper[4772]: E1122 10:40:21.415350 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 10:40:21 crc kubenswrapper[4772]: E1122 10:40:21.580557 4772 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 22 10:40:22 crc kubenswrapper[4772]: I1122 10:40:22.413219 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 10:40:22 crc kubenswrapper[4772]: I1122 10:40:22.413372 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvsrl" Nov 22 10:40:22 crc kubenswrapper[4772]: E1122 10:40:22.413676 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 10:40:22 crc kubenswrapper[4772]: E1122 10:40:22.413858 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvsrl" podUID="c89edce7-fac8-4954-b2e9-420f0f2de6a8" Nov 22 10:40:23 crc kubenswrapper[4772]: I1122 10:40:23.413189 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 10:40:23 crc kubenswrapper[4772]: E1122 10:40:23.413333 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 10:40:23 crc kubenswrapper[4772]: I1122 10:40:23.413446 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:40:23 crc kubenswrapper[4772]: E1122 10:40:23.413639 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 10:40:24 crc kubenswrapper[4772]: I1122 10:40:24.414554 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvsrl" Nov 22 10:40:24 crc kubenswrapper[4772]: I1122 10:40:24.414602 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 10:40:24 crc kubenswrapper[4772]: E1122 10:40:24.414791 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvsrl" podUID="c89edce7-fac8-4954-b2e9-420f0f2de6a8" Nov 22 10:40:24 crc kubenswrapper[4772]: E1122 10:40:24.415215 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 10:40:25 crc kubenswrapper[4772]: I1122 10:40:25.413337 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 10:40:25 crc kubenswrapper[4772]: I1122 10:40:25.413337 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:40:25 crc kubenswrapper[4772]: E1122 10:40:25.413553 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 10:40:25 crc kubenswrapper[4772]: E1122 10:40:25.413741 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 10:40:25 crc kubenswrapper[4772]: I1122 10:40:25.415158 4772 scope.go:117] "RemoveContainer" containerID="3325c0083f66336c643bd867f49df279967efccdbeadef4c588e1e43fbe1c13c" Nov 22 10:40:26 crc kubenswrapper[4772]: I1122 10:40:26.325489 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-s4mvm_d73fd58d-561a-4b16-9f9d-49ae966edb24/kube-multus/1.log" Nov 22 10:40:26 crc kubenswrapper[4772]: I1122 10:40:26.326104 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-s4mvm" event={"ID":"d73fd58d-561a-4b16-9f9d-49ae966edb24","Type":"ContainerStarted","Data":"dbaa283849426ce5e4b86e9417fa7dfe167d36ad15ac0db6dec5765c3414bc11"} Nov 22 10:40:26 crc kubenswrapper[4772]: I1122 10:40:26.413602 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 10:40:26 crc kubenswrapper[4772]: I1122 10:40:26.413640 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvsrl" Nov 22 10:40:26 crc kubenswrapper[4772]: E1122 10:40:26.413778 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 10:40:26 crc kubenswrapper[4772]: E1122 10:40:26.413876 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvsrl" podUID="c89edce7-fac8-4954-b2e9-420f0f2de6a8" Nov 22 10:40:26 crc kubenswrapper[4772]: E1122 10:40:26.582758 4772 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 22 10:40:27 crc kubenswrapper[4772]: I1122 10:40:27.413182 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:40:27 crc kubenswrapper[4772]: E1122 10:40:27.413853 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 10:40:27 crc kubenswrapper[4772]: I1122 10:40:27.413325 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 10:40:27 crc kubenswrapper[4772]: E1122 10:40:27.415374 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 10:40:28 crc kubenswrapper[4772]: I1122 10:40:28.413341 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 10:40:28 crc kubenswrapper[4772]: I1122 10:40:28.413379 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvsrl" Nov 22 10:40:28 crc kubenswrapper[4772]: E1122 10:40:28.413536 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 10:40:28 crc kubenswrapper[4772]: E1122 10:40:28.413691 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvsrl" podUID="c89edce7-fac8-4954-b2e9-420f0f2de6a8" Nov 22 10:40:29 crc kubenswrapper[4772]: I1122 10:40:29.413479 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:40:29 crc kubenswrapper[4772]: I1122 10:40:29.413559 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 10:40:29 crc kubenswrapper[4772]: E1122 10:40:29.413672 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 10:40:29 crc kubenswrapper[4772]: E1122 10:40:29.413754 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 10:40:30 crc kubenswrapper[4772]: I1122 10:40:30.412902 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 10:40:30 crc kubenswrapper[4772]: E1122 10:40:30.413099 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 10:40:30 crc kubenswrapper[4772]: I1122 10:40:30.413252 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvsrl" Nov 22 10:40:30 crc kubenswrapper[4772]: E1122 10:40:30.413544 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fvsrl" podUID="c89edce7-fac8-4954-b2e9-420f0f2de6a8" Nov 22 10:40:31 crc kubenswrapper[4772]: I1122 10:40:31.413370 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 10:40:31 crc kubenswrapper[4772]: I1122 10:40:31.413666 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:40:31 crc kubenswrapper[4772]: E1122 10:40:31.415150 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 10:40:31 crc kubenswrapper[4772]: E1122 10:40:31.415282 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 10:40:31 crc kubenswrapper[4772]: I1122 10:40:31.963362 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.363935 4772 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.407326 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-5zmps"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.407980 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-5zmps" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.413366 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-56sqr"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.413946 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-ld7g2"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.414119 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.414197 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvsrl" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.414206 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-56sqr" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.414618 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.426845 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-5wvfr"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.439368 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.439486 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.440025 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5wvfr" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.440336 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.440354 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-ld7g2" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.440749 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.440829 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.440985 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.443597 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.443686 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-5znz6"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.444612 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-sgkwc"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.445182 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5znz6" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.445490 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-sgkwc" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.451941 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-vvs55"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.456034 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.456254 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.456306 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.456309 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.456426 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.456545 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.456594 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.456682 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.456718 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.456835 4772 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"authentication-operator-config" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.456844 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.457031 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.457064 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.457253 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.457318 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.456976 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.457681 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.458660 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-v2gm9"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.458900 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-vvs55" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.460624 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.460795 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.460897 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-v2gm9" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.461797 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2n6h5"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.462528 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2n6h5" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.463930 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xmqs2"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.464316 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-ld9hg"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.464635 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xmqs2" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.464670 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-ld9hg" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.464912 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9bb23622-7ae9-45fe-a07d-70c58b4b7f31-trusted-ca\") pod \"console-operator-58897d9998-5zmps\" (UID: \"9bb23622-7ae9-45fe-a07d-70c58b4b7f31\") " pod="openshift-console-operator/console-operator-58897d9998-5zmps" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.464985 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bb23622-7ae9-45fe-a07d-70c58b4b7f31-config\") pod \"console-operator-58897d9998-5zmps\" (UID: \"9bb23622-7ae9-45fe-a07d-70c58b4b7f31\") " pod="openshift-console-operator/console-operator-58897d9998-5zmps" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.465025 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9bb23622-7ae9-45fe-a07d-70c58b4b7f31-serving-cert\") pod \"console-operator-58897d9998-5zmps\" (UID: \"9bb23622-7ae9-45fe-a07d-70c58b4b7f31\") " pod="openshift-console-operator/console-operator-58897d9998-5zmps" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.465208 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2pg9\" (UniqueName: \"kubernetes.io/projected/9bb23622-7ae9-45fe-a07d-70c58b4b7f31-kube-api-access-c2pg9\") pod \"console-operator-58897d9998-5zmps\" (UID: \"9bb23622-7ae9-45fe-a07d-70c58b4b7f31\") " pod="openshift-console-operator/console-operator-58897d9998-5zmps" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.467861 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.468824 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.498110 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wbcgw"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.498914 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-v9s84"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.499567 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v9s84" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.500130 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wbcgw" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.504652 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.504939 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.505877 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.506297 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.506467 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.506659 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.506829 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.506979 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.513426 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.513734 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.513926 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.514118 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.514276 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.514420 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.514918 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.515494 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.521007 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.522628 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.522873 4772 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.523033 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.523238 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.523432 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.523658 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.523755 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.523952 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.524416 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.524658 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.524932 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.525488 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b5ph7"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.525610 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.528216 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b5ph7" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.528216 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7gdmn"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.530860 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-7gdmn" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.534036 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.534257 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.534032 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.535550 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.541981 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.554647 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.556083 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.559122 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.559336 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.560343 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.560603 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.560769 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.561007 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.561263 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.561016 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-bgkbs"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.561550 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.561554 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.561776 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Nov 22 10:40:32 crc 
kubenswrapper[4772]: I1122 10:40:32.561735 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.561993 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.561964 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.562158 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.562283 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.562457 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.562453 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.562565 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.562678 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.563134 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bgkbs" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.565983 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.566228 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-hlvq7"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.566998 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.567136 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.567755 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0f99cb1-427f-4992-8c9b-15c285f13189-config\") pod \"apiserver-76f77b778f-56sqr\" (UID: \"e0f99cb1-427f-4992-8c9b-15c285f13189\") " pod="openshift-apiserver/apiserver-76f77b778f-56sqr" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.567793 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f529668b-54db-49e7-92cb-c3cf6b986dce-audit-policies\") pod \"oauth-openshift-558db77b4-7gdmn\" (UID: \"f529668b-54db-49e7-92cb-c3cf6b986dce\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gdmn" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.567819 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7578c60c-84c2-4dd5-a6c5-576606438ede-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-b5ph7\" (UID: \"7578c60c-84c2-4dd5-a6c5-576606438ede\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b5ph7" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.567883 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/e0f99cb1-427f-4992-8c9b-15c285f13189-audit\") pod \"apiserver-76f77b778f-56sqr\" (UID: \"e0f99cb1-427f-4992-8c9b-15c285f13189\") " pod="openshift-apiserver/apiserver-76f77b778f-56sqr" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.567987 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtwvh\" (UniqueName: \"kubernetes.io/projected/e5a02044-ee67-480e-9cc9-22cf07bc9388-kube-api-access-wtwvh\") pod \"apiserver-7bbb656c7d-5znz6\" (UID: \"e5a02044-ee67-480e-9cc9-22cf07bc9388\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5znz6" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.568036 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e0f99cb1-427f-4992-8c9b-15c285f13189-etcd-client\") pod \"apiserver-76f77b778f-56sqr\" (UID: \"e0f99cb1-427f-4992-8c9b-15c285f13189\") " pod="openshift-apiserver/apiserver-76f77b778f-56sqr" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.568109 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f529668b-54db-49e7-92cb-c3cf6b986dce-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-7gdmn\" (UID: \"f529668b-54db-49e7-92cb-c3cf6b986dce\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gdmn" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.568134 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e5a02044-ee67-480e-9cc9-22cf07bc9388-etcd-client\") pod \"apiserver-7bbb656c7d-5znz6\" 
(UID: \"e5a02044-ee67-480e-9cc9-22cf07bc9388\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5znz6" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.568156 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f529668b-54db-49e7-92cb-c3cf6b986dce-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-7gdmn\" (UID: \"f529668b-54db-49e7-92cb-c3cf6b986dce\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gdmn" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.568175 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62vrs\" (UniqueName: \"kubernetes.io/projected/bea9575a-d7c4-4aaa-bc01-eaee90317eea-kube-api-access-62vrs\") pod \"controller-manager-879f6c89f-vvs55\" (UID: \"bea9575a-d7c4-4aaa-bc01-eaee90317eea\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vvs55" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.568202 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2pg9\" (UniqueName: \"kubernetes.io/projected/9bb23622-7ae9-45fe-a07d-70c58b4b7f31-kube-api-access-c2pg9\") pod \"console-operator-58897d9998-5zmps\" (UID: \"9bb23622-7ae9-45fe-a07d-70c58b4b7f31\") " pod="openshift-console-operator/console-operator-58897d9998-5zmps" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.568222 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1411a454-22f4-4eef-828a-6a46c81c6c7e-auth-proxy-config\") pod \"machine-approver-56656f9798-v9s84\" (UID: \"1411a454-22f4-4eef-828a-6a46c81c6c7e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v9s84" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.568240 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/614def41-0349-470c-afca-e5c335fa8834-console-oauth-config\") pod \"console-f9d7485db-ld9hg\" (UID: \"614def41-0349-470c-afca-e5c335fa8834\") " pod="openshift-console/console-f9d7485db-ld9hg" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.568256 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e5a02044-ee67-480e-9cc9-22cf07bc9388-audit-policies\") pod \"apiserver-7bbb656c7d-5znz6\" (UID: \"e5a02044-ee67-480e-9cc9-22cf07bc9388\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5znz6" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.568272 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5a02044-ee67-480e-9cc9-22cf07bc9388-serving-cert\") pod \"apiserver-7bbb656c7d-5znz6\" (UID: \"e5a02044-ee67-480e-9cc9-22cf07bc9388\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5znz6" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.568340 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ht25z\" (UniqueName: \"kubernetes.io/projected/f529668b-54db-49e7-92cb-c3cf6b986dce-kube-api-access-ht25z\") pod \"oauth-openshift-558db77b4-7gdmn\" (UID: \"f529668b-54db-49e7-92cb-c3cf6b986dce\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-7gdmn" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.568378 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e0f99cb1-427f-4992-8c9b-15c285f13189-node-pullsecrets\") pod \"apiserver-76f77b778f-56sqr\" (UID: \"e0f99cb1-427f-4992-8c9b-15c285f13189\") " pod="openshift-apiserver/apiserver-76f77b778f-56sqr" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.568396 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f529668b-54db-49e7-92cb-c3cf6b986dce-audit-dir\") pod \"oauth-openshift-558db77b4-7gdmn\" (UID: \"f529668b-54db-49e7-92cb-c3cf6b986dce\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gdmn" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.568417 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed063d5e-19cb-42cb-89fb-21b3b751f53e-service-ca-bundle\") pod \"authentication-operator-69f744f599-ld7g2\" (UID: \"ed063d5e-19cb-42cb-89fb-21b3b751f53e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ld7g2" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.568442 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e6495dc-3c26-45e6-af62-a4957488ae51-config\") pod \"route-controller-manager-6576b87f9c-xmqs2\" (UID: \"7e6495dc-3c26-45e6-af62-a4957488ae51\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xmqs2" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.568471 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k2gp\" (UniqueName: \"kubernetes.io/projected/e0f99cb1-427f-4992-8c9b-15c285f13189-kube-api-access-4k2gp\") pod \"apiserver-76f77b778f-56sqr\" (UID: \"e0f99cb1-427f-4992-8c9b-15c285f13189\") " pod="openshift-apiserver/apiserver-76f77b778f-56sqr" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.568496 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f529668b-54db-49e7-92cb-c3cf6b986dce-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-7gdmn\" (UID: \"f529668b-54db-49e7-92cb-c3cf6b986dce\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gdmn" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.568520 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed063d5e-19cb-42cb-89fb-21b3b751f53e-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-ld7g2\" (UID: \"ed063d5e-19cb-42cb-89fb-21b3b751f53e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ld7g2" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.568542 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e6495dc-3c26-45e6-af62-a4957488ae51-serving-cert\") pod \"route-controller-manager-6576b87f9c-xmqs2\" (UID: \"7e6495dc-3c26-45e6-af62-a4957488ae51\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xmqs2" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.568565 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e0f99cb1-427f-4992-8c9b-15c285f13189-etcd-serving-ca\") pod \"apiserver-76f77b778f-56sqr\" (UID: \"e0f99cb1-427f-4992-8c9b-15c285f13189\") " pod="openshift-apiserver/apiserver-76f77b778f-56sqr" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.568587 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8c2915a8-d452-4234-94a7-f1ec68c95e4a-images\") pod \"machine-api-operator-5694c8668f-sgkwc\" (UID: \"8c2915a8-d452-4234-94a7-f1ec68c95e4a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sgkwc" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.568608 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pd4q2\" (UniqueName: \"kubernetes.io/projected/7578c60c-84c2-4dd5-a6c5-576606438ede-kube-api-access-pd4q2\") pod \"cluster-image-registry-operator-dc59b4c8b-b5ph7\" (UID: \"7578c60c-84c2-4dd5-a6c5-576606438ede\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b5ph7" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.568629 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e5a02044-ee67-480e-9cc9-22cf07bc9388-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-5znz6\" (UID: \"e5a02044-ee67-480e-9cc9-22cf07bc9388\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5znz6" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.568648 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5a02044-ee67-480e-9cc9-22cf07bc9388-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-5znz6\" (UID: \"e5a02044-ee67-480e-9cc9-22cf07bc9388\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5znz6" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.568668 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/614def41-0349-470c-afca-e5c335fa8834-oauth-serving-cert\") pod \"console-f9d7485db-ld9hg\" (UID: \"614def41-0349-470c-afca-e5c335fa8834\") " pod="openshift-console/console-f9d7485db-ld9hg" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.568686 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8c2915a8-d452-4234-94a7-f1ec68c95e4a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-sgkwc\" (UID: \"8c2915a8-d452-4234-94a7-f1ec68c95e4a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sgkwc" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.568706 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zr6mh\" (UniqueName: \"kubernetes.io/projected/ccd507b9-7746-46ee-8bd4-1134cc290f67-kube-api-access-zr6mh\") pod \"openshift-config-operator-7777fb866f-5wvfr\" (UID: \"ccd507b9-7746-46ee-8bd4-1134cc290f67\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-5wvfr" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.568724 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6f4a455f-fdda-46bf-bebf-67f1b83863c8-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-wbcgw\" (UID: \"6f4a455f-fdda-46bf-bebf-67f1b83863c8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wbcgw" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.568740 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f529668b-54db-49e7-92cb-c3cf6b986dce-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-7gdmn\" (UID: \"f529668b-54db-49e7-92cb-c3cf6b986dce\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gdmn" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.568757 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f529668b-54db-49e7-92cb-c3cf6b986dce-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-7gdmn\" (UID: \"f529668b-54db-49e7-92cb-c3cf6b986dce\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gdmn" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.568772 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/614def41-0349-470c-afca-e5c335fa8834-trusted-ca-bundle\") pod \"console-f9d7485db-ld9hg\" (UID: \"614def41-0349-470c-afca-e5c335fa8834\") " pod="openshift-console/console-f9d7485db-ld9hg" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.568800 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c2915a8-d452-4234-94a7-f1ec68c95e4a-config\") pod \"machine-api-operator-5694c8668f-sgkwc\" (UID: \"8c2915a8-d452-4234-94a7-f1ec68c95e4a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sgkwc" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.568816 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1411a454-22f4-4eef-828a-6a46c81c6c7e-machine-approver-tls\") pod \"machine-approver-56656f9798-v9s84\" (UID: \"1411a454-22f4-4eef-828a-6a46c81c6c7e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v9s84" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.568832 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbjl2\" (UniqueName: \"kubernetes.io/projected/1411a454-22f4-4eef-828a-6a46c81c6c7e-kube-api-access-hbjl2\") pod \"machine-approver-56656f9798-v9s84\" (UID: \"1411a454-22f4-4eef-828a-6a46c81c6c7e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v9s84" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.568849 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xszvl\" (UniqueName: \"kubernetes.io/projected/a8554d37-40ae-41ef-bed9-7c79b3f8083e-kube-api-access-xszvl\") pod \"downloads-7954f5f757-v2gm9\" (UID: 
\"a8554d37-40ae-41ef-bed9-7c79b3f8083e\") " pod="openshift-console/downloads-7954f5f757-v2gm9" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.568864 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcw59\" (UniqueName: \"kubernetes.io/projected/6f4a455f-fdda-46bf-bebf-67f1b83863c8-kube-api-access-pcw59\") pod \"cluster-samples-operator-665b6dd947-wbcgw\" (UID: \"6f4a455f-fdda-46bf-bebf-67f1b83863c8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wbcgw" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.568880 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6d99\" (UniqueName: \"kubernetes.io/projected/614def41-0349-470c-afca-e5c335fa8834-kube-api-access-c6d99\") pod \"console-f9d7485db-ld9hg\" (UID: \"614def41-0349-470c-afca-e5c335fa8834\") " pod="openshift-console/console-f9d7485db-ld9hg" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.568899 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7578c60c-84c2-4dd5-a6c5-576606438ede-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-b5ph7\" (UID: \"7578c60c-84c2-4dd5-a6c5-576606438ede\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b5ph7" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.568917 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bea9575a-d7c4-4aaa-bc01-eaee90317eea-config\") pod \"controller-manager-879f6c89f-vvs55\" (UID: \"bea9575a-d7c4-4aaa-bc01-eaee90317eea\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vvs55" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.568935 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9bb23622-7ae9-45fe-a07d-70c58b4b7f31-trusted-ca\") pod \"console-operator-58897d9998-5zmps\" (UID: \"9bb23622-7ae9-45fe-a07d-70c58b4b7f31\") " pod="openshift-console-operator/console-operator-58897d9998-5zmps" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.568950 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ccd507b9-7746-46ee-8bd4-1134cc290f67-serving-cert\") pod \"openshift-config-operator-7777fb866f-5wvfr\" (UID: \"ccd507b9-7746-46ee-8bd4-1134cc290f67\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5wvfr" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.568969 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bb23622-7ae9-45fe-a07d-70c58b4b7f31-config\") pod \"console-operator-58897d9998-5zmps\" (UID: \"9bb23622-7ae9-45fe-a07d-70c58b4b7f31\") " pod="openshift-console-operator/console-operator-58897d9998-5zmps" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.568985 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/ccd507b9-7746-46ee-8bd4-1134cc290f67-available-featuregates\") pod \"openshift-config-operator-7777fb866f-5wvfr\" (UID: \"ccd507b9-7746-46ee-8bd4-1134cc290f67\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-5wvfr" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.569003 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1411a454-22f4-4eef-828a-6a46c81c6c7e-config\") pod \"machine-approver-56656f9798-v9s84\" (UID: \"1411a454-22f4-4eef-828a-6a46c81c6c7e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v9s84" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.569020 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bea9575a-d7c4-4aaa-bc01-eaee90317eea-serving-cert\") pod \"controller-manager-879f6c89f-vvs55\" (UID: \"bea9575a-d7c4-4aaa-bc01-eaee90317eea\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vvs55" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.569036 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9bb23622-7ae9-45fe-a07d-70c58b4b7f31-serving-cert\") pod \"console-operator-58897d9998-5zmps\" (UID: \"9bb23622-7ae9-45fe-a07d-70c58b4b7f31\") " pod="openshift-console-operator/console-operator-58897d9998-5zmps" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.569069 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f529668b-54db-49e7-92cb-c3cf6b986dce-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-7gdmn\" (UID: \"f529668b-54db-49e7-92cb-c3cf6b986dce\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gdmn" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.569087 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcd8j\" (UniqueName: \"kubernetes.io/projected/ed063d5e-19cb-42cb-89fb-21b3b751f53e-kube-api-access-jcd8j\") pod \"authentication-operator-69f744f599-ld7g2\" (UID: \"ed063d5e-19cb-42cb-89fb-21b3b751f53e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ld7g2" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.569106 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e5a02044-ee67-480e-9cc9-22cf07bc9388-encryption-config\") pod \"apiserver-7bbb656c7d-5znz6\" (UID: \"e5a02044-ee67-480e-9cc9-22cf07bc9388\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5znz6" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.569123 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bea9575a-d7c4-4aaa-bc01-eaee90317eea-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-vvs55\" (UID: \"bea9575a-d7c4-4aaa-bc01-eaee90317eea\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vvs55" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.569145 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpb4m\" (UniqueName: \"kubernetes.io/projected/8c2915a8-d452-4234-94a7-f1ec68c95e4a-kube-api-access-vpb4m\") pod \"machine-api-operator-5694c8668f-sgkwc\" (UID: \"8c2915a8-d452-4234-94a7-f1ec68c95e4a\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-sgkwc" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.569179 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/614def41-0349-470c-afca-e5c335fa8834-service-ca\") pod \"console-f9d7485db-ld9hg\" (UID: \"614def41-0349-470c-afca-e5c335fa8834\") " pod="openshift-console/console-f9d7485db-ld9hg" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.569197 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e6495dc-3c26-45e6-af62-a4957488ae51-client-ca\") pod \"route-controller-manager-6576b87f9c-xmqs2\" (UID: \"7e6495dc-3c26-45e6-af62-a4957488ae51\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xmqs2" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.569228 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0f99cb1-427f-4992-8c9b-15c285f13189-serving-cert\") pod \"apiserver-76f77b778f-56sqr\" (UID: \"e0f99cb1-427f-4992-8c9b-15c285f13189\") " pod="openshift-apiserver/apiserver-76f77b778f-56sqr" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.569258 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ce04b81-efe4-4be7-b020-6ea273596c53-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-2n6h5\" (UID: \"7ce04b81-efe4-4be7-b020-6ea273596c53\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2n6h5" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.569278 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/e0f99cb1-427f-4992-8c9b-15c285f13189-image-import-ca\") pod \"apiserver-76f77b778f-56sqr\" (UID: \"e0f99cb1-427f-4992-8c9b-15c285f13189\") " pod="openshift-apiserver/apiserver-76f77b778f-56sqr" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.569298 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2r7p\" (UniqueName: \"kubernetes.io/projected/7e6495dc-3c26-45e6-af62-a4957488ae51-kube-api-access-f2r7p\") pod \"route-controller-manager-6576b87f9c-xmqs2\" (UID: \"7e6495dc-3c26-45e6-af62-a4957488ae51\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xmqs2" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.569320 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0f99cb1-427f-4992-8c9b-15c285f13189-trusted-ca-bundle\") pod \"apiserver-76f77b778f-56sqr\" (UID: \"e0f99cb1-427f-4992-8c9b-15c285f13189\") " pod="openshift-apiserver/apiserver-76f77b778f-56sqr" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.569339 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqk5f\" (UniqueName: \"kubernetes.io/projected/7ce04b81-efe4-4be7-b020-6ea273596c53-kube-api-access-bqk5f\") pod \"openshift-apiserver-operator-796bbdcf4f-2n6h5\" (UID: \"7ce04b81-efe4-4be7-b020-6ea273596c53\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2n6h5" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.569358 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e0f99cb1-427f-4992-8c9b-15c285f13189-audit-dir\") pod \"apiserver-76f77b778f-56sqr\" (UID: \"e0f99cb1-427f-4992-8c9b-15c285f13189\") " pod="openshift-apiserver/apiserver-76f77b778f-56sqr" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.569376 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/614def41-0349-470c-afca-e5c335fa8834-console-serving-cert\") pod \"console-f9d7485db-ld9hg\" (UID: \"614def41-0349-470c-afca-e5c335fa8834\") " pod="openshift-console/console-f9d7485db-ld9hg" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.569395 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/614def41-0349-470c-afca-e5c335fa8834-console-config\") pod \"console-f9d7485db-ld9hg\" (UID: \"614def41-0349-470c-afca-e5c335fa8834\") " pod="openshift-console/console-f9d7485db-ld9hg" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.569413 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e5a02044-ee67-480e-9cc9-22cf07bc9388-audit-dir\") pod \"apiserver-7bbb656c7d-5znz6\" (UID: \"e5a02044-ee67-480e-9cc9-22cf07bc9388\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5znz6" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.569432 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e0f99cb1-427f-4992-8c9b-15c285f13189-encryption-config\") pod \"apiserver-76f77b778f-56sqr\" (UID: \"e0f99cb1-427f-4992-8c9b-15c285f13189\") " pod="openshift-apiserver/apiserver-76f77b778f-56sqr" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.569452 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f529668b-54db-49e7-92cb-c3cf6b986dce-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-7gdmn\" (UID: \"f529668b-54db-49e7-92cb-c3cf6b986dce\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gdmn" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.569473 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f529668b-54db-49e7-92cb-c3cf6b986dce-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-7gdmn\" (UID: \"f529668b-54db-49e7-92cb-c3cf6b986dce\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gdmn" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.570826 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f529668b-54db-49e7-92cb-c3cf6b986dce-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-7gdmn\" (UID: \"f529668b-54db-49e7-92cb-c3cf6b986dce\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-7gdmn" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.571292 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/7578c60c-84c2-4dd5-a6c5-576606438ede-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-b5ph7\" (UID: \"7578c60c-84c2-4dd5-a6c5-576606438ede\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b5ph7" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.571453 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f529668b-54db-49e7-92cb-c3cf6b986dce-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-7gdmn\" (UID: \"f529668b-54db-49e7-92cb-c3cf6b986dce\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gdmn" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.571546 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f529668b-54db-49e7-92cb-c3cf6b986dce-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-7gdmn\" (UID: \"f529668b-54db-49e7-92cb-c3cf6b986dce\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gdmn" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.571590 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bb23622-7ae9-45fe-a07d-70c58b4b7f31-config\") pod \"console-operator-58897d9998-5zmps\" (UID: \"9bb23622-7ae9-45fe-a07d-70c58b4b7f31\") " pod="openshift-console-operator/console-operator-58897d9998-5zmps" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.571700 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.571978 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9bb23622-7ae9-45fe-a07d-70c58b4b7f31-trusted-ca\") pod \"console-operator-58897d9998-5zmps\" (UID: \"9bb23622-7ae9-45fe-a07d-70c58b4b7f31\") " pod="openshift-console-operator/console-operator-58897d9998-5zmps" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.572072 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed063d5e-19cb-42cb-89fb-21b3b751f53e-config\") pod \"authentication-operator-69f744f599-ld7g2\" (UID: \"ed063d5e-19cb-42cb-89fb-21b3b751f53e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ld7g2" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.575422 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed063d5e-19cb-42cb-89fb-21b3b751f53e-serving-cert\") pod \"authentication-operator-69f744f599-ld7g2\" (UID: \"ed063d5e-19cb-42cb-89fb-21b3b751f53e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ld7g2" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.575541 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7ce04b81-efe4-4be7-b020-6ea273596c53-config\") pod \"openshift-apiserver-operator-796bbdcf4f-2n6h5\" (UID: \"7ce04b81-efe4-4be7-b020-6ea273596c53\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2n6h5" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.575628 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bea9575a-d7c4-4aaa-bc01-eaee90317eea-client-ca\") pod \"controller-manager-879f6c89f-vvs55\" (UID: \"bea9575a-d7c4-4aaa-bc01-eaee90317eea\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vvs55" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.571786 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.571822 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.571846 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.573885 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.574534 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.576699 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.578312 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.578338 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.579920 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.580368 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-km77q"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.581243 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-km77q" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.581840 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-7wspp"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.582335 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-7wspp" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.583180 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m9s4q"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.583514 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.583947 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.583980 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m9s4q" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.584485 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.585403 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.585482 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.586086 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.589246 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9bb23622-7ae9-45fe-a07d-70c58b4b7f31-serving-cert\") pod \"console-operator-58897d9998-5zmps\" (UID: \"9bb23622-7ae9-45fe-a07d-70c58b4b7f31\") " pod="openshift-console-operator/console-operator-58897d9998-5zmps" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.601454 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.603121 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.604906 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-5zmps"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.607528 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-wc24s"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.612750 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rmbgm"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.612973 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.613460 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-wc24s" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.614085 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rmbgm" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.618505 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.618669 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.620376 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5lcxp"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.621981 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5lcxp" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.623798 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nnvcq"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.625253 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nnvcq" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.625885 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-56sqr"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.631019 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-t8jgq"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.632097 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-t8jgq" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.634726 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-t76z9"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.635326 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-t76z9" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.635869 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.644132 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-ld7g2"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.651557 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.651788 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b4j6f"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.653582 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8z66s"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.653786 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b4j6f" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.657553 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-lk87v"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.657695 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-8z66s" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.658820 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5sxdt"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.659017 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lk87v" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.660020 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dfw97"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.660307 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5sxdt" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.660917 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-vp9gx"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.661141 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dfw97" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.661996 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g7h5s"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.662161 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vp9gx" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.662786 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-kqhcv"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.662946 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g7h5s" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.663462 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lkn87"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.663525 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kqhcv" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.664122 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lkn87" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.666008 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396790-lvhxw"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.666694 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hn48r"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.666889 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396790-lvhxw" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.667193 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hn48r" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.668283 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-qcjgh"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.669259 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-qcjgh" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.670364 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.670486 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-5wvfr"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.672589 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-sgkwc"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.674475 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2n6h5"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.676449 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpb4m\" (UniqueName: \"kubernetes.io/projected/8c2915a8-d452-4234-94a7-f1ec68c95e4a-kube-api-access-vpb4m\") pod \"machine-api-operator-5694c8668f-sgkwc\" (UID: \"8c2915a8-d452-4234-94a7-f1ec68c95e4a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sgkwc" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.676496 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/614def41-0349-470c-afca-e5c335fa8834-service-ca\") pod \"console-f9d7485db-ld9hg\" (UID: \"614def41-0349-470c-afca-e5c335fa8834\") " pod="openshift-console/console-f9d7485db-ld9hg" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.676523 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e6495dc-3c26-45e6-af62-a4957488ae51-client-ca\") pod \"route-controller-manager-6576b87f9c-xmqs2\" (UID: \"7e6495dc-3c26-45e6-af62-a4957488ae51\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xmqs2" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.676546 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e0f99cb1-427f-4992-8c9b-15c285f13189-serving-cert\") pod \"apiserver-76f77b778f-56sqr\" (UID: \"e0f99cb1-427f-4992-8c9b-15c285f13189\") " pod="openshift-apiserver/apiserver-76f77b778f-56sqr" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.676565 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ce04b81-efe4-4be7-b020-6ea273596c53-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-2n6h5\" (UID: \"7ce04b81-efe4-4be7-b020-6ea273596c53\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2n6h5" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.676595 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/e0f99cb1-427f-4992-8c9b-15c285f13189-image-import-ca\") pod \"apiserver-76f77b778f-56sqr\" (UID: \"e0f99cb1-427f-4992-8c9b-15c285f13189\") " pod="openshift-apiserver/apiserver-76f77b778f-56sqr" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.676615 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2r7p\" (UniqueName: \"kubernetes.io/projected/7e6495dc-3c26-45e6-af62-a4957488ae51-kube-api-access-f2r7p\") pod \"route-controller-manager-6576b87f9c-xmqs2\" (UID: \"7e6495dc-3c26-45e6-af62-a4957488ae51\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xmqs2" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.676636 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8787d05b-35f8-4862-9b4f-53e18d3b56ef-metrics-certs\") pod \"router-default-5444994796-wc24s\" (UID: \"8787d05b-35f8-4862-9b4f-53e18d3b56ef\") " pod="openshift-ingress/router-default-5444994796-wc24s" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.676655 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0f99cb1-427f-4992-8c9b-15c285f13189-trusted-ca-bundle\") pod \"apiserver-76f77b778f-56sqr\" (UID: \"e0f99cb1-427f-4992-8c9b-15c285f13189\") " pod="openshift-apiserver/apiserver-76f77b778f-56sqr" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.676673 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqk5f\" (UniqueName: \"kubernetes.io/projected/7ce04b81-efe4-4be7-b020-6ea273596c53-kube-api-access-bqk5f\") pod \"openshift-apiserver-operator-796bbdcf4f-2n6h5\" (UID: \"7ce04b81-efe4-4be7-b020-6ea273596c53\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2n6h5" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.676690 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b1e29324-c165-4f14-8cf4-f8f62522a87e-etcd-client\") pod \"etcd-operator-b45778765-7wspp\" (UID: \"b1e29324-c165-4f14-8cf4-f8f62522a87e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7wspp" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.676708 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e0f99cb1-427f-4992-8c9b-15c285f13189-audit-dir\") pod \"apiserver-76f77b778f-56sqr\" (UID: \"e0f99cb1-427f-4992-8c9b-15c285f13189\") " 
pod="openshift-apiserver/apiserver-76f77b778f-56sqr" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.676728 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/614def41-0349-470c-afca-e5c335fa8834-console-serving-cert\") pod \"console-f9d7485db-ld9hg\" (UID: \"614def41-0349-470c-afca-e5c335fa8834\") " pod="openshift-console/console-f9d7485db-ld9hg" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.676745 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/614def41-0349-470c-afca-e5c335fa8834-console-config\") pod \"console-f9d7485db-ld9hg\" (UID: \"614def41-0349-470c-afca-e5c335fa8834\") " pod="openshift-console/console-f9d7485db-ld9hg" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.676761 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e5a02044-ee67-480e-9cc9-22cf07bc9388-audit-dir\") pod \"apiserver-7bbb656c7d-5znz6\" (UID: \"e5a02044-ee67-480e-9cc9-22cf07bc9388\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5znz6" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.676777 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/8787d05b-35f8-4862-9b4f-53e18d3b56ef-default-certificate\") pod \"router-default-5444994796-wc24s\" (UID: \"8787d05b-35f8-4862-9b4f-53e18d3b56ef\") " pod="openshift-ingress/router-default-5444994796-wc24s" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.676799 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f529668b-54db-49e7-92cb-c3cf6b986dce-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-7gdmn\" (UID: \"f529668b-54db-49e7-92cb-c3cf6b986dce\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gdmn" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.676818 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f529668b-54db-49e7-92cb-c3cf6b986dce-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-7gdmn\" (UID: \"f529668b-54db-49e7-92cb-c3cf6b986dce\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gdmn" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.676837 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e0f99cb1-427f-4992-8c9b-15c285f13189-encryption-config\") pod \"apiserver-76f77b778f-56sqr\" (UID: \"e0f99cb1-427f-4992-8c9b-15c285f13189\") " pod="openshift-apiserver/apiserver-76f77b778f-56sqr" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.676855 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e1caf27-b2b5-4cdf-b500-38d461b637c2-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-m9s4q\" (UID: \"0e1caf27-b2b5-4cdf-b500-38d461b637c2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m9s4q" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.676874 4772 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f529668b-54db-49e7-92cb-c3cf6b986dce-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-7gdmn\" (UID: \"f529668b-54db-49e7-92cb-c3cf6b986dce\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gdmn" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.677065 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/7578c60c-84c2-4dd5-a6c5-576606438ede-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-b5ph7\" (UID: \"7578c60c-84c2-4dd5-a6c5-576606438ede\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b5ph7" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.677087 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbtml\" (UniqueName: \"kubernetes.io/projected/b1e29324-c165-4f14-8cf4-f8f62522a87e-kube-api-access-nbtml\") pod \"etcd-operator-b45778765-7wspp\" (UID: \"b1e29324-c165-4f14-8cf4-f8f62522a87e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7wspp" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.677148 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f529668b-54db-49e7-92cb-c3cf6b986dce-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-7gdmn\" (UID: \"f529668b-54db-49e7-92cb-c3cf6b986dce\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gdmn" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.677170 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed063d5e-19cb-42cb-89fb-21b3b751f53e-config\") pod \"authentication-operator-69f744f599-ld7g2\" (UID: \"ed063d5e-19cb-42cb-89fb-21b3b751f53e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ld7g2" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.677194 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f529668b-54db-49e7-92cb-c3cf6b986dce-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-7gdmn\" (UID: \"f529668b-54db-49e7-92cb-c3cf6b986dce\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gdmn" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.677211 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed063d5e-19cb-42cb-89fb-21b3b751f53e-serving-cert\") pod \"authentication-operator-69f744f599-ld7g2\" (UID: \"ed063d5e-19cb-42cb-89fb-21b3b751f53e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ld7g2" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.677247 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ce04b81-efe4-4be7-b020-6ea273596c53-config\") pod \"openshift-apiserver-operator-796bbdcf4f-2n6h5\" (UID: \"7ce04b81-efe4-4be7-b020-6ea273596c53\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2n6h5" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.677268 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/bea9575a-d7c4-4aaa-bc01-eaee90317eea-client-ca\") pod \"controller-manager-879f6c89f-vvs55\" (UID: \"bea9575a-d7c4-4aaa-bc01-eaee90317eea\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vvs55" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.677286 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1e29324-c165-4f14-8cf4-f8f62522a87e-serving-cert\") pod \"etcd-operator-b45778765-7wspp\" (UID: \"b1e29324-c165-4f14-8cf4-f8f62522a87e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7wspp" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.677302 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8787d05b-35f8-4862-9b4f-53e18d3b56ef-service-ca-bundle\") pod \"router-default-5444994796-wc24s\" (UID: \"8787d05b-35f8-4862-9b4f-53e18d3b56ef\") " pod="openshift-ingress/router-default-5444994796-wc24s" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.677320 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0f99cb1-427f-4992-8c9b-15c285f13189-config\") pod \"apiserver-76f77b778f-56sqr\" (UID: \"e0f99cb1-427f-4992-8c9b-15c285f13189\") " pod="openshift-apiserver/apiserver-76f77b778f-56sqr" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.677339 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f529668b-54db-49e7-92cb-c3cf6b986dce-audit-policies\") pod \"oauth-openshift-558db77b4-7gdmn\" (UID: \"f529668b-54db-49e7-92cb-c3cf6b986dce\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gdmn" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.677358 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7578c60c-84c2-4dd5-a6c5-576606438ede-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-b5ph7\" (UID: \"7578c60c-84c2-4dd5-a6c5-576606438ede\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b5ph7" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.677382 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/e0f99cb1-427f-4992-8c9b-15c285f13189-audit\") pod \"apiserver-76f77b778f-56sqr\" (UID: \"e0f99cb1-427f-4992-8c9b-15c285f13189\") " pod="openshift-apiserver/apiserver-76f77b778f-56sqr" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.677406 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtwvh\" (UniqueName: \"kubernetes.io/projected/e5a02044-ee67-480e-9cc9-22cf07bc9388-kube-api-access-wtwvh\") pod \"apiserver-7bbb656c7d-5znz6\" (UID: \"e5a02044-ee67-480e-9cc9-22cf07bc9388\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5znz6" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.677451 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0e1caf27-b2b5-4cdf-b500-38d461b637c2-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-m9s4q\" (UID: \"0e1caf27-b2b5-4cdf-b500-38d461b637c2\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m9s4q" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.677609 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f529668b-54db-49e7-92cb-c3cf6b986dce-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-7gdmn\" (UID: \"f529668b-54db-49e7-92cb-c3cf6b986dce\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gdmn" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.677627 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e5a02044-ee67-480e-9cc9-22cf07bc9388-etcd-client\") pod \"apiserver-7bbb656c7d-5znz6\" (UID: \"e5a02044-ee67-480e-9cc9-22cf07bc9388\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5znz6" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.677645 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e0f99cb1-427f-4992-8c9b-15c285f13189-etcd-client\") pod \"apiserver-76f77b778f-56sqr\" (UID: \"e0f99cb1-427f-4992-8c9b-15c285f13189\") " pod="openshift-apiserver/apiserver-76f77b778f-56sqr" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.677663 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f529668b-54db-49e7-92cb-c3cf6b986dce-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-7gdmn\" (UID: \"f529668b-54db-49e7-92cb-c3cf6b986dce\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gdmn" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.677756 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e1caf27-b2b5-4cdf-b500-38d461b637c2-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-m9s4q\" (UID: \"0e1caf27-b2b5-4cdf-b500-38d461b637c2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m9s4q" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.677783 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1411a454-22f4-4eef-828a-6a46c81c6c7e-auth-proxy-config\") pod \"machine-approver-56656f9798-v9s84\" (UID: \"1411a454-22f4-4eef-828a-6a46c81c6c7e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v9s84" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.677799 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/614def41-0349-470c-afca-e5c335fa8834-console-oauth-config\") pod \"console-f9d7485db-ld9hg\" (UID: \"614def41-0349-470c-afca-e5c335fa8834\") " pod="openshift-console/console-f9d7485db-ld9hg" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.677813 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e5a02044-ee67-480e-9cc9-22cf07bc9388-audit-policies\") pod \"apiserver-7bbb656c7d-5znz6\" (UID: \"e5a02044-ee67-480e-9cc9-22cf07bc9388\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5znz6" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.677838 4772 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5a02044-ee67-480e-9cc9-22cf07bc9388-serving-cert\") pod \"apiserver-7bbb656c7d-5znz6\" (UID: \"e5a02044-ee67-480e-9cc9-22cf07bc9388\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5znz6" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.677893 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62vrs\" (UniqueName: \"kubernetes.io/projected/bea9575a-d7c4-4aaa-bc01-eaee90317eea-kube-api-access-62vrs\") pod \"controller-manager-879f6c89f-vvs55\" (UID: \"bea9575a-d7c4-4aaa-bc01-eaee90317eea\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vvs55" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.677919 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ht25z\" (UniqueName: \"kubernetes.io/projected/f529668b-54db-49e7-92cb-c3cf6b986dce-kube-api-access-ht25z\") pod \"oauth-openshift-558db77b4-7gdmn\" (UID: \"f529668b-54db-49e7-92cb-c3cf6b986dce\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gdmn" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.677938 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f529668b-54db-49e7-92cb-c3cf6b986dce-audit-dir\") pod \"oauth-openshift-558db77b4-7gdmn\" (UID: \"f529668b-54db-49e7-92cb-c3cf6b986dce\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gdmn" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.677954 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed063d5e-19cb-42cb-89fb-21b3b751f53e-service-ca-bundle\") pod \"authentication-operator-69f744f599-ld7g2\" (UID: \"ed063d5e-19cb-42cb-89fb-21b3b751f53e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ld7g2" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.677969 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e6495dc-3c26-45e6-af62-a4957488ae51-config\") pod \"route-controller-manager-6576b87f9c-xmqs2\" (UID: \"7e6495dc-3c26-45e6-af62-a4957488ae51\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xmqs2" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.677986 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/b1e29324-c165-4f14-8cf4-f8f62522a87e-etcd-ca\") pod \"etcd-operator-b45778765-7wspp\" (UID: \"b1e29324-c165-4f14-8cf4-f8f62522a87e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7wspp" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.677984 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/e0f99cb1-427f-4992-8c9b-15c285f13189-image-import-ca\") pod \"apiserver-76f77b778f-56sqr\" (UID: \"e0f99cb1-427f-4992-8c9b-15c285f13189\") " pod="openshift-apiserver/apiserver-76f77b778f-56sqr" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.678001 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-v2gm9"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.678003 4772 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/b1e29324-c165-4f14-8cf4-f8f62522a87e-etcd-service-ca\") pod \"etcd-operator-b45778765-7wspp\" (UID: \"b1e29324-c165-4f14-8cf4-f8f62522a87e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7wspp" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.678116 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e0f99cb1-427f-4992-8c9b-15c285f13189-node-pullsecrets\") pod \"apiserver-76f77b778f-56sqr\" (UID: \"e0f99cb1-427f-4992-8c9b-15c285f13189\") " pod="openshift-apiserver/apiserver-76f77b778f-56sqr" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.678170 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4k2gp\" (UniqueName: \"kubernetes.io/projected/e0f99cb1-427f-4992-8c9b-15c285f13189-kube-api-access-4k2gp\") pod \"apiserver-76f77b778f-56sqr\" (UID: \"e0f99cb1-427f-4992-8c9b-15c285f13189\") " pod="openshift-apiserver/apiserver-76f77b778f-56sqr" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.678210 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f529668b-54db-49e7-92cb-c3cf6b986dce-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-7gdmn\" (UID: \"f529668b-54db-49e7-92cb-c3cf6b986dce\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gdmn" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.678236 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed063d5e-19cb-42cb-89fb-21b3b751f53e-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-ld7g2\" (UID: \"ed063d5e-19cb-42cb-89fb-21b3b751f53e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ld7g2" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.678261 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e6495dc-3c26-45e6-af62-a4957488ae51-serving-cert\") pod \"route-controller-manager-6576b87f9c-xmqs2\" (UID: \"7e6495dc-3c26-45e6-af62-a4957488ae51\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xmqs2" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.678288 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1e29324-c165-4f14-8cf4-f8f62522a87e-config\") pod \"etcd-operator-b45778765-7wspp\" (UID: \"b1e29324-c165-4f14-8cf4-f8f62522a87e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7wspp" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.678312 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/8787d05b-35f8-4862-9b4f-53e18d3b56ef-stats-auth\") pod \"router-default-5444994796-wc24s\" (UID: \"8787d05b-35f8-4862-9b4f-53e18d3b56ef\") " pod="openshift-ingress/router-default-5444994796-wc24s" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.678343 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e0f99cb1-427f-4992-8c9b-15c285f13189-etcd-serving-ca\") pod 
\"apiserver-76f77b778f-56sqr\" (UID: \"e0f99cb1-427f-4992-8c9b-15c285f13189\") " pod="openshift-apiserver/apiserver-76f77b778f-56sqr" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.678366 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8c2915a8-d452-4234-94a7-f1ec68c95e4a-images\") pod \"machine-api-operator-5694c8668f-sgkwc\" (UID: \"8c2915a8-d452-4234-94a7-f1ec68c95e4a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sgkwc" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.678388 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pd4q2\" (UniqueName: \"kubernetes.io/projected/7578c60c-84c2-4dd5-a6c5-576606438ede-kube-api-access-pd4q2\") pod \"cluster-image-registry-operator-dc59b4c8b-b5ph7\" (UID: \"7578c60c-84c2-4dd5-a6c5-576606438ede\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b5ph7" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.678418 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e5a02044-ee67-480e-9cc9-22cf07bc9388-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-5znz6\" (UID: \"e5a02044-ee67-480e-9cc9-22cf07bc9388\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5znz6" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.678450 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5a02044-ee67-480e-9cc9-22cf07bc9388-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-5znz6\" (UID: \"e5a02044-ee67-480e-9cc9-22cf07bc9388\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5znz6" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.678479 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/614def41-0349-470c-afca-e5c335fa8834-oauth-serving-cert\") pod \"console-f9d7485db-ld9hg\" (UID: \"614def41-0349-470c-afca-e5c335fa8834\") " pod="openshift-console/console-f9d7485db-ld9hg" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.678503 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zr6mh\" (UniqueName: \"kubernetes.io/projected/ccd507b9-7746-46ee-8bd4-1134cc290f67-kube-api-access-zr6mh\") pod \"openshift-config-operator-7777fb866f-5wvfr\" (UID: \"ccd507b9-7746-46ee-8bd4-1134cc290f67\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5wvfr" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.678529 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6f4a455f-fdda-46bf-bebf-67f1b83863c8-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-wbcgw\" (UID: \"6f4a455f-fdda-46bf-bebf-67f1b83863c8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wbcgw" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.678553 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f529668b-54db-49e7-92cb-c3cf6b986dce-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-7gdmn\" (UID: \"f529668b-54db-49e7-92cb-c3cf6b986dce\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gdmn" Nov 22 10:40:32 crc 
kubenswrapper[4772]: I1122 10:40:32.678576 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f529668b-54db-49e7-92cb-c3cf6b986dce-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-7gdmn\" (UID: \"f529668b-54db-49e7-92cb-c3cf6b986dce\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gdmn" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.678598 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/614def41-0349-470c-afca-e5c335fa8834-trusted-ca-bundle\") pod \"console-f9d7485db-ld9hg\" (UID: \"614def41-0349-470c-afca-e5c335fa8834\") " pod="openshift-console/console-f9d7485db-ld9hg" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.678624 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8c2915a8-d452-4234-94a7-f1ec68c95e4a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-sgkwc\" (UID: \"8c2915a8-d452-4234-94a7-f1ec68c95e4a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sgkwc" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.678645 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e6495dc-3c26-45e6-af62-a4957488ae51-client-ca\") pod \"route-controller-manager-6576b87f9c-xmqs2\" (UID: \"7e6495dc-3c26-45e6-af62-a4957488ae51\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xmqs2" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.678654 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8wnx\" (UniqueName: \"kubernetes.io/projected/8787d05b-35f8-4862-9b4f-53e18d3b56ef-kube-api-access-j8wnx\") pod \"router-default-5444994796-wc24s\" (UID: \"8787d05b-35f8-4862-9b4f-53e18d3b56ef\") " pod="openshift-ingress/router-default-5444994796-wc24s" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.678710 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0f99cb1-427f-4992-8c9b-15c285f13189-trusted-ca-bundle\") pod \"apiserver-76f77b778f-56sqr\" (UID: \"e0f99cb1-427f-4992-8c9b-15c285f13189\") " pod="openshift-apiserver/apiserver-76f77b778f-56sqr" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.678747 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1411a454-22f4-4eef-828a-6a46c81c6c7e-machine-approver-tls\") pod \"machine-approver-56656f9798-v9s84\" (UID: \"1411a454-22f4-4eef-828a-6a46c81c6c7e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v9s84" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.678811 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbjl2\" (UniqueName: \"kubernetes.io/projected/1411a454-22f4-4eef-828a-6a46c81c6c7e-kube-api-access-hbjl2\") pod \"machine-approver-56656f9798-v9s84\" (UID: \"1411a454-22f4-4eef-828a-6a46c81c6c7e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v9s84" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.678843 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8c2915a8-d452-4234-94a7-f1ec68c95e4a-config\") pod \"machine-api-operator-5694c8668f-sgkwc\" (UID: \"8c2915a8-d452-4234-94a7-f1ec68c95e4a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sgkwc" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.678879 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xszvl\" (UniqueName: \"kubernetes.io/projected/a8554d37-40ae-41ef-bed9-7c79b3f8083e-kube-api-access-xszvl\") pod \"downloads-7954f5f757-v2gm9\" (UID: \"a8554d37-40ae-41ef-bed9-7c79b3f8083e\") " pod="openshift-console/downloads-7954f5f757-v2gm9" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.678907 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcw59\" (UniqueName: \"kubernetes.io/projected/6f4a455f-fdda-46bf-bebf-67f1b83863c8-kube-api-access-pcw59\") pod \"cluster-samples-operator-665b6dd947-wbcgw\" (UID: \"6f4a455f-fdda-46bf-bebf-67f1b83863c8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wbcgw" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.678946 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6d99\" (UniqueName: \"kubernetes.io/projected/614def41-0349-470c-afca-e5c335fa8834-kube-api-access-c6d99\") pod \"console-f9d7485db-ld9hg\" (UID: \"614def41-0349-470c-afca-e5c335fa8834\") " pod="openshift-console/console-f9d7485db-ld9hg" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.678975 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7578c60c-84c2-4dd5-a6c5-576606438ede-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-b5ph7\" (UID: \"7578c60c-84c2-4dd5-a6c5-576606438ede\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b5ph7" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.679012 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bea9575a-d7c4-4aaa-bc01-eaee90317eea-config\") pod \"controller-manager-879f6c89f-vvs55\" (UID: \"bea9575a-d7c4-4aaa-bc01-eaee90317eea\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vvs55" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.679064 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ccd507b9-7746-46ee-8bd4-1134cc290f67-serving-cert\") pod \"openshift-config-operator-7777fb866f-5wvfr\" (UID: \"ccd507b9-7746-46ee-8bd4-1134cc290f67\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5wvfr" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.679097 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/ccd507b9-7746-46ee-8bd4-1134cc290f67-available-featuregates\") pod \"openshift-config-operator-7777fb866f-5wvfr\" (UID: \"ccd507b9-7746-46ee-8bd4-1134cc290f67\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5wvfr" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.679123 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1411a454-22f4-4eef-828a-6a46c81c6c7e-config\") pod \"machine-approver-56656f9798-v9s84\" (UID: \"1411a454-22f4-4eef-828a-6a46c81c6c7e\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v9s84" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.679146 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bea9575a-d7c4-4aaa-bc01-eaee90317eea-serving-cert\") pod \"controller-manager-879f6c89f-vvs55\" (UID: \"bea9575a-d7c4-4aaa-bc01-eaee90317eea\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vvs55" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.679176 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f529668b-54db-49e7-92cb-c3cf6b986dce-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-7gdmn\" (UID: \"f529668b-54db-49e7-92cb-c3cf6b986dce\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gdmn" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.679205 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcd8j\" (UniqueName: \"kubernetes.io/projected/ed063d5e-19cb-42cb-89fb-21b3b751f53e-kube-api-access-jcd8j\") pod \"authentication-operator-69f744f599-ld7g2\" (UID: \"ed063d5e-19cb-42cb-89fb-21b3b751f53e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ld7g2" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.679239 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e5a02044-ee67-480e-9cc9-22cf07bc9388-encryption-config\") pod \"apiserver-7bbb656c7d-5znz6\" (UID: \"e5a02044-ee67-480e-9cc9-22cf07bc9388\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5znz6" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.679265 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bea9575a-d7c4-4aaa-bc01-eaee90317eea-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-vvs55\" (UID: \"bea9575a-d7c4-4aaa-bc01-eaee90317eea\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vvs55" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.680226 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0f99cb1-427f-4992-8c9b-15c285f13189-serving-cert\") pod \"apiserver-76f77b778f-56sqr\" (UID: \"e0f99cb1-427f-4992-8c9b-15c285f13189\") " pod="openshift-apiserver/apiserver-76f77b778f-56sqr" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.680384 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bea9575a-d7c4-4aaa-bc01-eaee90317eea-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-vvs55\" (UID: \"bea9575a-d7c4-4aaa-bc01-eaee90317eea\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vvs55" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.680664 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e0f99cb1-427f-4992-8c9b-15c285f13189-audit-dir\") pod \"apiserver-76f77b778f-56sqr\" (UID: \"e0f99cb1-427f-4992-8c9b-15c285f13189\") " pod="openshift-apiserver/apiserver-76f77b778f-56sqr" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.681446 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/614def41-0349-470c-afca-e5c335fa8834-console-config\") pod \"console-f9d7485db-ld9hg\" (UID: \"614def41-0349-470c-afca-e5c335fa8834\") " pod="openshift-console/console-f9d7485db-ld9hg" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.681505 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e5a02044-ee67-480e-9cc9-22cf07bc9388-audit-dir\") pod \"apiserver-7bbb656c7d-5znz6\" (UID: \"e5a02044-ee67-480e-9cc9-22cf07bc9388\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5znz6" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.678716 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e0f99cb1-427f-4992-8c9b-15c285f13189-node-pullsecrets\") pod \"apiserver-76f77b778f-56sqr\" (UID: \"e0f99cb1-427f-4992-8c9b-15c285f13189\") " pod="openshift-apiserver/apiserver-76f77b778f-56sqr" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.682068 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-ld9hg"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.682423 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c2915a8-d452-4234-94a7-f1ec68c95e4a-config\") pod \"machine-api-operator-5694c8668f-sgkwc\" (UID: \"8c2915a8-d452-4234-94a7-f1ec68c95e4a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sgkwc" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.682843 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/ccd507b9-7746-46ee-8bd4-1134cc290f67-available-featuregates\") pod \"openshift-config-operator-7777fb866f-5wvfr\" (UID: \"ccd507b9-7746-46ee-8bd4-1134cc290f67\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5wvfr" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.683491 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-jzjnj"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.683700 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e5a02044-ee67-480e-9cc9-22cf07bc9388-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-5znz6\" (UID: \"e5a02044-ee67-480e-9cc9-22cf07bc9388\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5znz6" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.684387 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/614def41-0349-470c-afca-e5c335fa8834-service-ca\") pod \"console-f9d7485db-ld9hg\" (UID: \"614def41-0349-470c-afca-e5c335fa8834\") " pod="openshift-console/console-f9d7485db-ld9hg" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.684958 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed063d5e-19cb-42cb-89fb-21b3b751f53e-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-ld7g2\" (UID: \"ed063d5e-19cb-42cb-89fb-21b3b751f53e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ld7g2" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.684965 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/bea9575a-d7c4-4aaa-bc01-eaee90317eea-config\") pod \"controller-manager-879f6c89f-vvs55\" (UID: \"bea9575a-d7c4-4aaa-bc01-eaee90317eea\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vvs55" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.685430 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-jzjnj" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.685602 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5a02044-ee67-480e-9cc9-22cf07bc9388-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-5znz6\" (UID: \"e5a02044-ee67-480e-9cc9-22cf07bc9388\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5znz6" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.685695 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e0f99cb1-427f-4992-8c9b-15c285f13189-etcd-serving-ca\") pod \"apiserver-76f77b778f-56sqr\" (UID: \"e0f99cb1-427f-4992-8c9b-15c285f13189\") " pod="openshift-apiserver/apiserver-76f77b778f-56sqr" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.686089 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1411a454-22f4-4eef-828a-6a46c81c6c7e-config\") pod \"machine-approver-56656f9798-v9s84\" (UID: \"1411a454-22f4-4eef-828a-6a46c81c6c7e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v9s84" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.686438 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8c2915a8-d452-4234-94a7-f1ec68c95e4a-images\") pod \"machine-api-operator-5694c8668f-sgkwc\" (UID: \"8c2915a8-d452-4234-94a7-f1ec68c95e4a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sgkwc" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.686475 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/e0f99cb1-427f-4992-8c9b-15c285f13189-audit\") pod \"apiserver-76f77b778f-56sqr\" (UID: \"e0f99cb1-427f-4992-8c9b-15c285f13189\") " pod="openshift-apiserver/apiserver-76f77b778f-56sqr" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.687317 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f529668b-54db-49e7-92cb-c3cf6b986dce-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-7gdmn\" (UID: \"f529668b-54db-49e7-92cb-c3cf6b986dce\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gdmn" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.687618 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed063d5e-19cb-42cb-89fb-21b3b751f53e-config\") pod \"authentication-operator-69f744f599-ld7g2\" (UID: \"ed063d5e-19cb-42cb-89fb-21b3b751f53e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ld7g2" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.687777 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-bgkbs"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.687826 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/e0f99cb1-427f-4992-8c9b-15c285f13189-config\") pod \"apiserver-76f77b778f-56sqr\" (UID: \"e0f99cb1-427f-4992-8c9b-15c285f13189\") " pod="openshift-apiserver/apiserver-76f77b778f-56sqr" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.687786 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/614def41-0349-470c-afca-e5c335fa8834-oauth-serving-cert\") pod \"console-f9d7485db-ld9hg\" (UID: \"614def41-0349-470c-afca-e5c335fa8834\") " pod="openshift-console/console-f9d7485db-ld9hg" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.688104 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1411a454-22f4-4eef-828a-6a46c81c6c7e-machine-approver-tls\") pod \"machine-approver-56656f9798-v9s84\" (UID: \"1411a454-22f4-4eef-828a-6a46c81c6c7e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v9s84" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.688500 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f529668b-54db-49e7-92cb-c3cf6b986dce-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-7gdmn\" (UID: \"f529668b-54db-49e7-92cb-c3cf6b986dce\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gdmn" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.688510 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f529668b-54db-49e7-92cb-c3cf6b986dce-audit-policies\") pod \"oauth-openshift-558db77b4-7gdmn\" (UID: \"f529668b-54db-49e7-92cb-c3cf6b986dce\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gdmn" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.688630 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f529668b-54db-49e7-92cb-c3cf6b986dce-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-7gdmn\" (UID: \"f529668b-54db-49e7-92cb-c3cf6b986dce\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gdmn" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.688861 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bea9575a-d7c4-4aaa-bc01-eaee90317eea-client-ca\") pod \"controller-manager-879f6c89f-vvs55\" (UID: \"bea9575a-d7c4-4aaa-bc01-eaee90317eea\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vvs55" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.689069 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ce04b81-efe4-4be7-b020-6ea273596c53-config\") pod \"openshift-apiserver-operator-796bbdcf4f-2n6h5\" (UID: \"7ce04b81-efe4-4be7-b020-6ea273596c53\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2n6h5" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.689202 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f529668b-54db-49e7-92cb-c3cf6b986dce-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-7gdmn\" (UID: \"f529668b-54db-49e7-92cb-c3cf6b986dce\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-7gdmn" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.689802 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7578c60c-84c2-4dd5-a6c5-576606438ede-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-b5ph7\" (UID: \"7578c60c-84c2-4dd5-a6c5-576606438ede\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b5ph7" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.689804 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/614def41-0349-470c-afca-e5c335fa8834-console-serving-cert\") pod \"console-f9d7485db-ld9hg\" (UID: \"614def41-0349-470c-afca-e5c335fa8834\") " pod="openshift-console/console-f9d7485db-ld9hg" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.690462 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed063d5e-19cb-42cb-89fb-21b3b751f53e-serving-cert\") pod \"authentication-operator-69f744f599-ld7g2\" (UID: \"ed063d5e-19cb-42cb-89fb-21b3b751f53e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ld7g2" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.690571 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f529668b-54db-49e7-92cb-c3cf6b986dce-audit-dir\") pod \"oauth-openshift-558db77b4-7gdmn\" (UID: \"f529668b-54db-49e7-92cb-c3cf6b986dce\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gdmn" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.690801 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8c2915a8-d452-4234-94a7-f1ec68c95e4a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-sgkwc\" (UID: \"8c2915a8-d452-4234-94a7-f1ec68c95e4a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sgkwc" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.690992 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8z66s"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.691472 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rmbgm"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.691496 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed063d5e-19cb-42cb-89fb-21b3b751f53e-service-ca-bundle\") pod \"authentication-operator-69f744f599-ld7g2\" (UID: \"ed063d5e-19cb-42cb-89fb-21b3b751f53e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ld7g2" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.691900 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ce04b81-efe4-4be7-b020-6ea273596c53-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-2n6h5\" (UID: \"7ce04b81-efe4-4be7-b020-6ea273596c53\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2n6h5" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.692662 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7e6495dc-3c26-45e6-af62-a4957488ae51-config\") pod \"route-controller-manager-6576b87f9c-xmqs2\" (UID: \"7e6495dc-3c26-45e6-af62-a4957488ae51\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xmqs2" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.692769 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.692803 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6f4a455f-fdda-46bf-bebf-67f1b83863c8-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-wbcgw\" (UID: \"6f4a455f-fdda-46bf-bebf-67f1b83863c8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wbcgw" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.692852 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e5a02044-ee67-480e-9cc9-22cf07bc9388-audit-policies\") pod \"apiserver-7bbb656c7d-5znz6\" (UID: \"e5a02044-ee67-480e-9cc9-22cf07bc9388\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5znz6" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.693195 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ccd507b9-7746-46ee-8bd4-1134cc290f67-serving-cert\") pod \"openshift-config-operator-7777fb866f-5wvfr\" (UID: \"ccd507b9-7746-46ee-8bd4-1134cc290f67\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5wvfr" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.693504 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f529668b-54db-49e7-92cb-c3cf6b986dce-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-7gdmn\" (UID: \"f529668b-54db-49e7-92cb-c3cf6b986dce\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gdmn" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.693660 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f529668b-54db-49e7-92cb-c3cf6b986dce-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-7gdmn\" (UID: \"f529668b-54db-49e7-92cb-c3cf6b986dce\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gdmn" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.694102 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/614def41-0349-470c-afca-e5c335fa8834-trusted-ca-bundle\") pod \"console-f9d7485db-ld9hg\" (UID: \"614def41-0349-470c-afca-e5c335fa8834\") " pod="openshift-console/console-f9d7485db-ld9hg" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.694155 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-vvs55"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.695085 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e5a02044-ee67-480e-9cc9-22cf07bc9388-etcd-client\") pod \"apiserver-7bbb656c7d-5znz6\" (UID: \"e5a02044-ee67-480e-9cc9-22cf07bc9388\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5znz6" Nov 22 10:40:32 crc 
kubenswrapper[4772]: I1122 10:40:32.695793 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e0f99cb1-427f-4992-8c9b-15c285f13189-etcd-client\") pod \"apiserver-76f77b778f-56sqr\" (UID: \"e0f99cb1-427f-4992-8c9b-15c285f13189\") " pod="openshift-apiserver/apiserver-76f77b778f-56sqr" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.695961 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5a02044-ee67-480e-9cc9-22cf07bc9388-serving-cert\") pod \"apiserver-7bbb656c7d-5znz6\" (UID: \"e5a02044-ee67-480e-9cc9-22cf07bc9388\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5znz6" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.695967 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f529668b-54db-49e7-92cb-c3cf6b986dce-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-7gdmn\" (UID: \"f529668b-54db-49e7-92cb-c3cf6b986dce\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gdmn" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.695965 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wbcgw"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.697000 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/7578c60c-84c2-4dd5-a6c5-576606438ede-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-b5ph7\" (UID: \"7578c60c-84c2-4dd5-a6c5-576606438ede\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b5ph7" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.696998 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e5a02044-ee67-480e-9cc9-22cf07bc9388-encryption-config\") pod \"apiserver-7bbb656c7d-5znz6\" (UID: \"e5a02044-ee67-480e-9cc9-22cf07bc9388\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5znz6" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.697660 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f529668b-54db-49e7-92cb-c3cf6b986dce-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-7gdmn\" (UID: \"f529668b-54db-49e7-92cb-c3cf6b986dce\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gdmn" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.697951 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1411a454-22f4-4eef-828a-6a46c81c6c7e-auth-proxy-config\") pod \"machine-approver-56656f9798-v9s84\" (UID: \"1411a454-22f4-4eef-828a-6a46c81c6c7e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v9s84" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.698117 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e6495dc-3c26-45e6-af62-a4957488ae51-serving-cert\") pod \"route-controller-manager-6576b87f9c-xmqs2\" (UID: \"7e6495dc-3c26-45e6-af62-a4957488ae51\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xmqs2" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 
10:40:32.698588 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bea9575a-d7c4-4aaa-bc01-eaee90317eea-serving-cert\") pod \"controller-manager-879f6c89f-vvs55\" (UID: \"bea9575a-d7c4-4aaa-bc01-eaee90317eea\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vvs55" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.698733 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f529668b-54db-49e7-92cb-c3cf6b986dce-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-7gdmn\" (UID: \"f529668b-54db-49e7-92cb-c3cf6b986dce\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gdmn" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.701454 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f529668b-54db-49e7-92cb-c3cf6b986dce-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-7gdmn\" (UID: \"f529668b-54db-49e7-92cb-c3cf6b986dce\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gdmn" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.705886 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e0f99cb1-427f-4992-8c9b-15c285f13189-encryption-config\") pod \"apiserver-76f77b778f-56sqr\" (UID: \"e0f99cb1-427f-4992-8c9b-15c285f13189\") " pod="openshift-apiserver/apiserver-76f77b778f-56sqr" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.707890 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7gdmn"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.710359 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-5znz6"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.714804 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/614def41-0349-470c-afca-e5c335fa8834-console-oauth-config\") pod \"console-f9d7485db-ld9hg\" (UID: \"614def41-0349-470c-afca-e5c335fa8834\") " pod="openshift-console/console-f9d7485db-ld9hg" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.714904 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m9s4q"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.716371 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f529668b-54db-49e7-92cb-c3cf6b986dce-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-7gdmn\" (UID: \"f529668b-54db-49e7-92cb-c3cf6b986dce\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gdmn" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.717061 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-t8jgq"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.723600 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-vhvzn"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.724585 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5lcxp"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.724689 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-vhvzn" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.725145 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-lk87v"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.727290 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b5ph7"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.728836 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5sxdt"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.729218 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2pg9\" (UniqueName: \"kubernetes.io/projected/9bb23622-7ae9-45fe-a07d-70c58b4b7f31-kube-api-access-c2pg9\") pod \"console-operator-58897d9998-5zmps\" (UID: \"9bb23622-7ae9-45fe-a07d-70c58b4b7f31\") " pod="openshift-console-operator/console-operator-58897d9998-5zmps" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.730384 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g7h5s"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.730852 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.731884 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b4j6f"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.733950 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xmqs2"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.734996 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-km77q"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.736015 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-t76z9"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.737081 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-vp9gx"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.738963 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-7wspp"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.740534 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-kqhcv"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.741802 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-hlvq7"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.743114 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-jzjnj"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.744385 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-2g4sz"] 
Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.745207 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-2g4sz" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.745727 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-9vsnk"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.746736 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-9vsnk" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.747256 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396790-lvhxw"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.748660 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dfw97"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.749965 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lkn87"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.750635 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.752080 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nnvcq"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.753468 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hn48r"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.755167 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-9vsnk"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.756421 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-qcjgh"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.756633 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-5zmps" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.757869 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-vhvzn"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.771121 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.779925 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8787d05b-35f8-4862-9b4f-53e18d3b56ef-metrics-certs\") pod \"router-default-5444994796-wc24s\" (UID: \"8787d05b-35f8-4862-9b4f-53e18d3b56ef\") " pod="openshift-ingress/router-default-5444994796-wc24s" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.779976 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b1e29324-c165-4f14-8cf4-f8f62522a87e-etcd-client\") pod \"etcd-operator-b45778765-7wspp\" (UID: \"b1e29324-c165-4f14-8cf4-f8f62522a87e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7wspp" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.780002 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/8787d05b-35f8-4862-9b4f-53e18d3b56ef-default-certificate\") pod \"router-default-5444994796-wc24s\" (UID: \"8787d05b-35f8-4862-9b4f-53e18d3b56ef\") " pod="openshift-ingress/router-default-5444994796-wc24s" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.780025 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e1caf27-b2b5-4cdf-b500-38d461b637c2-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-m9s4q\" (UID: \"0e1caf27-b2b5-4cdf-b500-38d461b637c2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m9s4q" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.780066 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbtml\" (UniqueName: \"kubernetes.io/projected/b1e29324-c165-4f14-8cf4-f8f62522a87e-kube-api-access-nbtml\") pod \"etcd-operator-b45778765-7wspp\" (UID: \"b1e29324-c165-4f14-8cf4-f8f62522a87e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7wspp" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.780090 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1e29324-c165-4f14-8cf4-f8f62522a87e-serving-cert\") pod \"etcd-operator-b45778765-7wspp\" (UID: \"b1e29324-c165-4f14-8cf4-f8f62522a87e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7wspp" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.780110 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8787d05b-35f8-4862-9b4f-53e18d3b56ef-service-ca-bundle\") pod \"router-default-5444994796-wc24s\" (UID: \"8787d05b-35f8-4862-9b4f-53e18d3b56ef\") " pod="openshift-ingress/router-default-5444994796-wc24s" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.780179 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/0e1caf27-b2b5-4cdf-b500-38d461b637c2-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-m9s4q\" (UID: \"0e1caf27-b2b5-4cdf-b500-38d461b637c2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m9s4q" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.780214 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e1caf27-b2b5-4cdf-b500-38d461b637c2-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-m9s4q\" (UID: \"0e1caf27-b2b5-4cdf-b500-38d461b637c2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m9s4q" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.780250 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/b1e29324-c165-4f14-8cf4-f8f62522a87e-etcd-ca\") pod \"etcd-operator-b45778765-7wspp\" (UID: \"b1e29324-c165-4f14-8cf4-f8f62522a87e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7wspp" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.780271 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/b1e29324-c165-4f14-8cf4-f8f62522a87e-etcd-service-ca\") pod \"etcd-operator-b45778765-7wspp\" (UID: \"b1e29324-c165-4f14-8cf4-f8f62522a87e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7wspp" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.780304 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1e29324-c165-4f14-8cf4-f8f62522a87e-config\") pod \"etcd-operator-b45778765-7wspp\" (UID: \"b1e29324-c165-4f14-8cf4-f8f62522a87e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7wspp" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.780323 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/8787d05b-35f8-4862-9b4f-53e18d3b56ef-stats-auth\") pod \"router-default-5444994796-wc24s\" (UID: \"8787d05b-35f8-4862-9b4f-53e18d3b56ef\") " pod="openshift-ingress/router-default-5444994796-wc24s" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.780365 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8wnx\" (UniqueName: \"kubernetes.io/projected/8787d05b-35f8-4862-9b4f-53e18d3b56ef-kube-api-access-j8wnx\") pod \"router-default-5444994796-wc24s\" (UID: \"8787d05b-35f8-4862-9b4f-53e18d3b56ef\") " pod="openshift-ingress/router-default-5444994796-wc24s" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.791395 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.810462 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.811405 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1e29324-c165-4f14-8cf4-f8f62522a87e-config\") pod \"etcd-operator-b45778765-7wspp\" (UID: \"b1e29324-c165-4f14-8cf4-f8f62522a87e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7wspp" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.831456 4772 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.852475 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.867203 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1e29324-c165-4f14-8cf4-f8f62522a87e-serving-cert\") pod \"etcd-operator-b45778765-7wspp\" (UID: \"b1e29324-c165-4f14-8cf4-f8f62522a87e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7wspp" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.872753 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.884075 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b1e29324-c165-4f14-8cf4-f8f62522a87e-etcd-client\") pod \"etcd-operator-b45778765-7wspp\" (UID: \"b1e29324-c165-4f14-8cf4-f8f62522a87e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7wspp" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.890914 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.901295 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/b1e29324-c165-4f14-8cf4-f8f62522a87e-etcd-ca\") pod \"etcd-operator-b45778765-7wspp\" (UID: \"b1e29324-c165-4f14-8cf4-f8f62522a87e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7wspp" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.911238 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.921126 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/b1e29324-c165-4f14-8cf4-f8f62522a87e-etcd-service-ca\") pod \"etcd-operator-b45778765-7wspp\" (UID: \"b1e29324-c165-4f14-8cf4-f8f62522a87e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7wspp" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.934280 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.951564 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.956624 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-5zmps"] Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.974999 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Nov 22 10:40:32 crc kubenswrapper[4772]: I1122 10:40:32.991174 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 22 10:40:33 crc kubenswrapper[4772]: I1122 10:40:33.005714 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/0e1caf27-b2b5-4cdf-b500-38d461b637c2-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-m9s4q\" (UID: \"0e1caf27-b2b5-4cdf-b500-38d461b637c2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m9s4q" Nov 22 10:40:33 crc kubenswrapper[4772]: I1122 10:40:33.010474 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 22 10:40:33 crc kubenswrapper[4772]: I1122 10:40:33.030943 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 22 10:40:33 crc kubenswrapper[4772]: I1122 10:40:33.041799 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e1caf27-b2b5-4cdf-b500-38d461b637c2-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-m9s4q\" (UID: \"0e1caf27-b2b5-4cdf-b500-38d461b637c2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m9s4q" Nov 22 10:40:33 crc kubenswrapper[4772]: I1122 10:40:33.071706 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Nov 22 10:40:33 crc kubenswrapper[4772]: I1122 10:40:33.091401 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Nov 22 10:40:33 crc kubenswrapper[4772]: I1122 10:40:33.101611 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8787d05b-35f8-4862-9b4f-53e18d3b56ef-service-ca-bundle\") pod \"router-default-5444994796-wc24s\" (UID: \"8787d05b-35f8-4862-9b4f-53e18d3b56ef\") " pod="openshift-ingress/router-default-5444994796-wc24s" Nov 22 10:40:33 crc kubenswrapper[4772]: I1122 10:40:33.110758 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Nov 22 10:40:33 crc kubenswrapper[4772]: I1122 10:40:33.125147 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8787d05b-35f8-4862-9b4f-53e18d3b56ef-metrics-certs\") pod \"router-default-5444994796-wc24s\" (UID: \"8787d05b-35f8-4862-9b4f-53e18d3b56ef\") " pod="openshift-ingress/router-default-5444994796-wc24s" Nov 22 10:40:33 crc kubenswrapper[4772]: I1122 10:40:33.132153 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Nov 22 10:40:33 crc kubenswrapper[4772]: I1122 10:40:33.145155 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/8787d05b-35f8-4862-9b4f-53e18d3b56ef-stats-auth\") pod \"router-default-5444994796-wc24s\" (UID: \"8787d05b-35f8-4862-9b4f-53e18d3b56ef\") " pod="openshift-ingress/router-default-5444994796-wc24s" Nov 22 10:40:33 crc kubenswrapper[4772]: I1122 10:40:33.152779 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Nov 22 10:40:33 crc kubenswrapper[4772]: I1122 10:40:33.171253 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Nov 22 10:40:33 crc kubenswrapper[4772]: I1122 10:40:33.184646 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/8787d05b-35f8-4862-9b4f-53e18d3b56ef-default-certificate\") pod \"router-default-5444994796-wc24s\" (UID: \"8787d05b-35f8-4862-9b4f-53e18d3b56ef\") " pod="openshift-ingress/router-default-5444994796-wc24s" Nov 22 10:40:33 crc kubenswrapper[4772]: I1122 10:40:33.191866 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Nov 22 10:40:33 crc kubenswrapper[4772]: I1122 10:40:33.211398 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Nov 22 10:40:33 crc kubenswrapper[4772]: I1122 10:40:33.231619 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Nov 22 10:40:33 crc kubenswrapper[4772]: I1122 10:40:33.251800 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Nov 22 10:40:33 crc kubenswrapper[4772]: I1122 10:40:33.271889 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Nov 22 10:40:33 crc kubenswrapper[4772]: I1122 10:40:33.291555 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Nov 22 10:40:33 crc kubenswrapper[4772]: I1122 10:40:33.311762 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Nov 22 10:40:33 crc kubenswrapper[4772]: I1122 10:40:33.331666 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Nov 22 10:40:33 crc kubenswrapper[4772]: I1122 10:40:33.350798 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Nov 22 10:40:33 crc kubenswrapper[4772]: I1122 10:40:33.354984 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-5zmps" event={"ID":"9bb23622-7ae9-45fe-a07d-70c58b4b7f31","Type":"ContainerStarted","Data":"ae419e0c9b6e487aea88499b25935e33a08f458e4f279510d4448bc11a94410a"} Nov 22 10:40:33 crc kubenswrapper[4772]: I1122 10:40:33.355071 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-5zmps" event={"ID":"9bb23622-7ae9-45fe-a07d-70c58b4b7f31","Type":"ContainerStarted","Data":"d326cd01bfb4953483e53b706766ab6f625b2e48bc1433eb06a337f6c50f63f1"} Nov 22 10:40:33 crc kubenswrapper[4772]: I1122 10:40:33.355369 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-5zmps" Nov 22 10:40:33 crc kubenswrapper[4772]: I1122 10:40:33.358220 4772 patch_prober.go:28] interesting pod/console-operator-58897d9998-5zmps container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/readyz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Nov 22 10:40:33 crc kubenswrapper[4772]: I1122 10:40:33.358359 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-5zmps" podUID="9bb23622-7ae9-45fe-a07d-70c58b4b7f31" 
containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.5:8443/readyz\": dial tcp 10.217.0.5:8443: connect: connection refused" Nov 22 10:40:33 crc kubenswrapper[4772]: I1122 10:40:33.370755 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Nov 22 10:40:33 crc kubenswrapper[4772]: I1122 10:40:33.392487 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Nov 22 10:40:33 crc kubenswrapper[4772]: I1122 10:40:33.412000 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Nov 22 10:40:33 crc kubenswrapper[4772]: I1122 10:40:33.412918 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:40:33 crc kubenswrapper[4772]: I1122 10:40:33.413292 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 10:40:33 crc kubenswrapper[4772]: I1122 10:40:33.430645 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 22 10:40:33 crc kubenswrapper[4772]: I1122 10:40:33.457488 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Nov 22 10:40:33 crc kubenswrapper[4772]: I1122 10:40:33.471449 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 22 10:40:33 crc kubenswrapper[4772]: I1122 10:40:33.491668 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Nov 22 10:40:33 crc kubenswrapper[4772]: I1122 10:40:33.510995 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Nov 22 10:40:33 crc kubenswrapper[4772]: I1122 10:40:33.530886 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 22 10:40:33 crc kubenswrapper[4772]: I1122 10:40:33.551001 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Nov 22 10:40:33 crc kubenswrapper[4772]: I1122 10:40:33.571321 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Nov 22 10:40:33 crc kubenswrapper[4772]: I1122 10:40:33.591747 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Nov 22 10:40:33 crc kubenswrapper[4772]: I1122 10:40:33.612434 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Nov 22 10:40:33 crc kubenswrapper[4772]: I1122 10:40:33.631436 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Nov 22 10:40:33 crc kubenswrapper[4772]: I1122 10:40:33.651575 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Nov 22 10:40:33 crc 
kubenswrapper[4772]: I1122 10:40:33.668752 4772 request.go:700] Waited for 1.014558858s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-controller-manager-operator-config&limit=500&resourceVersion=0 Nov 22 10:40:33 crc kubenswrapper[4772]: I1122 10:40:33.671685 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Nov 22 10:40:33 crc kubenswrapper[4772]: I1122 10:40:33.692273 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Nov 22 10:40:33 crc kubenswrapper[4772]: I1122 10:40:33.732140 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 22 10:40:33 crc kubenswrapper[4772]: I1122 10:40:33.762757 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Nov 22 10:40:33 crc kubenswrapper[4772]: I1122 10:40:33.771978 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 22 10:40:33 crc kubenswrapper[4772]: I1122 10:40:33.792727 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Nov 22 10:40:33 crc kubenswrapper[4772]: I1122 10:40:33.810547 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Nov 22 10:40:33 crc kubenswrapper[4772]: I1122 10:40:33.830366 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Nov 22 10:40:33 crc kubenswrapper[4772]: I1122 10:40:33.851915 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 22 10:40:33 crc kubenswrapper[4772]: I1122 10:40:33.871442 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Nov 22 10:40:33 crc kubenswrapper[4772]: I1122 10:40:33.891482 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Nov 22 10:40:33 crc kubenswrapper[4772]: I1122 10:40:33.911382 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Nov 22 10:40:33 crc kubenswrapper[4772]: I1122 10:40:33.931396 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 22 10:40:33 crc kubenswrapper[4772]: I1122 10:40:33.951413 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Nov 22 10:40:33 crc kubenswrapper[4772]: I1122 10:40:33.971192 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 22 10:40:33 crc kubenswrapper[4772]: I1122 10:40:33.990422 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Nov 22 10:40:34 crc kubenswrapper[4772]: 
I1122 10:40:34.012131 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 22 10:40:34 crc kubenswrapper[4772]: I1122 10:40:34.031487 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Nov 22 10:40:34 crc kubenswrapper[4772]: I1122 10:40:34.051433 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Nov 22 10:40:34 crc kubenswrapper[4772]: I1122 10:40:34.071877 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Nov 22 10:40:34 crc kubenswrapper[4772]: I1122 10:40:34.091487 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Nov 22 10:40:34 crc kubenswrapper[4772]: I1122 10:40:34.111282 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Nov 22 10:40:34 crc kubenswrapper[4772]: I1122 10:40:34.131914 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Nov 22 10:40:34 crc kubenswrapper[4772]: I1122 10:40:34.151083 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Nov 22 10:40:34 crc kubenswrapper[4772]: I1122 10:40:34.172398 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Nov 22 10:40:34 crc kubenswrapper[4772]: I1122 10:40:34.191155 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 22 10:40:34 crc kubenswrapper[4772]: I1122 10:40:34.210571 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 22 10:40:34 crc kubenswrapper[4772]: I1122 10:40:34.230999 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 22 10:40:34 crc kubenswrapper[4772]: I1122 10:40:34.251506 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 22 10:40:34 crc kubenswrapper[4772]: I1122 10:40:34.272181 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Nov 22 10:40:34 crc kubenswrapper[4772]: I1122 10:40:34.291792 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Nov 22 10:40:34 crc kubenswrapper[4772]: I1122 10:40:34.311348 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 22 10:40:34 crc kubenswrapper[4772]: I1122 10:40:34.351376 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpb4m\" (UniqueName: \"kubernetes.io/projected/8c2915a8-d452-4234-94a7-f1ec68c95e4a-kube-api-access-vpb4m\") pod \"machine-api-operator-5694c8668f-sgkwc\" (UID: \"8c2915a8-d452-4234-94a7-f1ec68c95e4a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sgkwc" Nov 22 10:40:34 crc 
kubenswrapper[4772]: I1122 10:40:34.370914 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2r7p\" (UniqueName: \"kubernetes.io/projected/7e6495dc-3c26-45e6-af62-a4957488ae51-kube-api-access-f2r7p\") pod \"route-controller-manager-6576b87f9c-xmqs2\" (UID: \"7e6495dc-3c26-45e6-af62-a4957488ae51\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xmqs2" Nov 22 10:40:34 crc kubenswrapper[4772]: I1122 10:40:34.388784 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4k2gp\" (UniqueName: \"kubernetes.io/projected/e0f99cb1-427f-4992-8c9b-15c285f13189-kube-api-access-4k2gp\") pod \"apiserver-76f77b778f-56sqr\" (UID: \"e0f99cb1-427f-4992-8c9b-15c285f13189\") " pod="openshift-apiserver/apiserver-76f77b778f-56sqr" Nov 22 10:40:34 crc kubenswrapper[4772]: I1122 10:40:34.405910 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqk5f\" (UniqueName: \"kubernetes.io/projected/7ce04b81-efe4-4be7-b020-6ea273596c53-kube-api-access-bqk5f\") pod \"openshift-apiserver-operator-796bbdcf4f-2n6h5\" (UID: \"7ce04b81-efe4-4be7-b020-6ea273596c53\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2n6h5" Nov 22 10:40:34 crc kubenswrapper[4772]: I1122 10:40:34.414598 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-sgkwc" Nov 22 10:40:34 crc kubenswrapper[4772]: I1122 10:40:34.430076 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbjl2\" (UniqueName: \"kubernetes.io/projected/1411a454-22f4-4eef-828a-6a46c81c6c7e-kube-api-access-hbjl2\") pod \"machine-approver-56656f9798-v9s84\" (UID: \"1411a454-22f4-4eef-828a-6a46c81c6c7e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v9s84" Nov 22 10:40:34 crc kubenswrapper[4772]: I1122 10:40:34.447155 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6d99\" (UniqueName: \"kubernetes.io/projected/614def41-0349-470c-afca-e5c335fa8834-kube-api-access-c6d99\") pod \"console-f9d7485db-ld9hg\" (UID: \"614def41-0349-470c-afca-e5c335fa8834\") " pod="openshift-console/console-f9d7485db-ld9hg" Nov 22 10:40:34 crc kubenswrapper[4772]: I1122 10:40:34.455189 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2n6h5" Nov 22 10:40:34 crc kubenswrapper[4772]: I1122 10:40:34.462228 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-ld9hg" Nov 22 10:40:34 crc kubenswrapper[4772]: I1122 10:40:34.464150 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-5zmps" Nov 22 10:40:34 crc kubenswrapper[4772]: I1122 10:40:34.467128 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xszvl\" (UniqueName: \"kubernetes.io/projected/a8554d37-40ae-41ef-bed9-7c79b3f8083e-kube-api-access-xszvl\") pod \"downloads-7954f5f757-v2gm9\" (UID: \"a8554d37-40ae-41ef-bed9-7c79b3f8083e\") " pod="openshift-console/downloads-7954f5f757-v2gm9" Nov 22 10:40:34 crc kubenswrapper[4772]: I1122 10:40:34.474008 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xmqs2" Nov 22 10:40:34 crc kubenswrapper[4772]: I1122 10:40:34.501780 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcw59\" (UniqueName: \"kubernetes.io/projected/6f4a455f-fdda-46bf-bebf-67f1b83863c8-kube-api-access-pcw59\") pod \"cluster-samples-operator-665b6dd947-wbcgw\" (UID: \"6f4a455f-fdda-46bf-bebf-67f1b83863c8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wbcgw" Nov 22 10:40:34 crc kubenswrapper[4772]: I1122 10:40:34.515043 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtwvh\" (UniqueName: \"kubernetes.io/projected/e5a02044-ee67-480e-9cc9-22cf07bc9388-kube-api-access-wtwvh\") pod \"apiserver-7bbb656c7d-5znz6\" (UID: \"e5a02044-ee67-480e-9cc9-22cf07bc9388\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5znz6" Nov 22 10:40:34 crc kubenswrapper[4772]: I1122 10:40:34.535697 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7578c60c-84c2-4dd5-a6c5-576606438ede-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-b5ph7\" (UID: \"7578c60c-84c2-4dd5-a6c5-576606438ede\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b5ph7" Nov 22 10:40:34 crc kubenswrapper[4772]: I1122 10:40:34.559785 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zr6mh\" (UniqueName: \"kubernetes.io/projected/ccd507b9-7746-46ee-8bd4-1134cc290f67-kube-api-access-zr6mh\") pod \"openshift-config-operator-7777fb866f-5wvfr\" (UID: \"ccd507b9-7746-46ee-8bd4-1134cc290f67\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5wvfr" Nov 22 10:40:34 crc kubenswrapper[4772]: I1122 10:40:34.561861 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v9s84" Nov 22 10:40:34 crc kubenswrapper[4772]: I1122 10:40:34.568655 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wbcgw" Nov 22 10:40:34 crc kubenswrapper[4772]: I1122 10:40:34.570671 4772 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Nov 22 10:40:34 crc kubenswrapper[4772]: I1122 10:40:34.584547 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pd4q2\" (UniqueName: \"kubernetes.io/projected/7578c60c-84c2-4dd5-a6c5-576606438ede-kube-api-access-pd4q2\") pod \"cluster-image-registry-operator-dc59b4c8b-b5ph7\" (UID: \"7578c60c-84c2-4dd5-a6c5-576606438ede\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b5ph7" Nov 22 10:40:34 crc kubenswrapper[4772]: I1122 10:40:34.594748 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Nov 22 10:40:34 crc kubenswrapper[4772]: I1122 10:40:34.619736 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-56sqr" Nov 22 10:40:34 crc kubenswrapper[4772]: I1122 10:40:34.623469 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Nov 22 10:40:34 crc kubenswrapper[4772]: I1122 10:40:34.625742 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5wvfr" Nov 22 10:40:34 crc kubenswrapper[4772]: I1122 10:40:34.648652 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcd8j\" (UniqueName: \"kubernetes.io/projected/ed063d5e-19cb-42cb-89fb-21b3b751f53e-kube-api-access-jcd8j\") pod \"authentication-operator-69f744f599-ld7g2\" (UID: \"ed063d5e-19cb-42cb-89fb-21b3b751f53e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ld7g2" Nov 22 10:40:34 crc kubenswrapper[4772]: I1122 10:40:34.668866 4772 request.go:700] Waited for 1.978032996s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift/token Nov 22 10:40:34 crc kubenswrapper[4772]: I1122 10:40:34.673102 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62vrs\" (UniqueName: \"kubernetes.io/projected/bea9575a-d7c4-4aaa-bc01-eaee90317eea-kube-api-access-62vrs\") pod \"controller-manager-879f6c89f-vvs55\" (UID: \"bea9575a-d7c4-4aaa-bc01-eaee90317eea\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vvs55" Nov 22 10:40:34 crc kubenswrapper[4772]: I1122 10:40:34.674389 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-ld7g2" Nov 22 10:40:34 crc kubenswrapper[4772]: I1122 10:40:34.689815 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ht25z\" (UniqueName: \"kubernetes.io/projected/f529668b-54db-49e7-92cb-c3cf6b986dce-kube-api-access-ht25z\") pod \"oauth-openshift-558db77b4-7gdmn\" (UID: \"f529668b-54db-49e7-92cb-c3cf6b986dce\") " pod="openshift-authentication/oauth-openshift-558db77b4-7gdmn" Nov 22 10:40:34 crc kubenswrapper[4772]: I1122 10:40:34.692184 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Nov 22 10:40:34 crc kubenswrapper[4772]: I1122 10:40:34.711921 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-sgkwc"] Nov 22 10:40:34 crc kubenswrapper[4772]: I1122 10:40:34.714807 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5znz6" Nov 22 10:40:34 crc kubenswrapper[4772]: I1122 10:40:34.715010 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Nov 22 10:40:34 crc kubenswrapper[4772]: I1122 10:40:34.731714 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Nov 22 10:40:34 crc kubenswrapper[4772]: I1122 10:40:34.735506 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-vvs55" Nov 22 10:40:34 crc kubenswrapper[4772]: I1122 10:40:34.744528 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-v2gm9" Nov 22 10:40:34 crc kubenswrapper[4772]: I1122 10:40:34.752210 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Nov 22 10:40:34 crc kubenswrapper[4772]: I1122 10:40:34.775041 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xmqs2"] Nov 22 10:40:34 crc kubenswrapper[4772]: I1122 10:40:34.777332 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Nov 22 10:40:34 crc kubenswrapper[4772]: I1122 10:40:34.796937 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Nov 22 10:40:34 crc kubenswrapper[4772]: I1122 10:40:34.812901 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Nov 22 10:40:34 crc kubenswrapper[4772]: W1122 10:40:34.821438 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e6495dc_3c26_45e6_af62_a4957488ae51.slice/crio-1d66a81e2bff1096b1dcd00424f8535f03c982fe8bf6dbb764cab6cc21081df0 WatchSource:0}: Error finding container 1d66a81e2bff1096b1dcd00424f8535f03c982fe8bf6dbb764cab6cc21081df0: Status 404 returned error can't find the container with id 1d66a81e2bff1096b1dcd00424f8535f03c982fe8bf6dbb764cab6cc21081df0 Nov 22 10:40:34 crc kubenswrapper[4772]: I1122 10:40:34.821612 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-ld9hg"] Nov 22 10:40:34 crc kubenswrapper[4772]: I1122 10:40:34.827931 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2n6h5"] Nov 22 10:40:34 crc kubenswrapper[4772]: I1122 10:40:34.830994 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Nov 22 10:40:34 crc kubenswrapper[4772]: I1122 10:40:34.852787 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Nov 22 10:40:34 crc kubenswrapper[4772]: I1122 10:40:34.871614 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Nov 22 10:40:34 crc kubenswrapper[4772]: I1122 10:40:34.880455 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b5ph7" Nov 22 10:40:34 crc kubenswrapper[4772]: I1122 10:40:34.895222 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-7gdmn" Nov 22 10:40:34 crc kubenswrapper[4772]: W1122 10:40:34.896492 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod614def41_0349_470c_afca_e5c335fa8834.slice/crio-5fbba05e20db75e660238020d921eefa08ce8357f6ac1f6adf53de501ff20b72 WatchSource:0}: Error finding container 5fbba05e20db75e660238020d921eefa08ce8357f6ac1f6adf53de501ff20b72: Status 404 returned error can't find the container with id 5fbba05e20db75e660238020d921eefa08ce8357f6ac1f6adf53de501ff20b72 Nov 22 10:40:34 crc kubenswrapper[4772]: I1122 10:40:34.919954 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wbcgw"] Nov 22 10:40:34 crc kubenswrapper[4772]: I1122 10:40:34.925615 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbtml\" (UniqueName: \"kubernetes.io/projected/b1e29324-c165-4f14-8cf4-f8f62522a87e-kube-api-access-nbtml\") pod \"etcd-operator-b45778765-7wspp\" (UID: \"b1e29324-c165-4f14-8cf4-f8f62522a87e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7wspp" Nov 22 10:40:34 crc kubenswrapper[4772]: I1122 10:40:34.934954 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0e1caf27-b2b5-4cdf-b500-38d461b637c2-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-m9s4q\" (UID: \"0e1caf27-b2b5-4cdf-b500-38d461b637c2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m9s4q" Nov 22 10:40:34 crc kubenswrapper[4772]: I1122 10:40:34.956142 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8wnx\" (UniqueName: \"kubernetes.io/projected/8787d05b-35f8-4862-9b4f-53e18d3b56ef-kube-api-access-j8wnx\") pod \"router-default-5444994796-wc24s\" (UID: \"8787d05b-35f8-4862-9b4f-53e18d3b56ef\") " pod="openshift-ingress/router-default-5444994796-wc24s" Nov 22 10:40:34 crc kubenswrapper[4772]: I1122 10:40:34.978149 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 22 10:40:34 crc kubenswrapper[4772]: I1122 10:40:34.991509 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 22 10:40:34 crc kubenswrapper[4772]: I1122 10:40:34.995207 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-5wvfr"] Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.035212 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/b94cb323-98c8-4c8c-ac25-48a70275b4ed-signing-cabundle\") pod \"service-ca-9c57cc56f-t76z9\" (UID: \"b94cb323-98c8-4c8c-ac25-48a70275b4ed\") " pod="openshift-service-ca/service-ca-9c57cc56f-t76z9" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.036110 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/723118f2-f91b-4ca0-a6f9-4deaee014ef0-registry-tls\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:35 crc 
kubenswrapper[4772]: I1122 10:40:35.036259 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/723118f2-f91b-4ca0-a6f9-4deaee014ef0-bound-sa-token\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.036320 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/723118f2-f91b-4ca0-a6f9-4deaee014ef0-installation-pull-secrets\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.038273 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d5de78fd-e1a1-4ec1-9767-336aebc1d19e-metrics-tls\") pod \"dns-operator-744455d44c-km77q\" (UID: \"d5de78fd-e1a1-4ec1-9767-336aebc1d19e\") " pod="openshift-dns-operator/dns-operator-744455d44c-km77q" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.038324 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9l275\" (UniqueName: \"kubernetes.io/projected/723118f2-f91b-4ca0-a6f9-4deaee014ef0-kube-api-access-9l275\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.038369 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cf9767c-0ec6-4db4-8ce8-b159703e0173-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-nnvcq\" (UID: \"9cf9767c-0ec6-4db4-8ce8-b159703e0173\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nnvcq" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.038416 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvtsh\" (UniqueName: \"kubernetes.io/projected/d5de78fd-e1a1-4ec1-9767-336aebc1d19e-kube-api-access-tvtsh\") pod \"dns-operator-744455d44c-km77q\" (UID: \"d5de78fd-e1a1-4ec1-9767-336aebc1d19e\") " pod="openshift-dns-operator/dns-operator-744455d44c-km77q" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.038482 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.038519 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/723118f2-f91b-4ca0-a6f9-4deaee014ef0-ca-trust-extracted\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:35 crc 
kubenswrapper[4772]: I1122 10:40:35.038585 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76de3e09-61e2-4240-aadb-86c8eaa622f3-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-b4j6f\" (UID: \"76de3e09-61e2-4240-aadb-86c8eaa622f3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b4j6f" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.038663 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/76de3e09-61e2-4240-aadb-86c8eaa622f3-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-b4j6f\" (UID: \"76de3e09-61e2-4240-aadb-86c8eaa622f3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b4j6f" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.038714 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1cd04192-f2a8-4fbd-972a-c74fc6291b66-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-5lcxp\" (UID: \"1cd04192-f2a8-4fbd-972a-c74fc6291b66\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5lcxp" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.038751 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9cf9767c-0ec6-4db4-8ce8-b159703e0173-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-nnvcq\" (UID: \"9cf9767c-0ec6-4db4-8ce8-b159703e0173\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nnvcq" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.038901 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/b94cb323-98c8-4c8c-ac25-48a70275b4ed-signing-key\") pod \"service-ca-9c57cc56f-t76z9\" (UID: \"b94cb323-98c8-4c8c-ac25-48a70275b4ed\") " pod="openshift-service-ca/service-ca-9c57cc56f-t76z9" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.038983 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76de3e09-61e2-4240-aadb-86c8eaa622f3-config\") pod \"kube-controller-manager-operator-78b949d7b-b4j6f\" (UID: \"76de3e09-61e2-4240-aadb-86c8eaa622f3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b4j6f" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.039073 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knc55\" (UniqueName: \"kubernetes.io/projected/b94cb323-98c8-4c8c-ac25-48a70275b4ed-kube-api-access-knc55\") pod \"service-ca-9c57cc56f-t76z9\" (UID: \"b94cb323-98c8-4c8c-ac25-48a70275b4ed\") " pod="openshift-service-ca/service-ca-9c57cc56f-t76z9" Nov 22 10:40:35 crc kubenswrapper[4772]: E1122 10:40:35.041365 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:35.54134288 +0000 UTC m=+155.780787574 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.041491 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fq27v\" (UniqueName: \"kubernetes.io/projected/bd547fda-024b-4cad-bbfd-f82f5fbd5859-kube-api-access-fq27v\") pod \"ingress-operator-5b745b69d9-bgkbs\" (UID: \"bd547fda-024b-4cad-bbfd-f82f5fbd5859\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bgkbs" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.041563 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/723118f2-f91b-4ca0-a6f9-4deaee014ef0-trusted-ca\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.042865 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1c56d4b-3431-4052-b883-6019a008c9aa-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-rmbgm\" (UID: \"c1c56d4b-3431-4052-b883-6019a008c9aa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rmbgm" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.043220 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9d79dc27-4bfc-4318-a8ed-11ceb9cb81b2-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-t8jgq\" (UID: \"9d79dc27-4bfc-4318-a8ed-11ceb9cb81b2\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-t8jgq" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.043557 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1cd04192-f2a8-4fbd-972a-c74fc6291b66-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-5lcxp\" (UID: \"1cd04192-f2a8-4fbd-972a-c74fc6291b66\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5lcxp" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.043727 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bd547fda-024b-4cad-bbfd-f82f5fbd5859-trusted-ca\") pod \"ingress-operator-5b745b69d9-bgkbs\" (UID: \"bd547fda-024b-4cad-bbfd-f82f5fbd5859\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bgkbs" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.043917 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bd547fda-024b-4cad-bbfd-f82f5fbd5859-bound-sa-token\") pod \"ingress-operator-5b745b69d9-bgkbs\" (UID: \"bd547fda-024b-4cad-bbfd-f82f5fbd5859\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bgkbs" Nov 
22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.044029 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jdvm\" (UniqueName: \"kubernetes.io/projected/9cf9767c-0ec6-4db4-8ce8-b159703e0173-kube-api-access-9jdvm\") pod \"kube-storage-version-migrator-operator-b67b599dd-nnvcq\" (UID: \"9cf9767c-0ec6-4db4-8ce8-b159703e0173\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nnvcq" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.044120 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxtwg\" (UniqueName: \"kubernetes.io/projected/c1c56d4b-3431-4052-b883-6019a008c9aa-kube-api-access-zxtwg\") pod \"openshift-controller-manager-operator-756b6f6bc6-rmbgm\" (UID: \"c1c56d4b-3431-4052-b883-6019a008c9aa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rmbgm" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.044153 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cd04192-f2a8-4fbd-972a-c74fc6291b66-config\") pod \"kube-apiserver-operator-766d6c64bb-5lcxp\" (UID: \"1cd04192-f2a8-4fbd-972a-c74fc6291b66\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5lcxp" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.044185 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bd547fda-024b-4cad-bbfd-f82f5fbd5859-metrics-tls\") pod \"ingress-operator-5b745b69d9-bgkbs\" (UID: \"bd547fda-024b-4cad-bbfd-f82f5fbd5859\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bgkbs" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.044365 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn58v\" (UniqueName: \"kubernetes.io/projected/9d79dc27-4bfc-4318-a8ed-11ceb9cb81b2-kube-api-access-wn58v\") pod \"multus-admission-controller-857f4d67dd-t8jgq\" (UID: \"9d79dc27-4bfc-4318-a8ed-11ceb9cb81b2\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-t8jgq" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.044544 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/723118f2-f91b-4ca0-a6f9-4deaee014ef0-registry-certificates\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.044613 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1c56d4b-3431-4052-b883-6019a008c9aa-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-rmbgm\" (UID: \"c1c56d4b-3431-4052-b883-6019a008c9aa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rmbgm" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.148865 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-v2gm9"] Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.149420 4772 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:35 crc kubenswrapper[4772]: E1122 10:40:35.149614 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:35.649584492 +0000 UTC m=+155.889029006 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.149638 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/17ee7fc7-887d-4bf4-b408-d1a723605bdc-registration-dir\") pod \"csi-hostpathplugin-jzjnj\" (UID: \"17ee7fc7-887d-4bf4-b408-d1a723605bdc\") " pod="hostpath-provisioner/csi-hostpathplugin-jzjnj" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.149703 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-vvs55"] Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.149743 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/723118f2-f91b-4ca0-a6f9-4deaee014ef0-ca-trust-extracted\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.149824 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76de3e09-61e2-4240-aadb-86c8eaa622f3-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-b4j6f\" (UID: \"76de3e09-61e2-4240-aadb-86c8eaa622f3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b4j6f" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.149876 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/76de3e09-61e2-4240-aadb-86c8eaa622f3-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-b4j6f\" (UID: \"76de3e09-61e2-4240-aadb-86c8eaa622f3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b4j6f" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.149910 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1cd04192-f2a8-4fbd-972a-c74fc6291b66-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-5lcxp\" (UID: \"1cd04192-f2a8-4fbd-972a-c74fc6291b66\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5lcxp" Nov 22 10:40:35 crc 
kubenswrapper[4772]: I1122 10:40:35.149962 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9cf9767c-0ec6-4db4-8ce8-b159703e0173-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-nnvcq\" (UID: \"9cf9767c-0ec6-4db4-8ce8-b159703e0173\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nnvcq" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.150011 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/b94cb323-98c8-4c8c-ac25-48a70275b4ed-signing-key\") pod \"service-ca-9c57cc56f-t76z9\" (UID: \"b94cb323-98c8-4c8c-ac25-48a70275b4ed\") " pod="openshift-service-ca/service-ca-9c57cc56f-t76z9" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.150155 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw95k\" (UniqueName: \"kubernetes.io/projected/74f425d7-1955-455d-88ee-37ffccfb8c8c-kube-api-access-tw95k\") pod \"catalog-operator-68c6474976-lkn87\" (UID: \"74f425d7-1955-455d-88ee-37ffccfb8c8c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lkn87" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.150192 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a81453df-6689-461b-8b0f-389b50452f08-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-5sxdt\" (UID: \"a81453df-6689-461b-8b0f-389b50452f08\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5sxdt" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.150221 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwrmm\" (UniqueName: \"kubernetes.io/projected/30a3f834-9426-4bd0-908e-4974c60576ff-kube-api-access-lwrmm\") pod \"ingress-canary-vhvzn\" (UID: \"30a3f834-9426-4bd0-908e-4974c60576ff\") " pod="openshift-ingress-canary/ingress-canary-vhvzn" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.150264 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76de3e09-61e2-4240-aadb-86c8eaa622f3-config\") pod \"kube-controller-manager-operator-78b949d7b-b4j6f\" (UID: \"76de3e09-61e2-4240-aadb-86c8eaa622f3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b4j6f" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.150291 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knc55\" (UniqueName: \"kubernetes.io/projected/b94cb323-98c8-4c8c-ac25-48a70275b4ed-kube-api-access-knc55\") pod \"service-ca-9c57cc56f-t76z9\" (UID: \"b94cb323-98c8-4c8c-ac25-48a70275b4ed\") " pod="openshift-service-ca/service-ca-9c57cc56f-t76z9" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.150317 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/74f425d7-1955-455d-88ee-37ffccfb8c8c-profile-collector-cert\") pod \"catalog-operator-68c6474976-lkn87\" (UID: \"74f425d7-1955-455d-88ee-37ffccfb8c8c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lkn87" Nov 22 10:40:35 crc 
kubenswrapper[4772]: I1122 10:40:35.150341 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b3099692-2c12-4f17-b161-1c17e9f13aed-serving-cert\") pod \"service-ca-operator-777779d784-qcjgh\" (UID: \"b3099692-2c12-4f17-b161-1c17e9f13aed\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qcjgh" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.150353 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/723118f2-f91b-4ca0-a6f9-4deaee014ef0-ca-trust-extracted\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.150368 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjn5w\" (UniqueName: \"kubernetes.io/projected/17ee7fc7-887d-4bf4-b408-d1a723605bdc-kube-api-access-hjn5w\") pod \"csi-hostpathplugin-jzjnj\" (UID: \"17ee7fc7-887d-4bf4-b408-d1a723605bdc\") " pod="hostpath-provisioner/csi-hostpathplugin-jzjnj" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.150403 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fq27v\" (UniqueName: \"kubernetes.io/projected/bd547fda-024b-4cad-bbfd-f82f5fbd5859-kube-api-access-fq27v\") pod \"ingress-operator-5b745b69d9-bgkbs\" (UID: \"bd547fda-024b-4cad-bbfd-f82f5fbd5859\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bgkbs" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.150431 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/b706176c-178e-48c8-94eb-faa069d602cb-node-bootstrap-token\") pod \"machine-config-server-2g4sz\" (UID: \"b706176c-178e-48c8-94eb-faa069d602cb\") " pod="openshift-machine-config-operator/machine-config-server-2g4sz" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.150462 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/723118f2-f91b-4ca0-a6f9-4deaee014ef0-trusted-ca\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.150559 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a26a8d34-6b49-4019-b262-7f8e6fddc433-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-g7h5s\" (UID: \"a26a8d34-6b49-4019-b262-7f8e6fddc433\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g7h5s" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.150591 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/90789491-84d5-4454-ba6c-9b55634b5c74-proxy-tls\") pod \"machine-config-operator-74547568cd-vp9gx\" (UID: \"90789491-84d5-4454-ba6c-9b55634b5c74\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vp9gx" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.150618 4772 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f0d00201-7f70-493c-a471-e319513076b3-srv-cert\") pod \"olm-operator-6b444d44fb-hn48r\" (UID: \"f0d00201-7f70-493c-a471-e319513076b3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hn48r" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.150672 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f66fcaaf-2b15-40db-9e22-3a0d098d56f2-webhook-cert\") pod \"packageserver-d55dfcdfc-dfw97\" (UID: \"f66fcaaf-2b15-40db-9e22-3a0d098d56f2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dfw97" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.150698 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rl8zt\" (UniqueName: \"kubernetes.io/projected/a26a8d34-6b49-4019-b262-7f8e6fddc433-kube-api-access-rl8zt\") pod \"control-plane-machine-set-operator-78cbb6b69f-g7h5s\" (UID: \"a26a8d34-6b49-4019-b262-7f8e6fddc433\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g7h5s" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.150725 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1c56d4b-3431-4052-b883-6019a008c9aa-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-rmbgm\" (UID: \"c1c56d4b-3431-4052-b883-6019a008c9aa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rmbgm" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.150746 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/17ee7fc7-887d-4bf4-b408-d1a723605bdc-mountpoint-dir\") pod \"csi-hostpathplugin-jzjnj\" (UID: \"17ee7fc7-887d-4bf4-b408-d1a723605bdc\") " pod="hostpath-provisioner/csi-hostpathplugin-jzjnj" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.150849 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f66fcaaf-2b15-40db-9e22-3a0d098d56f2-apiservice-cert\") pod \"packageserver-d55dfcdfc-dfw97\" (UID: \"f66fcaaf-2b15-40db-9e22-3a0d098d56f2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dfw97" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.150972 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9d79dc27-4bfc-4318-a8ed-11ceb9cb81b2-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-t8jgq\" (UID: \"9d79dc27-4bfc-4318-a8ed-11ceb9cb81b2\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-t8jgq" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.151009 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3099692-2c12-4f17-b161-1c17e9f13aed-config\") pod \"service-ca-operator-777779d784-qcjgh\" (UID: \"b3099692-2c12-4f17-b161-1c17e9f13aed\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qcjgh" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.151072 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1cd04192-f2a8-4fbd-972a-c74fc6291b66-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-5lcxp\" (UID: \"1cd04192-f2a8-4fbd-972a-c74fc6291b66\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5lcxp" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.151122 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ck8zz\" (UniqueName: \"kubernetes.io/projected/b706176c-178e-48c8-94eb-faa069d602cb-kube-api-access-ck8zz\") pod \"machine-config-server-2g4sz\" (UID: \"b706176c-178e-48c8-94eb-faa069d602cb\") " pod="openshift-machine-config-operator/machine-config-server-2g4sz" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.151158 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bd547fda-024b-4cad-bbfd-f82f5fbd5859-trusted-ca\") pod \"ingress-operator-5b745b69d9-bgkbs\" (UID: \"bd547fda-024b-4cad-bbfd-f82f5fbd5859\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bgkbs" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.151187 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/df5917cd-29a5-4e07-b030-24456d6b0da6-secret-volume\") pod \"collect-profiles-29396790-lvhxw\" (UID: \"df5917cd-29a5-4e07-b030-24456d6b0da6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396790-lvhxw" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.151214 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkf2f\" (UniqueName: \"kubernetes.io/projected/d1d4b5fd-7de5-410f-b913-6fbb8e1c09b4-kube-api-access-jkf2f\") pod \"marketplace-operator-79b997595-8z66s\" (UID: \"d1d4b5fd-7de5-410f-b913-6fbb8e1c09b4\") " pod="openshift-marketplace/marketplace-operator-79b997595-8z66s" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.152565 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76de3e09-61e2-4240-aadb-86c8eaa622f3-config\") pod \"kube-controller-manager-operator-78b949d7b-b4j6f\" (UID: \"76de3e09-61e2-4240-aadb-86c8eaa622f3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b4j6f" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.152860 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vm7mj\" (UniqueName: \"kubernetes.io/projected/f66fcaaf-2b15-40db-9e22-3a0d098d56f2-kube-api-access-vm7mj\") pod \"packageserver-d55dfcdfc-dfw97\" (UID: \"f66fcaaf-2b15-40db-9e22-3a0d098d56f2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dfw97" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.152937 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bd547fda-024b-4cad-bbfd-f82f5fbd5859-bound-sa-token\") pod \"ingress-operator-5b745b69d9-bgkbs\" (UID: \"bd547fda-024b-4cad-bbfd-f82f5fbd5859\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bgkbs" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.153007 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: 
\"kubernetes.io/host-path/17ee7fc7-887d-4bf4-b408-d1a723605bdc-socket-dir\") pod \"csi-hostpathplugin-jzjnj\" (UID: \"17ee7fc7-887d-4bf4-b408-d1a723605bdc\") " pod="hostpath-provisioner/csi-hostpathplugin-jzjnj" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.153088 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mchkl\" (UniqueName: \"kubernetes.io/projected/f0d00201-7f70-493c-a471-e319513076b3-kube-api-access-mchkl\") pod \"olm-operator-6b444d44fb-hn48r\" (UID: \"f0d00201-7f70-493c-a471-e319513076b3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hn48r" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.153138 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxtwg\" (UniqueName: \"kubernetes.io/projected/c1c56d4b-3431-4052-b883-6019a008c9aa-kube-api-access-zxtwg\") pod \"openshift-controller-manager-operator-756b6f6bc6-rmbgm\" (UID: \"c1c56d4b-3431-4052-b883-6019a008c9aa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rmbgm" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.153185 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cd04192-f2a8-4fbd-972a-c74fc6291b66-config\") pod \"kube-apiserver-operator-766d6c64bb-5lcxp\" (UID: \"1cd04192-f2a8-4fbd-972a-c74fc6291b66\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5lcxp" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.153224 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jdvm\" (UniqueName: \"kubernetes.io/projected/9cf9767c-0ec6-4db4-8ce8-b159703e0173-kube-api-access-9jdvm\") pod \"kube-storage-version-migrator-operator-b67b599dd-nnvcq\" (UID: \"9cf9767c-0ec6-4db4-8ce8-b159703e0173\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nnvcq" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.153269 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ca10904e-f1cc-40f0-ba83-f4711606e7f3-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-lk87v\" (UID: \"ca10904e-f1cc-40f0-ba83-f4711606e7f3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lk87v" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.153339 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bd547fda-024b-4cad-bbfd-f82f5fbd5859-metrics-tls\") pod \"ingress-operator-5b745b69d9-bgkbs\" (UID: \"bd547fda-024b-4cad-bbfd-f82f5fbd5859\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bgkbs" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.153788 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1c56d4b-3431-4052-b883-6019a008c9aa-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-rmbgm\" (UID: \"c1c56d4b-3431-4052-b883-6019a008c9aa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rmbgm" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.153592 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"certs\" (UniqueName: \"kubernetes.io/secret/b706176c-178e-48c8-94eb-faa069d602cb-certs\") pod \"machine-config-server-2g4sz\" (UID: \"b706176c-178e-48c8-94eb-faa069d602cb\") " pod="openshift-machine-config-operator/machine-config-server-2g4sz" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.155717 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/17ee7fc7-887d-4bf4-b408-d1a723605bdc-plugins-dir\") pod \"csi-hostpathplugin-jzjnj\" (UID: \"17ee7fc7-887d-4bf4-b408-d1a723605bdc\") " pod="hostpath-provisioner/csi-hostpathplugin-jzjnj" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.155748 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/74f425d7-1955-455d-88ee-37ffccfb8c8c-srv-cert\") pod \"catalog-operator-68c6474976-lkn87\" (UID: \"74f425d7-1955-455d-88ee-37ffccfb8c8c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lkn87" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.155799 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn58v\" (UniqueName: \"kubernetes.io/projected/9d79dc27-4bfc-4318-a8ed-11ceb9cb81b2-kube-api-access-wn58v\") pod \"multus-admission-controller-857f4d67dd-t8jgq\" (UID: \"9d79dc27-4bfc-4318-a8ed-11ceb9cb81b2\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-t8jgq" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.155870 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df5917cd-29a5-4e07-b030-24456d6b0da6-config-volume\") pod \"collect-profiles-29396790-lvhxw\" (UID: \"df5917cd-29a5-4e07-b030-24456d6b0da6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396790-lvhxw" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.155896 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4n6h\" (UniqueName: \"kubernetes.io/projected/90789491-84d5-4454-ba6c-9b55634b5c74-kube-api-access-d4n6h\") pod \"machine-config-operator-74547568cd-vp9gx\" (UID: \"90789491-84d5-4454-ba6c-9b55634b5c74\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vp9gx" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.155920 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snvxg\" (UniqueName: \"kubernetes.io/projected/b3099692-2c12-4f17-b161-1c17e9f13aed-kube-api-access-snvxg\") pod \"service-ca-operator-777779d784-qcjgh\" (UID: \"b3099692-2c12-4f17-b161-1c17e9f13aed\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qcjgh" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.155984 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/723118f2-f91b-4ca0-a6f9-4deaee014ef0-registry-certificates\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.156033 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/d1d4b5fd-7de5-410f-b913-6fbb8e1c09b4-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-8z66s\" (UID: \"d1d4b5fd-7de5-410f-b913-6fbb8e1c09b4\") " pod="openshift-marketplace/marketplace-operator-79b997595-8z66s" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.156305 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cd04192-f2a8-4fbd-972a-c74fc6291b66-config\") pod \"kube-apiserver-operator-766d6c64bb-5lcxp\" (UID: \"1cd04192-f2a8-4fbd-972a-c74fc6291b66\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5lcxp" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.155484 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bd547fda-024b-4cad-bbfd-f82f5fbd5859-trusted-ca\") pod \"ingress-operator-5b745b69d9-bgkbs\" (UID: \"bd547fda-024b-4cad-bbfd-f82f5fbd5859\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bgkbs" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.157945 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1c56d4b-3431-4052-b883-6019a008c9aa-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-rmbgm\" (UID: \"c1c56d4b-3431-4052-b883-6019a008c9aa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rmbgm" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.158020 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ad185e3d-1a10-4e09-9c56-41a4ae9a435c-config-volume\") pod \"dns-default-9vsnk\" (UID: \"ad185e3d-1a10-4e09-9c56-41a4ae9a435c\") " pod="openshift-dns/dns-default-9vsnk" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.158342 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bd547fda-024b-4cad-bbfd-f82f5fbd5859-metrics-tls\") pod \"ingress-operator-5b745b69d9-bgkbs\" (UID: \"bd547fda-024b-4cad-bbfd-f82f5fbd5859\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bgkbs" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.158753 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9cf9767c-0ec6-4db4-8ce8-b159703e0173-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-nnvcq\" (UID: \"9cf9767c-0ec6-4db4-8ce8-b159703e0173\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nnvcq" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.158854 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fknz\" (UniqueName: \"kubernetes.io/projected/ad185e3d-1a10-4e09-9c56-41a4ae9a435c-kube-api-access-9fknz\") pod \"dns-default-9vsnk\" (UID: \"ad185e3d-1a10-4e09-9c56-41a4ae9a435c\") " pod="openshift-dns/dns-default-9vsnk" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.159154 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tcml\" (UniqueName: \"kubernetes.io/projected/df5917cd-29a5-4e07-b030-24456d6b0da6-kube-api-access-7tcml\") pod \"collect-profiles-29396790-lvhxw\" (UID: 
\"df5917cd-29a5-4e07-b030-24456d6b0da6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396790-lvhxw" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.161290 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1c56d4b-3431-4052-b883-6019a008c9aa-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-rmbgm\" (UID: \"c1c56d4b-3431-4052-b883-6019a008c9aa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rmbgm" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.161607 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ca10904e-f1cc-40f0-ba83-f4711606e7f3-proxy-tls\") pod \"machine-config-controller-84d6567774-lk87v\" (UID: \"ca10904e-f1cc-40f0-ba83-f4711606e7f3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lk87v" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.161715 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxtvm\" (UniqueName: \"kubernetes.io/projected/a81453df-6689-461b-8b0f-389b50452f08-kube-api-access-jxtvm\") pod \"package-server-manager-789f6589d5-5sxdt\" (UID: \"a81453df-6689-461b-8b0f-389b50452f08\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5sxdt" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.161755 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ad185e3d-1a10-4e09-9c56-41a4ae9a435c-metrics-tls\") pod \"dns-default-9vsnk\" (UID: \"ad185e3d-1a10-4e09-9c56-41a4ae9a435c\") " pod="openshift-dns/dns-default-9vsnk" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.161918 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/b94cb323-98c8-4c8c-ac25-48a70275b4ed-signing-cabundle\") pod \"service-ca-9c57cc56f-t76z9\" (UID: \"b94cb323-98c8-4c8c-ac25-48a70275b4ed\") " pod="openshift-service-ca/service-ca-9c57cc56f-t76z9" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.162102 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/90789491-84d5-4454-ba6c-9b55634b5c74-auth-proxy-config\") pod \"machine-config-operator-74547568cd-vp9gx\" (UID: \"90789491-84d5-4454-ba6c-9b55634b5c74\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vp9gx" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.162240 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/723118f2-f91b-4ca0-a6f9-4deaee014ef0-registry-tls\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.162295 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/723118f2-f91b-4ca0-a6f9-4deaee014ef0-bound-sa-token\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 
22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.162428 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/723118f2-f91b-4ca0-a6f9-4deaee014ef0-installation-pull-secrets\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.162491 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d1d4b5fd-7de5-410f-b913-6fbb8e1c09b4-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-8z66s\" (UID: \"d1d4b5fd-7de5-410f-b913-6fbb8e1c09b4\") " pod="openshift-marketplace/marketplace-operator-79b997595-8z66s" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.163138 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdfv8\" (UniqueName: \"kubernetes.io/projected/ca10904e-f1cc-40f0-ba83-f4711606e7f3-kube-api-access-cdfv8\") pod \"machine-config-controller-84d6567774-lk87v\" (UID: \"ca10904e-f1cc-40f0-ba83-f4711606e7f3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lk87v" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.163180 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d5de78fd-e1a1-4ec1-9767-336aebc1d19e-metrics-tls\") pod \"dns-operator-744455d44c-km77q\" (UID: \"d5de78fd-e1a1-4ec1-9767-336aebc1d19e\") " pod="openshift-dns-operator/dns-operator-744455d44c-km77q" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.164246 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/90789491-84d5-4454-ba6c-9b55634b5c74-images\") pod \"machine-config-operator-74547568cd-vp9gx\" (UID: \"90789491-84d5-4454-ba6c-9b55634b5c74\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vp9gx" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.165446 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9l275\" (UniqueName: \"kubernetes.io/projected/723118f2-f91b-4ca0-a6f9-4deaee014ef0-kube-api-access-9l275\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.165627 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cf9767c-0ec6-4db4-8ce8-b159703e0173-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-nnvcq\" (UID: \"9cf9767c-0ec6-4db4-8ce8-b159703e0173\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nnvcq" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.165811 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/f66fcaaf-2b15-40db-9e22-3a0d098d56f2-tmpfs\") pod \"packageserver-d55dfcdfc-dfw97\" (UID: \"f66fcaaf-2b15-40db-9e22-3a0d098d56f2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dfw97" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 
10:40:35.165841 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/17ee7fc7-887d-4bf4-b408-d1a723605bdc-csi-data-dir\") pod \"csi-hostpathplugin-jzjnj\" (UID: \"17ee7fc7-887d-4bf4-b408-d1a723605bdc\") " pod="hostpath-provisioner/csi-hostpathplugin-jzjnj" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.165861 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnsdj\" (UniqueName: \"kubernetes.io/projected/7b9b1e33-9a6a-4d0f-af54-5589be658a3a-kube-api-access-hnsdj\") pod \"migrator-59844c95c7-kqhcv\" (UID: \"7b9b1e33-9a6a-4d0f-af54-5589be658a3a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kqhcv" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.165888 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvtsh\" (UniqueName: \"kubernetes.io/projected/d5de78fd-e1a1-4ec1-9767-336aebc1d19e-kube-api-access-tvtsh\") pod \"dns-operator-744455d44c-km77q\" (UID: \"d5de78fd-e1a1-4ec1-9767-336aebc1d19e\") " pod="openshift-dns-operator/dns-operator-744455d44c-km77q" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.165911 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/30a3f834-9426-4bd0-908e-4974c60576ff-cert\") pod \"ingress-canary-vhvzn\" (UID: \"30a3f834-9426-4bd0-908e-4974c60576ff\") " pod="openshift-ingress-canary/ingress-canary-vhvzn" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.165933 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f0d00201-7f70-493c-a471-e319513076b3-profile-collector-cert\") pod \"olm-operator-6b444d44fb-hn48r\" (UID: \"f0d00201-7f70-493c-a471-e319513076b3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hn48r" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.165964 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.167324 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/b94cb323-98c8-4c8c-ac25-48a70275b4ed-signing-cabundle\") pod \"service-ca-9c57cc56f-t76z9\" (UID: \"b94cb323-98c8-4c8c-ac25-48a70275b4ed\") " pod="openshift-service-ca/service-ca-9c57cc56f-t76z9" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.167705 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cf9767c-0ec6-4db4-8ce8-b159703e0173-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-nnvcq\" (UID: \"9cf9767c-0ec6-4db4-8ce8-b159703e0173\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nnvcq" Nov 22 10:40:35 crc kubenswrapper[4772]: E1122 10:40:35.167729 4772 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:35.667607885 +0000 UTC m=+155.907052379 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.173491 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9d79dc27-4bfc-4318-a8ed-11ceb9cb81b2-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-t8jgq\" (UID: \"9d79dc27-4bfc-4318-a8ed-11ceb9cb81b2\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-t8jgq" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.175109 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/b94cb323-98c8-4c8c-ac25-48a70275b4ed-signing-key\") pod \"service-ca-9c57cc56f-t76z9\" (UID: \"b94cb323-98c8-4c8c-ac25-48a70275b4ed\") " pod="openshift-service-ca/service-ca-9c57cc56f-t76z9" Nov 22 10:40:35 crc kubenswrapper[4772]: W1122 10:40:35.175285 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8554d37_40ae_41ef_bed9_7c79b3f8083e.slice/crio-ee7fc4e0ba14b4befbd2334544b3ba8e13d9cf18324ace619413ff8bfdbccef8 WatchSource:0}: Error finding container ee7fc4e0ba14b4befbd2334544b3ba8e13d9cf18324ace619413ff8bfdbccef8: Status 404 returned error can't find the container with id ee7fc4e0ba14b4befbd2334544b3ba8e13d9cf18324ace619413ff8bfdbccef8 Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.175568 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76de3e09-61e2-4240-aadb-86c8eaa622f3-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-b4j6f\" (UID: \"76de3e09-61e2-4240-aadb-86c8eaa622f3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b4j6f" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.179325 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/723118f2-f91b-4ca0-a6f9-4deaee014ef0-installation-pull-secrets\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.179460 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/723118f2-f91b-4ca0-a6f9-4deaee014ef0-trusted-ca\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.180873 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/723118f2-f91b-4ca0-a6f9-4deaee014ef0-registry-tls\") pod 
\"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.182835 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1cd04192-f2a8-4fbd-972a-c74fc6291b66-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-5lcxp\" (UID: \"1cd04192-f2a8-4fbd-972a-c74fc6291b66\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5lcxp" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.183762 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/723118f2-f91b-4ca0-a6f9-4deaee014ef0-registry-certificates\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.191720 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d5de78fd-e1a1-4ec1-9767-336aebc1d19e-metrics-tls\") pod \"dns-operator-744455d44c-km77q\" (UID: \"d5de78fd-e1a1-4ec1-9767-336aebc1d19e\") " pod="openshift-dns-operator/dns-operator-744455d44c-km77q" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.196445 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1cd04192-f2a8-4fbd-972a-c74fc6291b66-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-5lcxp\" (UID: \"1cd04192-f2a8-4fbd-972a-c74fc6291b66\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5lcxp" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.215580 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jdvm\" (UniqueName: \"kubernetes.io/projected/9cf9767c-0ec6-4db4-8ce8-b159703e0173-kube-api-access-9jdvm\") pod \"kube-storage-version-migrator-operator-b67b599dd-nnvcq\" (UID: \"9cf9767c-0ec6-4db4-8ce8-b159703e0173\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nnvcq" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.222437 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-7wspp" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.236327 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m9s4q" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.238800 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-ld7g2"] Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.240956 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b5ph7"] Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.242771 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-wc24s" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.242996 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-56sqr"] Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.252360 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn58v\" (UniqueName: \"kubernetes.io/projected/9d79dc27-4bfc-4318-a8ed-11ceb9cb81b2-kube-api-access-wn58v\") pod \"multus-admission-controller-857f4d67dd-t8jgq\" (UID: \"9d79dc27-4bfc-4318-a8ed-11ceb9cb81b2\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-t8jgq" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.255307 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-5znz6"] Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.268025 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.268340 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3099692-2c12-4f17-b161-1c17e9f13aed-config\") pod \"service-ca-operator-777779d784-qcjgh\" (UID: \"b3099692-2c12-4f17-b161-1c17e9f13aed\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qcjgh" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.268379 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ck8zz\" (UniqueName: \"kubernetes.io/projected/b706176c-178e-48c8-94eb-faa069d602cb-kube-api-access-ck8zz\") pod \"machine-config-server-2g4sz\" (UID: \"b706176c-178e-48c8-94eb-faa069d602cb\") " pod="openshift-machine-config-operator/machine-config-server-2g4sz" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.268405 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/df5917cd-29a5-4e07-b030-24456d6b0da6-secret-volume\") pod \"collect-profiles-29396790-lvhxw\" (UID: \"df5917cd-29a5-4e07-b030-24456d6b0da6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396790-lvhxw" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.268445 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkf2f\" (UniqueName: \"kubernetes.io/projected/d1d4b5fd-7de5-410f-b913-6fbb8e1c09b4-kube-api-access-jkf2f\") pod \"marketplace-operator-79b997595-8z66s\" (UID: \"d1d4b5fd-7de5-410f-b913-6fbb8e1c09b4\") " pod="openshift-marketplace/marketplace-operator-79b997595-8z66s" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.268471 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vm7mj\" (UniqueName: \"kubernetes.io/projected/f66fcaaf-2b15-40db-9e22-3a0d098d56f2-kube-api-access-vm7mj\") pod \"packageserver-d55dfcdfc-dfw97\" (UID: \"f66fcaaf-2b15-40db-9e22-3a0d098d56f2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dfw97" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.268514 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: 
\"kubernetes.io/host-path/17ee7fc7-887d-4bf4-b408-d1a723605bdc-socket-dir\") pod \"csi-hostpathplugin-jzjnj\" (UID: \"17ee7fc7-887d-4bf4-b408-d1a723605bdc\") " pod="hostpath-provisioner/csi-hostpathplugin-jzjnj" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.268536 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mchkl\" (UniqueName: \"kubernetes.io/projected/f0d00201-7f70-493c-a471-e319513076b3-kube-api-access-mchkl\") pod \"olm-operator-6b444d44fb-hn48r\" (UID: \"f0d00201-7f70-493c-a471-e319513076b3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hn48r" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.268564 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ca10904e-f1cc-40f0-ba83-f4711606e7f3-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-lk87v\" (UID: \"ca10904e-f1cc-40f0-ba83-f4711606e7f3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lk87v" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.268626 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/b706176c-178e-48c8-94eb-faa069d602cb-certs\") pod \"machine-config-server-2g4sz\" (UID: \"b706176c-178e-48c8-94eb-faa069d602cb\") " pod="openshift-machine-config-operator/machine-config-server-2g4sz" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.268656 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/17ee7fc7-887d-4bf4-b408-d1a723605bdc-plugins-dir\") pod \"csi-hostpathplugin-jzjnj\" (UID: \"17ee7fc7-887d-4bf4-b408-d1a723605bdc\") " pod="hostpath-provisioner/csi-hostpathplugin-jzjnj" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.268698 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/74f425d7-1955-455d-88ee-37ffccfb8c8c-srv-cert\") pod \"catalog-operator-68c6474976-lkn87\" (UID: \"74f425d7-1955-455d-88ee-37ffccfb8c8c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lkn87" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.268725 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df5917cd-29a5-4e07-b030-24456d6b0da6-config-volume\") pod \"collect-profiles-29396790-lvhxw\" (UID: \"df5917cd-29a5-4e07-b030-24456d6b0da6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396790-lvhxw" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.268746 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4n6h\" (UniqueName: \"kubernetes.io/projected/90789491-84d5-4454-ba6c-9b55634b5c74-kube-api-access-d4n6h\") pod \"machine-config-operator-74547568cd-vp9gx\" (UID: \"90789491-84d5-4454-ba6c-9b55634b5c74\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vp9gx" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.268768 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snvxg\" (UniqueName: \"kubernetes.io/projected/b3099692-2c12-4f17-b161-1c17e9f13aed-kube-api-access-snvxg\") pod \"service-ca-operator-777779d784-qcjgh\" (UID: \"b3099692-2c12-4f17-b161-1c17e9f13aed\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-qcjgh" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.268805 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d1d4b5fd-7de5-410f-b913-6fbb8e1c09b4-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-8z66s\" (UID: \"d1d4b5fd-7de5-410f-b913-6fbb8e1c09b4\") " pod="openshift-marketplace/marketplace-operator-79b997595-8z66s" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.268813 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/76de3e09-61e2-4240-aadb-86c8eaa622f3-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-b4j6f\" (UID: \"76de3e09-61e2-4240-aadb-86c8eaa622f3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b4j6f" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.268836 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ad185e3d-1a10-4e09-9c56-41a4ae9a435c-config-volume\") pod \"dns-default-9vsnk\" (UID: \"ad185e3d-1a10-4e09-9c56-41a4ae9a435c\") " pod="openshift-dns/dns-default-9vsnk" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.268867 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fknz\" (UniqueName: \"kubernetes.io/projected/ad185e3d-1a10-4e09-9c56-41a4ae9a435c-kube-api-access-9fknz\") pod \"dns-default-9vsnk\" (UID: \"ad185e3d-1a10-4e09-9c56-41a4ae9a435c\") " pod="openshift-dns/dns-default-9vsnk" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.268952 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tcml\" (UniqueName: \"kubernetes.io/projected/df5917cd-29a5-4e07-b030-24456d6b0da6-kube-api-access-7tcml\") pod \"collect-profiles-29396790-lvhxw\" (UID: \"df5917cd-29a5-4e07-b030-24456d6b0da6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396790-lvhxw" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.268984 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ca10904e-f1cc-40f0-ba83-f4711606e7f3-proxy-tls\") pod \"machine-config-controller-84d6567774-lk87v\" (UID: \"ca10904e-f1cc-40f0-ba83-f4711606e7f3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lk87v" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.269029 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxtvm\" (UniqueName: \"kubernetes.io/projected/a81453df-6689-461b-8b0f-389b50452f08-kube-api-access-jxtvm\") pod \"package-server-manager-789f6589d5-5sxdt\" (UID: \"a81453df-6689-461b-8b0f-389b50452f08\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5sxdt" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.269081 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ad185e3d-1a10-4e09-9c56-41a4ae9a435c-metrics-tls\") pod \"dns-default-9vsnk\" (UID: \"ad185e3d-1a10-4e09-9c56-41a4ae9a435c\") " pod="openshift-dns/dns-default-9vsnk" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.269126 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/90789491-84d5-4454-ba6c-9b55634b5c74-auth-proxy-config\") pod \"machine-config-operator-74547568cd-vp9gx\" (UID: \"90789491-84d5-4454-ba6c-9b55634b5c74\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vp9gx" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.269139 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/17ee7fc7-887d-4bf4-b408-d1a723605bdc-socket-dir\") pod \"csi-hostpathplugin-jzjnj\" (UID: \"17ee7fc7-887d-4bf4-b408-d1a723605bdc\") " pod="hostpath-provisioner/csi-hostpathplugin-jzjnj" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.269202 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d1d4b5fd-7de5-410f-b913-6fbb8e1c09b4-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-8z66s\" (UID: \"d1d4b5fd-7de5-410f-b913-6fbb8e1c09b4\") " pod="openshift-marketplace/marketplace-operator-79b997595-8z66s" Nov 22 10:40:35 crc kubenswrapper[4772]: E1122 10:40:35.269229 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:35.769210823 +0000 UTC m=+156.008655317 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.269251 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/90789491-84d5-4454-ba6c-9b55634b5c74-images\") pod \"machine-config-operator-74547568cd-vp9gx\" (UID: \"90789491-84d5-4454-ba6c-9b55634b5c74\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vp9gx" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.269276 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdfv8\" (UniqueName: \"kubernetes.io/projected/ca10904e-f1cc-40f0-ba83-f4711606e7f3-kube-api-access-cdfv8\") pod \"machine-config-controller-84d6567774-lk87v\" (UID: \"ca10904e-f1cc-40f0-ba83-f4711606e7f3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lk87v" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.269319 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/f66fcaaf-2b15-40db-9e22-3a0d098d56f2-tmpfs\") pod \"packageserver-d55dfcdfc-dfw97\" (UID: \"f66fcaaf-2b15-40db-9e22-3a0d098d56f2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dfw97" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.269344 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/17ee7fc7-887d-4bf4-b408-d1a723605bdc-csi-data-dir\") pod \"csi-hostpathplugin-jzjnj\" (UID: 
\"17ee7fc7-887d-4bf4-b408-d1a723605bdc\") " pod="hostpath-provisioner/csi-hostpathplugin-jzjnj" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.269366 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnsdj\" (UniqueName: \"kubernetes.io/projected/7b9b1e33-9a6a-4d0f-af54-5589be658a3a-kube-api-access-hnsdj\") pod \"migrator-59844c95c7-kqhcv\" (UID: \"7b9b1e33-9a6a-4d0f-af54-5589be658a3a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kqhcv" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.269405 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/30a3f834-9426-4bd0-908e-4974c60576ff-cert\") pod \"ingress-canary-vhvzn\" (UID: \"30a3f834-9426-4bd0-908e-4974c60576ff\") " pod="openshift-ingress-canary/ingress-canary-vhvzn" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.269431 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f0d00201-7f70-493c-a471-e319513076b3-profile-collector-cert\") pod \"olm-operator-6b444d44fb-hn48r\" (UID: \"f0d00201-7f70-493c-a471-e319513076b3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hn48r" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.269472 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.269505 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/17ee7fc7-887d-4bf4-b408-d1a723605bdc-registration-dir\") pod \"csi-hostpathplugin-jzjnj\" (UID: \"17ee7fc7-887d-4bf4-b408-d1a723605bdc\") " pod="hostpath-provisioner/csi-hostpathplugin-jzjnj" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.269601 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tw95k\" (UniqueName: \"kubernetes.io/projected/74f425d7-1955-455d-88ee-37ffccfb8c8c-kube-api-access-tw95k\") pod \"catalog-operator-68c6474976-lkn87\" (UID: \"74f425d7-1955-455d-88ee-37ffccfb8c8c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lkn87" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.269624 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a81453df-6689-461b-8b0f-389b50452f08-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-5sxdt\" (UID: \"a81453df-6689-461b-8b0f-389b50452f08\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5sxdt" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.269658 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwrmm\" (UniqueName: \"kubernetes.io/projected/30a3f834-9426-4bd0-908e-4974c60576ff-kube-api-access-lwrmm\") pod \"ingress-canary-vhvzn\" (UID: \"30a3f834-9426-4bd0-908e-4974c60576ff\") " pod="openshift-ingress-canary/ingress-canary-vhvzn" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.269699 4772 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/74f425d7-1955-455d-88ee-37ffccfb8c8c-profile-collector-cert\") pod \"catalog-operator-68c6474976-lkn87\" (UID: \"74f425d7-1955-455d-88ee-37ffccfb8c8c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lkn87" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.269726 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b3099692-2c12-4f17-b161-1c17e9f13aed-serving-cert\") pod \"service-ca-operator-777779d784-qcjgh\" (UID: \"b3099692-2c12-4f17-b161-1c17e9f13aed\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qcjgh" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.269766 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjn5w\" (UniqueName: \"kubernetes.io/projected/17ee7fc7-887d-4bf4-b408-d1a723605bdc-kube-api-access-hjn5w\") pod \"csi-hostpathplugin-jzjnj\" (UID: \"17ee7fc7-887d-4bf4-b408-d1a723605bdc\") " pod="hostpath-provisioner/csi-hostpathplugin-jzjnj" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.269779 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3099692-2c12-4f17-b161-1c17e9f13aed-config\") pod \"service-ca-operator-777779d784-qcjgh\" (UID: \"b3099692-2c12-4f17-b161-1c17e9f13aed\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qcjgh" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.269790 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/b706176c-178e-48c8-94eb-faa069d602cb-node-bootstrap-token\") pod \"machine-config-server-2g4sz\" (UID: \"b706176c-178e-48c8-94eb-faa069d602cb\") " pod="openshift-machine-config-operator/machine-config-server-2g4sz" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.269842 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a26a8d34-6b49-4019-b262-7f8e6fddc433-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-g7h5s\" (UID: \"a26a8d34-6b49-4019-b262-7f8e6fddc433\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g7h5s" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.269893 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ca10904e-f1cc-40f0-ba83-f4711606e7f3-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-lk87v\" (UID: \"ca10904e-f1cc-40f0-ba83-f4711606e7f3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lk87v" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.269901 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/90789491-84d5-4454-ba6c-9b55634b5c74-proxy-tls\") pod \"machine-config-operator-74547568cd-vp9gx\" (UID: \"90789491-84d5-4454-ba6c-9b55634b5c74\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vp9gx" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.269956 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/f0d00201-7f70-493c-a471-e319513076b3-srv-cert\") pod \"olm-operator-6b444d44fb-hn48r\" (UID: \"f0d00201-7f70-493c-a471-e319513076b3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hn48r" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.269984 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f66fcaaf-2b15-40db-9e22-3a0d098d56f2-webhook-cert\") pod \"packageserver-d55dfcdfc-dfw97\" (UID: \"f66fcaaf-2b15-40db-9e22-3a0d098d56f2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dfw97" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.270021 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rl8zt\" (UniqueName: \"kubernetes.io/projected/a26a8d34-6b49-4019-b262-7f8e6fddc433-kube-api-access-rl8zt\") pod \"control-plane-machine-set-operator-78cbb6b69f-g7h5s\" (UID: \"a26a8d34-6b49-4019-b262-7f8e6fddc433\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g7h5s" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.270061 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/17ee7fc7-887d-4bf4-b408-d1a723605bdc-mountpoint-dir\") pod \"csi-hostpathplugin-jzjnj\" (UID: \"17ee7fc7-887d-4bf4-b408-d1a723605bdc\") " pod="hostpath-provisioner/csi-hostpathplugin-jzjnj" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.270086 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f66fcaaf-2b15-40db-9e22-3a0d098d56f2-apiservice-cert\") pod \"packageserver-d55dfcdfc-dfw97\" (UID: \"f66fcaaf-2b15-40db-9e22-3a0d098d56f2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dfw97" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.271317 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/f66fcaaf-2b15-40db-9e22-3a0d098d56f2-tmpfs\") pod \"packageserver-d55dfcdfc-dfw97\" (UID: \"f66fcaaf-2b15-40db-9e22-3a0d098d56f2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dfw97" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.271617 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/90789491-84d5-4454-ba6c-9b55634b5c74-auth-proxy-config\") pod \"machine-config-operator-74547568cd-vp9gx\" (UID: \"90789491-84d5-4454-ba6c-9b55634b5c74\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vp9gx" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.271705 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/17ee7fc7-887d-4bf4-b408-d1a723605bdc-csi-data-dir\") pod \"csi-hostpathplugin-jzjnj\" (UID: \"17ee7fc7-887d-4bf4-b408-d1a723605bdc\") " pod="hostpath-provisioner/csi-hostpathplugin-jzjnj" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.271905 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/90789491-84d5-4454-ba6c-9b55634b5c74-images\") pod \"machine-config-operator-74547568cd-vp9gx\" (UID: \"90789491-84d5-4454-ba6c-9b55634b5c74\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vp9gx" Nov 22 10:40:35 crc 
kubenswrapper[4772]: I1122 10:40:35.272117 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/17ee7fc7-887d-4bf4-b408-d1a723605bdc-plugins-dir\") pod \"csi-hostpathplugin-jzjnj\" (UID: \"17ee7fc7-887d-4bf4-b408-d1a723605bdc\") " pod="hostpath-provisioner/csi-hostpathplugin-jzjnj" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.273072 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/17ee7fc7-887d-4bf4-b408-d1a723605bdc-mountpoint-dir\") pod \"csi-hostpathplugin-jzjnj\" (UID: \"17ee7fc7-887d-4bf4-b408-d1a723605bdc\") " pod="hostpath-provisioner/csi-hostpathplugin-jzjnj" Nov 22 10:40:35 crc kubenswrapper[4772]: E1122 10:40:35.273196 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:35.773184417 +0000 UTC m=+156.012628911 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.273678 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df5917cd-29a5-4e07-b030-24456d6b0da6-config-volume\") pod \"collect-profiles-29396790-lvhxw\" (UID: \"df5917cd-29a5-4e07-b030-24456d6b0da6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396790-lvhxw" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.274101 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/df5917cd-29a5-4e07-b030-24456d6b0da6-secret-volume\") pod \"collect-profiles-29396790-lvhxw\" (UID: \"df5917cd-29a5-4e07-b030-24456d6b0da6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396790-lvhxw" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.274311 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d1d4b5fd-7de5-410f-b913-6fbb8e1c09b4-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-8z66s\" (UID: \"d1d4b5fd-7de5-410f-b913-6fbb8e1c09b4\") " pod="openshift-marketplace/marketplace-operator-79b997595-8z66s" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.274773 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/90789491-84d5-4454-ba6c-9b55634b5c74-proxy-tls\") pod \"machine-config-operator-74547568cd-vp9gx\" (UID: \"90789491-84d5-4454-ba6c-9b55634b5c74\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vp9gx" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.274860 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ad185e3d-1a10-4e09-9c56-41a4ae9a435c-metrics-tls\") pod \"dns-default-9vsnk\" (UID: \"ad185e3d-1a10-4e09-9c56-41a4ae9a435c\") " 
pod="openshift-dns/dns-default-9vsnk" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.274963 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ad185e3d-1a10-4e09-9c56-41a4ae9a435c-config-volume\") pod \"dns-default-9vsnk\" (UID: \"ad185e3d-1a10-4e09-9c56-41a4ae9a435c\") " pod="openshift-dns/dns-default-9vsnk" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.275115 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5lcxp" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.275431 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f0d00201-7f70-493c-a471-e319513076b3-srv-cert\") pod \"olm-operator-6b444d44fb-hn48r\" (UID: \"f0d00201-7f70-493c-a471-e319513076b3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hn48r" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.275577 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/17ee7fc7-887d-4bf4-b408-d1a723605bdc-registration-dir\") pod \"csi-hostpathplugin-jzjnj\" (UID: \"17ee7fc7-887d-4bf4-b408-d1a723605bdc\") " pod="hostpath-provisioner/csi-hostpathplugin-jzjnj" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.276372 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f0d00201-7f70-493c-a471-e319513076b3-profile-collector-cert\") pod \"olm-operator-6b444d44fb-hn48r\" (UID: \"f0d00201-7f70-493c-a471-e319513076b3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hn48r" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.276921 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d1d4b5fd-7de5-410f-b913-6fbb8e1c09b4-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-8z66s\" (UID: \"d1d4b5fd-7de5-410f-b913-6fbb8e1c09b4\") " pod="openshift-marketplace/marketplace-operator-79b997595-8z66s" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.277027 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/b706176c-178e-48c8-94eb-faa069d602cb-certs\") pod \"machine-config-server-2g4sz\" (UID: \"b706176c-178e-48c8-94eb-faa069d602cb\") " pod="openshift-machine-config-operator/machine-config-server-2g4sz" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.278430 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/74f425d7-1955-455d-88ee-37ffccfb8c8c-profile-collector-cert\") pod \"catalog-operator-68c6474976-lkn87\" (UID: \"74f425d7-1955-455d-88ee-37ffccfb8c8c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lkn87" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.278668 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ca10904e-f1cc-40f0-ba83-f4711606e7f3-proxy-tls\") pod \"machine-config-controller-84d6567774-lk87v\" (UID: \"ca10904e-f1cc-40f0-ba83-f4711606e7f3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lk87v" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.279181 
4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/74f425d7-1955-455d-88ee-37ffccfb8c8c-srv-cert\") pod \"catalog-operator-68c6474976-lkn87\" (UID: \"74f425d7-1955-455d-88ee-37ffccfb8c8c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lkn87" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.279493 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b3099692-2c12-4f17-b161-1c17e9f13aed-serving-cert\") pod \"service-ca-operator-777779d784-qcjgh\" (UID: \"b3099692-2c12-4f17-b161-1c17e9f13aed\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qcjgh" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.280169 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a81453df-6689-461b-8b0f-389b50452f08-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-5sxdt\" (UID: \"a81453df-6689-461b-8b0f-389b50452f08\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5sxdt" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.280193 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f66fcaaf-2b15-40db-9e22-3a0d098d56f2-apiservice-cert\") pod \"packageserver-d55dfcdfc-dfw97\" (UID: \"f66fcaaf-2b15-40db-9e22-3a0d098d56f2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dfw97" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.282533 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/30a3f834-9426-4bd0-908e-4974c60576ff-cert\") pod \"ingress-canary-vhvzn\" (UID: \"30a3f834-9426-4bd0-908e-4974c60576ff\") " pod="openshift-ingress-canary/ingress-canary-vhvzn" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.282554 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7gdmn"] Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.282641 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f66fcaaf-2b15-40db-9e22-3a0d098d56f2-webhook-cert\") pod \"packageserver-d55dfcdfc-dfw97\" (UID: \"f66fcaaf-2b15-40db-9e22-3a0d098d56f2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dfw97" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.284644 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/b706176c-178e-48c8-94eb-faa069d602cb-node-bootstrap-token\") pod \"machine-config-server-2g4sz\" (UID: \"b706176c-178e-48c8-94eb-faa069d602cb\") " pod="openshift-machine-config-operator/machine-config-server-2g4sz" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.284926 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nnvcq" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.287181 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bd547fda-024b-4cad-bbfd-f82f5fbd5859-bound-sa-token\") pod \"ingress-operator-5b745b69d9-bgkbs\" (UID: \"bd547fda-024b-4cad-bbfd-f82f5fbd5859\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bgkbs" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.288466 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a26a8d34-6b49-4019-b262-7f8e6fddc433-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-g7h5s\" (UID: \"a26a8d34-6b49-4019-b262-7f8e6fddc433\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g7h5s" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.290741 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knc55\" (UniqueName: \"kubernetes.io/projected/b94cb323-98c8-4c8c-ac25-48a70275b4ed-kube-api-access-knc55\") pod \"service-ca-9c57cc56f-t76z9\" (UID: \"b94cb323-98c8-4c8c-ac25-48a70275b4ed\") " pod="openshift-service-ca/service-ca-9c57cc56f-t76z9" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.297079 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-t8jgq" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.308442 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-t76z9" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.308525 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fq27v\" (UniqueName: \"kubernetes.io/projected/bd547fda-024b-4cad-bbfd-f82f5fbd5859-kube-api-access-fq27v\") pod \"ingress-operator-5b745b69d9-bgkbs\" (UID: \"bd547fda-024b-4cad-bbfd-f82f5fbd5859\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bgkbs" Nov 22 10:40:35 crc kubenswrapper[4772]: W1122 10:40:35.309002 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode5a02044_ee67_480e_9cc9_22cf07bc9388.slice/crio-b3aa60ce8596b5dcc255afffb91689b62d5465d48ad06527ee455494acefdfb5 WatchSource:0}: Error finding container b3aa60ce8596b5dcc255afffb91689b62d5465d48ad06527ee455494acefdfb5: Status 404 returned error can't find the container with id b3aa60ce8596b5dcc255afffb91689b62d5465d48ad06527ee455494acefdfb5 Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.319400 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b4j6f" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.331277 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxtwg\" (UniqueName: \"kubernetes.io/projected/c1c56d4b-3431-4052-b883-6019a008c9aa-kube-api-access-zxtwg\") pod \"openshift-controller-manager-operator-756b6f6bc6-rmbgm\" (UID: \"c1c56d4b-3431-4052-b883-6019a008c9aa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rmbgm" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.347230 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/723118f2-f91b-4ca0-a6f9-4deaee014ef0-bound-sa-token\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.367506 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-7gdmn" event={"ID":"f529668b-54db-49e7-92cb-c3cf6b986dce","Type":"ContainerStarted","Data":"c53df660dc1b150fb9c979f4ddfc73326bc33b73e195c108d35ec6d23671f732"} Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.370939 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:35 crc kubenswrapper[4772]: E1122 10:40:35.371129 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:35.871098998 +0000 UTC m=+156.110543492 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.371292 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xmqs2" event={"ID":"7e6495dc-3c26-45e6-af62-a4957488ae51","Type":"ContainerStarted","Data":"1d66a81e2bff1096b1dcd00424f8535f03c982fe8bf6dbb764cab6cc21081df0"} Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.371526 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:35 crc kubenswrapper[4772]: E1122 10:40:35.371939 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:35.87192517 +0000 UTC m=+156.111369844 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.372502 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9l275\" (UniqueName: \"kubernetes.io/projected/723118f2-f91b-4ca0-a6f9-4deaee014ef0-kube-api-access-9l275\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.372704 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-v2gm9" event={"ID":"a8554d37-40ae-41ef-bed9-7c79b3f8083e","Type":"ContainerStarted","Data":"ee7fc4e0ba14b4befbd2334544b3ba8e13d9cf18324ace619413ff8bfdbccef8"} Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.377316 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5znz6" event={"ID":"e5a02044-ee67-480e-9cc9-22cf07bc9388","Type":"ContainerStarted","Data":"b3aa60ce8596b5dcc255afffb91689b62d5465d48ad06527ee455494acefdfb5"} Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.378805 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-56sqr" event={"ID":"e0f99cb1-427f-4992-8c9b-15c285f13189","Type":"ContainerStarted","Data":"f67c8b3580da0ff1fbcf8ab904129be58a85afed33c80c7677642333bf217310"} Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.380229 4772 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-ld7g2" event={"ID":"ed063d5e-19cb-42cb-89fb-21b3b751f53e","Type":"ContainerStarted","Data":"0a3d1d315491c26fcf5e939139c4457a5d22dac63e5af3d035aee510d71e9b40"} Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.381575 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-ld9hg" event={"ID":"614def41-0349-470c-afca-e5c335fa8834","Type":"ContainerStarted","Data":"5fbba05e20db75e660238020d921eefa08ce8357f6ac1f6adf53de501ff20b72"} Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.382813 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2n6h5" event={"ID":"7ce04b81-efe4-4be7-b020-6ea273596c53","Type":"ContainerStarted","Data":"1a9bab683e7c984320bfb941aed0adfc175913bd35a2c87e246459a875ce4c74"} Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.384334 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-sgkwc" event={"ID":"8c2915a8-d452-4234-94a7-f1ec68c95e4a","Type":"ContainerStarted","Data":"e13a45e84b81795589b08fb243c80ac8fbd02bb4f898fc36b38ff67a11d68007"} Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.386299 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v9s84" event={"ID":"1411a454-22f4-4eef-828a-6a46c81c6c7e","Type":"ContainerStarted","Data":"ea7bd1774e31ebdb16cc3d548e05597f10e407d615455ab0e72dec97694c5804"} Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.388495 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-vvs55" event={"ID":"bea9575a-d7c4-4aaa-bc01-eaee90317eea","Type":"ContainerStarted","Data":"60ab0be0c4ef51aa2b527563e794c436f362f3915866568b2adc9ed51af5b04c"} Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.395892 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvtsh\" (UniqueName: \"kubernetes.io/projected/d5de78fd-e1a1-4ec1-9767-336aebc1d19e-kube-api-access-tvtsh\") pod \"dns-operator-744455d44c-km77q\" (UID: \"d5de78fd-e1a1-4ec1-9767-336aebc1d19e\") " pod="openshift-dns-operator/dns-operator-744455d44c-km77q" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.396203 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5wvfr" event={"ID":"ccd507b9-7746-46ee-8bd4-1134cc290f67","Type":"ContainerStarted","Data":"ab0525713614cc070f3a6462f6cb08ac833ee757306af2c75fd94ae430d20d8b"} Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.397753 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b5ph7" event={"ID":"7578c60c-84c2-4dd5-a6c5-576606438ede","Type":"ContainerStarted","Data":"ccd5f272cdba0f2343c27a62a038677e2c06633c37a170e36825bb4f0ca5acfc"} Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.430593 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fknz\" (UniqueName: \"kubernetes.io/projected/ad185e3d-1a10-4e09-9c56-41a4ae9a435c-kube-api-access-9fknz\") pod \"dns-default-9vsnk\" (UID: \"ad185e3d-1a10-4e09-9c56-41a4ae9a435c\") " pod="openshift-dns/dns-default-9vsnk" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.449718 4772 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-mchkl\" (UniqueName: \"kubernetes.io/projected/f0d00201-7f70-493c-a471-e319513076b3-kube-api-access-mchkl\") pod \"olm-operator-6b444d44fb-hn48r\" (UID: \"f0d00201-7f70-493c-a471-e319513076b3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hn48r" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.468198 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ck8zz\" (UniqueName: \"kubernetes.io/projected/b706176c-178e-48c8-94eb-faa069d602cb-kube-api-access-ck8zz\") pod \"machine-config-server-2g4sz\" (UID: \"b706176c-178e-48c8-94eb-faa069d602cb\") " pod="openshift-machine-config-operator/machine-config-server-2g4sz" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.473452 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:35 crc kubenswrapper[4772]: E1122 10:40:35.473870 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:35.973829775 +0000 UTC m=+156.213274279 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.474258 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:35 crc kubenswrapper[4772]: E1122 10:40:35.475247 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:35.975228152 +0000 UTC m=+156.214672646 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.494033 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkf2f\" (UniqueName: \"kubernetes.io/projected/d1d4b5fd-7de5-410f-b913-6fbb8e1c09b4-kube-api-access-jkf2f\") pod \"marketplace-operator-79b997595-8z66s\" (UID: \"d1d4b5fd-7de5-410f-b913-6fbb8e1c09b4\") " pod="openshift-marketplace/marketplace-operator-79b997595-8z66s" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.498440 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bgkbs" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.509199 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vm7mj\" (UniqueName: \"kubernetes.io/projected/f66fcaaf-2b15-40db-9e22-3a0d098d56f2-kube-api-access-vm7mj\") pod \"packageserver-d55dfcdfc-dfw97\" (UID: \"f66fcaaf-2b15-40db-9e22-3a0d098d56f2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dfw97" Nov 22 10:40:35 crc kubenswrapper[4772]: W1122 10:40:35.509213 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8787d05b_35f8_4862_9b4f_53e18d3b56ef.slice/crio-b47bb3626b74ba73ea7fe53ab44136c0851be0c2c66fe6df096a3b3f0402a817 WatchSource:0}: Error finding container b47bb3626b74ba73ea7fe53ab44136c0851be0c2c66fe6df096a3b3f0402a817: Status 404 returned error can't find the container with id b47bb3626b74ba73ea7fe53ab44136c0851be0c2c66fe6df096a3b3f0402a817 Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.513758 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-2g4sz" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.514241 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-km77q" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.523094 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-9vsnk" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.538619 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-7wspp"] Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.542311 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snvxg\" (UniqueName: \"kubernetes.io/projected/b3099692-2c12-4f17-b161-1c17e9f13aed-kube-api-access-snvxg\") pod \"service-ca-operator-777779d784-qcjgh\" (UID: \"b3099692-2c12-4f17-b161-1c17e9f13aed\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qcjgh" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.558967 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxtvm\" (UniqueName: \"kubernetes.io/projected/a81453df-6689-461b-8b0f-389b50452f08-kube-api-access-jxtvm\") pod \"package-server-manager-789f6589d5-5sxdt\" (UID: \"a81453df-6689-461b-8b0f-389b50452f08\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5sxdt" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.559482 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rmbgm" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.576195 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:35 crc kubenswrapper[4772]: E1122 10:40:35.576415 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:36.076381188 +0000 UTC m=+156.315825682 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.576480 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tcml\" (UniqueName: \"kubernetes.io/projected/df5917cd-29a5-4e07-b030-24456d6b0da6-kube-api-access-7tcml\") pod \"collect-profiles-29396790-lvhxw\" (UID: \"df5917cd-29a5-4e07-b030-24456d6b0da6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396790-lvhxw" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.576598 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:35 crc kubenswrapper[4772]: E1122 10:40:35.577088 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:36.077069296 +0000 UTC m=+156.316513790 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.593286 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdfv8\" (UniqueName: \"kubernetes.io/projected/ca10904e-f1cc-40f0-ba83-f4711606e7f3-kube-api-access-cdfv8\") pod \"machine-config-controller-84d6567774-lk87v\" (UID: \"ca10904e-f1cc-40f0-ba83-f4711606e7f3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lk87v" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.616135 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4n6h\" (UniqueName: \"kubernetes.io/projected/90789491-84d5-4454-ba6c-9b55634b5c74-kube-api-access-d4n6h\") pod \"machine-config-operator-74547568cd-vp9gx\" (UID: \"90789491-84d5-4454-ba6c-9b55634b5c74\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vp9gx" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.636649 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-8z66s" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.636733 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnsdj\" (UniqueName: \"kubernetes.io/projected/7b9b1e33-9a6a-4d0f-af54-5589be658a3a-kube-api-access-hnsdj\") pod \"migrator-59844c95c7-kqhcv\" (UID: \"7b9b1e33-9a6a-4d0f-af54-5589be658a3a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kqhcv" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.639606 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m9s4q"] Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.640927 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lk87v" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.650858 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5sxdt" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.664449 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dfw97" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.670827 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rl8zt\" (UniqueName: \"kubernetes.io/projected/a26a8d34-6b49-4019-b262-7f8e6fddc433-kube-api-access-rl8zt\") pod \"control-plane-machine-set-operator-78cbb6b69f-g7h5s\" (UID: \"a26a8d34-6b49-4019-b262-7f8e6fddc433\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g7h5s" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.679096 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:35 crc kubenswrapper[4772]: E1122 10:40:35.679693 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:36.17966555 +0000 UTC m=+156.419110044 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.682983 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwrmm\" (UniqueName: \"kubernetes.io/projected/30a3f834-9426-4bd0-908e-4974c60576ff-kube-api-access-lwrmm\") pod \"ingress-canary-vhvzn\" (UID: \"30a3f834-9426-4bd0-908e-4974c60576ff\") " pod="openshift-ingress-canary/ingress-canary-vhvzn" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.683406 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vp9gx" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.684281 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-t8jgq"] Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.690243 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g7h5s" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.690665 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjn5w\" (UniqueName: \"kubernetes.io/projected/17ee7fc7-887d-4bf4-b408-d1a723605bdc-kube-api-access-hjn5w\") pod \"csi-hostpathplugin-jzjnj\" (UID: \"17ee7fc7-887d-4bf4-b408-d1a723605bdc\") " pod="hostpath-provisioner/csi-hostpathplugin-jzjnj" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.698996 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kqhcv" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.710239 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tw95k\" (UniqueName: \"kubernetes.io/projected/74f425d7-1955-455d-88ee-37ffccfb8c8c-kube-api-access-tw95k\") pod \"catalog-operator-68c6474976-lkn87\" (UID: \"74f425d7-1955-455d-88ee-37ffccfb8c8c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lkn87" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.736074 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hn48r" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.765804 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396790-lvhxw" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.777661 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-qcjgh" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.780766 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:35 crc kubenswrapper[4772]: E1122 10:40:35.781113 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:36.281100653 +0000 UTC m=+156.520545147 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.794337 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-jzjnj" Nov 22 10:40:35 crc kubenswrapper[4772]: W1122 10:40:35.795947 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e1caf27_b2b5_4cdf_b500_38d461b637c2.slice/crio-78bd66a5898dc2a20e74af0b58a663c13d2a7a6c4619b091f1d72adca9961bed WatchSource:0}: Error finding container 78bd66a5898dc2a20e74af0b58a663c13d2a7a6c4619b091f1d72adca9961bed: Status 404 returned error can't find the container with id 78bd66a5898dc2a20e74af0b58a663c13d2a7a6c4619b091f1d72adca9961bed Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.802415 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5lcxp"] Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.814115 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-vhvzn" Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.835942 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nnvcq"] Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.875089 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-t76z9"] Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.881635 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:35 crc kubenswrapper[4772]: E1122 10:40:35.882212 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:36.382190897 +0000 UTC m=+156.621635391 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:35 crc kubenswrapper[4772]: W1122 10:40:35.943593 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb94cb323_98c8_4c8c_ac25_48a70275b4ed.slice/crio-18643054991c778f052e14f72b7b1117ac8b99e23a970c8961046912c2eaf982 WatchSource:0}: Error finding container 18643054991c778f052e14f72b7b1117ac8b99e23a970c8961046912c2eaf982: Status 404 returned error can't find the container with id 18643054991c778f052e14f72b7b1117ac8b99e23a970c8961046912c2eaf982 Nov 22 10:40:35 crc kubenswrapper[4772]: I1122 10:40:35.983779 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:35 crc kubenswrapper[4772]: E1122 10:40:35.984274 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:36.484251477 +0000 UTC m=+156.723695971 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:36 crc kubenswrapper[4772]: I1122 10:40:36.012608 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lkn87" Nov 22 10:40:36 crc kubenswrapper[4772]: I1122 10:40:36.029433 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b4j6f"] Nov 22 10:40:36 crc kubenswrapper[4772]: I1122 10:40:36.087013 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:36 crc kubenswrapper[4772]: E1122 10:40:36.087465 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:36.587439966 +0000 UTC m=+156.826884460 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:36 crc kubenswrapper[4772]: I1122 10:40:36.111062 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-bgkbs"] Nov 22 10:40:36 crc kubenswrapper[4772]: I1122 10:40:36.188853 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:36 crc kubenswrapper[4772]: E1122 10:40:36.189259 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:36.689241959 +0000 UTC m=+156.928686453 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:36 crc kubenswrapper[4772]: I1122 10:40:36.214119 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-qcjgh"] Nov 22 10:40:36 crc kubenswrapper[4772]: I1122 10:40:36.261861 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-kqhcv"] Nov 22 10:40:36 crc kubenswrapper[4772]: I1122 10:40:36.289699 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:36 crc kubenswrapper[4772]: E1122 10:40:36.289881 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:36.789844951 +0000 UTC m=+157.029289455 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:36 crc kubenswrapper[4772]: I1122 10:40:36.290076 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:36 crc kubenswrapper[4772]: E1122 10:40:36.290691 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:36.790677652 +0000 UTC m=+157.030122326 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:36 crc kubenswrapper[4772]: I1122 10:40:36.365053 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-km77q"] Nov 22 10:40:36 crc kubenswrapper[4772]: W1122 10:40:36.370340 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76de3e09_61e2_4240_aadb_86c8eaa622f3.slice/crio-4469de86b4840e8fd297dddb8bcd19ef4e921d49701e532b52a2d1d586549c90 WatchSource:0}: Error finding container 4469de86b4840e8fd297dddb8bcd19ef4e921d49701e532b52a2d1d586549c90: Status 404 returned error can't find the container with id 4469de86b4840e8fd297dddb8bcd19ef4e921d49701e532b52a2d1d586549c90 Nov 22 10:40:36 crc kubenswrapper[4772]: W1122 10:40:36.379291 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd547fda_024b_4cad_bbfd_f82f5fbd5859.slice/crio-961af1ac3571edb3c1c784b14676f05c340f9b21860584d22e177890e76e5a5a WatchSource:0}: Error finding container 961af1ac3571edb3c1c784b14676f05c340f9b21860584d22e177890e76e5a5a: Status 404 returned error can't find the container with id 961af1ac3571edb3c1c784b14676f05c340f9b21860584d22e177890e76e5a5a Nov 22 10:40:36 crc kubenswrapper[4772]: W1122 10:40:36.383591 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3099692_2c12_4f17_b161_1c17e9f13aed.slice/crio-a658d2486af5e437a6de606ac0f6e5798f06a7fca4a786ad47055ca916ff89c0 WatchSource:0}: Error finding container a658d2486af5e437a6de606ac0f6e5798f06a7fca4a786ad47055ca916ff89c0: Status 404 returned error can't find the container with id a658d2486af5e437a6de606ac0f6e5798f06a7fca4a786ad47055ca916ff89c0 Nov 22 10:40:36 crc kubenswrapper[4772]: W1122 10:40:36.389641 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b9b1e33_9a6a_4d0f_af54_5589be658a3a.slice/crio-6e22e92ed0fe5158625a1f1383b38ac180f36206d98a30a6a6ff3ddb748d7547 WatchSource:0}: Error finding container 6e22e92ed0fe5158625a1f1383b38ac180f36206d98a30a6a6ff3ddb748d7547: Status 404 returned error can't find the container with id 6e22e92ed0fe5158625a1f1383b38ac180f36206d98a30a6a6ff3ddb748d7547 Nov 22 10:40:36 crc kubenswrapper[4772]: I1122 10:40:36.391103 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:36 crc kubenswrapper[4772]: E1122 10:40:36.391461 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-22 10:40:36.891425688 +0000 UTC m=+157.130870182 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:36 crc kubenswrapper[4772]: I1122 10:40:36.391664 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:36 crc kubenswrapper[4772]: E1122 10:40:36.392172 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:36.892152827 +0000 UTC m=+157.131597321 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:36 crc kubenswrapper[4772]: W1122 10:40:36.399230 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd5de78fd_e1a1_4ec1_9767_336aebc1d19e.slice/crio-31644aac2d8b634f565c60e89f7613329216baa6b5d4349b3b9323d3ab009ac4 WatchSource:0}: Error finding container 31644aac2d8b634f565c60e89f7613329216baa6b5d4349b3b9323d3ab009ac4: Status 404 returned error can't find the container with id 31644aac2d8b634f565c60e89f7613329216baa6b5d4349b3b9323d3ab009ac4 Nov 22 10:40:36 crc kubenswrapper[4772]: I1122 10:40:36.413246 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xmqs2" event={"ID":"7e6495dc-3c26-45e6-af62-a4957488ae51","Type":"ContainerStarted","Data":"0ce4f4919f780b76ee07ac819de42455e0db8917444c47b2a45ea6b991e36258"} Nov 22 10:40:36 crc kubenswrapper[4772]: I1122 10:40:36.417459 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kqhcv" event={"ID":"7b9b1e33-9a6a-4d0f-af54-5589be658a3a","Type":"ContainerStarted","Data":"6e22e92ed0fe5158625a1f1383b38ac180f36206d98a30a6a6ff3ddb748d7547"} Nov 22 10:40:36 crc kubenswrapper[4772]: I1122 10:40:36.423817 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-ld9hg" event={"ID":"614def41-0349-470c-afca-e5c335fa8834","Type":"ContainerStarted","Data":"28a8321b1018efbb9f2a7ff0557bf03068eb955c8842adf1621dceac92226357"} Nov 22 10:40:36 crc kubenswrapper[4772]: I1122 10:40:36.426221 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-admission-controller-857f4d67dd-t8jgq" event={"ID":"9d79dc27-4bfc-4318-a8ed-11ceb9cb81b2","Type":"ContainerStarted","Data":"60653a180b23a488b9f595b5b2226693ab0a44f57c3b04ecaf6a3e118857a173"} Nov 22 10:40:36 crc kubenswrapper[4772]: I1122 10:40:36.427613 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-7wspp" event={"ID":"b1e29324-c165-4f14-8cf4-f8f62522a87e","Type":"ContainerStarted","Data":"866b01f8e9ac2b511a2c4cfb808df3d6abff0b7362dbce6d09c681249b369179"} Nov 22 10:40:36 crc kubenswrapper[4772]: I1122 10:40:36.432093 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wbcgw" event={"ID":"6f4a455f-fdda-46bf-bebf-67f1b83863c8","Type":"ContainerStarted","Data":"d0e59214ea9310b8c7dae73c87f5f1843c932c3614148d099a35f76517e14751"} Nov 22 10:40:36 crc kubenswrapper[4772]: I1122 10:40:36.434717 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2n6h5" event={"ID":"7ce04b81-efe4-4be7-b020-6ea273596c53","Type":"ContainerStarted","Data":"efb483680d1d45c7927b224f4683edefcd1f580da51348159c89e4fb1a88e6e8"} Nov 22 10:40:36 crc kubenswrapper[4772]: I1122 10:40:36.437304 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5lcxp" event={"ID":"1cd04192-f2a8-4fbd-972a-c74fc6291b66","Type":"ContainerStarted","Data":"b59b95b7792830786b1d693c7b0418043bd9ae219d3eade9bf928d5efc467339"} Nov 22 10:40:36 crc kubenswrapper[4772]: I1122 10:40:36.438626 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-wc24s" event={"ID":"8787d05b-35f8-4862-9b4f-53e18d3b56ef","Type":"ContainerStarted","Data":"b47bb3626b74ba73ea7fe53ab44136c0851be0c2c66fe6df096a3b3f0402a817"} Nov 22 10:40:36 crc kubenswrapper[4772]: I1122 10:40:36.446316 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b4j6f" event={"ID":"76de3e09-61e2-4240-aadb-86c8eaa622f3","Type":"ContainerStarted","Data":"4469de86b4840e8fd297dddb8bcd19ef4e921d49701e532b52a2d1d586549c90"} Nov 22 10:40:36 crc kubenswrapper[4772]: I1122 10:40:36.448160 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-qcjgh" event={"ID":"b3099692-2c12-4f17-b161-1c17e9f13aed","Type":"ContainerStarted","Data":"a658d2486af5e437a6de606ac0f6e5798f06a7fca4a786ad47055ca916ff89c0"} Nov 22 10:40:36 crc kubenswrapper[4772]: I1122 10:40:36.450970 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-t76z9" event={"ID":"b94cb323-98c8-4c8c-ac25-48a70275b4ed","Type":"ContainerStarted","Data":"18643054991c778f052e14f72b7b1117ac8b99e23a970c8961046912c2eaf982"} Nov 22 10:40:36 crc kubenswrapper[4772]: I1122 10:40:36.453774 4772 generic.go:334] "Generic (PLEG): container finished" podID="ccd507b9-7746-46ee-8bd4-1134cc290f67" containerID="5a2e7bd281a7160f690d4865f29fc85f763b0708eb569c35ad3ae386f971d81d" exitCode=0 Nov 22 10:40:36 crc kubenswrapper[4772]: I1122 10:40:36.453914 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5wvfr" 
event={"ID":"ccd507b9-7746-46ee-8bd4-1134cc290f67","Type":"ContainerDied","Data":"5a2e7bd281a7160f690d4865f29fc85f763b0708eb569c35ad3ae386f971d81d"} Nov 22 10:40:36 crc kubenswrapper[4772]: I1122 10:40:36.457730 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bgkbs" event={"ID":"bd547fda-024b-4cad-bbfd-f82f5fbd5859","Type":"ContainerStarted","Data":"961af1ac3571edb3c1c784b14676f05c340f9b21860584d22e177890e76e5a5a"} Nov 22 10:40:36 crc kubenswrapper[4772]: I1122 10:40:36.463477 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-v2gm9" event={"ID":"a8554d37-40ae-41ef-bed9-7c79b3f8083e","Type":"ContainerStarted","Data":"c1d0a8c902a7d79792d810a53613271354db3576d9931827d0551883f49d6138"} Nov 22 10:40:36 crc kubenswrapper[4772]: I1122 10:40:36.463731 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-v2gm9" Nov 22 10:40:36 crc kubenswrapper[4772]: I1122 10:40:36.466013 4772 patch_prober.go:28] interesting pod/downloads-7954f5f757-v2gm9 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Nov 22 10:40:36 crc kubenswrapper[4772]: I1122 10:40:36.466098 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-v2gm9" podUID="a8554d37-40ae-41ef-bed9-7c79b3f8083e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Nov 22 10:40:36 crc kubenswrapper[4772]: I1122 10:40:36.467602 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m9s4q" event={"ID":"0e1caf27-b2b5-4cdf-b500-38d461b637c2","Type":"ContainerStarted","Data":"78bd66a5898dc2a20e74af0b58a663c13d2a7a6c4619b091f1d72adca9961bed"} Nov 22 10:40:36 crc kubenswrapper[4772]: I1122 10:40:36.473711 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v9s84" event={"ID":"1411a454-22f4-4eef-828a-6a46c81c6c7e","Type":"ContainerStarted","Data":"1a501742f6afa4c0c3032ccc1daca53bd4edeb0d67be6757ccde8217afafe393"} Nov 22 10:40:36 crc kubenswrapper[4772]: I1122 10:40:36.475001 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nnvcq" event={"ID":"9cf9767c-0ec6-4db4-8ce8-b159703e0173","Type":"ContainerStarted","Data":"e2ff5ab07710bfa77c5fc51b36cbe26d18da75f518a27a45e49215f39d1abdf3"} Nov 22 10:40:36 crc kubenswrapper[4772]: I1122 10:40:36.492530 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:36 crc kubenswrapper[4772]: E1122 10:40:36.492709 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-22 10:40:36.992681586 +0000 UTC m=+157.232126080 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:36 crc kubenswrapper[4772]: I1122 10:40:36.492946 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:36 crc kubenswrapper[4772]: E1122 10:40:36.493351 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:36.993336453 +0000 UTC m=+157.232780947 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:36 crc kubenswrapper[4772]: I1122 10:40:36.544142 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-sgkwc" event={"ID":"8c2915a8-d452-4234-94a7-f1ec68c95e4a","Type":"ContainerStarted","Data":"37da7d004c90d7ba18407767d276e7502b41bcd9b1baef13463472fc4e50d5dc"} Nov 22 10:40:36 crc kubenswrapper[4772]: I1122 10:40:36.594660 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:36 crc kubenswrapper[4772]: E1122 10:40:36.594823 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:37.094793047 +0000 UTC m=+157.334237541 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:36 crc kubenswrapper[4772]: I1122 10:40:36.595519 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:36 crc kubenswrapper[4772]: E1122 10:40:36.595872 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:37.095864275 +0000 UTC m=+157.335308769 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:36 crc kubenswrapper[4772]: I1122 10:40:36.647579 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-5zmps" podStartSLOduration=126.647547002 podStartE2EDuration="2m6.647547002s" podCreationTimestamp="2025-11-22 10:38:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:40:36.646373572 +0000 UTC m=+156.885818086" watchObservedRunningTime="2025-11-22 10:40:36.647547002 +0000 UTC m=+156.886991496" Nov 22 10:40:36 crc kubenswrapper[4772]: I1122 10:40:36.697120 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:36 crc kubenswrapper[4772]: E1122 10:40:36.698156 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:37.19812821 +0000 UTC m=+157.437572704 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:36 crc kubenswrapper[4772]: I1122 10:40:36.799894 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:36 crc kubenswrapper[4772]: E1122 10:40:36.800329 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:37.300312653 +0000 UTC m=+157.539757147 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:36 crc kubenswrapper[4772]: I1122 10:40:36.901956 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:36 crc kubenswrapper[4772]: E1122 10:40:36.904220 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:37.404197471 +0000 UTC m=+157.643641965 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:36 crc kubenswrapper[4772]: I1122 10:40:36.905847 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:36 crc kubenswrapper[4772]: E1122 10:40:36.906990 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:37.406980994 +0000 UTC m=+157.646425488 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:37 crc kubenswrapper[4772]: I1122 10:40:37.008385 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:37 crc kubenswrapper[4772]: E1122 10:40:37.009516 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:37.509494016 +0000 UTC m=+157.748938510 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:37 crc kubenswrapper[4772]: I1122 10:40:37.112216 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:37 crc kubenswrapper[4772]: E1122 10:40:37.112695 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:37.612675925 +0000 UTC m=+157.852120409 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:37 crc kubenswrapper[4772]: I1122 10:40:37.161617 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-vp9gx"] Nov 22 10:40:37 crc kubenswrapper[4772]: I1122 10:40:37.164669 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-9vsnk"] Nov 22 10:40:37 crc kubenswrapper[4772]: I1122 10:40:37.180720 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396790-lvhxw"] Nov 22 10:40:37 crc kubenswrapper[4772]: I1122 10:40:37.182476 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-jzjnj"] Nov 22 10:40:37 crc kubenswrapper[4772]: I1122 10:40:37.204128 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-v2gm9" podStartSLOduration=127.204093735 podStartE2EDuration="2m7.204093735s" podCreationTimestamp="2025-11-22 10:38:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:40:37.202716919 +0000 UTC m=+157.442161433" watchObservedRunningTime="2025-11-22 10:40:37.204093735 +0000 UTC m=+157.443538229" Nov 22 10:40:37 crc kubenswrapper[4772]: I1122 10:40:37.217574 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:37 crc kubenswrapper[4772]: E1122 10:40:37.217720 4772 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:37.717690112 +0000 UTC m=+157.957134596 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:37 crc kubenswrapper[4772]: I1122 10:40:37.218268 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:37 crc kubenswrapper[4772]: E1122 10:40:37.218856 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:37.718840472 +0000 UTC m=+157.958284966 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:37 crc kubenswrapper[4772]: I1122 10:40:37.287705 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-ld9hg" podStartSLOduration=127.28767858 podStartE2EDuration="2m7.28767858s" podCreationTimestamp="2025-11-22 10:38:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:40:37.244203408 +0000 UTC m=+157.483647922" watchObservedRunningTime="2025-11-22 10:40:37.28767858 +0000 UTC m=+157.527123074" Nov 22 10:40:37 crc kubenswrapper[4772]: I1122 10:40:37.319460 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:37 crc kubenswrapper[4772]: E1122 10:40:37.319968 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:37.819708821 +0000 UTC m=+158.059153315 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:37 crc kubenswrapper[4772]: I1122 10:40:37.320031 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:37 crc kubenswrapper[4772]: E1122 10:40:37.320736 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:37.820727498 +0000 UTC m=+158.060171992 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:37 crc kubenswrapper[4772]: I1122 10:40:37.329554 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5sxdt"] Nov 22 10:40:37 crc kubenswrapper[4772]: I1122 10:40:37.331697 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-lk87v"] Nov 22 10:40:37 crc kubenswrapper[4772]: I1122 10:40:37.396798 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hn48r"] Nov 22 10:40:37 crc kubenswrapper[4772]: I1122 10:40:37.423440 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:37 crc kubenswrapper[4772]: E1122 10:40:37.423645 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:37.923604199 +0000 UTC m=+158.163048693 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:37 crc kubenswrapper[4772]: I1122 10:40:37.423780 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:40:37 crc kubenswrapper[4772]: I1122 10:40:37.423838 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:40:37 crc kubenswrapper[4772]: I1122 10:40:37.423872 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 10:40:37 crc kubenswrapper[4772]: I1122 10:40:37.423923 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:37 crc kubenswrapper[4772]: I1122 10:40:37.423978 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 10:40:37 crc kubenswrapper[4772]: E1122 10:40:37.425406 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:37.925386526 +0000 UTC m=+158.164831020 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:37 crc kubenswrapper[4772]: W1122 10:40:37.427534 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad185e3d_1a10_4e09_9c56_41a4ae9a435c.slice/crio-245d19e89d3beec5157d4eb757602d6ba83b13c5fed1c113518d5dd834dd4dc8 WatchSource:0}: Error finding container 245d19e89d3beec5157d4eb757602d6ba83b13c5fed1c113518d5dd834dd4dc8: Status 404 returned error can't find the container with id 245d19e89d3beec5157d4eb757602d6ba83b13c5fed1c113518d5dd834dd4dc8 Nov 22 10:40:37 crc kubenswrapper[4772]: I1122 10:40:37.429744 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-vhvzn"] Nov 22 10:40:37 crc kubenswrapper[4772]: I1122 10:40:37.433892 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dfw97"] Nov 22 10:40:37 crc kubenswrapper[4772]: I1122 10:40:37.433975 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 10:40:37 crc kubenswrapper[4772]: I1122 10:40:37.434159 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:40:37 crc kubenswrapper[4772]: I1122 10:40:37.434315 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 10:40:37 crc kubenswrapper[4772]: W1122 10:40:37.436855 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90789491_84d5_4454_ba6c_9b55634b5c74.slice/crio-a4b4eb083e0b87c6c1a912592a2bb849d18efe9ce430f3fdc0a6681f331feba6 WatchSource:0}: Error finding container a4b4eb083e0b87c6c1a912592a2bb849d18efe9ce430f3fdc0a6681f331feba6: Status 404 returned error can't find the container with id a4b4eb083e0b87c6c1a912592a2bb849d18efe9ce430f3fdc0a6681f331feba6 Nov 22 10:40:37 crc kubenswrapper[4772]: I1122 10:40:37.446104 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:40:37 crc kubenswrapper[4772]: I1122 10:40:37.486137 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rmbgm"] Nov 22 10:40:37 crc kubenswrapper[4772]: I1122 10:40:37.491941 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g7h5s"] Nov 22 10:40:37 crc kubenswrapper[4772]: I1122 10:40:37.494566 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8z66s"] Nov 22 10:40:37 crc kubenswrapper[4772]: I1122 10:40:37.531614 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:37 crc kubenswrapper[4772]: E1122 10:40:37.531941 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:38.031922553 +0000 UTC m=+158.271367037 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:37 crc kubenswrapper[4772]: I1122 10:40:37.555916 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-km77q" event={"ID":"d5de78fd-e1a1-4ec1-9767-336aebc1d19e","Type":"ContainerStarted","Data":"31644aac2d8b634f565c60e89f7613329216baa6b5d4349b3b9323d3ab009ac4"} Nov 22 10:40:37 crc kubenswrapper[4772]: I1122 10:40:37.557514 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dfw97" event={"ID":"f66fcaaf-2b15-40db-9e22-3a0d098d56f2","Type":"ContainerStarted","Data":"071cd341aa12c66c77201f504412a4d84b98e7ca5584b7c38058f739b94fc2b4"} Nov 22 10:40:37 crc kubenswrapper[4772]: I1122 10:40:37.558502 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-9vsnk" event={"ID":"ad185e3d-1a10-4e09-9c56-41a4ae9a435c","Type":"ContainerStarted","Data":"245d19e89d3beec5157d4eb757602d6ba83b13c5fed1c113518d5dd834dd4dc8"} Nov 22 10:40:37 crc kubenswrapper[4772]: I1122 10:40:37.559129 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5sxdt" event={"ID":"a81453df-6689-461b-8b0f-389b50452f08","Type":"ContainerStarted","Data":"8e8a15e5c4b6b6217c965aa60dc7d5bfdc64b018347a213178667440ed76f361"} Nov 22 10:40:37 crc kubenswrapper[4772]: I1122 10:40:37.559728 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396790-lvhxw" 
event={"ID":"df5917cd-29a5-4e07-b030-24456d6b0da6","Type":"ContainerStarted","Data":"7cb39224eb20363b148a288c781ab9b763227362fa61969c836e70eb6bf0e4c1"} Nov 22 10:40:37 crc kubenswrapper[4772]: I1122 10:40:37.560258 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hn48r" event={"ID":"f0d00201-7f70-493c-a471-e319513076b3","Type":"ContainerStarted","Data":"4b70ed975638313aa65492d59c596339072677cb3fa281fad97f96ccbbca60f1"} Nov 22 10:40:37 crc kubenswrapper[4772]: I1122 10:40:37.560761 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-jzjnj" event={"ID":"17ee7fc7-887d-4bf4-b408-d1a723605bdc","Type":"ContainerStarted","Data":"98da8e53a8c53c6e4e8b94ec35b8fcf3b6913065f55b2b2f75e8db3c68d86aa0"} Nov 22 10:40:37 crc kubenswrapper[4772]: I1122 10:40:37.561279 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lk87v" event={"ID":"ca10904e-f1cc-40f0-ba83-f4711606e7f3","Type":"ContainerStarted","Data":"cb238ff457ea9f195a7ca3533aa9d40a110d7aa5e4c86bd4109e5776345d2773"} Nov 22 10:40:37 crc kubenswrapper[4772]: I1122 10:40:37.561834 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-2g4sz" event={"ID":"b706176c-178e-48c8-94eb-faa069d602cb","Type":"ContainerStarted","Data":"8bd450bd84487e07629400b49856b2f4c6eb3eed0052f2cb6e5d67a2b476987c"} Nov 22 10:40:37 crc kubenswrapper[4772]: I1122 10:40:37.562382 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-vhvzn" event={"ID":"30a3f834-9426-4bd0-908e-4974c60576ff","Type":"ContainerStarted","Data":"dacfa3cf13f9b126bc8dc9b478a2853c95807116253d3fc4975f3ebdeca6760b"} Nov 22 10:40:37 crc kubenswrapper[4772]: I1122 10:40:37.565078 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vp9gx" event={"ID":"90789491-84d5-4454-ba6c-9b55634b5c74","Type":"ContainerStarted","Data":"a4b4eb083e0b87c6c1a912592a2bb849d18efe9ce430f3fdc0a6681f331feba6"} Nov 22 10:40:38 crc kubenswrapper[4772]: I1122 10:40:38.201607 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 10:40:38 crc kubenswrapper[4772]: I1122 10:40:38.203849 4772 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-xmqs2 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body= Nov 22 10:40:38 crc kubenswrapper[4772]: I1122 10:40:38.203954 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xmqs2" podUID="7e6495dc-3c26-45e6-af62-a4957488ae51" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" Nov 22 10:40:38 crc kubenswrapper[4772]: I1122 10:40:38.205324 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 10:40:38 crc kubenswrapper[4772]: I1122 10:40:38.207694 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:38 crc kubenswrapper[4772]: I1122 10:40:38.208639 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xmqs2" Nov 22 10:40:38 crc kubenswrapper[4772]: I1122 10:40:38.221189 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 10:40:38 crc kubenswrapper[4772]: I1122 10:40:38.221364 4772 patch_prober.go:28] interesting pod/downloads-7954f5f757-v2gm9 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Nov 22 10:40:38 crc kubenswrapper[4772]: I1122 10:40:38.221466 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-v2gm9" podUID="a8554d37-40ae-41ef-bed9-7c79b3f8083e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Nov 22 10:40:38 crc kubenswrapper[4772]: E1122 10:40:38.221512 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:39.221478618 +0000 UTC m=+159.460923112 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:38 crc kubenswrapper[4772]: I1122 10:40:38.221672 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:38 crc kubenswrapper[4772]: E1122 10:40:38.223777 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:38.722755691 +0000 UTC m=+158.962200175 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:38 crc kubenswrapper[4772]: I1122 10:40:38.235087 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lkn87"] Nov 22 10:40:38 crc kubenswrapper[4772]: I1122 10:40:38.238364 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xmqs2" podStartSLOduration=127.23832834 podStartE2EDuration="2m7.23832834s" podCreationTimestamp="2025-11-22 10:38:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:40:38.232675922 +0000 UTC m=+158.472120436" watchObservedRunningTime="2025-11-22 10:40:38.23832834 +0000 UTC m=+158.477772834" Nov 22 10:40:38 crc kubenswrapper[4772]: I1122 10:40:38.257826 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2n6h5" podStartSLOduration=128.257789771 podStartE2EDuration="2m8.257789771s" podCreationTimestamp="2025-11-22 10:38:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:40:38.250530651 +0000 UTC m=+158.489975165" watchObservedRunningTime="2025-11-22 10:40:38.257789771 +0000 UTC m=+158.497234265" Nov 22 10:40:38 crc kubenswrapper[4772]: W1122 10:40:38.322870 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1c56d4b_3431_4052_b883_6019a008c9aa.slice/crio-f73abb19131fdc59e152e66a6e154cda75afff346dd37540e7ea1a36fd53b1c9 WatchSource:0}: Error finding container f73abb19131fdc59e152e66a6e154cda75afff346dd37540e7ea1a36fd53b1c9: Status 404 returned error can't find the container with id f73abb19131fdc59e152e66a6e154cda75afff346dd37540e7ea1a36fd53b1c9 Nov 22 10:40:38 crc kubenswrapper[4772]: I1122 10:40:38.323162 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:38 crc kubenswrapper[4772]: E1122 10:40:38.323515 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:38.823469966 +0000 UTC m=+159.062914460 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:38 crc kubenswrapper[4772]: I1122 10:40:38.324139 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:38 crc kubenswrapper[4772]: E1122 10:40:38.327797 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:38.827782989 +0000 UTC m=+159.067227483 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:38 crc kubenswrapper[4772]: I1122 10:40:38.427327 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:38 crc kubenswrapper[4772]: E1122 10:40:38.427839 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:38.927815546 +0000 UTC m=+159.167260040 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:38 crc kubenswrapper[4772]: I1122 10:40:38.529488 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:38 crc kubenswrapper[4772]: E1122 10:40:38.529879 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:39.029859294 +0000 UTC m=+159.269303788 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:38 crc kubenswrapper[4772]: I1122 10:40:38.569173 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-vvs55" event={"ID":"bea9575a-d7c4-4aaa-bc01-eaee90317eea","Type":"ContainerStarted","Data":"bf10b39a5b22424399b61790a81361c60871bb9cd0817a4652a8eac6731b48ca"} Nov 22 10:40:38 crc kubenswrapper[4772]: I1122 10:40:38.569893 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rmbgm" event={"ID":"c1c56d4b-3431-4052-b883-6019a008c9aa","Type":"ContainerStarted","Data":"f73abb19131fdc59e152e66a6e154cda75afff346dd37540e7ea1a36fd53b1c9"} Nov 22 10:40:38 crc kubenswrapper[4772]: I1122 10:40:38.570972 4772 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-xmqs2 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body= Nov 22 10:40:38 crc kubenswrapper[4772]: I1122 10:40:38.571019 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xmqs2" podUID="7e6495dc-3c26-45e6-af62-a4957488ae51" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" Nov 22 10:40:38 crc kubenswrapper[4772]: I1122 10:40:38.630838 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:38 crc kubenswrapper[4772]: E1122 10:40:38.631633 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:39.131604045 +0000 UTC m=+159.371048539 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:38 crc kubenswrapper[4772]: W1122 10:40:38.693632 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda26a8d34_6b49_4019_b262_7f8e6fddc433.slice/crio-edbdb6c60669bc79817b45fab3532113a0c9b199345d1764b938d550cfd6beb3 WatchSource:0}: Error finding container edbdb6c60669bc79817b45fab3532113a0c9b199345d1764b938d550cfd6beb3: Status 404 returned error can't find the container with id edbdb6c60669bc79817b45fab3532113a0c9b199345d1764b938d550cfd6beb3 Nov 22 10:40:38 crc kubenswrapper[4772]: W1122 10:40:38.708702 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1d4b5fd_7de5_410f_b913_6fbb8e1c09b4.slice/crio-2f23a539d93d656adb1a67b8fae22cae58f0f8301b8582eebe33a9bcc5426e0c WatchSource:0}: Error finding container 2f23a539d93d656adb1a67b8fae22cae58f0f8301b8582eebe33a9bcc5426e0c: Status 404 returned error can't find the container with id 2f23a539d93d656adb1a67b8fae22cae58f0f8301b8582eebe33a9bcc5426e0c Nov 22 10:40:38 crc kubenswrapper[4772]: W1122 10:40:38.716760 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod74f425d7_1955_455d_88ee_37ffccfb8c8c.slice/crio-b137241a7b6251f31a5939732804f35548fea05f27e5d54d8645d2f088f5342a WatchSource:0}: Error finding container b137241a7b6251f31a5939732804f35548fea05f27e5d54d8645d2f088f5342a: Status 404 returned error can't find the container with id b137241a7b6251f31a5939732804f35548fea05f27e5d54d8645d2f088f5342a Nov 22 10:40:38 crc kubenswrapper[4772]: I1122 10:40:38.733341 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:38 crc kubenswrapper[4772]: E1122 10:40:38.733954 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:39.233922132 +0000 UTC m=+159.473366826 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:38 crc kubenswrapper[4772]: I1122 10:40:38.834465 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:38 crc kubenswrapper[4772]: E1122 10:40:38.834765 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:39.334727599 +0000 UTC m=+159.574172093 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:38 crc kubenswrapper[4772]: I1122 10:40:38.835194 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:38 crc kubenswrapper[4772]: E1122 10:40:38.835644 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:39.335629822 +0000 UTC m=+159.575074316 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:38 crc kubenswrapper[4772]: I1122 10:40:38.936560 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:38 crc kubenswrapper[4772]: E1122 10:40:38.936850 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:39.436804799 +0000 UTC m=+159.676249303 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:38 crc kubenswrapper[4772]: I1122 10:40:38.936981 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:38 crc kubenswrapper[4772]: E1122 10:40:38.937500 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:39.437473126 +0000 UTC m=+159.676917630 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:39 crc kubenswrapper[4772]: I1122 10:40:39.040020 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:39 crc kubenswrapper[4772]: E1122 10:40:39.040285 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:39.540242345 +0000 UTC m=+159.779686839 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:39 crc kubenswrapper[4772]: I1122 10:40:39.141737 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:39 crc kubenswrapper[4772]: E1122 10:40:39.142202 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:39.642182611 +0000 UTC m=+159.881627105 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:39 crc kubenswrapper[4772]: I1122 10:40:39.243295 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:39 crc kubenswrapper[4772]: E1122 10:40:39.243549 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:39.743512282 +0000 UTC m=+159.982956776 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:39 crc kubenswrapper[4772]: I1122 10:40:39.243991 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:39 crc kubenswrapper[4772]: E1122 10:40:39.244628 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:39.74459702 +0000 UTC m=+159.984041684 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:39 crc kubenswrapper[4772]: I1122 10:40:39.345108 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:39 crc kubenswrapper[4772]: E1122 10:40:39.345364 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:39.845321625 +0000 UTC m=+160.084766119 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:39 crc kubenswrapper[4772]: I1122 10:40:39.346116 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:39 crc kubenswrapper[4772]: E1122 10:40:39.346569 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:39.846560387 +0000 UTC m=+160.086004881 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:39 crc kubenswrapper[4772]: W1122 10:40:39.363608 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-b37fae484f45cbe86bf396c6ca00550d81b1b6571aca63468b6454673031c43b WatchSource:0}: Error finding container b37fae484f45cbe86bf396c6ca00550d81b1b6571aca63468b6454673031c43b: Status 404 returned error can't find the container with id b37fae484f45cbe86bf396c6ca00550d81b1b6571aca63468b6454673031c43b Nov 22 10:40:39 crc kubenswrapper[4772]: I1122 10:40:39.447577 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:39 crc kubenswrapper[4772]: E1122 10:40:39.448079 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:39.948024361 +0000 UTC m=+160.187468995 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:39 crc kubenswrapper[4772]: I1122 10:40:39.551264 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:39 crc kubenswrapper[4772]: E1122 10:40:39.551779 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:40.051758055 +0000 UTC m=+160.291202559 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:39 crc kubenswrapper[4772]: I1122 10:40:39.596477 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lkn87" event={"ID":"74f425d7-1955-455d-88ee-37ffccfb8c8c","Type":"ContainerStarted","Data":"b137241a7b6251f31a5939732804f35548fea05f27e5d54d8645d2f088f5342a"} Nov 22 10:40:39 crc kubenswrapper[4772]: I1122 10:40:39.600373 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-ld7g2" event={"ID":"ed063d5e-19cb-42cb-89fb-21b3b751f53e","Type":"ContainerStarted","Data":"afee24d77e961975b1b17ba817995cb880e9880a0c39c720fcaae79e14479d78"} Nov 22 10:40:39 crc kubenswrapper[4772]: I1122 10:40:39.601922 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"b37fae484f45cbe86bf396c6ca00550d81b1b6571aca63468b6454673031c43b"} Nov 22 10:40:39 crc kubenswrapper[4772]: I1122 10:40:39.603532 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-8z66s" event={"ID":"d1d4b5fd-7de5-410f-b913-6fbb8e1c09b4","Type":"ContainerStarted","Data":"2f23a539d93d656adb1a67b8fae22cae58f0f8301b8582eebe33a9bcc5426e0c"} Nov 22 10:40:39 crc kubenswrapper[4772]: I1122 10:40:39.605314 4772 generic.go:334] "Generic (PLEG): container finished" podID="e0f99cb1-427f-4992-8c9b-15c285f13189" containerID="a393a9f329b608d3c5283a31afdfb84abd0521d79460e8dbd8454613ed564380" exitCode=0 Nov 22 10:40:39 crc kubenswrapper[4772]: I1122 10:40:39.605380 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-56sqr" event={"ID":"e0f99cb1-427f-4992-8c9b-15c285f13189","Type":"ContainerDied","Data":"a393a9f329b608d3c5283a31afdfb84abd0521d79460e8dbd8454613ed564380"} Nov 22 10:40:39 crc kubenswrapper[4772]: I1122 10:40:39.607176 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b5ph7" event={"ID":"7578c60c-84c2-4dd5-a6c5-576606438ede","Type":"ContainerStarted","Data":"c1c1ab4861c4f140d3f36a286be2ead843467395107e0c4a424dd622f068654e"} Nov 22 10:40:39 crc kubenswrapper[4772]: I1122 10:40:39.609566 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-wc24s" event={"ID":"8787d05b-35f8-4862-9b4f-53e18d3b56ef","Type":"ContainerStarted","Data":"b5414f4705780db9228c518b957e2d1c90f7f025da22748cda7fc2b6540bf26a"} Nov 22 10:40:39 crc kubenswrapper[4772]: I1122 10:40:39.610725 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g7h5s" event={"ID":"a26a8d34-6b49-4019-b262-7f8e6fddc433","Type":"ContainerStarted","Data":"edbdb6c60669bc79817b45fab3532113a0c9b199345d1764b938d550cfd6beb3"} Nov 22 10:40:39 crc kubenswrapper[4772]: I1122 10:40:39.652679 4772 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:39 crc kubenswrapper[4772]: E1122 10:40:39.653171 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:40.153116626 +0000 UTC m=+160.392561130 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:39 crc kubenswrapper[4772]: I1122 10:40:39.755231 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:39 crc kubenswrapper[4772]: E1122 10:40:39.755766 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:40.255739461 +0000 UTC m=+160.495183955 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:39 crc kubenswrapper[4772]: I1122 10:40:39.856891 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:39 crc kubenswrapper[4772]: E1122 10:40:39.857232 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:40.357150803 +0000 UTC m=+160.596595317 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:39 crc kubenswrapper[4772]: I1122 10:40:39.857430 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:39 crc kubenswrapper[4772]: E1122 10:40:39.857855 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:40.357841682 +0000 UTC m=+160.597286166 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:39 crc kubenswrapper[4772]: W1122 10:40:39.881466 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-abdcd6d64d0695ae127aab940d744af5a5ad03398ec565d87862586412304101 WatchSource:0}: Error finding container abdcd6d64d0695ae127aab940d744af5a5ad03398ec565d87862586412304101: Status 404 returned error can't find the container with id abdcd6d64d0695ae127aab940d744af5a5ad03398ec565d87862586412304101 Nov 22 10:40:39 crc kubenswrapper[4772]: W1122 10:40:39.884136 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-728890f130265e20a6cdb00e73e5512522e937a372b3a0558d09ebe0bf40268a WatchSource:0}: Error finding container 728890f130265e20a6cdb00e73e5512522e937a372b3a0558d09ebe0bf40268a: Status 404 returned error can't find the container with id 728890f130265e20a6cdb00e73e5512522e937a372b3a0558d09ebe0bf40268a Nov 22 10:40:39 crc kubenswrapper[4772]: I1122 10:40:39.959470 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:39 crc kubenswrapper[4772]: E1122 10:40:39.959721 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-22 10:40:40.459685486 +0000 UTC m=+160.699129980 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:39 crc kubenswrapper[4772]: I1122 10:40:39.959903 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:39 crc kubenswrapper[4772]: E1122 10:40:39.960354 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:40.460345723 +0000 UTC m=+160.699790217 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:40 crc kubenswrapper[4772]: I1122 10:40:40.061342 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:40 crc kubenswrapper[4772]: E1122 10:40:40.061610 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:40.561570191 +0000 UTC m=+160.801014695 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:40 crc kubenswrapper[4772]: I1122 10:40:40.061848 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:40 crc kubenswrapper[4772]: E1122 10:40:40.062460 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:40.562447204 +0000 UTC m=+160.801891708 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:40 crc kubenswrapper[4772]: I1122 10:40:40.163073 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:40 crc kubenswrapper[4772]: E1122 10:40:40.163386 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:40.663339713 +0000 UTC m=+160.902784257 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:40 crc kubenswrapper[4772]: I1122 10:40:40.264937 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:40 crc kubenswrapper[4772]: E1122 10:40:40.265332 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:40.76531455 +0000 UTC m=+161.004759044 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:40 crc kubenswrapper[4772]: I1122 10:40:40.365959 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:40 crc kubenswrapper[4772]: E1122 10:40:40.366317 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:40.866260651 +0000 UTC m=+161.105705195 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:40 crc kubenswrapper[4772]: I1122 10:40:40.367217 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:40 crc kubenswrapper[4772]: E1122 10:40:40.367825 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:40.867700348 +0000 UTC m=+161.107144842 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:40 crc kubenswrapper[4772]: I1122 10:40:40.468565 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:40 crc kubenswrapper[4772]: E1122 10:40:40.468930 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:40.968899216 +0000 UTC m=+161.208343710 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:40 crc kubenswrapper[4772]: I1122 10:40:40.469107 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:40 crc kubenswrapper[4772]: E1122 10:40:40.469473 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:40.96946063 +0000 UTC m=+161.208905124 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:40 crc kubenswrapper[4772]: I1122 10:40:40.570839 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:40 crc kubenswrapper[4772]: E1122 10:40:40.571479 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:41.071403187 +0000 UTC m=+161.310847681 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:40 crc kubenswrapper[4772]: I1122 10:40:40.571682 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:40 crc kubenswrapper[4772]: E1122 10:40:40.572295 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:41.07227448 +0000 UTC m=+161.311718974 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:40 crc kubenswrapper[4772]: I1122 10:40:40.617806 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-t8jgq" event={"ID":"9d79dc27-4bfc-4318-a8ed-11ceb9cb81b2","Type":"ContainerStarted","Data":"2342cacc3176a3ee95f367cbd0fa637a1e8c671c514e24ad0eab3ac8c919bd24"} Nov 22 10:40:40 crc kubenswrapper[4772]: I1122 10:40:40.619692 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-7gdmn" event={"ID":"f529668b-54db-49e7-92cb-c3cf6b986dce","Type":"ContainerStarted","Data":"a5aa90fe15fffa4257251c0fe68b579bea6ef7142980e92269afb5f6770f11c5"} Nov 22 10:40:40 crc kubenswrapper[4772]: I1122 10:40:40.621249 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nnvcq" event={"ID":"9cf9767c-0ec6-4db4-8ce8-b159703e0173","Type":"ContainerStarted","Data":"8524e6e1e5a08fd488e044eceb94d41cfc0a2cc2bf142fa5b96fdc22eb08a173"} Nov 22 10:40:40 crc kubenswrapper[4772]: I1122 10:40:40.622426 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5znz6" event={"ID":"e5a02044-ee67-480e-9cc9-22cf07bc9388","Type":"ContainerStarted","Data":"0af7e8d03ab4b5e8370014f8c0f97879cc302fb577069c84dad93fbaaf9f9312"} Nov 22 10:40:40 crc kubenswrapper[4772]: I1122 10:40:40.623264 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"728890f130265e20a6cdb00e73e5512522e937a372b3a0558d09ebe0bf40268a"} Nov 22 10:40:40 crc kubenswrapper[4772]: I1122 10:40:40.624073 4772 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"abdcd6d64d0695ae127aab940d744af5a5ad03398ec565d87862586412304101"} Nov 22 10:40:40 crc kubenswrapper[4772]: I1122 10:40:40.673160 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:40 crc kubenswrapper[4772]: E1122 10:40:40.673382 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:41.173344314 +0000 UTC m=+161.412788808 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:40 crc kubenswrapper[4772]: I1122 10:40:40.673918 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:40 crc kubenswrapper[4772]: E1122 10:40:40.674387 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:41.174379031 +0000 UTC m=+161.413823515 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:40 crc kubenswrapper[4772]: I1122 10:40:40.775199 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:40 crc kubenswrapper[4772]: E1122 10:40:40.775351 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:41.275328971 +0000 UTC m=+161.514773465 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:40 crc kubenswrapper[4772]: I1122 10:40:40.775381 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:40 crc kubenswrapper[4772]: E1122 10:40:40.775896 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:41.275886386 +0000 UTC m=+161.515330870 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:40 crc kubenswrapper[4772]: I1122 10:40:40.876124 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:40 crc kubenswrapper[4772]: E1122 10:40:40.876552 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:41.376523198 +0000 UTC m=+161.615967702 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:40 crc kubenswrapper[4772]: I1122 10:40:40.978521 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:40 crc kubenswrapper[4772]: E1122 10:40:40.979086 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:41.479065211 +0000 UTC m=+161.718509705 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:41 crc kubenswrapper[4772]: I1122 10:40:41.081720 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:41 crc kubenswrapper[4772]: E1122 10:40:41.082411 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:41.582381993 +0000 UTC m=+161.821826527 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:41 crc kubenswrapper[4772]: I1122 10:40:41.183774 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:41 crc kubenswrapper[4772]: E1122 10:40:41.184466 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:41.684433073 +0000 UTC m=+161.923877717 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:41 crc kubenswrapper[4772]: I1122 10:40:41.284295 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:41 crc kubenswrapper[4772]: E1122 10:40:41.284532 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:41.78449358 +0000 UTC m=+162.023938074 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:41 crc kubenswrapper[4772]: I1122 10:40:41.284624 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:41 crc kubenswrapper[4772]: E1122 10:40:41.285445 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:41.785415984 +0000 UTC m=+162.024860668 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:41 crc kubenswrapper[4772]: I1122 10:40:41.386002 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:41 crc kubenswrapper[4772]: E1122 10:40:41.386189 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:41.886165659 +0000 UTC m=+162.125610143 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:41 crc kubenswrapper[4772]: I1122 10:40:41.386391 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:41 crc kubenswrapper[4772]: E1122 10:40:41.386770 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:41.886761885 +0000 UTC m=+162.126206379 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:41 crc kubenswrapper[4772]: I1122 10:40:41.487751 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:41 crc kubenswrapper[4772]: E1122 10:40:41.487950 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:41.987917101 +0000 UTC m=+162.227361595 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:41 crc kubenswrapper[4772]: I1122 10:40:41.488271 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:41 crc kubenswrapper[4772]: E1122 10:40:41.488640 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:41.98863132 +0000 UTC m=+162.228075814 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:41 crc kubenswrapper[4772]: I1122 10:40:41.589249 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:41 crc kubenswrapper[4772]: E1122 10:40:41.589420 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:42.089385515 +0000 UTC m=+162.328830009 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:41 crc kubenswrapper[4772]: I1122 10:40:41.589760 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:41 crc kubenswrapper[4772]: E1122 10:40:41.590520 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:42.090456863 +0000 UTC m=+162.329901357 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:41 crc kubenswrapper[4772]: I1122 10:40:41.631217 4772 generic.go:334] "Generic (PLEG): container finished" podID="e5a02044-ee67-480e-9cc9-22cf07bc9388" containerID="0af7e8d03ab4b5e8370014f8c0f97879cc302fb577069c84dad93fbaaf9f9312" exitCode=0 Nov 22 10:40:41 crc kubenswrapper[4772]: I1122 10:40:41.631314 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5znz6" event={"ID":"e5a02044-ee67-480e-9cc9-22cf07bc9388","Type":"ContainerDied","Data":"0af7e8d03ab4b5e8370014f8c0f97879cc302fb577069c84dad93fbaaf9f9312"} Nov 22 10:40:41 crc kubenswrapper[4772]: I1122 10:40:41.633544 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wbcgw" event={"ID":"6f4a455f-fdda-46bf-bebf-67f1b83863c8","Type":"ContainerStarted","Data":"f93bb07e6d8aa18ec6e3a144137749d8027086d849b3879bfebd19575316a414"} Nov 22 10:40:41 crc kubenswrapper[4772]: I1122 10:40:41.635030 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-t76z9" event={"ID":"b94cb323-98c8-4c8c-ac25-48a70275b4ed","Type":"ContainerStarted","Data":"34679a442bb6851714370230a36ce7bc5b3835ea15172f13a576ba88f348c9a3"} Nov 22 10:40:41 crc kubenswrapper[4772]: I1122 10:40:41.636449 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5lcxp" event={"ID":"1cd04192-f2a8-4fbd-972a-c74fc6291b66","Type":"ContainerStarted","Data":"52ee07059dc2125cb772b08efb84a769338c378d83373ea9c48ea93e688b34ca"} Nov 22 10:40:41 crc kubenswrapper[4772]: I1122 10:40:41.637871 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m9s4q" 
event={"ID":"0e1caf27-b2b5-4cdf-b500-38d461b637c2","Type":"ContainerStarted","Data":"3c3e9ddf6034fe6caadb1eab66c5d867624228c62e9f6d3d5bbdb14f96196d6f"} Nov 22 10:40:41 crc kubenswrapper[4772]: I1122 10:40:41.640068 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-7wspp" event={"ID":"b1e29324-c165-4f14-8cf4-f8f62522a87e","Type":"ContainerStarted","Data":"7abb7aef9ba7946e7c848ff279a1d30efe24021a6ae6dd2f8eb9a0a6c32f4ff6"} Nov 22 10:40:41 crc kubenswrapper[4772]: I1122 10:40:41.642299 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-sgkwc" event={"ID":"8c2915a8-d452-4234-94a7-f1ec68c95e4a","Type":"ContainerStarted","Data":"c5f21e14bd85e704a333e9068caffa11065a4c6d19eba01eb57cfb7f15b27c94"} Nov 22 10:40:41 crc kubenswrapper[4772]: I1122 10:40:41.691031 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:41 crc kubenswrapper[4772]: E1122 10:40:41.691253 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:42.191226539 +0000 UTC m=+162.430671033 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:41 crc kubenswrapper[4772]: I1122 10:40:41.691872 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:41 crc kubenswrapper[4772]: E1122 10:40:41.692281 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:42.192273807 +0000 UTC m=+162.431718301 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:41 crc kubenswrapper[4772]: I1122 10:40:41.792843 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:41 crc kubenswrapper[4772]: E1122 10:40:41.793169 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:42.293134285 +0000 UTC m=+162.532578779 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:41 crc kubenswrapper[4772]: I1122 10:40:41.793617 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:41 crc kubenswrapper[4772]: E1122 10:40:41.794131 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:42.294120811 +0000 UTC m=+162.533565305 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:41 crc kubenswrapper[4772]: I1122 10:40:41.896136 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:41 crc kubenswrapper[4772]: E1122 10:40:41.897361 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:42.39729724 +0000 UTC m=+162.636741734 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:41 crc kubenswrapper[4772]: I1122 10:40:41.897491 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:41 crc kubenswrapper[4772]: E1122 10:40:41.897833 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:42.397823224 +0000 UTC m=+162.637267718 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:41 crc kubenswrapper[4772]: I1122 10:40:41.998782 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:41 crc kubenswrapper[4772]: E1122 10:40:41.999359 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:42.499339469 +0000 UTC m=+162.738783963 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:42 crc kubenswrapper[4772]: I1122 10:40:42.100208 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:42 crc kubenswrapper[4772]: E1122 10:40:42.100789 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:42.600761731 +0000 UTC m=+162.840206405 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:42 crc kubenswrapper[4772]: I1122 10:40:42.202212 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:42 crc kubenswrapper[4772]: E1122 10:40:42.202407 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:42.702365809 +0000 UTC m=+162.941810303 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:42 crc kubenswrapper[4772]: I1122 10:40:42.202471 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:42 crc kubenswrapper[4772]: E1122 10:40:42.202857 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:42.702840691 +0000 UTC m=+162.942285185 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:42 crc kubenswrapper[4772]: I1122 10:40:42.303873 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:42 crc kubenswrapper[4772]: E1122 10:40:42.304339 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:42.804315726 +0000 UTC m=+163.043760220 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:42 crc kubenswrapper[4772]: I1122 10:40:42.405531 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:42 crc kubenswrapper[4772]: E1122 10:40:42.406039 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:42.906018066 +0000 UTC m=+163.145462550 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:42 crc kubenswrapper[4772]: I1122 10:40:42.507714 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:42 crc kubenswrapper[4772]: E1122 10:40:42.507893 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:43.00786476 +0000 UTC m=+163.247309254 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:42 crc kubenswrapper[4772]: I1122 10:40:42.508152 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:42 crc kubenswrapper[4772]: E1122 10:40:42.508559 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:43.008550068 +0000 UTC m=+163.247994562 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:42 crc kubenswrapper[4772]: I1122 10:40:42.609647 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:42 crc kubenswrapper[4772]: E1122 10:40:42.609834 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:43.109809457 +0000 UTC m=+163.349253951 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:42 crc kubenswrapper[4772]: I1122 10:40:42.609994 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:42 crc kubenswrapper[4772]: E1122 10:40:42.610332 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:43.110324701 +0000 UTC m=+163.349769195 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:42 crc kubenswrapper[4772]: I1122 10:40:42.648239 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bgkbs" event={"ID":"bd547fda-024b-4cad-bbfd-f82f5fbd5859","Type":"ContainerStarted","Data":"45ac09693c881e16f43542fa73a5b1efd7fbfeec3376cdb381c11d700ff35542"} Nov 22 10:40:42 crc kubenswrapper[4772]: I1122 10:40:42.649967 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-9vsnk" event={"ID":"ad185e3d-1a10-4e09-9c56-41a4ae9a435c","Type":"ContainerStarted","Data":"7678a9a724189a43813ee54b49a794c7ca11fda49d28715feac8557c2545a2b5"} Nov 22 10:40:42 crc kubenswrapper[4772]: I1122 10:40:42.651325 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5sxdt" event={"ID":"a81453df-6689-461b-8b0f-389b50452f08","Type":"ContainerStarted","Data":"6cff98d6fa7b093dfd2cc3cc779e502867d9671039654d66f6980866f2d9c156"} Nov 22 10:40:42 crc kubenswrapper[4772]: I1122 10:40:42.652299 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-km77q" event={"ID":"d5de78fd-e1a1-4ec1-9767-336aebc1d19e","Type":"ContainerStarted","Data":"43012f8125fae2f28480d3f4a7651311e6d52e5dd8cf75f84806d69b7eca8031"} Nov 22 10:40:42 crc kubenswrapper[4772]: I1122 10:40:42.653324 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kqhcv" event={"ID":"7b9b1e33-9a6a-4d0f-af54-5589be658a3a","Type":"ContainerStarted","Data":"c010b65bef12211eda87b38bf68d5fac177ff447dd7b39899710ddc58bd1b3f8"} Nov 22 10:40:42 crc kubenswrapper[4772]: I1122 10:40:42.711470 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:42 crc kubenswrapper[4772]: E1122 10:40:42.712332 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:43.211722893 +0000 UTC m=+163.451167387 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:42 crc kubenswrapper[4772]: I1122 10:40:42.713457 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:42 crc kubenswrapper[4772]: E1122 10:40:42.713961 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:43.213930561 +0000 UTC m=+163.453375075 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:42 crc kubenswrapper[4772]: I1122 10:40:42.815127 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:42 crc kubenswrapper[4772]: E1122 10:40:42.815704 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:43.315680962 +0000 UTC m=+163.555125446 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:42 crc kubenswrapper[4772]: I1122 10:40:42.917489 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:42 crc kubenswrapper[4772]: E1122 10:40:42.917941 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:43.417916826 +0000 UTC m=+163.657361320 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:43 crc kubenswrapper[4772]: I1122 10:40:43.018623 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:43 crc kubenswrapper[4772]: E1122 10:40:43.018816 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:43.518782555 +0000 UTC m=+163.758227109 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:43 crc kubenswrapper[4772]: I1122 10:40:43.120723 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:43 crc kubenswrapper[4772]: E1122 10:40:43.121241 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:43.621228185 +0000 UTC m=+163.860672679 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:43 crc kubenswrapper[4772]: I1122 10:40:43.222456 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:43 crc kubenswrapper[4772]: E1122 10:40:43.222660 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:43.722608807 +0000 UTC m=+163.962053301 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:43 crc kubenswrapper[4772]: I1122 10:40:43.223417 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:43 crc kubenswrapper[4772]: E1122 10:40:43.223830 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:43.723819939 +0000 UTC m=+163.963264623 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:43 crc kubenswrapper[4772]: I1122 10:40:43.324755 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:43 crc kubenswrapper[4772]: E1122 10:40:43.325855 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:43.825829397 +0000 UTC m=+164.065273891 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:43 crc kubenswrapper[4772]: I1122 10:40:43.427865 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:43 crc kubenswrapper[4772]: E1122 10:40:43.428329 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:43.928309588 +0000 UTC m=+164.167754082 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:43 crc kubenswrapper[4772]: I1122 10:40:43.528839 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:43 crc kubenswrapper[4772]: E1122 10:40:43.529084 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:44.029031893 +0000 UTC m=+164.268476387 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:43 crc kubenswrapper[4772]: I1122 10:40:43.529457 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:43 crc kubenswrapper[4772]: E1122 10:40:43.529858 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:44.029848394 +0000 UTC m=+164.269292888 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:43 crc kubenswrapper[4772]: I1122 10:40:43.630174 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:43 crc kubenswrapper[4772]: E1122 10:40:43.630369 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:44.130341113 +0000 UTC m=+164.369785607 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:43 crc kubenswrapper[4772]: I1122 10:40:43.630442 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:43 crc kubenswrapper[4772]: E1122 10:40:43.630745 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:44.130732423 +0000 UTC m=+164.370176917 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:43 crc kubenswrapper[4772]: I1122 10:40:43.660485 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lk87v" event={"ID":"ca10904e-f1cc-40f0-ba83-f4711606e7f3","Type":"ContainerStarted","Data":"85c802ce01484c824b35d94024af07fc22a13ffd26200cdf794c0d5e432d2c49"} Nov 22 10:40:43 crc kubenswrapper[4772]: I1122 10:40:43.662681 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-2g4sz" event={"ID":"b706176c-178e-48c8-94eb-faa069d602cb","Type":"ContainerStarted","Data":"78625a3bf90302dae5f85c77f0526cc1d373edb9349ecc1368a9ad8723c00b9d"} Nov 22 10:40:43 crc kubenswrapper[4772]: I1122 10:40:43.664108 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-qcjgh" event={"ID":"b3099692-2c12-4f17-b161-1c17e9f13aed","Type":"ContainerStarted","Data":"51331bd6dc5332cde8a6b8fd2d35e8958bdc4b3c812faf7490146eee84477536"} Nov 22 10:40:43 crc kubenswrapper[4772]: I1122 10:40:43.664436 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-vvs55" Nov 22 10:40:43 crc kubenswrapper[4772]: I1122 10:40:43.666786 4772 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-vvs55 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Nov 22 10:40:43 crc kubenswrapper[4772]: I1122 10:40:43.666837 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-vvs55" podUID="bea9575a-d7c4-4aaa-bc01-eaee90317eea" 
containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Nov 22 10:40:43 crc kubenswrapper[4772]: I1122 10:40:43.689582 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b5ph7" podStartSLOduration=132.689550297 podStartE2EDuration="2m12.689550297s" podCreationTimestamp="2025-11-22 10:38:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:40:43.685968403 +0000 UTC m=+163.925412917" watchObservedRunningTime="2025-11-22 10:40:43.689550297 +0000 UTC m=+163.928994791" Nov 22 10:40:43 crc kubenswrapper[4772]: I1122 10:40:43.727366 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-vvs55" podStartSLOduration=132.7273381 podStartE2EDuration="2m12.7273381s" podCreationTimestamp="2025-11-22 10:38:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:40:43.70757195 +0000 UTC m=+163.947016444" watchObservedRunningTime="2025-11-22 10:40:43.7273381 +0000 UTC m=+163.966782594" Nov 22 10:40:43 crc kubenswrapper[4772]: I1122 10:40:43.728799 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-ld7g2" podStartSLOduration=133.728791888 podStartE2EDuration="2m13.728791888s" podCreationTimestamp="2025-11-22 10:38:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:40:43.724715891 +0000 UTC m=+163.964160405" watchObservedRunningTime="2025-11-22 10:40:43.728791888 +0000 UTC m=+163.968236382" Nov 22 10:40:43 crc kubenswrapper[4772]: I1122 10:40:43.731318 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:43 crc kubenswrapper[4772]: E1122 10:40:43.731569 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:44.231527219 +0000 UTC m=+164.470971713 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:43 crc kubenswrapper[4772]: I1122 10:40:43.734280 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:43 crc kubenswrapper[4772]: E1122 10:40:43.736411 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:44.235036882 +0000 UTC m=+164.474481396 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:43 crc kubenswrapper[4772]: I1122 10:40:43.836421 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:43 crc kubenswrapper[4772]: E1122 10:40:43.836868 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:44.336826294 +0000 UTC m=+164.576270788 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:43 crc kubenswrapper[4772]: I1122 10:40:43.837113 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:43 crc kubenswrapper[4772]: E1122 10:40:43.837629 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:44.337611705 +0000 UTC m=+164.577056199 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:43 crc kubenswrapper[4772]: I1122 10:40:43.938455 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:43 crc kubenswrapper[4772]: E1122 10:40:43.938667 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:44.438630217 +0000 UTC m=+164.678074711 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:43 crc kubenswrapper[4772]: I1122 10:40:43.938821 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:43 crc kubenswrapper[4772]: E1122 10:40:43.939309 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:44.439296685 +0000 UTC m=+164.678741379 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:44 crc kubenswrapper[4772]: I1122 10:40:44.039840 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:44 crc kubenswrapper[4772]: E1122 10:40:44.040276 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:44.540235025 +0000 UTC m=+164.779679519 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:44 crc kubenswrapper[4772]: I1122 10:40:44.040662 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:44 crc kubenswrapper[4772]: E1122 10:40:44.040987 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:44.540973844 +0000 UTC m=+164.780418338 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:44 crc kubenswrapper[4772]: I1122 10:40:44.142816 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:44 crc kubenswrapper[4772]: E1122 10:40:44.143113 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:44.643068115 +0000 UTC m=+164.882512739 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:44 crc kubenswrapper[4772]: I1122 10:40:44.244945 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:44 crc kubenswrapper[4772]: E1122 10:40:44.245406 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:44.745383081 +0000 UTC m=+164.984827575 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:44 crc kubenswrapper[4772]: I1122 10:40:44.347025 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:44 crc kubenswrapper[4772]: E1122 10:40:44.347265 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:44.847227286 +0000 UTC m=+165.086671780 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:44 crc kubenswrapper[4772]: I1122 10:40:44.347572 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:44 crc kubenswrapper[4772]: E1122 10:40:44.347993 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:44.847976715 +0000 UTC m=+165.087421209 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:44 crc kubenswrapper[4772]: I1122 10:40:44.448373 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:44 crc kubenswrapper[4772]: E1122 10:40:44.448625 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:44.948584097 +0000 UTC m=+165.188028611 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:44 crc kubenswrapper[4772]: I1122 10:40:44.449243 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:44 crc kubenswrapper[4772]: E1122 10:40:44.449728 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:44.949714766 +0000 UTC m=+165.189159460 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:44 crc kubenswrapper[4772]: I1122 10:40:44.463294 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-ld9hg" Nov 22 10:40:44 crc kubenswrapper[4772]: I1122 10:40:44.464158 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-ld9hg" Nov 22 10:40:44 crc kubenswrapper[4772]: I1122 10:40:44.466039 4772 patch_prober.go:28] interesting pod/console-f9d7485db-ld9hg container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Nov 22 10:40:44 crc kubenswrapper[4772]: I1122 10:40:44.466133 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-ld9hg" podUID="614def41-0349-470c-afca-e5c335fa8834" containerName="console" probeResult="failure" output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" Nov 22 10:40:44 crc kubenswrapper[4772]: I1122 10:40:44.550728 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:44 crc kubenswrapper[4772]: E1122 10:40:44.553227 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-22 10:40:45.053189233 +0000 UTC m=+165.292633867 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:44 crc kubenswrapper[4772]: I1122 10:40:44.652923 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:44 crc kubenswrapper[4772]: E1122 10:40:44.653445 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:45.153428225 +0000 UTC m=+165.392872719 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:44 crc kubenswrapper[4772]: I1122 10:40:44.671315 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"61880ca93dec048e0b91221b7681801e54363cabb95b87784268c476138abc5a"} Nov 22 10:40:44 crc kubenswrapper[4772]: I1122 10:40:44.673136 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-8z66s" event={"ID":"d1d4b5fd-7de5-410f-b913-6fbb8e1c09b4","Type":"ContainerStarted","Data":"31bc211aa333eb3489d88b1c16a6e3e66fa4bae88bfd3fe01a1a1e0a32b7f263"} Nov 22 10:40:44 crc kubenswrapper[4772]: I1122 10:40:44.675529 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5wvfr" event={"ID":"ccd507b9-7746-46ee-8bd4-1134cc290f67","Type":"ContainerStarted","Data":"5ada888ab5c53d88b75356201b50198288f2718e8f584c6b2259f0228002f7ea"} Nov 22 10:40:44 crc kubenswrapper[4772]: I1122 10:40:44.676888 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hn48r" event={"ID":"f0d00201-7f70-493c-a471-e319513076b3","Type":"ContainerStarted","Data":"9e5b1fa8d4129cef6685068ff4c2452d1a372305e429a26ca0a70a7c05b75edf"} Nov 22 10:40:44 crc kubenswrapper[4772]: I1122 10:40:44.678136 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lkn87" event={"ID":"74f425d7-1955-455d-88ee-37ffccfb8c8c","Type":"ContainerStarted","Data":"a412464f8e1c0705979090679670ae59491c367a69bc39f6396a9f8022e95bc6"} Nov 22 
10:40:44 crc kubenswrapper[4772]: I1122 10:40:44.679414 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rmbgm" event={"ID":"c1c56d4b-3431-4052-b883-6019a008c9aa","Type":"ContainerStarted","Data":"f35e8bac9edc339e5379d36d60ea5448e7371350be7b4900960cdd2a601786e5"} Nov 22 10:40:44 crc kubenswrapper[4772]: I1122 10:40:44.681606 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vp9gx" event={"ID":"90789491-84d5-4454-ba6c-9b55634b5c74","Type":"ContainerStarted","Data":"163255d138e7dcb4009c81f263ce1dc907c6f6c9371eb215bc3eedeb78b1952a"} Nov 22 10:40:44 crc kubenswrapper[4772]: I1122 10:40:44.683185 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"24d73081cb0caed3ad75e3ccb01cc6cbeffa4ce1a093ca4f2459449e8ab7cd62"} Nov 22 10:40:44 crc kubenswrapper[4772]: I1122 10:40:44.684854 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v9s84" event={"ID":"1411a454-22f4-4eef-828a-6a46c81c6c7e","Type":"ContainerStarted","Data":"b3bde07583bf7737b3fe83c13f878e9c9e81d6fbb87db7eeb7a38fb16a79054e"} Nov 22 10:40:44 crc kubenswrapper[4772]: I1122 10:40:44.686276 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g7h5s" event={"ID":"a26a8d34-6b49-4019-b262-7f8e6fddc433","Type":"ContainerStarted","Data":"074a635e0e84c244efa42e3bd0f98c0528cba25e7ac8074d0b8ab19ef7149f8f"} Nov 22 10:40:44 crc kubenswrapper[4772]: I1122 10:40:44.687872 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"e87c74e29ddde5453243c21f9e51a06ac6abbf1a9d247ff17b6c319cefe14692"} Nov 22 10:40:44 crc kubenswrapper[4772]: I1122 10:40:44.689517 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-vhvzn" event={"ID":"30a3f834-9426-4bd0-908e-4974c60576ff","Type":"ContainerStarted","Data":"8618346c0436196278bbb0bfc612204a7dc7f981a92922d4885a0dc5e50c99a6"} Nov 22 10:40:44 crc kubenswrapper[4772]: I1122 10:40:44.690978 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b4j6f" event={"ID":"76de3e09-61e2-4240-aadb-86c8eaa622f3","Type":"ContainerStarted","Data":"9329decec5ed7fd0651f6f64df9fae53337a5f95049035ddb98e22ae3cacce3d"} Nov 22 10:40:44 crc kubenswrapper[4772]: I1122 10:40:44.692908 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396790-lvhxw" event={"ID":"df5917cd-29a5-4e07-b030-24456d6b0da6","Type":"ContainerStarted","Data":"e9b2f22fc821099bb5a943fb9f80c140dab566532049aa206589d320342d7298"} Nov 22 10:40:44 crc kubenswrapper[4772]: I1122 10:40:44.694923 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dfw97" event={"ID":"f66fcaaf-2b15-40db-9e22-3a0d098d56f2","Type":"ContainerStarted","Data":"912e58417d3436bbb350bd619c650e491bf0ad2f40f01d1811e1db1a3897ac9d"} Nov 22 10:40:44 crc kubenswrapper[4772]: I1122 10:40:44.697098 4772 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-7gdmn" Nov 22 10:40:44 crc kubenswrapper[4772]: I1122 10:40:44.697099 4772 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-vvs55 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Nov 22 10:40:44 crc kubenswrapper[4772]: I1122 10:40:44.697172 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-vvs55" podUID="bea9575a-d7c4-4aaa-bc01-eaee90317eea" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Nov 22 10:40:44 crc kubenswrapper[4772]: I1122 10:40:44.697655 4772 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-7gdmn container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" start-of-body= Nov 22 10:40:44 crc kubenswrapper[4772]: I1122 10:40:44.697909 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-7gdmn" podUID="f529668b-54db-49e7-92cb-c3cf6b986dce" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" Nov 22 10:40:44 crc kubenswrapper[4772]: I1122 10:40:44.705749 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xmqs2" Nov 22 10:40:44 crc kubenswrapper[4772]: I1122 10:40:44.719666 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-wc24s" podStartSLOduration=133.719643034 podStartE2EDuration="2m13.719643034s" podCreationTimestamp="2025-11-22 10:38:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:40:44.716538062 +0000 UTC m=+164.955982586" watchObservedRunningTime="2025-11-22 10:40:44.719643034 +0000 UTC m=+164.959087528" Nov 22 10:40:44 crc kubenswrapper[4772]: I1122 10:40:44.734731 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m9s4q" podStartSLOduration=133.73471507 podStartE2EDuration="2m13.73471507s" podCreationTimestamp="2025-11-22 10:38:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:40:44.732329327 +0000 UTC m=+164.971773821" watchObservedRunningTime="2025-11-22 10:40:44.73471507 +0000 UTC m=+164.974159564" Nov 22 10:40:44 crc kubenswrapper[4772]: I1122 10:40:44.735975 4772 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-vvs55 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Nov 22 10:40:44 crc kubenswrapper[4772]: I1122 10:40:44.736022 4772 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-controller-manager/controller-manager-879f6c89f-vvs55" podUID="bea9575a-d7c4-4aaa-bc01-eaee90317eea" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Nov 22 10:40:44 crc kubenswrapper[4772]: I1122 10:40:44.757124 4772 patch_prober.go:28] interesting pod/downloads-7954f5f757-v2gm9 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Nov 22 10:40:44 crc kubenswrapper[4772]: I1122 10:40:44.757215 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-v2gm9" podUID="a8554d37-40ae-41ef-bed9-7c79b3f8083e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Nov 22 10:40:44 crc kubenswrapper[4772]: I1122 10:40:44.763202 4772 patch_prober.go:28] interesting pod/downloads-7954f5f757-v2gm9 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Nov 22 10:40:44 crc kubenswrapper[4772]: I1122 10:40:44.763386 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-v2gm9" podUID="a8554d37-40ae-41ef-bed9-7c79b3f8083e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Nov 22 10:40:44 crc kubenswrapper[4772]: I1122 10:40:44.764876 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:44 crc kubenswrapper[4772]: E1122 10:40:44.765200 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:45.265159999 +0000 UTC m=+165.504604493 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:44 crc kubenswrapper[4772]: I1122 10:40:44.766061 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:44 crc kubenswrapper[4772]: E1122 10:40:44.772669 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:45.272648845 +0000 UTC m=+165.512093339 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:44 crc kubenswrapper[4772]: I1122 10:40:44.783206 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-7wspp" podStartSLOduration=133.783181102 podStartE2EDuration="2m13.783181102s" podCreationTimestamp="2025-11-22 10:38:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:40:44.782254878 +0000 UTC m=+165.021699362" watchObservedRunningTime="2025-11-22 10:40:44.783181102 +0000 UTC m=+165.022625596" Nov 22 10:40:44 crc kubenswrapper[4772]: I1122 10:40:44.821218 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-7gdmn" podStartSLOduration=134.82119654 podStartE2EDuration="2m14.82119654s" podCreationTimestamp="2025-11-22 10:38:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:40:44.820019079 +0000 UTC m=+165.059463593" watchObservedRunningTime="2025-11-22 10:40:44.82119654 +0000 UTC m=+165.060641034" Nov 22 10:40:44 crc kubenswrapper[4772]: I1122 10:40:44.847418 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-sgkwc" podStartSLOduration=133.847396898 podStartE2EDuration="2m13.847396898s" podCreationTimestamp="2025-11-22 10:38:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:40:44.846821493 +0000 UTC m=+165.086266007" watchObservedRunningTime="2025-11-22 10:40:44.847396898 +0000 UTC m=+165.086841402" Nov 22 10:40:44 crc kubenswrapper[4772]: I1122 10:40:44.867060 
4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:44 crc kubenswrapper[4772]: E1122 10:40:44.867274 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:45.367234159 +0000 UTC m=+165.606678653 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:44 crc kubenswrapper[4772]: I1122 10:40:44.867334 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:44 crc kubenswrapper[4772]: E1122 10:40:44.867755 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:45.367739012 +0000 UTC m=+165.607183506 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:44 crc kubenswrapper[4772]: I1122 10:40:44.870898 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nnvcq" podStartSLOduration=133.870857994 podStartE2EDuration="2m13.870857994s" podCreationTimestamp="2025-11-22 10:38:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:40:44.870403582 +0000 UTC m=+165.109848076" watchObservedRunningTime="2025-11-22 10:40:44.870857994 +0000 UTC m=+165.110302488" Nov 22 10:40:44 crc kubenswrapper[4772]: I1122 10:40:44.892704 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5lcxp" podStartSLOduration=133.892682107 podStartE2EDuration="2m13.892682107s" podCreationTimestamp="2025-11-22 10:38:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:40:44.890717846 +0000 UTC m=+165.130162340" watchObservedRunningTime="2025-11-22 10:40:44.892682107 +0000 UTC m=+165.132126601" Nov 22 10:40:44 crc kubenswrapper[4772]: I1122 10:40:44.897125 4772 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-7gdmn container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" start-of-body= Nov 22 10:40:44 crc kubenswrapper[4772]: I1122 10:40:44.897175 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-7gdmn" podUID="f529668b-54db-49e7-92cb-c3cf6b986dce" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" Nov 22 10:40:44 crc kubenswrapper[4772]: I1122 10:40:44.968379 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-t76z9" podStartSLOduration=133.968345654 podStartE2EDuration="2m13.968345654s" podCreationTimestamp="2025-11-22 10:38:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:40:44.959856331 +0000 UTC m=+165.199300825" watchObservedRunningTime="2025-11-22 10:40:44.968345654 +0000 UTC m=+165.207790148" Nov 22 10:40:44 crc kubenswrapper[4772]: I1122 10:40:44.969255 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:44 crc kubenswrapper[4772]: E1122 10:40:44.969544 4772 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:45.469503624 +0000 UTC m=+165.708948118 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:45 crc kubenswrapper[4772]: I1122 10:40:45.071705 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:45 crc kubenswrapper[4772]: E1122 10:40:45.072440 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:45.572420896 +0000 UTC m=+165.811865390 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:45 crc kubenswrapper[4772]: I1122 10:40:45.173338 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:45 crc kubenswrapper[4772]: E1122 10:40:45.173600 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:45.673575142 +0000 UTC m=+165.913019636 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:45 crc kubenswrapper[4772]: I1122 10:40:45.173924 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:45 crc kubenswrapper[4772]: E1122 10:40:45.174668 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:45.67463836 +0000 UTC m=+165.914083024 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:45 crc kubenswrapper[4772]: I1122 10:40:45.243422 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-wc24s" Nov 22 10:40:45 crc kubenswrapper[4772]: I1122 10:40:45.244215 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-wc24s" Nov 22 10:40:45 crc kubenswrapper[4772]: I1122 10:40:45.246104 4772 patch_prober.go:28] interesting pod/router-default-5444994796-wc24s container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Nov 22 10:40:45 crc kubenswrapper[4772]: I1122 10:40:45.246222 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wc24s" podUID="8787d05b-35f8-4862-9b4f-53e18d3b56ef" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Nov 22 10:40:45 crc kubenswrapper[4772]: I1122 10:40:45.275123 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:45 crc kubenswrapper[4772]: E1122 10:40:45.275625 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-22 10:40:45.775569 +0000 UTC m=+166.015013514 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:45 crc kubenswrapper[4772]: I1122 10:40:45.377270 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:45 crc kubenswrapper[4772]: E1122 10:40:45.377700 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:45.877681581 +0000 UTC m=+166.117126075 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:45 crc kubenswrapper[4772]: I1122 10:40:45.478711 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:45 crc kubenswrapper[4772]: E1122 10:40:45.478953 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:45.978911789 +0000 UTC m=+166.218356283 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:45 crc kubenswrapper[4772]: I1122 10:40:45.479118 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:45 crc kubenswrapper[4772]: E1122 10:40:45.479601 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:45.979591247 +0000 UTC m=+166.219035741 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:45 crc kubenswrapper[4772]: I1122 10:40:45.580736 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:45 crc kubenswrapper[4772]: E1122 10:40:45.581022 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:46.080980349 +0000 UTC m=+166.320424843 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:45 crc kubenswrapper[4772]: I1122 10:40:45.682689 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:45 crc kubenswrapper[4772]: E1122 10:40:45.683226 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:46.183200152 +0000 UTC m=+166.422644646 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:45 crc kubenswrapper[4772]: I1122 10:40:45.705020 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5sxdt" event={"ID":"a81453df-6689-461b-8b0f-389b50452f08","Type":"ContainerStarted","Data":"d84220962f98881570b7aeeff979638f2c158cab65267aacc50695781d7d5773"} Nov 22 10:40:45 crc kubenswrapper[4772]: I1122 10:40:45.708334 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-56sqr" event={"ID":"e0f99cb1-427f-4992-8c9b-15c285f13189","Type":"ContainerStarted","Data":"1e381541b58fa3df7b60fe9717a830b0e03be2e28e5f944826b47ceffa11edf0"} Nov 22 10:40:45 crc kubenswrapper[4772]: I1122 10:40:45.710716 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-9vsnk" event={"ID":"ad185e3d-1a10-4e09-9c56-41a4ae9a435c","Type":"ContainerStarted","Data":"f0d0ef53b8aeb9d45265cf5b89bc9c82b6a2a52b5a974f56e26adca5f2433a5d"} Nov 22 10:40:45 crc kubenswrapper[4772]: I1122 10:40:45.711976 4772 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-7gdmn container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" start-of-body= Nov 22 10:40:45 crc kubenswrapper[4772]: I1122 10:40:45.712251 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-7gdmn" podUID="f529668b-54db-49e7-92cb-c3cf6b986dce" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" Nov 22 10:40:45 crc kubenswrapper[4772]: I1122 10:40:45.712768 4772 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-8z66s" Nov 22 10:40:45 crc kubenswrapper[4772]: I1122 10:40:45.713659 4772 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-8z66s container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/healthz\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Nov 22 10:40:45 crc kubenswrapper[4772]: I1122 10:40:45.713723 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-8z66s" podUID="d1d4b5fd-7de5-410f-b913-6fbb8e1c09b4" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.21:8080/healthz\": dial tcp 10.217.0.21:8080: connect: connection refused" Nov 22 10:40:45 crc kubenswrapper[4772]: I1122 10:40:45.739684 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hn48r" Nov 22 10:40:45 crc kubenswrapper[4772]: I1122 10:40:45.742023 4772 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-hn48r container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" start-of-body= Nov 22 10:40:45 crc kubenswrapper[4772]: I1122 10:40:45.742120 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hn48r" podUID="f0d00201-7f70-493c-a471-e319513076b3" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" Nov 22 10:40:45 crc kubenswrapper[4772]: I1122 10:40:45.742033 4772 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-hn48r container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" start-of-body= Nov 22 10:40:45 crc kubenswrapper[4772]: I1122 10:40:45.742223 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hn48r" podUID="f0d00201-7f70-493c-a471-e319513076b3" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" Nov 22 10:40:45 crc kubenswrapper[4772]: I1122 10:40:45.775920 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v9s84" podStartSLOduration=137.775904686 podStartE2EDuration="2m17.775904686s" podCreationTimestamp="2025-11-22 10:38:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:40:45.774579291 +0000 UTC m=+166.014023795" watchObservedRunningTime="2025-11-22 10:40:45.775904686 +0000 UTC m=+166.015349180" Nov 22 10:40:45 crc kubenswrapper[4772]: I1122 10:40:45.777687 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5wvfr" podStartSLOduration=135.777675423 podStartE2EDuration="2m15.777675423s" podCreationTimestamp="2025-11-22 10:38:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:40:45.737156649 +0000 UTC m=+165.976601143" watchObservedRunningTime="2025-11-22 10:40:45.777675423 +0000 UTC m=+166.017119917" Nov 22 10:40:45 crc kubenswrapper[4772]: I1122 10:40:45.788661 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:45 crc kubenswrapper[4772]: E1122 10:40:45.791441 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:46.29128241 +0000 UTC m=+166.530726904 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:45 crc kubenswrapper[4772]: I1122 10:40:45.812040 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-8z66s" podStartSLOduration=134.812008514 podStartE2EDuration="2m14.812008514s" podCreationTimestamp="2025-11-22 10:38:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:40:45.802354471 +0000 UTC m=+166.041798975" watchObservedRunningTime="2025-11-22 10:40:45.812008514 +0000 UTC m=+166.051453268" Nov 22 10:40:45 crc kubenswrapper[4772]: I1122 10:40:45.867692 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g7h5s" podStartSLOduration=134.867662565 podStartE2EDuration="2m14.867662565s" podCreationTimestamp="2025-11-22 10:38:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:40:45.865781246 +0000 UTC m=+166.105225750" watchObservedRunningTime="2025-11-22 10:40:45.867662565 +0000 UTC m=+166.107107049" Nov 22 10:40:45 crc kubenswrapper[4772]: I1122 10:40:45.886416 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29396790-lvhxw" podStartSLOduration=135.886386987 podStartE2EDuration="2m15.886386987s" podCreationTimestamp="2025-11-22 10:38:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:40:45.884507268 +0000 UTC m=+166.123951762" watchObservedRunningTime="2025-11-22 10:40:45.886386987 +0000 UTC m=+166.125831491" Nov 22 10:40:45 crc kubenswrapper[4772]: I1122 10:40:45.890855 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:45 crc kubenswrapper[4772]: E1122 10:40:45.891316 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:46.391296786 +0000 UTC m=+166.630741270 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:45 crc kubenswrapper[4772]: I1122 10:40:45.902485 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rmbgm" podStartSLOduration=134.902457039 podStartE2EDuration="2m14.902457039s" podCreationTimestamp="2025-11-22 10:38:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:40:45.90095002 +0000 UTC m=+166.140394514" watchObservedRunningTime="2025-11-22 10:40:45.902457039 +0000 UTC m=+166.141901533" Nov 22 10:40:45 crc kubenswrapper[4772]: I1122 10:40:45.937173 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-qcjgh" podStartSLOduration=134.93715506 podStartE2EDuration="2m14.93715506s" podCreationTimestamp="2025-11-22 10:38:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:40:45.920514973 +0000 UTC m=+166.159959467" watchObservedRunningTime="2025-11-22 10:40:45.93715506 +0000 UTC m=+166.176599554" Nov 22 10:40:45 crc kubenswrapper[4772]: I1122 10:40:45.961930 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b4j6f" podStartSLOduration=134.96190159 podStartE2EDuration="2m14.96190159s" podCreationTimestamp="2025-11-22 10:38:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:40:45.961274263 +0000 UTC m=+166.200718777" watchObservedRunningTime="2025-11-22 10:40:45.96190159 +0000 UTC m=+166.201346084" Nov 22 10:40:45 crc kubenswrapper[4772]: I1122 10:40:45.962419 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hn48r" podStartSLOduration=134.962413973 podStartE2EDuration="2m14.962413973s" podCreationTimestamp="2025-11-22 10:38:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:40:45.936325528 +0000 UTC m=+166.175770022" watchObservedRunningTime="2025-11-22 10:40:45.962413973 +0000 UTC m=+166.201858467" Nov 22 10:40:45 crc 
kubenswrapper[4772]: I1122 10:40:45.984600 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dfw97" podStartSLOduration=134.984574085 podStartE2EDuration="2m14.984574085s" podCreationTimestamp="2025-11-22 10:38:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:40:45.981247708 +0000 UTC m=+166.220692202" watchObservedRunningTime="2025-11-22 10:40:45.984574085 +0000 UTC m=+166.224018579" Nov 22 10:40:45 crc kubenswrapper[4772]: I1122 10:40:45.991827 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:45 crc kubenswrapper[4772]: E1122 10:40:45.992308 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:46.492288708 +0000 UTC m=+166.731733202 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:46 crc kubenswrapper[4772]: I1122 10:40:46.014610 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lkn87" Nov 22 10:40:46 crc kubenswrapper[4772]: I1122 10:40:46.015873 4772 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-lkn87 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body= Nov 22 10:40:46 crc kubenswrapper[4772]: I1122 10:40:46.015897 4772 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-lkn87 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body= Nov 22 10:40:46 crc kubenswrapper[4772]: I1122 10:40:46.015933 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lkn87" podUID="74f425d7-1955-455d-88ee-37ffccfb8c8c" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" Nov 22 10:40:46 crc kubenswrapper[4772]: I1122 10:40:46.015962 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lkn87" podUID="74f425d7-1955-455d-88ee-37ffccfb8c8c" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: 
connection refused" Nov 22 10:40:46 crc kubenswrapper[4772]: I1122 10:40:46.074267 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-vhvzn" podStartSLOduration=14.074235389 podStartE2EDuration="14.074235389s" podCreationTimestamp="2025-11-22 10:40:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:40:46.073376037 +0000 UTC m=+166.312820531" watchObservedRunningTime="2025-11-22 10:40:46.074235389 +0000 UTC m=+166.313679893" Nov 22 10:40:46 crc kubenswrapper[4772]: I1122 10:40:46.089159 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-2g4sz" podStartSLOduration=14.08912045 podStartE2EDuration="14.08912045s" podCreationTimestamp="2025-11-22 10:40:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:40:46.051512343 +0000 UTC m=+166.290956837" watchObservedRunningTime="2025-11-22 10:40:46.08912045 +0000 UTC m=+166.328564944" Nov 22 10:40:46 crc kubenswrapper[4772]: I1122 10:40:46.095420 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:46 crc kubenswrapper[4772]: E1122 10:40:46.095854 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:46.595833507 +0000 UTC m=+166.835278001 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:46 crc kubenswrapper[4772]: I1122 10:40:46.106812 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lkn87" podStartSLOduration=135.106776634 podStartE2EDuration="2m15.106776634s" podCreationTimestamp="2025-11-22 10:38:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:40:46.101729891 +0000 UTC m=+166.341174385" watchObservedRunningTime="2025-11-22 10:40:46.106776634 +0000 UTC m=+166.346221128" Nov 22 10:40:46 crc kubenswrapper[4772]: I1122 10:40:46.197210 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:46 crc kubenswrapper[4772]: E1122 10:40:46.197635 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:46.697616589 +0000 UTC m=+166.937061083 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:46 crc kubenswrapper[4772]: I1122 10:40:46.248158 4772 patch_prober.go:28] interesting pod/router-default-5444994796-wc24s container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 22 10:40:46 crc kubenswrapper[4772]: [-]has-synced failed: reason withheld Nov 22 10:40:46 crc kubenswrapper[4772]: [+]process-running ok Nov 22 10:40:46 crc kubenswrapper[4772]: healthz check failed Nov 22 10:40:46 crc kubenswrapper[4772]: I1122 10:40:46.248283 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wc24s" podUID="8787d05b-35f8-4862-9b4f-53e18d3b56ef" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 10:40:46 crc kubenswrapper[4772]: I1122 10:40:46.299143 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:46 crc kubenswrapper[4772]: E1122 10:40:46.299715 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:46.799691349 +0000 UTC m=+167.039135843 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:46 crc kubenswrapper[4772]: I1122 10:40:46.400501 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:46 crc kubenswrapper[4772]: E1122 10:40:46.400957 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:46.900914887 +0000 UTC m=+167.140359391 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:46 crc kubenswrapper[4772]: I1122 10:40:46.502300 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:46 crc kubenswrapper[4772]: E1122 10:40:46.502810 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:47.002787162 +0000 UTC m=+167.242231666 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:46 crc kubenswrapper[4772]: I1122 10:40:46.604148 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:46 crc kubenswrapper[4772]: E1122 10:40:46.604268 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:47.104238086 +0000 UTC m=+167.343682580 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:46 crc kubenswrapper[4772]: I1122 10:40:46.604374 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:46 crc kubenswrapper[4772]: E1122 10:40:46.604725 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:47.104717378 +0000 UTC m=+167.344161872 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:46 crc kubenswrapper[4772]: I1122 10:40:46.627071 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5wvfr" Nov 22 10:40:46 crc kubenswrapper[4772]: I1122 10:40:46.633348 4772 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-5wvfr container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Nov 22 10:40:46 crc kubenswrapper[4772]: I1122 10:40:46.633375 4772 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-5wvfr container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Nov 22 10:40:46 crc kubenswrapper[4772]: I1122 10:40:46.633415 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5wvfr" podUID="ccd507b9-7746-46ee-8bd4-1134cc290f67" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" Nov 22 10:40:46 crc kubenswrapper[4772]: I1122 10:40:46.633452 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5wvfr" podUID="ccd507b9-7746-46ee-8bd4-1134cc290f67" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" Nov 22 10:40:46 crc kubenswrapper[4772]: E1122 10:40:46.706343 
4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:47.206317746 +0000 UTC m=+167.445762240 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:46 crc kubenswrapper[4772]: I1122 10:40:46.706209 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:46 crc kubenswrapper[4772]: I1122 10:40:46.706749 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:46 crc kubenswrapper[4772]: E1122 10:40:46.707182 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:47.207172168 +0000 UTC m=+167.446616662 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:46 crc kubenswrapper[4772]: I1122 10:40:46.719113 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wbcgw" event={"ID":"6f4a455f-fdda-46bf-bebf-67f1b83863c8","Type":"ContainerStarted","Data":"001f019c4fe84b85cba2b2784b8d2dca4425590bd67c621fde05a4e973c5425a"} Nov 22 10:40:46 crc kubenswrapper[4772]: I1122 10:40:46.721719 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-t8jgq" event={"ID":"9d79dc27-4bfc-4318-a8ed-11ceb9cb81b2","Type":"ContainerStarted","Data":"d4b0d5a38199edd7e2b88edfb03fcb0e05d0929385aeefff4c1682a7c7dc54f7"} Nov 22 10:40:46 crc kubenswrapper[4772]: I1122 10:40:46.724782 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-56sqr" event={"ID":"e0f99cb1-427f-4992-8c9b-15c285f13189","Type":"ContainerStarted","Data":"42f066605740ad54a7aff52698aab010780a216356fba2274e48ba7c55ce8c6a"} Nov 22 10:40:46 crc kubenswrapper[4772]: I1122 10:40:46.726584 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-jzjnj" event={"ID":"17ee7fc7-887d-4bf4-b408-d1a723605bdc","Type":"ContainerStarted","Data":"9e17d7422d7c61b8ef90e6ed2aac794e52a91cdaebcc2e4464641fa495f86b55"} Nov 22 10:40:46 crc kubenswrapper[4772]: I1122 10:40:46.728978 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-km77q" event={"ID":"d5de78fd-e1a1-4ec1-9767-336aebc1d19e","Type":"ContainerStarted","Data":"ed3218e46c661a7f8b16f6f7ed28a089258d838cc76075f9bd88fbf92597962c"} Nov 22 10:40:46 crc kubenswrapper[4772]: I1122 10:40:46.731701 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kqhcv" event={"ID":"7b9b1e33-9a6a-4d0f-af54-5589be658a3a","Type":"ContainerStarted","Data":"75a800d4c4e7a05660e39b8138b05c7d8faac9afcad38aebe2642dbe691e064c"} Nov 22 10:40:46 crc kubenswrapper[4772]: I1122 10:40:46.733884 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5znz6" event={"ID":"e5a02044-ee67-480e-9cc9-22cf07bc9388","Type":"ContainerStarted","Data":"7fadddd65771285739e956653977008f46faf0f7217bbafa2cd81228b07f6b91"} Nov 22 10:40:46 crc kubenswrapper[4772]: I1122 10:40:46.735831 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bgkbs" event={"ID":"bd547fda-024b-4cad-bbfd-f82f5fbd5859","Type":"ContainerStarted","Data":"61a61d3374fa348ec4ba67b8b40d11d9a2b4a4ad27dea5e72c1fee32c70d08c8"} Nov 22 10:40:46 crc kubenswrapper[4772]: I1122 10:40:46.737919 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vp9gx" event={"ID":"90789491-84d5-4454-ba6c-9b55634b5c74","Type":"ContainerStarted","Data":"c734c348b7a93c27e6c3ed88bc96624448e77f7f37862ed091620a7e3520cf24"} Nov 22 10:40:46 crc kubenswrapper[4772]: I1122 10:40:46.740958 4772 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lk87v" event={"ID":"ca10904e-f1cc-40f0-ba83-f4711606e7f3","Type":"ContainerStarted","Data":"30a8a1daadf13e60a587af4778973f33307efaa762ca3d949b317dea329b310d"} Nov 22 10:40:46 crc kubenswrapper[4772]: I1122 10:40:46.741376 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-9vsnk" Nov 22 10:40:46 crc kubenswrapper[4772]: I1122 10:40:46.741754 4772 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-5wvfr container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Nov 22 10:40:46 crc kubenswrapper[4772]: I1122 10:40:46.741776 4772 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-8z66s container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/healthz\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Nov 22 10:40:46 crc kubenswrapper[4772]: I1122 10:40:46.741822 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5wvfr" podUID="ccd507b9-7746-46ee-8bd4-1134cc290f67" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" Nov 22 10:40:46 crc kubenswrapper[4772]: I1122 10:40:46.741859 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-8z66s" podUID="d1d4b5fd-7de5-410f-b913-6fbb8e1c09b4" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.21:8080/healthz\": dial tcp 10.217.0.21:8080: connect: connection refused" Nov 22 10:40:46 crc kubenswrapper[4772]: I1122 10:40:46.741780 4772 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-lkn87 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body= Nov 22 10:40:46 crc kubenswrapper[4772]: I1122 10:40:46.741934 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lkn87" podUID="74f425d7-1955-455d-88ee-37ffccfb8c8c" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" Nov 22 10:40:46 crc kubenswrapper[4772]: I1122 10:40:46.742065 4772 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-hn48r container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" start-of-body= Nov 22 10:40:46 crc kubenswrapper[4772]: I1122 10:40:46.742182 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hn48r" podUID="f0d00201-7f70-493c-a471-e319513076b3" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" Nov 22 10:40:46 crc kubenswrapper[4772]: 
I1122 10:40:46.763277 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wbcgw" podStartSLOduration=136.763252571 podStartE2EDuration="2m16.763252571s" podCreationTimestamp="2025-11-22 10:38:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:40:46.761160536 +0000 UTC m=+167.000605060" watchObservedRunningTime="2025-11-22 10:40:46.763252571 +0000 UTC m=+167.002697065" Nov 22 10:40:46 crc kubenswrapper[4772]: I1122 10:40:46.808709 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:46 crc kubenswrapper[4772]: E1122 10:40:46.808943 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:47.308900859 +0000 UTC m=+167.548345353 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:46 crc kubenswrapper[4772]: I1122 10:40:46.813634 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:46 crc kubenswrapper[4772]: E1122 10:40:46.814176 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:47.314158227 +0000 UTC m=+167.553602721 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:46 crc kubenswrapper[4772]: I1122 10:40:46.834649 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lk87v" podStartSLOduration=135.834626815 podStartE2EDuration="2m15.834626815s" podCreationTimestamp="2025-11-22 10:38:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:40:46.791181074 +0000 UTC m=+167.030625568" watchObservedRunningTime="2025-11-22 10:40:46.834626815 +0000 UTC m=+167.074071309" Nov 22 10:40:46 crc kubenswrapper[4772]: I1122 10:40:46.871158 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-56sqr" podStartSLOduration=136.871139493 podStartE2EDuration="2m16.871139493s" podCreationTimestamp="2025-11-22 10:38:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:40:46.841528816 +0000 UTC m=+167.080973320" watchObservedRunningTime="2025-11-22 10:40:46.871139493 +0000 UTC m=+167.110583987" Nov 22 10:40:46 crc kubenswrapper[4772]: I1122 10:40:46.871716 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kqhcv" podStartSLOduration=135.871709848 podStartE2EDuration="2m15.871709848s" podCreationTimestamp="2025-11-22 10:38:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:40:46.867692723 +0000 UTC m=+167.107137217" watchObservedRunningTime="2025-11-22 10:40:46.871709848 +0000 UTC m=+167.111154342" Nov 22 10:40:46 crc kubenswrapper[4772]: I1122 10:40:46.888639 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bgkbs" podStartSLOduration=135.888622722 podStartE2EDuration="2m15.888622722s" podCreationTimestamp="2025-11-22 10:38:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:40:46.885924122 +0000 UTC m=+167.125368606" watchObservedRunningTime="2025-11-22 10:40:46.888622722 +0000 UTC m=+167.128067216" Nov 22 10:40:46 crc kubenswrapper[4772]: I1122 10:40:46.915413 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:46 crc kubenswrapper[4772]: E1122 10:40:46.915555 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-22 10:40:47.415531209 +0000 UTC m=+167.654975703 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:46 crc kubenswrapper[4772]: I1122 10:40:46.916136 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:46 crc kubenswrapper[4772]: E1122 10:40:46.916490 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:47.416479584 +0000 UTC m=+167.655924078 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:46 crc kubenswrapper[4772]: I1122 10:40:46.947918 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5sxdt" podStartSLOduration=135.947899519 podStartE2EDuration="2m15.947899519s" podCreationTimestamp="2025-11-22 10:38:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:40:46.920625223 +0000 UTC m=+167.160069707" watchObservedRunningTime="2025-11-22 10:40:46.947899519 +0000 UTC m=+167.187344013" Nov 22 10:40:46 crc kubenswrapper[4772]: I1122 10:40:46.950994 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5znz6" podStartSLOduration=135.95097955 podStartE2EDuration="2m15.95097955s" podCreationTimestamp="2025-11-22 10:38:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:40:46.948155896 +0000 UTC m=+167.187600390" watchObservedRunningTime="2025-11-22 10:40:46.95097955 +0000 UTC m=+167.190424034" Nov 22 10:40:46 crc kubenswrapper[4772]: I1122 10:40:46.972554 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vp9gx" podStartSLOduration=135.972534936 podStartE2EDuration="2m15.972534936s" podCreationTimestamp="2025-11-22 10:38:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:40:46.96963851 +0000 UTC m=+167.209083004" 
watchObservedRunningTime="2025-11-22 10:40:46.972534936 +0000 UTC m=+167.211979430" Nov 22 10:40:46 crc kubenswrapper[4772]: I1122 10:40:46.992205 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-9vsnk" podStartSLOduration=14.992188912 podStartE2EDuration="14.992188912s" podCreationTimestamp="2025-11-22 10:40:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:40:46.990023765 +0000 UTC m=+167.229468269" watchObservedRunningTime="2025-11-22 10:40:46.992188912 +0000 UTC m=+167.231633406" Nov 22 10:40:47 crc kubenswrapper[4772]: I1122 10:40:47.017348 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:47 crc kubenswrapper[4772]: E1122 10:40:47.017665 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:47.51763001 +0000 UTC m=+167.757074514 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:47 crc kubenswrapper[4772]: I1122 10:40:47.018306 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-t8jgq" podStartSLOduration=136.018286117 podStartE2EDuration="2m16.018286117s" podCreationTimestamp="2025-11-22 10:38:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:40:47.015035412 +0000 UTC m=+167.254479906" watchObservedRunningTime="2025-11-22 10:40:47.018286117 +0000 UTC m=+167.257730611" Nov 22 10:40:47 crc kubenswrapper[4772]: I1122 10:40:47.048913 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-km77q" podStartSLOduration=136.048896341 podStartE2EDuration="2m16.048896341s" podCreationTimestamp="2025-11-22 10:38:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:40:47.047490034 +0000 UTC m=+167.286934528" watchObservedRunningTime="2025-11-22 10:40:47.048896341 +0000 UTC m=+167.288340835" Nov 22 10:40:47 crc kubenswrapper[4772]: I1122 10:40:47.118697 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:47 crc 
kubenswrapper[4772]: E1122 10:40:47.119101 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:47.619083294 +0000 UTC m=+167.858527788 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:47 crc kubenswrapper[4772]: I1122 10:40:47.219440 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:47 crc kubenswrapper[4772]: E1122 10:40:47.219848 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:47.719831179 +0000 UTC m=+167.959275673 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:47 crc kubenswrapper[4772]: I1122 10:40:47.251522 4772 patch_prober.go:28] interesting pod/router-default-5444994796-wc24s container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 22 10:40:47 crc kubenswrapper[4772]: [-]has-synced failed: reason withheld Nov 22 10:40:47 crc kubenswrapper[4772]: [+]process-running ok Nov 22 10:40:47 crc kubenswrapper[4772]: healthz check failed Nov 22 10:40:47 crc kubenswrapper[4772]: I1122 10:40:47.251650 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wc24s" podUID="8787d05b-35f8-4862-9b4f-53e18d3b56ef" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 10:40:47 crc kubenswrapper[4772]: I1122 10:40:47.321301 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:47 crc kubenswrapper[4772]: E1122 10:40:47.321616 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2025-11-22 10:40:47.821601811 +0000 UTC m=+168.061046305 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:47 crc kubenswrapper[4772]: I1122 10:40:47.422144 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:47 crc kubenswrapper[4772]: E1122 10:40:47.422486 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:47.922415498 +0000 UTC m=+168.161859992 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:47 crc kubenswrapper[4772]: I1122 10:40:47.422593 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:47 crc kubenswrapper[4772]: E1122 10:40:47.423084 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:47.923018814 +0000 UTC m=+168.162463448 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:47 crc kubenswrapper[4772]: I1122 10:40:47.524924 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:47 crc kubenswrapper[4772]: E1122 10:40:47.525418 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:48.025376751 +0000 UTC m=+168.264821245 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:47 crc kubenswrapper[4772]: I1122 10:40:47.525816 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:47 crc kubenswrapper[4772]: E1122 10:40:47.526346 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:48.026326876 +0000 UTC m=+168.265771370 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:47 crc kubenswrapper[4772]: I1122 10:40:47.627880 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:47 crc kubenswrapper[4772]: E1122 10:40:47.628372 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:48.128338775 +0000 UTC m=+168.367783269 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:47 crc kubenswrapper[4772]: I1122 10:40:47.729609 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:47 crc kubenswrapper[4772]: E1122 10:40:47.730127 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:48.230101597 +0000 UTC m=+168.469546091 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:47 crc kubenswrapper[4772]: I1122 10:40:47.832980 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:47 crc kubenswrapper[4772]: E1122 10:40:47.833333 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:48.333287206 +0000 UTC m=+168.572731700 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:47 crc kubenswrapper[4772]: I1122 10:40:47.833884 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:47 crc kubenswrapper[4772]: E1122 10:40:47.835650 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:48.335630688 +0000 UTC m=+168.575075352 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:47 crc kubenswrapper[4772]: I1122 10:40:47.935909 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:47 crc kubenswrapper[4772]: E1122 10:40:47.936134 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:48.436106366 +0000 UTC m=+168.675550860 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:47 crc kubenswrapper[4772]: I1122 10:40:47.936182 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:47 crc kubenswrapper[4772]: E1122 10:40:47.936539 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:48.436531867 +0000 UTC m=+168.675976361 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:48 crc kubenswrapper[4772]: I1122 10:40:48.037914 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:48 crc kubenswrapper[4772]: E1122 10:40:48.038160 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:48.538126695 +0000 UTC m=+168.777571189 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:48 crc kubenswrapper[4772]: I1122 10:40:48.038258 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:48 crc kubenswrapper[4772]: E1122 10:40:48.038573 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:48.538566116 +0000 UTC m=+168.778010610 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:48 crc kubenswrapper[4772]: I1122 10:40:48.139621 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:48 crc kubenswrapper[4772]: E1122 10:40:48.139847 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:48.639806885 +0000 UTC m=+168.879251379 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:48 crc kubenswrapper[4772]: I1122 10:40:48.140174 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:48 crc kubenswrapper[4772]: E1122 10:40:48.140523 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:48.640511213 +0000 UTC m=+168.879955707 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:48 crc kubenswrapper[4772]: I1122 10:40:48.203951 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 10:40:48 crc kubenswrapper[4772]: I1122 10:40:48.241280 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:48 crc kubenswrapper[4772]: E1122 10:40:48.241443 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:48.741409482 +0000 UTC m=+168.980853976 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:48 crc kubenswrapper[4772]: I1122 10:40:48.241658 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:48 crc kubenswrapper[4772]: E1122 10:40:48.241976 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:48.741968977 +0000 UTC m=+168.981413471 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:48 crc kubenswrapper[4772]: I1122 10:40:48.247316 4772 patch_prober.go:28] interesting pod/router-default-5444994796-wc24s container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 22 10:40:48 crc kubenswrapper[4772]: [-]has-synced failed: reason withheld Nov 22 10:40:48 crc kubenswrapper[4772]: [+]process-running ok Nov 22 10:40:48 crc kubenswrapper[4772]: healthz check failed Nov 22 10:40:48 crc kubenswrapper[4772]: I1122 10:40:48.247568 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wc24s" podUID="8787d05b-35f8-4862-9b4f-53e18d3b56ef" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 10:40:48 crc kubenswrapper[4772]: I1122 10:40:48.343006 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:48 crc kubenswrapper[4772]: E1122 10:40:48.343310 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:48.843281307 +0000 UTC m=+169.082725801 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:48 crc kubenswrapper[4772]: I1122 10:40:48.343601 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:48 crc kubenswrapper[4772]: E1122 10:40:48.344075 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:48.844061958 +0000 UTC m=+169.083506452 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:48 crc kubenswrapper[4772]: I1122 10:40:48.444563 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:48 crc kubenswrapper[4772]: E1122 10:40:48.444764 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:48.944733091 +0000 UTC m=+169.184177585 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:48 crc kubenswrapper[4772]: I1122 10:40:48.445078 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:48 crc kubenswrapper[4772]: E1122 10:40:48.445377 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:48.945370537 +0000 UTC m=+169.184815031 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:48 crc kubenswrapper[4772]: I1122 10:40:48.545945 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:48 crc kubenswrapper[4772]: E1122 10:40:48.546357 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:49.046329558 +0000 UTC m=+169.285774052 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:48 crc kubenswrapper[4772]: I1122 10:40:48.647424 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:48 crc kubenswrapper[4772]: E1122 10:40:48.647794 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:49.147778272 +0000 UTC m=+169.387222766 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:48 crc kubenswrapper[4772]: I1122 10:40:48.748738 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:48 crc kubenswrapper[4772]: E1122 10:40:48.748927 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:49.248894617 +0000 UTC m=+169.488339111 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:48 crc kubenswrapper[4772]: I1122 10:40:48.749612 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:48 crc kubenswrapper[4772]: E1122 10:40:48.749962 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:49.249945944 +0000 UTC m=+169.489390438 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:48 crc kubenswrapper[4772]: I1122 10:40:48.851062 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:48 crc kubenswrapper[4772]: E1122 10:40:48.851268 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:49.351247204 +0000 UTC m=+169.590691698 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:48 crc kubenswrapper[4772]: I1122 10:40:48.851766 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:48 crc kubenswrapper[4772]: E1122 10:40:48.852118 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:49.352107217 +0000 UTC m=+169.591551711 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:48 crc kubenswrapper[4772]: I1122 10:40:48.953238 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:48 crc kubenswrapper[4772]: E1122 10:40:48.953522 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:49.453493069 +0000 UTC m=+169.692937563 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:48 crc kubenswrapper[4772]: I1122 10:40:48.953855 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:48 crc kubenswrapper[4772]: E1122 10:40:48.954252 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:49.454239628 +0000 UTC m=+169.693684112 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:49 crc kubenswrapper[4772]: I1122 10:40:49.054505 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:49 crc kubenswrapper[4772]: E1122 10:40:49.055217 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:49.555197719 +0000 UTC m=+169.794642213 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:49 crc kubenswrapper[4772]: I1122 10:40:49.156086 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:49 crc kubenswrapper[4772]: E1122 10:40:49.156446 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:49.656428047 +0000 UTC m=+169.895872541 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:49 crc kubenswrapper[4772]: I1122 10:40:49.247084 4772 patch_prober.go:28] interesting pod/router-default-5444994796-wc24s container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 22 10:40:49 crc kubenswrapper[4772]: [-]has-synced failed: reason withheld Nov 22 10:40:49 crc kubenswrapper[4772]: [+]process-running ok Nov 22 10:40:49 crc kubenswrapper[4772]: healthz check failed Nov 22 10:40:49 crc kubenswrapper[4772]: I1122 10:40:49.247132 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wc24s" podUID="8787d05b-35f8-4862-9b4f-53e18d3b56ef" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 10:40:49 crc kubenswrapper[4772]: I1122 10:40:49.257786 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:49 crc kubenswrapper[4772]: E1122 10:40:49.258237 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:49.758190208 +0000 UTC m=+169.997634712 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:49 crc kubenswrapper[4772]: I1122 10:40:49.258387 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:49 crc kubenswrapper[4772]: E1122 10:40:49.258853 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:49.758838755 +0000 UTC m=+169.998283449 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:49 crc kubenswrapper[4772]: I1122 10:40:49.359506 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:49 crc kubenswrapper[4772]: E1122 10:40:49.359970 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:49.859926729 +0000 UTC m=+170.099371223 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:49 crc kubenswrapper[4772]: I1122 10:40:49.360708 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:49 crc kubenswrapper[4772]: E1122 10:40:49.361208 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:49.861187062 +0000 UTC m=+170.100631556 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:49 crc kubenswrapper[4772]: I1122 10:40:49.462694 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:49 crc kubenswrapper[4772]: E1122 10:40:49.463353 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:49.963329384 +0000 UTC m=+170.202773878 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:49 crc kubenswrapper[4772]: I1122 10:40:49.565126 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:49 crc kubenswrapper[4772]: E1122 10:40:49.565562 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:50.065543598 +0000 UTC m=+170.304988092 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:49 crc kubenswrapper[4772]: I1122 10:40:49.620593 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-56sqr" Nov 22 10:40:49 crc kubenswrapper[4772]: I1122 10:40:49.620697 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-56sqr" Nov 22 10:40:49 crc kubenswrapper[4772]: I1122 10:40:49.622288 4772 patch_prober.go:28] interesting pod/apiserver-76f77b778f-56sqr container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.6:8443/livez\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Nov 22 10:40:49 crc kubenswrapper[4772]: I1122 10:40:49.622459 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-56sqr" podUID="e0f99cb1-427f-4992-8c9b-15c285f13189" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.6:8443/livez\": dial tcp 10.217.0.6:8443: connect: connection refused" Nov 22 10:40:49 crc kubenswrapper[4772]: I1122 10:40:49.668670 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-59qml"] Nov 22 10:40:49 crc kubenswrapper[4772]: I1122 10:40:49.669848 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-59qml" Nov 22 10:40:49 crc kubenswrapper[4772]: I1122 10:40:49.670569 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:49 crc kubenswrapper[4772]: E1122 10:40:49.670766 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:50.17073384 +0000 UTC m=+170.410178344 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:49 crc kubenswrapper[4772]: I1122 10:40:49.671021 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:49 crc kubenswrapper[4772]: E1122 10:40:49.672070 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:50.172023894 +0000 UTC m=+170.411468598 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:49 crc kubenswrapper[4772]: I1122 10:40:49.677385 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 22 10:40:49 crc kubenswrapper[4772]: I1122 10:40:49.704260 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5wvfr" Nov 22 10:40:49 crc kubenswrapper[4772]: I1122 10:40:49.711405 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-59qml"] Nov 22 10:40:49 crc kubenswrapper[4772]: I1122 10:40:49.719308 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5znz6" Nov 22 10:40:49 crc kubenswrapper[4772]: I1122 10:40:49.719380 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5znz6" Nov 22 10:40:49 crc kubenswrapper[4772]: I1122 10:40:49.771891 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:49 crc kubenswrapper[4772]: I1122 10:40:49.772350 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64fhp\" (UniqueName: \"kubernetes.io/projected/64157fec-2673-403c-96c5-2cbaf3ca17a2-kube-api-access-64fhp\") pod \"community-operators-59qml\" (UID: \"64157fec-2673-403c-96c5-2cbaf3ca17a2\") " pod="openshift-marketplace/community-operators-59qml" Nov 22 10:40:49 crc kubenswrapper[4772]: I1122 
10:40:49.772381 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64157fec-2673-403c-96c5-2cbaf3ca17a2-utilities\") pod \"community-operators-59qml\" (UID: \"64157fec-2673-403c-96c5-2cbaf3ca17a2\") " pod="openshift-marketplace/community-operators-59qml" Nov 22 10:40:49 crc kubenswrapper[4772]: I1122 10:40:49.772493 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64157fec-2673-403c-96c5-2cbaf3ca17a2-catalog-content\") pod \"community-operators-59qml\" (UID: \"64157fec-2673-403c-96c5-2cbaf3ca17a2\") " pod="openshift-marketplace/community-operators-59qml" Nov 22 10:40:49 crc kubenswrapper[4772]: E1122 10:40:49.773661 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:50.273642182 +0000 UTC m=+170.513086676 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:49 crc kubenswrapper[4772]: I1122 10:40:49.782730 4772 generic.go:334] "Generic (PLEG): container finished" podID="df5917cd-29a5-4e07-b030-24456d6b0da6" containerID="e9b2f22fc821099bb5a943fb9f80c140dab566532049aa206589d320342d7298" exitCode=0 Nov 22 10:40:49 crc kubenswrapper[4772]: I1122 10:40:49.782891 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396790-lvhxw" event={"ID":"df5917cd-29a5-4e07-b030-24456d6b0da6","Type":"ContainerDied","Data":"e9b2f22fc821099bb5a943fb9f80c140dab566532049aa206589d320342d7298"} Nov 22 10:40:49 crc kubenswrapper[4772]: I1122 10:40:49.806715 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-jzjnj" event={"ID":"17ee7fc7-887d-4bf4-b408-d1a723605bdc","Type":"ContainerStarted","Data":"8f8ec0bf41a339b5888922d7899900dd3c61ee83418a541007c9c5f52a6512b5"} Nov 22 10:40:49 crc kubenswrapper[4772]: I1122 10:40:49.812400 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 22 10:40:49 crc kubenswrapper[4772]: I1122 10:40:49.813255 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 22 10:40:49 crc kubenswrapper[4772]: I1122 10:40:49.826961 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 22 10:40:49 crc kubenswrapper[4772]: I1122 10:40:49.827279 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 22 10:40:49 crc kubenswrapper[4772]: I1122 10:40:49.867738 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 22 10:40:49 crc kubenswrapper[4772]: I1122 10:40:49.875093 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9e7566ec-f873-4243-94e6-fe7c38cc07ae-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"9e7566ec-f873-4243-94e6-fe7c38cc07ae\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 22 10:40:49 crc kubenswrapper[4772]: I1122 10:40:49.875169 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64157fec-2673-403c-96c5-2cbaf3ca17a2-catalog-content\") pod \"community-operators-59qml\" (UID: \"64157fec-2673-403c-96c5-2cbaf3ca17a2\") " pod="openshift-marketplace/community-operators-59qml" Nov 22 10:40:49 crc kubenswrapper[4772]: I1122 10:40:49.875227 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9e7566ec-f873-4243-94e6-fe7c38cc07ae-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"9e7566ec-f873-4243-94e6-fe7c38cc07ae\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 22 10:40:49 crc kubenswrapper[4772]: I1122 10:40:49.875271 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:49 crc kubenswrapper[4772]: I1122 10:40:49.875293 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64fhp\" (UniqueName: \"kubernetes.io/projected/64157fec-2673-403c-96c5-2cbaf3ca17a2-kube-api-access-64fhp\") pod \"community-operators-59qml\" (UID: \"64157fec-2673-403c-96c5-2cbaf3ca17a2\") " pod="openshift-marketplace/community-operators-59qml" Nov 22 10:40:49 crc kubenswrapper[4772]: I1122 10:40:49.875322 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64157fec-2673-403c-96c5-2cbaf3ca17a2-utilities\") pod \"community-operators-59qml\" (UID: \"64157fec-2673-403c-96c5-2cbaf3ca17a2\") " pod="openshift-marketplace/community-operators-59qml" Nov 22 10:40:49 crc kubenswrapper[4772]: I1122 10:40:49.875805 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64157fec-2673-403c-96c5-2cbaf3ca17a2-utilities\") pod \"community-operators-59qml\" (UID: \"64157fec-2673-403c-96c5-2cbaf3ca17a2\") " pod="openshift-marketplace/community-operators-59qml" Nov 22 10:40:49 crc kubenswrapper[4772]: I1122 10:40:49.876111 4772 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64157fec-2673-403c-96c5-2cbaf3ca17a2-catalog-content\") pod \"community-operators-59qml\" (UID: \"64157fec-2673-403c-96c5-2cbaf3ca17a2\") " pod="openshift-marketplace/community-operators-59qml" Nov 22 10:40:49 crc kubenswrapper[4772]: E1122 10:40:49.876462 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:50.376445831 +0000 UTC m=+170.615890325 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:49 crc kubenswrapper[4772]: I1122 10:40:49.880746 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5g5n6"] Nov 22 10:40:49 crc kubenswrapper[4772]: I1122 10:40:49.901731 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5g5n6" Nov 22 10:40:49 crc kubenswrapper[4772]: I1122 10:40:49.904988 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 22 10:40:49 crc kubenswrapper[4772]: I1122 10:40:49.946206 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5g5n6"] Nov 22 10:40:49 crc kubenswrapper[4772]: I1122 10:40:49.977225 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 22 10:40:50 crc kubenswrapper[4772]: I1122 10:40:50.014720 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:50 crc kubenswrapper[4772]: I1122 10:40:50.015129 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngkkc\" (UniqueName: \"kubernetes.io/projected/59d55861-971b-404f-9926-4d41f07f0880-kube-api-access-ngkkc\") pod \"certified-operators-5g5n6\" (UID: \"59d55861-971b-404f-9926-4d41f07f0880\") " pod="openshift-marketplace/certified-operators-5g5n6" Nov 22 10:40:50 crc kubenswrapper[4772]: I1122 10:40:50.015260 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9e7566ec-f873-4243-94e6-fe7c38cc07ae-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"9e7566ec-f873-4243-94e6-fe7c38cc07ae\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 22 10:40:50 crc kubenswrapper[4772]: I1122 10:40:50.015364 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59d55861-971b-404f-9926-4d41f07f0880-catalog-content\") pod \"certified-operators-5g5n6\" (UID: 
\"59d55861-971b-404f-9926-4d41f07f0880\") " pod="openshift-marketplace/certified-operators-5g5n6" Nov 22 10:40:50 crc kubenswrapper[4772]: I1122 10:40:50.015482 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59d55861-971b-404f-9926-4d41f07f0880-utilities\") pod \"certified-operators-5g5n6\" (UID: \"59d55861-971b-404f-9926-4d41f07f0880\") " pod="openshift-marketplace/certified-operators-5g5n6" Nov 22 10:40:50 crc kubenswrapper[4772]: I1122 10:40:50.015661 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9e7566ec-f873-4243-94e6-fe7c38cc07ae-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"9e7566ec-f873-4243-94e6-fe7c38cc07ae\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 22 10:40:50 crc kubenswrapper[4772]: I1122 10:40:50.026244 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64fhp\" (UniqueName: \"kubernetes.io/projected/64157fec-2673-403c-96c5-2cbaf3ca17a2-kube-api-access-64fhp\") pod \"community-operators-59qml\" (UID: \"64157fec-2673-403c-96c5-2cbaf3ca17a2\") " pod="openshift-marketplace/community-operators-59qml" Nov 22 10:40:50 crc kubenswrapper[4772]: I1122 10:40:50.026545 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 22 10:40:50 crc kubenswrapper[4772]: E1122 10:40:50.028070 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:50.528027381 +0000 UTC m=+170.767471875 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:50 crc kubenswrapper[4772]: I1122 10:40:50.029216 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9e7566ec-f873-4243-94e6-fe7c38cc07ae-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"9e7566ec-f873-4243-94e6-fe7c38cc07ae\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 22 10:40:50 crc kubenswrapper[4772]: I1122 10:40:50.036609 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 22 10:40:50 crc kubenswrapper[4772]: I1122 10:40:50.083675 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Nov 22 10:40:50 crc kubenswrapper[4772]: I1122 10:40:50.084102 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Nov 22 10:40:50 crc kubenswrapper[4772]: I1122 10:40:50.103196 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-m47st"] Nov 22 10:40:50 crc kubenswrapper[4772]: I1122 10:40:50.104861 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m47st" Nov 22 10:40:50 crc kubenswrapper[4772]: I1122 10:40:50.116990 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c6c25bbc-1eb4-4d98-bb37-37a5871aa4f2-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c6c25bbc-1eb4-4d98-bb37-37a5871aa4f2\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 22 10:40:50 crc kubenswrapper[4772]: I1122 10:40:50.117100 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c6c25bbc-1eb4-4d98-bb37-37a5871aa4f2-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c6c25bbc-1eb4-4d98-bb37-37a5871aa4f2\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 22 10:40:50 crc kubenswrapper[4772]: I1122 10:40:50.117133 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:50 crc kubenswrapper[4772]: I1122 10:40:50.118333 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngkkc\" (UniqueName: \"kubernetes.io/projected/59d55861-971b-404f-9926-4d41f07f0880-kube-api-access-ngkkc\") pod \"certified-operators-5g5n6\" (UID: \"59d55861-971b-404f-9926-4d41f07f0880\") " pod="openshift-marketplace/certified-operators-5g5n6" Nov 22 10:40:50 crc kubenswrapper[4772]: I1122 10:40:50.118381 4772 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59d55861-971b-404f-9926-4d41f07f0880-catalog-content\") pod \"certified-operators-5g5n6\" (UID: \"59d55861-971b-404f-9926-4d41f07f0880\") " pod="openshift-marketplace/certified-operators-5g5n6" Nov 22 10:40:50 crc kubenswrapper[4772]: I1122 10:40:50.118408 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59d55861-971b-404f-9926-4d41f07f0880-utilities\") pod \"certified-operators-5g5n6\" (UID: \"59d55861-971b-404f-9926-4d41f07f0880\") " pod="openshift-marketplace/certified-operators-5g5n6" Nov 22 10:40:50 crc kubenswrapper[4772]: I1122 10:40:50.118945 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59d55861-971b-404f-9926-4d41f07f0880-utilities\") pod \"certified-operators-5g5n6\" (UID: \"59d55861-971b-404f-9926-4d41f07f0880\") " pod="openshift-marketplace/certified-operators-5g5n6" Nov 22 10:40:50 crc kubenswrapper[4772]: E1122 10:40:50.119327 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:50.619310388 +0000 UTC m=+170.858754882 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:50 crc kubenswrapper[4772]: I1122 10:40:50.119998 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59d55861-971b-404f-9926-4d41f07f0880-catalog-content\") pod \"certified-operators-5g5n6\" (UID: \"59d55861-971b-404f-9926-4d41f07f0880\") " pod="openshift-marketplace/certified-operators-5g5n6" Nov 22 10:40:50 crc kubenswrapper[4772]: I1122 10:40:50.135984 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9e7566ec-f873-4243-94e6-fe7c38cc07ae-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"9e7566ec-f873-4243-94e6-fe7c38cc07ae\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 22 10:40:50 crc kubenswrapper[4772]: I1122 10:40:50.147744 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m47st"] Nov 22 10:40:50 crc kubenswrapper[4772]: I1122 10:40:50.148426 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5znz6" Nov 22 10:40:50 crc kubenswrapper[4772]: I1122 10:40:50.149382 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 22 10:40:50 crc kubenswrapper[4772]: I1122 10:40:50.173287 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngkkc\" (UniqueName: \"kubernetes.io/projected/59d55861-971b-404f-9926-4d41f07f0880-kube-api-access-ngkkc\") pod \"certified-operators-5g5n6\" (UID: \"59d55861-971b-404f-9926-4d41f07f0880\") " pod="openshift-marketplace/certified-operators-5g5n6" Nov 22 10:40:50 crc kubenswrapper[4772]: I1122 10:40:50.173766 4772 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-5znz6 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Nov 22 10:40:50 crc kubenswrapper[4772]: [+]log ok Nov 22 10:40:50 crc kubenswrapper[4772]: [+]etcd ok Nov 22 10:40:50 crc kubenswrapper[4772]: [+]etcd-readiness ok Nov 22 10:40:50 crc kubenswrapper[4772]: [+]poststarthook/start-apiserver-admission-initializer ok Nov 22 10:40:50 crc kubenswrapper[4772]: [-]informer-sync failed: reason withheld Nov 22 10:40:50 crc kubenswrapper[4772]: [+]poststarthook/generic-apiserver-start-informers ok Nov 22 10:40:50 crc kubenswrapper[4772]: [+]poststarthook/max-in-flight-filter ok Nov 22 10:40:50 crc kubenswrapper[4772]: [+]poststarthook/storage-object-count-tracker-hook ok Nov 22 10:40:50 crc kubenswrapper[4772]: [+]poststarthook/openshift.io-StartUserInformer ok Nov 22 10:40:50 crc kubenswrapper[4772]: [+]poststarthook/openshift.io-StartOAuthInformer ok Nov 22 10:40:50 crc kubenswrapper[4772]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Nov 22 10:40:50 crc kubenswrapper[4772]: [+]shutdown ok Nov 22 10:40:50 crc kubenswrapper[4772]: readyz check failed Nov 22 10:40:50 crc kubenswrapper[4772]: I1122 10:40:50.173830 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5znz6" podUID="e5a02044-ee67-480e-9cc9-22cf07bc9388" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 10:40:50 crc kubenswrapper[4772]: I1122 10:40:50.220667 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:50 crc kubenswrapper[4772]: I1122 10:40:50.220959 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c6c25bbc-1eb4-4d98-bb37-37a5871aa4f2-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c6c25bbc-1eb4-4d98-bb37-37a5871aa4f2\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 22 10:40:50 crc kubenswrapper[4772]: I1122 10:40:50.221036 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xc6gd\" (UniqueName: \"kubernetes.io/projected/1c4be727-3b88-4540-a5c9-9c6d19978537-kube-api-access-xc6gd\") pod \"community-operators-m47st\" (UID: \"1c4be727-3b88-4540-a5c9-9c6d19978537\") " pod="openshift-marketplace/community-operators-m47st" Nov 22 10:40:50 crc kubenswrapper[4772]: I1122 10:40:50.221153 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/c6c25bbc-1eb4-4d98-bb37-37a5871aa4f2-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c6c25bbc-1eb4-4d98-bb37-37a5871aa4f2\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 22 10:40:50 crc kubenswrapper[4772]: I1122 10:40:50.221190 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c4be727-3b88-4540-a5c9-9c6d19978537-utilities\") pod \"community-operators-m47st\" (UID: \"1c4be727-3b88-4540-a5c9-9c6d19978537\") " pod="openshift-marketplace/community-operators-m47st" Nov 22 10:40:50 crc kubenswrapper[4772]: I1122 10:40:50.221212 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c4be727-3b88-4540-a5c9-9c6d19978537-catalog-content\") pod \"community-operators-m47st\" (UID: \"1c4be727-3b88-4540-a5c9-9c6d19978537\") " pod="openshift-marketplace/community-operators-m47st" Nov 22 10:40:50 crc kubenswrapper[4772]: E1122 10:40:50.221330 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:50.721310356 +0000 UTC m=+170.960754850 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:50 crc kubenswrapper[4772]: I1122 10:40:50.221381 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c6c25bbc-1eb4-4d98-bb37-37a5871aa4f2-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c6c25bbc-1eb4-4d98-bb37-37a5871aa4f2\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 22 10:40:50 crc kubenswrapper[4772]: I1122 10:40:50.244938 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-n6b26"] Nov 22 10:40:50 crc kubenswrapper[4772]: I1122 10:40:50.250895 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c6c25bbc-1eb4-4d98-bb37-37a5871aa4f2-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c6c25bbc-1eb4-4d98-bb37-37a5871aa4f2\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 22 10:40:50 crc kubenswrapper[4772]: I1122 10:40:50.252470 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-n6b26" Nov 22 10:40:50 crc kubenswrapper[4772]: I1122 10:40:50.256514 4772 patch_prober.go:28] interesting pod/router-default-5444994796-wc24s container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 22 10:40:50 crc kubenswrapper[4772]: [-]has-synced failed: reason withheld Nov 22 10:40:50 crc kubenswrapper[4772]: [+]process-running ok Nov 22 10:40:50 crc kubenswrapper[4772]: healthz check failed Nov 22 10:40:50 crc kubenswrapper[4772]: I1122 10:40:50.256583 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wc24s" podUID="8787d05b-35f8-4862-9b4f-53e18d3b56ef" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 10:40:50 crc kubenswrapper[4772]: I1122 10:40:50.256743 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5g5n6" Nov 22 10:40:50 crc kubenswrapper[4772]: I1122 10:40:50.265174 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-n6b26"] Nov 22 10:40:50 crc kubenswrapper[4772]: I1122 10:40:50.303372 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-59qml" Nov 22 10:40:50 crc kubenswrapper[4772]: I1122 10:40:50.322379 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xc6gd\" (UniqueName: \"kubernetes.io/projected/1c4be727-3b88-4540-a5c9-9c6d19978537-kube-api-access-xc6gd\") pod \"community-operators-m47st\" (UID: \"1c4be727-3b88-4540-a5c9-9c6d19978537\") " pod="openshift-marketplace/community-operators-m47st" Nov 22 10:40:50 crc kubenswrapper[4772]: I1122 10:40:50.322457 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a99b0a8a-5c55-480c-8a9c-274f995658cb-utilities\") pod \"certified-operators-n6b26\" (UID: \"a99b0a8a-5c55-480c-8a9c-274f995658cb\") " pod="openshift-marketplace/certified-operators-n6b26" Nov 22 10:40:50 crc kubenswrapper[4772]: I1122 10:40:50.322511 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c4be727-3b88-4540-a5c9-9c6d19978537-utilities\") pod \"community-operators-m47st\" (UID: \"1c4be727-3b88-4540-a5c9-9c6d19978537\") " pod="openshift-marketplace/community-operators-m47st" Nov 22 10:40:50 crc kubenswrapper[4772]: I1122 10:40:50.322535 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c4be727-3b88-4540-a5c9-9c6d19978537-catalog-content\") pod \"community-operators-m47st\" (UID: \"1c4be727-3b88-4540-a5c9-9c6d19978537\") " pod="openshift-marketplace/community-operators-m47st" Nov 22 10:40:50 crc kubenswrapper[4772]: I1122 10:40:50.322554 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a99b0a8a-5c55-480c-8a9c-274f995658cb-catalog-content\") pod \"certified-operators-n6b26\" (UID: \"a99b0a8a-5c55-480c-8a9c-274f995658cb\") " pod="openshift-marketplace/certified-operators-n6b26" Nov 22 10:40:50 crc kubenswrapper[4772]: I1122 10:40:50.322572 4772 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntj9v\" (UniqueName: \"kubernetes.io/projected/a99b0a8a-5c55-480c-8a9c-274f995658cb-kube-api-access-ntj9v\") pod \"certified-operators-n6b26\" (UID: \"a99b0a8a-5c55-480c-8a9c-274f995658cb\") " pod="openshift-marketplace/certified-operators-n6b26" Nov 22 10:40:50 crc kubenswrapper[4772]: I1122 10:40:50.322607 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:50 crc kubenswrapper[4772]: E1122 10:40:50.323014 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:50.822995356 +0000 UTC m=+171.062439850 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:50 crc kubenswrapper[4772]: I1122 10:40:50.326447 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c4be727-3b88-4540-a5c9-9c6d19978537-catalog-content\") pod \"community-operators-m47st\" (UID: \"1c4be727-3b88-4540-a5c9-9c6d19978537\") " pod="openshift-marketplace/community-operators-m47st" Nov 22 10:40:50 crc kubenswrapper[4772]: I1122 10:40:50.329561 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c4be727-3b88-4540-a5c9-9c6d19978537-utilities\") pod \"community-operators-m47st\" (UID: \"1c4be727-3b88-4540-a5c9-9c6d19978537\") " pod="openshift-marketplace/community-operators-m47st" Nov 22 10:40:50 crc kubenswrapper[4772]: I1122 10:40:50.372078 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xc6gd\" (UniqueName: \"kubernetes.io/projected/1c4be727-3b88-4540-a5c9-9c6d19978537-kube-api-access-xc6gd\") pod \"community-operators-m47st\" (UID: \"1c4be727-3b88-4540-a5c9-9c6d19978537\") " pod="openshift-marketplace/community-operators-m47st" Nov 22 10:40:50 crc kubenswrapper[4772]: I1122 10:40:50.391652 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 22 10:40:50 crc kubenswrapper[4772]: I1122 10:40:50.424699 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:50 crc kubenswrapper[4772]: I1122 10:40:50.425092 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a99b0a8a-5c55-480c-8a9c-274f995658cb-catalog-content\") pod \"certified-operators-n6b26\" (UID: \"a99b0a8a-5c55-480c-8a9c-274f995658cb\") " pod="openshift-marketplace/certified-operators-n6b26" Nov 22 10:40:50 crc kubenswrapper[4772]: I1122 10:40:50.425122 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntj9v\" (UniqueName: \"kubernetes.io/projected/a99b0a8a-5c55-480c-8a9c-274f995658cb-kube-api-access-ntj9v\") pod \"certified-operators-n6b26\" (UID: \"a99b0a8a-5c55-480c-8a9c-274f995658cb\") " pod="openshift-marketplace/certified-operators-n6b26" Nov 22 10:40:50 crc kubenswrapper[4772]: I1122 10:40:50.425209 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a99b0a8a-5c55-480c-8a9c-274f995658cb-utilities\") pod \"certified-operators-n6b26\" (UID: \"a99b0a8a-5c55-480c-8a9c-274f995658cb\") " pod="openshift-marketplace/certified-operators-n6b26" Nov 22 10:40:50 crc kubenswrapper[4772]: I1122 10:40:50.425710 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a99b0a8a-5c55-480c-8a9c-274f995658cb-utilities\") pod \"certified-operators-n6b26\" (UID: \"a99b0a8a-5c55-480c-8a9c-274f995658cb\") " pod="openshift-marketplace/certified-operators-n6b26" Nov 22 10:40:50 crc kubenswrapper[4772]: E1122 10:40:50.425795 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:50.925776174 +0000 UTC m=+171.165220668 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:50 crc kubenswrapper[4772]: I1122 10:40:50.426015 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a99b0a8a-5c55-480c-8a9c-274f995658cb-catalog-content\") pod \"certified-operators-n6b26\" (UID: \"a99b0a8a-5c55-480c-8a9c-274f995658cb\") " pod="openshift-marketplace/certified-operators-n6b26" Nov 22 10:40:50 crc kubenswrapper[4772]: I1122 10:40:50.439447 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-m47st" Nov 22 10:40:50 crc kubenswrapper[4772]: I1122 10:40:50.455336 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntj9v\" (UniqueName: \"kubernetes.io/projected/a99b0a8a-5c55-480c-8a9c-274f995658cb-kube-api-access-ntj9v\") pod \"certified-operators-n6b26\" (UID: \"a99b0a8a-5c55-480c-8a9c-274f995658cb\") " pod="openshift-marketplace/certified-operators-n6b26" Nov 22 10:40:50 crc kubenswrapper[4772]: I1122 10:40:50.526921 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:50 crc kubenswrapper[4772]: E1122 10:40:50.527439 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:51.027422623 +0000 UTC m=+171.266867117 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:50 crc kubenswrapper[4772]: I1122 10:40:50.579588 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n6b26" Nov 22 10:40:50 crc kubenswrapper[4772]: I1122 10:40:50.605641 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 22 10:40:50 crc kubenswrapper[4772]: I1122 10:40:50.629442 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:50 crc kubenswrapper[4772]: E1122 10:40:50.629858 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:51.129837732 +0000 UTC m=+171.369282226 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:50 crc kubenswrapper[4772]: W1122 10:40:50.640884 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod9e7566ec_f873_4243_94e6_fe7c38cc07ae.slice/crio-1f536a6a8af3c579c7c5353d272f1e2cac1ce21dfb95228e103a59cd3673821a WatchSource:0}: Error finding container 1f536a6a8af3c579c7c5353d272f1e2cac1ce21dfb95228e103a59cd3673821a: Status 404 returned error can't find the container with id 1f536a6a8af3c579c7c5353d272f1e2cac1ce21dfb95228e103a59cd3673821a Nov 22 10:40:50 crc kubenswrapper[4772]: I1122 10:40:50.731830 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:50 crc kubenswrapper[4772]: E1122 10:40:50.732547 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:51.232517198 +0000 UTC m=+171.471961702 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:50 crc kubenswrapper[4772]: I1122 10:40:50.814105 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5g5n6"] Nov 22 10:40:50 crc kubenswrapper[4772]: I1122 10:40:50.831184 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"9e7566ec-f873-4243-94e6-fe7c38cc07ae","Type":"ContainerStarted","Data":"1f536a6a8af3c579c7c5353d272f1e2cac1ce21dfb95228e103a59cd3673821a"} Nov 22 10:40:50 crc kubenswrapper[4772]: I1122 10:40:50.833692 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:50 crc kubenswrapper[4772]: E1122 10:40:50.834199 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:51.334178777 +0000 UTC m=+171.573623271 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:50 crc kubenswrapper[4772]: I1122 10:40:50.870222 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 22 10:40:50 crc kubenswrapper[4772]: I1122 10:40:50.937852 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:50 crc kubenswrapper[4772]: E1122 10:40:50.944925 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:51.444902125 +0000 UTC m=+171.684346609 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:50 crc kubenswrapper[4772]: I1122 10:40:50.951489 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-59qml"] Nov 22 10:40:51 crc kubenswrapper[4772]: I1122 10:40:51.039575 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:51 crc kubenswrapper[4772]: E1122 10:40:51.041036 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:51.540993908 +0000 UTC m=+171.780438422 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:51 crc kubenswrapper[4772]: I1122 10:40:51.112208 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m47st"] Nov 22 10:40:51 crc kubenswrapper[4772]: I1122 10:40:51.145912 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:51 crc kubenswrapper[4772]: E1122 10:40:51.146486 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:51.646466657 +0000 UTC m=+171.885911151 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:51 crc kubenswrapper[4772]: I1122 10:40:51.247468 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:51 crc kubenswrapper[4772]: E1122 10:40:51.248155 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:51.748131546 +0000 UTC m=+171.987576040 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:51 crc kubenswrapper[4772]: I1122 10:40:51.279357 4772 patch_prober.go:28] interesting pod/router-default-5444994796-wc24s container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 22 10:40:51 crc kubenswrapper[4772]: [-]has-synced failed: reason withheld Nov 22 10:40:51 crc kubenswrapper[4772]: [+]process-running ok Nov 22 10:40:51 crc kubenswrapper[4772]: healthz check failed Nov 22 10:40:51 crc kubenswrapper[4772]: I1122 10:40:51.279422 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wc24s" podUID="8787d05b-35f8-4862-9b4f-53e18d3b56ef" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 10:40:51 crc kubenswrapper[4772]: I1122 10:40:51.349730 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:51 crc kubenswrapper[4772]: E1122 10:40:51.350262 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:51.850245067 +0000 UTC m=+172.089689561 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:51 crc kubenswrapper[4772]: I1122 10:40:51.365655 4772 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Nov 22 10:40:51 crc kubenswrapper[4772]: I1122 10:40:51.376229 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396790-lvhxw" Nov 22 10:40:51 crc kubenswrapper[4772]: I1122 10:40:51.435406 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-n6b26"] Nov 22 10:40:51 crc kubenswrapper[4772]: I1122 10:40:51.451528 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:51 crc kubenswrapper[4772]: E1122 10:40:51.451723 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:51.951691881 +0000 UTC m=+172.191136375 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:51 crc kubenswrapper[4772]: I1122 10:40:51.452003 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:51 crc kubenswrapper[4772]: E1122 10:40:51.452497 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:51.952478652 +0000 UTC m=+172.191923146 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:51 crc kubenswrapper[4772]: I1122 10:40:51.554977 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tcml\" (UniqueName: \"kubernetes.io/projected/df5917cd-29a5-4e07-b030-24456d6b0da6-kube-api-access-7tcml\") pod \"df5917cd-29a5-4e07-b030-24456d6b0da6\" (UID: \"df5917cd-29a5-4e07-b030-24456d6b0da6\") " Nov 22 10:40:51 crc kubenswrapper[4772]: I1122 10:40:51.555126 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/df5917cd-29a5-4e07-b030-24456d6b0da6-secret-volume\") pod \"df5917cd-29a5-4e07-b030-24456d6b0da6\" (UID: \"df5917cd-29a5-4e07-b030-24456d6b0da6\") " Nov 22 10:40:51 crc kubenswrapper[4772]: I1122 10:40:51.555280 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:51 crc kubenswrapper[4772]: I1122 10:40:51.555340 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df5917cd-29a5-4e07-b030-24456d6b0da6-config-volume\") pod \"df5917cd-29a5-4e07-b030-24456d6b0da6\" (UID: \"df5917cd-29a5-4e07-b030-24456d6b0da6\") " Nov 22 10:40:51 crc kubenswrapper[4772]: E1122 10:40:51.555693 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:52.055658651 +0000 UTC m=+172.295103335 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:51 crc kubenswrapper[4772]: I1122 10:40:51.556154 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df5917cd-29a5-4e07-b030-24456d6b0da6-config-volume" (OuterVolumeSpecName: "config-volume") pod "df5917cd-29a5-4e07-b030-24456d6b0da6" (UID: "df5917cd-29a5-4e07-b030-24456d6b0da6"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:40:51 crc kubenswrapper[4772]: I1122 10:40:51.564416 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df5917cd-29a5-4e07-b030-24456d6b0da6-kube-api-access-7tcml" (OuterVolumeSpecName: "kube-api-access-7tcml") pod "df5917cd-29a5-4e07-b030-24456d6b0da6" (UID: "df5917cd-29a5-4e07-b030-24456d6b0da6"). InnerVolumeSpecName "kube-api-access-7tcml". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:40:51 crc kubenswrapper[4772]: I1122 10:40:51.566078 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df5917cd-29a5-4e07-b030-24456d6b0da6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "df5917cd-29a5-4e07-b030-24456d6b0da6" (UID: "df5917cd-29a5-4e07-b030-24456d6b0da6"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:40:51 crc kubenswrapper[4772]: I1122 10:40:51.660374 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:51 crc kubenswrapper[4772]: I1122 10:40:51.660452 4772 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df5917cd-29a5-4e07-b030-24456d6b0da6-config-volume\") on node \"crc\" DevicePath \"\"" Nov 22 10:40:51 crc kubenswrapper[4772]: I1122 10:40:51.660472 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7tcml\" (UniqueName: \"kubernetes.io/projected/df5917cd-29a5-4e07-b030-24456d6b0da6-kube-api-access-7tcml\") on node \"crc\" DevicePath \"\"" Nov 22 10:40:51 crc kubenswrapper[4772]: I1122 10:40:51.660487 4772 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/df5917cd-29a5-4e07-b030-24456d6b0da6-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 22 10:40:51 crc kubenswrapper[4772]: E1122 10:40:51.660805 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:52.160788981 +0000 UTC m=+172.400233475 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:51 crc kubenswrapper[4772]: I1122 10:40:51.762773 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:51 crc kubenswrapper[4772]: E1122 10:40:51.763072 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:52.263008785 +0000 UTC m=+172.502453269 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:51 crc kubenswrapper[4772]: I1122 10:40:51.763757 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:51 crc kubenswrapper[4772]: E1122 10:40:51.764373 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:52.26434841 +0000 UTC m=+172.503792904 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:51 crc kubenswrapper[4772]: I1122 10:40:51.837120 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xdkbm"] Nov 22 10:40:51 crc kubenswrapper[4772]: E1122 10:40:51.837460 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df5917cd-29a5-4e07-b030-24456d6b0da6" containerName="collect-profiles" Nov 22 10:40:51 crc kubenswrapper[4772]: I1122 10:40:51.837483 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="df5917cd-29a5-4e07-b030-24456d6b0da6" containerName="collect-profiles" Nov 22 10:40:51 crc kubenswrapper[4772]: I1122 10:40:51.837617 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="df5917cd-29a5-4e07-b030-24456d6b0da6" containerName="collect-profiles" Nov 22 10:40:51 crc kubenswrapper[4772]: I1122 10:40:51.838651 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xdkbm" Nov 22 10:40:51 crc kubenswrapper[4772]: I1122 10:40:51.840324 4772 generic.go:334] "Generic (PLEG): container finished" podID="64157fec-2673-403c-96c5-2cbaf3ca17a2" containerID="b571c7a729c692ff0eb7ffbca2900a511d414d4b1aaca7b74308ccdb167978a7" exitCode=0 Nov 22 10:40:51 crc kubenswrapper[4772]: I1122 10:40:51.840398 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-59qml" event={"ID":"64157fec-2673-403c-96c5-2cbaf3ca17a2","Type":"ContainerDied","Data":"b571c7a729c692ff0eb7ffbca2900a511d414d4b1aaca7b74308ccdb167978a7"} Nov 22 10:40:51 crc kubenswrapper[4772]: I1122 10:40:51.840424 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-59qml" event={"ID":"64157fec-2673-403c-96c5-2cbaf3ca17a2","Type":"ContainerStarted","Data":"53172f132b73a25d4f7430840fc18df3ae77ccdd334f4860336adda997f5a507"} Nov 22 10:40:51 crc kubenswrapper[4772]: I1122 10:40:51.842425 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 22 10:40:51 crc kubenswrapper[4772]: I1122 10:40:51.842864 4772 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 10:40:51 crc kubenswrapper[4772]: I1122 10:40:51.843316 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396790-lvhxw" event={"ID":"df5917cd-29a5-4e07-b030-24456d6b0da6","Type":"ContainerDied","Data":"7cb39224eb20363b148a288c781ab9b763227362fa61969c836e70eb6bf0e4c1"} Nov 22 10:40:51 crc kubenswrapper[4772]: I1122 10:40:51.843345 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396790-lvhxw" Nov 22 10:40:51 crc kubenswrapper[4772]: I1122 10:40:51.843349 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7cb39224eb20363b148a288c781ab9b763227362fa61969c836e70eb6bf0e4c1" Nov 22 10:40:51 crc kubenswrapper[4772]: I1122 10:40:51.851031 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-jzjnj" event={"ID":"17ee7fc7-887d-4bf4-b408-d1a723605bdc","Type":"ContainerStarted","Data":"cf5c3a4bb0ea8b2aec31825495b975547bd30d3a3249e279b48377cdf57f6d99"} Nov 22 10:40:51 crc kubenswrapper[4772]: I1122 10:40:51.853755 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"9e7566ec-f873-4243-94e6-fe7c38cc07ae","Type":"ContainerStarted","Data":"7c7c35a366938daf7ccd658a210ceff6283b16444441cf7cd76288fe6ff84892"} Nov 22 10:40:51 crc kubenswrapper[4772]: I1122 10:40:51.855435 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xdkbm"] Nov 22 10:40:51 crc kubenswrapper[4772]: I1122 10:40:51.857315 4772 generic.go:334] "Generic (PLEG): container finished" podID="1c4be727-3b88-4540-a5c9-9c6d19978537" containerID="0796d9ea1c6d215ac173fe52d71dde2a5ae6b773618d9488d5cd8349568dedaa" exitCode=0 Nov 22 10:40:51 crc kubenswrapper[4772]: I1122 10:40:51.857492 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m47st" event={"ID":"1c4be727-3b88-4540-a5c9-9c6d19978537","Type":"ContainerDied","Data":"0796d9ea1c6d215ac173fe52d71dde2a5ae6b773618d9488d5cd8349568dedaa"} Nov 22 10:40:51 crc kubenswrapper[4772]: I1122 10:40:51.857532 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m47st" event={"ID":"1c4be727-3b88-4540-a5c9-9c6d19978537","Type":"ContainerStarted","Data":"4ef65a561fee431e2c2537479fbc7f66131ced3191e492c61b8c5f2515b8815e"} Nov 22 10:40:51 crc kubenswrapper[4772]: I1122 10:40:51.861272 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"c6c25bbc-1eb4-4d98-bb37-37a5871aa4f2","Type":"ContainerStarted","Data":"adf0374fee84a1808d15c69f2e735fec285fe799025b87b48be013d3b40ee9fd"} Nov 22 10:40:51 crc kubenswrapper[4772]: I1122 10:40:51.861310 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"c6c25bbc-1eb4-4d98-bb37-37a5871aa4f2","Type":"ContainerStarted","Data":"c0649ac27603ee8b26b42caf87116c64e5079da7aa50045a71ff96b82b8dcc2a"} Nov 22 10:40:51 crc kubenswrapper[4772]: I1122 10:40:51.864480 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:51 crc kubenswrapper[4772]: E1122 10:40:51.864976 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:52.364945871 +0000 UTC m=+172.604390355 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:51 crc kubenswrapper[4772]: I1122 10:40:51.865308 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:51 crc kubenswrapper[4772]: E1122 10:40:51.865853 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:52.365834005 +0000 UTC m=+172.605278499 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:51 crc kubenswrapper[4772]: I1122 10:40:51.868290 4772 generic.go:334] "Generic (PLEG): container finished" podID="59d55861-971b-404f-9926-4d41f07f0880" containerID="a68004fa3bec83ed4795eff2db138d4f271a8fb34142a18626a7282048258f1f" exitCode=0 Nov 22 10:40:51 crc kubenswrapper[4772]: I1122 10:40:51.868429 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5g5n6" event={"ID":"59d55861-971b-404f-9926-4d41f07f0880","Type":"ContainerDied","Data":"a68004fa3bec83ed4795eff2db138d4f271a8fb34142a18626a7282048258f1f"} Nov 22 10:40:51 crc kubenswrapper[4772]: I1122 10:40:51.868491 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5g5n6" event={"ID":"59d55861-971b-404f-9926-4d41f07f0880","Type":"ContainerStarted","Data":"cb8e89aa94ea0510aff5224ae0ceacf7c24dc1c5889fd40fd7bd1f2b3ba7d660"} Nov 22 10:40:51 crc kubenswrapper[4772]: I1122 10:40:51.878527 4772 generic.go:334] "Generic (PLEG): container finished" podID="a99b0a8a-5c55-480c-8a9c-274f995658cb" containerID="f1e124f295076bc82c67d7c9d255f62eb37f8ddd9fe864a65889afa0cc649a9a" exitCode=0 Nov 22 10:40:51 crc kubenswrapper[4772]: I1122 10:40:51.878616 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n6b26" event={"ID":"a99b0a8a-5c55-480c-8a9c-274f995658cb","Type":"ContainerDied","Data":"f1e124f295076bc82c67d7c9d255f62eb37f8ddd9fe864a65889afa0cc649a9a"} Nov 22 10:40:51 crc kubenswrapper[4772]: I1122 10:40:51.878671 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n6b26" event={"ID":"a99b0a8a-5c55-480c-8a9c-274f995658cb","Type":"ContainerStarted","Data":"47d8c153c7981f2cb23946153944489389715d087541a9fb9f1dbc63a0db6460"} Nov 22 10:40:51 crc 
kubenswrapper[4772]: I1122 10:40:51.951413 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.951385701 podStartE2EDuration="2.951385701s" podCreationTimestamp="2025-11-22 10:40:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:40:51.947776886 +0000 UTC m=+172.187221380" watchObservedRunningTime="2025-11-22 10:40:51.951385701 +0000 UTC m=+172.190830195" Nov 22 10:40:51 crc kubenswrapper[4772]: I1122 10:40:51.969181 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:51 crc kubenswrapper[4772]: I1122 10:40:51.969674 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a35d30b7-cf6e-4d08-add4-2c9b27342e5d-catalog-content\") pod \"redhat-marketplace-xdkbm\" (UID: \"a35d30b7-cf6e-4d08-add4-2c9b27342e5d\") " pod="openshift-marketplace/redhat-marketplace-xdkbm" Nov 22 10:40:51 crc kubenswrapper[4772]: I1122 10:40:51.969831 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krpz6\" (UniqueName: \"kubernetes.io/projected/a35d30b7-cf6e-4d08-add4-2c9b27342e5d-kube-api-access-krpz6\") pod \"redhat-marketplace-xdkbm\" (UID: \"a35d30b7-cf6e-4d08-add4-2c9b27342e5d\") " pod="openshift-marketplace/redhat-marketplace-xdkbm" Nov 22 10:40:51 crc kubenswrapper[4772]: I1122 10:40:51.969882 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a35d30b7-cf6e-4d08-add4-2c9b27342e5d-utilities\") pod \"redhat-marketplace-xdkbm\" (UID: \"a35d30b7-cf6e-4d08-add4-2c9b27342e5d\") " pod="openshift-marketplace/redhat-marketplace-xdkbm" Nov 22 10:40:51 crc kubenswrapper[4772]: E1122 10:40:51.972386 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:52.472171987 +0000 UTC m=+172.711616481 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:52 crc kubenswrapper[4772]: I1122 10:40:52.021849 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=3.021781289 podStartE2EDuration="3.021781289s" podCreationTimestamp="2025-11-22 10:40:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:40:52.019003646 +0000 UTC m=+172.258448150" watchObservedRunningTime="2025-11-22 10:40:52.021781289 +0000 UTC m=+172.261225793" Nov 22 10:40:52 crc kubenswrapper[4772]: I1122 10:40:52.073806 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krpz6\" (UniqueName: \"kubernetes.io/projected/a35d30b7-cf6e-4d08-add4-2c9b27342e5d-kube-api-access-krpz6\") pod \"redhat-marketplace-xdkbm\" (UID: \"a35d30b7-cf6e-4d08-add4-2c9b27342e5d\") " pod="openshift-marketplace/redhat-marketplace-xdkbm" Nov 22 10:40:52 crc kubenswrapper[4772]: I1122 10:40:52.073875 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a35d30b7-cf6e-4d08-add4-2c9b27342e5d-utilities\") pod \"redhat-marketplace-xdkbm\" (UID: \"a35d30b7-cf6e-4d08-add4-2c9b27342e5d\") " pod="openshift-marketplace/redhat-marketplace-xdkbm" Nov 22 10:40:52 crc kubenswrapper[4772]: I1122 10:40:52.073939 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a35d30b7-cf6e-4d08-add4-2c9b27342e5d-catalog-content\") pod \"redhat-marketplace-xdkbm\" (UID: \"a35d30b7-cf6e-4d08-add4-2c9b27342e5d\") " pod="openshift-marketplace/redhat-marketplace-xdkbm" Nov 22 10:40:52 crc kubenswrapper[4772]: I1122 10:40:52.073976 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:52 crc kubenswrapper[4772]: E1122 10:40:52.074411 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:52.574392571 +0000 UTC m=+172.813837065 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:52 crc kubenswrapper[4772]: I1122 10:40:52.074611 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a35d30b7-cf6e-4d08-add4-2c9b27342e5d-utilities\") pod \"redhat-marketplace-xdkbm\" (UID: \"a35d30b7-cf6e-4d08-add4-2c9b27342e5d\") " pod="openshift-marketplace/redhat-marketplace-xdkbm" Nov 22 10:40:52 crc kubenswrapper[4772]: I1122 10:40:52.074719 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a35d30b7-cf6e-4d08-add4-2c9b27342e5d-catalog-content\") pod \"redhat-marketplace-xdkbm\" (UID: \"a35d30b7-cf6e-4d08-add4-2c9b27342e5d\") " pod="openshift-marketplace/redhat-marketplace-xdkbm" Nov 22 10:40:52 crc kubenswrapper[4772]: I1122 10:40:52.101680 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krpz6\" (UniqueName: \"kubernetes.io/projected/a35d30b7-cf6e-4d08-add4-2c9b27342e5d-kube-api-access-krpz6\") pod \"redhat-marketplace-xdkbm\" (UID: \"a35d30b7-cf6e-4d08-add4-2c9b27342e5d\") " pod="openshift-marketplace/redhat-marketplace-xdkbm" Nov 22 10:40:52 crc kubenswrapper[4772]: I1122 10:40:52.161763 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xdkbm" Nov 22 10:40:52 crc kubenswrapper[4772]: E1122 10:40:52.175151 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 10:40:52.675124886 +0000 UTC m=+172.914569380 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:52 crc kubenswrapper[4772]: I1122 10:40:52.175201 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:52 crc kubenswrapper[4772]: I1122 10:40:52.175472 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:52 crc kubenswrapper[4772]: E1122 10:40:52.175793 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 10:40:52.675784163 +0000 UTC m=+172.915228657 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hlvq7" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 10:40:52 crc kubenswrapper[4772]: I1122 10:40:52.212455 4772 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-11-22T10:40:51.365698553Z","Handler":null,"Name":""} Nov 22 10:40:52 crc kubenswrapper[4772]: I1122 10:40:52.232423 4772 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Nov 22 10:40:52 crc kubenswrapper[4772]: I1122 10:40:52.232488 4772 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Nov 22 10:40:52 crc kubenswrapper[4772]: I1122 10:40:52.238755 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-stqj9"] Nov 22 10:40:52 crc kubenswrapper[4772]: I1122 10:40:52.240028 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-stqj9" Nov 22 10:40:52 crc kubenswrapper[4772]: I1122 10:40:52.248178 4772 patch_prober.go:28] interesting pod/router-default-5444994796-wc24s container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 22 10:40:52 crc kubenswrapper[4772]: [-]has-synced failed: reason withheld Nov 22 10:40:52 crc kubenswrapper[4772]: [+]process-running ok Nov 22 10:40:52 crc kubenswrapper[4772]: healthz check failed Nov 22 10:40:52 crc kubenswrapper[4772]: I1122 10:40:52.248276 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wc24s" podUID="8787d05b-35f8-4862-9b4f-53e18d3b56ef" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 10:40:52 crc kubenswrapper[4772]: I1122 10:40:52.260928 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-stqj9"] Nov 22 10:40:52 crc kubenswrapper[4772]: I1122 10:40:52.276783 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 10:40:52 crc kubenswrapper[4772]: I1122 10:40:52.285926 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 22 10:40:52 crc kubenswrapper[4772]: I1122 10:40:52.391314 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41028362-8aac-4ba3-a617-c7b65e42a368-utilities\") pod \"redhat-marketplace-stqj9\" (UID: \"41028362-8aac-4ba3-a617-c7b65e42a368\") " pod="openshift-marketplace/redhat-marketplace-stqj9" Nov 22 10:40:52 crc kubenswrapper[4772]: I1122 10:40:52.391401 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:52 crc kubenswrapper[4772]: I1122 10:40:52.391446 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41028362-8aac-4ba3-a617-c7b65e42a368-catalog-content\") pod \"redhat-marketplace-stqj9\" (UID: \"41028362-8aac-4ba3-a617-c7b65e42a368\") " pod="openshift-marketplace/redhat-marketplace-stqj9" Nov 22 10:40:52 crc kubenswrapper[4772]: I1122 10:40:52.391494 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mqt4\" (UniqueName: \"kubernetes.io/projected/41028362-8aac-4ba3-a617-c7b65e42a368-kube-api-access-5mqt4\") pod \"redhat-marketplace-stqj9\" (UID: \"41028362-8aac-4ba3-a617-c7b65e42a368\") " pod="openshift-marketplace/redhat-marketplace-stqj9" Nov 22 10:40:52 crc kubenswrapper[4772]: I1122 10:40:52.400366 4772 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 22 10:40:52 crc kubenswrapper[4772]: I1122 10:40:52.400439 4772 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:52 crc kubenswrapper[4772]: I1122 10:40:52.455871 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hlvq7\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:52 crc kubenswrapper[4772]: I1122 10:40:52.492234 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41028362-8aac-4ba3-a617-c7b65e42a368-catalog-content\") pod \"redhat-marketplace-stqj9\" (UID: \"41028362-8aac-4ba3-a617-c7b65e42a368\") " pod="openshift-marketplace/redhat-marketplace-stqj9" Nov 22 10:40:52 crc kubenswrapper[4772]: I1122 10:40:52.492296 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mqt4\" (UniqueName: \"kubernetes.io/projected/41028362-8aac-4ba3-a617-c7b65e42a368-kube-api-access-5mqt4\") pod \"redhat-marketplace-stqj9\" (UID: \"41028362-8aac-4ba3-a617-c7b65e42a368\") " pod="openshift-marketplace/redhat-marketplace-stqj9" Nov 22 10:40:52 crc kubenswrapper[4772]: I1122 10:40:52.492367 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41028362-8aac-4ba3-a617-c7b65e42a368-utilities\") pod \"redhat-marketplace-stqj9\" (UID: \"41028362-8aac-4ba3-a617-c7b65e42a368\") " pod="openshift-marketplace/redhat-marketplace-stqj9" Nov 22 10:40:52 crc kubenswrapper[4772]: I1122 10:40:52.492810 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41028362-8aac-4ba3-a617-c7b65e42a368-catalog-content\") pod \"redhat-marketplace-stqj9\" (UID: \"41028362-8aac-4ba3-a617-c7b65e42a368\") " pod="openshift-marketplace/redhat-marketplace-stqj9" Nov 22 10:40:52 crc kubenswrapper[4772]: I1122 10:40:52.493155 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41028362-8aac-4ba3-a617-c7b65e42a368-utilities\") pod \"redhat-marketplace-stqj9\" (UID: \"41028362-8aac-4ba3-a617-c7b65e42a368\") " pod="openshift-marketplace/redhat-marketplace-stqj9" Nov 22 10:40:52 crc kubenswrapper[4772]: I1122 10:40:52.514085 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mqt4\" (UniqueName: \"kubernetes.io/projected/41028362-8aac-4ba3-a617-c7b65e42a368-kube-api-access-5mqt4\") pod \"redhat-marketplace-stqj9\" (UID: \"41028362-8aac-4ba3-a617-c7b65e42a368\") " pod="openshift-marketplace/redhat-marketplace-stqj9" Nov 22 10:40:52 crc kubenswrapper[4772]: I1122 10:40:52.517508 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-xdkbm"] Nov 22 10:40:52 crc kubenswrapper[4772]: I1122 10:40:52.558896 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-stqj9" Nov 22 10:40:52 crc kubenswrapper[4772]: I1122 10:40:52.594479 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c89edce7-fac8-4954-b2e9-420f0f2de6a8-metrics-certs\") pod \"network-metrics-daemon-fvsrl\" (UID: \"c89edce7-fac8-4954-b2e9-420f0f2de6a8\") " pod="openshift-multus/network-metrics-daemon-fvsrl" Nov 22 10:40:52 crc kubenswrapper[4772]: I1122 10:40:52.602199 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c89edce7-fac8-4954-b2e9-420f0f2de6a8-metrics-certs\") pod \"network-metrics-daemon-fvsrl\" (UID: \"c89edce7-fac8-4954-b2e9-420f0f2de6a8\") " pod="openshift-multus/network-metrics-daemon-fvsrl" Nov 22 10:40:52 crc kubenswrapper[4772]: I1122 10:40:52.606568 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:52 crc kubenswrapper[4772]: I1122 10:40:52.839901 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hq9vf"] Nov 22 10:40:52 crc kubenswrapper[4772]: I1122 10:40:52.841173 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hq9vf" Nov 22 10:40:52 crc kubenswrapper[4772]: I1122 10:40:52.850486 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 22 10:40:52 crc kubenswrapper[4772]: I1122 10:40:52.851753 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hq9vf"] Nov 22 10:40:52 crc kubenswrapper[4772]: I1122 10:40:52.859769 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-hlvq7"] Nov 22 10:40:52 crc kubenswrapper[4772]: W1122 10:40:52.875801 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod723118f2_f91b_4ca0_a6f9_4deaee014ef0.slice/crio-09fb9f4d414bfcc55de4f91a8c4cbd2df672d4db29dece59d1f0574a0b00933d WatchSource:0}: Error finding container 09fb9f4d414bfcc55de4f91a8c4cbd2df672d4db29dece59d1f0574a0b00933d: Status 404 returned error can't find the container with id 09fb9f4d414bfcc55de4f91a8c4cbd2df672d4db29dece59d1f0574a0b00933d Nov 22 10:40:52 crc kubenswrapper[4772]: I1122 10:40:52.892906 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-fvsrl" Nov 22 10:40:52 crc kubenswrapper[4772]: I1122 10:40:52.915448 4772 generic.go:334] "Generic (PLEG): container finished" podID="c6c25bbc-1eb4-4d98-bb37-37a5871aa4f2" containerID="adf0374fee84a1808d15c69f2e735fec285fe799025b87b48be013d3b40ee9fd" exitCode=0 Nov 22 10:40:52 crc kubenswrapper[4772]: I1122 10:40:52.915541 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"c6c25bbc-1eb4-4d98-bb37-37a5871aa4f2","Type":"ContainerDied","Data":"adf0374fee84a1808d15c69f2e735fec285fe799025b87b48be013d3b40ee9fd"} Nov 22 10:40:52 crc kubenswrapper[4772]: I1122 10:40:52.919869 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xdkbm" event={"ID":"a35d30b7-cf6e-4d08-add4-2c9b27342e5d","Type":"ContainerStarted","Data":"8f3c02f0ca05b0ae35a598dfd7604fe53b83d661507aa350801b0ca1cd0481bd"} Nov 22 10:40:52 crc kubenswrapper[4772]: I1122 10:40:52.928579 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" event={"ID":"723118f2-f91b-4ca0-a6f9-4deaee014ef0","Type":"ContainerStarted","Data":"09fb9f4d414bfcc55de4f91a8c4cbd2df672d4db29dece59d1f0574a0b00933d"} Nov 22 10:40:52 crc kubenswrapper[4772]: I1122 10:40:52.934037 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-jzjnj" event={"ID":"17ee7fc7-887d-4bf4-b408-d1a723605bdc","Type":"ContainerStarted","Data":"e601e001eff4d6206a2db55c9ddaf001a4be3721a8e8c52b9532116fa32e2ee3"} Nov 22 10:40:52 crc kubenswrapper[4772]: I1122 10:40:52.940769 4772 generic.go:334] "Generic (PLEG): container finished" podID="9e7566ec-f873-4243-94e6-fe7c38cc07ae" containerID="7c7c35a366938daf7ccd658a210ceff6283b16444441cf7cd76288fe6ff84892" exitCode=0 Nov 22 10:40:52 crc kubenswrapper[4772]: I1122 10:40:52.940834 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"9e7566ec-f873-4243-94e6-fe7c38cc07ae","Type":"ContainerDied","Data":"7c7c35a366938daf7ccd658a210ceff6283b16444441cf7cd76288fe6ff84892"} Nov 22 10:40:52 crc kubenswrapper[4772]: I1122 10:40:52.967981 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-jzjnj" podStartSLOduration=20.967940141 podStartE2EDuration="20.967940141s" podCreationTimestamp="2025-11-22 10:40:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:40:52.965674451 +0000 UTC m=+173.205118965" watchObservedRunningTime="2025-11-22 10:40:52.967940141 +0000 UTC m=+173.207384635" Nov 22 10:40:53 crc kubenswrapper[4772]: I1122 10:40:53.004447 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6f45ecf-368b-4e09-83d6-c0620de2c97e-catalog-content\") pod \"redhat-operators-hq9vf\" (UID: \"c6f45ecf-368b-4e09-83d6-c0620de2c97e\") " pod="openshift-marketplace/redhat-operators-hq9vf" Nov 22 10:40:53 crc kubenswrapper[4772]: I1122 10:40:53.004522 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6f45ecf-368b-4e09-83d6-c0620de2c97e-utilities\") pod \"redhat-operators-hq9vf\" (UID: \"c6f45ecf-368b-4e09-83d6-c0620de2c97e\") " 
pod="openshift-marketplace/redhat-operators-hq9vf" Nov 22 10:40:53 crc kubenswrapper[4772]: I1122 10:40:53.004558 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vpjx\" (UniqueName: \"kubernetes.io/projected/c6f45ecf-368b-4e09-83d6-c0620de2c97e-kube-api-access-6vpjx\") pod \"redhat-operators-hq9vf\" (UID: \"c6f45ecf-368b-4e09-83d6-c0620de2c97e\") " pod="openshift-marketplace/redhat-operators-hq9vf" Nov 22 10:40:53 crc kubenswrapper[4772]: I1122 10:40:53.030602 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-stqj9"] Nov 22 10:40:53 crc kubenswrapper[4772]: I1122 10:40:53.106538 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6f45ecf-368b-4e09-83d6-c0620de2c97e-catalog-content\") pod \"redhat-operators-hq9vf\" (UID: \"c6f45ecf-368b-4e09-83d6-c0620de2c97e\") " pod="openshift-marketplace/redhat-operators-hq9vf" Nov 22 10:40:53 crc kubenswrapper[4772]: I1122 10:40:53.106620 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6f45ecf-368b-4e09-83d6-c0620de2c97e-utilities\") pod \"redhat-operators-hq9vf\" (UID: \"c6f45ecf-368b-4e09-83d6-c0620de2c97e\") " pod="openshift-marketplace/redhat-operators-hq9vf" Nov 22 10:40:53 crc kubenswrapper[4772]: I1122 10:40:53.106654 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vpjx\" (UniqueName: \"kubernetes.io/projected/c6f45ecf-368b-4e09-83d6-c0620de2c97e-kube-api-access-6vpjx\") pod \"redhat-operators-hq9vf\" (UID: \"c6f45ecf-368b-4e09-83d6-c0620de2c97e\") " pod="openshift-marketplace/redhat-operators-hq9vf" Nov 22 10:40:53 crc kubenswrapper[4772]: I1122 10:40:53.107922 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6f45ecf-368b-4e09-83d6-c0620de2c97e-catalog-content\") pod \"redhat-operators-hq9vf\" (UID: \"c6f45ecf-368b-4e09-83d6-c0620de2c97e\") " pod="openshift-marketplace/redhat-operators-hq9vf" Nov 22 10:40:53 crc kubenswrapper[4772]: I1122 10:40:53.108279 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6f45ecf-368b-4e09-83d6-c0620de2c97e-utilities\") pod \"redhat-operators-hq9vf\" (UID: \"c6f45ecf-368b-4e09-83d6-c0620de2c97e\") " pod="openshift-marketplace/redhat-operators-hq9vf" Nov 22 10:40:53 crc kubenswrapper[4772]: I1122 10:40:53.128533 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vpjx\" (UniqueName: \"kubernetes.io/projected/c6f45ecf-368b-4e09-83d6-c0620de2c97e-kube-api-access-6vpjx\") pod \"redhat-operators-hq9vf\" (UID: \"c6f45ecf-368b-4e09-83d6-c0620de2c97e\") " pod="openshift-marketplace/redhat-operators-hq9vf" Nov 22 10:40:53 crc kubenswrapper[4772]: I1122 10:40:53.160102 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hq9vf" Nov 22 10:40:53 crc kubenswrapper[4772]: I1122 10:40:53.210083 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-fvsrl"] Nov 22 10:40:53 crc kubenswrapper[4772]: I1122 10:40:53.246192 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-r9d42"] Nov 22 10:40:53 crc kubenswrapper[4772]: I1122 10:40:53.249108 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r9d42" Nov 22 10:40:53 crc kubenswrapper[4772]: I1122 10:40:53.249973 4772 patch_prober.go:28] interesting pod/router-default-5444994796-wc24s container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 22 10:40:53 crc kubenswrapper[4772]: [-]has-synced failed: reason withheld Nov 22 10:40:53 crc kubenswrapper[4772]: [+]process-running ok Nov 22 10:40:53 crc kubenswrapper[4772]: healthz check failed Nov 22 10:40:53 crc kubenswrapper[4772]: I1122 10:40:53.250013 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wc24s" podUID="8787d05b-35f8-4862-9b4f-53e18d3b56ef" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 10:40:53 crc kubenswrapper[4772]: I1122 10:40:53.252619 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r9d42"] Nov 22 10:40:53 crc kubenswrapper[4772]: I1122 10:40:53.379881 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hq9vf"] Nov 22 10:40:53 crc kubenswrapper[4772]: W1122 10:40:53.390848 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc6f45ecf_368b_4e09_83d6_c0620de2c97e.slice/crio-8b2e85daa8368470d0a4a3445ae05f2084c2239362292664223a41d8e4adcb72 WatchSource:0}: Error finding container 8b2e85daa8368470d0a4a3445ae05f2084c2239362292664223a41d8e4adcb72: Status 404 returned error can't find the container with id 8b2e85daa8368470d0a4a3445ae05f2084c2239362292664223a41d8e4adcb72 Nov 22 10:40:53 crc kubenswrapper[4772]: I1122 10:40:53.414582 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tprwl\" (UniqueName: \"kubernetes.io/projected/dd2e49a9-b8b7-47f6-a489-1d5ac70105b9-kube-api-access-tprwl\") pod \"redhat-operators-r9d42\" (UID: \"dd2e49a9-b8b7-47f6-a489-1d5ac70105b9\") " pod="openshift-marketplace/redhat-operators-r9d42" Nov 22 10:40:53 crc kubenswrapper[4772]: I1122 10:40:53.414654 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd2e49a9-b8b7-47f6-a489-1d5ac70105b9-catalog-content\") pod \"redhat-operators-r9d42\" (UID: \"dd2e49a9-b8b7-47f6-a489-1d5ac70105b9\") " pod="openshift-marketplace/redhat-operators-r9d42" Nov 22 10:40:53 crc kubenswrapper[4772]: I1122 10:40:53.414740 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd2e49a9-b8b7-47f6-a489-1d5ac70105b9-utilities\") pod \"redhat-operators-r9d42\" (UID: \"dd2e49a9-b8b7-47f6-a489-1d5ac70105b9\") " pod="openshift-marketplace/redhat-operators-r9d42" Nov 22 
10:40:53 crc kubenswrapper[4772]: I1122 10:40:53.426067 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Nov 22 10:40:53 crc kubenswrapper[4772]: I1122 10:40:53.516306 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tprwl\" (UniqueName: \"kubernetes.io/projected/dd2e49a9-b8b7-47f6-a489-1d5ac70105b9-kube-api-access-tprwl\") pod \"redhat-operators-r9d42\" (UID: \"dd2e49a9-b8b7-47f6-a489-1d5ac70105b9\") " pod="openshift-marketplace/redhat-operators-r9d42" Nov 22 10:40:53 crc kubenswrapper[4772]: I1122 10:40:53.516372 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd2e49a9-b8b7-47f6-a489-1d5ac70105b9-catalog-content\") pod \"redhat-operators-r9d42\" (UID: \"dd2e49a9-b8b7-47f6-a489-1d5ac70105b9\") " pod="openshift-marketplace/redhat-operators-r9d42" Nov 22 10:40:53 crc kubenswrapper[4772]: I1122 10:40:53.516443 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd2e49a9-b8b7-47f6-a489-1d5ac70105b9-utilities\") pod \"redhat-operators-r9d42\" (UID: \"dd2e49a9-b8b7-47f6-a489-1d5ac70105b9\") " pod="openshift-marketplace/redhat-operators-r9d42" Nov 22 10:40:53 crc kubenswrapper[4772]: I1122 10:40:53.516997 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd2e49a9-b8b7-47f6-a489-1d5ac70105b9-utilities\") pod \"redhat-operators-r9d42\" (UID: \"dd2e49a9-b8b7-47f6-a489-1d5ac70105b9\") " pod="openshift-marketplace/redhat-operators-r9d42" Nov 22 10:40:53 crc kubenswrapper[4772]: I1122 10:40:53.518164 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd2e49a9-b8b7-47f6-a489-1d5ac70105b9-catalog-content\") pod \"redhat-operators-r9d42\" (UID: \"dd2e49a9-b8b7-47f6-a489-1d5ac70105b9\") " pod="openshift-marketplace/redhat-operators-r9d42" Nov 22 10:40:53 crc kubenswrapper[4772]: I1122 10:40:53.528437 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-9vsnk" Nov 22 10:40:53 crc kubenswrapper[4772]: I1122 10:40:53.547429 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tprwl\" (UniqueName: \"kubernetes.io/projected/dd2e49a9-b8b7-47f6-a489-1d5ac70105b9-kube-api-access-tprwl\") pod \"redhat-operators-r9d42\" (UID: \"dd2e49a9-b8b7-47f6-a489-1d5ac70105b9\") " pod="openshift-marketplace/redhat-operators-r9d42" Nov 22 10:40:53 crc kubenswrapper[4772]: I1122 10:40:53.566040 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-r9d42" Nov 22 10:40:53 crc kubenswrapper[4772]: I1122 10:40:53.818850 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r9d42"] Nov 22 10:40:53 crc kubenswrapper[4772]: I1122 10:40:53.949324 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hq9vf" event={"ID":"c6f45ecf-368b-4e09-83d6-c0620de2c97e","Type":"ContainerStarted","Data":"8b2e85daa8368470d0a4a3445ae05f2084c2239362292664223a41d8e4adcb72"} Nov 22 10:40:53 crc kubenswrapper[4772]: I1122 10:40:53.950306 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-stqj9" event={"ID":"41028362-8aac-4ba3-a617-c7b65e42a368","Type":"ContainerStarted","Data":"410775998cef2552d1413f489d601fe684816639a3890c20b64f009421ea45b2"} Nov 22 10:40:53 crc kubenswrapper[4772]: I1122 10:40:53.951552 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r9d42" event={"ID":"dd2e49a9-b8b7-47f6-a489-1d5ac70105b9","Type":"ContainerStarted","Data":"4ab6edda094cc58edea50d430c5d6b95e36ec94e1f5b917e4bf38e106f009ca6"} Nov 22 10:40:53 crc kubenswrapper[4772]: I1122 10:40:53.953379 4772 generic.go:334] "Generic (PLEG): container finished" podID="a35d30b7-cf6e-4d08-add4-2c9b27342e5d" containerID="a4704c674b990f7cc91d8e9bcbae46f1c6f9e8012792214537b34df310f3fa60" exitCode=0 Nov 22 10:40:53 crc kubenswrapper[4772]: I1122 10:40:53.953481 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xdkbm" event={"ID":"a35d30b7-cf6e-4d08-add4-2c9b27342e5d","Type":"ContainerDied","Data":"a4704c674b990f7cc91d8e9bcbae46f1c6f9e8012792214537b34df310f3fa60"} Nov 22 10:40:53 crc kubenswrapper[4772]: I1122 10:40:53.956811 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-fvsrl" event={"ID":"c89edce7-fac8-4954-b2e9-420f0f2de6a8","Type":"ContainerStarted","Data":"bc716b2715f9a2fce16c7744e29cc6600e039368d7d07f26b9178202cbfb80cc"} Nov 22 10:40:54 crc kubenswrapper[4772]: I1122 10:40:54.194783 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 22 10:40:54 crc kubenswrapper[4772]: I1122 10:40:54.248282 4772 patch_prober.go:28] interesting pod/router-default-5444994796-wc24s container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 22 10:40:54 crc kubenswrapper[4772]: [-]has-synced failed: reason withheld Nov 22 10:40:54 crc kubenswrapper[4772]: [+]process-running ok Nov 22 10:40:54 crc kubenswrapper[4772]: healthz check failed Nov 22 10:40:54 crc kubenswrapper[4772]: I1122 10:40:54.248403 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wc24s" podUID="8787d05b-35f8-4862-9b4f-53e18d3b56ef" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 10:40:54 crc kubenswrapper[4772]: I1122 10:40:54.328259 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9e7566ec-f873-4243-94e6-fe7c38cc07ae-kube-api-access\") pod \"9e7566ec-f873-4243-94e6-fe7c38cc07ae\" (UID: \"9e7566ec-f873-4243-94e6-fe7c38cc07ae\") " Nov 22 10:40:54 crc kubenswrapper[4772]: I1122 10:40:54.328451 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9e7566ec-f873-4243-94e6-fe7c38cc07ae-kubelet-dir\") pod \"9e7566ec-f873-4243-94e6-fe7c38cc07ae\" (UID: \"9e7566ec-f873-4243-94e6-fe7c38cc07ae\") " Nov 22 10:40:54 crc kubenswrapper[4772]: I1122 10:40:54.329482 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e7566ec-f873-4243-94e6-fe7c38cc07ae-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9e7566ec-f873-4243-94e6-fe7c38cc07ae" (UID: "9e7566ec-f873-4243-94e6-fe7c38cc07ae"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 10:40:54 crc kubenswrapper[4772]: I1122 10:40:54.336003 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e7566ec-f873-4243-94e6-fe7c38cc07ae-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9e7566ec-f873-4243-94e6-fe7c38cc07ae" (UID: "9e7566ec-f873-4243-94e6-fe7c38cc07ae"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:40:54 crc kubenswrapper[4772]: I1122 10:40:54.395982 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 22 10:40:54 crc kubenswrapper[4772]: I1122 10:40:54.429638 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9e7566ec-f873-4243-94e6-fe7c38cc07ae-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 22 10:40:54 crc kubenswrapper[4772]: I1122 10:40:54.429673 4772 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9e7566ec-f873-4243-94e6-fe7c38cc07ae-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 22 10:40:54 crc kubenswrapper[4772]: I1122 10:40:54.464064 4772 patch_prober.go:28] interesting pod/console-f9d7485db-ld9hg container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Nov 22 10:40:54 crc kubenswrapper[4772]: I1122 10:40:54.464152 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-ld9hg" podUID="614def41-0349-470c-afca-e5c335fa8834" containerName="console" probeResult="failure" output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" Nov 22 10:40:54 crc kubenswrapper[4772]: I1122 10:40:54.531235 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c6c25bbc-1eb4-4d98-bb37-37a5871aa4f2-kube-api-access\") pod \"c6c25bbc-1eb4-4d98-bb37-37a5871aa4f2\" (UID: \"c6c25bbc-1eb4-4d98-bb37-37a5871aa4f2\") " Nov 22 10:40:54 crc kubenswrapper[4772]: I1122 10:40:54.531991 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c6c25bbc-1eb4-4d98-bb37-37a5871aa4f2-kubelet-dir\") pod \"c6c25bbc-1eb4-4d98-bb37-37a5871aa4f2\" (UID: \"c6c25bbc-1eb4-4d98-bb37-37a5871aa4f2\") " Nov 22 10:40:54 crc kubenswrapper[4772]: I1122 10:40:54.532402 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6c25bbc-1eb4-4d98-bb37-37a5871aa4f2-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c6c25bbc-1eb4-4d98-bb37-37a5871aa4f2" (UID: "c6c25bbc-1eb4-4d98-bb37-37a5871aa4f2"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 10:40:54 crc kubenswrapper[4772]: I1122 10:40:54.535657 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6c25bbc-1eb4-4d98-bb37-37a5871aa4f2-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c6c25bbc-1eb4-4d98-bb37-37a5871aa4f2" (UID: "c6c25bbc-1eb4-4d98-bb37-37a5871aa4f2"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:40:54 crc kubenswrapper[4772]: I1122 10:40:54.627292 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-56sqr" Nov 22 10:40:54 crc kubenswrapper[4772]: I1122 10:40:54.633708 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-56sqr" Nov 22 10:40:54 crc kubenswrapper[4772]: I1122 10:40:54.633877 4772 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c6c25bbc-1eb4-4d98-bb37-37a5871aa4f2-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 22 10:40:54 crc kubenswrapper[4772]: I1122 10:40:54.633926 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c6c25bbc-1eb4-4d98-bb37-37a5871aa4f2-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 22 10:40:54 crc kubenswrapper[4772]: I1122 10:40:54.735642 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5znz6" Nov 22 10:40:54 crc kubenswrapper[4772]: I1122 10:40:54.746157 4772 patch_prober.go:28] interesting pod/downloads-7954f5f757-v2gm9 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Nov 22 10:40:54 crc kubenswrapper[4772]: I1122 10:40:54.746190 4772 patch_prober.go:28] interesting pod/downloads-7954f5f757-v2gm9 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Nov 22 10:40:54 crc kubenswrapper[4772]: I1122 10:40:54.746255 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-v2gm9" podUID="a8554d37-40ae-41ef-bed9-7c79b3f8083e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Nov 22 10:40:54 crc kubenswrapper[4772]: I1122 10:40:54.746262 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-v2gm9" podUID="a8554d37-40ae-41ef-bed9-7c79b3f8083e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Nov 22 10:40:54 crc kubenswrapper[4772]: I1122 10:40:54.746999 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-vvs55" Nov 22 10:40:54 crc kubenswrapper[4772]: I1122 10:40:54.908548 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-7gdmn" Nov 22 10:40:54 crc kubenswrapper[4772]: I1122 10:40:54.993464 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-fvsrl" event={"ID":"c89edce7-fac8-4954-b2e9-420f0f2de6a8","Type":"ContainerStarted","Data":"f5e83b983011dbd15e0f4b2aa60b59de65f9db0669051edd1b4c6e4a25956245"} Nov 22 10:40:54 crc kubenswrapper[4772]: I1122 10:40:54.993524 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-fvsrl" 
event={"ID":"c89edce7-fac8-4954-b2e9-420f0f2de6a8","Type":"ContainerStarted","Data":"7c0aa3da351eb09dd2eb1d68c9ffc44aebb7097e7108610fbd58e7ad16655f3f"} Nov 22 10:40:55 crc kubenswrapper[4772]: I1122 10:40:55.023935 4772 generic.go:334] "Generic (PLEG): container finished" podID="c6f45ecf-368b-4e09-83d6-c0620de2c97e" containerID="705c838f2ef93bc89f7081f4011de870df5a0edd9f3572cee24cd0348ab27eef" exitCode=0 Nov 22 10:40:55 crc kubenswrapper[4772]: I1122 10:40:55.024152 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hq9vf" event={"ID":"c6f45ecf-368b-4e09-83d6-c0620de2c97e","Type":"ContainerDied","Data":"705c838f2ef93bc89f7081f4011de870df5a0edd9f3572cee24cd0348ab27eef"} Nov 22 10:40:55 crc kubenswrapper[4772]: I1122 10:40:55.034416 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-fvsrl" podStartSLOduration=144.034386878 podStartE2EDuration="2m24.034386878s" podCreationTimestamp="2025-11-22 10:38:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:40:55.03332147 +0000 UTC m=+175.272765964" watchObservedRunningTime="2025-11-22 10:40:55.034386878 +0000 UTC m=+175.273831372" Nov 22 10:40:55 crc kubenswrapper[4772]: I1122 10:40:55.051816 4772 generic.go:334] "Generic (PLEG): container finished" podID="41028362-8aac-4ba3-a617-c7b65e42a368" containerID="810c0dae5107811965fde58c06cf3e9f040f08d959738f17ec0ec1f4a24a044a" exitCode=0 Nov 22 10:40:55 crc kubenswrapper[4772]: I1122 10:40:55.051934 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-stqj9" event={"ID":"41028362-8aac-4ba3-a617-c7b65e42a368","Type":"ContainerDied","Data":"810c0dae5107811965fde58c06cf3e9f040f08d959738f17ec0ec1f4a24a044a"} Nov 22 10:40:55 crc kubenswrapper[4772]: I1122 10:40:55.068483 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 22 10:40:55 crc kubenswrapper[4772]: I1122 10:40:55.069519 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"c6c25bbc-1eb4-4d98-bb37-37a5871aa4f2","Type":"ContainerDied","Data":"c0649ac27603ee8b26b42caf87116c64e5079da7aa50045a71ff96b82b8dcc2a"} Nov 22 10:40:55 crc kubenswrapper[4772]: I1122 10:40:55.069616 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0649ac27603ee8b26b42caf87116c64e5079da7aa50045a71ff96b82b8dcc2a" Nov 22 10:40:55 crc kubenswrapper[4772]: I1122 10:40:55.074155 4772 generic.go:334] "Generic (PLEG): container finished" podID="dd2e49a9-b8b7-47f6-a489-1d5ac70105b9" containerID="3815937d5c4d0ef4a4b5e53d921a3fb8b425c30503f9ec3aec69daab86709bbf" exitCode=0 Nov 22 10:40:55 crc kubenswrapper[4772]: I1122 10:40:55.074243 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r9d42" event={"ID":"dd2e49a9-b8b7-47f6-a489-1d5ac70105b9","Type":"ContainerDied","Data":"3815937d5c4d0ef4a4b5e53d921a3fb8b425c30503f9ec3aec69daab86709bbf"} Nov 22 10:40:55 crc kubenswrapper[4772]: I1122 10:40:55.083978 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" event={"ID":"723118f2-f91b-4ca0-a6f9-4deaee014ef0","Type":"ContainerStarted","Data":"75f8f3b23ffe54f550d9d74371f7de9f351997528b3b58064bad4c5bcd5efcb4"} Nov 22 10:40:55 crc kubenswrapper[4772]: I1122 10:40:55.084466 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:40:55 crc kubenswrapper[4772]: I1122 10:40:55.087485 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 22 10:40:55 crc kubenswrapper[4772]: I1122 10:40:55.090491 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"9e7566ec-f873-4243-94e6-fe7c38cc07ae","Type":"ContainerDied","Data":"1f536a6a8af3c579c7c5353d272f1e2cac1ce21dfb95228e103a59cd3673821a"} Nov 22 10:40:55 crc kubenswrapper[4772]: I1122 10:40:55.090536 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f536a6a8af3c579c7c5353d272f1e2cac1ce21dfb95228e103a59cd3673821a" Nov 22 10:40:55 crc kubenswrapper[4772]: I1122 10:40:55.256244 4772 patch_prober.go:28] interesting pod/router-default-5444994796-wc24s container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 22 10:40:55 crc kubenswrapper[4772]: [-]has-synced failed: reason withheld Nov 22 10:40:55 crc kubenswrapper[4772]: [+]process-running ok Nov 22 10:40:55 crc kubenswrapper[4772]: healthz check failed Nov 22 10:40:55 crc kubenswrapper[4772]: I1122 10:40:55.256345 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wc24s" podUID="8787d05b-35f8-4862-9b4f-53e18d3b56ef" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 10:40:55 crc kubenswrapper[4772]: I1122 10:40:55.652605 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5sxdt" Nov 22 10:40:55 crc kubenswrapper[4772]: I1122 10:40:55.655983 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-8z66s" Nov 22 10:40:55 crc kubenswrapper[4772]: I1122 10:40:55.666663 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dfw97" Nov 22 10:40:55 crc kubenswrapper[4772]: I1122 10:40:55.675481 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dfw97" Nov 22 10:40:55 crc kubenswrapper[4772]: I1122 10:40:55.685666 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" podStartSLOduration=144.685645357 podStartE2EDuration="2m24.685645357s" podCreationTimestamp="2025-11-22 10:38:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:40:55.164455363 +0000 UTC m=+175.403899867" watchObservedRunningTime="2025-11-22 10:40:55.685645357 +0000 UTC m=+175.925089851" Nov 22 10:40:55 crc kubenswrapper[4772]: I1122 10:40:55.745021 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hn48r" Nov 22 10:40:56 crc kubenswrapper[4772]: I1122 10:40:56.020108 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lkn87" Nov 22 10:40:56 crc kubenswrapper[4772]: I1122 10:40:56.246281 4772 patch_prober.go:28] interesting pod/router-default-5444994796-wc24s container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Nov 22 10:40:56 crc kubenswrapper[4772]: [-]has-synced failed: reason withheld Nov 22 10:40:56 crc kubenswrapper[4772]: [+]process-running ok Nov 22 10:40:56 crc kubenswrapper[4772]: healthz check failed Nov 22 10:40:56 crc kubenswrapper[4772]: I1122 10:40:56.246357 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wc24s" podUID="8787d05b-35f8-4862-9b4f-53e18d3b56ef" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 10:40:57 crc kubenswrapper[4772]: I1122 10:40:57.247040 4772 patch_prober.go:28] interesting pod/router-default-5444994796-wc24s container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 22 10:40:57 crc kubenswrapper[4772]: [-]has-synced failed: reason withheld Nov 22 10:40:57 crc kubenswrapper[4772]: [+]process-running ok Nov 22 10:40:57 crc kubenswrapper[4772]: healthz check failed Nov 22 10:40:57 crc kubenswrapper[4772]: I1122 10:40:57.248355 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wc24s" podUID="8787d05b-35f8-4862-9b4f-53e18d3b56ef" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 10:40:58 crc kubenswrapper[4772]: I1122 10:40:58.249550 4772 patch_prober.go:28] interesting pod/router-default-5444994796-wc24s container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 22 10:40:58 crc kubenswrapper[4772]: [-]has-synced failed: reason withheld Nov 22 10:40:58 crc kubenswrapper[4772]: [+]process-running ok Nov 22 10:40:58 crc kubenswrapper[4772]: healthz check failed Nov 22 10:40:58 crc kubenswrapper[4772]: I1122 10:40:58.249659 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wc24s" podUID="8787d05b-35f8-4862-9b4f-53e18d3b56ef" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 10:40:59 crc kubenswrapper[4772]: I1122 10:40:59.245449 4772 patch_prober.go:28] interesting pod/router-default-5444994796-wc24s container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 22 10:40:59 crc kubenswrapper[4772]: [-]has-synced failed: reason withheld Nov 22 10:40:59 crc kubenswrapper[4772]: [+]process-running ok Nov 22 10:40:59 crc kubenswrapper[4772]: healthz check failed Nov 22 10:40:59 crc kubenswrapper[4772]: I1122 10:40:59.245525 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wc24s" podUID="8787d05b-35f8-4862-9b4f-53e18d3b56ef" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 10:41:00 crc kubenswrapper[4772]: I1122 10:41:00.248344 4772 patch_prober.go:28] interesting pod/router-default-5444994796-wc24s container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 22 10:41:00 crc kubenswrapper[4772]: [-]has-synced failed: reason withheld Nov 22 10:41:00 crc kubenswrapper[4772]: [+]process-running ok Nov 22 10:41:00 crc 
kubenswrapper[4772]: healthz check failed Nov 22 10:41:00 crc kubenswrapper[4772]: I1122 10:41:00.248866 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wc24s" podUID="8787d05b-35f8-4862-9b4f-53e18d3b56ef" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 10:41:01 crc kubenswrapper[4772]: I1122 10:41:01.245391 4772 patch_prober.go:28] interesting pod/router-default-5444994796-wc24s container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 22 10:41:01 crc kubenswrapper[4772]: [-]has-synced failed: reason withheld Nov 22 10:41:01 crc kubenswrapper[4772]: [+]process-running ok Nov 22 10:41:01 crc kubenswrapper[4772]: healthz check failed Nov 22 10:41:01 crc kubenswrapper[4772]: I1122 10:41:01.245494 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wc24s" podUID="8787d05b-35f8-4862-9b4f-53e18d3b56ef" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 10:41:01 crc kubenswrapper[4772]: I1122 10:41:01.533556 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 10:41:01 crc kubenswrapper[4772]: I1122 10:41:01.533656 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 10:41:02 crc kubenswrapper[4772]: I1122 10:41:02.248610 4772 patch_prober.go:28] interesting pod/router-default-5444994796-wc24s container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 22 10:41:02 crc kubenswrapper[4772]: [-]has-synced failed: reason withheld Nov 22 10:41:02 crc kubenswrapper[4772]: [+]process-running ok Nov 22 10:41:02 crc kubenswrapper[4772]: healthz check failed Nov 22 10:41:02 crc kubenswrapper[4772]: I1122 10:41:02.249068 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wc24s" podUID="8787d05b-35f8-4862-9b4f-53e18d3b56ef" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 10:41:03 crc kubenswrapper[4772]: I1122 10:41:03.249282 4772 patch_prober.go:28] interesting pod/router-default-5444994796-wc24s container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 22 10:41:03 crc kubenswrapper[4772]: [-]has-synced failed: reason withheld Nov 22 10:41:03 crc kubenswrapper[4772]: [+]process-running ok Nov 22 10:41:03 crc kubenswrapper[4772]: healthz check failed Nov 22 10:41:03 crc kubenswrapper[4772]: I1122 10:41:03.249372 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wc24s" podUID="8787d05b-35f8-4862-9b4f-53e18d3b56ef" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Nov 22 10:41:04 crc kubenswrapper[4772]: I1122 10:41:04.248068 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-wc24s" Nov 22 10:41:04 crc kubenswrapper[4772]: I1122 10:41:04.251511 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-wc24s" Nov 22 10:41:04 crc kubenswrapper[4772]: I1122 10:41:04.470136 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-ld9hg" Nov 22 10:41:04 crc kubenswrapper[4772]: I1122 10:41:04.474255 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-ld9hg" Nov 22 10:41:04 crc kubenswrapper[4772]: I1122 10:41:04.746671 4772 patch_prober.go:28] interesting pod/downloads-7954f5f757-v2gm9 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Nov 22 10:41:04 crc kubenswrapper[4772]: I1122 10:41:04.746753 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-v2gm9" podUID="a8554d37-40ae-41ef-bed9-7c79b3f8083e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Nov 22 10:41:04 crc kubenswrapper[4772]: I1122 10:41:04.746772 4772 patch_prober.go:28] interesting pod/downloads-7954f5f757-v2gm9 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Nov 22 10:41:04 crc kubenswrapper[4772]: I1122 10:41:04.746856 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-v2gm9" podUID="a8554d37-40ae-41ef-bed9-7c79b3f8083e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Nov 22 10:41:04 crc kubenswrapper[4772]: I1122 10:41:04.746928 4772 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-v2gm9" Nov 22 10:41:04 crc kubenswrapper[4772]: I1122 10:41:04.747504 4772 patch_prober.go:28] interesting pod/downloads-7954f5f757-v2gm9 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Nov 22 10:41:04 crc kubenswrapper[4772]: I1122 10:41:04.747575 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-v2gm9" podUID="a8554d37-40ae-41ef-bed9-7c79b3f8083e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Nov 22 10:41:04 crc kubenswrapper[4772]: I1122 10:41:04.747887 4772 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"c1d0a8c902a7d79792d810a53613271354db3576d9931827d0551883f49d6138"} pod="openshift-console/downloads-7954f5f757-v2gm9" containerMessage="Container download-server failed liveness probe, will be restarted" Nov 22 10:41:04 crc kubenswrapper[4772]: I1122 
10:41:04.748015 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-v2gm9" podUID="a8554d37-40ae-41ef-bed9-7c79b3f8083e" containerName="download-server" containerID="cri-o://c1d0a8c902a7d79792d810a53613271354db3576d9931827d0551883f49d6138" gracePeriod=2 Nov 22 10:41:08 crc kubenswrapper[4772]: I1122 10:41:08.200193 4772 generic.go:334] "Generic (PLEG): container finished" podID="a8554d37-40ae-41ef-bed9-7c79b3f8083e" containerID="c1d0a8c902a7d79792d810a53613271354db3576d9931827d0551883f49d6138" exitCode=0 Nov 22 10:41:08 crc kubenswrapper[4772]: I1122 10:41:08.200285 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-v2gm9" event={"ID":"a8554d37-40ae-41ef-bed9-7c79b3f8083e","Type":"ContainerDied","Data":"c1d0a8c902a7d79792d810a53613271354db3576d9931827d0551883f49d6138"} Nov 22 10:41:12 crc kubenswrapper[4772]: I1122 10:41:12.618716 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:41:14 crc kubenswrapper[4772]: I1122 10:41:14.747443 4772 patch_prober.go:28] interesting pod/downloads-7954f5f757-v2gm9 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Nov 22 10:41:14 crc kubenswrapper[4772]: I1122 10:41:14.747933 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-v2gm9" podUID="a8554d37-40ae-41ef-bed9-7c79b3f8083e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Nov 22 10:41:15 crc kubenswrapper[4772]: I1122 10:41:15.663325 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5sxdt" Nov 22 10:41:18 crc kubenswrapper[4772]: I1122 10:41:18.211093 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 10:41:24 crc kubenswrapper[4772]: I1122 10:41:24.750342 4772 patch_prober.go:28] interesting pod/downloads-7954f5f757-v2gm9 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Nov 22 10:41:24 crc kubenswrapper[4772]: I1122 10:41:24.751498 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-v2gm9" podUID="a8554d37-40ae-41ef-bed9-7c79b3f8083e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Nov 22 10:41:27 crc kubenswrapper[4772]: E1122 10:41:27.210134 4772 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 22 10:41:27 crc kubenswrapper[4772]: E1122 10:41:27.210339 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog 
--cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xc6gd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-m47st_openshift-marketplace(1c4be727-3b88-4540-a5c9-9c6d19978537): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 22 10:41:27 crc kubenswrapper[4772]: E1122 10:41:27.212102 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-m47st" podUID="1c4be727-3b88-4540-a5c9-9c6d19978537" Nov 22 10:41:31 crc kubenswrapper[4772]: I1122 10:41:31.533257 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 10:41:31 crc kubenswrapper[4772]: I1122 10:41:31.533350 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 10:41:34 crc kubenswrapper[4772]: E1122 10:41:34.200228 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-m47st" podUID="1c4be727-3b88-4540-a5c9-9c6d19978537" Nov 22 10:41:34 crc kubenswrapper[4772]: E1122 10:41:34.282306 4772 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Nov 22 10:41:34 crc kubenswrapper[4772]: E1122 10:41:34.282551 4772 kuberuntime_manager.go:1274] "Unhandled Error" 
err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5mqt4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-stqj9_openshift-marketplace(41028362-8aac-4ba3-a617-c7b65e42a368): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 22 10:41:34 crc kubenswrapper[4772]: E1122 10:41:34.283814 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-stqj9" podUID="41028362-8aac-4ba3-a617-c7b65e42a368" Nov 22 10:41:34 crc kubenswrapper[4772]: I1122 10:41:34.746153 4772 patch_prober.go:28] interesting pod/downloads-7954f5f757-v2gm9 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Nov 22 10:41:34 crc kubenswrapper[4772]: I1122 10:41:34.746251 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-v2gm9" podUID="a8554d37-40ae-41ef-bed9-7c79b3f8083e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Nov 22 10:41:44 crc kubenswrapper[4772]: I1122 10:41:44.746680 4772 patch_prober.go:28] interesting pod/downloads-7954f5f757-v2gm9 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Nov 22 10:41:44 crc kubenswrapper[4772]: I1122 10:41:44.747892 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-v2gm9" podUID="a8554d37-40ae-41ef-bed9-7c79b3f8083e" containerName="download-server" probeResult="failure" output="Get 
\"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Nov 22 10:41:54 crc kubenswrapper[4772]: I1122 10:41:54.746168 4772 patch_prober.go:28] interesting pod/downloads-7954f5f757-v2gm9 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Nov 22 10:41:54 crc kubenswrapper[4772]: I1122 10:41:54.747484 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-v2gm9" podUID="a8554d37-40ae-41ef-bed9-7c79b3f8083e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Nov 22 10:42:01 crc kubenswrapper[4772]: I1122 10:42:01.533181 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 10:42:01 crc kubenswrapper[4772]: I1122 10:42:01.533288 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 10:42:01 crc kubenswrapper[4772]: I1122 10:42:01.533371 4772 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" Nov 22 10:42:01 crc kubenswrapper[4772]: I1122 10:42:01.535176 4772 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4247d5ab0095ab6c7315fdfcde34f8407d870723059e61c3417c20f80dc06ebd"} pod="openshift-machine-config-operator/machine-config-daemon-wwshd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 10:42:01 crc kubenswrapper[4772]: I1122 10:42:01.535297 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" containerID="cri-o://4247d5ab0095ab6c7315fdfcde34f8407d870723059e61c3417c20f80dc06ebd" gracePeriod=600 Nov 22 10:42:04 crc kubenswrapper[4772]: I1122 10:42:04.596754 4772 generic.go:334] "Generic (PLEG): container finished" podID="2386c238-461f-4956-940f-ac3c26eb052e" containerID="4247d5ab0095ab6c7315fdfcde34f8407d870723059e61c3417c20f80dc06ebd" exitCode=0 Nov 22 10:42:04 crc kubenswrapper[4772]: I1122 10:42:04.596862 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" event={"ID":"2386c238-461f-4956-940f-ac3c26eb052e","Type":"ContainerDied","Data":"4247d5ab0095ab6c7315fdfcde34f8407d870723059e61c3417c20f80dc06ebd"} Nov 22 10:42:04 crc kubenswrapper[4772]: I1122 10:42:04.746454 4772 patch_prober.go:28] interesting pod/downloads-7954f5f757-v2gm9 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Nov 22 10:42:04 crc kubenswrapper[4772]: I1122 10:42:04.746567 4772 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-v2gm9" podUID="a8554d37-40ae-41ef-bed9-7c79b3f8083e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Nov 22 10:42:10 crc kubenswrapper[4772]: E1122 10:42:10.098026 4772 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 22 10:42:10 crc kubenswrapper[4772]: E1122 10:42:10.098296 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-64fhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-59qml_openshift-marketplace(64157fec-2673-403c-96c5-2cbaf3ca17a2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 22 10:42:10 crc kubenswrapper[4772]: E1122 10:42:10.099595 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-59qml" podUID="64157fec-2673-403c-96c5-2cbaf3ca17a2" Nov 22 10:42:13 crc kubenswrapper[4772]: E1122 10:42:13.421376 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-59qml" podUID="64157fec-2673-403c-96c5-2cbaf3ca17a2" Nov 22 10:42:13 crc kubenswrapper[4772]: E1122 10:42:13.444328 4772 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Nov 22 10:42:13 crc kubenswrapper[4772]: E1122 10:42:13.444579 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-krpz6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-xdkbm_openshift-marketplace(a35d30b7-cf6e-4d08-add4-2c9b27342e5d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 22 10:42:13 crc kubenswrapper[4772]: E1122 10:42:13.446113 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-xdkbm" podUID="a35d30b7-cf6e-4d08-add4-2c9b27342e5d" Nov 22 10:42:13 crc kubenswrapper[4772]: E1122 10:42:13.453596 4772 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Nov 22 10:42:13 crc kubenswrapper[4772]: E1122 10:42:13.453802 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tprwl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-r9d42_openshift-marketplace(dd2e49a9-b8b7-47f6-a489-1d5ac70105b9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 22 10:42:13 crc kubenswrapper[4772]: E1122 10:42:13.455341 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-r9d42" podUID="dd2e49a9-b8b7-47f6-a489-1d5ac70105b9" Nov 22 10:42:13 crc kubenswrapper[4772]: E1122 10:42:13.488214 4772 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Nov 22 10:42:13 crc kubenswrapper[4772]: E1122 10:42:13.488872 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6vpjx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-hq9vf_openshift-marketplace(c6f45ecf-368b-4e09-83d6-c0620de2c97e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 22 10:42:13 crc kubenswrapper[4772]: E1122 10:42:13.490189 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-hq9vf" podUID="c6f45ecf-368b-4e09-83d6-c0620de2c97e" Nov 22 10:42:13 crc kubenswrapper[4772]: E1122 10:42:13.667278 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-xdkbm" podUID="a35d30b7-cf6e-4d08-add4-2c9b27342e5d" Nov 22 10:42:13 crc kubenswrapper[4772]: E1122 10:42:13.667368 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-r9d42" podUID="dd2e49a9-b8b7-47f6-a489-1d5ac70105b9" Nov 22 10:42:13 crc kubenswrapper[4772]: E1122 10:42:13.667528 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-hq9vf" podUID="c6f45ecf-368b-4e09-83d6-c0620de2c97e" Nov 22 10:42:14 crc kubenswrapper[4772]: I1122 10:42:14.670713 4772 generic.go:334] "Generic (PLEG): container finished" podID="59d55861-971b-404f-9926-4d41f07f0880" containerID="e78560ec0873907f668dcd71dd30cdf4ac1a4ead0a179fb5d7f53db2f3859bb0" exitCode=0 Nov 22 10:42:14 crc kubenswrapper[4772]: I1122 10:42:14.670926 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-5g5n6" event={"ID":"59d55861-971b-404f-9926-4d41f07f0880","Type":"ContainerDied","Data":"e78560ec0873907f668dcd71dd30cdf4ac1a4ead0a179fb5d7f53db2f3859bb0"} Nov 22 10:42:14 crc kubenswrapper[4772]: I1122 10:42:14.675535 4772 generic.go:334] "Generic (PLEG): container finished" podID="a99b0a8a-5c55-480c-8a9c-274f995658cb" containerID="0542181219c6f2a5d4b663f2c6452f018a9b2b0a88700dfe00a3c32e993afff8" exitCode=0 Nov 22 10:42:14 crc kubenswrapper[4772]: I1122 10:42:14.675668 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n6b26" event={"ID":"a99b0a8a-5c55-480c-8a9c-274f995658cb","Type":"ContainerDied","Data":"0542181219c6f2a5d4b663f2c6452f018a9b2b0a88700dfe00a3c32e993afff8"} Nov 22 10:42:14 crc kubenswrapper[4772]: I1122 10:42:14.681655 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" event={"ID":"2386c238-461f-4956-940f-ac3c26eb052e","Type":"ContainerStarted","Data":"ab514550d792c628ef77edacd4d44003f6f64f78cadfba4aca08099f82d843e9"} Nov 22 10:42:14 crc kubenswrapper[4772]: I1122 10:42:14.717335 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-v2gm9" event={"ID":"a8554d37-40ae-41ef-bed9-7c79b3f8083e","Type":"ContainerStarted","Data":"b9a661cb12c02ad110579631115a7ea2561f78363a764e970ac5ad4c2615eb91"} Nov 22 10:42:14 crc kubenswrapper[4772]: I1122 10:42:14.748158 4772 patch_prober.go:28] interesting pod/downloads-7954f5f757-v2gm9 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Nov 22 10:42:14 crc kubenswrapper[4772]: I1122 10:42:14.748267 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-v2gm9" podUID="a8554d37-40ae-41ef-bed9-7c79b3f8083e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Nov 22 10:42:15 crc kubenswrapper[4772]: I1122 10:42:15.723903 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-v2gm9" Nov 22 10:42:15 crc kubenswrapper[4772]: I1122 10:42:15.724269 4772 patch_prober.go:28] interesting pod/downloads-7954f5f757-v2gm9 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Nov 22 10:42:15 crc kubenswrapper[4772]: I1122 10:42:15.724483 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-v2gm9" podUID="a8554d37-40ae-41ef-bed9-7c79b3f8083e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Nov 22 10:42:16 crc kubenswrapper[4772]: I1122 10:42:16.731091 4772 patch_prober.go:28] interesting pod/downloads-7954f5f757-v2gm9 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Nov 22 10:42:16 crc kubenswrapper[4772]: I1122 10:42:16.731582 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-v2gm9" 
podUID="a8554d37-40ae-41ef-bed9-7c79b3f8083e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Nov 22 10:42:17 crc kubenswrapper[4772]: I1122 10:42:17.736087 4772 patch_prober.go:28] interesting pod/downloads-7954f5f757-v2gm9 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Nov 22 10:42:17 crc kubenswrapper[4772]: I1122 10:42:17.736768 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-v2gm9" podUID="a8554d37-40ae-41ef-bed9-7c79b3f8083e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Nov 22 10:42:18 crc kubenswrapper[4772]: I1122 10:42:18.747281 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m47st" event={"ID":"1c4be727-3b88-4540-a5c9-9c6d19978537","Type":"ContainerStarted","Data":"5e1bcdc41487c568741912cf484256c0dd1e77ca717d02b24aa015064d2828fa"} Nov 22 10:42:18 crc kubenswrapper[4772]: I1122 10:42:18.750323 4772 generic.go:334] "Generic (PLEG): container finished" podID="41028362-8aac-4ba3-a617-c7b65e42a368" containerID="c6ddaf123f47348fd0a1f5af52621dbb76c51e49a786c56a983b78b4dde88763" exitCode=0 Nov 22 10:42:18 crc kubenswrapper[4772]: I1122 10:42:18.750465 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-stqj9" event={"ID":"41028362-8aac-4ba3-a617-c7b65e42a368","Type":"ContainerDied","Data":"c6ddaf123f47348fd0a1f5af52621dbb76c51e49a786c56a983b78b4dde88763"} Nov 22 10:42:18 crc kubenswrapper[4772]: I1122 10:42:18.754705 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5g5n6" event={"ID":"59d55861-971b-404f-9926-4d41f07f0880","Type":"ContainerStarted","Data":"44e4e2fa32c694449370ba547c0d7ec63af38d8b1aa4c942d75e8f0d9aa2b0e7"} Nov 22 10:42:18 crc kubenswrapper[4772]: I1122 10:42:18.814471 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5g5n6" podStartSLOduration=3.967880842 podStartE2EDuration="1m29.814432905s" podCreationTimestamp="2025-11-22 10:40:49 +0000 UTC" firstStartedPulling="2025-11-22 10:40:51.870106127 +0000 UTC m=+172.109550621" lastFinishedPulling="2025-11-22 10:42:17.71665819 +0000 UTC m=+257.956102684" observedRunningTime="2025-11-22 10:42:18.804199258 +0000 UTC m=+259.043643852" watchObservedRunningTime="2025-11-22 10:42:18.814432905 +0000 UTC m=+259.053877439" Nov 22 10:42:19 crc kubenswrapper[4772]: I1122 10:42:19.764346 4772 generic.go:334] "Generic (PLEG): container finished" podID="1c4be727-3b88-4540-a5c9-9c6d19978537" containerID="5e1bcdc41487c568741912cf484256c0dd1e77ca717d02b24aa015064d2828fa" exitCode=0 Nov 22 10:42:19 crc kubenswrapper[4772]: I1122 10:42:19.764415 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m47st" event={"ID":"1c4be727-3b88-4540-a5c9-9c6d19978537","Type":"ContainerDied","Data":"5e1bcdc41487c568741912cf484256c0dd1e77ca717d02b24aa015064d2828fa"} Nov 22 10:42:20 crc kubenswrapper[4772]: I1122 10:42:20.257527 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5g5n6" Nov 22 10:42:20 crc 
kubenswrapper[4772]: I1122 10:42:20.257653 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5g5n6" Nov 22 10:42:22 crc kubenswrapper[4772]: I1122 10:42:22.564597 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-5g5n6" podUID="59d55861-971b-404f-9926-4d41f07f0880" containerName="registry-server" probeResult="failure" output=< Nov 22 10:42:22 crc kubenswrapper[4772]: timeout: failed to connect service ":50051" within 1s Nov 22 10:42:22 crc kubenswrapper[4772]: > Nov 22 10:42:24 crc kubenswrapper[4772]: I1122 10:42:24.746660 4772 patch_prober.go:28] interesting pod/downloads-7954f5f757-v2gm9 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Nov 22 10:42:24 crc kubenswrapper[4772]: I1122 10:42:24.747090 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-v2gm9" podUID="a8554d37-40ae-41ef-bed9-7c79b3f8083e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Nov 22 10:42:24 crc kubenswrapper[4772]: I1122 10:42:24.746755 4772 patch_prober.go:28] interesting pod/downloads-7954f5f757-v2gm9 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Nov 22 10:42:24 crc kubenswrapper[4772]: I1122 10:42:24.747179 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-v2gm9" podUID="a8554d37-40ae-41ef-bed9-7c79b3f8083e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Nov 22 10:42:26 crc kubenswrapper[4772]: I1122 10:42:26.815440 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n6b26" event={"ID":"a99b0a8a-5c55-480c-8a9c-274f995658cb","Type":"ContainerStarted","Data":"67602c185cc9069230a9cecd4c959a1880eb4c62b4a1670e2532380f93cee1c3"} Nov 22 10:42:27 crc kubenswrapper[4772]: I1122 10:42:27.846008 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-n6b26" podStartSLOduration=4.796925541 podStartE2EDuration="1m37.845985923s" podCreationTimestamp="2025-11-22 10:40:50 +0000 UTC" firstStartedPulling="2025-11-22 10:40:51.880684545 +0000 UTC m=+172.120129039" lastFinishedPulling="2025-11-22 10:42:24.929744917 +0000 UTC m=+265.169189421" observedRunningTime="2025-11-22 10:42:27.845317617 +0000 UTC m=+268.084762161" watchObservedRunningTime="2025-11-22 10:42:27.845985923 +0000 UTC m=+268.085430417" Nov 22 10:42:30 crc kubenswrapper[4772]: I1122 10:42:30.339384 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5g5n6" Nov 22 10:42:30 crc kubenswrapper[4772]: I1122 10:42:30.403168 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5g5n6" Nov 22 10:42:30 crc kubenswrapper[4772]: I1122 10:42:30.580640 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-n6b26" Nov 22 10:42:30 crc kubenswrapper[4772]: I1122 
10:42:30.580725 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-n6b26" Nov 22 10:42:30 crc kubenswrapper[4772]: I1122 10:42:30.640339 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-n6b26" Nov 22 10:42:39 crc kubenswrapper[4772]: I1122 10:42:34.746701 4772 patch_prober.go:28] interesting pod/downloads-7954f5f757-v2gm9 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Nov 22 10:42:39 crc kubenswrapper[4772]: I1122 10:42:34.747689 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-v2gm9" podUID="a8554d37-40ae-41ef-bed9-7c79b3f8083e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Nov 22 10:42:39 crc kubenswrapper[4772]: I1122 10:42:34.746788 4772 patch_prober.go:28] interesting pod/downloads-7954f5f757-v2gm9 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Nov 22 10:42:39 crc kubenswrapper[4772]: I1122 10:42:34.748158 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-v2gm9" podUID="a8554d37-40ae-41ef-bed9-7c79b3f8083e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Nov 22 10:42:39 crc kubenswrapper[4772]: I1122 10:42:34.867731 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m47st" event={"ID":"1c4be727-3b88-4540-a5c9-9c6d19978537","Type":"ContainerStarted","Data":"ee0c9d6d2827e5c6c009743c9523a65a454c4269967ce2bcd99355239cb83b78"} Nov 22 10:42:39 crc kubenswrapper[4772]: I1122 10:42:34.870197 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-stqj9" event={"ID":"41028362-8aac-4ba3-a617-c7b65e42a368","Type":"ContainerStarted","Data":"32f2dd9898fcf3a2dc5bc5847a40c0170068518df1b14c9ae39f6dee3d21195b"} Nov 22 10:42:39 crc kubenswrapper[4772]: I1122 10:42:35.905300 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-stqj9" podStartSLOduration=8.29474293 podStartE2EDuration="1m43.905265375s" podCreationTimestamp="2025-11-22 10:40:52 +0000 UTC" firstStartedPulling="2025-11-22 10:40:55.054889726 +0000 UTC m=+175.294334220" lastFinishedPulling="2025-11-22 10:42:30.665412171 +0000 UTC m=+270.904856665" observedRunningTime="2025-11-22 10:42:35.904018303 +0000 UTC m=+276.143462857" watchObservedRunningTime="2025-11-22 10:42:35.905265375 +0000 UTC m=+276.144709919" Nov 22 10:42:39 crc kubenswrapper[4772]: I1122 10:42:35.938798 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-m47st" podStartSLOduration=5.027641495 podStartE2EDuration="1m45.938766425s" podCreationTimestamp="2025-11-22 10:40:50 +0000 UTC" firstStartedPulling="2025-11-22 10:40:51.859146009 +0000 UTC m=+172.098590503" lastFinishedPulling="2025-11-22 10:42:32.770270899 +0000 UTC m=+273.009715433" observedRunningTime="2025-11-22 10:42:35.93415793 +0000 UTC 
m=+276.173602434" watchObservedRunningTime="2025-11-22 10:42:35.938766425 +0000 UTC m=+276.178210929" Nov 22 10:42:41 crc kubenswrapper[4772]: I1122 10:42:40.440507 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-m47st" Nov 22 10:42:41 crc kubenswrapper[4772]: I1122 10:42:40.441628 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-m47st" Nov 22 10:42:41 crc kubenswrapper[4772]: I1122 10:42:40.486890 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-m47st" Nov 22 10:42:41 crc kubenswrapper[4772]: I1122 10:42:40.635337 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-n6b26" Nov 22 10:42:41 crc kubenswrapper[4772]: I1122 10:42:40.716476 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-n6b26"] Nov 22 10:42:41 crc kubenswrapper[4772]: I1122 10:42:40.905356 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-n6b26" podUID="a99b0a8a-5c55-480c-8a9c-274f995658cb" containerName="registry-server" containerID="cri-o://67602c185cc9069230a9cecd4c959a1880eb4c62b4a1670e2532380f93cee1c3" gracePeriod=2 Nov 22 10:42:41 crc kubenswrapper[4772]: I1122 10:42:40.947354 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-m47st" Nov 22 10:42:42 crc kubenswrapper[4772]: I1122 10:42:42.559403 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-stqj9" Nov 22 10:42:42 crc kubenswrapper[4772]: I1122 10:42:42.560146 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-stqj9" Nov 22 10:42:42 crc kubenswrapper[4772]: I1122 10:42:42.626105 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-stqj9" Nov 22 10:42:42 crc kubenswrapper[4772]: I1122 10:42:42.924237 4772 generic.go:334] "Generic (PLEG): container finished" podID="a99b0a8a-5c55-480c-8a9c-274f995658cb" containerID="67602c185cc9069230a9cecd4c959a1880eb4c62b4a1670e2532380f93cee1c3" exitCode=0 Nov 22 10:42:42 crc kubenswrapper[4772]: I1122 10:42:42.924322 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n6b26" event={"ID":"a99b0a8a-5c55-480c-8a9c-274f995658cb","Type":"ContainerDied","Data":"67602c185cc9069230a9cecd4c959a1880eb4c62b4a1670e2532380f93cee1c3"} Nov 22 10:42:42 crc kubenswrapper[4772]: I1122 10:42:42.983461 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-stqj9" Nov 22 10:42:43 crc kubenswrapper[4772]: I1122 10:42:43.340177 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m47st"] Nov 22 10:42:43 crc kubenswrapper[4772]: I1122 10:42:43.340699 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-m47st" podUID="1c4be727-3b88-4540-a5c9-9c6d19978537" containerName="registry-server" containerID="cri-o://ee0c9d6d2827e5c6c009743c9523a65a454c4269967ce2bcd99355239cb83b78" gracePeriod=2 Nov 22 10:42:44 crc kubenswrapper[4772]: I1122 10:42:44.754901 4772 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-v2gm9" Nov 22 10:42:45 crc kubenswrapper[4772]: I1122 10:42:45.122112 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-stqj9"] Nov 22 10:42:45 crc kubenswrapper[4772]: I1122 10:42:45.122441 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-stqj9" podUID="41028362-8aac-4ba3-a617-c7b65e42a368" containerName="registry-server" containerID="cri-o://32f2dd9898fcf3a2dc5bc5847a40c0170068518df1b14c9ae39f6dee3d21195b" gracePeriod=2 Nov 22 10:42:45 crc kubenswrapper[4772]: I1122 10:42:45.955161 4772 generic.go:334] "Generic (PLEG): container finished" podID="1c4be727-3b88-4540-a5c9-9c6d19978537" containerID="ee0c9d6d2827e5c6c009743c9523a65a454c4269967ce2bcd99355239cb83b78" exitCode=0 Nov 22 10:42:45 crc kubenswrapper[4772]: I1122 10:42:45.955295 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m47st" event={"ID":"1c4be727-3b88-4540-a5c9-9c6d19978537","Type":"ContainerDied","Data":"ee0c9d6d2827e5c6c009743c9523a65a454c4269967ce2bcd99355239cb83b78"} Nov 22 10:42:47 crc kubenswrapper[4772]: I1122 10:42:47.976845 4772 generic.go:334] "Generic (PLEG): container finished" podID="41028362-8aac-4ba3-a617-c7b65e42a368" containerID="32f2dd9898fcf3a2dc5bc5847a40c0170068518df1b14c9ae39f6dee3d21195b" exitCode=0 Nov 22 10:42:47 crc kubenswrapper[4772]: I1122 10:42:47.976933 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-stqj9" event={"ID":"41028362-8aac-4ba3-a617-c7b65e42a368","Type":"ContainerDied","Data":"32f2dd9898fcf3a2dc5bc5847a40c0170068518df1b14c9ae39f6dee3d21195b"} Nov 22 10:42:50 crc kubenswrapper[4772]: E1122 10:42:50.440807 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ee0c9d6d2827e5c6c009743c9523a65a454c4269967ce2bcd99355239cb83b78 is running failed: container process not found" containerID="ee0c9d6d2827e5c6c009743c9523a65a454c4269967ce2bcd99355239cb83b78" cmd=["grpc_health_probe","-addr=:50051"] Nov 22 10:42:50 crc kubenswrapper[4772]: E1122 10:42:50.441614 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ee0c9d6d2827e5c6c009743c9523a65a454c4269967ce2bcd99355239cb83b78 is running failed: container process not found" containerID="ee0c9d6d2827e5c6c009743c9523a65a454c4269967ce2bcd99355239cb83b78" cmd=["grpc_health_probe","-addr=:50051"] Nov 22 10:42:50 crc kubenswrapper[4772]: E1122 10:42:50.442242 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ee0c9d6d2827e5c6c009743c9523a65a454c4269967ce2bcd99355239cb83b78 is running failed: container process not found" containerID="ee0c9d6d2827e5c6c009743c9523a65a454c4269967ce2bcd99355239cb83b78" cmd=["grpc_health_probe","-addr=:50051"] Nov 22 10:42:50 crc kubenswrapper[4772]: E1122 10:42:50.442374 4772 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ee0c9d6d2827e5c6c009743c9523a65a454c4269967ce2bcd99355239cb83b78 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-m47st" 
podUID="1c4be727-3b88-4540-a5c9-9c6d19978537" containerName="registry-server" Nov 22 10:42:50 crc kubenswrapper[4772]: E1122 10:42:50.581410 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 67602c185cc9069230a9cecd4c959a1880eb4c62b4a1670e2532380f93cee1c3 is running failed: container process not found" containerID="67602c185cc9069230a9cecd4c959a1880eb4c62b4a1670e2532380f93cee1c3" cmd=["grpc_health_probe","-addr=:50051"] Nov 22 10:42:50 crc kubenswrapper[4772]: E1122 10:42:50.582264 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 67602c185cc9069230a9cecd4c959a1880eb4c62b4a1670e2532380f93cee1c3 is running failed: container process not found" containerID="67602c185cc9069230a9cecd4c959a1880eb4c62b4a1670e2532380f93cee1c3" cmd=["grpc_health_probe","-addr=:50051"] Nov 22 10:42:50 crc kubenswrapper[4772]: E1122 10:42:50.582874 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 67602c185cc9069230a9cecd4c959a1880eb4c62b4a1670e2532380f93cee1c3 is running failed: container process not found" containerID="67602c185cc9069230a9cecd4c959a1880eb4c62b4a1670e2532380f93cee1c3" cmd=["grpc_health_probe","-addr=:50051"] Nov 22 10:42:50 crc kubenswrapper[4772]: E1122 10:42:50.582941 4772 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 67602c185cc9069230a9cecd4c959a1880eb4c62b4a1670e2532380f93cee1c3 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-n6b26" podUID="a99b0a8a-5c55-480c-8a9c-274f995658cb" containerName="registry-server" Nov 22 10:42:52 crc kubenswrapper[4772]: E1122 10:42:52.560755 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 32f2dd9898fcf3a2dc5bc5847a40c0170068518df1b14c9ae39f6dee3d21195b is running failed: container process not found" containerID="32f2dd9898fcf3a2dc5bc5847a40c0170068518df1b14c9ae39f6dee3d21195b" cmd=["grpc_health_probe","-addr=:50051"] Nov 22 10:42:52 crc kubenswrapper[4772]: E1122 10:42:52.561797 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 32f2dd9898fcf3a2dc5bc5847a40c0170068518df1b14c9ae39f6dee3d21195b is running failed: container process not found" containerID="32f2dd9898fcf3a2dc5bc5847a40c0170068518df1b14c9ae39f6dee3d21195b" cmd=["grpc_health_probe","-addr=:50051"] Nov 22 10:42:52 crc kubenswrapper[4772]: E1122 10:42:52.562338 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 32f2dd9898fcf3a2dc5bc5847a40c0170068518df1b14c9ae39f6dee3d21195b is running failed: container process not found" containerID="32f2dd9898fcf3a2dc5bc5847a40c0170068518df1b14c9ae39f6dee3d21195b" cmd=["grpc_health_probe","-addr=:50051"] Nov 22 10:42:52 crc kubenswrapper[4772]: E1122 10:42:52.562371 4772 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 32f2dd9898fcf3a2dc5bc5847a40c0170068518df1b14c9ae39f6dee3d21195b is running failed: container process not found" 
probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-stqj9" podUID="41028362-8aac-4ba3-a617-c7b65e42a368" containerName="registry-server" Nov 22 10:42:53 crc kubenswrapper[4772]: I1122 10:42:53.760042 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n6b26" Nov 22 10:42:53 crc kubenswrapper[4772]: I1122 10:42:53.788478 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntj9v\" (UniqueName: \"kubernetes.io/projected/a99b0a8a-5c55-480c-8a9c-274f995658cb-kube-api-access-ntj9v\") pod \"a99b0a8a-5c55-480c-8a9c-274f995658cb\" (UID: \"a99b0a8a-5c55-480c-8a9c-274f995658cb\") " Nov 22 10:42:53 crc kubenswrapper[4772]: I1122 10:42:53.788549 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a99b0a8a-5c55-480c-8a9c-274f995658cb-catalog-content\") pod \"a99b0a8a-5c55-480c-8a9c-274f995658cb\" (UID: \"a99b0a8a-5c55-480c-8a9c-274f995658cb\") " Nov 22 10:42:53 crc kubenswrapper[4772]: I1122 10:42:53.788585 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a99b0a8a-5c55-480c-8a9c-274f995658cb-utilities\") pod \"a99b0a8a-5c55-480c-8a9c-274f995658cb\" (UID: \"a99b0a8a-5c55-480c-8a9c-274f995658cb\") " Nov 22 10:42:53 crc kubenswrapper[4772]: I1122 10:42:53.790866 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a99b0a8a-5c55-480c-8a9c-274f995658cb-utilities" (OuterVolumeSpecName: "utilities") pod "a99b0a8a-5c55-480c-8a9c-274f995658cb" (UID: "a99b0a8a-5c55-480c-8a9c-274f995658cb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 10:42:53 crc kubenswrapper[4772]: I1122 10:42:53.803481 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a99b0a8a-5c55-480c-8a9c-274f995658cb-kube-api-access-ntj9v" (OuterVolumeSpecName: "kube-api-access-ntj9v") pod "a99b0a8a-5c55-480c-8a9c-274f995658cb" (UID: "a99b0a8a-5c55-480c-8a9c-274f995658cb"). InnerVolumeSpecName "kube-api-access-ntj9v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:42:53 crc kubenswrapper[4772]: I1122 10:42:53.890361 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntj9v\" (UniqueName: \"kubernetes.io/projected/a99b0a8a-5c55-480c-8a9c-274f995658cb-kube-api-access-ntj9v\") on node \"crc\" DevicePath \"\"" Nov 22 10:42:53 crc kubenswrapper[4772]: I1122 10:42:53.890413 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a99b0a8a-5c55-480c-8a9c-274f995658cb-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 10:42:54 crc kubenswrapper[4772]: I1122 10:42:54.029389 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n6b26" event={"ID":"a99b0a8a-5c55-480c-8a9c-274f995658cb","Type":"ContainerDied","Data":"47d8c153c7981f2cb23946153944489389715d087541a9fb9f1dbc63a0db6460"} Nov 22 10:42:54 crc kubenswrapper[4772]: I1122 10:42:54.029461 4772 scope.go:117] "RemoveContainer" containerID="67602c185cc9069230a9cecd4c959a1880eb4c62b4a1670e2532380f93cee1c3" Nov 22 10:42:54 crc kubenswrapper[4772]: I1122 10:42:54.029611 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-n6b26" Nov 22 10:42:54 crc kubenswrapper[4772]: I1122 10:42:54.326508 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a99b0a8a-5c55-480c-8a9c-274f995658cb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a99b0a8a-5c55-480c-8a9c-274f995658cb" (UID: "a99b0a8a-5c55-480c-8a9c-274f995658cb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 10:42:54 crc kubenswrapper[4772]: I1122 10:42:54.362294 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-n6b26"] Nov 22 10:42:54 crc kubenswrapper[4772]: I1122 10:42:54.365165 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-n6b26"] Nov 22 10:42:54 crc kubenswrapper[4772]: I1122 10:42:54.397647 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a99b0a8a-5c55-480c-8a9c-274f995658cb-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 10:42:55 crc kubenswrapper[4772]: I1122 10:42:55.427629 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a99b0a8a-5c55-480c-8a9c-274f995658cb" path="/var/lib/kubelet/pods/a99b0a8a-5c55-480c-8a9c-274f995658cb/volumes" Nov 22 10:42:57 crc kubenswrapper[4772]: I1122 10:42:57.843400 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-stqj9" Nov 22 10:42:57 crc kubenswrapper[4772]: I1122 10:42:57.849456 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mqt4\" (UniqueName: \"kubernetes.io/projected/41028362-8aac-4ba3-a617-c7b65e42a368-kube-api-access-5mqt4\") pod \"41028362-8aac-4ba3-a617-c7b65e42a368\" (UID: \"41028362-8aac-4ba3-a617-c7b65e42a368\") " Nov 22 10:42:57 crc kubenswrapper[4772]: I1122 10:42:57.849517 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41028362-8aac-4ba3-a617-c7b65e42a368-catalog-content\") pod \"41028362-8aac-4ba3-a617-c7b65e42a368\" (UID: \"41028362-8aac-4ba3-a617-c7b65e42a368\") " Nov 22 10:42:57 crc kubenswrapper[4772]: I1122 10:42:57.849557 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41028362-8aac-4ba3-a617-c7b65e42a368-utilities\") pod \"41028362-8aac-4ba3-a617-c7b65e42a368\" (UID: \"41028362-8aac-4ba3-a617-c7b65e42a368\") " Nov 22 10:42:57 crc kubenswrapper[4772]: I1122 10:42:57.850697 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41028362-8aac-4ba3-a617-c7b65e42a368-utilities" (OuterVolumeSpecName: "utilities") pod "41028362-8aac-4ba3-a617-c7b65e42a368" (UID: "41028362-8aac-4ba3-a617-c7b65e42a368"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 10:42:57 crc kubenswrapper[4772]: I1122 10:42:57.851284 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-m47st" Nov 22 10:42:57 crc kubenswrapper[4772]: I1122 10:42:57.859883 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41028362-8aac-4ba3-a617-c7b65e42a368-kube-api-access-5mqt4" (OuterVolumeSpecName: "kube-api-access-5mqt4") pod "41028362-8aac-4ba3-a617-c7b65e42a368" (UID: "41028362-8aac-4ba3-a617-c7b65e42a368"). InnerVolumeSpecName "kube-api-access-5mqt4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:42:57 crc kubenswrapper[4772]: I1122 10:42:57.871979 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41028362-8aac-4ba3-a617-c7b65e42a368-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "41028362-8aac-4ba3-a617-c7b65e42a368" (UID: "41028362-8aac-4ba3-a617-c7b65e42a368"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 10:42:57 crc kubenswrapper[4772]: I1122 10:42:57.950908 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c4be727-3b88-4540-a5c9-9c6d19978537-catalog-content\") pod \"1c4be727-3b88-4540-a5c9-9c6d19978537\" (UID: \"1c4be727-3b88-4540-a5c9-9c6d19978537\") " Nov 22 10:42:57 crc kubenswrapper[4772]: I1122 10:42:57.950998 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c4be727-3b88-4540-a5c9-9c6d19978537-utilities\") pod \"1c4be727-3b88-4540-a5c9-9c6d19978537\" (UID: \"1c4be727-3b88-4540-a5c9-9c6d19978537\") " Nov 22 10:42:57 crc kubenswrapper[4772]: I1122 10:42:57.951026 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xc6gd\" (UniqueName: \"kubernetes.io/projected/1c4be727-3b88-4540-a5c9-9c6d19978537-kube-api-access-xc6gd\") pod \"1c4be727-3b88-4540-a5c9-9c6d19978537\" (UID: \"1c4be727-3b88-4540-a5c9-9c6d19978537\") " Nov 22 10:42:57 crc kubenswrapper[4772]: I1122 10:42:57.951263 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mqt4\" (UniqueName: \"kubernetes.io/projected/41028362-8aac-4ba3-a617-c7b65e42a368-kube-api-access-5mqt4\") on node \"crc\" DevicePath \"\"" Nov 22 10:42:57 crc kubenswrapper[4772]: I1122 10:42:57.951280 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41028362-8aac-4ba3-a617-c7b65e42a368-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 10:42:57 crc kubenswrapper[4772]: I1122 10:42:57.951293 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41028362-8aac-4ba3-a617-c7b65e42a368-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 10:42:57 crc kubenswrapper[4772]: I1122 10:42:57.952925 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c4be727-3b88-4540-a5c9-9c6d19978537-utilities" (OuterVolumeSpecName: "utilities") pod "1c4be727-3b88-4540-a5c9-9c6d19978537" (UID: "1c4be727-3b88-4540-a5c9-9c6d19978537"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 10:42:57 crc kubenswrapper[4772]: I1122 10:42:57.957190 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c4be727-3b88-4540-a5c9-9c6d19978537-kube-api-access-xc6gd" (OuterVolumeSpecName: "kube-api-access-xc6gd") pod "1c4be727-3b88-4540-a5c9-9c6d19978537" (UID: "1c4be727-3b88-4540-a5c9-9c6d19978537"). InnerVolumeSpecName "kube-api-access-xc6gd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:42:58 crc kubenswrapper[4772]: I1122 10:42:58.052985 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c4be727-3b88-4540-a5c9-9c6d19978537-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 10:42:58 crc kubenswrapper[4772]: I1122 10:42:58.053037 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xc6gd\" (UniqueName: \"kubernetes.io/projected/1c4be727-3b88-4540-a5c9-9c6d19978537-kube-api-access-xc6gd\") on node \"crc\" DevicePath \"\"" Nov 22 10:42:58 crc kubenswrapper[4772]: I1122 10:42:58.073443 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m47st" event={"ID":"1c4be727-3b88-4540-a5c9-9c6d19978537","Type":"ContainerDied","Data":"4ef65a561fee431e2c2537479fbc7f66131ced3191e492c61b8c5f2515b8815e"} Nov 22 10:42:58 crc kubenswrapper[4772]: I1122 10:42:58.073474 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m47st" Nov 22 10:42:58 crc kubenswrapper[4772]: I1122 10:42:58.076883 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-stqj9" event={"ID":"41028362-8aac-4ba3-a617-c7b65e42a368","Type":"ContainerDied","Data":"410775998cef2552d1413f489d601fe684816639a3890c20b64f009421ea45b2"} Nov 22 10:42:58 crc kubenswrapper[4772]: I1122 10:42:58.076989 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-stqj9" Nov 22 10:42:58 crc kubenswrapper[4772]: I1122 10:42:58.110393 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-stqj9"] Nov 22 10:42:58 crc kubenswrapper[4772]: I1122 10:42:58.115563 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-stqj9"] Nov 22 10:42:58 crc kubenswrapper[4772]: I1122 10:42:58.980375 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c4be727-3b88-4540-a5c9-9c6d19978537-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1c4be727-3b88-4540-a5c9-9c6d19978537" (UID: "1c4be727-3b88-4540-a5c9-9c6d19978537"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 10:42:59 crc kubenswrapper[4772]: I1122 10:42:59.065724 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c4be727-3b88-4540-a5c9-9c6d19978537-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 10:42:59 crc kubenswrapper[4772]: I1122 10:42:59.313113 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m47st"] Nov 22 10:42:59 crc kubenswrapper[4772]: I1122 10:42:59.318325 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-m47st"] Nov 22 10:42:59 crc kubenswrapper[4772]: I1122 10:42:59.429041 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c4be727-3b88-4540-a5c9-9c6d19978537" path="/var/lib/kubelet/pods/1c4be727-3b88-4540-a5c9-9c6d19978537/volumes" Nov 22 10:42:59 crc kubenswrapper[4772]: I1122 10:42:59.430219 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41028362-8aac-4ba3-a617-c7b65e42a368" path="/var/lib/kubelet/pods/41028362-8aac-4ba3-a617-c7b65e42a368/volumes" Nov 22 10:43:00 crc kubenswrapper[4772]: I1122 10:43:00.947085 4772 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Nov 22 10:43:02 crc kubenswrapper[4772]: I1122 10:43:02.771535 4772 scope.go:117] "RemoveContainer" containerID="0542181219c6f2a5d4b663f2c6452f018a9b2b0a88700dfe00a3c32e993afff8" Nov 22 10:43:05 crc kubenswrapper[4772]: I1122 10:43:05.030092 4772 scope.go:117] "RemoveContainer" containerID="f1e124f295076bc82c67d7c9d255f62eb37f8ddd9fe864a65889afa0cc649a9a" Nov 22 10:43:13 crc kubenswrapper[4772]: I1122 10:43:13.035457 4772 scope.go:117] "RemoveContainer" containerID="ee0c9d6d2827e5c6c009743c9523a65a454c4269967ce2bcd99355239cb83b78" Nov 22 10:43:13 crc kubenswrapper[4772]: I1122 10:43:13.092704 4772 scope.go:117] "RemoveContainer" containerID="5e1bcdc41487c568741912cf484256c0dd1e77ca717d02b24aa015064d2828fa" Nov 22 10:43:13 crc kubenswrapper[4772]: I1122 10:43:13.144426 4772 scope.go:117] "RemoveContainer" containerID="0796d9ea1c6d215ac173fe52d71dde2a5ae6b773618d9488d5cd8349568dedaa" Nov 22 10:43:13 crc kubenswrapper[4772]: I1122 10:43:13.177854 4772 scope.go:117] "RemoveContainer" containerID="32f2dd9898fcf3a2dc5bc5847a40c0170068518df1b14c9ae39f6dee3d21195b" Nov 22 10:43:13 crc kubenswrapper[4772]: I1122 10:43:13.223960 4772 scope.go:117] "RemoveContainer" containerID="c6ddaf123f47348fd0a1f5af52621dbb76c51e49a786c56a983b78b4dde88763" Nov 22 10:43:13 crc kubenswrapper[4772]: I1122 10:43:13.243487 4772 scope.go:117] "RemoveContainer" containerID="810c0dae5107811965fde58c06cf3e9f040f08d959738f17ec0ec1f4a24a044a" Nov 22 10:43:15 crc kubenswrapper[4772]: I1122 10:43:15.223325 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hq9vf" event={"ID":"c6f45ecf-368b-4e09-83d6-c0620de2c97e","Type":"ContainerStarted","Data":"b8f85d91e08bd34bd86bfe8fd7db1b12c3a26d23ae0f21d6a1fa009b818b12d1"} Nov 22 10:43:16 crc kubenswrapper[4772]: I1122 10:43:16.232553 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r9d42" event={"ID":"dd2e49a9-b8b7-47f6-a489-1d5ac70105b9","Type":"ContainerStarted","Data":"119042e0e3d1a478ef015b645be77f3b477c6238b354c7a19a8246e508454547"} Nov 22 10:43:17 crc kubenswrapper[4772]: I1122 10:43:17.241864 4772 generic.go:334] "Generic (PLEG): container finished" 
podID="c6f45ecf-368b-4e09-83d6-c0620de2c97e" containerID="b8f85d91e08bd34bd86bfe8fd7db1b12c3a26d23ae0f21d6a1fa009b818b12d1" exitCode=0 Nov 22 10:43:17 crc kubenswrapper[4772]: I1122 10:43:17.241944 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hq9vf" event={"ID":"c6f45ecf-368b-4e09-83d6-c0620de2c97e","Type":"ContainerDied","Data":"b8f85d91e08bd34bd86bfe8fd7db1b12c3a26d23ae0f21d6a1fa009b818b12d1"} Nov 22 10:43:17 crc kubenswrapper[4772]: I1122 10:43:17.246724 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xdkbm" event={"ID":"a35d30b7-cf6e-4d08-add4-2c9b27342e5d","Type":"ContainerStarted","Data":"7ea45cfff6226f323b3bcf9873eed10e2ad33424b589d4af6e7c5cb28e4513b3"} Nov 22 10:43:17 crc kubenswrapper[4772]: I1122 10:43:17.252322 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r9d42" event={"ID":"dd2e49a9-b8b7-47f6-a489-1d5ac70105b9","Type":"ContainerDied","Data":"119042e0e3d1a478ef015b645be77f3b477c6238b354c7a19a8246e508454547"} Nov 22 10:43:17 crc kubenswrapper[4772]: I1122 10:43:17.252221 4772 generic.go:334] "Generic (PLEG): container finished" podID="dd2e49a9-b8b7-47f6-a489-1d5ac70105b9" containerID="119042e0e3d1a478ef015b645be77f3b477c6238b354c7a19a8246e508454547" exitCode=0 Nov 22 10:43:17 crc kubenswrapper[4772]: I1122 10:43:17.253964 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-59qml" event={"ID":"64157fec-2673-403c-96c5-2cbaf3ca17a2","Type":"ContainerStarted","Data":"be806e1ca01b4a52aa6612da85f80b0e2ec11cd6840f5426bc17d1eab19b344c"} Nov 22 10:43:18 crc kubenswrapper[4772]: I1122 10:43:18.262995 4772 generic.go:334] "Generic (PLEG): container finished" podID="a35d30b7-cf6e-4d08-add4-2c9b27342e5d" containerID="7ea45cfff6226f323b3bcf9873eed10e2ad33424b589d4af6e7c5cb28e4513b3" exitCode=0 Nov 22 10:43:18 crc kubenswrapper[4772]: I1122 10:43:18.263093 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xdkbm" event={"ID":"a35d30b7-cf6e-4d08-add4-2c9b27342e5d","Type":"ContainerDied","Data":"7ea45cfff6226f323b3bcf9873eed10e2ad33424b589d4af6e7c5cb28e4513b3"} Nov 22 10:43:18 crc kubenswrapper[4772]: I1122 10:43:18.265068 4772 generic.go:334] "Generic (PLEG): container finished" podID="64157fec-2673-403c-96c5-2cbaf3ca17a2" containerID="be806e1ca01b4a52aa6612da85f80b0e2ec11cd6840f5426bc17d1eab19b344c" exitCode=0 Nov 22 10:43:18 crc kubenswrapper[4772]: I1122 10:43:18.265130 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-59qml" event={"ID":"64157fec-2673-403c-96c5-2cbaf3ca17a2","Type":"ContainerDied","Data":"be806e1ca01b4a52aa6612da85f80b0e2ec11cd6840f5426bc17d1eab19b344c"} Nov 22 10:43:45 crc kubenswrapper[4772]: I1122 10:43:45.497518 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-59qml" event={"ID":"64157fec-2673-403c-96c5-2cbaf3ca17a2","Type":"ContainerStarted","Data":"f30c8925c88996ed3e94b67fccc47e3ccfc437a3a27224216c8e6c469e941186"} Nov 22 10:43:45 crc kubenswrapper[4772]: I1122 10:43:45.500207 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hq9vf" event={"ID":"c6f45ecf-368b-4e09-83d6-c0620de2c97e","Type":"ContainerStarted","Data":"533280a02ffb7173a849e71aacd57fafc6d05b593c31d49eb4a340844be51f95"} Nov 22 10:43:45 crc kubenswrapper[4772]: I1122 10:43:45.502798 4772 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xdkbm" event={"ID":"a35d30b7-cf6e-4d08-add4-2c9b27342e5d","Type":"ContainerStarted","Data":"dd470460ea9ced1cf2908cb12b1399fd361b1c8d96038269632391ecdd566672"} Nov 22 10:43:45 crc kubenswrapper[4772]: I1122 10:43:45.505245 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r9d42" event={"ID":"dd2e49a9-b8b7-47f6-a489-1d5ac70105b9","Type":"ContainerStarted","Data":"b010e63354c554558502cb6e76d09980534154df74e5a3e2cb0cfafcaa01959e"} Nov 22 10:43:45 crc kubenswrapper[4772]: I1122 10:43:45.528466 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-59qml" podStartSLOduration=4.060381797 podStartE2EDuration="2m56.52844929s" podCreationTimestamp="2025-11-22 10:40:49 +0000 UTC" firstStartedPulling="2025-11-22 10:40:51.842465291 +0000 UTC m=+172.081909785" lastFinishedPulling="2025-11-22 10:43:44.310532734 +0000 UTC m=+344.549977278" observedRunningTime="2025-11-22 10:43:45.526563652 +0000 UTC m=+345.766008156" watchObservedRunningTime="2025-11-22 10:43:45.52844929 +0000 UTC m=+345.767893784" Nov 22 10:43:45 crc kubenswrapper[4772]: I1122 10:43:45.636548 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hq9vf" podStartSLOduration=11.316772534 podStartE2EDuration="2m53.636530476s" podCreationTimestamp="2025-11-22 10:40:52 +0000 UTC" firstStartedPulling="2025-11-22 10:40:55.029460949 +0000 UTC m=+175.268905443" lastFinishedPulling="2025-11-22 10:43:37.349218841 +0000 UTC m=+337.588663385" observedRunningTime="2025-11-22 10:43:45.604173385 +0000 UTC m=+345.843617879" watchObservedRunningTime="2025-11-22 10:43:45.636530476 +0000 UTC m=+345.875974970" Nov 22 10:43:45 crc kubenswrapper[4772]: I1122 10:43:45.637170 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xdkbm" podStartSLOduration=4.268463358 podStartE2EDuration="2m54.637165852s" podCreationTimestamp="2025-11-22 10:40:51 +0000 UTC" firstStartedPulling="2025-11-22 10:40:53.956078656 +0000 UTC m=+174.195523150" lastFinishedPulling="2025-11-22 10:43:44.32478111 +0000 UTC m=+344.564225644" observedRunningTime="2025-11-22 10:43:45.633254722 +0000 UTC m=+345.872699216" watchObservedRunningTime="2025-11-22 10:43:45.637165852 +0000 UTC m=+345.876610346" Nov 22 10:43:45 crc kubenswrapper[4772]: I1122 10:43:45.655496 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-r9d42" podStartSLOduration=11.917774394 podStartE2EDuration="2m52.655477092s" podCreationTimestamp="2025-11-22 10:40:53 +0000 UTC" firstStartedPulling="2025-11-22 10:40:55.092090013 +0000 UTC m=+175.331534507" lastFinishedPulling="2025-11-22 10:43:35.829792701 +0000 UTC m=+336.069237205" observedRunningTime="2025-11-22 10:43:45.653496521 +0000 UTC m=+345.892941015" watchObservedRunningTime="2025-11-22 10:43:45.655477092 +0000 UTC m=+345.894921586" Nov 22 10:43:50 crc kubenswrapper[4772]: I1122 10:43:50.304954 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-59qml" Nov 22 10:43:50 crc kubenswrapper[4772]: I1122 10:43:50.305850 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-59qml" Nov 22 10:43:50 crc kubenswrapper[4772]: I1122 10:43:50.343358 4772 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-59qml" Nov 22 10:43:50 crc kubenswrapper[4772]: I1122 10:43:50.578553 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-59qml" Nov 22 10:43:52 crc kubenswrapper[4772]: I1122 10:43:52.162871 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xdkbm" Nov 22 10:43:52 crc kubenswrapper[4772]: I1122 10:43:52.162964 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xdkbm" Nov 22 10:43:52 crc kubenswrapper[4772]: I1122 10:43:52.225759 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xdkbm" Nov 22 10:43:52 crc kubenswrapper[4772]: I1122 10:43:52.578735 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xdkbm" Nov 22 10:43:53 crc kubenswrapper[4772]: I1122 10:43:53.161102 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hq9vf" Nov 22 10:43:53 crc kubenswrapper[4772]: I1122 10:43:53.161648 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hq9vf" Nov 22 10:43:53 crc kubenswrapper[4772]: I1122 10:43:53.199887 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hq9vf" Nov 22 10:43:53 crc kubenswrapper[4772]: I1122 10:43:53.566792 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-r9d42" Nov 22 10:43:53 crc kubenswrapper[4772]: I1122 10:43:53.566939 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-r9d42" Nov 22 10:43:53 crc kubenswrapper[4772]: I1122 10:43:53.591238 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hq9vf" Nov 22 10:43:53 crc kubenswrapper[4772]: I1122 10:43:53.620276 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-r9d42" Nov 22 10:43:54 crc kubenswrapper[4772]: I1122 10:43:54.591544 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-r9d42" Nov 22 10:43:55 crc kubenswrapper[4772]: I1122 10:43:55.571423 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r9d42"] Nov 22 10:43:56 crc kubenswrapper[4772]: I1122 10:43:56.823542 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7gdmn"] Nov 22 10:43:57 crc kubenswrapper[4772]: I1122 10:43:57.568629 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-r9d42" podUID="dd2e49a9-b8b7-47f6-a489-1d5ac70105b9" containerName="registry-server" containerID="cri-o://b010e63354c554558502cb6e76d09980534154df74e5a3e2cb0cfafcaa01959e" gracePeriod=2 Nov 22 10:43:57 crc kubenswrapper[4772]: I1122 10:43:57.975502 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-r9d42" Nov 22 10:43:58 crc kubenswrapper[4772]: I1122 10:43:58.056600 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd2e49a9-b8b7-47f6-a489-1d5ac70105b9-utilities\") pod \"dd2e49a9-b8b7-47f6-a489-1d5ac70105b9\" (UID: \"dd2e49a9-b8b7-47f6-a489-1d5ac70105b9\") " Nov 22 10:43:58 crc kubenswrapper[4772]: I1122 10:43:58.056696 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tprwl\" (UniqueName: \"kubernetes.io/projected/dd2e49a9-b8b7-47f6-a489-1d5ac70105b9-kube-api-access-tprwl\") pod \"dd2e49a9-b8b7-47f6-a489-1d5ac70105b9\" (UID: \"dd2e49a9-b8b7-47f6-a489-1d5ac70105b9\") " Nov 22 10:43:58 crc kubenswrapper[4772]: I1122 10:43:58.056808 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd2e49a9-b8b7-47f6-a489-1d5ac70105b9-catalog-content\") pod \"dd2e49a9-b8b7-47f6-a489-1d5ac70105b9\" (UID: \"dd2e49a9-b8b7-47f6-a489-1d5ac70105b9\") " Nov 22 10:43:58 crc kubenswrapper[4772]: I1122 10:43:58.057643 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd2e49a9-b8b7-47f6-a489-1d5ac70105b9-utilities" (OuterVolumeSpecName: "utilities") pod "dd2e49a9-b8b7-47f6-a489-1d5ac70105b9" (UID: "dd2e49a9-b8b7-47f6-a489-1d5ac70105b9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 10:43:58 crc kubenswrapper[4772]: I1122 10:43:58.064463 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd2e49a9-b8b7-47f6-a489-1d5ac70105b9-kube-api-access-tprwl" (OuterVolumeSpecName: "kube-api-access-tprwl") pod "dd2e49a9-b8b7-47f6-a489-1d5ac70105b9" (UID: "dd2e49a9-b8b7-47f6-a489-1d5ac70105b9"). InnerVolumeSpecName "kube-api-access-tprwl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:43:58 crc kubenswrapper[4772]: I1122 10:43:58.149006 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd2e49a9-b8b7-47f6-a489-1d5ac70105b9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dd2e49a9-b8b7-47f6-a489-1d5ac70105b9" (UID: "dd2e49a9-b8b7-47f6-a489-1d5ac70105b9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 10:43:58 crc kubenswrapper[4772]: I1122 10:43:58.159796 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd2e49a9-b8b7-47f6-a489-1d5ac70105b9-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 10:43:58 crc kubenswrapper[4772]: I1122 10:43:58.160009 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tprwl\" (UniqueName: \"kubernetes.io/projected/dd2e49a9-b8b7-47f6-a489-1d5ac70105b9-kube-api-access-tprwl\") on node \"crc\" DevicePath \"\"" Nov 22 10:43:58 crc kubenswrapper[4772]: I1122 10:43:58.160110 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd2e49a9-b8b7-47f6-a489-1d5ac70105b9-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 10:43:58 crc kubenswrapper[4772]: I1122 10:43:58.576522 4772 generic.go:334] "Generic (PLEG): container finished" podID="dd2e49a9-b8b7-47f6-a489-1d5ac70105b9" containerID="b010e63354c554558502cb6e76d09980534154df74e5a3e2cb0cfafcaa01959e" exitCode=0 Nov 22 10:43:58 crc kubenswrapper[4772]: I1122 10:43:58.576641 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r9d42" Nov 22 10:43:58 crc kubenswrapper[4772]: I1122 10:43:58.576638 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r9d42" event={"ID":"dd2e49a9-b8b7-47f6-a489-1d5ac70105b9","Type":"ContainerDied","Data":"b010e63354c554558502cb6e76d09980534154df74e5a3e2cb0cfafcaa01959e"} Nov 22 10:43:58 crc kubenswrapper[4772]: I1122 10:43:58.577093 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r9d42" event={"ID":"dd2e49a9-b8b7-47f6-a489-1d5ac70105b9","Type":"ContainerDied","Data":"4ab6edda094cc58edea50d430c5d6b95e36ec94e1f5b917e4bf38e106f009ca6"} Nov 22 10:43:58 crc kubenswrapper[4772]: I1122 10:43:58.577118 4772 scope.go:117] "RemoveContainer" containerID="b010e63354c554558502cb6e76d09980534154df74e5a3e2cb0cfafcaa01959e" Nov 22 10:43:58 crc kubenswrapper[4772]: I1122 10:43:58.600989 4772 scope.go:117] "RemoveContainer" containerID="119042e0e3d1a478ef015b645be77f3b477c6238b354c7a19a8246e508454547" Nov 22 10:43:58 crc kubenswrapper[4772]: I1122 10:43:58.623115 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r9d42"] Nov 22 10:43:58 crc kubenswrapper[4772]: I1122 10:43:58.627656 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-r9d42"] Nov 22 10:43:58 crc kubenswrapper[4772]: I1122 10:43:58.641409 4772 scope.go:117] "RemoveContainer" containerID="3815937d5c4d0ef4a4b5e53d921a3fb8b425c30503f9ec3aec69daab86709bbf" Nov 22 10:43:58 crc kubenswrapper[4772]: I1122 10:43:58.654686 4772 scope.go:117] "RemoveContainer" containerID="b010e63354c554558502cb6e76d09980534154df74e5a3e2cb0cfafcaa01959e" Nov 22 10:43:58 crc kubenswrapper[4772]: E1122 10:43:58.655210 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b010e63354c554558502cb6e76d09980534154df74e5a3e2cb0cfafcaa01959e\": container with ID starting with b010e63354c554558502cb6e76d09980534154df74e5a3e2cb0cfafcaa01959e not found: ID does not exist" containerID="b010e63354c554558502cb6e76d09980534154df74e5a3e2cb0cfafcaa01959e" Nov 22 10:43:58 crc kubenswrapper[4772]: I1122 10:43:58.655248 4772 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b010e63354c554558502cb6e76d09980534154df74e5a3e2cb0cfafcaa01959e"} err="failed to get container status \"b010e63354c554558502cb6e76d09980534154df74e5a3e2cb0cfafcaa01959e\": rpc error: code = NotFound desc = could not find container \"b010e63354c554558502cb6e76d09980534154df74e5a3e2cb0cfafcaa01959e\": container with ID starting with b010e63354c554558502cb6e76d09980534154df74e5a3e2cb0cfafcaa01959e not found: ID does not exist" Nov 22 10:43:58 crc kubenswrapper[4772]: I1122 10:43:58.655283 4772 scope.go:117] "RemoveContainer" containerID="119042e0e3d1a478ef015b645be77f3b477c6238b354c7a19a8246e508454547" Nov 22 10:43:58 crc kubenswrapper[4772]: E1122 10:43:58.655557 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"119042e0e3d1a478ef015b645be77f3b477c6238b354c7a19a8246e508454547\": container with ID starting with 119042e0e3d1a478ef015b645be77f3b477c6238b354c7a19a8246e508454547 not found: ID does not exist" containerID="119042e0e3d1a478ef015b645be77f3b477c6238b354c7a19a8246e508454547" Nov 22 10:43:58 crc kubenswrapper[4772]: I1122 10:43:58.655582 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"119042e0e3d1a478ef015b645be77f3b477c6238b354c7a19a8246e508454547"} err="failed to get container status \"119042e0e3d1a478ef015b645be77f3b477c6238b354c7a19a8246e508454547\": rpc error: code = NotFound desc = could not find container \"119042e0e3d1a478ef015b645be77f3b477c6238b354c7a19a8246e508454547\": container with ID starting with 119042e0e3d1a478ef015b645be77f3b477c6238b354c7a19a8246e508454547 not found: ID does not exist" Nov 22 10:43:58 crc kubenswrapper[4772]: I1122 10:43:58.655596 4772 scope.go:117] "RemoveContainer" containerID="3815937d5c4d0ef4a4b5e53d921a3fb8b425c30503f9ec3aec69daab86709bbf" Nov 22 10:43:58 crc kubenswrapper[4772]: E1122 10:43:58.655964 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3815937d5c4d0ef4a4b5e53d921a3fb8b425c30503f9ec3aec69daab86709bbf\": container with ID starting with 3815937d5c4d0ef4a4b5e53d921a3fb8b425c30503f9ec3aec69daab86709bbf not found: ID does not exist" containerID="3815937d5c4d0ef4a4b5e53d921a3fb8b425c30503f9ec3aec69daab86709bbf" Nov 22 10:43:58 crc kubenswrapper[4772]: I1122 10:43:58.655988 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3815937d5c4d0ef4a4b5e53d921a3fb8b425c30503f9ec3aec69daab86709bbf"} err="failed to get container status \"3815937d5c4d0ef4a4b5e53d921a3fb8b425c30503f9ec3aec69daab86709bbf\": rpc error: code = NotFound desc = could not find container \"3815937d5c4d0ef4a4b5e53d921a3fb8b425c30503f9ec3aec69daab86709bbf\": container with ID starting with 3815937d5c4d0ef4a4b5e53d921a3fb8b425c30503f9ec3aec69daab86709bbf not found: ID does not exist" Nov 22 10:43:59 crc kubenswrapper[4772]: I1122 10:43:59.420191 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd2e49a9-b8b7-47f6-a489-1d5ac70105b9" path="/var/lib/kubelet/pods/dd2e49a9-b8b7-47f6-a489-1d5ac70105b9/volumes" Nov 22 10:44:21 crc kubenswrapper[4772]: I1122 10:44:21.875499 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-7gdmn" podUID="f529668b-54db-49e7-92cb-c3cf6b986dce" containerName="oauth-openshift" 
containerID="cri-o://a5aa90fe15fffa4257251c0fe68b579bea6ef7142980e92269afb5f6770f11c5" gracePeriod=15 Nov 22 10:44:22 crc kubenswrapper[4772]: I1122 10:44:22.765110 4772 generic.go:334] "Generic (PLEG): container finished" podID="f529668b-54db-49e7-92cb-c3cf6b986dce" containerID="a5aa90fe15fffa4257251c0fe68b579bea6ef7142980e92269afb5f6770f11c5" exitCode=0 Nov 22 10:44:22 crc kubenswrapper[4772]: I1122 10:44:22.765336 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-7gdmn" event={"ID":"f529668b-54db-49e7-92cb-c3cf6b986dce","Type":"ContainerDied","Data":"a5aa90fe15fffa4257251c0fe68b579bea6ef7142980e92269afb5f6770f11c5"} Nov 22 10:44:22 crc kubenswrapper[4772]: I1122 10:44:22.879386 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-7gdmn" Nov 22 10:44:22 crc kubenswrapper[4772]: I1122 10:44:22.916731 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-55fc6d54cb-7cxr5"] Nov 22 10:44:22 crc kubenswrapper[4772]: E1122 10:44:22.918965 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f529668b-54db-49e7-92cb-c3cf6b986dce" containerName="oauth-openshift" Nov 22 10:44:22 crc kubenswrapper[4772]: I1122 10:44:22.918992 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="f529668b-54db-49e7-92cb-c3cf6b986dce" containerName="oauth-openshift" Nov 22 10:44:22 crc kubenswrapper[4772]: E1122 10:44:22.919002 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a99b0a8a-5c55-480c-8a9c-274f995658cb" containerName="registry-server" Nov 22 10:44:22 crc kubenswrapper[4772]: I1122 10:44:22.919008 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="a99b0a8a-5c55-480c-8a9c-274f995658cb" containerName="registry-server" Nov 22 10:44:22 crc kubenswrapper[4772]: E1122 10:44:22.919021 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd2e49a9-b8b7-47f6-a489-1d5ac70105b9" containerName="registry-server" Nov 22 10:44:22 crc kubenswrapper[4772]: I1122 10:44:22.919027 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd2e49a9-b8b7-47f6-a489-1d5ac70105b9" containerName="registry-server" Nov 22 10:44:22 crc kubenswrapper[4772]: E1122 10:44:22.919036 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd2e49a9-b8b7-47f6-a489-1d5ac70105b9" containerName="extract-utilities" Nov 22 10:44:22 crc kubenswrapper[4772]: I1122 10:44:22.919061 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd2e49a9-b8b7-47f6-a489-1d5ac70105b9" containerName="extract-utilities" Nov 22 10:44:22 crc kubenswrapper[4772]: E1122 10:44:22.919070 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6c25bbc-1eb4-4d98-bb37-37a5871aa4f2" containerName="pruner" Nov 22 10:44:22 crc kubenswrapper[4772]: I1122 10:44:22.919076 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6c25bbc-1eb4-4d98-bb37-37a5871aa4f2" containerName="pruner" Nov 22 10:44:22 crc kubenswrapper[4772]: E1122 10:44:22.919088 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd2e49a9-b8b7-47f6-a489-1d5ac70105b9" containerName="extract-content" Nov 22 10:44:22 crc kubenswrapper[4772]: I1122 10:44:22.919095 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd2e49a9-b8b7-47f6-a489-1d5ac70105b9" containerName="extract-content" Nov 22 10:44:22 crc kubenswrapper[4772]: E1122 10:44:22.919105 4772 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="a99b0a8a-5c55-480c-8a9c-274f995658cb" containerName="extract-content" Nov 22 10:44:22 crc kubenswrapper[4772]: I1122 10:44:22.919111 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="a99b0a8a-5c55-480c-8a9c-274f995658cb" containerName="extract-content" Nov 22 10:44:22 crc kubenswrapper[4772]: E1122 10:44:22.919119 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c4be727-3b88-4540-a5c9-9c6d19978537" containerName="extract-utilities" Nov 22 10:44:22 crc kubenswrapper[4772]: I1122 10:44:22.919125 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c4be727-3b88-4540-a5c9-9c6d19978537" containerName="extract-utilities" Nov 22 10:44:22 crc kubenswrapper[4772]: E1122 10:44:22.919135 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a99b0a8a-5c55-480c-8a9c-274f995658cb" containerName="extract-utilities" Nov 22 10:44:22 crc kubenswrapper[4772]: I1122 10:44:22.919141 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="a99b0a8a-5c55-480c-8a9c-274f995658cb" containerName="extract-utilities" Nov 22 10:44:22 crc kubenswrapper[4772]: E1122 10:44:22.919147 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41028362-8aac-4ba3-a617-c7b65e42a368" containerName="extract-content" Nov 22 10:44:22 crc kubenswrapper[4772]: I1122 10:44:22.919153 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="41028362-8aac-4ba3-a617-c7b65e42a368" containerName="extract-content" Nov 22 10:44:22 crc kubenswrapper[4772]: E1122 10:44:22.919161 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e7566ec-f873-4243-94e6-fe7c38cc07ae" containerName="pruner" Nov 22 10:44:22 crc kubenswrapper[4772]: I1122 10:44:22.919167 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e7566ec-f873-4243-94e6-fe7c38cc07ae" containerName="pruner" Nov 22 10:44:22 crc kubenswrapper[4772]: E1122 10:44:22.919175 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c4be727-3b88-4540-a5c9-9c6d19978537" containerName="registry-server" Nov 22 10:44:22 crc kubenswrapper[4772]: I1122 10:44:22.919182 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c4be727-3b88-4540-a5c9-9c6d19978537" containerName="registry-server" Nov 22 10:44:22 crc kubenswrapper[4772]: E1122 10:44:22.919187 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c4be727-3b88-4540-a5c9-9c6d19978537" containerName="extract-content" Nov 22 10:44:22 crc kubenswrapper[4772]: I1122 10:44:22.919194 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c4be727-3b88-4540-a5c9-9c6d19978537" containerName="extract-content" Nov 22 10:44:22 crc kubenswrapper[4772]: E1122 10:44:22.919202 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41028362-8aac-4ba3-a617-c7b65e42a368" containerName="registry-server" Nov 22 10:44:22 crc kubenswrapper[4772]: I1122 10:44:22.919208 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="41028362-8aac-4ba3-a617-c7b65e42a368" containerName="registry-server" Nov 22 10:44:22 crc kubenswrapper[4772]: E1122 10:44:22.919214 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41028362-8aac-4ba3-a617-c7b65e42a368" containerName="extract-utilities" Nov 22 10:44:22 crc kubenswrapper[4772]: I1122 10:44:22.919221 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="41028362-8aac-4ba3-a617-c7b65e42a368" containerName="extract-utilities" Nov 22 10:44:22 crc kubenswrapper[4772]: I1122 10:44:22.919355 4772 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="1c4be727-3b88-4540-a5c9-9c6d19978537" containerName="registry-server" Nov 22 10:44:22 crc kubenswrapper[4772]: I1122 10:44:22.919364 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd2e49a9-b8b7-47f6-a489-1d5ac70105b9" containerName="registry-server" Nov 22 10:44:22 crc kubenswrapper[4772]: I1122 10:44:22.919374 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="41028362-8aac-4ba3-a617-c7b65e42a368" containerName="registry-server" Nov 22 10:44:22 crc kubenswrapper[4772]: I1122 10:44:22.919384 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e7566ec-f873-4243-94e6-fe7c38cc07ae" containerName="pruner" Nov 22 10:44:22 crc kubenswrapper[4772]: I1122 10:44:22.919391 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="a99b0a8a-5c55-480c-8a9c-274f995658cb" containerName="registry-server" Nov 22 10:44:22 crc kubenswrapper[4772]: I1122 10:44:22.919399 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6c25bbc-1eb4-4d98-bb37-37a5871aa4f2" containerName="pruner" Nov 22 10:44:22 crc kubenswrapper[4772]: I1122 10:44:22.919409 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="f529668b-54db-49e7-92cb-c3cf6b986dce" containerName="oauth-openshift" Nov 22 10:44:22 crc kubenswrapper[4772]: I1122 10:44:22.919823 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-55fc6d54cb-7cxr5" Nov 22 10:44:22 crc kubenswrapper[4772]: I1122 10:44:22.928250 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-55fc6d54cb-7cxr5"] Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.023563 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f529668b-54db-49e7-92cb-c3cf6b986dce-v4-0-config-system-cliconfig\") pod \"f529668b-54db-49e7-92cb-c3cf6b986dce\" (UID: \"f529668b-54db-49e7-92cb-c3cf6b986dce\") " Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.023612 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f529668b-54db-49e7-92cb-c3cf6b986dce-audit-dir\") pod \"f529668b-54db-49e7-92cb-c3cf6b986dce\" (UID: \"f529668b-54db-49e7-92cb-c3cf6b986dce\") " Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.023642 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f529668b-54db-49e7-92cb-c3cf6b986dce-v4-0-config-system-trusted-ca-bundle\") pod \"f529668b-54db-49e7-92cb-c3cf6b986dce\" (UID: \"f529668b-54db-49e7-92cb-c3cf6b986dce\") " Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.023710 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f529668b-54db-49e7-92cb-c3cf6b986dce-v4-0-config-system-router-certs\") pod \"f529668b-54db-49e7-92cb-c3cf6b986dce\" (UID: \"f529668b-54db-49e7-92cb-c3cf6b986dce\") " Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.023731 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f529668b-54db-49e7-92cb-c3cf6b986dce-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f529668b-54db-49e7-92cb-c3cf6b986dce" (UID: "f529668b-54db-49e7-92cb-c3cf6b986dce"). 
InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.024546 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f529668b-54db-49e7-92cb-c3cf6b986dce-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "f529668b-54db-49e7-92cb-c3cf6b986dce" (UID: "f529668b-54db-49e7-92cb-c3cf6b986dce"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.023735 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f529668b-54db-49e7-92cb-c3cf6b986dce-v4-0-config-user-idp-0-file-data\") pod \"f529668b-54db-49e7-92cb-c3cf6b986dce\" (UID: \"f529668b-54db-49e7-92cb-c3cf6b986dce\") " Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.024623 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f529668b-54db-49e7-92cb-c3cf6b986dce-v4-0-config-user-template-error\") pod \"f529668b-54db-49e7-92cb-c3cf6b986dce\" (UID: \"f529668b-54db-49e7-92cb-c3cf6b986dce\") " Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.024650 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f529668b-54db-49e7-92cb-c3cf6b986dce-v4-0-config-user-template-login\") pod \"f529668b-54db-49e7-92cb-c3cf6b986dce\" (UID: \"f529668b-54db-49e7-92cb-c3cf6b986dce\") " Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.024677 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f529668b-54db-49e7-92cb-c3cf6b986dce-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "f529668b-54db-49e7-92cb-c3cf6b986dce" (UID: "f529668b-54db-49e7-92cb-c3cf6b986dce"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.024946 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ht25z\" (UniqueName: \"kubernetes.io/projected/f529668b-54db-49e7-92cb-c3cf6b986dce-kube-api-access-ht25z\") pod \"f529668b-54db-49e7-92cb-c3cf6b986dce\" (UID: \"f529668b-54db-49e7-92cb-c3cf6b986dce\") " Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.025009 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f529668b-54db-49e7-92cb-c3cf6b986dce-audit-policies\") pod \"f529668b-54db-49e7-92cb-c3cf6b986dce\" (UID: \"f529668b-54db-49e7-92cb-c3cf6b986dce\") " Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.025499 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f529668b-54db-49e7-92cb-c3cf6b986dce-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f529668b-54db-49e7-92cb-c3cf6b986dce" (UID: "f529668b-54db-49e7-92cb-c3cf6b986dce"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.025549 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f529668b-54db-49e7-92cb-c3cf6b986dce-v4-0-config-system-ocp-branding-template\") pod \"f529668b-54db-49e7-92cb-c3cf6b986dce\" (UID: \"f529668b-54db-49e7-92cb-c3cf6b986dce\") " Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.025834 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f529668b-54db-49e7-92cb-c3cf6b986dce-v4-0-config-system-session\") pod \"f529668b-54db-49e7-92cb-c3cf6b986dce\" (UID: \"f529668b-54db-49e7-92cb-c3cf6b986dce\") " Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.025864 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f529668b-54db-49e7-92cb-c3cf6b986dce-v4-0-config-user-template-provider-selection\") pod \"f529668b-54db-49e7-92cb-c3cf6b986dce\" (UID: \"f529668b-54db-49e7-92cb-c3cf6b986dce\") " Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.025888 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f529668b-54db-49e7-92cb-c3cf6b986dce-v4-0-config-system-service-ca\") pod \"f529668b-54db-49e7-92cb-c3cf6b986dce\" (UID: \"f529668b-54db-49e7-92cb-c3cf6b986dce\") " Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.025922 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f529668b-54db-49e7-92cb-c3cf6b986dce-v4-0-config-system-serving-cert\") pod \"f529668b-54db-49e7-92cb-c3cf6b986dce\" (UID: \"f529668b-54db-49e7-92cb-c3cf6b986dce\") " Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.026033 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/92734414-89d2-4d86-96ef-a2c832897096-v4-0-config-system-serving-cert\") pod \"oauth-openshift-55fc6d54cb-7cxr5\" (UID: \"92734414-89d2-4d86-96ef-a2c832897096\") " pod="openshift-authentication/oauth-openshift-55fc6d54cb-7cxr5" Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.026077 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/92734414-89d2-4d86-96ef-a2c832897096-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-55fc6d54cb-7cxr5\" (UID: \"92734414-89d2-4d86-96ef-a2c832897096\") " pod="openshift-authentication/oauth-openshift-55fc6d54cb-7cxr5" Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.026105 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/92734414-89d2-4d86-96ef-a2c832897096-audit-dir\") pod \"oauth-openshift-55fc6d54cb-7cxr5\" (UID: \"92734414-89d2-4d86-96ef-a2c832897096\") " pod="openshift-authentication/oauth-openshift-55fc6d54cb-7cxr5" Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.026126 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/92734414-89d2-4d86-96ef-a2c832897096-v4-0-config-system-router-certs\") pod \"oauth-openshift-55fc6d54cb-7cxr5\" (UID: \"92734414-89d2-4d86-96ef-a2c832897096\") " pod="openshift-authentication/oauth-openshift-55fc6d54cb-7cxr5" Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.026159 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/92734414-89d2-4d86-96ef-a2c832897096-v4-0-config-user-template-error\") pod \"oauth-openshift-55fc6d54cb-7cxr5\" (UID: \"92734414-89d2-4d86-96ef-a2c832897096\") " pod="openshift-authentication/oauth-openshift-55fc6d54cb-7cxr5" Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.026186 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/92734414-89d2-4d86-96ef-a2c832897096-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-55fc6d54cb-7cxr5\" (UID: \"92734414-89d2-4d86-96ef-a2c832897096\") " pod="openshift-authentication/oauth-openshift-55fc6d54cb-7cxr5" Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.026212 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mg9x\" (UniqueName: \"kubernetes.io/projected/92734414-89d2-4d86-96ef-a2c832897096-kube-api-access-4mg9x\") pod \"oauth-openshift-55fc6d54cb-7cxr5\" (UID: \"92734414-89d2-4d86-96ef-a2c832897096\") " pod="openshift-authentication/oauth-openshift-55fc6d54cb-7cxr5" Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.026234 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/92734414-89d2-4d86-96ef-a2c832897096-v4-0-config-system-service-ca\") pod \"oauth-openshift-55fc6d54cb-7cxr5\" (UID: \"92734414-89d2-4d86-96ef-a2c832897096\") " pod="openshift-authentication/oauth-openshift-55fc6d54cb-7cxr5" Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.026253 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/92734414-89d2-4d86-96ef-a2c832897096-audit-policies\") pod \"oauth-openshift-55fc6d54cb-7cxr5\" (UID: \"92734414-89d2-4d86-96ef-a2c832897096\") " pod="openshift-authentication/oauth-openshift-55fc6d54cb-7cxr5" Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.026270 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/92734414-89d2-4d86-96ef-a2c832897096-v4-0-config-user-template-login\") pod \"oauth-openshift-55fc6d54cb-7cxr5\" (UID: \"92734414-89d2-4d86-96ef-a2c832897096\") " pod="openshift-authentication/oauth-openshift-55fc6d54cb-7cxr5" Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.026294 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92734414-89d2-4d86-96ef-a2c832897096-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-55fc6d54cb-7cxr5\" (UID: \"92734414-89d2-4d86-96ef-a2c832897096\") " pod="openshift-authentication/oauth-openshift-55fc6d54cb-7cxr5" Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 
10:44:23.026320 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/92734414-89d2-4d86-96ef-a2c832897096-v4-0-config-system-session\") pod \"oauth-openshift-55fc6d54cb-7cxr5\" (UID: \"92734414-89d2-4d86-96ef-a2c832897096\") " pod="openshift-authentication/oauth-openshift-55fc6d54cb-7cxr5" Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.026339 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/92734414-89d2-4d86-96ef-a2c832897096-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-55fc6d54cb-7cxr5\" (UID: \"92734414-89d2-4d86-96ef-a2c832897096\") " pod="openshift-authentication/oauth-openshift-55fc6d54cb-7cxr5" Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.026357 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/92734414-89d2-4d86-96ef-a2c832897096-v4-0-config-system-cliconfig\") pod \"oauth-openshift-55fc6d54cb-7cxr5\" (UID: \"92734414-89d2-4d86-96ef-a2c832897096\") " pod="openshift-authentication/oauth-openshift-55fc6d54cb-7cxr5" Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.026393 4772 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f529668b-54db-49e7-92cb-c3cf6b986dce-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.026403 4772 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f529668b-54db-49e7-92cb-c3cf6b986dce-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.026412 4772 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f529668b-54db-49e7-92cb-c3cf6b986dce-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.026423 4772 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f529668b-54db-49e7-92cb-c3cf6b986dce-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.026875 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f529668b-54db-49e7-92cb-c3cf6b986dce-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "f529668b-54db-49e7-92cb-c3cf6b986dce" (UID: "f529668b-54db-49e7-92cb-c3cf6b986dce"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.030445 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f529668b-54db-49e7-92cb-c3cf6b986dce-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "f529668b-54db-49e7-92cb-c3cf6b986dce" (UID: "f529668b-54db-49e7-92cb-c3cf6b986dce"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.031304 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f529668b-54db-49e7-92cb-c3cf6b986dce-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "f529668b-54db-49e7-92cb-c3cf6b986dce" (UID: "f529668b-54db-49e7-92cb-c3cf6b986dce"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.035482 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f529668b-54db-49e7-92cb-c3cf6b986dce-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "f529668b-54db-49e7-92cb-c3cf6b986dce" (UID: "f529668b-54db-49e7-92cb-c3cf6b986dce"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.035712 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f529668b-54db-49e7-92cb-c3cf6b986dce-kube-api-access-ht25z" (OuterVolumeSpecName: "kube-api-access-ht25z") pod "f529668b-54db-49e7-92cb-c3cf6b986dce" (UID: "f529668b-54db-49e7-92cb-c3cf6b986dce"). InnerVolumeSpecName "kube-api-access-ht25z". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.036001 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f529668b-54db-49e7-92cb-c3cf6b986dce-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "f529668b-54db-49e7-92cb-c3cf6b986dce" (UID: "f529668b-54db-49e7-92cb-c3cf6b986dce"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.036377 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f529668b-54db-49e7-92cb-c3cf6b986dce-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "f529668b-54db-49e7-92cb-c3cf6b986dce" (UID: "f529668b-54db-49e7-92cb-c3cf6b986dce"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.036489 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f529668b-54db-49e7-92cb-c3cf6b986dce-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "f529668b-54db-49e7-92cb-c3cf6b986dce" (UID: "f529668b-54db-49e7-92cb-c3cf6b986dce"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.036783 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f529668b-54db-49e7-92cb-c3cf6b986dce-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "f529668b-54db-49e7-92cb-c3cf6b986dce" (UID: "f529668b-54db-49e7-92cb-c3cf6b986dce"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.040534 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f529668b-54db-49e7-92cb-c3cf6b986dce-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "f529668b-54db-49e7-92cb-c3cf6b986dce" (UID: "f529668b-54db-49e7-92cb-c3cf6b986dce"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.127868 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/92734414-89d2-4d86-96ef-a2c832897096-audit-dir\") pod \"oauth-openshift-55fc6d54cb-7cxr5\" (UID: \"92734414-89d2-4d86-96ef-a2c832897096\") " pod="openshift-authentication/oauth-openshift-55fc6d54cb-7cxr5" Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.127944 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/92734414-89d2-4d86-96ef-a2c832897096-v4-0-config-system-router-certs\") pod \"oauth-openshift-55fc6d54cb-7cxr5\" (UID: \"92734414-89d2-4d86-96ef-a2c832897096\") " pod="openshift-authentication/oauth-openshift-55fc6d54cb-7cxr5" Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.127996 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/92734414-89d2-4d86-96ef-a2c832897096-v4-0-config-user-template-error\") pod \"oauth-openshift-55fc6d54cb-7cxr5\" (UID: \"92734414-89d2-4d86-96ef-a2c832897096\") " pod="openshift-authentication/oauth-openshift-55fc6d54cb-7cxr5" Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.128023 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/92734414-89d2-4d86-96ef-a2c832897096-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-55fc6d54cb-7cxr5\" (UID: \"92734414-89d2-4d86-96ef-a2c832897096\") " pod="openshift-authentication/oauth-openshift-55fc6d54cb-7cxr5" Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.128068 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mg9x\" (UniqueName: \"kubernetes.io/projected/92734414-89d2-4d86-96ef-a2c832897096-kube-api-access-4mg9x\") pod \"oauth-openshift-55fc6d54cb-7cxr5\" (UID: \"92734414-89d2-4d86-96ef-a2c832897096\") " pod="openshift-authentication/oauth-openshift-55fc6d54cb-7cxr5" Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.128059 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/92734414-89d2-4d86-96ef-a2c832897096-audit-dir\") pod \"oauth-openshift-55fc6d54cb-7cxr5\" (UID: \"92734414-89d2-4d86-96ef-a2c832897096\") " pod="openshift-authentication/oauth-openshift-55fc6d54cb-7cxr5" Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.128087 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/92734414-89d2-4d86-96ef-a2c832897096-v4-0-config-system-service-ca\") pod \"oauth-openshift-55fc6d54cb-7cxr5\" (UID: \"92734414-89d2-4d86-96ef-a2c832897096\") " pod="openshift-authentication/oauth-openshift-55fc6d54cb-7cxr5" Nov 22 10:44:23 crc 
kubenswrapper[4772]: I1122 10:44:23.128172 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/92734414-89d2-4d86-96ef-a2c832897096-audit-policies\") pod \"oauth-openshift-55fc6d54cb-7cxr5\" (UID: \"92734414-89d2-4d86-96ef-a2c832897096\") " pod="openshift-authentication/oauth-openshift-55fc6d54cb-7cxr5" Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.128192 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/92734414-89d2-4d86-96ef-a2c832897096-v4-0-config-user-template-login\") pod \"oauth-openshift-55fc6d54cb-7cxr5\" (UID: \"92734414-89d2-4d86-96ef-a2c832897096\") " pod="openshift-authentication/oauth-openshift-55fc6d54cb-7cxr5" Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.128266 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92734414-89d2-4d86-96ef-a2c832897096-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-55fc6d54cb-7cxr5\" (UID: \"92734414-89d2-4d86-96ef-a2c832897096\") " pod="openshift-authentication/oauth-openshift-55fc6d54cb-7cxr5" Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.128327 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/92734414-89d2-4d86-96ef-a2c832897096-v4-0-config-system-session\") pod \"oauth-openshift-55fc6d54cb-7cxr5\" (UID: \"92734414-89d2-4d86-96ef-a2c832897096\") " pod="openshift-authentication/oauth-openshift-55fc6d54cb-7cxr5" Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.128350 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/92734414-89d2-4d86-96ef-a2c832897096-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-55fc6d54cb-7cxr5\" (UID: \"92734414-89d2-4d86-96ef-a2c832897096\") " pod="openshift-authentication/oauth-openshift-55fc6d54cb-7cxr5" Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.128394 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/92734414-89d2-4d86-96ef-a2c832897096-v4-0-config-system-cliconfig\") pod \"oauth-openshift-55fc6d54cb-7cxr5\" (UID: \"92734414-89d2-4d86-96ef-a2c832897096\") " pod="openshift-authentication/oauth-openshift-55fc6d54cb-7cxr5" Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.128450 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/92734414-89d2-4d86-96ef-a2c832897096-v4-0-config-system-serving-cert\") pod \"oauth-openshift-55fc6d54cb-7cxr5\" (UID: \"92734414-89d2-4d86-96ef-a2c832897096\") " pod="openshift-authentication/oauth-openshift-55fc6d54cb-7cxr5" Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.128493 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/92734414-89d2-4d86-96ef-a2c832897096-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-55fc6d54cb-7cxr5\" (UID: \"92734414-89d2-4d86-96ef-a2c832897096\") " pod="openshift-authentication/oauth-openshift-55fc6d54cb-7cxr5" Nov 22 10:44:23 
crc kubenswrapper[4772]: I1122 10:44:23.128575 4772 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f529668b-54db-49e7-92cb-c3cf6b986dce-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.128586 4772 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f529668b-54db-49e7-92cb-c3cf6b986dce-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.128598 4772 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f529668b-54db-49e7-92cb-c3cf6b986dce-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.128610 4772 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f529668b-54db-49e7-92cb-c3cf6b986dce-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.128621 4772 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f529668b-54db-49e7-92cb-c3cf6b986dce-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.128632 4772 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f529668b-54db-49e7-92cb-c3cf6b986dce-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.128647 4772 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f529668b-54db-49e7-92cb-c3cf6b986dce-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.128656 4772 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f529668b-54db-49e7-92cb-c3cf6b986dce-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.128669 4772 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f529668b-54db-49e7-92cb-c3cf6b986dce-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.128679 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ht25z\" (UniqueName: \"kubernetes.io/projected/f529668b-54db-49e7-92cb-c3cf6b986dce-kube-api-access-ht25z\") on node \"crc\" DevicePath \"\"" Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.128924 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/92734414-89d2-4d86-96ef-a2c832897096-v4-0-config-system-service-ca\") pod \"oauth-openshift-55fc6d54cb-7cxr5\" (UID: \"92734414-89d2-4d86-96ef-a2c832897096\") " pod="openshift-authentication/oauth-openshift-55fc6d54cb-7cxr5" Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.130757 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/92734414-89d2-4d86-96ef-a2c832897096-audit-policies\") pod \"oauth-openshift-55fc6d54cb-7cxr5\" (UID: \"92734414-89d2-4d86-96ef-a2c832897096\") " pod="openshift-authentication/oauth-openshift-55fc6d54cb-7cxr5" Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.131372 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92734414-89d2-4d86-96ef-a2c832897096-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-55fc6d54cb-7cxr5\" (UID: \"92734414-89d2-4d86-96ef-a2c832897096\") " pod="openshift-authentication/oauth-openshift-55fc6d54cb-7cxr5" Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.133207 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/92734414-89d2-4d86-96ef-a2c832897096-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-55fc6d54cb-7cxr5\" (UID: \"92734414-89d2-4d86-96ef-a2c832897096\") " pod="openshift-authentication/oauth-openshift-55fc6d54cb-7cxr5" Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.133220 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/92734414-89d2-4d86-96ef-a2c832897096-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-55fc6d54cb-7cxr5\" (UID: \"92734414-89d2-4d86-96ef-a2c832897096\") " pod="openshift-authentication/oauth-openshift-55fc6d54cb-7cxr5" Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.133381 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/92734414-89d2-4d86-96ef-a2c832897096-v4-0-config-system-cliconfig\") pod \"oauth-openshift-55fc6d54cb-7cxr5\" (UID: \"92734414-89d2-4d86-96ef-a2c832897096\") " pod="openshift-authentication/oauth-openshift-55fc6d54cb-7cxr5" Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.133852 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/92734414-89d2-4d86-96ef-a2c832897096-v4-0-config-user-template-error\") pod \"oauth-openshift-55fc6d54cb-7cxr5\" (UID: \"92734414-89d2-4d86-96ef-a2c832897096\") " pod="openshift-authentication/oauth-openshift-55fc6d54cb-7cxr5" Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.134083 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/92734414-89d2-4d86-96ef-a2c832897096-v4-0-config-system-router-certs\") pod \"oauth-openshift-55fc6d54cb-7cxr5\" (UID: \"92734414-89d2-4d86-96ef-a2c832897096\") " pod="openshift-authentication/oauth-openshift-55fc6d54cb-7cxr5" Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.134415 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/92734414-89d2-4d86-96ef-a2c832897096-v4-0-config-system-session\") pod \"oauth-openshift-55fc6d54cb-7cxr5\" (UID: \"92734414-89d2-4d86-96ef-a2c832897096\") " pod="openshift-authentication/oauth-openshift-55fc6d54cb-7cxr5" Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.135425 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/92734414-89d2-4d86-96ef-a2c832897096-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-55fc6d54cb-7cxr5\" (UID: \"92734414-89d2-4d86-96ef-a2c832897096\") " pod="openshift-authentication/oauth-openshift-55fc6d54cb-7cxr5" Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.136770 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/92734414-89d2-4d86-96ef-a2c832897096-v4-0-config-user-template-login\") pod \"oauth-openshift-55fc6d54cb-7cxr5\" (UID: \"92734414-89d2-4d86-96ef-a2c832897096\") " pod="openshift-authentication/oauth-openshift-55fc6d54cb-7cxr5" Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.140465 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/92734414-89d2-4d86-96ef-a2c832897096-v4-0-config-system-serving-cert\") pod \"oauth-openshift-55fc6d54cb-7cxr5\" (UID: \"92734414-89d2-4d86-96ef-a2c832897096\") " pod="openshift-authentication/oauth-openshift-55fc6d54cb-7cxr5" Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.160976 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mg9x\" (UniqueName: \"kubernetes.io/projected/92734414-89d2-4d86-96ef-a2c832897096-kube-api-access-4mg9x\") pod \"oauth-openshift-55fc6d54cb-7cxr5\" (UID: \"92734414-89d2-4d86-96ef-a2c832897096\") " pod="openshift-authentication/oauth-openshift-55fc6d54cb-7cxr5" Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.242881 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-55fc6d54cb-7cxr5" Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.520780 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-55fc6d54cb-7cxr5"] Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.774085 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-7gdmn" event={"ID":"f529668b-54db-49e7-92cb-c3cf6b986dce","Type":"ContainerDied","Data":"c53df660dc1b150fb9c979f4ddfc73326bc33b73e195c108d35ec6d23671f732"} Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.774437 4772 scope.go:117] "RemoveContainer" containerID="a5aa90fe15fffa4257251c0fe68b579bea6ef7142980e92269afb5f6770f11c5" Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.774184 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-7gdmn" Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.775252 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-55fc6d54cb-7cxr5" event={"ID":"92734414-89d2-4d86-96ef-a2c832897096","Type":"ContainerStarted","Data":"b7bada9139573eee89bb253762947de3d3f157faea6f11eefd4e495788fe6b3f"} Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.792087 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7gdmn"] Nov 22 10:44:23 crc kubenswrapper[4772]: I1122 10:44:23.797341 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7gdmn"] Nov 22 10:44:24 crc kubenswrapper[4772]: I1122 10:44:24.782689 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-55fc6d54cb-7cxr5" event={"ID":"92734414-89d2-4d86-96ef-a2c832897096","Type":"ContainerStarted","Data":"919cf8e2466d6ee5e38e82f2b734030d8d929d5d1dc566645371865d35708c8d"} Nov 22 10:44:24 crc kubenswrapper[4772]: I1122 10:44:24.783120 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-55fc6d54cb-7cxr5" Nov 22 10:44:24 crc kubenswrapper[4772]: I1122 10:44:24.792012 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-55fc6d54cb-7cxr5" Nov 22 10:44:24 crc kubenswrapper[4772]: I1122 10:44:24.817512 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-55fc6d54cb-7cxr5" podStartSLOduration=28.817475994 podStartE2EDuration="28.817475994s" podCreationTimestamp="2025-11-22 10:43:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:44:24.808638037 +0000 UTC m=+385.048082551" watchObservedRunningTime="2025-11-22 10:44:24.817475994 +0000 UTC m=+385.056920518" Nov 22 10:44:25 crc kubenswrapper[4772]: I1122 10:44:25.422495 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f529668b-54db-49e7-92cb-c3cf6b986dce" path="/var/lib/kubelet/pods/f529668b-54db-49e7-92cb-c3cf6b986dce/volumes" Nov 22 10:44:31 crc kubenswrapper[4772]: I1122 10:44:31.533756 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 10:44:31 crc kubenswrapper[4772]: I1122 10:44:31.534348 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 10:44:44 crc kubenswrapper[4772]: I1122 10:44:44.747926 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5g5n6"] Nov 22 10:44:44 crc kubenswrapper[4772]: I1122 10:44:44.749254 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5g5n6" podUID="59d55861-971b-404f-9926-4d41f07f0880" containerName="registry-server" 
containerID="cri-o://44e4e2fa32c694449370ba547c0d7ec63af38d8b1aa4c942d75e8f0d9aa2b0e7" gracePeriod=30 Nov 22 10:44:44 crc kubenswrapper[4772]: I1122 10:44:44.765187 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-59qml"] Nov 22 10:44:44 crc kubenswrapper[4772]: I1122 10:44:44.766018 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-59qml" podUID="64157fec-2673-403c-96c5-2cbaf3ca17a2" containerName="registry-server" containerID="cri-o://f30c8925c88996ed3e94b67fccc47e3ccfc437a3a27224216c8e6c469e941186" gracePeriod=30 Nov 22 10:44:44 crc kubenswrapper[4772]: I1122 10:44:44.779592 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8z66s"] Nov 22 10:44:44 crc kubenswrapper[4772]: I1122 10:44:44.780107 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-8z66s" podUID="d1d4b5fd-7de5-410f-b913-6fbb8e1c09b4" containerName="marketplace-operator" containerID="cri-o://31bc211aa333eb3489d88b1c16a6e3e66fa4bae88bfd3fe01a1a1e0a32b7f263" gracePeriod=30 Nov 22 10:44:44 crc kubenswrapper[4772]: I1122 10:44:44.791630 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xdkbm"] Nov 22 10:44:44 crc kubenswrapper[4772]: I1122 10:44:44.792156 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xdkbm" podUID="a35d30b7-cf6e-4d08-add4-2c9b27342e5d" containerName="registry-server" containerID="cri-o://dd470460ea9ced1cf2908cb12b1399fd361b1c8d96038269632391ecdd566672" gracePeriod=30 Nov 22 10:44:44 crc kubenswrapper[4772]: I1122 10:44:44.803001 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-54pmg"] Nov 22 10:44:44 crc kubenswrapper[4772]: I1122 10:44:44.804085 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-54pmg" Nov 22 10:44:44 crc kubenswrapper[4772]: I1122 10:44:44.818135 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hq9vf"] Nov 22 10:44:44 crc kubenswrapper[4772]: I1122 10:44:44.818496 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hq9vf" podUID="c6f45ecf-368b-4e09-83d6-c0620de2c97e" containerName="registry-server" containerID="cri-o://533280a02ffb7173a849e71aacd57fafc6d05b593c31d49eb4a340844be51f95" gracePeriod=30 Nov 22 10:44:44 crc kubenswrapper[4772]: I1122 10:44:44.819965 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-54pmg"] Nov 22 10:44:44 crc kubenswrapper[4772]: I1122 10:44:44.925682 4772 generic.go:334] "Generic (PLEG): container finished" podID="64157fec-2673-403c-96c5-2cbaf3ca17a2" containerID="f30c8925c88996ed3e94b67fccc47e3ccfc437a3a27224216c8e6c469e941186" exitCode=0 Nov 22 10:44:44 crc kubenswrapper[4772]: I1122 10:44:44.925772 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-59qml" event={"ID":"64157fec-2673-403c-96c5-2cbaf3ca17a2","Type":"ContainerDied","Data":"f30c8925c88996ed3e94b67fccc47e3ccfc437a3a27224216c8e6c469e941186"} Nov 22 10:44:44 crc kubenswrapper[4772]: I1122 10:44:44.927671 4772 generic.go:334] "Generic (PLEG): container finished" podID="59d55861-971b-404f-9926-4d41f07f0880" containerID="44e4e2fa32c694449370ba547c0d7ec63af38d8b1aa4c942d75e8f0d9aa2b0e7" exitCode=0 Nov 22 10:44:44 crc kubenswrapper[4772]: I1122 10:44:44.927710 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5g5n6" event={"ID":"59d55861-971b-404f-9926-4d41f07f0880","Type":"ContainerDied","Data":"44e4e2fa32c694449370ba547c0d7ec63af38d8b1aa4c942d75e8f0d9aa2b0e7"} Nov 22 10:44:44 crc kubenswrapper[4772]: I1122 10:44:44.931963 4772 generic.go:334] "Generic (PLEG): container finished" podID="a35d30b7-cf6e-4d08-add4-2c9b27342e5d" containerID="dd470460ea9ced1cf2908cb12b1399fd361b1c8d96038269632391ecdd566672" exitCode=0 Nov 22 10:44:44 crc kubenswrapper[4772]: I1122 10:44:44.932031 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xdkbm" event={"ID":"a35d30b7-cf6e-4d08-add4-2c9b27342e5d","Type":"ContainerDied","Data":"dd470460ea9ced1cf2908cb12b1399fd361b1c8d96038269632391ecdd566672"} Nov 22 10:44:44 crc kubenswrapper[4772]: I1122 10:44:44.937279 4772 generic.go:334] "Generic (PLEG): container finished" podID="d1d4b5fd-7de5-410f-b913-6fbb8e1c09b4" containerID="31bc211aa333eb3489d88b1c16a6e3e66fa4bae88bfd3fe01a1a1e0a32b7f263" exitCode=0 Nov 22 10:44:44 crc kubenswrapper[4772]: I1122 10:44:44.937373 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-8z66s" event={"ID":"d1d4b5fd-7de5-410f-b913-6fbb8e1c09b4","Type":"ContainerDied","Data":"31bc211aa333eb3489d88b1c16a6e3e66fa4bae88bfd3fe01a1a1e0a32b7f263"} Nov 22 10:44:44 crc kubenswrapper[4772]: I1122 10:44:44.976672 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9ea14368-36cf-45d4-b161-1f4412f8d675-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-54pmg\" (UID: \"9ea14368-36cf-45d4-b161-1f4412f8d675\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-54pmg" Nov 22 10:44:44 crc kubenswrapper[4772]: I1122 10:44:44.976734 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9ea14368-36cf-45d4-b161-1f4412f8d675-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-54pmg\" (UID: \"9ea14368-36cf-45d4-b161-1f4412f8d675\") " pod="openshift-marketplace/marketplace-operator-79b997595-54pmg" Nov 22 10:44:44 crc kubenswrapper[4772]: I1122 10:44:44.976769 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbrhp\" (UniqueName: \"kubernetes.io/projected/9ea14368-36cf-45d4-b161-1f4412f8d675-kube-api-access-jbrhp\") pod \"marketplace-operator-79b997595-54pmg\" (UID: \"9ea14368-36cf-45d4-b161-1f4412f8d675\") " pod="openshift-marketplace/marketplace-operator-79b997595-54pmg" Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.078102 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9ea14368-36cf-45d4-b161-1f4412f8d675-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-54pmg\" (UID: \"9ea14368-36cf-45d4-b161-1f4412f8d675\") " pod="openshift-marketplace/marketplace-operator-79b997595-54pmg" Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.078158 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9ea14368-36cf-45d4-b161-1f4412f8d675-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-54pmg\" (UID: \"9ea14368-36cf-45d4-b161-1f4412f8d675\") " pod="openshift-marketplace/marketplace-operator-79b997595-54pmg" Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.078189 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbrhp\" (UniqueName: \"kubernetes.io/projected/9ea14368-36cf-45d4-b161-1f4412f8d675-kube-api-access-jbrhp\") pod \"marketplace-operator-79b997595-54pmg\" (UID: \"9ea14368-36cf-45d4-b161-1f4412f8d675\") " pod="openshift-marketplace/marketplace-operator-79b997595-54pmg" Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.079557 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9ea14368-36cf-45d4-b161-1f4412f8d675-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-54pmg\" (UID: \"9ea14368-36cf-45d4-b161-1f4412f8d675\") " pod="openshift-marketplace/marketplace-operator-79b997595-54pmg" Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.089120 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9ea14368-36cf-45d4-b161-1f4412f8d675-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-54pmg\" (UID: \"9ea14368-36cf-45d4-b161-1f4412f8d675\") " pod="openshift-marketplace/marketplace-operator-79b997595-54pmg" Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.101579 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbrhp\" (UniqueName: \"kubernetes.io/projected/9ea14368-36cf-45d4-b161-1f4412f8d675-kube-api-access-jbrhp\") pod \"marketplace-operator-79b997595-54pmg\" (UID: \"9ea14368-36cf-45d4-b161-1f4412f8d675\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-54pmg" Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.212976 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-54pmg" Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.222835 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5g5n6" Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.241483 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-59qml" Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.241977 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-8z66s" Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.279878 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xdkbm" Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.291720 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hq9vf" Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.383351 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a35d30b7-cf6e-4d08-add4-2c9b27342e5d-utilities\") pod \"a35d30b7-cf6e-4d08-add4-2c9b27342e5d\" (UID: \"a35d30b7-cf6e-4d08-add4-2c9b27342e5d\") " Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.383417 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vpjx\" (UniqueName: \"kubernetes.io/projected/c6f45ecf-368b-4e09-83d6-c0620de2c97e-kube-api-access-6vpjx\") pod \"c6f45ecf-368b-4e09-83d6-c0620de2c97e\" (UID: \"c6f45ecf-368b-4e09-83d6-c0620de2c97e\") " Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.383473 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkf2f\" (UniqueName: \"kubernetes.io/projected/d1d4b5fd-7de5-410f-b913-6fbb8e1c09b4-kube-api-access-jkf2f\") pod \"d1d4b5fd-7de5-410f-b913-6fbb8e1c09b4\" (UID: \"d1d4b5fd-7de5-410f-b913-6fbb8e1c09b4\") " Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.383499 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngkkc\" (UniqueName: \"kubernetes.io/projected/59d55861-971b-404f-9926-4d41f07f0880-kube-api-access-ngkkc\") pod \"59d55861-971b-404f-9926-4d41f07f0880\" (UID: \"59d55861-971b-404f-9926-4d41f07f0880\") " Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.383525 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d1d4b5fd-7de5-410f-b913-6fbb8e1c09b4-marketplace-trusted-ca\") pod \"d1d4b5fd-7de5-410f-b913-6fbb8e1c09b4\" (UID: \"d1d4b5fd-7de5-410f-b913-6fbb8e1c09b4\") " Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.383548 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59d55861-971b-404f-9926-4d41f07f0880-catalog-content\") pod \"59d55861-971b-404f-9926-4d41f07f0880\" (UID: \"59d55861-971b-404f-9926-4d41f07f0880\") " Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.383590 4772 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-64fhp\" (UniqueName: \"kubernetes.io/projected/64157fec-2673-403c-96c5-2cbaf3ca17a2-kube-api-access-64fhp\") pod \"64157fec-2673-403c-96c5-2cbaf3ca17a2\" (UID: \"64157fec-2673-403c-96c5-2cbaf3ca17a2\") " Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.383615 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6f45ecf-368b-4e09-83d6-c0620de2c97e-utilities\") pod \"c6f45ecf-368b-4e09-83d6-c0620de2c97e\" (UID: \"c6f45ecf-368b-4e09-83d6-c0620de2c97e\") " Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.383648 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a35d30b7-cf6e-4d08-add4-2c9b27342e5d-catalog-content\") pod \"a35d30b7-cf6e-4d08-add4-2c9b27342e5d\" (UID: \"a35d30b7-cf6e-4d08-add4-2c9b27342e5d\") " Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.383672 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64157fec-2673-403c-96c5-2cbaf3ca17a2-utilities\") pod \"64157fec-2673-403c-96c5-2cbaf3ca17a2\" (UID: \"64157fec-2673-403c-96c5-2cbaf3ca17a2\") " Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.383708 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6f45ecf-368b-4e09-83d6-c0620de2c97e-catalog-content\") pod \"c6f45ecf-368b-4e09-83d6-c0620de2c97e\" (UID: \"c6f45ecf-368b-4e09-83d6-c0620de2c97e\") " Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.383748 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59d55861-971b-404f-9926-4d41f07f0880-utilities\") pod \"59d55861-971b-404f-9926-4d41f07f0880\" (UID: \"59d55861-971b-404f-9926-4d41f07f0880\") " Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.383782 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d1d4b5fd-7de5-410f-b913-6fbb8e1c09b4-marketplace-operator-metrics\") pod \"d1d4b5fd-7de5-410f-b913-6fbb8e1c09b4\" (UID: \"d1d4b5fd-7de5-410f-b913-6fbb8e1c09b4\") " Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.383804 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krpz6\" (UniqueName: \"kubernetes.io/projected/a35d30b7-cf6e-4d08-add4-2c9b27342e5d-kube-api-access-krpz6\") pod \"a35d30b7-cf6e-4d08-add4-2c9b27342e5d\" (UID: \"a35d30b7-cf6e-4d08-add4-2c9b27342e5d\") " Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.383833 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64157fec-2673-403c-96c5-2cbaf3ca17a2-catalog-content\") pod \"64157fec-2673-403c-96c5-2cbaf3ca17a2\" (UID: \"64157fec-2673-403c-96c5-2cbaf3ca17a2\") " Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.386084 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1d4b5fd-7de5-410f-b913-6fbb8e1c09b4-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "d1d4b5fd-7de5-410f-b913-6fbb8e1c09b4" (UID: "d1d4b5fd-7de5-410f-b913-6fbb8e1c09b4"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.386368 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59d55861-971b-404f-9926-4d41f07f0880-utilities" (OuterVolumeSpecName: "utilities") pod "59d55861-971b-404f-9926-4d41f07f0880" (UID: "59d55861-971b-404f-9926-4d41f07f0880"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.386937 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64157fec-2673-403c-96c5-2cbaf3ca17a2-utilities" (OuterVolumeSpecName: "utilities") pod "64157fec-2673-403c-96c5-2cbaf3ca17a2" (UID: "64157fec-2673-403c-96c5-2cbaf3ca17a2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.387308 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6f45ecf-368b-4e09-83d6-c0620de2c97e-utilities" (OuterVolumeSpecName: "utilities") pod "c6f45ecf-368b-4e09-83d6-c0620de2c97e" (UID: "c6f45ecf-368b-4e09-83d6-c0620de2c97e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.388245 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a35d30b7-cf6e-4d08-add4-2c9b27342e5d-utilities" (OuterVolumeSpecName: "utilities") pod "a35d30b7-cf6e-4d08-add4-2c9b27342e5d" (UID: "a35d30b7-cf6e-4d08-add4-2c9b27342e5d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.392453 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1d4b5fd-7de5-410f-b913-6fbb8e1c09b4-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "d1d4b5fd-7de5-410f-b913-6fbb8e1c09b4" (UID: "d1d4b5fd-7de5-410f-b913-6fbb8e1c09b4"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.392527 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1d4b5fd-7de5-410f-b913-6fbb8e1c09b4-kube-api-access-jkf2f" (OuterVolumeSpecName: "kube-api-access-jkf2f") pod "d1d4b5fd-7de5-410f-b913-6fbb8e1c09b4" (UID: "d1d4b5fd-7de5-410f-b913-6fbb8e1c09b4"). InnerVolumeSpecName "kube-api-access-jkf2f". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.393168 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64157fec-2673-403c-96c5-2cbaf3ca17a2-kube-api-access-64fhp" (OuterVolumeSpecName: "kube-api-access-64fhp") pod "64157fec-2673-403c-96c5-2cbaf3ca17a2" (UID: "64157fec-2673-403c-96c5-2cbaf3ca17a2"). InnerVolumeSpecName "kube-api-access-64fhp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.393256 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59d55861-971b-404f-9926-4d41f07f0880-kube-api-access-ngkkc" (OuterVolumeSpecName: "kube-api-access-ngkkc") pod "59d55861-971b-404f-9926-4d41f07f0880" (UID: "59d55861-971b-404f-9926-4d41f07f0880"). 
InnerVolumeSpecName "kube-api-access-ngkkc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.395453 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6f45ecf-368b-4e09-83d6-c0620de2c97e-kube-api-access-6vpjx" (OuterVolumeSpecName: "kube-api-access-6vpjx") pod "c6f45ecf-368b-4e09-83d6-c0620de2c97e" (UID: "c6f45ecf-368b-4e09-83d6-c0620de2c97e"). InnerVolumeSpecName "kube-api-access-6vpjx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.396958 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a35d30b7-cf6e-4d08-add4-2c9b27342e5d-kube-api-access-krpz6" (OuterVolumeSpecName: "kube-api-access-krpz6") pod "a35d30b7-cf6e-4d08-add4-2c9b27342e5d" (UID: "a35d30b7-cf6e-4d08-add4-2c9b27342e5d"). InnerVolumeSpecName "kube-api-access-krpz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.406009 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a35d30b7-cf6e-4d08-add4-2c9b27342e5d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a35d30b7-cf6e-4d08-add4-2c9b27342e5d" (UID: "a35d30b7-cf6e-4d08-add4-2c9b27342e5d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.438202 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59d55861-971b-404f-9926-4d41f07f0880-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "59d55861-971b-404f-9926-4d41f07f0880" (UID: "59d55861-971b-404f-9926-4d41f07f0880"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.471529 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64157fec-2673-403c-96c5-2cbaf3ca17a2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "64157fec-2673-403c-96c5-2cbaf3ca17a2" (UID: "64157fec-2673-403c-96c5-2cbaf3ca17a2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.486134 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59d55861-971b-404f-9926-4d41f07f0880-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.486168 4772 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d1d4b5fd-7de5-410f-b913-6fbb8e1c09b4-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.486181 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krpz6\" (UniqueName: \"kubernetes.io/projected/a35d30b7-cf6e-4d08-add4-2c9b27342e5d-kube-api-access-krpz6\") on node \"crc\" DevicePath \"\"" Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.486192 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64157fec-2673-403c-96c5-2cbaf3ca17a2-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.486202 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a35d30b7-cf6e-4d08-add4-2c9b27342e5d-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.486211 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vpjx\" (UniqueName: \"kubernetes.io/projected/c6f45ecf-368b-4e09-83d6-c0620de2c97e-kube-api-access-6vpjx\") on node \"crc\" DevicePath \"\"" Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.486220 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkf2f\" (UniqueName: \"kubernetes.io/projected/d1d4b5fd-7de5-410f-b913-6fbb8e1c09b4-kube-api-access-jkf2f\") on node \"crc\" DevicePath \"\"" Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.486229 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngkkc\" (UniqueName: \"kubernetes.io/projected/59d55861-971b-404f-9926-4d41f07f0880-kube-api-access-ngkkc\") on node \"crc\" DevicePath \"\"" Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.486238 4772 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d1d4b5fd-7de5-410f-b913-6fbb8e1c09b4-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.486246 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59d55861-971b-404f-9926-4d41f07f0880-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.486257 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64fhp\" (UniqueName: \"kubernetes.io/projected/64157fec-2673-403c-96c5-2cbaf3ca17a2-kube-api-access-64fhp\") on node \"crc\" DevicePath \"\"" Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.486264 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6f45ecf-368b-4e09-83d6-c0620de2c97e-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.486276 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/a35d30b7-cf6e-4d08-add4-2c9b27342e5d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.486287 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64157fec-2673-403c-96c5-2cbaf3ca17a2-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.515039 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6f45ecf-368b-4e09-83d6-c0620de2c97e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c6f45ecf-368b-4e09-83d6-c0620de2c97e" (UID: "c6f45ecf-368b-4e09-83d6-c0620de2c97e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.529788 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-54pmg"] Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.587088 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6f45ecf-368b-4e09-83d6-c0620de2c97e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.946662 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-59qml" event={"ID":"64157fec-2673-403c-96c5-2cbaf3ca17a2","Type":"ContainerDied","Data":"53172f132b73a25d4f7430840fc18df3ae77ccdd334f4860336adda997f5a507"} Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.947361 4772 scope.go:117] "RemoveContainer" containerID="f30c8925c88996ed3e94b67fccc47e3ccfc437a3a27224216c8e6c469e941186" Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.947240 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-59qml" Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.948407 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-54pmg" event={"ID":"9ea14368-36cf-45d4-b161-1f4412f8d675","Type":"ContainerStarted","Data":"bce4f3f227d7227b979254873cb840cabbe0d1a3c60980393465ee4f144c32b1"} Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.949022 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-54pmg" Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.949105 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-54pmg" event={"ID":"9ea14368-36cf-45d4-b161-1f4412f8d675","Type":"ContainerStarted","Data":"31ad5d467f623e50f52db6e2607812c8fdd9252ead3b6a555891759fdedf4d3a"} Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.951097 4772 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-54pmg container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.55:8080/healthz\": dial tcp 10.217.0.55:8080: connect: connection refused" start-of-body= Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.951168 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-54pmg" podUID="9ea14368-36cf-45d4-b161-1f4412f8d675" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.55:8080/healthz\": dial tcp 10.217.0.55:8080: connect: connection refused" Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.951191 4772 generic.go:334] "Generic (PLEG): container finished" podID="c6f45ecf-368b-4e09-83d6-c0620de2c97e" containerID="533280a02ffb7173a849e71aacd57fafc6d05b593c31d49eb4a340844be51f95" exitCode=0 Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.951235 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hq9vf" event={"ID":"c6f45ecf-368b-4e09-83d6-c0620de2c97e","Type":"ContainerDied","Data":"533280a02ffb7173a849e71aacd57fafc6d05b593c31d49eb4a340844be51f95"} Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.951282 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hq9vf" event={"ID":"c6f45ecf-368b-4e09-83d6-c0620de2c97e","Type":"ContainerDied","Data":"8b2e85daa8368470d0a4a3445ae05f2084c2239362292664223a41d8e4adcb72"} Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.951312 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hq9vf" Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.953201 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5g5n6" event={"ID":"59d55861-971b-404f-9926-4d41f07f0880","Type":"ContainerDied","Data":"cb8e89aa94ea0510aff5224ae0ceacf7c24dc1c5889fd40fd7bd1f2b3ba7d660"} Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.953324 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5g5n6" Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.962672 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xdkbm" event={"ID":"a35d30b7-cf6e-4d08-add4-2c9b27342e5d","Type":"ContainerDied","Data":"8f3c02f0ca05b0ae35a598dfd7604fe53b83d661507aa350801b0ca1cd0481bd"} Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.962783 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xdkbm" Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.967554 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-8z66s" event={"ID":"d1d4b5fd-7de5-410f-b913-6fbb8e1c09b4","Type":"ContainerDied","Data":"2f23a539d93d656adb1a67b8fae22cae58f0f8301b8582eebe33a9bcc5426e0c"} Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.967672 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-8z66s" Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.978766 4772 scope.go:117] "RemoveContainer" containerID="be806e1ca01b4a52aa6612da85f80b0e2ec11cd6840f5426bc17d1eab19b344c" Nov 22 10:44:45 crc kubenswrapper[4772]: I1122 10:44:45.983977 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-54pmg" podStartSLOduration=1.983951378 podStartE2EDuration="1.983951378s" podCreationTimestamp="2025-11-22 10:44:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:44:45.983514416 +0000 UTC m=+406.222958930" watchObservedRunningTime="2025-11-22 10:44:45.983951378 +0000 UTC m=+406.223395872" Nov 22 10:44:46 crc kubenswrapper[4772]: I1122 10:44:46.011134 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5g5n6"] Nov 22 10:44:46 crc kubenswrapper[4772]: I1122 10:44:46.016296 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5g5n6"] Nov 22 10:44:46 crc kubenswrapper[4772]: I1122 10:44:46.017158 4772 scope.go:117] "RemoveContainer" containerID="b571c7a729c692ff0eb7ffbca2900a511d414d4b1aaca7b74308ccdb167978a7" Nov 22 10:44:46 crc kubenswrapper[4772]: I1122 10:44:46.038567 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-59qml"] Nov 22 10:44:46 crc kubenswrapper[4772]: I1122 10:44:46.042757 4772 scope.go:117] "RemoveContainer" containerID="533280a02ffb7173a849e71aacd57fafc6d05b593c31d49eb4a340844be51f95" Nov 22 10:44:46 crc kubenswrapper[4772]: I1122 10:44:46.047349 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-59qml"] Nov 22 10:44:46 crc kubenswrapper[4772]: I1122 10:44:46.062402 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hq9vf"] Nov 22 10:44:46 crc kubenswrapper[4772]: I1122 10:44:46.063941 4772 scope.go:117] "RemoveContainer" containerID="b8f85d91e08bd34bd86bfe8fd7db1b12c3a26d23ae0f21d6a1fa009b818b12d1" Nov 22 10:44:46 crc kubenswrapper[4772]: I1122 10:44:46.073193 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hq9vf"] Nov 22 10:44:46 crc kubenswrapper[4772]: I1122 10:44:46.077286 4772 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xdkbm"] Nov 22 10:44:46 crc kubenswrapper[4772]: I1122 10:44:46.085668 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xdkbm"] Nov 22 10:44:46 crc kubenswrapper[4772]: I1122 10:44:46.089097 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8z66s"] Nov 22 10:44:46 crc kubenswrapper[4772]: I1122 10:44:46.090405 4772 scope.go:117] "RemoveContainer" containerID="705c838f2ef93bc89f7081f4011de870df5a0edd9f3572cee24cd0348ab27eef" Nov 22 10:44:46 crc kubenswrapper[4772]: I1122 10:44:46.093083 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8z66s"] Nov 22 10:44:46 crc kubenswrapper[4772]: I1122 10:44:46.110558 4772 scope.go:117] "RemoveContainer" containerID="533280a02ffb7173a849e71aacd57fafc6d05b593c31d49eb4a340844be51f95" Nov 22 10:44:46 crc kubenswrapper[4772]: E1122 10:44:46.111297 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"533280a02ffb7173a849e71aacd57fafc6d05b593c31d49eb4a340844be51f95\": container with ID starting with 533280a02ffb7173a849e71aacd57fafc6d05b593c31d49eb4a340844be51f95 not found: ID does not exist" containerID="533280a02ffb7173a849e71aacd57fafc6d05b593c31d49eb4a340844be51f95" Nov 22 10:44:46 crc kubenswrapper[4772]: I1122 10:44:46.111359 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"533280a02ffb7173a849e71aacd57fafc6d05b593c31d49eb4a340844be51f95"} err="failed to get container status \"533280a02ffb7173a849e71aacd57fafc6d05b593c31d49eb4a340844be51f95\": rpc error: code = NotFound desc = could not find container \"533280a02ffb7173a849e71aacd57fafc6d05b593c31d49eb4a340844be51f95\": container with ID starting with 533280a02ffb7173a849e71aacd57fafc6d05b593c31d49eb4a340844be51f95 not found: ID does not exist" Nov 22 10:44:46 crc kubenswrapper[4772]: I1122 10:44:46.111427 4772 scope.go:117] "RemoveContainer" containerID="b8f85d91e08bd34bd86bfe8fd7db1b12c3a26d23ae0f21d6a1fa009b818b12d1" Nov 22 10:44:46 crc kubenswrapper[4772]: E1122 10:44:46.111825 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8f85d91e08bd34bd86bfe8fd7db1b12c3a26d23ae0f21d6a1fa009b818b12d1\": container with ID starting with b8f85d91e08bd34bd86bfe8fd7db1b12c3a26d23ae0f21d6a1fa009b818b12d1 not found: ID does not exist" containerID="b8f85d91e08bd34bd86bfe8fd7db1b12c3a26d23ae0f21d6a1fa009b818b12d1" Nov 22 10:44:46 crc kubenswrapper[4772]: I1122 10:44:46.111878 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8f85d91e08bd34bd86bfe8fd7db1b12c3a26d23ae0f21d6a1fa009b818b12d1"} err="failed to get container status \"b8f85d91e08bd34bd86bfe8fd7db1b12c3a26d23ae0f21d6a1fa009b818b12d1\": rpc error: code = NotFound desc = could not find container \"b8f85d91e08bd34bd86bfe8fd7db1b12c3a26d23ae0f21d6a1fa009b818b12d1\": container with ID starting with b8f85d91e08bd34bd86bfe8fd7db1b12c3a26d23ae0f21d6a1fa009b818b12d1 not found: ID does not exist" Nov 22 10:44:46 crc kubenswrapper[4772]: I1122 10:44:46.111912 4772 scope.go:117] "RemoveContainer" containerID="705c838f2ef93bc89f7081f4011de870df5a0edd9f3572cee24cd0348ab27eef" Nov 22 10:44:46 crc kubenswrapper[4772]: E1122 10:44:46.112198 4772 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"705c838f2ef93bc89f7081f4011de870df5a0edd9f3572cee24cd0348ab27eef\": container with ID starting with 705c838f2ef93bc89f7081f4011de870df5a0edd9f3572cee24cd0348ab27eef not found: ID does not exist" containerID="705c838f2ef93bc89f7081f4011de870df5a0edd9f3572cee24cd0348ab27eef" Nov 22 10:44:46 crc kubenswrapper[4772]: I1122 10:44:46.112225 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"705c838f2ef93bc89f7081f4011de870df5a0edd9f3572cee24cd0348ab27eef"} err="failed to get container status \"705c838f2ef93bc89f7081f4011de870df5a0edd9f3572cee24cd0348ab27eef\": rpc error: code = NotFound desc = could not find container \"705c838f2ef93bc89f7081f4011de870df5a0edd9f3572cee24cd0348ab27eef\": container with ID starting with 705c838f2ef93bc89f7081f4011de870df5a0edd9f3572cee24cd0348ab27eef not found: ID does not exist" Nov 22 10:44:46 crc kubenswrapper[4772]: I1122 10:44:46.112242 4772 scope.go:117] "RemoveContainer" containerID="44e4e2fa32c694449370ba547c0d7ec63af38d8b1aa4c942d75e8f0d9aa2b0e7" Nov 22 10:44:46 crc kubenswrapper[4772]: I1122 10:44:46.134700 4772 scope.go:117] "RemoveContainer" containerID="e78560ec0873907f668dcd71dd30cdf4ac1a4ead0a179fb5d7f53db2f3859bb0" Nov 22 10:44:46 crc kubenswrapper[4772]: I1122 10:44:46.155087 4772 scope.go:117] "RemoveContainer" containerID="a68004fa3bec83ed4795eff2db138d4f271a8fb34142a18626a7282048258f1f" Nov 22 10:44:46 crc kubenswrapper[4772]: I1122 10:44:46.171748 4772 scope.go:117] "RemoveContainer" containerID="dd470460ea9ced1cf2908cb12b1399fd361b1c8d96038269632391ecdd566672" Nov 22 10:44:46 crc kubenswrapper[4772]: I1122 10:44:46.193374 4772 scope.go:117] "RemoveContainer" containerID="7ea45cfff6226f323b3bcf9873eed10e2ad33424b589d4af6e7c5cb28e4513b3" Nov 22 10:44:46 crc kubenswrapper[4772]: I1122 10:44:46.209886 4772 scope.go:117] "RemoveContainer" containerID="a4704c674b990f7cc91d8e9bcbae46f1c6f9e8012792214537b34df310f3fa60" Nov 22 10:44:46 crc kubenswrapper[4772]: I1122 10:44:46.227701 4772 scope.go:117] "RemoveContainer" containerID="31bc211aa333eb3489d88b1c16a6e3e66fa4bae88bfd3fe01a1a1e0a32b7f263" Nov 22 10:44:46 crc kubenswrapper[4772]: I1122 10:44:46.970966 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bws7w"] Nov 22 10:44:46 crc kubenswrapper[4772]: E1122 10:44:46.975615 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a35d30b7-cf6e-4d08-add4-2c9b27342e5d" containerName="extract-utilities" Nov 22 10:44:46 crc kubenswrapper[4772]: I1122 10:44:46.975682 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="a35d30b7-cf6e-4d08-add4-2c9b27342e5d" containerName="extract-utilities" Nov 22 10:44:46 crc kubenswrapper[4772]: E1122 10:44:46.975703 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1d4b5fd-7de5-410f-b913-6fbb8e1c09b4" containerName="marketplace-operator" Nov 22 10:44:46 crc kubenswrapper[4772]: I1122 10:44:46.975714 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1d4b5fd-7de5-410f-b913-6fbb8e1c09b4" containerName="marketplace-operator" Nov 22 10:44:46 crc kubenswrapper[4772]: E1122 10:44:46.975738 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a35d30b7-cf6e-4d08-add4-2c9b27342e5d" containerName="extract-content" Nov 22 10:44:46 crc kubenswrapper[4772]: I1122 10:44:46.975749 4772 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="a35d30b7-cf6e-4d08-add4-2c9b27342e5d" containerName="extract-content" Nov 22 10:44:46 crc kubenswrapper[4772]: E1122 10:44:46.975767 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59d55861-971b-404f-9926-4d41f07f0880" containerName="registry-server" Nov 22 10:44:46 crc kubenswrapper[4772]: I1122 10:44:46.975776 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="59d55861-971b-404f-9926-4d41f07f0880" containerName="registry-server" Nov 22 10:44:46 crc kubenswrapper[4772]: E1122 10:44:46.975796 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6f45ecf-368b-4e09-83d6-c0620de2c97e" containerName="extract-utilities" Nov 22 10:44:46 crc kubenswrapper[4772]: I1122 10:44:46.975810 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6f45ecf-368b-4e09-83d6-c0620de2c97e" containerName="extract-utilities" Nov 22 10:44:46 crc kubenswrapper[4772]: E1122 10:44:46.975833 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59d55861-971b-404f-9926-4d41f07f0880" containerName="extract-content" Nov 22 10:44:46 crc kubenswrapper[4772]: I1122 10:44:46.975842 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="59d55861-971b-404f-9926-4d41f07f0880" containerName="extract-content" Nov 22 10:44:46 crc kubenswrapper[4772]: E1122 10:44:46.975855 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6f45ecf-368b-4e09-83d6-c0620de2c97e" containerName="registry-server" Nov 22 10:44:46 crc kubenswrapper[4772]: I1122 10:44:46.975864 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6f45ecf-368b-4e09-83d6-c0620de2c97e" containerName="registry-server" Nov 22 10:44:46 crc kubenswrapper[4772]: E1122 10:44:46.975884 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64157fec-2673-403c-96c5-2cbaf3ca17a2" containerName="registry-server" Nov 22 10:44:46 crc kubenswrapper[4772]: I1122 10:44:46.975894 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="64157fec-2673-403c-96c5-2cbaf3ca17a2" containerName="registry-server" Nov 22 10:44:46 crc kubenswrapper[4772]: E1122 10:44:46.975915 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6f45ecf-368b-4e09-83d6-c0620de2c97e" containerName="extract-content" Nov 22 10:44:46 crc kubenswrapper[4772]: I1122 10:44:46.975924 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6f45ecf-368b-4e09-83d6-c0620de2c97e" containerName="extract-content" Nov 22 10:44:46 crc kubenswrapper[4772]: E1122 10:44:46.975945 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a35d30b7-cf6e-4d08-add4-2c9b27342e5d" containerName="registry-server" Nov 22 10:44:46 crc kubenswrapper[4772]: I1122 10:44:46.975954 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="a35d30b7-cf6e-4d08-add4-2c9b27342e5d" containerName="registry-server" Nov 22 10:44:46 crc kubenswrapper[4772]: E1122 10:44:46.975972 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64157fec-2673-403c-96c5-2cbaf3ca17a2" containerName="extract-utilities" Nov 22 10:44:46 crc kubenswrapper[4772]: I1122 10:44:46.975984 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="64157fec-2673-403c-96c5-2cbaf3ca17a2" containerName="extract-utilities" Nov 22 10:44:46 crc kubenswrapper[4772]: E1122 10:44:46.976005 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59d55861-971b-404f-9926-4d41f07f0880" containerName="extract-utilities" Nov 22 10:44:46 crc kubenswrapper[4772]: I1122 10:44:46.976014 4772 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="59d55861-971b-404f-9926-4d41f07f0880" containerName="extract-utilities" Nov 22 10:44:46 crc kubenswrapper[4772]: E1122 10:44:46.976025 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64157fec-2673-403c-96c5-2cbaf3ca17a2" containerName="extract-content" Nov 22 10:44:46 crc kubenswrapper[4772]: I1122 10:44:46.976036 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="64157fec-2673-403c-96c5-2cbaf3ca17a2" containerName="extract-content" Nov 22 10:44:46 crc kubenswrapper[4772]: I1122 10:44:46.976473 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="64157fec-2673-403c-96c5-2cbaf3ca17a2" containerName="registry-server" Nov 22 10:44:46 crc kubenswrapper[4772]: I1122 10:44:46.976491 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6f45ecf-368b-4e09-83d6-c0620de2c97e" containerName="registry-server" Nov 22 10:44:46 crc kubenswrapper[4772]: I1122 10:44:46.976515 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="59d55861-971b-404f-9926-4d41f07f0880" containerName="registry-server" Nov 22 10:44:46 crc kubenswrapper[4772]: I1122 10:44:46.976531 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="a35d30b7-cf6e-4d08-add4-2c9b27342e5d" containerName="registry-server" Nov 22 10:44:46 crc kubenswrapper[4772]: I1122 10:44:46.976551 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1d4b5fd-7de5-410f-b913-6fbb8e1c09b4" containerName="marketplace-operator" Nov 22 10:44:46 crc kubenswrapper[4772]: I1122 10:44:46.979606 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bws7w" Nov 22 10:44:46 crc kubenswrapper[4772]: I1122 10:44:46.983819 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 22 10:44:46 crc kubenswrapper[4772]: I1122 10:44:46.989200 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-54pmg" Nov 22 10:44:47 crc kubenswrapper[4772]: I1122 10:44:47.010286 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bws7w"] Nov 22 10:44:47 crc kubenswrapper[4772]: I1122 10:44:47.110017 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5fe9c3a-44e0-4f93-8911-383910a8854b-catalog-content\") pod \"redhat-marketplace-bws7w\" (UID: \"c5fe9c3a-44e0-4f93-8911-383910a8854b\") " pod="openshift-marketplace/redhat-marketplace-bws7w" Nov 22 10:44:47 crc kubenswrapper[4772]: I1122 10:44:47.110446 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5fe9c3a-44e0-4f93-8911-383910a8854b-utilities\") pod \"redhat-marketplace-bws7w\" (UID: \"c5fe9c3a-44e0-4f93-8911-383910a8854b\") " pod="openshift-marketplace/redhat-marketplace-bws7w" Nov 22 10:44:47 crc kubenswrapper[4772]: I1122 10:44:47.110587 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbwm8\" (UniqueName: \"kubernetes.io/projected/c5fe9c3a-44e0-4f93-8911-383910a8854b-kube-api-access-rbwm8\") pod \"redhat-marketplace-bws7w\" (UID: \"c5fe9c3a-44e0-4f93-8911-383910a8854b\") " pod="openshift-marketplace/redhat-marketplace-bws7w" Nov 22 10:44:47 crc 
kubenswrapper[4772]: I1122 10:44:47.172370 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-v2wk4"] Nov 22 10:44:47 crc kubenswrapper[4772]: I1122 10:44:47.174540 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v2wk4" Nov 22 10:44:47 crc kubenswrapper[4772]: I1122 10:44:47.176527 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 22 10:44:47 crc kubenswrapper[4772]: I1122 10:44:47.187109 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v2wk4"] Nov 22 10:44:47 crc kubenswrapper[4772]: I1122 10:44:47.212179 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5fe9c3a-44e0-4f93-8911-383910a8854b-utilities\") pod \"redhat-marketplace-bws7w\" (UID: \"c5fe9c3a-44e0-4f93-8911-383910a8854b\") " pod="openshift-marketplace/redhat-marketplace-bws7w" Nov 22 10:44:47 crc kubenswrapper[4772]: I1122 10:44:47.212235 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbwm8\" (UniqueName: \"kubernetes.io/projected/c5fe9c3a-44e0-4f93-8911-383910a8854b-kube-api-access-rbwm8\") pod \"redhat-marketplace-bws7w\" (UID: \"c5fe9c3a-44e0-4f93-8911-383910a8854b\") " pod="openshift-marketplace/redhat-marketplace-bws7w" Nov 22 10:44:47 crc kubenswrapper[4772]: I1122 10:44:47.212267 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5fe9c3a-44e0-4f93-8911-383910a8854b-catalog-content\") pod \"redhat-marketplace-bws7w\" (UID: \"c5fe9c3a-44e0-4f93-8911-383910a8854b\") " pod="openshift-marketplace/redhat-marketplace-bws7w" Nov 22 10:44:47 crc kubenswrapper[4772]: I1122 10:44:47.212929 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5fe9c3a-44e0-4f93-8911-383910a8854b-catalog-content\") pod \"redhat-marketplace-bws7w\" (UID: \"c5fe9c3a-44e0-4f93-8911-383910a8854b\") " pod="openshift-marketplace/redhat-marketplace-bws7w" Nov 22 10:44:47 crc kubenswrapper[4772]: I1122 10:44:47.213076 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5fe9c3a-44e0-4f93-8911-383910a8854b-utilities\") pod \"redhat-marketplace-bws7w\" (UID: \"c5fe9c3a-44e0-4f93-8911-383910a8854b\") " pod="openshift-marketplace/redhat-marketplace-bws7w" Nov 22 10:44:47 crc kubenswrapper[4772]: I1122 10:44:47.232524 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbwm8\" (UniqueName: \"kubernetes.io/projected/c5fe9c3a-44e0-4f93-8911-383910a8854b-kube-api-access-rbwm8\") pod \"redhat-marketplace-bws7w\" (UID: \"c5fe9c3a-44e0-4f93-8911-383910a8854b\") " pod="openshift-marketplace/redhat-marketplace-bws7w" Nov 22 10:44:47 crc kubenswrapper[4772]: I1122 10:44:47.301694 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bws7w" Nov 22 10:44:47 crc kubenswrapper[4772]: I1122 10:44:47.313796 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe350605-8cb5-4937-9362-5c2ad43660c3-catalog-content\") pod \"certified-operators-v2wk4\" (UID: \"fe350605-8cb5-4937-9362-5c2ad43660c3\") " pod="openshift-marketplace/certified-operators-v2wk4" Nov 22 10:44:47 crc kubenswrapper[4772]: I1122 10:44:47.313846 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8djp\" (UniqueName: \"kubernetes.io/projected/fe350605-8cb5-4937-9362-5c2ad43660c3-kube-api-access-j8djp\") pod \"certified-operators-v2wk4\" (UID: \"fe350605-8cb5-4937-9362-5c2ad43660c3\") " pod="openshift-marketplace/certified-operators-v2wk4" Nov 22 10:44:47 crc kubenswrapper[4772]: I1122 10:44:47.313917 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe350605-8cb5-4937-9362-5c2ad43660c3-utilities\") pod \"certified-operators-v2wk4\" (UID: \"fe350605-8cb5-4937-9362-5c2ad43660c3\") " pod="openshift-marketplace/certified-operators-v2wk4" Nov 22 10:44:47 crc kubenswrapper[4772]: I1122 10:44:47.415504 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe350605-8cb5-4937-9362-5c2ad43660c3-catalog-content\") pod \"certified-operators-v2wk4\" (UID: \"fe350605-8cb5-4937-9362-5c2ad43660c3\") " pod="openshift-marketplace/certified-operators-v2wk4" Nov 22 10:44:47 crc kubenswrapper[4772]: I1122 10:44:47.416023 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8djp\" (UniqueName: \"kubernetes.io/projected/fe350605-8cb5-4937-9362-5c2ad43660c3-kube-api-access-j8djp\") pod \"certified-operators-v2wk4\" (UID: \"fe350605-8cb5-4937-9362-5c2ad43660c3\") " pod="openshift-marketplace/certified-operators-v2wk4" Nov 22 10:44:47 crc kubenswrapper[4772]: I1122 10:44:47.416117 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe350605-8cb5-4937-9362-5c2ad43660c3-utilities\") pod \"certified-operators-v2wk4\" (UID: \"fe350605-8cb5-4937-9362-5c2ad43660c3\") " pod="openshift-marketplace/certified-operators-v2wk4" Nov 22 10:44:47 crc kubenswrapper[4772]: I1122 10:44:47.416556 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe350605-8cb5-4937-9362-5c2ad43660c3-utilities\") pod \"certified-operators-v2wk4\" (UID: \"fe350605-8cb5-4937-9362-5c2ad43660c3\") " pod="openshift-marketplace/certified-operators-v2wk4" Nov 22 10:44:47 crc kubenswrapper[4772]: I1122 10:44:47.416617 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe350605-8cb5-4937-9362-5c2ad43660c3-catalog-content\") pod \"certified-operators-v2wk4\" (UID: \"fe350605-8cb5-4937-9362-5c2ad43660c3\") " pod="openshift-marketplace/certified-operators-v2wk4" Nov 22 10:44:47 crc kubenswrapper[4772]: I1122 10:44:47.422429 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59d55861-971b-404f-9926-4d41f07f0880" path="/var/lib/kubelet/pods/59d55861-971b-404f-9926-4d41f07f0880/volumes" Nov 22 10:44:47 crc 
kubenswrapper[4772]: I1122 10:44:47.425260 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64157fec-2673-403c-96c5-2cbaf3ca17a2" path="/var/lib/kubelet/pods/64157fec-2673-403c-96c5-2cbaf3ca17a2/volumes" Nov 22 10:44:47 crc kubenswrapper[4772]: I1122 10:44:47.426278 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a35d30b7-cf6e-4d08-add4-2c9b27342e5d" path="/var/lib/kubelet/pods/a35d30b7-cf6e-4d08-add4-2c9b27342e5d/volumes" Nov 22 10:44:47 crc kubenswrapper[4772]: I1122 10:44:47.430518 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6f45ecf-368b-4e09-83d6-c0620de2c97e" path="/var/lib/kubelet/pods/c6f45ecf-368b-4e09-83d6-c0620de2c97e/volumes" Nov 22 10:44:47 crc kubenswrapper[4772]: I1122 10:44:47.431222 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1d4b5fd-7de5-410f-b913-6fbb8e1c09b4" path="/var/lib/kubelet/pods/d1d4b5fd-7de5-410f-b913-6fbb8e1c09b4/volumes" Nov 22 10:44:47 crc kubenswrapper[4772]: I1122 10:44:47.435150 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8djp\" (UniqueName: \"kubernetes.io/projected/fe350605-8cb5-4937-9362-5c2ad43660c3-kube-api-access-j8djp\") pod \"certified-operators-v2wk4\" (UID: \"fe350605-8cb5-4937-9362-5c2ad43660c3\") " pod="openshift-marketplace/certified-operators-v2wk4" Nov 22 10:44:47 crc kubenswrapper[4772]: I1122 10:44:47.497742 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v2wk4" Nov 22 10:44:47 crc kubenswrapper[4772]: I1122 10:44:47.529872 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bws7w"] Nov 22 10:44:47 crc kubenswrapper[4772]: I1122 10:44:47.920371 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v2wk4"] Nov 22 10:44:47 crc kubenswrapper[4772]: I1122 10:44:47.991033 4772 generic.go:334] "Generic (PLEG): container finished" podID="c5fe9c3a-44e0-4f93-8911-383910a8854b" containerID="b2d46a2414fbe52356c2232595ca2a5b46cbfc5e746f13be0b297bcfab078d5f" exitCode=0 Nov 22 10:44:47 crc kubenswrapper[4772]: I1122 10:44:47.991094 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bws7w" event={"ID":"c5fe9c3a-44e0-4f93-8911-383910a8854b","Type":"ContainerDied","Data":"b2d46a2414fbe52356c2232595ca2a5b46cbfc5e746f13be0b297bcfab078d5f"} Nov 22 10:44:47 crc kubenswrapper[4772]: I1122 10:44:47.991141 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bws7w" event={"ID":"c5fe9c3a-44e0-4f93-8911-383910a8854b","Type":"ContainerStarted","Data":"dfaeaa92124767de5dacc1326f0400148eb80ab19ebac947de1cd1e139a9c60b"} Nov 22 10:44:47 crc kubenswrapper[4772]: I1122 10:44:47.992294 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v2wk4" event={"ID":"fe350605-8cb5-4937-9362-5c2ad43660c3","Type":"ContainerStarted","Data":"74c5457c6bb07e34b0ec2ffc5d63780155145d2c22c2950d4209e55f32be8a78"} Nov 22 10:44:49 crc kubenswrapper[4772]: I1122 10:44:49.002217 4772 generic.go:334] "Generic (PLEG): container finished" podID="c5fe9c3a-44e0-4f93-8911-383910a8854b" containerID="9165d8128b5a64ad64807e701b25fd3ebad59266f19725c59efa4d9c41063e07" exitCode=0 Nov 22 10:44:49 crc kubenswrapper[4772]: I1122 10:44:49.002291 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-bws7w" event={"ID":"c5fe9c3a-44e0-4f93-8911-383910a8854b","Type":"ContainerDied","Data":"9165d8128b5a64ad64807e701b25fd3ebad59266f19725c59efa4d9c41063e07"} Nov 22 10:44:49 crc kubenswrapper[4772]: I1122 10:44:49.004878 4772 generic.go:334] "Generic (PLEG): container finished" podID="fe350605-8cb5-4937-9362-5c2ad43660c3" containerID="1e2f2245a28f8a6e269a1858d89f761038bce1b5dcb4b5afe796edbe7c76ee5a" exitCode=0 Nov 22 10:44:49 crc kubenswrapper[4772]: I1122 10:44:49.004967 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v2wk4" event={"ID":"fe350605-8cb5-4937-9362-5c2ad43660c3","Type":"ContainerDied","Data":"1e2f2245a28f8a6e269a1858d89f761038bce1b5dcb4b5afe796edbe7c76ee5a"} Nov 22 10:44:49 crc kubenswrapper[4772]: I1122 10:44:49.367993 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9d7hb"] Nov 22 10:44:49 crc kubenswrapper[4772]: I1122 10:44:49.369835 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9d7hb" Nov 22 10:44:49 crc kubenswrapper[4772]: I1122 10:44:49.372958 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 22 10:44:49 crc kubenswrapper[4772]: I1122 10:44:49.377659 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9d7hb"] Nov 22 10:44:49 crc kubenswrapper[4772]: I1122 10:44:49.443651 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb106d85-8cbf-4ca1-ac98-496f521882c2-utilities\") pod \"redhat-operators-9d7hb\" (UID: \"bb106d85-8cbf-4ca1-ac98-496f521882c2\") " pod="openshift-marketplace/redhat-operators-9d7hb" Nov 22 10:44:49 crc kubenswrapper[4772]: I1122 10:44:49.443720 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb106d85-8cbf-4ca1-ac98-496f521882c2-catalog-content\") pod \"redhat-operators-9d7hb\" (UID: \"bb106d85-8cbf-4ca1-ac98-496f521882c2\") " pod="openshift-marketplace/redhat-operators-9d7hb" Nov 22 10:44:49 crc kubenswrapper[4772]: I1122 10:44:49.443747 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87t7s\" (UniqueName: \"kubernetes.io/projected/bb106d85-8cbf-4ca1-ac98-496f521882c2-kube-api-access-87t7s\") pod \"redhat-operators-9d7hb\" (UID: \"bb106d85-8cbf-4ca1-ac98-496f521882c2\") " pod="openshift-marketplace/redhat-operators-9d7hb" Nov 22 10:44:49 crc kubenswrapper[4772]: I1122 10:44:49.545076 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb106d85-8cbf-4ca1-ac98-496f521882c2-utilities\") pod \"redhat-operators-9d7hb\" (UID: \"bb106d85-8cbf-4ca1-ac98-496f521882c2\") " pod="openshift-marketplace/redhat-operators-9d7hb" Nov 22 10:44:49 crc kubenswrapper[4772]: I1122 10:44:49.545293 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb106d85-8cbf-4ca1-ac98-496f521882c2-catalog-content\") pod \"redhat-operators-9d7hb\" (UID: \"bb106d85-8cbf-4ca1-ac98-496f521882c2\") " pod="openshift-marketplace/redhat-operators-9d7hb" Nov 22 10:44:49 crc kubenswrapper[4772]: I1122 
10:44:49.546210 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb106d85-8cbf-4ca1-ac98-496f521882c2-utilities\") pod \"redhat-operators-9d7hb\" (UID: \"bb106d85-8cbf-4ca1-ac98-496f521882c2\") " pod="openshift-marketplace/redhat-operators-9d7hb" Nov 22 10:44:49 crc kubenswrapper[4772]: I1122 10:44:49.546487 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb106d85-8cbf-4ca1-ac98-496f521882c2-catalog-content\") pod \"redhat-operators-9d7hb\" (UID: \"bb106d85-8cbf-4ca1-ac98-496f521882c2\") " pod="openshift-marketplace/redhat-operators-9d7hb" Nov 22 10:44:49 crc kubenswrapper[4772]: I1122 10:44:49.545331 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87t7s\" (UniqueName: \"kubernetes.io/projected/bb106d85-8cbf-4ca1-ac98-496f521882c2-kube-api-access-87t7s\") pod \"redhat-operators-9d7hb\" (UID: \"bb106d85-8cbf-4ca1-ac98-496f521882c2\") " pod="openshift-marketplace/redhat-operators-9d7hb" Nov 22 10:44:49 crc kubenswrapper[4772]: I1122 10:44:49.565228 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hqg5j"] Nov 22 10:44:49 crc kubenswrapper[4772]: I1122 10:44:49.566906 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hqg5j" Nov 22 10:44:49 crc kubenswrapper[4772]: I1122 10:44:49.569437 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 22 10:44:49 crc kubenswrapper[4772]: I1122 10:44:49.580124 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hqg5j"] Nov 22 10:44:49 crc kubenswrapper[4772]: I1122 10:44:49.580361 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87t7s\" (UniqueName: \"kubernetes.io/projected/bb106d85-8cbf-4ca1-ac98-496f521882c2-kube-api-access-87t7s\") pod \"redhat-operators-9d7hb\" (UID: \"bb106d85-8cbf-4ca1-ac98-496f521882c2\") " pod="openshift-marketplace/redhat-operators-9d7hb" Nov 22 10:44:49 crc kubenswrapper[4772]: I1122 10:44:49.648304 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trq8c\" (UniqueName: \"kubernetes.io/projected/f6441995-690d-46c4-bd8c-56fa50f781cb-kube-api-access-trq8c\") pod \"community-operators-hqg5j\" (UID: \"f6441995-690d-46c4-bd8c-56fa50f781cb\") " pod="openshift-marketplace/community-operators-hqg5j" Nov 22 10:44:49 crc kubenswrapper[4772]: I1122 10:44:49.648612 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6441995-690d-46c4-bd8c-56fa50f781cb-utilities\") pod \"community-operators-hqg5j\" (UID: \"f6441995-690d-46c4-bd8c-56fa50f781cb\") " pod="openshift-marketplace/community-operators-hqg5j" Nov 22 10:44:49 crc kubenswrapper[4772]: I1122 10:44:49.648768 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6441995-690d-46c4-bd8c-56fa50f781cb-catalog-content\") pod \"community-operators-hqg5j\" (UID: \"f6441995-690d-46c4-bd8c-56fa50f781cb\") " pod="openshift-marketplace/community-operators-hqg5j" Nov 22 10:44:49 crc kubenswrapper[4772]: I1122 10:44:49.749876 4772 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trq8c\" (UniqueName: \"kubernetes.io/projected/f6441995-690d-46c4-bd8c-56fa50f781cb-kube-api-access-trq8c\") pod \"community-operators-hqg5j\" (UID: \"f6441995-690d-46c4-bd8c-56fa50f781cb\") " pod="openshift-marketplace/community-operators-hqg5j" Nov 22 10:44:49 crc kubenswrapper[4772]: I1122 10:44:49.750666 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6441995-690d-46c4-bd8c-56fa50f781cb-utilities\") pod \"community-operators-hqg5j\" (UID: \"f6441995-690d-46c4-bd8c-56fa50f781cb\") " pod="openshift-marketplace/community-operators-hqg5j" Nov 22 10:44:49 crc kubenswrapper[4772]: I1122 10:44:49.750811 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6441995-690d-46c4-bd8c-56fa50f781cb-catalog-content\") pod \"community-operators-hqg5j\" (UID: \"f6441995-690d-46c4-bd8c-56fa50f781cb\") " pod="openshift-marketplace/community-operators-hqg5j" Nov 22 10:44:49 crc kubenswrapper[4772]: I1122 10:44:49.751802 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6441995-690d-46c4-bd8c-56fa50f781cb-utilities\") pod \"community-operators-hqg5j\" (UID: \"f6441995-690d-46c4-bd8c-56fa50f781cb\") " pod="openshift-marketplace/community-operators-hqg5j" Nov 22 10:44:49 crc kubenswrapper[4772]: I1122 10:44:49.752016 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9d7hb" Nov 22 10:44:49 crc kubenswrapper[4772]: I1122 10:44:49.752574 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6441995-690d-46c4-bd8c-56fa50f781cb-catalog-content\") pod \"community-operators-hqg5j\" (UID: \"f6441995-690d-46c4-bd8c-56fa50f781cb\") " pod="openshift-marketplace/community-operators-hqg5j" Nov 22 10:44:49 crc kubenswrapper[4772]: I1122 10:44:49.782029 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trq8c\" (UniqueName: \"kubernetes.io/projected/f6441995-690d-46c4-bd8c-56fa50f781cb-kube-api-access-trq8c\") pod \"community-operators-hqg5j\" (UID: \"f6441995-690d-46c4-bd8c-56fa50f781cb\") " pod="openshift-marketplace/community-operators-hqg5j" Nov 22 10:44:49 crc kubenswrapper[4772]: I1122 10:44:49.973650 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hqg5j" Nov 22 10:44:50 crc kubenswrapper[4772]: I1122 10:44:50.024288 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bws7w" event={"ID":"c5fe9c3a-44e0-4f93-8911-383910a8854b","Type":"ContainerStarted","Data":"77b387e02b0044d8fdc5b718cd26837d3c0cea77046fdc2b9aa150aca675d4eb"} Nov 22 10:44:50 crc kubenswrapper[4772]: I1122 10:44:50.026532 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9d7hb"] Nov 22 10:44:50 crc kubenswrapper[4772]: I1122 10:44:50.030737 4772 generic.go:334] "Generic (PLEG): container finished" podID="fe350605-8cb5-4937-9362-5c2ad43660c3" containerID="eddbd010cdaa70a266a4c349bcd686d1ff2b91dd4a539fc510d3baceeffdd840" exitCode=0 Nov 22 10:44:50 crc kubenswrapper[4772]: I1122 10:44:50.030820 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v2wk4" event={"ID":"fe350605-8cb5-4937-9362-5c2ad43660c3","Type":"ContainerDied","Data":"eddbd010cdaa70a266a4c349bcd686d1ff2b91dd4a539fc510d3baceeffdd840"} Nov 22 10:44:50 crc kubenswrapper[4772]: I1122 10:44:50.050192 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bws7w" podStartSLOduration=2.536633421 podStartE2EDuration="4.050173641s" podCreationTimestamp="2025-11-22 10:44:46 +0000 UTC" firstStartedPulling="2025-11-22 10:44:47.992384476 +0000 UTC m=+408.231828980" lastFinishedPulling="2025-11-22 10:44:49.505924706 +0000 UTC m=+409.745369200" observedRunningTime="2025-11-22 10:44:50.046571548 +0000 UTC m=+410.286016052" watchObservedRunningTime="2025-11-22 10:44:50.050173641 +0000 UTC m=+410.289618135" Nov 22 10:44:50 crc kubenswrapper[4772]: I1122 10:44:50.219656 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hqg5j"] Nov 22 10:44:51 crc kubenswrapper[4772]: I1122 10:44:51.037532 4772 generic.go:334] "Generic (PLEG): container finished" podID="f6441995-690d-46c4-bd8c-56fa50f781cb" containerID="94c0361fee5dc493c4cf48789b3ed15700842798abe5cd5e2ba5f40d0994a551" exitCode=0 Nov 22 10:44:51 crc kubenswrapper[4772]: I1122 10:44:51.037622 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hqg5j" event={"ID":"f6441995-690d-46c4-bd8c-56fa50f781cb","Type":"ContainerDied","Data":"94c0361fee5dc493c4cf48789b3ed15700842798abe5cd5e2ba5f40d0994a551"} Nov 22 10:44:51 crc kubenswrapper[4772]: I1122 10:44:51.038196 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hqg5j" event={"ID":"f6441995-690d-46c4-bd8c-56fa50f781cb","Type":"ContainerStarted","Data":"033e7e7000c307230e464572a796e151d69d082462f7f71791c4c3abbdf74cae"} Nov 22 10:44:51 crc kubenswrapper[4772]: I1122 10:44:51.055157 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v2wk4" event={"ID":"fe350605-8cb5-4937-9362-5c2ad43660c3","Type":"ContainerStarted","Data":"32589086e17cb19ed03d7bf455df163651c6eb9eae43956e761e2d4686d852ce"} Nov 22 10:44:51 crc kubenswrapper[4772]: I1122 10:44:51.070205 4772 generic.go:334] "Generic (PLEG): container finished" podID="bb106d85-8cbf-4ca1-ac98-496f521882c2" containerID="ddbb4a6c4583f3a27de2cb8bc2ccf482cc633b5536654e7ea13fdd5ae5d7e144" exitCode=0 Nov 22 10:44:51 crc kubenswrapper[4772]: I1122 10:44:51.070308 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-9d7hb" event={"ID":"bb106d85-8cbf-4ca1-ac98-496f521882c2","Type":"ContainerDied","Data":"ddbb4a6c4583f3a27de2cb8bc2ccf482cc633b5536654e7ea13fdd5ae5d7e144"} Nov 22 10:44:51 crc kubenswrapper[4772]: I1122 10:44:51.070364 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9d7hb" event={"ID":"bb106d85-8cbf-4ca1-ac98-496f521882c2","Type":"ContainerStarted","Data":"8fa95a4f4b20a0699be663acec682e999e377331c8c428063d3c47a306cc7320"} Nov 22 10:44:51 crc kubenswrapper[4772]: I1122 10:44:51.082154 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-v2wk4" podStartSLOduration=2.646377773 podStartE2EDuration="4.082128207s" podCreationTimestamp="2025-11-22 10:44:47 +0000 UTC" firstStartedPulling="2025-11-22 10:44:49.006096703 +0000 UTC m=+409.245541197" lastFinishedPulling="2025-11-22 10:44:50.441847137 +0000 UTC m=+410.681291631" observedRunningTime="2025-11-22 10:44:51.078022532 +0000 UTC m=+411.317467026" watchObservedRunningTime="2025-11-22 10:44:51.082128207 +0000 UTC m=+411.321572701" Nov 22 10:44:52 crc kubenswrapper[4772]: I1122 10:44:52.079925 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hqg5j" event={"ID":"f6441995-690d-46c4-bd8c-56fa50f781cb","Type":"ContainerStarted","Data":"1315646752c9860f92d48652940d375e4b9a5a6eab44eddfe5044347ead97294"} Nov 22 10:44:52 crc kubenswrapper[4772]: I1122 10:44:52.083166 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9d7hb" event={"ID":"bb106d85-8cbf-4ca1-ac98-496f521882c2","Type":"ContainerStarted","Data":"f59dad749a09a6fc57904d26c1ae59df24ae5ee6bbedb033517cab45f54b56ca"} Nov 22 10:44:53 crc kubenswrapper[4772]: I1122 10:44:53.091394 4772 generic.go:334] "Generic (PLEG): container finished" podID="bb106d85-8cbf-4ca1-ac98-496f521882c2" containerID="f59dad749a09a6fc57904d26c1ae59df24ae5ee6bbedb033517cab45f54b56ca" exitCode=0 Nov 22 10:44:53 crc kubenswrapper[4772]: I1122 10:44:53.091479 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9d7hb" event={"ID":"bb106d85-8cbf-4ca1-ac98-496f521882c2","Type":"ContainerDied","Data":"f59dad749a09a6fc57904d26c1ae59df24ae5ee6bbedb033517cab45f54b56ca"} Nov 22 10:44:53 crc kubenswrapper[4772]: I1122 10:44:53.097558 4772 generic.go:334] "Generic (PLEG): container finished" podID="f6441995-690d-46c4-bd8c-56fa50f781cb" containerID="1315646752c9860f92d48652940d375e4b9a5a6eab44eddfe5044347ead97294" exitCode=0 Nov 22 10:44:53 crc kubenswrapper[4772]: I1122 10:44:53.097622 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hqg5j" event={"ID":"f6441995-690d-46c4-bd8c-56fa50f781cb","Type":"ContainerDied","Data":"1315646752c9860f92d48652940d375e4b9a5a6eab44eddfe5044347ead97294"} Nov 22 10:44:54 crc kubenswrapper[4772]: I1122 10:44:54.939123 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-ndfjs"] Nov 22 10:44:54 crc kubenswrapper[4772]: I1122 10:44:54.940340 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-ndfjs" Nov 22 10:44:54 crc kubenswrapper[4772]: I1122 10:44:54.954233 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-ndfjs"] Nov 22 10:44:55 crc kubenswrapper[4772]: I1122 10:44:55.026335 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq229\" (UniqueName: \"kubernetes.io/projected/829bfd95-458f-40bc-a194-3d893c88f4c6-kube-api-access-mq229\") pod \"image-registry-66df7c8f76-ndfjs\" (UID: \"829bfd95-458f-40bc-a194-3d893c88f4c6\") " pod="openshift-image-registry/image-registry-66df7c8f76-ndfjs" Nov 22 10:44:55 crc kubenswrapper[4772]: I1122 10:44:55.026394 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/829bfd95-458f-40bc-a194-3d893c88f4c6-bound-sa-token\") pod \"image-registry-66df7c8f76-ndfjs\" (UID: \"829bfd95-458f-40bc-a194-3d893c88f4c6\") " pod="openshift-image-registry/image-registry-66df7c8f76-ndfjs" Nov 22 10:44:55 crc kubenswrapper[4772]: I1122 10:44:55.026443 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/829bfd95-458f-40bc-a194-3d893c88f4c6-installation-pull-secrets\") pod \"image-registry-66df7c8f76-ndfjs\" (UID: \"829bfd95-458f-40bc-a194-3d893c88f4c6\") " pod="openshift-image-registry/image-registry-66df7c8f76-ndfjs" Nov 22 10:44:55 crc kubenswrapper[4772]: I1122 10:44:55.026475 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/829bfd95-458f-40bc-a194-3d893c88f4c6-registry-certificates\") pod \"image-registry-66df7c8f76-ndfjs\" (UID: \"829bfd95-458f-40bc-a194-3d893c88f4c6\") " pod="openshift-image-registry/image-registry-66df7c8f76-ndfjs" Nov 22 10:44:55 crc kubenswrapper[4772]: I1122 10:44:55.026673 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-ndfjs\" (UID: \"829bfd95-458f-40bc-a194-3d893c88f4c6\") " pod="openshift-image-registry/image-registry-66df7c8f76-ndfjs" Nov 22 10:44:55 crc kubenswrapper[4772]: I1122 10:44:55.026832 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/829bfd95-458f-40bc-a194-3d893c88f4c6-trusted-ca\") pod \"image-registry-66df7c8f76-ndfjs\" (UID: \"829bfd95-458f-40bc-a194-3d893c88f4c6\") " pod="openshift-image-registry/image-registry-66df7c8f76-ndfjs" Nov 22 10:44:55 crc kubenswrapper[4772]: I1122 10:44:55.026900 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/829bfd95-458f-40bc-a194-3d893c88f4c6-ca-trust-extracted\") pod \"image-registry-66df7c8f76-ndfjs\" (UID: \"829bfd95-458f-40bc-a194-3d893c88f4c6\") " pod="openshift-image-registry/image-registry-66df7c8f76-ndfjs" Nov 22 10:44:55 crc kubenswrapper[4772]: I1122 10:44:55.026991 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/829bfd95-458f-40bc-a194-3d893c88f4c6-registry-tls\") pod \"image-registry-66df7c8f76-ndfjs\" (UID: \"829bfd95-458f-40bc-a194-3d893c88f4c6\") " pod="openshift-image-registry/image-registry-66df7c8f76-ndfjs" Nov 22 10:44:55 crc kubenswrapper[4772]: I1122 10:44:55.085671 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-ndfjs\" (UID: \"829bfd95-458f-40bc-a194-3d893c88f4c6\") " pod="openshift-image-registry/image-registry-66df7c8f76-ndfjs" Nov 22 10:44:55 crc kubenswrapper[4772]: I1122 10:44:55.111904 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hqg5j" event={"ID":"f6441995-690d-46c4-bd8c-56fa50f781cb","Type":"ContainerStarted","Data":"0c248f327a17ec2a74078d8eae72935b605b9e4bbbc47afe2bb1d729cf8686cf"} Nov 22 10:44:55 crc kubenswrapper[4772]: I1122 10:44:55.114307 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9d7hb" event={"ID":"bb106d85-8cbf-4ca1-ac98-496f521882c2","Type":"ContainerStarted","Data":"fab7f665313dfffd996d33342615dd869bd03680de88ee96db6fa9156541f26c"} Nov 22 10:44:55 crc kubenswrapper[4772]: I1122 10:44:55.128981 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/829bfd95-458f-40bc-a194-3d893c88f4c6-installation-pull-secrets\") pod \"image-registry-66df7c8f76-ndfjs\" (UID: \"829bfd95-458f-40bc-a194-3d893c88f4c6\") " pod="openshift-image-registry/image-registry-66df7c8f76-ndfjs" Nov 22 10:44:55 crc kubenswrapper[4772]: I1122 10:44:55.129036 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/829bfd95-458f-40bc-a194-3d893c88f4c6-registry-certificates\") pod \"image-registry-66df7c8f76-ndfjs\" (UID: \"829bfd95-458f-40bc-a194-3d893c88f4c6\") " pod="openshift-image-registry/image-registry-66df7c8f76-ndfjs" Nov 22 10:44:55 crc kubenswrapper[4772]: I1122 10:44:55.129104 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/829bfd95-458f-40bc-a194-3d893c88f4c6-trusted-ca\") pod \"image-registry-66df7c8f76-ndfjs\" (UID: \"829bfd95-458f-40bc-a194-3d893c88f4c6\") " pod="openshift-image-registry/image-registry-66df7c8f76-ndfjs" Nov 22 10:44:55 crc kubenswrapper[4772]: I1122 10:44:55.129138 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/829bfd95-458f-40bc-a194-3d893c88f4c6-ca-trust-extracted\") pod \"image-registry-66df7c8f76-ndfjs\" (UID: \"829bfd95-458f-40bc-a194-3d893c88f4c6\") " pod="openshift-image-registry/image-registry-66df7c8f76-ndfjs" Nov 22 10:44:55 crc kubenswrapper[4772]: I1122 10:44:55.129175 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/829bfd95-458f-40bc-a194-3d893c88f4c6-registry-tls\") pod \"image-registry-66df7c8f76-ndfjs\" (UID: \"829bfd95-458f-40bc-a194-3d893c88f4c6\") " pod="openshift-image-registry/image-registry-66df7c8f76-ndfjs" Nov 22 10:44:55 crc kubenswrapper[4772]: I1122 10:44:55.129200 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-mq229\" (UniqueName: \"kubernetes.io/projected/829bfd95-458f-40bc-a194-3d893c88f4c6-kube-api-access-mq229\") pod \"image-registry-66df7c8f76-ndfjs\" (UID: \"829bfd95-458f-40bc-a194-3d893c88f4c6\") " pod="openshift-image-registry/image-registry-66df7c8f76-ndfjs" Nov 22 10:44:55 crc kubenswrapper[4772]: I1122 10:44:55.129225 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/829bfd95-458f-40bc-a194-3d893c88f4c6-bound-sa-token\") pod \"image-registry-66df7c8f76-ndfjs\" (UID: \"829bfd95-458f-40bc-a194-3d893c88f4c6\") " pod="openshift-image-registry/image-registry-66df7c8f76-ndfjs" Nov 22 10:44:55 crc kubenswrapper[4772]: I1122 10:44:55.130132 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/829bfd95-458f-40bc-a194-3d893c88f4c6-ca-trust-extracted\") pod \"image-registry-66df7c8f76-ndfjs\" (UID: \"829bfd95-458f-40bc-a194-3d893c88f4c6\") " pod="openshift-image-registry/image-registry-66df7c8f76-ndfjs" Nov 22 10:44:55 crc kubenswrapper[4772]: I1122 10:44:55.130964 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/829bfd95-458f-40bc-a194-3d893c88f4c6-registry-certificates\") pod \"image-registry-66df7c8f76-ndfjs\" (UID: \"829bfd95-458f-40bc-a194-3d893c88f4c6\") " pod="openshift-image-registry/image-registry-66df7c8f76-ndfjs" Nov 22 10:44:55 crc kubenswrapper[4772]: I1122 10:44:55.131070 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/829bfd95-458f-40bc-a194-3d893c88f4c6-trusted-ca\") pod \"image-registry-66df7c8f76-ndfjs\" (UID: \"829bfd95-458f-40bc-a194-3d893c88f4c6\") " pod="openshift-image-registry/image-registry-66df7c8f76-ndfjs" Nov 22 10:44:55 crc kubenswrapper[4772]: I1122 10:44:55.131362 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hqg5j" podStartSLOduration=3.665310817 podStartE2EDuration="6.131340433s" podCreationTimestamp="2025-11-22 10:44:49 +0000 UTC" firstStartedPulling="2025-11-22 10:44:51.047498198 +0000 UTC m=+411.286942692" lastFinishedPulling="2025-11-22 10:44:53.513527814 +0000 UTC m=+413.752972308" observedRunningTime="2025-11-22 10:44:55.130466471 +0000 UTC m=+415.369910965" watchObservedRunningTime="2025-11-22 10:44:55.131340433 +0000 UTC m=+415.370784927" Nov 22 10:44:55 crc kubenswrapper[4772]: I1122 10:44:55.136206 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/829bfd95-458f-40bc-a194-3d893c88f4c6-registry-tls\") pod \"image-registry-66df7c8f76-ndfjs\" (UID: \"829bfd95-458f-40bc-a194-3d893c88f4c6\") " pod="openshift-image-registry/image-registry-66df7c8f76-ndfjs" Nov 22 10:44:55 crc kubenswrapper[4772]: I1122 10:44:55.137679 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/829bfd95-458f-40bc-a194-3d893c88f4c6-installation-pull-secrets\") pod \"image-registry-66df7c8f76-ndfjs\" (UID: \"829bfd95-458f-40bc-a194-3d893c88f4c6\") " pod="openshift-image-registry/image-registry-66df7c8f76-ndfjs" Nov 22 10:44:55 crc kubenswrapper[4772]: I1122 10:44:55.149916 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/829bfd95-458f-40bc-a194-3d893c88f4c6-bound-sa-token\") pod \"image-registry-66df7c8f76-ndfjs\" (UID: \"829bfd95-458f-40bc-a194-3d893c88f4c6\") " pod="openshift-image-registry/image-registry-66df7c8f76-ndfjs" Nov 22 10:44:55 crc kubenswrapper[4772]: I1122 10:44:55.153417 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mq229\" (UniqueName: \"kubernetes.io/projected/829bfd95-458f-40bc-a194-3d893c88f4c6-kube-api-access-mq229\") pod \"image-registry-66df7c8f76-ndfjs\" (UID: \"829bfd95-458f-40bc-a194-3d893c88f4c6\") " pod="openshift-image-registry/image-registry-66df7c8f76-ndfjs" Nov 22 10:44:55 crc kubenswrapper[4772]: I1122 10:44:55.158422 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9d7hb" podStartSLOduration=3.7512016729999997 podStartE2EDuration="6.158392878s" podCreationTimestamp="2025-11-22 10:44:49 +0000 UTC" firstStartedPulling="2025-11-22 10:44:51.072066509 +0000 UTC m=+411.311511003" lastFinishedPulling="2025-11-22 10:44:53.479257724 +0000 UTC m=+413.718702208" observedRunningTime="2025-11-22 10:44:55.154841987 +0000 UTC m=+415.394286501" watchObservedRunningTime="2025-11-22 10:44:55.158392878 +0000 UTC m=+415.397837382" Nov 22 10:44:55 crc kubenswrapper[4772]: I1122 10:44:55.265228 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-ndfjs" Nov 22 10:44:55 crc kubenswrapper[4772]: I1122 10:44:55.483525 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-ndfjs"] Nov 22 10:44:55 crc kubenswrapper[4772]: W1122 10:44:55.494617 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod829bfd95_458f_40bc_a194_3d893c88f4c6.slice/crio-aee923bc0b86ab5de9c87bcb4f3abf238d5b51d339dccba064b4c8bed691787d WatchSource:0}: Error finding container aee923bc0b86ab5de9c87bcb4f3abf238d5b51d339dccba064b4c8bed691787d: Status 404 returned error can't find the container with id aee923bc0b86ab5de9c87bcb4f3abf238d5b51d339dccba064b4c8bed691787d Nov 22 10:44:56 crc kubenswrapper[4772]: I1122 10:44:56.121616 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-ndfjs" event={"ID":"829bfd95-458f-40bc-a194-3d893c88f4c6","Type":"ContainerStarted","Data":"b882fbd038a8c2ca8fd0e655a08e6e009611bae7227e465e968808eaa38a27ee"} Nov 22 10:44:56 crc kubenswrapper[4772]: I1122 10:44:56.122369 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-ndfjs" event={"ID":"829bfd95-458f-40bc-a194-3d893c88f4c6","Type":"ContainerStarted","Data":"aee923bc0b86ab5de9c87bcb4f3abf238d5b51d339dccba064b4c8bed691787d"} Nov 22 10:44:56 crc kubenswrapper[4772]: I1122 10:44:56.122393 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-ndfjs" Nov 22 10:44:56 crc kubenswrapper[4772]: I1122 10:44:56.149238 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-ndfjs" podStartSLOduration=2.149211347 podStartE2EDuration="2.149211347s" podCreationTimestamp="2025-11-22 10:44:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:44:56.1446416 +0000 UTC 
m=+416.384086104" watchObservedRunningTime="2025-11-22 10:44:56.149211347 +0000 UTC m=+416.388655841" Nov 22 10:44:57 crc kubenswrapper[4772]: I1122 10:44:57.302172 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bws7w" Nov 22 10:44:57 crc kubenswrapper[4772]: I1122 10:44:57.303077 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bws7w" Nov 22 10:44:57 crc kubenswrapper[4772]: I1122 10:44:57.366247 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bws7w" Nov 22 10:44:57 crc kubenswrapper[4772]: I1122 10:44:57.498852 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-v2wk4" Nov 22 10:44:57 crc kubenswrapper[4772]: I1122 10:44:57.498920 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-v2wk4" Nov 22 10:44:57 crc kubenswrapper[4772]: I1122 10:44:57.547463 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-v2wk4" Nov 22 10:44:58 crc kubenswrapper[4772]: I1122 10:44:58.182739 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-v2wk4" Nov 22 10:44:58 crc kubenswrapper[4772]: I1122 10:44:58.190147 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bws7w" Nov 22 10:44:59 crc kubenswrapper[4772]: I1122 10:44:59.753111 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9d7hb" Nov 22 10:44:59 crc kubenswrapper[4772]: I1122 10:44:59.753480 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9d7hb" Nov 22 10:44:59 crc kubenswrapper[4772]: I1122 10:44:59.974824 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hqg5j" Nov 22 10:44:59 crc kubenswrapper[4772]: I1122 10:44:59.974894 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hqg5j" Nov 22 10:45:00 crc kubenswrapper[4772]: I1122 10:45:00.019517 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hqg5j" Nov 22 10:45:00 crc kubenswrapper[4772]: I1122 10:45:00.141305 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396805-ltj2f"] Nov 22 10:45:00 crc kubenswrapper[4772]: I1122 10:45:00.142390 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396805-ltj2f" Nov 22 10:45:00 crc kubenswrapper[4772]: I1122 10:45:00.144802 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 22 10:45:00 crc kubenswrapper[4772]: I1122 10:45:00.146685 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 22 10:45:00 crc kubenswrapper[4772]: I1122 10:45:00.155921 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396805-ltj2f"] Nov 22 10:45:00 crc kubenswrapper[4772]: I1122 10:45:00.199064 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hqg5j" Nov 22 10:45:00 crc kubenswrapper[4772]: I1122 10:45:00.205993 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsdhn\" (UniqueName: \"kubernetes.io/projected/d85e5a1e-edd8-451b-aacb-892f34171757-kube-api-access-rsdhn\") pod \"collect-profiles-29396805-ltj2f\" (UID: \"d85e5a1e-edd8-451b-aacb-892f34171757\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396805-ltj2f" Nov 22 10:45:00 crc kubenswrapper[4772]: I1122 10:45:00.206390 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d85e5a1e-edd8-451b-aacb-892f34171757-secret-volume\") pod \"collect-profiles-29396805-ltj2f\" (UID: \"d85e5a1e-edd8-451b-aacb-892f34171757\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396805-ltj2f" Nov 22 10:45:00 crc kubenswrapper[4772]: I1122 10:45:00.206613 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d85e5a1e-edd8-451b-aacb-892f34171757-config-volume\") pod \"collect-profiles-29396805-ltj2f\" (UID: \"d85e5a1e-edd8-451b-aacb-892f34171757\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396805-ltj2f" Nov 22 10:45:00 crc kubenswrapper[4772]: I1122 10:45:00.308059 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d85e5a1e-edd8-451b-aacb-892f34171757-config-volume\") pod \"collect-profiles-29396805-ltj2f\" (UID: \"d85e5a1e-edd8-451b-aacb-892f34171757\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396805-ltj2f" Nov 22 10:45:00 crc kubenswrapper[4772]: I1122 10:45:00.308160 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsdhn\" (UniqueName: \"kubernetes.io/projected/d85e5a1e-edd8-451b-aacb-892f34171757-kube-api-access-rsdhn\") pod \"collect-profiles-29396805-ltj2f\" (UID: \"d85e5a1e-edd8-451b-aacb-892f34171757\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396805-ltj2f" Nov 22 10:45:00 crc kubenswrapper[4772]: I1122 10:45:00.308198 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d85e5a1e-edd8-451b-aacb-892f34171757-secret-volume\") pod \"collect-profiles-29396805-ltj2f\" (UID: \"d85e5a1e-edd8-451b-aacb-892f34171757\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396805-ltj2f" Nov 22 10:45:00 crc kubenswrapper[4772]: I1122 10:45:00.309319 
4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d85e5a1e-edd8-451b-aacb-892f34171757-config-volume\") pod \"collect-profiles-29396805-ltj2f\" (UID: \"d85e5a1e-edd8-451b-aacb-892f34171757\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396805-ltj2f" Nov 22 10:45:00 crc kubenswrapper[4772]: I1122 10:45:00.316869 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d85e5a1e-edd8-451b-aacb-892f34171757-secret-volume\") pod \"collect-profiles-29396805-ltj2f\" (UID: \"d85e5a1e-edd8-451b-aacb-892f34171757\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396805-ltj2f" Nov 22 10:45:00 crc kubenswrapper[4772]: I1122 10:45:00.332639 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsdhn\" (UniqueName: \"kubernetes.io/projected/d85e5a1e-edd8-451b-aacb-892f34171757-kube-api-access-rsdhn\") pod \"collect-profiles-29396805-ltj2f\" (UID: \"d85e5a1e-edd8-451b-aacb-892f34171757\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396805-ltj2f" Nov 22 10:45:00 crc kubenswrapper[4772]: I1122 10:45:00.471371 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396805-ltj2f" Nov 22 10:45:00 crc kubenswrapper[4772]: I1122 10:45:00.802484 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9d7hb" podUID="bb106d85-8cbf-4ca1-ac98-496f521882c2" containerName="registry-server" probeResult="failure" output=< Nov 22 10:45:00 crc kubenswrapper[4772]: timeout: failed to connect service ":50051" within 1s Nov 22 10:45:00 crc kubenswrapper[4772]: > Nov 22 10:45:00 crc kubenswrapper[4772]: I1122 10:45:00.950346 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396805-ltj2f"] Nov 22 10:45:00 crc kubenswrapper[4772]: W1122 10:45:00.963415 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd85e5a1e_edd8_451b_aacb_892f34171757.slice/crio-5023029c9c0cc54b3290633a1559ff7445af5e3833cfd0aa22c48dec31ef2128 WatchSource:0}: Error finding container 5023029c9c0cc54b3290633a1559ff7445af5e3833cfd0aa22c48dec31ef2128: Status 404 returned error can't find the container with id 5023029c9c0cc54b3290633a1559ff7445af5e3833cfd0aa22c48dec31ef2128 Nov 22 10:45:01 crc kubenswrapper[4772]: I1122 10:45:01.155256 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396805-ltj2f" event={"ID":"d85e5a1e-edd8-451b-aacb-892f34171757","Type":"ContainerStarted","Data":"5023029c9c0cc54b3290633a1559ff7445af5e3833cfd0aa22c48dec31ef2128"} Nov 22 10:45:01 crc kubenswrapper[4772]: I1122 10:45:01.533357 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 10:45:01 crc kubenswrapper[4772]: I1122 10:45:01.533828 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 10:45:02 crc kubenswrapper[4772]: I1122 10:45:02.162557 4772 generic.go:334] "Generic (PLEG): container finished" podID="d85e5a1e-edd8-451b-aacb-892f34171757" containerID="838b7be4cc69cdf5bfeea00cbb953de82a2779858473158f652099c881ff802b" exitCode=0 Nov 22 10:45:02 crc kubenswrapper[4772]: I1122 10:45:02.162628 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396805-ltj2f" event={"ID":"d85e5a1e-edd8-451b-aacb-892f34171757","Type":"ContainerDied","Data":"838b7be4cc69cdf5bfeea00cbb953de82a2779858473158f652099c881ff802b"} Nov 22 10:45:03 crc kubenswrapper[4772]: I1122 10:45:03.385447 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396805-ltj2f" Nov 22 10:45:03 crc kubenswrapper[4772]: I1122 10:45:03.454587 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d85e5a1e-edd8-451b-aacb-892f34171757-secret-volume\") pod \"d85e5a1e-edd8-451b-aacb-892f34171757\" (UID: \"d85e5a1e-edd8-451b-aacb-892f34171757\") " Nov 22 10:45:03 crc kubenswrapper[4772]: I1122 10:45:03.454696 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d85e5a1e-edd8-451b-aacb-892f34171757-config-volume\") pod \"d85e5a1e-edd8-451b-aacb-892f34171757\" (UID: \"d85e5a1e-edd8-451b-aacb-892f34171757\") " Nov 22 10:45:03 crc kubenswrapper[4772]: I1122 10:45:03.454763 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rsdhn\" (UniqueName: \"kubernetes.io/projected/d85e5a1e-edd8-451b-aacb-892f34171757-kube-api-access-rsdhn\") pod \"d85e5a1e-edd8-451b-aacb-892f34171757\" (UID: \"d85e5a1e-edd8-451b-aacb-892f34171757\") " Nov 22 10:45:03 crc kubenswrapper[4772]: I1122 10:45:03.455625 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d85e5a1e-edd8-451b-aacb-892f34171757-config-volume" (OuterVolumeSpecName: "config-volume") pod "d85e5a1e-edd8-451b-aacb-892f34171757" (UID: "d85e5a1e-edd8-451b-aacb-892f34171757"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:45:03 crc kubenswrapper[4772]: I1122 10:45:03.462638 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d85e5a1e-edd8-451b-aacb-892f34171757-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d85e5a1e-edd8-451b-aacb-892f34171757" (UID: "d85e5a1e-edd8-451b-aacb-892f34171757"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:45:03 crc kubenswrapper[4772]: I1122 10:45:03.462684 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d85e5a1e-edd8-451b-aacb-892f34171757-kube-api-access-rsdhn" (OuterVolumeSpecName: "kube-api-access-rsdhn") pod "d85e5a1e-edd8-451b-aacb-892f34171757" (UID: "d85e5a1e-edd8-451b-aacb-892f34171757"). InnerVolumeSpecName "kube-api-access-rsdhn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:45:03 crc kubenswrapper[4772]: I1122 10:45:03.557605 4772 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d85e5a1e-edd8-451b-aacb-892f34171757-config-volume\") on node \"crc\" DevicePath \"\"" Nov 22 10:45:03 crc kubenswrapper[4772]: I1122 10:45:03.557650 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rsdhn\" (UniqueName: \"kubernetes.io/projected/d85e5a1e-edd8-451b-aacb-892f34171757-kube-api-access-rsdhn\") on node \"crc\" DevicePath \"\"" Nov 22 10:45:03 crc kubenswrapper[4772]: I1122 10:45:03.557664 4772 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d85e5a1e-edd8-451b-aacb-892f34171757-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 22 10:45:04 crc kubenswrapper[4772]: I1122 10:45:04.174876 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396805-ltj2f" event={"ID":"d85e5a1e-edd8-451b-aacb-892f34171757","Type":"ContainerDied","Data":"5023029c9c0cc54b3290633a1559ff7445af5e3833cfd0aa22c48dec31ef2128"} Nov 22 10:45:04 crc kubenswrapper[4772]: I1122 10:45:04.174936 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5023029c9c0cc54b3290633a1559ff7445af5e3833cfd0aa22c48dec31ef2128" Nov 22 10:45:04 crc kubenswrapper[4772]: I1122 10:45:04.174965 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396805-ltj2f" Nov 22 10:45:09 crc kubenswrapper[4772]: I1122 10:45:09.824728 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9d7hb" Nov 22 10:45:09 crc kubenswrapper[4772]: I1122 10:45:09.874641 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9d7hb" Nov 22 10:45:15 crc kubenswrapper[4772]: I1122 10:45:15.270356 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-ndfjs" Nov 22 10:45:15 crc kubenswrapper[4772]: I1122 10:45:15.321354 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-hlvq7"] Nov 22 10:45:31 crc kubenswrapper[4772]: I1122 10:45:31.533472 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 10:45:31 crc kubenswrapper[4772]: I1122 10:45:31.536444 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 10:45:31 crc kubenswrapper[4772]: I1122 10:45:31.536676 4772 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" Nov 22 10:45:31 crc kubenswrapper[4772]: I1122 10:45:31.537770 4772 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"ab514550d792c628ef77edacd4d44003f6f64f78cadfba4aca08099f82d843e9"} pod="openshift-machine-config-operator/machine-config-daemon-wwshd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 10:45:31 crc kubenswrapper[4772]: I1122 10:45:31.537977 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" containerID="cri-o://ab514550d792c628ef77edacd4d44003f6f64f78cadfba4aca08099f82d843e9" gracePeriod=600 Nov 22 10:45:32 crc kubenswrapper[4772]: I1122 10:45:32.352612 4772 generic.go:334] "Generic (PLEG): container finished" podID="2386c238-461f-4956-940f-ac3c26eb052e" containerID="ab514550d792c628ef77edacd4d44003f6f64f78cadfba4aca08099f82d843e9" exitCode=0 Nov 22 10:45:32 crc kubenswrapper[4772]: I1122 10:45:32.352686 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" event={"ID":"2386c238-461f-4956-940f-ac3c26eb052e","Type":"ContainerDied","Data":"ab514550d792c628ef77edacd4d44003f6f64f78cadfba4aca08099f82d843e9"} Nov 22 10:45:32 crc kubenswrapper[4772]: I1122 10:45:32.353523 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" event={"ID":"2386c238-461f-4956-940f-ac3c26eb052e","Type":"ContainerStarted","Data":"d9b527511f0cba8ff0dded4c068d1e4f98f8ff79902eedf642cdbe0763702c86"} Nov 22 10:45:32 crc kubenswrapper[4772]: I1122 10:45:32.353584 4772 scope.go:117] "RemoveContainer" containerID="4247d5ab0095ab6c7315fdfcde34f8407d870723059e61c3417c20f80dc06ebd" Nov 22 10:45:40 crc kubenswrapper[4772]: I1122 10:45:40.361889 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" podUID="723118f2-f91b-4ca0-a6f9-4deaee014ef0" containerName="registry" containerID="cri-o://75f8f3b23ffe54f550d9d74371f7de9f351997528b3b58064bad4c5bcd5efcb4" gracePeriod=30 Nov 22 10:45:40 crc kubenswrapper[4772]: I1122 10:45:40.694877 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:45:40 crc kubenswrapper[4772]: I1122 10:45:40.842772 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " Nov 22 10:45:40 crc kubenswrapper[4772]: I1122 10:45:40.842866 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/723118f2-f91b-4ca0-a6f9-4deaee014ef0-registry-tls\") pod \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " Nov 22 10:45:40 crc kubenswrapper[4772]: I1122 10:45:40.842910 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/723118f2-f91b-4ca0-a6f9-4deaee014ef0-trusted-ca\") pod \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " Nov 22 10:45:40 crc kubenswrapper[4772]: I1122 10:45:40.842940 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/723118f2-f91b-4ca0-a6f9-4deaee014ef0-ca-trust-extracted\") pod \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " Nov 22 10:45:40 crc kubenswrapper[4772]: I1122 10:45:40.842970 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/723118f2-f91b-4ca0-a6f9-4deaee014ef0-installation-pull-secrets\") pod \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " Nov 22 10:45:40 crc kubenswrapper[4772]: I1122 10:45:40.842999 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/723118f2-f91b-4ca0-a6f9-4deaee014ef0-registry-certificates\") pod \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " Nov 22 10:45:40 crc kubenswrapper[4772]: I1122 10:45:40.843023 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/723118f2-f91b-4ca0-a6f9-4deaee014ef0-bound-sa-token\") pod \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " Nov 22 10:45:40 crc kubenswrapper[4772]: I1122 10:45:40.843049 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9l275\" (UniqueName: \"kubernetes.io/projected/723118f2-f91b-4ca0-a6f9-4deaee014ef0-kube-api-access-9l275\") pod \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\" (UID: \"723118f2-f91b-4ca0-a6f9-4deaee014ef0\") " Nov 22 10:45:40 crc kubenswrapper[4772]: I1122 10:45:40.844802 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/723118f2-f91b-4ca0-a6f9-4deaee014ef0-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "723118f2-f91b-4ca0-a6f9-4deaee014ef0" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:45:40 crc kubenswrapper[4772]: I1122 10:45:40.844866 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/723118f2-f91b-4ca0-a6f9-4deaee014ef0-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "723118f2-f91b-4ca0-a6f9-4deaee014ef0" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:45:40 crc kubenswrapper[4772]: I1122 10:45:40.850909 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/723118f2-f91b-4ca0-a6f9-4deaee014ef0-kube-api-access-9l275" (OuterVolumeSpecName: "kube-api-access-9l275") pod "723118f2-f91b-4ca0-a6f9-4deaee014ef0" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0"). InnerVolumeSpecName "kube-api-access-9l275". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:45:40 crc kubenswrapper[4772]: I1122 10:45:40.851583 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/723118f2-f91b-4ca0-a6f9-4deaee014ef0-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "723118f2-f91b-4ca0-a6f9-4deaee014ef0" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:45:40 crc kubenswrapper[4772]: I1122 10:45:40.852064 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/723118f2-f91b-4ca0-a6f9-4deaee014ef0-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "723118f2-f91b-4ca0-a6f9-4deaee014ef0" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:45:40 crc kubenswrapper[4772]: I1122 10:45:40.853276 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/723118f2-f91b-4ca0-a6f9-4deaee014ef0-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "723118f2-f91b-4ca0-a6f9-4deaee014ef0" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:45:40 crc kubenswrapper[4772]: I1122 10:45:40.856767 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "723118f2-f91b-4ca0-a6f9-4deaee014ef0" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 22 10:45:40 crc kubenswrapper[4772]: I1122 10:45:40.863501 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/723118f2-f91b-4ca0-a6f9-4deaee014ef0-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "723118f2-f91b-4ca0-a6f9-4deaee014ef0" (UID: "723118f2-f91b-4ca0-a6f9-4deaee014ef0"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 10:45:40 crc kubenswrapper[4772]: I1122 10:45:40.944286 4772 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/723118f2-f91b-4ca0-a6f9-4deaee014ef0-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 22 10:45:40 crc kubenswrapper[4772]: I1122 10:45:40.944336 4772 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/723118f2-f91b-4ca0-a6f9-4deaee014ef0-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 22 10:45:40 crc kubenswrapper[4772]: I1122 10:45:40.944354 4772 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/723118f2-f91b-4ca0-a6f9-4deaee014ef0-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 22 10:45:40 crc kubenswrapper[4772]: I1122 10:45:40.944368 4772 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/723118f2-f91b-4ca0-a6f9-4deaee014ef0-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 22 10:45:40 crc kubenswrapper[4772]: I1122 10:45:40.944382 4772 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/723118f2-f91b-4ca0-a6f9-4deaee014ef0-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 22 10:45:40 crc kubenswrapper[4772]: I1122 10:45:40.944394 4772 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/723118f2-f91b-4ca0-a6f9-4deaee014ef0-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 22 10:45:40 crc kubenswrapper[4772]: I1122 10:45:40.944405 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9l275\" (UniqueName: \"kubernetes.io/projected/723118f2-f91b-4ca0-a6f9-4deaee014ef0-kube-api-access-9l275\") on node \"crc\" DevicePath \"\"" Nov 22 10:45:41 crc kubenswrapper[4772]: I1122 10:45:41.417899 4772 generic.go:334] "Generic (PLEG): container finished" podID="723118f2-f91b-4ca0-a6f9-4deaee014ef0" containerID="75f8f3b23ffe54f550d9d74371f7de9f351997528b3b58064bad4c5bcd5efcb4" exitCode=0 Nov 22 10:45:41 crc kubenswrapper[4772]: I1122 10:45:41.417973 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" Nov 22 10:45:41 crc kubenswrapper[4772]: I1122 10:45:41.422223 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" event={"ID":"723118f2-f91b-4ca0-a6f9-4deaee014ef0","Type":"ContainerDied","Data":"75f8f3b23ffe54f550d9d74371f7de9f351997528b3b58064bad4c5bcd5efcb4"} Nov 22 10:45:41 crc kubenswrapper[4772]: I1122 10:45:41.422305 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-hlvq7" event={"ID":"723118f2-f91b-4ca0-a6f9-4deaee014ef0","Type":"ContainerDied","Data":"09fb9f4d414bfcc55de4f91a8c4cbd2df672d4db29dece59d1f0574a0b00933d"} Nov 22 10:45:41 crc kubenswrapper[4772]: I1122 10:45:41.422342 4772 scope.go:117] "RemoveContainer" containerID="75f8f3b23ffe54f550d9d74371f7de9f351997528b3b58064bad4c5bcd5efcb4" Nov 22 10:45:41 crc kubenswrapper[4772]: I1122 10:45:41.448428 4772 scope.go:117] "RemoveContainer" containerID="75f8f3b23ffe54f550d9d74371f7de9f351997528b3b58064bad4c5bcd5efcb4" Nov 22 10:45:41 crc kubenswrapper[4772]: E1122 10:45:41.449199 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75f8f3b23ffe54f550d9d74371f7de9f351997528b3b58064bad4c5bcd5efcb4\": container with ID starting with 75f8f3b23ffe54f550d9d74371f7de9f351997528b3b58064bad4c5bcd5efcb4 not found: ID does not exist" containerID="75f8f3b23ffe54f550d9d74371f7de9f351997528b3b58064bad4c5bcd5efcb4" Nov 22 10:45:41 crc kubenswrapper[4772]: I1122 10:45:41.449345 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75f8f3b23ffe54f550d9d74371f7de9f351997528b3b58064bad4c5bcd5efcb4"} err="failed to get container status \"75f8f3b23ffe54f550d9d74371f7de9f351997528b3b58064bad4c5bcd5efcb4\": rpc error: code = NotFound desc = could not find container \"75f8f3b23ffe54f550d9d74371f7de9f351997528b3b58064bad4c5bcd5efcb4\": container with ID starting with 75f8f3b23ffe54f550d9d74371f7de9f351997528b3b58064bad4c5bcd5efcb4 not found: ID does not exist" Nov 22 10:45:41 crc kubenswrapper[4772]: I1122 10:45:41.471623 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-hlvq7"] Nov 22 10:45:41 crc kubenswrapper[4772]: I1122 10:45:41.474566 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-hlvq7"] Nov 22 10:45:43 crc kubenswrapper[4772]: I1122 10:45:43.421177 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="723118f2-f91b-4ca0-a6f9-4deaee014ef0" path="/var/lib/kubelet/pods/723118f2-f91b-4ca0-a6f9-4deaee014ef0/volumes" Nov 22 10:47:31 crc kubenswrapper[4772]: I1122 10:47:31.533413 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 10:47:31 crc kubenswrapper[4772]: I1122 10:47:31.534928 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 10:48:01 crc 
kubenswrapper[4772]: I1122 10:48:01.533412 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 10:48:01 crc kubenswrapper[4772]: I1122 10:48:01.533958 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 10:48:31 crc kubenswrapper[4772]: I1122 10:48:31.533331 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 10:48:31 crc kubenswrapper[4772]: I1122 10:48:31.534103 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 10:48:31 crc kubenswrapper[4772]: I1122 10:48:31.534194 4772 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" Nov 22 10:48:31 crc kubenswrapper[4772]: I1122 10:48:31.535177 4772 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d9b527511f0cba8ff0dded4c068d1e4f98f8ff79902eedf642cdbe0763702c86"} pod="openshift-machine-config-operator/machine-config-daemon-wwshd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 10:48:31 crc kubenswrapper[4772]: I1122 10:48:31.535287 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" containerID="cri-o://d9b527511f0cba8ff0dded4c068d1e4f98f8ff79902eedf642cdbe0763702c86" gracePeriod=600 Nov 22 10:48:32 crc kubenswrapper[4772]: I1122 10:48:32.556997 4772 generic.go:334] "Generic (PLEG): container finished" podID="2386c238-461f-4956-940f-ac3c26eb052e" containerID="d9b527511f0cba8ff0dded4c068d1e4f98f8ff79902eedf642cdbe0763702c86" exitCode=0 Nov 22 10:48:32 crc kubenswrapper[4772]: I1122 10:48:32.557808 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" event={"ID":"2386c238-461f-4956-940f-ac3c26eb052e","Type":"ContainerDied","Data":"d9b527511f0cba8ff0dded4c068d1e4f98f8ff79902eedf642cdbe0763702c86"} Nov 22 10:48:32 crc kubenswrapper[4772]: I1122 10:48:32.557852 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" event={"ID":"2386c238-461f-4956-940f-ac3c26eb052e","Type":"ContainerStarted","Data":"1bc014ba5a352c64bb1f584ebb6c9325985805800f901b3f27190486054a5e50"} Nov 22 10:48:32 crc kubenswrapper[4772]: I1122 10:48:32.557886 4772 scope.go:117] "RemoveContainer" 
containerID="ab514550d792c628ef77edacd4d44003f6f64f78cadfba4aca08099f82d843e9" Nov 22 10:50:31 crc kubenswrapper[4772]: I1122 10:50:31.532963 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 10:50:31 crc kubenswrapper[4772]: I1122 10:50:31.533536 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 10:50:49 crc kubenswrapper[4772]: I1122 10:50:49.466101 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-vvs55"] Nov 22 10:50:49 crc kubenswrapper[4772]: I1122 10:50:49.468765 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-vvs55" podUID="bea9575a-d7c4-4aaa-bc01-eaee90317eea" containerName="controller-manager" containerID="cri-o://bf10b39a5b22424399b61790a81361c60871bb9cd0817a4652a8eac6731b48ca" gracePeriod=30 Nov 22 10:50:49 crc kubenswrapper[4772]: I1122 10:50:49.596305 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xmqs2"] Nov 22 10:50:49 crc kubenswrapper[4772]: I1122 10:50:49.596531 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xmqs2" podUID="7e6495dc-3c26-45e6-af62-a4957488ae51" containerName="route-controller-manager" containerID="cri-o://0ce4f4919f780b76ee07ac819de42455e0db8917444c47b2a45ea6b991e36258" gracePeriod=30 Nov 22 10:50:49 crc kubenswrapper[4772]: I1122 10:50:49.848844 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-vvs55" Nov 22 10:50:49 crc kubenswrapper[4772]: I1122 10:50:49.905507 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xmqs2" Nov 22 10:50:49 crc kubenswrapper[4772]: I1122 10:50:49.910175 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bea9575a-d7c4-4aaa-bc01-eaee90317eea-proxy-ca-bundles\") pod \"bea9575a-d7c4-4aaa-bc01-eaee90317eea\" (UID: \"bea9575a-d7c4-4aaa-bc01-eaee90317eea\") " Nov 22 10:50:49 crc kubenswrapper[4772]: I1122 10:50:49.910361 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62vrs\" (UniqueName: \"kubernetes.io/projected/bea9575a-d7c4-4aaa-bc01-eaee90317eea-kube-api-access-62vrs\") pod \"bea9575a-d7c4-4aaa-bc01-eaee90317eea\" (UID: \"bea9575a-d7c4-4aaa-bc01-eaee90317eea\") " Nov 22 10:50:49 crc kubenswrapper[4772]: I1122 10:50:49.910425 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bea9575a-d7c4-4aaa-bc01-eaee90317eea-serving-cert\") pod \"bea9575a-d7c4-4aaa-bc01-eaee90317eea\" (UID: \"bea9575a-d7c4-4aaa-bc01-eaee90317eea\") " Nov 22 10:50:49 crc kubenswrapper[4772]: I1122 10:50:49.910501 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bea9575a-d7c4-4aaa-bc01-eaee90317eea-config\") pod \"bea9575a-d7c4-4aaa-bc01-eaee90317eea\" (UID: \"bea9575a-d7c4-4aaa-bc01-eaee90317eea\") " Nov 22 10:50:49 crc kubenswrapper[4772]: I1122 10:50:49.910522 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bea9575a-d7c4-4aaa-bc01-eaee90317eea-client-ca\") pod \"bea9575a-d7c4-4aaa-bc01-eaee90317eea\" (UID: \"bea9575a-d7c4-4aaa-bc01-eaee90317eea\") " Nov 22 10:50:49 crc kubenswrapper[4772]: I1122 10:50:49.911421 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bea9575a-d7c4-4aaa-bc01-eaee90317eea-client-ca" (OuterVolumeSpecName: "client-ca") pod "bea9575a-d7c4-4aaa-bc01-eaee90317eea" (UID: "bea9575a-d7c4-4aaa-bc01-eaee90317eea"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:50:49 crc kubenswrapper[4772]: I1122 10:50:49.911724 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bea9575a-d7c4-4aaa-bc01-eaee90317eea-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "bea9575a-d7c4-4aaa-bc01-eaee90317eea" (UID: "bea9575a-d7c4-4aaa-bc01-eaee90317eea"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:50:49 crc kubenswrapper[4772]: I1122 10:50:49.911910 4772 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bea9575a-d7c4-4aaa-bc01-eaee90317eea-client-ca\") on node \"crc\" DevicePath \"\"" Nov 22 10:50:49 crc kubenswrapper[4772]: I1122 10:50:49.911926 4772 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bea9575a-d7c4-4aaa-bc01-eaee90317eea-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 22 10:50:49 crc kubenswrapper[4772]: I1122 10:50:49.912130 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bea9575a-d7c4-4aaa-bc01-eaee90317eea-config" (OuterVolumeSpecName: "config") pod "bea9575a-d7c4-4aaa-bc01-eaee90317eea" (UID: "bea9575a-d7c4-4aaa-bc01-eaee90317eea"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:50:49 crc kubenswrapper[4772]: I1122 10:50:49.918205 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bea9575a-d7c4-4aaa-bc01-eaee90317eea-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bea9575a-d7c4-4aaa-bc01-eaee90317eea" (UID: "bea9575a-d7c4-4aaa-bc01-eaee90317eea"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:50:49 crc kubenswrapper[4772]: I1122 10:50:49.918561 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bea9575a-d7c4-4aaa-bc01-eaee90317eea-kube-api-access-62vrs" (OuterVolumeSpecName: "kube-api-access-62vrs") pod "bea9575a-d7c4-4aaa-bc01-eaee90317eea" (UID: "bea9575a-d7c4-4aaa-bc01-eaee90317eea"). InnerVolumeSpecName "kube-api-access-62vrs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:50:50 crc kubenswrapper[4772]: I1122 10:50:50.012651 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2r7p\" (UniqueName: \"kubernetes.io/projected/7e6495dc-3c26-45e6-af62-a4957488ae51-kube-api-access-f2r7p\") pod \"7e6495dc-3c26-45e6-af62-a4957488ae51\" (UID: \"7e6495dc-3c26-45e6-af62-a4957488ae51\") " Nov 22 10:50:50 crc kubenswrapper[4772]: I1122 10:50:50.012710 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e6495dc-3c26-45e6-af62-a4957488ae51-config\") pod \"7e6495dc-3c26-45e6-af62-a4957488ae51\" (UID: \"7e6495dc-3c26-45e6-af62-a4957488ae51\") " Nov 22 10:50:50 crc kubenswrapper[4772]: I1122 10:50:50.012807 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e6495dc-3c26-45e6-af62-a4957488ae51-client-ca\") pod \"7e6495dc-3c26-45e6-af62-a4957488ae51\" (UID: \"7e6495dc-3c26-45e6-af62-a4957488ae51\") " Nov 22 10:50:50 crc kubenswrapper[4772]: I1122 10:50:50.012837 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e6495dc-3c26-45e6-af62-a4957488ae51-serving-cert\") pod \"7e6495dc-3c26-45e6-af62-a4957488ae51\" (UID: \"7e6495dc-3c26-45e6-af62-a4957488ae51\") " Nov 22 10:50:50 crc kubenswrapper[4772]: I1122 10:50:50.013028 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62vrs\" (UniqueName: \"kubernetes.io/projected/bea9575a-d7c4-4aaa-bc01-eaee90317eea-kube-api-access-62vrs\") on node \"crc\" DevicePath \"\"" Nov 22 10:50:50 crc kubenswrapper[4772]: I1122 10:50:50.013039 4772 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bea9575a-d7c4-4aaa-bc01-eaee90317eea-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 10:50:50 crc kubenswrapper[4772]: I1122 10:50:50.013077 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bea9575a-d7c4-4aaa-bc01-eaee90317eea-config\") on node \"crc\" DevicePath \"\"" Nov 22 10:50:50 crc kubenswrapper[4772]: I1122 10:50:50.013533 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e6495dc-3c26-45e6-af62-a4957488ae51-client-ca" (OuterVolumeSpecName: "client-ca") pod "7e6495dc-3c26-45e6-af62-a4957488ae51" (UID: "7e6495dc-3c26-45e6-af62-a4957488ae51"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:50:50 crc kubenswrapper[4772]: I1122 10:50:50.013842 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e6495dc-3c26-45e6-af62-a4957488ae51-config" (OuterVolumeSpecName: "config") pod "7e6495dc-3c26-45e6-af62-a4957488ae51" (UID: "7e6495dc-3c26-45e6-af62-a4957488ae51"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:50:50 crc kubenswrapper[4772]: I1122 10:50:50.015905 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e6495dc-3c26-45e6-af62-a4957488ae51-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7e6495dc-3c26-45e6-af62-a4957488ae51" (UID: "7e6495dc-3c26-45e6-af62-a4957488ae51"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:50:50 crc kubenswrapper[4772]: I1122 10:50:50.015954 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e6495dc-3c26-45e6-af62-a4957488ae51-kube-api-access-f2r7p" (OuterVolumeSpecName: "kube-api-access-f2r7p") pod "7e6495dc-3c26-45e6-af62-a4957488ae51" (UID: "7e6495dc-3c26-45e6-af62-a4957488ae51"). InnerVolumeSpecName "kube-api-access-f2r7p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:50:50 crc kubenswrapper[4772]: I1122 10:50:50.114252 4772 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e6495dc-3c26-45e6-af62-a4957488ae51-client-ca\") on node \"crc\" DevicePath \"\"" Nov 22 10:50:50 crc kubenswrapper[4772]: I1122 10:50:50.114283 4772 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e6495dc-3c26-45e6-af62-a4957488ae51-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 10:50:50 crc kubenswrapper[4772]: I1122 10:50:50.114294 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2r7p\" (UniqueName: \"kubernetes.io/projected/7e6495dc-3c26-45e6-af62-a4957488ae51-kube-api-access-f2r7p\") on node \"crc\" DevicePath \"\"" Nov 22 10:50:50 crc kubenswrapper[4772]: I1122 10:50:50.114305 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e6495dc-3c26-45e6-af62-a4957488ae51-config\") on node \"crc\" DevicePath \"\"" Nov 22 10:50:50 crc kubenswrapper[4772]: I1122 10:50:50.355673 4772 generic.go:334] "Generic (PLEG): container finished" podID="bea9575a-d7c4-4aaa-bc01-eaee90317eea" containerID="bf10b39a5b22424399b61790a81361c60871bb9cd0817a4652a8eac6731b48ca" exitCode=0 Nov 22 10:50:50 crc kubenswrapper[4772]: I1122 10:50:50.355728 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-vvs55" event={"ID":"bea9575a-d7c4-4aaa-bc01-eaee90317eea","Type":"ContainerDied","Data":"bf10b39a5b22424399b61790a81361c60871bb9cd0817a4652a8eac6731b48ca"} Nov 22 10:50:50 crc kubenswrapper[4772]: I1122 10:50:50.355775 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-vvs55" event={"ID":"bea9575a-d7c4-4aaa-bc01-eaee90317eea","Type":"ContainerDied","Data":"60ab0be0c4ef51aa2b527563e794c436f362f3915866568b2adc9ed51af5b04c"} Nov 22 10:50:50 crc kubenswrapper[4772]: I1122 10:50:50.355793 4772 scope.go:117] "RemoveContainer" containerID="bf10b39a5b22424399b61790a81361c60871bb9cd0817a4652a8eac6731b48ca" Nov 22 10:50:50 crc kubenswrapper[4772]: I1122 10:50:50.355747 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-vvs55" Nov 22 10:50:50 crc kubenswrapper[4772]: I1122 10:50:50.358504 4772 generic.go:334] "Generic (PLEG): container finished" podID="7e6495dc-3c26-45e6-af62-a4957488ae51" containerID="0ce4f4919f780b76ee07ac819de42455e0db8917444c47b2a45ea6b991e36258" exitCode=0 Nov 22 10:50:50 crc kubenswrapper[4772]: I1122 10:50:50.358544 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xmqs2" event={"ID":"7e6495dc-3c26-45e6-af62-a4957488ae51","Type":"ContainerDied","Data":"0ce4f4919f780b76ee07ac819de42455e0db8917444c47b2a45ea6b991e36258"} Nov 22 10:50:50 crc kubenswrapper[4772]: I1122 10:50:50.358568 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xmqs2" event={"ID":"7e6495dc-3c26-45e6-af62-a4957488ae51","Type":"ContainerDied","Data":"1d66a81e2bff1096b1dcd00424f8535f03c982fe8bf6dbb764cab6cc21081df0"} Nov 22 10:50:50 crc kubenswrapper[4772]: I1122 10:50:50.358630 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xmqs2" Nov 22 10:50:50 crc kubenswrapper[4772]: I1122 10:50:50.368448 4772 scope.go:117] "RemoveContainer" containerID="bf10b39a5b22424399b61790a81361c60871bb9cd0817a4652a8eac6731b48ca" Nov 22 10:50:50 crc kubenswrapper[4772]: E1122 10:50:50.368833 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf10b39a5b22424399b61790a81361c60871bb9cd0817a4652a8eac6731b48ca\": container with ID starting with bf10b39a5b22424399b61790a81361c60871bb9cd0817a4652a8eac6731b48ca not found: ID does not exist" containerID="bf10b39a5b22424399b61790a81361c60871bb9cd0817a4652a8eac6731b48ca" Nov 22 10:50:50 crc kubenswrapper[4772]: I1122 10:50:50.368886 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf10b39a5b22424399b61790a81361c60871bb9cd0817a4652a8eac6731b48ca"} err="failed to get container status \"bf10b39a5b22424399b61790a81361c60871bb9cd0817a4652a8eac6731b48ca\": rpc error: code = NotFound desc = could not find container \"bf10b39a5b22424399b61790a81361c60871bb9cd0817a4652a8eac6731b48ca\": container with ID starting with bf10b39a5b22424399b61790a81361c60871bb9cd0817a4652a8eac6731b48ca not found: ID does not exist" Nov 22 10:50:50 crc kubenswrapper[4772]: I1122 10:50:50.368918 4772 scope.go:117] "RemoveContainer" containerID="0ce4f4919f780b76ee07ac819de42455e0db8917444c47b2a45ea6b991e36258" Nov 22 10:50:50 crc kubenswrapper[4772]: I1122 10:50:50.385266 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xmqs2"] Nov 22 10:50:50 crc kubenswrapper[4772]: I1122 10:50:50.386286 4772 scope.go:117] "RemoveContainer" containerID="0ce4f4919f780b76ee07ac819de42455e0db8917444c47b2a45ea6b991e36258" Nov 22 10:50:50 crc kubenswrapper[4772]: E1122 10:50:50.387466 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ce4f4919f780b76ee07ac819de42455e0db8917444c47b2a45ea6b991e36258\": container with ID starting with 0ce4f4919f780b76ee07ac819de42455e0db8917444c47b2a45ea6b991e36258 not found: ID does not exist" containerID="0ce4f4919f780b76ee07ac819de42455e0db8917444c47b2a45ea6b991e36258" Nov 22 
10:50:50 crc kubenswrapper[4772]: I1122 10:50:50.387512 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ce4f4919f780b76ee07ac819de42455e0db8917444c47b2a45ea6b991e36258"} err="failed to get container status \"0ce4f4919f780b76ee07ac819de42455e0db8917444c47b2a45ea6b991e36258\": rpc error: code = NotFound desc = could not find container \"0ce4f4919f780b76ee07ac819de42455e0db8917444c47b2a45ea6b991e36258\": container with ID starting with 0ce4f4919f780b76ee07ac819de42455e0db8917444c47b2a45ea6b991e36258 not found: ID does not exist" Nov 22 10:50:50 crc kubenswrapper[4772]: I1122 10:50:50.387596 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xmqs2"] Nov 22 10:50:50 crc kubenswrapper[4772]: I1122 10:50:50.394807 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-vvs55"] Nov 22 10:50:50 crc kubenswrapper[4772]: I1122 10:50:50.397556 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-vvs55"] Nov 22 10:50:51 crc kubenswrapper[4772]: I1122 10:50:51.415219 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d6885d648-btt48"] Nov 22 10:50:51 crc kubenswrapper[4772]: E1122 10:50:51.415715 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e6495dc-3c26-45e6-af62-a4957488ae51" containerName="route-controller-manager" Nov 22 10:50:51 crc kubenswrapper[4772]: I1122 10:50:51.415726 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e6495dc-3c26-45e6-af62-a4957488ae51" containerName="route-controller-manager" Nov 22 10:50:51 crc kubenswrapper[4772]: E1122 10:50:51.415737 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d85e5a1e-edd8-451b-aacb-892f34171757" containerName="collect-profiles" Nov 22 10:50:51 crc kubenswrapper[4772]: I1122 10:50:51.415743 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="d85e5a1e-edd8-451b-aacb-892f34171757" containerName="collect-profiles" Nov 22 10:50:51 crc kubenswrapper[4772]: E1122 10:50:51.415752 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bea9575a-d7c4-4aaa-bc01-eaee90317eea" containerName="controller-manager" Nov 22 10:50:51 crc kubenswrapper[4772]: I1122 10:50:51.415758 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="bea9575a-d7c4-4aaa-bc01-eaee90317eea" containerName="controller-manager" Nov 22 10:50:51 crc kubenswrapper[4772]: E1122 10:50:51.415770 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="723118f2-f91b-4ca0-a6f9-4deaee014ef0" containerName="registry" Nov 22 10:50:51 crc kubenswrapper[4772]: I1122 10:50:51.415775 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="723118f2-f91b-4ca0-a6f9-4deaee014ef0" containerName="registry" Nov 22 10:50:51 crc kubenswrapper[4772]: I1122 10:50:51.415878 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="d85e5a1e-edd8-451b-aacb-892f34171757" containerName="collect-profiles" Nov 22 10:50:51 crc kubenswrapper[4772]: I1122 10:50:51.415891 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="723118f2-f91b-4ca0-a6f9-4deaee014ef0" containerName="registry" Nov 22 10:50:51 crc kubenswrapper[4772]: I1122 10:50:51.415900 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="bea9575a-d7c4-4aaa-bc01-eaee90317eea" 
containerName="controller-manager" Nov 22 10:50:51 crc kubenswrapper[4772]: I1122 10:50:51.415909 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e6495dc-3c26-45e6-af62-a4957488ae51" containerName="route-controller-manager" Nov 22 10:50:51 crc kubenswrapper[4772]: I1122 10:50:51.417091 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d6885d648-btt48" Nov 22 10:50:51 crc kubenswrapper[4772]: I1122 10:50:51.419724 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 22 10:50:51 crc kubenswrapper[4772]: I1122 10:50:51.419843 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 22 10:50:51 crc kubenswrapper[4772]: I1122 10:50:51.420342 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 22 10:50:51 crc kubenswrapper[4772]: I1122 10:50:51.420536 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 22 10:50:51 crc kubenswrapper[4772]: I1122 10:50:51.421521 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 22 10:50:51 crc kubenswrapper[4772]: I1122 10:50:51.422388 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 22 10:50:51 crc kubenswrapper[4772]: I1122 10:50:51.423222 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e6495dc-3c26-45e6-af62-a4957488ae51" path="/var/lib/kubelet/pods/7e6495dc-3c26-45e6-af62-a4957488ae51/volumes" Nov 22 10:50:51 crc kubenswrapper[4772]: I1122 10:50:51.423717 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bea9575a-d7c4-4aaa-bc01-eaee90317eea" path="/var/lib/kubelet/pods/bea9575a-d7c4-4aaa-bc01-eaee90317eea/volumes" Nov 22 10:50:51 crc kubenswrapper[4772]: I1122 10:50:51.424125 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-79c45cfd86-pqkg8"] Nov 22 10:50:51 crc kubenswrapper[4772]: I1122 10:50:51.424704 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-79c45cfd86-pqkg8" Nov 22 10:50:51 crc kubenswrapper[4772]: I1122 10:50:51.425895 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 22 10:50:51 crc kubenswrapper[4772]: I1122 10:50:51.426064 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 22 10:50:51 crc kubenswrapper[4772]: I1122 10:50:51.426316 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 22 10:50:51 crc kubenswrapper[4772]: I1122 10:50:51.426518 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 22 10:50:51 crc kubenswrapper[4772]: I1122 10:50:51.426632 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 22 10:50:51 crc kubenswrapper[4772]: I1122 10:50:51.428302 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 22 10:50:51 crc kubenswrapper[4772]: I1122 10:50:51.430618 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d6885d648-btt48"] Nov 22 10:50:51 crc kubenswrapper[4772]: I1122 10:50:51.434854 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 22 10:50:51 crc kubenswrapper[4772]: I1122 10:50:51.441007 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-79c45cfd86-pqkg8"] Nov 22 10:50:51 crc kubenswrapper[4772]: I1122 10:50:51.529196 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-db57f\" (UniqueName: \"kubernetes.io/projected/314fcf05-1a93-46cb-b27b-e69f03c7206d-kube-api-access-db57f\") pod \"route-controller-manager-5d6885d648-btt48\" (UID: \"314fcf05-1a93-46cb-b27b-e69f03c7206d\") " pod="openshift-route-controller-manager/route-controller-manager-5d6885d648-btt48" Nov 22 10:50:51 crc kubenswrapper[4772]: I1122 10:50:51.529245 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b3cf304-adba-40a9-ba99-42efa953f7ef-serving-cert\") pod \"controller-manager-79c45cfd86-pqkg8\" (UID: \"8b3cf304-adba-40a9-ba99-42efa953f7ef\") " pod="openshift-controller-manager/controller-manager-79c45cfd86-pqkg8" Nov 22 10:50:51 crc kubenswrapper[4772]: I1122 10:50:51.529306 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b3cf304-adba-40a9-ba99-42efa953f7ef-client-ca\") pod \"controller-manager-79c45cfd86-pqkg8\" (UID: \"8b3cf304-adba-40a9-ba99-42efa953f7ef\") " pod="openshift-controller-manager/controller-manager-79c45cfd86-pqkg8" Nov 22 10:50:51 crc kubenswrapper[4772]: I1122 10:50:51.529330 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/314fcf05-1a93-46cb-b27b-e69f03c7206d-client-ca\") pod \"route-controller-manager-5d6885d648-btt48\" (UID: \"314fcf05-1a93-46cb-b27b-e69f03c7206d\") " 
pod="openshift-route-controller-manager/route-controller-manager-5d6885d648-btt48" Nov 22 10:50:51 crc kubenswrapper[4772]: I1122 10:50:51.529351 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b3cf304-adba-40a9-ba99-42efa953f7ef-proxy-ca-bundles\") pod \"controller-manager-79c45cfd86-pqkg8\" (UID: \"8b3cf304-adba-40a9-ba99-42efa953f7ef\") " pod="openshift-controller-manager/controller-manager-79c45cfd86-pqkg8" Nov 22 10:50:51 crc kubenswrapper[4772]: I1122 10:50:51.529893 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/314fcf05-1a93-46cb-b27b-e69f03c7206d-config\") pod \"route-controller-manager-5d6885d648-btt48\" (UID: \"314fcf05-1a93-46cb-b27b-e69f03c7206d\") " pod="openshift-route-controller-manager/route-controller-manager-5d6885d648-btt48" Nov 22 10:50:51 crc kubenswrapper[4772]: I1122 10:50:51.529954 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dm2mw\" (UniqueName: \"kubernetes.io/projected/8b3cf304-adba-40a9-ba99-42efa953f7ef-kube-api-access-dm2mw\") pod \"controller-manager-79c45cfd86-pqkg8\" (UID: \"8b3cf304-adba-40a9-ba99-42efa953f7ef\") " pod="openshift-controller-manager/controller-manager-79c45cfd86-pqkg8" Nov 22 10:50:51 crc kubenswrapper[4772]: I1122 10:50:51.530022 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b3cf304-adba-40a9-ba99-42efa953f7ef-config\") pod \"controller-manager-79c45cfd86-pqkg8\" (UID: \"8b3cf304-adba-40a9-ba99-42efa953f7ef\") " pod="openshift-controller-manager/controller-manager-79c45cfd86-pqkg8" Nov 22 10:50:51 crc kubenswrapper[4772]: I1122 10:50:51.530146 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/314fcf05-1a93-46cb-b27b-e69f03c7206d-serving-cert\") pod \"route-controller-manager-5d6885d648-btt48\" (UID: \"314fcf05-1a93-46cb-b27b-e69f03c7206d\") " pod="openshift-route-controller-manager/route-controller-manager-5d6885d648-btt48" Nov 22 10:50:51 crc kubenswrapper[4772]: I1122 10:50:51.631564 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-db57f\" (UniqueName: \"kubernetes.io/projected/314fcf05-1a93-46cb-b27b-e69f03c7206d-kube-api-access-db57f\") pod \"route-controller-manager-5d6885d648-btt48\" (UID: \"314fcf05-1a93-46cb-b27b-e69f03c7206d\") " pod="openshift-route-controller-manager/route-controller-manager-5d6885d648-btt48" Nov 22 10:50:51 crc kubenswrapper[4772]: I1122 10:50:51.631610 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b3cf304-adba-40a9-ba99-42efa953f7ef-serving-cert\") pod \"controller-manager-79c45cfd86-pqkg8\" (UID: \"8b3cf304-adba-40a9-ba99-42efa953f7ef\") " pod="openshift-controller-manager/controller-manager-79c45cfd86-pqkg8" Nov 22 10:50:51 crc kubenswrapper[4772]: I1122 10:50:51.631644 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b3cf304-adba-40a9-ba99-42efa953f7ef-client-ca\") pod \"controller-manager-79c45cfd86-pqkg8\" (UID: \"8b3cf304-adba-40a9-ba99-42efa953f7ef\") " 
pod="openshift-controller-manager/controller-manager-79c45cfd86-pqkg8" Nov 22 10:50:51 crc kubenswrapper[4772]: I1122 10:50:51.631671 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/314fcf05-1a93-46cb-b27b-e69f03c7206d-client-ca\") pod \"route-controller-manager-5d6885d648-btt48\" (UID: \"314fcf05-1a93-46cb-b27b-e69f03c7206d\") " pod="openshift-route-controller-manager/route-controller-manager-5d6885d648-btt48" Nov 22 10:50:51 crc kubenswrapper[4772]: I1122 10:50:51.631694 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b3cf304-adba-40a9-ba99-42efa953f7ef-proxy-ca-bundles\") pod \"controller-manager-79c45cfd86-pqkg8\" (UID: \"8b3cf304-adba-40a9-ba99-42efa953f7ef\") " pod="openshift-controller-manager/controller-manager-79c45cfd86-pqkg8" Nov 22 10:50:51 crc kubenswrapper[4772]: I1122 10:50:51.631755 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/314fcf05-1a93-46cb-b27b-e69f03c7206d-config\") pod \"route-controller-manager-5d6885d648-btt48\" (UID: \"314fcf05-1a93-46cb-b27b-e69f03c7206d\") " pod="openshift-route-controller-manager/route-controller-manager-5d6885d648-btt48" Nov 22 10:50:51 crc kubenswrapper[4772]: I1122 10:50:51.631784 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dm2mw\" (UniqueName: \"kubernetes.io/projected/8b3cf304-adba-40a9-ba99-42efa953f7ef-kube-api-access-dm2mw\") pod \"controller-manager-79c45cfd86-pqkg8\" (UID: \"8b3cf304-adba-40a9-ba99-42efa953f7ef\") " pod="openshift-controller-manager/controller-manager-79c45cfd86-pqkg8" Nov 22 10:50:51 crc kubenswrapper[4772]: I1122 10:50:51.631810 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b3cf304-adba-40a9-ba99-42efa953f7ef-config\") pod \"controller-manager-79c45cfd86-pqkg8\" (UID: \"8b3cf304-adba-40a9-ba99-42efa953f7ef\") " pod="openshift-controller-manager/controller-manager-79c45cfd86-pqkg8" Nov 22 10:50:51 crc kubenswrapper[4772]: I1122 10:50:51.631838 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/314fcf05-1a93-46cb-b27b-e69f03c7206d-serving-cert\") pod \"route-controller-manager-5d6885d648-btt48\" (UID: \"314fcf05-1a93-46cb-b27b-e69f03c7206d\") " pod="openshift-route-controller-manager/route-controller-manager-5d6885d648-btt48" Nov 22 10:50:51 crc kubenswrapper[4772]: I1122 10:50:51.632924 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/314fcf05-1a93-46cb-b27b-e69f03c7206d-client-ca\") pod \"route-controller-manager-5d6885d648-btt48\" (UID: \"314fcf05-1a93-46cb-b27b-e69f03c7206d\") " pod="openshift-route-controller-manager/route-controller-manager-5d6885d648-btt48" Nov 22 10:50:51 crc kubenswrapper[4772]: I1122 10:50:51.633200 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/314fcf05-1a93-46cb-b27b-e69f03c7206d-config\") pod \"route-controller-manager-5d6885d648-btt48\" (UID: \"314fcf05-1a93-46cb-b27b-e69f03c7206d\") " pod="openshift-route-controller-manager/route-controller-manager-5d6885d648-btt48" Nov 22 10:50:51 crc kubenswrapper[4772]: I1122 10:50:51.633613 4772 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b3cf304-adba-40a9-ba99-42efa953f7ef-client-ca\") pod \"controller-manager-79c45cfd86-pqkg8\" (UID: \"8b3cf304-adba-40a9-ba99-42efa953f7ef\") " pod="openshift-controller-manager/controller-manager-79c45cfd86-pqkg8" Nov 22 10:50:51 crc kubenswrapper[4772]: I1122 10:50:51.634106 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b3cf304-adba-40a9-ba99-42efa953f7ef-proxy-ca-bundles\") pod \"controller-manager-79c45cfd86-pqkg8\" (UID: \"8b3cf304-adba-40a9-ba99-42efa953f7ef\") " pod="openshift-controller-manager/controller-manager-79c45cfd86-pqkg8" Nov 22 10:50:51 crc kubenswrapper[4772]: I1122 10:50:51.634351 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b3cf304-adba-40a9-ba99-42efa953f7ef-config\") pod \"controller-manager-79c45cfd86-pqkg8\" (UID: \"8b3cf304-adba-40a9-ba99-42efa953f7ef\") " pod="openshift-controller-manager/controller-manager-79c45cfd86-pqkg8" Nov 22 10:50:51 crc kubenswrapper[4772]: I1122 10:50:51.635892 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b3cf304-adba-40a9-ba99-42efa953f7ef-serving-cert\") pod \"controller-manager-79c45cfd86-pqkg8\" (UID: \"8b3cf304-adba-40a9-ba99-42efa953f7ef\") " pod="openshift-controller-manager/controller-manager-79c45cfd86-pqkg8" Nov 22 10:50:51 crc kubenswrapper[4772]: I1122 10:50:51.640758 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/314fcf05-1a93-46cb-b27b-e69f03c7206d-serving-cert\") pod \"route-controller-manager-5d6885d648-btt48\" (UID: \"314fcf05-1a93-46cb-b27b-e69f03c7206d\") " pod="openshift-route-controller-manager/route-controller-manager-5d6885d648-btt48" Nov 22 10:50:51 crc kubenswrapper[4772]: I1122 10:50:51.649746 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-db57f\" (UniqueName: \"kubernetes.io/projected/314fcf05-1a93-46cb-b27b-e69f03c7206d-kube-api-access-db57f\") pod \"route-controller-manager-5d6885d648-btt48\" (UID: \"314fcf05-1a93-46cb-b27b-e69f03c7206d\") " pod="openshift-route-controller-manager/route-controller-manager-5d6885d648-btt48" Nov 22 10:50:51 crc kubenswrapper[4772]: I1122 10:50:51.650368 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dm2mw\" (UniqueName: \"kubernetes.io/projected/8b3cf304-adba-40a9-ba99-42efa953f7ef-kube-api-access-dm2mw\") pod \"controller-manager-79c45cfd86-pqkg8\" (UID: \"8b3cf304-adba-40a9-ba99-42efa953f7ef\") " pod="openshift-controller-manager/controller-manager-79c45cfd86-pqkg8" Nov 22 10:50:51 crc kubenswrapper[4772]: I1122 10:50:51.735930 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d6885d648-btt48" Nov 22 10:50:51 crc kubenswrapper[4772]: I1122 10:50:51.749371 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-79c45cfd86-pqkg8" Nov 22 10:50:51 crc kubenswrapper[4772]: I1122 10:50:51.958544 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-79c45cfd86-pqkg8"] Nov 22 10:50:52 crc kubenswrapper[4772]: I1122 10:50:52.114505 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d6885d648-btt48"] Nov 22 10:50:52 crc kubenswrapper[4772]: W1122 10:50:52.120130 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod314fcf05_1a93_46cb_b27b_e69f03c7206d.slice/crio-750963c9443fc03216560dc2be2af45717fb66075c081fc577060e8bad50857c WatchSource:0}: Error finding container 750963c9443fc03216560dc2be2af45717fb66075c081fc577060e8bad50857c: Status 404 returned error can't find the container with id 750963c9443fc03216560dc2be2af45717fb66075c081fc577060e8bad50857c Nov 22 10:50:52 crc kubenswrapper[4772]: I1122 10:50:52.370868 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-79c45cfd86-pqkg8" event={"ID":"8b3cf304-adba-40a9-ba99-42efa953f7ef","Type":"ContainerStarted","Data":"2dc596627d86e5fcf0514191e920d6975eb1695f231ca94fd71e05ef5b322b7c"} Nov 22 10:50:52 crc kubenswrapper[4772]: I1122 10:50:52.371119 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-79c45cfd86-pqkg8" event={"ID":"8b3cf304-adba-40a9-ba99-42efa953f7ef","Type":"ContainerStarted","Data":"dea1e89cf46cb0fec8ef53a06539df325130b4ec00de09f18dfbf1883434cca1"} Nov 22 10:50:52 crc kubenswrapper[4772]: I1122 10:50:52.371136 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-79c45cfd86-pqkg8" Nov 22 10:50:52 crc kubenswrapper[4772]: I1122 10:50:52.373932 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d6885d648-btt48" event={"ID":"314fcf05-1a93-46cb-b27b-e69f03c7206d","Type":"ContainerStarted","Data":"f2954ce57a012ba4c97492ca5fd9bc5b264f4a386f8a3d3e63319533903b7005"} Nov 22 10:50:52 crc kubenswrapper[4772]: I1122 10:50:52.373954 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d6885d648-btt48" event={"ID":"314fcf05-1a93-46cb-b27b-e69f03c7206d","Type":"ContainerStarted","Data":"750963c9443fc03216560dc2be2af45717fb66075c081fc577060e8bad50857c"} Nov 22 10:50:52 crc kubenswrapper[4772]: I1122 10:50:52.374153 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5d6885d648-btt48" Nov 22 10:50:52 crc kubenswrapper[4772]: I1122 10:50:52.375691 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-79c45cfd86-pqkg8" Nov 22 10:50:52 crc kubenswrapper[4772]: I1122 10:50:52.387765 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-79c45cfd86-pqkg8" podStartSLOduration=3.387747934 podStartE2EDuration="3.387747934s" podCreationTimestamp="2025-11-22 10:50:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:50:52.386866732 +0000 UTC m=+772.626311226" 
watchObservedRunningTime="2025-11-22 10:50:52.387747934 +0000 UTC m=+772.627192428" Nov 22 10:50:52 crc kubenswrapper[4772]: I1122 10:50:52.425001 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5d6885d648-btt48" podStartSLOduration=3.424985906 podStartE2EDuration="3.424985906s" podCreationTimestamp="2025-11-22 10:50:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:50:52.423882888 +0000 UTC m=+772.663327382" watchObservedRunningTime="2025-11-22 10:50:52.424985906 +0000 UTC m=+772.664430400" Nov 22 10:50:52 crc kubenswrapper[4772]: I1122 10:50:52.576282 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5d6885d648-btt48" Nov 22 10:50:55 crc kubenswrapper[4772]: I1122 10:50:55.910593 4772 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 22 10:51:01 crc kubenswrapper[4772]: I1122 10:51:01.532580 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 10:51:01 crc kubenswrapper[4772]: I1122 10:51:01.532851 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 10:51:31 crc kubenswrapper[4772]: I1122 10:51:31.532657 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 10:51:31 crc kubenswrapper[4772]: I1122 10:51:31.533169 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 10:51:31 crc kubenswrapper[4772]: I1122 10:51:31.533217 4772 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" Nov 22 10:51:31 crc kubenswrapper[4772]: I1122 10:51:31.533730 4772 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1bc014ba5a352c64bb1f584ebb6c9325985805800f901b3f27190486054a5e50"} pod="openshift-machine-config-operator/machine-config-daemon-wwshd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 10:51:31 crc kubenswrapper[4772]: I1122 10:51:31.533777 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" 
containerID="cri-o://1bc014ba5a352c64bb1f584ebb6c9325985805800f901b3f27190486054a5e50" gracePeriod=600 Nov 22 10:51:32 crc kubenswrapper[4772]: I1122 10:51:32.583477 4772 generic.go:334] "Generic (PLEG): container finished" podID="2386c238-461f-4956-940f-ac3c26eb052e" containerID="1bc014ba5a352c64bb1f584ebb6c9325985805800f901b3f27190486054a5e50" exitCode=0 Nov 22 10:51:32 crc kubenswrapper[4772]: I1122 10:51:32.583668 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" event={"ID":"2386c238-461f-4956-940f-ac3c26eb052e","Type":"ContainerDied","Data":"1bc014ba5a352c64bb1f584ebb6c9325985805800f901b3f27190486054a5e50"} Nov 22 10:51:32 crc kubenswrapper[4772]: I1122 10:51:32.583865 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" event={"ID":"2386c238-461f-4956-940f-ac3c26eb052e","Type":"ContainerStarted","Data":"6ed5ce78086a642e7415af1fd3d7071bae3e10a61431b8613ee77406e828d8f3"} Nov 22 10:51:32 crc kubenswrapper[4772]: I1122 10:51:32.583895 4772 scope.go:117] "RemoveContainer" containerID="d9b527511f0cba8ff0dded4c068d1e4f98f8ff79902eedf642cdbe0763702c86" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.296763 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-mfm49"] Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.297632 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" podUID="fd84e05e-cfd6-46d5-bd23-30689addcd8b" containerName="ovn-controller" containerID="cri-o://bc7ab7764e2d893152e965392c916dff7876acb81d134cdec2615b253dbee64e" gracePeriod=30 Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.297708 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" podUID="fd84e05e-cfd6-46d5-bd23-30689addcd8b" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://3372362ced27c1e5e6ed3271a6b0fd19c04ce1d12a6c0b45e60b4f19b9d9d9e4" gracePeriod=30 Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.297743 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" podUID="fd84e05e-cfd6-46d5-bd23-30689addcd8b" containerName="kube-rbac-proxy-node" containerID="cri-o://405215cb0867af5a98dd2c0948c12867783df5de69f3e68c0c420e94bd41fbb9" gracePeriod=30 Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.297750 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" podUID="fd84e05e-cfd6-46d5-bd23-30689addcd8b" containerName="nbdb" containerID="cri-o://7959ee06497c972cc6915e3ab426dc2bba73e58343902084895fcca45fc97a61" gracePeriod=30 Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.297683 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" podUID="fd84e05e-cfd6-46d5-bd23-30689addcd8b" containerName="sbdb" containerID="cri-o://4ac96e66a7d3d7899a8196d07614bccdd741145d5dc57c26b30fc4e127c6a94e" gracePeriod=30 Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.297785 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" podUID="fd84e05e-cfd6-46d5-bd23-30689addcd8b" containerName="ovn-acl-logging" 
containerID="cri-o://da55e361a3ec7dca7a00a9ccac8e72cf43be0ebb8973fcac40e3bf86dac5237e" gracePeriod=30 Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.297835 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" podUID="fd84e05e-cfd6-46d5-bd23-30689addcd8b" containerName="northd" containerID="cri-o://34dfec56a475b9928ac7803e64d6fd305028a7f8d66647c8d06eeaa12cb73838" gracePeriod=30 Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.338864 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" podUID="fd84e05e-cfd6-46d5-bd23-30689addcd8b" containerName="ovnkube-controller" containerID="cri-o://b357f837243332636515052b5f466b88738ee0abcf1012fcab2f395262022b16" gracePeriod=30 Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.586031 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mfm49_fd84e05e-cfd6-46d5-bd23-30689addcd8b/ovnkube-controller/3.log" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.587631 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mfm49_fd84e05e-cfd6-46d5-bd23-30689addcd8b/ovn-acl-logging/0.log" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.587990 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mfm49_fd84e05e-cfd6-46d5-bd23-30689addcd8b/ovn-controller/0.log" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.588330 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.632835 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-prstc"] Nov 22 10:51:49 crc kubenswrapper[4772]: E1122 10:51:49.633086 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd84e05e-cfd6-46d5-bd23-30689addcd8b" containerName="kubecfg-setup" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.633107 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd84e05e-cfd6-46d5-bd23-30689addcd8b" containerName="kubecfg-setup" Nov 22 10:51:49 crc kubenswrapper[4772]: E1122 10:51:49.633120 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd84e05e-cfd6-46d5-bd23-30689addcd8b" containerName="ovnkube-controller" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.633129 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd84e05e-cfd6-46d5-bd23-30689addcd8b" containerName="ovnkube-controller" Nov 22 10:51:49 crc kubenswrapper[4772]: E1122 10:51:49.633193 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd84e05e-cfd6-46d5-bd23-30689addcd8b" containerName="kube-rbac-proxy-node" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.633203 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd84e05e-cfd6-46d5-bd23-30689addcd8b" containerName="kube-rbac-proxy-node" Nov 22 10:51:49 crc kubenswrapper[4772]: E1122 10:51:49.633214 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd84e05e-cfd6-46d5-bd23-30689addcd8b" containerName="ovnkube-controller" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.633225 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd84e05e-cfd6-46d5-bd23-30689addcd8b" containerName="ovnkube-controller" Nov 22 10:51:49 crc kubenswrapper[4772]: E1122 
10:51:49.633236 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd84e05e-cfd6-46d5-bd23-30689addcd8b" containerName="ovn-controller" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.633267 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd84e05e-cfd6-46d5-bd23-30689addcd8b" containerName="ovn-controller" Nov 22 10:51:49 crc kubenswrapper[4772]: E1122 10:51:49.633277 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd84e05e-cfd6-46d5-bd23-30689addcd8b" containerName="sbdb" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.633286 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd84e05e-cfd6-46d5-bd23-30689addcd8b" containerName="sbdb" Nov 22 10:51:49 crc kubenswrapper[4772]: E1122 10:51:49.633296 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd84e05e-cfd6-46d5-bd23-30689addcd8b" containerName="ovnkube-controller" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.633304 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd84e05e-cfd6-46d5-bd23-30689addcd8b" containerName="ovnkube-controller" Nov 22 10:51:49 crc kubenswrapper[4772]: E1122 10:51:49.633315 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd84e05e-cfd6-46d5-bd23-30689addcd8b" containerName="ovnkube-controller" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.633345 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd84e05e-cfd6-46d5-bd23-30689addcd8b" containerName="ovnkube-controller" Nov 22 10:51:49 crc kubenswrapper[4772]: E1122 10:51:49.633357 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd84e05e-cfd6-46d5-bd23-30689addcd8b" containerName="northd" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.633364 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd84e05e-cfd6-46d5-bd23-30689addcd8b" containerName="northd" Nov 22 10:51:49 crc kubenswrapper[4772]: E1122 10:51:49.633374 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd84e05e-cfd6-46d5-bd23-30689addcd8b" containerName="ovn-acl-logging" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.633381 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd84e05e-cfd6-46d5-bd23-30689addcd8b" containerName="ovn-acl-logging" Nov 22 10:51:49 crc kubenswrapper[4772]: E1122 10:51:49.633392 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd84e05e-cfd6-46d5-bd23-30689addcd8b" containerName="kube-rbac-proxy-ovn-metrics" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.633399 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd84e05e-cfd6-46d5-bd23-30689addcd8b" containerName="kube-rbac-proxy-ovn-metrics" Nov 22 10:51:49 crc kubenswrapper[4772]: E1122 10:51:49.633469 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd84e05e-cfd6-46d5-bd23-30689addcd8b" containerName="nbdb" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.633475 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd84e05e-cfd6-46d5-bd23-30689addcd8b" containerName="nbdb" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.633639 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd84e05e-cfd6-46d5-bd23-30689addcd8b" containerName="ovnkube-controller" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.633653 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd84e05e-cfd6-46d5-bd23-30689addcd8b" containerName="ovnkube-controller" Nov 22 10:51:49 crc kubenswrapper[4772]: 
I1122 10:51:49.633660 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd84e05e-cfd6-46d5-bd23-30689addcd8b" containerName="sbdb" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.633668 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd84e05e-cfd6-46d5-bd23-30689addcd8b" containerName="kube-rbac-proxy-node" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.633699 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd84e05e-cfd6-46d5-bd23-30689addcd8b" containerName="nbdb" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.633710 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd84e05e-cfd6-46d5-bd23-30689addcd8b" containerName="kube-rbac-proxy-ovn-metrics" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.633719 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd84e05e-cfd6-46d5-bd23-30689addcd8b" containerName="northd" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.633728 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd84e05e-cfd6-46d5-bd23-30689addcd8b" containerName="ovnkube-controller" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.633736 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd84e05e-cfd6-46d5-bd23-30689addcd8b" containerName="ovn-acl-logging" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.633744 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd84e05e-cfd6-46d5-bd23-30689addcd8b" containerName="ovn-controller" Nov 22 10:51:49 crc kubenswrapper[4772]: E1122 10:51:49.633893 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd84e05e-cfd6-46d5-bd23-30689addcd8b" containerName="ovnkube-controller" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.633904 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd84e05e-cfd6-46d5-bd23-30689addcd8b" containerName="ovnkube-controller" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.634081 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd84e05e-cfd6-46d5-bd23-30689addcd8b" containerName="ovnkube-controller" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.634102 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd84e05e-cfd6-46d5-bd23-30689addcd8b" containerName="ovnkube-controller" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.636670 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-prstc" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.708368 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-host-cni-netd\") pod \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.708424 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-systemd-units\") pod \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.708443 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-run-systemd\") pod \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.708459 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-log-socket\") pod \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.708476 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-var-lib-openvswitch\") pod \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.708509 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nkcfw\" (UniqueName: \"kubernetes.io/projected/fd84e05e-cfd6-46d5-bd23-30689addcd8b-kube-api-access-nkcfw\") pod \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.708531 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.708550 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fd84e05e-cfd6-46d5-bd23-30689addcd8b-env-overrides\") pod \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.708565 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fd84e05e-cfd6-46d5-bd23-30689addcd8b-ovnkube-config\") pod \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.708604 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-node-log\") pod \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.708623 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-run-ovn\") pod \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.708642 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fd84e05e-cfd6-46d5-bd23-30689addcd8b-ovn-node-metrics-cert\") pod \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.708660 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-host-kubelet\") pod \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.708687 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/fd84e05e-cfd6-46d5-bd23-30689addcd8b-ovnkube-script-lib\") pod \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.708704 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-host-slash\") pod \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.708721 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-host-cni-bin\") pod \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.708737 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-host-run-ovn-kubernetes\") pod \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.708757 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-host-run-netns\") pod \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.708770 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-run-openvswitch\") pod \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.708783 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-etc-openvswitch\") pod \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\" (UID: \"fd84e05e-cfd6-46d5-bd23-30689addcd8b\") " Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.708988 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "fd84e05e-cfd6-46d5-bd23-30689addcd8b" (UID: "fd84e05e-cfd6-46d5-bd23-30689addcd8b"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.709014 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-host-slash" (OuterVolumeSpecName: "host-slash") pod "fd84e05e-cfd6-46d5-bd23-30689addcd8b" (UID: "fd84e05e-cfd6-46d5-bd23-30689addcd8b"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.709023 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "fd84e05e-cfd6-46d5-bd23-30689addcd8b" (UID: "fd84e05e-cfd6-46d5-bd23-30689addcd8b"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.709041 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "fd84e05e-cfd6-46d5-bd23-30689addcd8b" (UID: "fd84e05e-cfd6-46d5-bd23-30689addcd8b"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.709058 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "fd84e05e-cfd6-46d5-bd23-30689addcd8b" (UID: "fd84e05e-cfd6-46d5-bd23-30689addcd8b"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.709073 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "fd84e05e-cfd6-46d5-bd23-30689addcd8b" (UID: "fd84e05e-cfd6-46d5-bd23-30689addcd8b"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.709089 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "fd84e05e-cfd6-46d5-bd23-30689addcd8b" (UID: "fd84e05e-cfd6-46d5-bd23-30689addcd8b"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.709081 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "fd84e05e-cfd6-46d5-bd23-30689addcd8b" (UID: "fd84e05e-cfd6-46d5-bd23-30689addcd8b"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.709106 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "fd84e05e-cfd6-46d5-bd23-30689addcd8b" (UID: "fd84e05e-cfd6-46d5-bd23-30689addcd8b"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.709126 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "fd84e05e-cfd6-46d5-bd23-30689addcd8b" (UID: "fd84e05e-cfd6-46d5-bd23-30689addcd8b"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.709144 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "fd84e05e-cfd6-46d5-bd23-30689addcd8b" (UID: "fd84e05e-cfd6-46d5-bd23-30689addcd8b"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.709144 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "fd84e05e-cfd6-46d5-bd23-30689addcd8b" (UID: "fd84e05e-cfd6-46d5-bd23-30689addcd8b"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.709582 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-log-socket" (OuterVolumeSpecName: "log-socket") pod "fd84e05e-cfd6-46d5-bd23-30689addcd8b" (UID: "fd84e05e-cfd6-46d5-bd23-30689addcd8b"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.709896 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd84e05e-cfd6-46d5-bd23-30689addcd8b-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "fd84e05e-cfd6-46d5-bd23-30689addcd8b" (UID: "fd84e05e-cfd6-46d5-bd23-30689addcd8b"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.709905 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd84e05e-cfd6-46d5-bd23-30689addcd8b-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "fd84e05e-cfd6-46d5-bd23-30689addcd8b" (UID: "fd84e05e-cfd6-46d5-bd23-30689addcd8b"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.710072 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd84e05e-cfd6-46d5-bd23-30689addcd8b-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "fd84e05e-cfd6-46d5-bd23-30689addcd8b" (UID: "fd84e05e-cfd6-46d5-bd23-30689addcd8b"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.710189 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-node-log" (OuterVolumeSpecName: "node-log") pod "fd84e05e-cfd6-46d5-bd23-30689addcd8b" (UID: "fd84e05e-cfd6-46d5-bd23-30689addcd8b"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.714226 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd84e05e-cfd6-46d5-bd23-30689addcd8b-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "fd84e05e-cfd6-46d5-bd23-30689addcd8b" (UID: "fd84e05e-cfd6-46d5-bd23-30689addcd8b"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.714609 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd84e05e-cfd6-46d5-bd23-30689addcd8b-kube-api-access-nkcfw" (OuterVolumeSpecName: "kube-api-access-nkcfw") pod "fd84e05e-cfd6-46d5-bd23-30689addcd8b" (UID: "fd84e05e-cfd6-46d5-bd23-30689addcd8b"). InnerVolumeSpecName "kube-api-access-nkcfw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.721672 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mfm49_fd84e05e-cfd6-46d5-bd23-30689addcd8b/ovnkube-controller/3.log" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.725642 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "fd84e05e-cfd6-46d5-bd23-30689addcd8b" (UID: "fd84e05e-cfd6-46d5-bd23-30689addcd8b"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.726287 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mfm49_fd84e05e-cfd6-46d5-bd23-30689addcd8b/ovn-acl-logging/0.log" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.726780 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mfm49_fd84e05e-cfd6-46d5-bd23-30689addcd8b/ovn-controller/0.log" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.727223 4772 generic.go:334] "Generic (PLEG): container finished" podID="fd84e05e-cfd6-46d5-bd23-30689addcd8b" containerID="b357f837243332636515052b5f466b88738ee0abcf1012fcab2f395262022b16" exitCode=0 Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.727245 4772 generic.go:334] "Generic (PLEG): container finished" podID="fd84e05e-cfd6-46d5-bd23-30689addcd8b" containerID="4ac96e66a7d3d7899a8196d07614bccdd741145d5dc57c26b30fc4e127c6a94e" exitCode=0 Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.727253 4772 generic.go:334] "Generic (PLEG): container finished" podID="fd84e05e-cfd6-46d5-bd23-30689addcd8b" containerID="7959ee06497c972cc6915e3ab426dc2bba73e58343902084895fcca45fc97a61" exitCode=0 Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.727260 4772 generic.go:334] "Generic (PLEG): container finished" podID="fd84e05e-cfd6-46d5-bd23-30689addcd8b" containerID="34dfec56a475b9928ac7803e64d6fd305028a7f8d66647c8d06eeaa12cb73838" exitCode=0 Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.727266 4772 generic.go:334] "Generic (PLEG): container finished" podID="fd84e05e-cfd6-46d5-bd23-30689addcd8b" containerID="3372362ced27c1e5e6ed3271a6b0fd19c04ce1d12a6c0b45e60b4f19b9d9d9e4" exitCode=0 Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.727273 4772 generic.go:334] "Generic (PLEG): container finished" podID="fd84e05e-cfd6-46d5-bd23-30689addcd8b" containerID="405215cb0867af5a98dd2c0948c12867783df5de69f3e68c0c420e94bd41fbb9" exitCode=0 Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.727280 4772 generic.go:334] "Generic (PLEG): container finished" podID="fd84e05e-cfd6-46d5-bd23-30689addcd8b" containerID="da55e361a3ec7dca7a00a9ccac8e72cf43be0ebb8973fcac40e3bf86dac5237e" exitCode=143 Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.727289 4772 generic.go:334] "Generic (PLEG): container finished" podID="fd84e05e-cfd6-46d5-bd23-30689addcd8b" containerID="bc7ab7764e2d893152e965392c916dff7876acb81d134cdec2615b253dbee64e" exitCode=143 Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.727324 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" event={"ID":"fd84e05e-cfd6-46d5-bd23-30689addcd8b","Type":"ContainerDied","Data":"b357f837243332636515052b5f466b88738ee0abcf1012fcab2f395262022b16"} Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.727375 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" event={"ID":"fd84e05e-cfd6-46d5-bd23-30689addcd8b","Type":"ContainerDied","Data":"4ac96e66a7d3d7899a8196d07614bccdd741145d5dc57c26b30fc4e127c6a94e"} Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.727387 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" event={"ID":"fd84e05e-cfd6-46d5-bd23-30689addcd8b","Type":"ContainerDied","Data":"7959ee06497c972cc6915e3ab426dc2bba73e58343902084895fcca45fc97a61"} Nov 22 10:51:49 crc 
kubenswrapper[4772]: I1122 10:51:49.727398 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" event={"ID":"fd84e05e-cfd6-46d5-bd23-30689addcd8b","Type":"ContainerDied","Data":"34dfec56a475b9928ac7803e64d6fd305028a7f8d66647c8d06eeaa12cb73838"} Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.727407 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" event={"ID":"fd84e05e-cfd6-46d5-bd23-30689addcd8b","Type":"ContainerDied","Data":"3372362ced27c1e5e6ed3271a6b0fd19c04ce1d12a6c0b45e60b4f19b9d9d9e4"} Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.727418 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" event={"ID":"fd84e05e-cfd6-46d5-bd23-30689addcd8b","Type":"ContainerDied","Data":"405215cb0867af5a98dd2c0948c12867783df5de69f3e68c0c420e94bd41fbb9"} Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.727433 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4b3f10af3e6e7fe8f38c9d6363a62d02d35e3cd0e0365a3028acfd58dc287565"} Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.727446 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4ac96e66a7d3d7899a8196d07614bccdd741145d5dc57c26b30fc4e127c6a94e"} Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.727452 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7959ee06497c972cc6915e3ab426dc2bba73e58343902084895fcca45fc97a61"} Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.727458 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"34dfec56a475b9928ac7803e64d6fd305028a7f8d66647c8d06eeaa12cb73838"} Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.727463 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3372362ced27c1e5e6ed3271a6b0fd19c04ce1d12a6c0b45e60b4f19b9d9d9e4"} Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.727468 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"405215cb0867af5a98dd2c0948c12867783df5de69f3e68c0c420e94bd41fbb9"} Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.727475 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"da55e361a3ec7dca7a00a9ccac8e72cf43be0ebb8973fcac40e3bf86dac5237e"} Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.727408 4772 scope.go:117] "RemoveContainer" containerID="b357f837243332636515052b5f466b88738ee0abcf1012fcab2f395262022b16" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.727480 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bc7ab7764e2d893152e965392c916dff7876acb81d134cdec2615b253dbee64e"} Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.727570 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235"} Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.727599 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" 
event={"ID":"fd84e05e-cfd6-46d5-bd23-30689addcd8b","Type":"ContainerDied","Data":"da55e361a3ec7dca7a00a9ccac8e72cf43be0ebb8973fcac40e3bf86dac5237e"} Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.727622 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b357f837243332636515052b5f466b88738ee0abcf1012fcab2f395262022b16"} Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.727627 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4b3f10af3e6e7fe8f38c9d6363a62d02d35e3cd0e0365a3028acfd58dc287565"} Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.727633 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4ac96e66a7d3d7899a8196d07614bccdd741145d5dc57c26b30fc4e127c6a94e"} Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.727639 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7959ee06497c972cc6915e3ab426dc2bba73e58343902084895fcca45fc97a61"} Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.727644 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"34dfec56a475b9928ac7803e64d6fd305028a7f8d66647c8d06eeaa12cb73838"} Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.727650 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3372362ced27c1e5e6ed3271a6b0fd19c04ce1d12a6c0b45e60b4f19b9d9d9e4"} Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.727655 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"405215cb0867af5a98dd2c0948c12867783df5de69f3e68c0c420e94bd41fbb9"} Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.727661 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"da55e361a3ec7dca7a00a9ccac8e72cf43be0ebb8973fcac40e3bf86dac5237e"} Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.727666 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bc7ab7764e2d893152e965392c916dff7876acb81d134cdec2615b253dbee64e"} Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.727672 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235"} Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.727679 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" event={"ID":"fd84e05e-cfd6-46d5-bd23-30689addcd8b","Type":"ContainerDied","Data":"bc7ab7764e2d893152e965392c916dff7876acb81d134cdec2615b253dbee64e"} Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.727687 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b357f837243332636515052b5f466b88738ee0abcf1012fcab2f395262022b16"} Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.727694 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4b3f10af3e6e7fe8f38c9d6363a62d02d35e3cd0e0365a3028acfd58dc287565"} Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.727700 4772 pod_container_deletor.go:114] 
"Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4ac96e66a7d3d7899a8196d07614bccdd741145d5dc57c26b30fc4e127c6a94e"} Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.727705 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7959ee06497c972cc6915e3ab426dc2bba73e58343902084895fcca45fc97a61"} Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.727710 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"34dfec56a475b9928ac7803e64d6fd305028a7f8d66647c8d06eeaa12cb73838"} Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.727715 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3372362ced27c1e5e6ed3271a6b0fd19c04ce1d12a6c0b45e60b4f19b9d9d9e4"} Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.727720 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"405215cb0867af5a98dd2c0948c12867783df5de69f3e68c0c420e94bd41fbb9"} Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.727726 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"da55e361a3ec7dca7a00a9ccac8e72cf43be0ebb8973fcac40e3bf86dac5237e"} Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.727731 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bc7ab7764e2d893152e965392c916dff7876acb81d134cdec2615b253dbee64e"} Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.727736 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235"} Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.727742 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" event={"ID":"fd84e05e-cfd6-46d5-bd23-30689addcd8b","Type":"ContainerDied","Data":"7ecacca8dfa050f7f5ce4ca482cc4296e7f468f52bf670580b43811abdaada36"} Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.727750 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b357f837243332636515052b5f466b88738ee0abcf1012fcab2f395262022b16"} Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.727757 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4b3f10af3e6e7fe8f38c9d6363a62d02d35e3cd0e0365a3028acfd58dc287565"} Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.727762 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4ac96e66a7d3d7899a8196d07614bccdd741145d5dc57c26b30fc4e127c6a94e"} Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.727767 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7959ee06497c972cc6915e3ab426dc2bba73e58343902084895fcca45fc97a61"} Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.727772 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"34dfec56a475b9928ac7803e64d6fd305028a7f8d66647c8d06eeaa12cb73838"} Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.727777 4772 pod_container_deletor.go:114] 
"Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3372362ced27c1e5e6ed3271a6b0fd19c04ce1d12a6c0b45e60b4f19b9d9d9e4"} Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.727782 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"405215cb0867af5a98dd2c0948c12867783df5de69f3e68c0c420e94bd41fbb9"} Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.727787 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"da55e361a3ec7dca7a00a9ccac8e72cf43be0ebb8973fcac40e3bf86dac5237e"} Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.727792 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bc7ab7764e2d893152e965392c916dff7876acb81d134cdec2615b253dbee64e"} Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.727797 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235"} Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.728137 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mfm49" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.730329 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-s4mvm_d73fd58d-561a-4b16-9f9d-49ae966edb24/kube-multus/2.log" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.731298 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-s4mvm_d73fd58d-561a-4b16-9f9d-49ae966edb24/kube-multus/1.log" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.731334 4772 generic.go:334] "Generic (PLEG): container finished" podID="d73fd58d-561a-4b16-9f9d-49ae966edb24" containerID="dbaa283849426ce5e4b86e9417fa7dfe167d36ad15ac0db6dec5765c3414bc11" exitCode=2 Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.731353 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-s4mvm" event={"ID":"d73fd58d-561a-4b16-9f9d-49ae966edb24","Type":"ContainerDied","Data":"dbaa283849426ce5e4b86e9417fa7dfe167d36ad15ac0db6dec5765c3414bc11"} Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.731367 4772 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3325c0083f66336c643bd867f49df279967efccdbeadef4c588e1e43fbe1c13c"} Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.731764 4772 scope.go:117] "RemoveContainer" containerID="dbaa283849426ce5e4b86e9417fa7dfe167d36ad15ac0db6dec5765c3414bc11" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.770660 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-mfm49"] Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.772152 4772 scope.go:117] "RemoveContainer" containerID="4b3f10af3e6e7fe8f38c9d6363a62d02d35e3cd0e0365a3028acfd58dc287565" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.773588 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-mfm49"] Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.787194 4772 scope.go:117] "RemoveContainer" containerID="4ac96e66a7d3d7899a8196d07614bccdd741145d5dc57c26b30fc4e127c6a94e" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.803017 4772 scope.go:117] "RemoveContainer" 
containerID="7959ee06497c972cc6915e3ab426dc2bba73e58343902084895fcca45fc97a61" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.810484 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8912075a-c61b-4fa3-ac39-4782b79289f5-host-slash\") pod \"ovnkube-node-prstc\" (UID: \"8912075a-c61b-4fa3-ac39-4782b79289f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-prstc" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.810520 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8912075a-c61b-4fa3-ac39-4782b79289f5-log-socket\") pod \"ovnkube-node-prstc\" (UID: \"8912075a-c61b-4fa3-ac39-4782b79289f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-prstc" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.810547 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8912075a-c61b-4fa3-ac39-4782b79289f5-env-overrides\") pod \"ovnkube-node-prstc\" (UID: \"8912075a-c61b-4fa3-ac39-4782b79289f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-prstc" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.810565 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8912075a-c61b-4fa3-ac39-4782b79289f5-run-openvswitch\") pod \"ovnkube-node-prstc\" (UID: \"8912075a-c61b-4fa3-ac39-4782b79289f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-prstc" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.810695 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8912075a-c61b-4fa3-ac39-4782b79289f5-run-systemd\") pod \"ovnkube-node-prstc\" (UID: \"8912075a-c61b-4fa3-ac39-4782b79289f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-prstc" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.810776 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8912075a-c61b-4fa3-ac39-4782b79289f5-var-lib-openvswitch\") pod \"ovnkube-node-prstc\" (UID: \"8912075a-c61b-4fa3-ac39-4782b79289f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-prstc" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.810854 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8912075a-c61b-4fa3-ac39-4782b79289f5-ovnkube-script-lib\") pod \"ovnkube-node-prstc\" (UID: \"8912075a-c61b-4fa3-ac39-4782b79289f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-prstc" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.810898 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8912075a-c61b-4fa3-ac39-4782b79289f5-run-ovn\") pod \"ovnkube-node-prstc\" (UID: \"8912075a-c61b-4fa3-ac39-4782b79289f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-prstc" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.810927 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/8912075a-c61b-4fa3-ac39-4782b79289f5-ovn-node-metrics-cert\") pod \"ovnkube-node-prstc\" (UID: \"8912075a-c61b-4fa3-ac39-4782b79289f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-prstc" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.810989 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8912075a-c61b-4fa3-ac39-4782b79289f5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-prstc\" (UID: \"8912075a-c61b-4fa3-ac39-4782b79289f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-prstc" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.811018 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8912075a-c61b-4fa3-ac39-4782b79289f5-host-kubelet\") pod \"ovnkube-node-prstc\" (UID: \"8912075a-c61b-4fa3-ac39-4782b79289f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-prstc" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.811066 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8912075a-c61b-4fa3-ac39-4782b79289f5-systemd-units\") pod \"ovnkube-node-prstc\" (UID: \"8912075a-c61b-4fa3-ac39-4782b79289f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-prstc" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.811085 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k76kv\" (UniqueName: \"kubernetes.io/projected/8912075a-c61b-4fa3-ac39-4782b79289f5-kube-api-access-k76kv\") pod \"ovnkube-node-prstc\" (UID: \"8912075a-c61b-4fa3-ac39-4782b79289f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-prstc" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.811110 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8912075a-c61b-4fa3-ac39-4782b79289f5-etc-openvswitch\") pod \"ovnkube-node-prstc\" (UID: \"8912075a-c61b-4fa3-ac39-4782b79289f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-prstc" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.811125 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8912075a-c61b-4fa3-ac39-4782b79289f5-ovnkube-config\") pod \"ovnkube-node-prstc\" (UID: \"8912075a-c61b-4fa3-ac39-4782b79289f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-prstc" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.811141 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8912075a-c61b-4fa3-ac39-4782b79289f5-host-cni-bin\") pod \"ovnkube-node-prstc\" (UID: \"8912075a-c61b-4fa3-ac39-4782b79289f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-prstc" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.811159 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8912075a-c61b-4fa3-ac39-4782b79289f5-host-cni-netd\") pod \"ovnkube-node-prstc\" (UID: \"8912075a-c61b-4fa3-ac39-4782b79289f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-prstc" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.811177 
4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8912075a-c61b-4fa3-ac39-4782b79289f5-host-run-ovn-kubernetes\") pod \"ovnkube-node-prstc\" (UID: \"8912075a-c61b-4fa3-ac39-4782b79289f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-prstc" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.811191 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8912075a-c61b-4fa3-ac39-4782b79289f5-node-log\") pod \"ovnkube-node-prstc\" (UID: \"8912075a-c61b-4fa3-ac39-4782b79289f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-prstc" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.811210 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8912075a-c61b-4fa3-ac39-4782b79289f5-host-run-netns\") pod \"ovnkube-node-prstc\" (UID: \"8912075a-c61b-4fa3-ac39-4782b79289f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-prstc" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.811251 4772 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-systemd-units\") on node \"crc\" DevicePath \"\"" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.811262 4772 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-run-systemd\") on node \"crc\" DevicePath \"\"" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.811271 4772 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.811279 4772 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-log-socket\") on node \"crc\" DevicePath \"\"" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.811302 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nkcfw\" (UniqueName: \"kubernetes.io/projected/fd84e05e-cfd6-46d5-bd23-30689addcd8b-kube-api-access-nkcfw\") on node \"crc\" DevicePath \"\"" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.811313 4772 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.811321 4772 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fd84e05e-cfd6-46d5-bd23-30689addcd8b-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.811330 4772 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fd84e05e-cfd6-46d5-bd23-30689addcd8b-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.811338 4772 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-node-log\") on node \"crc\" DevicePath \"\"" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.811345 4772 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fd84e05e-cfd6-46d5-bd23-30689addcd8b-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.811354 4772 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.811364 4772 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-host-kubelet\") on node \"crc\" DevicePath \"\"" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.811372 4772 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/fd84e05e-cfd6-46d5-bd23-30689addcd8b-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.811380 4772 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-host-slash\") on node \"crc\" DevicePath \"\"" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.811388 4772 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-host-cni-bin\") on node \"crc\" DevicePath \"\"" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.811396 4772 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.811404 4772 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-host-run-netns\") on node \"crc\" DevicePath \"\"" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.811414 4772 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-run-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.811430 4772 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.811438 4772 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fd84e05e-cfd6-46d5-bd23-30689addcd8b-host-cni-netd\") on node \"crc\" DevicePath \"\"" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.833204 4772 scope.go:117] "RemoveContainer" containerID="34dfec56a475b9928ac7803e64d6fd305028a7f8d66647c8d06eeaa12cb73838" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.847378 4772 scope.go:117] "RemoveContainer" containerID="3372362ced27c1e5e6ed3271a6b0fd19c04ce1d12a6c0b45e60b4f19b9d9d9e4" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.860158 4772 scope.go:117] "RemoveContainer" 
containerID="405215cb0867af5a98dd2c0948c12867783df5de69f3e68c0c420e94bd41fbb9" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.874872 4772 scope.go:117] "RemoveContainer" containerID="da55e361a3ec7dca7a00a9ccac8e72cf43be0ebb8973fcac40e3bf86dac5237e" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.891713 4772 scope.go:117] "RemoveContainer" containerID="bc7ab7764e2d893152e965392c916dff7876acb81d134cdec2615b253dbee64e" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.903290 4772 scope.go:117] "RemoveContainer" containerID="560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.912208 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8912075a-c61b-4fa3-ac39-4782b79289f5-host-slash\") pod \"ovnkube-node-prstc\" (UID: \"8912075a-c61b-4fa3-ac39-4782b79289f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-prstc" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.912242 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8912075a-c61b-4fa3-ac39-4782b79289f5-log-socket\") pod \"ovnkube-node-prstc\" (UID: \"8912075a-c61b-4fa3-ac39-4782b79289f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-prstc" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.912267 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8912075a-c61b-4fa3-ac39-4782b79289f5-env-overrides\") pod \"ovnkube-node-prstc\" (UID: \"8912075a-c61b-4fa3-ac39-4782b79289f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-prstc" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.912342 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8912075a-c61b-4fa3-ac39-4782b79289f5-run-openvswitch\") pod \"ovnkube-node-prstc\" (UID: \"8912075a-c61b-4fa3-ac39-4782b79289f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-prstc" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.912355 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8912075a-c61b-4fa3-ac39-4782b79289f5-log-socket\") pod \"ovnkube-node-prstc\" (UID: \"8912075a-c61b-4fa3-ac39-4782b79289f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-prstc" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.912375 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8912075a-c61b-4fa3-ac39-4782b79289f5-host-slash\") pod \"ovnkube-node-prstc\" (UID: \"8912075a-c61b-4fa3-ac39-4782b79289f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-prstc" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.912401 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8912075a-c61b-4fa3-ac39-4782b79289f5-run-systemd\") pod \"ovnkube-node-prstc\" (UID: \"8912075a-c61b-4fa3-ac39-4782b79289f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-prstc" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.912366 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8912075a-c61b-4fa3-ac39-4782b79289f5-run-systemd\") pod \"ovnkube-node-prstc\" (UID: 
\"8912075a-c61b-4fa3-ac39-4782b79289f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-prstc" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.912503 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8912075a-c61b-4fa3-ac39-4782b79289f5-run-openvswitch\") pod \"ovnkube-node-prstc\" (UID: \"8912075a-c61b-4fa3-ac39-4782b79289f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-prstc" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.912540 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8912075a-c61b-4fa3-ac39-4782b79289f5-var-lib-openvswitch\") pod \"ovnkube-node-prstc\" (UID: \"8912075a-c61b-4fa3-ac39-4782b79289f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-prstc" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.912573 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8912075a-c61b-4fa3-ac39-4782b79289f5-ovnkube-script-lib\") pod \"ovnkube-node-prstc\" (UID: \"8912075a-c61b-4fa3-ac39-4782b79289f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-prstc" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.912590 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8912075a-c61b-4fa3-ac39-4782b79289f5-run-ovn\") pod \"ovnkube-node-prstc\" (UID: \"8912075a-c61b-4fa3-ac39-4782b79289f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-prstc" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.912609 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8912075a-c61b-4fa3-ac39-4782b79289f5-var-lib-openvswitch\") pod \"ovnkube-node-prstc\" (UID: \"8912075a-c61b-4fa3-ac39-4782b79289f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-prstc" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.912613 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8912075a-c61b-4fa3-ac39-4782b79289f5-ovn-node-metrics-cert\") pod \"ovnkube-node-prstc\" (UID: \"8912075a-c61b-4fa3-ac39-4782b79289f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-prstc" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.912647 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8912075a-c61b-4fa3-ac39-4782b79289f5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-prstc\" (UID: \"8912075a-c61b-4fa3-ac39-4782b79289f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-prstc" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.912668 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8912075a-c61b-4fa3-ac39-4782b79289f5-host-kubelet\") pod \"ovnkube-node-prstc\" (UID: \"8912075a-c61b-4fa3-ac39-4782b79289f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-prstc" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.912686 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8912075a-c61b-4fa3-ac39-4782b79289f5-systemd-units\") pod \"ovnkube-node-prstc\" (UID: \"8912075a-c61b-4fa3-ac39-4782b79289f5\") 
" pod="openshift-ovn-kubernetes/ovnkube-node-prstc" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.912700 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k76kv\" (UniqueName: \"kubernetes.io/projected/8912075a-c61b-4fa3-ac39-4782b79289f5-kube-api-access-k76kv\") pod \"ovnkube-node-prstc\" (UID: \"8912075a-c61b-4fa3-ac39-4782b79289f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-prstc" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.912736 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8912075a-c61b-4fa3-ac39-4782b79289f5-etc-openvswitch\") pod \"ovnkube-node-prstc\" (UID: \"8912075a-c61b-4fa3-ac39-4782b79289f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-prstc" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.912752 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8912075a-c61b-4fa3-ac39-4782b79289f5-host-cni-bin\") pod \"ovnkube-node-prstc\" (UID: \"8912075a-c61b-4fa3-ac39-4782b79289f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-prstc" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.912765 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8912075a-c61b-4fa3-ac39-4782b79289f5-ovnkube-config\") pod \"ovnkube-node-prstc\" (UID: \"8912075a-c61b-4fa3-ac39-4782b79289f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-prstc" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.912782 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8912075a-c61b-4fa3-ac39-4782b79289f5-host-cni-netd\") pod \"ovnkube-node-prstc\" (UID: \"8912075a-c61b-4fa3-ac39-4782b79289f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-prstc" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.912802 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8912075a-c61b-4fa3-ac39-4782b79289f5-host-run-ovn-kubernetes\") pod \"ovnkube-node-prstc\" (UID: \"8912075a-c61b-4fa3-ac39-4782b79289f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-prstc" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.912819 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8912075a-c61b-4fa3-ac39-4782b79289f5-node-log\") pod \"ovnkube-node-prstc\" (UID: \"8912075a-c61b-4fa3-ac39-4782b79289f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-prstc" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.912838 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8912075a-c61b-4fa3-ac39-4782b79289f5-host-run-netns\") pod \"ovnkube-node-prstc\" (UID: \"8912075a-c61b-4fa3-ac39-4782b79289f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-prstc" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.912887 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8912075a-c61b-4fa3-ac39-4782b79289f5-host-run-netns\") pod \"ovnkube-node-prstc\" (UID: \"8912075a-c61b-4fa3-ac39-4782b79289f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-prstc" Nov 22 10:51:49 crc 
kubenswrapper[4772]: I1122 10:51:49.912909 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8912075a-c61b-4fa3-ac39-4782b79289f5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-prstc\" (UID: \"8912075a-c61b-4fa3-ac39-4782b79289f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-prstc" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.912930 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8912075a-c61b-4fa3-ac39-4782b79289f5-host-kubelet\") pod \"ovnkube-node-prstc\" (UID: \"8912075a-c61b-4fa3-ac39-4782b79289f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-prstc" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.912951 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8912075a-c61b-4fa3-ac39-4782b79289f5-systemd-units\") pod \"ovnkube-node-prstc\" (UID: \"8912075a-c61b-4fa3-ac39-4782b79289f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-prstc" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.913008 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8912075a-c61b-4fa3-ac39-4782b79289f5-run-ovn\") pod \"ovnkube-node-prstc\" (UID: \"8912075a-c61b-4fa3-ac39-4782b79289f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-prstc" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.913073 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8912075a-c61b-4fa3-ac39-4782b79289f5-host-cni-bin\") pod \"ovnkube-node-prstc\" (UID: \"8912075a-c61b-4fa3-ac39-4782b79289f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-prstc" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.913161 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8912075a-c61b-4fa3-ac39-4782b79289f5-env-overrides\") pod \"ovnkube-node-prstc\" (UID: \"8912075a-c61b-4fa3-ac39-4782b79289f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-prstc" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.913301 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8912075a-c61b-4fa3-ac39-4782b79289f5-etc-openvswitch\") pod \"ovnkube-node-prstc\" (UID: \"8912075a-c61b-4fa3-ac39-4782b79289f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-prstc" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.913346 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8912075a-c61b-4fa3-ac39-4782b79289f5-host-run-ovn-kubernetes\") pod \"ovnkube-node-prstc\" (UID: \"8912075a-c61b-4fa3-ac39-4782b79289f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-prstc" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.913371 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8912075a-c61b-4fa3-ac39-4782b79289f5-node-log\") pod \"ovnkube-node-prstc\" (UID: \"8912075a-c61b-4fa3-ac39-4782b79289f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-prstc" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.913382 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" 
(UniqueName: \"kubernetes.io/host-path/8912075a-c61b-4fa3-ac39-4782b79289f5-host-cni-netd\") pod \"ovnkube-node-prstc\" (UID: \"8912075a-c61b-4fa3-ac39-4782b79289f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-prstc" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.913699 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8912075a-c61b-4fa3-ac39-4782b79289f5-ovnkube-script-lib\") pod \"ovnkube-node-prstc\" (UID: \"8912075a-c61b-4fa3-ac39-4782b79289f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-prstc" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.913751 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8912075a-c61b-4fa3-ac39-4782b79289f5-ovnkube-config\") pod \"ovnkube-node-prstc\" (UID: \"8912075a-c61b-4fa3-ac39-4782b79289f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-prstc" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.915421 4772 scope.go:117] "RemoveContainer" containerID="b357f837243332636515052b5f466b88738ee0abcf1012fcab2f395262022b16" Nov 22 10:51:49 crc kubenswrapper[4772]: E1122 10:51:49.920152 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b357f837243332636515052b5f466b88738ee0abcf1012fcab2f395262022b16\": container with ID starting with b357f837243332636515052b5f466b88738ee0abcf1012fcab2f395262022b16 not found: ID does not exist" containerID="b357f837243332636515052b5f466b88738ee0abcf1012fcab2f395262022b16" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.920189 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b357f837243332636515052b5f466b88738ee0abcf1012fcab2f395262022b16"} err="failed to get container status \"b357f837243332636515052b5f466b88738ee0abcf1012fcab2f395262022b16\": rpc error: code = NotFound desc = could not find container \"b357f837243332636515052b5f466b88738ee0abcf1012fcab2f395262022b16\": container with ID starting with b357f837243332636515052b5f466b88738ee0abcf1012fcab2f395262022b16 not found: ID does not exist" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.920214 4772 scope.go:117] "RemoveContainer" containerID="4b3f10af3e6e7fe8f38c9d6363a62d02d35e3cd0e0365a3028acfd58dc287565" Nov 22 10:51:49 crc kubenswrapper[4772]: E1122 10:51:49.920457 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b3f10af3e6e7fe8f38c9d6363a62d02d35e3cd0e0365a3028acfd58dc287565\": container with ID starting with 4b3f10af3e6e7fe8f38c9d6363a62d02d35e3cd0e0365a3028acfd58dc287565 not found: ID does not exist" containerID="4b3f10af3e6e7fe8f38c9d6363a62d02d35e3cd0e0365a3028acfd58dc287565" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.920471 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8912075a-c61b-4fa3-ac39-4782b79289f5-ovn-node-metrics-cert\") pod \"ovnkube-node-prstc\" (UID: \"8912075a-c61b-4fa3-ac39-4782b79289f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-prstc" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.920479 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b3f10af3e6e7fe8f38c9d6363a62d02d35e3cd0e0365a3028acfd58dc287565"} err="failed to get container status 
\"4b3f10af3e6e7fe8f38c9d6363a62d02d35e3cd0e0365a3028acfd58dc287565\": rpc error: code = NotFound desc = could not find container \"4b3f10af3e6e7fe8f38c9d6363a62d02d35e3cd0e0365a3028acfd58dc287565\": container with ID starting with 4b3f10af3e6e7fe8f38c9d6363a62d02d35e3cd0e0365a3028acfd58dc287565 not found: ID does not exist" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.920494 4772 scope.go:117] "RemoveContainer" containerID="4ac96e66a7d3d7899a8196d07614bccdd741145d5dc57c26b30fc4e127c6a94e" Nov 22 10:51:49 crc kubenswrapper[4772]: E1122 10:51:49.920710 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ac96e66a7d3d7899a8196d07614bccdd741145d5dc57c26b30fc4e127c6a94e\": container with ID starting with 4ac96e66a7d3d7899a8196d07614bccdd741145d5dc57c26b30fc4e127c6a94e not found: ID does not exist" containerID="4ac96e66a7d3d7899a8196d07614bccdd741145d5dc57c26b30fc4e127c6a94e" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.920737 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ac96e66a7d3d7899a8196d07614bccdd741145d5dc57c26b30fc4e127c6a94e"} err="failed to get container status \"4ac96e66a7d3d7899a8196d07614bccdd741145d5dc57c26b30fc4e127c6a94e\": rpc error: code = NotFound desc = could not find container \"4ac96e66a7d3d7899a8196d07614bccdd741145d5dc57c26b30fc4e127c6a94e\": container with ID starting with 4ac96e66a7d3d7899a8196d07614bccdd741145d5dc57c26b30fc4e127c6a94e not found: ID does not exist" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.920750 4772 scope.go:117] "RemoveContainer" containerID="7959ee06497c972cc6915e3ab426dc2bba73e58343902084895fcca45fc97a61" Nov 22 10:51:49 crc kubenswrapper[4772]: E1122 10:51:49.921038 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7959ee06497c972cc6915e3ab426dc2bba73e58343902084895fcca45fc97a61\": container with ID starting with 7959ee06497c972cc6915e3ab426dc2bba73e58343902084895fcca45fc97a61 not found: ID does not exist" containerID="7959ee06497c972cc6915e3ab426dc2bba73e58343902084895fcca45fc97a61" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.921083 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7959ee06497c972cc6915e3ab426dc2bba73e58343902084895fcca45fc97a61"} err="failed to get container status \"7959ee06497c972cc6915e3ab426dc2bba73e58343902084895fcca45fc97a61\": rpc error: code = NotFound desc = could not find container \"7959ee06497c972cc6915e3ab426dc2bba73e58343902084895fcca45fc97a61\": container with ID starting with 7959ee06497c972cc6915e3ab426dc2bba73e58343902084895fcca45fc97a61 not found: ID does not exist" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.921111 4772 scope.go:117] "RemoveContainer" containerID="34dfec56a475b9928ac7803e64d6fd305028a7f8d66647c8d06eeaa12cb73838" Nov 22 10:51:49 crc kubenswrapper[4772]: E1122 10:51:49.921454 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34dfec56a475b9928ac7803e64d6fd305028a7f8d66647c8d06eeaa12cb73838\": container with ID starting with 34dfec56a475b9928ac7803e64d6fd305028a7f8d66647c8d06eeaa12cb73838 not found: ID does not exist" containerID="34dfec56a475b9928ac7803e64d6fd305028a7f8d66647c8d06eeaa12cb73838" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.921503 4772 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34dfec56a475b9928ac7803e64d6fd305028a7f8d66647c8d06eeaa12cb73838"} err="failed to get container status \"34dfec56a475b9928ac7803e64d6fd305028a7f8d66647c8d06eeaa12cb73838\": rpc error: code = NotFound desc = could not find container \"34dfec56a475b9928ac7803e64d6fd305028a7f8d66647c8d06eeaa12cb73838\": container with ID starting with 34dfec56a475b9928ac7803e64d6fd305028a7f8d66647c8d06eeaa12cb73838 not found: ID does not exist" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.921523 4772 scope.go:117] "RemoveContainer" containerID="3372362ced27c1e5e6ed3271a6b0fd19c04ce1d12a6c0b45e60b4f19b9d9d9e4" Nov 22 10:51:49 crc kubenswrapper[4772]: E1122 10:51:49.921852 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3372362ced27c1e5e6ed3271a6b0fd19c04ce1d12a6c0b45e60b4f19b9d9d9e4\": container with ID starting with 3372362ced27c1e5e6ed3271a6b0fd19c04ce1d12a6c0b45e60b4f19b9d9d9e4 not found: ID does not exist" containerID="3372362ced27c1e5e6ed3271a6b0fd19c04ce1d12a6c0b45e60b4f19b9d9d9e4" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.921874 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3372362ced27c1e5e6ed3271a6b0fd19c04ce1d12a6c0b45e60b4f19b9d9d9e4"} err="failed to get container status \"3372362ced27c1e5e6ed3271a6b0fd19c04ce1d12a6c0b45e60b4f19b9d9d9e4\": rpc error: code = NotFound desc = could not find container \"3372362ced27c1e5e6ed3271a6b0fd19c04ce1d12a6c0b45e60b4f19b9d9d9e4\": container with ID starting with 3372362ced27c1e5e6ed3271a6b0fd19c04ce1d12a6c0b45e60b4f19b9d9d9e4 not found: ID does not exist" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.921888 4772 scope.go:117] "RemoveContainer" containerID="405215cb0867af5a98dd2c0948c12867783df5de69f3e68c0c420e94bd41fbb9" Nov 22 10:51:49 crc kubenswrapper[4772]: E1122 10:51:49.922234 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"405215cb0867af5a98dd2c0948c12867783df5de69f3e68c0c420e94bd41fbb9\": container with ID starting with 405215cb0867af5a98dd2c0948c12867783df5de69f3e68c0c420e94bd41fbb9 not found: ID does not exist" containerID="405215cb0867af5a98dd2c0948c12867783df5de69f3e68c0c420e94bd41fbb9" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.922257 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"405215cb0867af5a98dd2c0948c12867783df5de69f3e68c0c420e94bd41fbb9"} err="failed to get container status \"405215cb0867af5a98dd2c0948c12867783df5de69f3e68c0c420e94bd41fbb9\": rpc error: code = NotFound desc = could not find container \"405215cb0867af5a98dd2c0948c12867783df5de69f3e68c0c420e94bd41fbb9\": container with ID starting with 405215cb0867af5a98dd2c0948c12867783df5de69f3e68c0c420e94bd41fbb9 not found: ID does not exist" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.922272 4772 scope.go:117] "RemoveContainer" containerID="da55e361a3ec7dca7a00a9ccac8e72cf43be0ebb8973fcac40e3bf86dac5237e" Nov 22 10:51:49 crc kubenswrapper[4772]: E1122 10:51:49.922616 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da55e361a3ec7dca7a00a9ccac8e72cf43be0ebb8973fcac40e3bf86dac5237e\": container with ID starting with da55e361a3ec7dca7a00a9ccac8e72cf43be0ebb8973fcac40e3bf86dac5237e not found: ID does not exist" 
containerID="da55e361a3ec7dca7a00a9ccac8e72cf43be0ebb8973fcac40e3bf86dac5237e" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.922639 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da55e361a3ec7dca7a00a9ccac8e72cf43be0ebb8973fcac40e3bf86dac5237e"} err="failed to get container status \"da55e361a3ec7dca7a00a9ccac8e72cf43be0ebb8973fcac40e3bf86dac5237e\": rpc error: code = NotFound desc = could not find container \"da55e361a3ec7dca7a00a9ccac8e72cf43be0ebb8973fcac40e3bf86dac5237e\": container with ID starting with da55e361a3ec7dca7a00a9ccac8e72cf43be0ebb8973fcac40e3bf86dac5237e not found: ID does not exist" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.922653 4772 scope.go:117] "RemoveContainer" containerID="bc7ab7764e2d893152e965392c916dff7876acb81d134cdec2615b253dbee64e" Nov 22 10:51:49 crc kubenswrapper[4772]: E1122 10:51:49.922863 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc7ab7764e2d893152e965392c916dff7876acb81d134cdec2615b253dbee64e\": container with ID starting with bc7ab7764e2d893152e965392c916dff7876acb81d134cdec2615b253dbee64e not found: ID does not exist" containerID="bc7ab7764e2d893152e965392c916dff7876acb81d134cdec2615b253dbee64e" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.922888 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc7ab7764e2d893152e965392c916dff7876acb81d134cdec2615b253dbee64e"} err="failed to get container status \"bc7ab7764e2d893152e965392c916dff7876acb81d134cdec2615b253dbee64e\": rpc error: code = NotFound desc = could not find container \"bc7ab7764e2d893152e965392c916dff7876acb81d134cdec2615b253dbee64e\": container with ID starting with bc7ab7764e2d893152e965392c916dff7876acb81d134cdec2615b253dbee64e not found: ID does not exist" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.922903 4772 scope.go:117] "RemoveContainer" containerID="560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235" Nov 22 10:51:49 crc kubenswrapper[4772]: E1122 10:51:49.923135 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\": container with ID starting with 560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235 not found: ID does not exist" containerID="560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.923156 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235"} err="failed to get container status \"560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\": rpc error: code = NotFound desc = could not find container \"560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\": container with ID starting with 560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235 not found: ID does not exist" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.923169 4772 scope.go:117] "RemoveContainer" containerID="b357f837243332636515052b5f466b88738ee0abcf1012fcab2f395262022b16" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.923378 4772 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b357f837243332636515052b5f466b88738ee0abcf1012fcab2f395262022b16"} err="failed to get container status \"b357f837243332636515052b5f466b88738ee0abcf1012fcab2f395262022b16\": rpc error: code = NotFound desc = could not find container \"b357f837243332636515052b5f466b88738ee0abcf1012fcab2f395262022b16\": container with ID starting with b357f837243332636515052b5f466b88738ee0abcf1012fcab2f395262022b16 not found: ID does not exist" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.923397 4772 scope.go:117] "RemoveContainer" containerID="4b3f10af3e6e7fe8f38c9d6363a62d02d35e3cd0e0365a3028acfd58dc287565" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.923646 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b3f10af3e6e7fe8f38c9d6363a62d02d35e3cd0e0365a3028acfd58dc287565"} err="failed to get container status \"4b3f10af3e6e7fe8f38c9d6363a62d02d35e3cd0e0365a3028acfd58dc287565\": rpc error: code = NotFound desc = could not find container \"4b3f10af3e6e7fe8f38c9d6363a62d02d35e3cd0e0365a3028acfd58dc287565\": container with ID starting with 4b3f10af3e6e7fe8f38c9d6363a62d02d35e3cd0e0365a3028acfd58dc287565 not found: ID does not exist" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.923663 4772 scope.go:117] "RemoveContainer" containerID="4ac96e66a7d3d7899a8196d07614bccdd741145d5dc57c26b30fc4e127c6a94e" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.923838 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ac96e66a7d3d7899a8196d07614bccdd741145d5dc57c26b30fc4e127c6a94e"} err="failed to get container status \"4ac96e66a7d3d7899a8196d07614bccdd741145d5dc57c26b30fc4e127c6a94e\": rpc error: code = NotFound desc = could not find container \"4ac96e66a7d3d7899a8196d07614bccdd741145d5dc57c26b30fc4e127c6a94e\": container with ID starting with 4ac96e66a7d3d7899a8196d07614bccdd741145d5dc57c26b30fc4e127c6a94e not found: ID does not exist" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.923855 4772 scope.go:117] "RemoveContainer" containerID="7959ee06497c972cc6915e3ab426dc2bba73e58343902084895fcca45fc97a61" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.924078 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7959ee06497c972cc6915e3ab426dc2bba73e58343902084895fcca45fc97a61"} err="failed to get container status \"7959ee06497c972cc6915e3ab426dc2bba73e58343902084895fcca45fc97a61\": rpc error: code = NotFound desc = could not find container \"7959ee06497c972cc6915e3ab426dc2bba73e58343902084895fcca45fc97a61\": container with ID starting with 7959ee06497c972cc6915e3ab426dc2bba73e58343902084895fcca45fc97a61 not found: ID does not exist" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.924099 4772 scope.go:117] "RemoveContainer" containerID="34dfec56a475b9928ac7803e64d6fd305028a7f8d66647c8d06eeaa12cb73838" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.924282 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34dfec56a475b9928ac7803e64d6fd305028a7f8d66647c8d06eeaa12cb73838"} err="failed to get container status \"34dfec56a475b9928ac7803e64d6fd305028a7f8d66647c8d06eeaa12cb73838\": rpc error: code = NotFound desc = could not find container \"34dfec56a475b9928ac7803e64d6fd305028a7f8d66647c8d06eeaa12cb73838\": container with ID starting with 34dfec56a475b9928ac7803e64d6fd305028a7f8d66647c8d06eeaa12cb73838 not found: ID does not exist" Nov 
22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.924299 4772 scope.go:117] "RemoveContainer" containerID="3372362ced27c1e5e6ed3271a6b0fd19c04ce1d12a6c0b45e60b4f19b9d9d9e4" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.924489 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3372362ced27c1e5e6ed3271a6b0fd19c04ce1d12a6c0b45e60b4f19b9d9d9e4"} err="failed to get container status \"3372362ced27c1e5e6ed3271a6b0fd19c04ce1d12a6c0b45e60b4f19b9d9d9e4\": rpc error: code = NotFound desc = could not find container \"3372362ced27c1e5e6ed3271a6b0fd19c04ce1d12a6c0b45e60b4f19b9d9d9e4\": container with ID starting with 3372362ced27c1e5e6ed3271a6b0fd19c04ce1d12a6c0b45e60b4f19b9d9d9e4 not found: ID does not exist" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.924508 4772 scope.go:117] "RemoveContainer" containerID="405215cb0867af5a98dd2c0948c12867783df5de69f3e68c0c420e94bd41fbb9" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.924729 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"405215cb0867af5a98dd2c0948c12867783df5de69f3e68c0c420e94bd41fbb9"} err="failed to get container status \"405215cb0867af5a98dd2c0948c12867783df5de69f3e68c0c420e94bd41fbb9\": rpc error: code = NotFound desc = could not find container \"405215cb0867af5a98dd2c0948c12867783df5de69f3e68c0c420e94bd41fbb9\": container with ID starting with 405215cb0867af5a98dd2c0948c12867783df5de69f3e68c0c420e94bd41fbb9 not found: ID does not exist" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.924747 4772 scope.go:117] "RemoveContainer" containerID="da55e361a3ec7dca7a00a9ccac8e72cf43be0ebb8973fcac40e3bf86dac5237e" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.924977 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da55e361a3ec7dca7a00a9ccac8e72cf43be0ebb8973fcac40e3bf86dac5237e"} err="failed to get container status \"da55e361a3ec7dca7a00a9ccac8e72cf43be0ebb8973fcac40e3bf86dac5237e\": rpc error: code = NotFound desc = could not find container \"da55e361a3ec7dca7a00a9ccac8e72cf43be0ebb8973fcac40e3bf86dac5237e\": container with ID starting with da55e361a3ec7dca7a00a9ccac8e72cf43be0ebb8973fcac40e3bf86dac5237e not found: ID does not exist" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.924998 4772 scope.go:117] "RemoveContainer" containerID="bc7ab7764e2d893152e965392c916dff7876acb81d134cdec2615b253dbee64e" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.925254 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc7ab7764e2d893152e965392c916dff7876acb81d134cdec2615b253dbee64e"} err="failed to get container status \"bc7ab7764e2d893152e965392c916dff7876acb81d134cdec2615b253dbee64e\": rpc error: code = NotFound desc = could not find container \"bc7ab7764e2d893152e965392c916dff7876acb81d134cdec2615b253dbee64e\": container with ID starting with bc7ab7764e2d893152e965392c916dff7876acb81d134cdec2615b253dbee64e not found: ID does not exist" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.925272 4772 scope.go:117] "RemoveContainer" containerID="560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.925491 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235"} err="failed to get container status 
\"560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\": rpc error: code = NotFound desc = could not find container \"560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\": container with ID starting with 560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235 not found: ID does not exist" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.925506 4772 scope.go:117] "RemoveContainer" containerID="b357f837243332636515052b5f466b88738ee0abcf1012fcab2f395262022b16" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.925733 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b357f837243332636515052b5f466b88738ee0abcf1012fcab2f395262022b16"} err="failed to get container status \"b357f837243332636515052b5f466b88738ee0abcf1012fcab2f395262022b16\": rpc error: code = NotFound desc = could not find container \"b357f837243332636515052b5f466b88738ee0abcf1012fcab2f395262022b16\": container with ID starting with b357f837243332636515052b5f466b88738ee0abcf1012fcab2f395262022b16 not found: ID does not exist" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.925752 4772 scope.go:117] "RemoveContainer" containerID="4b3f10af3e6e7fe8f38c9d6363a62d02d35e3cd0e0365a3028acfd58dc287565" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.925941 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b3f10af3e6e7fe8f38c9d6363a62d02d35e3cd0e0365a3028acfd58dc287565"} err="failed to get container status \"4b3f10af3e6e7fe8f38c9d6363a62d02d35e3cd0e0365a3028acfd58dc287565\": rpc error: code = NotFound desc = could not find container \"4b3f10af3e6e7fe8f38c9d6363a62d02d35e3cd0e0365a3028acfd58dc287565\": container with ID starting with 4b3f10af3e6e7fe8f38c9d6363a62d02d35e3cd0e0365a3028acfd58dc287565 not found: ID does not exist" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.925960 4772 scope.go:117] "RemoveContainer" containerID="4ac96e66a7d3d7899a8196d07614bccdd741145d5dc57c26b30fc4e127c6a94e" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.926191 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ac96e66a7d3d7899a8196d07614bccdd741145d5dc57c26b30fc4e127c6a94e"} err="failed to get container status \"4ac96e66a7d3d7899a8196d07614bccdd741145d5dc57c26b30fc4e127c6a94e\": rpc error: code = NotFound desc = could not find container \"4ac96e66a7d3d7899a8196d07614bccdd741145d5dc57c26b30fc4e127c6a94e\": container with ID starting with 4ac96e66a7d3d7899a8196d07614bccdd741145d5dc57c26b30fc4e127c6a94e not found: ID does not exist" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.926211 4772 scope.go:117] "RemoveContainer" containerID="7959ee06497c972cc6915e3ab426dc2bba73e58343902084895fcca45fc97a61" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.926446 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7959ee06497c972cc6915e3ab426dc2bba73e58343902084895fcca45fc97a61"} err="failed to get container status \"7959ee06497c972cc6915e3ab426dc2bba73e58343902084895fcca45fc97a61\": rpc error: code = NotFound desc = could not find container \"7959ee06497c972cc6915e3ab426dc2bba73e58343902084895fcca45fc97a61\": container with ID starting with 7959ee06497c972cc6915e3ab426dc2bba73e58343902084895fcca45fc97a61 not found: ID does not exist" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.926473 4772 scope.go:117] "RemoveContainer" 
containerID="34dfec56a475b9928ac7803e64d6fd305028a7f8d66647c8d06eeaa12cb73838" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.926670 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34dfec56a475b9928ac7803e64d6fd305028a7f8d66647c8d06eeaa12cb73838"} err="failed to get container status \"34dfec56a475b9928ac7803e64d6fd305028a7f8d66647c8d06eeaa12cb73838\": rpc error: code = NotFound desc = could not find container \"34dfec56a475b9928ac7803e64d6fd305028a7f8d66647c8d06eeaa12cb73838\": container with ID starting with 34dfec56a475b9928ac7803e64d6fd305028a7f8d66647c8d06eeaa12cb73838 not found: ID does not exist" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.926692 4772 scope.go:117] "RemoveContainer" containerID="3372362ced27c1e5e6ed3271a6b0fd19c04ce1d12a6c0b45e60b4f19b9d9d9e4" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.926897 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3372362ced27c1e5e6ed3271a6b0fd19c04ce1d12a6c0b45e60b4f19b9d9d9e4"} err="failed to get container status \"3372362ced27c1e5e6ed3271a6b0fd19c04ce1d12a6c0b45e60b4f19b9d9d9e4\": rpc error: code = NotFound desc = could not find container \"3372362ced27c1e5e6ed3271a6b0fd19c04ce1d12a6c0b45e60b4f19b9d9d9e4\": container with ID starting with 3372362ced27c1e5e6ed3271a6b0fd19c04ce1d12a6c0b45e60b4f19b9d9d9e4 not found: ID does not exist" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.926916 4772 scope.go:117] "RemoveContainer" containerID="405215cb0867af5a98dd2c0948c12867783df5de69f3e68c0c420e94bd41fbb9" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.927191 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"405215cb0867af5a98dd2c0948c12867783df5de69f3e68c0c420e94bd41fbb9"} err="failed to get container status \"405215cb0867af5a98dd2c0948c12867783df5de69f3e68c0c420e94bd41fbb9\": rpc error: code = NotFound desc = could not find container \"405215cb0867af5a98dd2c0948c12867783df5de69f3e68c0c420e94bd41fbb9\": container with ID starting with 405215cb0867af5a98dd2c0948c12867783df5de69f3e68c0c420e94bd41fbb9 not found: ID does not exist" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.927209 4772 scope.go:117] "RemoveContainer" containerID="da55e361a3ec7dca7a00a9ccac8e72cf43be0ebb8973fcac40e3bf86dac5237e" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.927404 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da55e361a3ec7dca7a00a9ccac8e72cf43be0ebb8973fcac40e3bf86dac5237e"} err="failed to get container status \"da55e361a3ec7dca7a00a9ccac8e72cf43be0ebb8973fcac40e3bf86dac5237e\": rpc error: code = NotFound desc = could not find container \"da55e361a3ec7dca7a00a9ccac8e72cf43be0ebb8973fcac40e3bf86dac5237e\": container with ID starting with da55e361a3ec7dca7a00a9ccac8e72cf43be0ebb8973fcac40e3bf86dac5237e not found: ID does not exist" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.927429 4772 scope.go:117] "RemoveContainer" containerID="bc7ab7764e2d893152e965392c916dff7876acb81d134cdec2615b253dbee64e" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.927661 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc7ab7764e2d893152e965392c916dff7876acb81d134cdec2615b253dbee64e"} err="failed to get container status \"bc7ab7764e2d893152e965392c916dff7876acb81d134cdec2615b253dbee64e\": rpc error: code = NotFound desc = could not find 
container \"bc7ab7764e2d893152e965392c916dff7876acb81d134cdec2615b253dbee64e\": container with ID starting with bc7ab7764e2d893152e965392c916dff7876acb81d134cdec2615b253dbee64e not found: ID does not exist" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.927688 4772 scope.go:117] "RemoveContainer" containerID="560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.927941 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235"} err="failed to get container status \"560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\": rpc error: code = NotFound desc = could not find container \"560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\": container with ID starting with 560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235 not found: ID does not exist" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.927955 4772 scope.go:117] "RemoveContainer" containerID="b357f837243332636515052b5f466b88738ee0abcf1012fcab2f395262022b16" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.927960 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k76kv\" (UniqueName: \"kubernetes.io/projected/8912075a-c61b-4fa3-ac39-4782b79289f5-kube-api-access-k76kv\") pod \"ovnkube-node-prstc\" (UID: \"8912075a-c61b-4fa3-ac39-4782b79289f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-prstc" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.928191 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b357f837243332636515052b5f466b88738ee0abcf1012fcab2f395262022b16"} err="failed to get container status \"b357f837243332636515052b5f466b88738ee0abcf1012fcab2f395262022b16\": rpc error: code = NotFound desc = could not find container \"b357f837243332636515052b5f466b88738ee0abcf1012fcab2f395262022b16\": container with ID starting with b357f837243332636515052b5f466b88738ee0abcf1012fcab2f395262022b16 not found: ID does not exist" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.928212 4772 scope.go:117] "RemoveContainer" containerID="4b3f10af3e6e7fe8f38c9d6363a62d02d35e3cd0e0365a3028acfd58dc287565" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.928407 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b3f10af3e6e7fe8f38c9d6363a62d02d35e3cd0e0365a3028acfd58dc287565"} err="failed to get container status \"4b3f10af3e6e7fe8f38c9d6363a62d02d35e3cd0e0365a3028acfd58dc287565\": rpc error: code = NotFound desc = could not find container \"4b3f10af3e6e7fe8f38c9d6363a62d02d35e3cd0e0365a3028acfd58dc287565\": container with ID starting with 4b3f10af3e6e7fe8f38c9d6363a62d02d35e3cd0e0365a3028acfd58dc287565 not found: ID does not exist" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.928423 4772 scope.go:117] "RemoveContainer" containerID="4ac96e66a7d3d7899a8196d07614bccdd741145d5dc57c26b30fc4e127c6a94e" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.928671 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ac96e66a7d3d7899a8196d07614bccdd741145d5dc57c26b30fc4e127c6a94e"} err="failed to get container status \"4ac96e66a7d3d7899a8196d07614bccdd741145d5dc57c26b30fc4e127c6a94e\": rpc error: code = NotFound desc = could not find container 
\"4ac96e66a7d3d7899a8196d07614bccdd741145d5dc57c26b30fc4e127c6a94e\": container with ID starting with 4ac96e66a7d3d7899a8196d07614bccdd741145d5dc57c26b30fc4e127c6a94e not found: ID does not exist" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.928688 4772 scope.go:117] "RemoveContainer" containerID="7959ee06497c972cc6915e3ab426dc2bba73e58343902084895fcca45fc97a61" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.928909 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7959ee06497c972cc6915e3ab426dc2bba73e58343902084895fcca45fc97a61"} err="failed to get container status \"7959ee06497c972cc6915e3ab426dc2bba73e58343902084895fcca45fc97a61\": rpc error: code = NotFound desc = could not find container \"7959ee06497c972cc6915e3ab426dc2bba73e58343902084895fcca45fc97a61\": container with ID starting with 7959ee06497c972cc6915e3ab426dc2bba73e58343902084895fcca45fc97a61 not found: ID does not exist" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.928936 4772 scope.go:117] "RemoveContainer" containerID="34dfec56a475b9928ac7803e64d6fd305028a7f8d66647c8d06eeaa12cb73838" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.929182 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34dfec56a475b9928ac7803e64d6fd305028a7f8d66647c8d06eeaa12cb73838"} err="failed to get container status \"34dfec56a475b9928ac7803e64d6fd305028a7f8d66647c8d06eeaa12cb73838\": rpc error: code = NotFound desc = could not find container \"34dfec56a475b9928ac7803e64d6fd305028a7f8d66647c8d06eeaa12cb73838\": container with ID starting with 34dfec56a475b9928ac7803e64d6fd305028a7f8d66647c8d06eeaa12cb73838 not found: ID does not exist" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.929209 4772 scope.go:117] "RemoveContainer" containerID="3372362ced27c1e5e6ed3271a6b0fd19c04ce1d12a6c0b45e60b4f19b9d9d9e4" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.929451 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3372362ced27c1e5e6ed3271a6b0fd19c04ce1d12a6c0b45e60b4f19b9d9d9e4"} err="failed to get container status \"3372362ced27c1e5e6ed3271a6b0fd19c04ce1d12a6c0b45e60b4f19b9d9d9e4\": rpc error: code = NotFound desc = could not find container \"3372362ced27c1e5e6ed3271a6b0fd19c04ce1d12a6c0b45e60b4f19b9d9d9e4\": container with ID starting with 3372362ced27c1e5e6ed3271a6b0fd19c04ce1d12a6c0b45e60b4f19b9d9d9e4 not found: ID does not exist" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.929473 4772 scope.go:117] "RemoveContainer" containerID="405215cb0867af5a98dd2c0948c12867783df5de69f3e68c0c420e94bd41fbb9" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.929696 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"405215cb0867af5a98dd2c0948c12867783df5de69f3e68c0c420e94bd41fbb9"} err="failed to get container status \"405215cb0867af5a98dd2c0948c12867783df5de69f3e68c0c420e94bd41fbb9\": rpc error: code = NotFound desc = could not find container \"405215cb0867af5a98dd2c0948c12867783df5de69f3e68c0c420e94bd41fbb9\": container with ID starting with 405215cb0867af5a98dd2c0948c12867783df5de69f3e68c0c420e94bd41fbb9 not found: ID does not exist" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.929715 4772 scope.go:117] "RemoveContainer" containerID="da55e361a3ec7dca7a00a9ccac8e72cf43be0ebb8973fcac40e3bf86dac5237e" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.929918 4772 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da55e361a3ec7dca7a00a9ccac8e72cf43be0ebb8973fcac40e3bf86dac5237e"} err="failed to get container status \"da55e361a3ec7dca7a00a9ccac8e72cf43be0ebb8973fcac40e3bf86dac5237e\": rpc error: code = NotFound desc = could not find container \"da55e361a3ec7dca7a00a9ccac8e72cf43be0ebb8973fcac40e3bf86dac5237e\": container with ID starting with da55e361a3ec7dca7a00a9ccac8e72cf43be0ebb8973fcac40e3bf86dac5237e not found: ID does not exist" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.929934 4772 scope.go:117] "RemoveContainer" containerID="bc7ab7764e2d893152e965392c916dff7876acb81d134cdec2615b253dbee64e" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.930213 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc7ab7764e2d893152e965392c916dff7876acb81d134cdec2615b253dbee64e"} err="failed to get container status \"bc7ab7764e2d893152e965392c916dff7876acb81d134cdec2615b253dbee64e\": rpc error: code = NotFound desc = could not find container \"bc7ab7764e2d893152e965392c916dff7876acb81d134cdec2615b253dbee64e\": container with ID starting with bc7ab7764e2d893152e965392c916dff7876acb81d134cdec2615b253dbee64e not found: ID does not exist" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.930230 4772 scope.go:117] "RemoveContainer" containerID="560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.930446 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235"} err="failed to get container status \"560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\": rpc error: code = NotFound desc = could not find container \"560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235\": container with ID starting with 560cc60857ca60de3efcacf65e511e8df84348e2d415785c615be3ec1b51e235 not found: ID does not exist" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.930474 4772 scope.go:117] "RemoveContainer" containerID="b357f837243332636515052b5f466b88738ee0abcf1012fcab2f395262022b16" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.930662 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b357f837243332636515052b5f466b88738ee0abcf1012fcab2f395262022b16"} err="failed to get container status \"b357f837243332636515052b5f466b88738ee0abcf1012fcab2f395262022b16\": rpc error: code = NotFound desc = could not find container \"b357f837243332636515052b5f466b88738ee0abcf1012fcab2f395262022b16\": container with ID starting with b357f837243332636515052b5f466b88738ee0abcf1012fcab2f395262022b16 not found: ID does not exist" Nov 22 10:51:49 crc kubenswrapper[4772]: I1122 10:51:49.951148 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-prstc" Nov 22 10:51:49 crc kubenswrapper[4772]: W1122 10:51:49.966253 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8912075a_c61b_4fa3_ac39_4782b79289f5.slice/crio-75197b49b16a2d2ced0613126417376d3988c5faf214ec5c81b7d86784f74e4d WatchSource:0}: Error finding container 75197b49b16a2d2ced0613126417376d3988c5faf214ec5c81b7d86784f74e4d: Status 404 returned error can't find the container with id 75197b49b16a2d2ced0613126417376d3988c5faf214ec5c81b7d86784f74e4d Nov 22 10:51:50 crc kubenswrapper[4772]: I1122 10:51:50.739765 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-s4mvm_d73fd58d-561a-4b16-9f9d-49ae966edb24/kube-multus/2.log" Nov 22 10:51:50 crc kubenswrapper[4772]: I1122 10:51:50.740459 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-s4mvm_d73fd58d-561a-4b16-9f9d-49ae966edb24/kube-multus/1.log" Nov 22 10:51:50 crc kubenswrapper[4772]: I1122 10:51:50.740562 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-s4mvm" event={"ID":"d73fd58d-561a-4b16-9f9d-49ae966edb24","Type":"ContainerStarted","Data":"bd5da451a06652b0ee80b46c3eaca28124e7ac7df97475faa7fa044f9cc17aa5"} Nov 22 10:51:50 crc kubenswrapper[4772]: I1122 10:51:50.742282 4772 generic.go:334] "Generic (PLEG): container finished" podID="8912075a-c61b-4fa3-ac39-4782b79289f5" containerID="6152b6e37e8bcd2ab9328b3bb53476d77df1d736867629da7acb91c70e42eb9f" exitCode=0 Nov 22 10:51:50 crc kubenswrapper[4772]: I1122 10:51:50.742321 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prstc" event={"ID":"8912075a-c61b-4fa3-ac39-4782b79289f5","Type":"ContainerDied","Data":"6152b6e37e8bcd2ab9328b3bb53476d77df1d736867629da7acb91c70e42eb9f"} Nov 22 10:51:50 crc kubenswrapper[4772]: I1122 10:51:50.742348 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prstc" event={"ID":"8912075a-c61b-4fa3-ac39-4782b79289f5","Type":"ContainerStarted","Data":"75197b49b16a2d2ced0613126417376d3988c5faf214ec5c81b7d86784f74e4d"} Nov 22 10:51:51 crc kubenswrapper[4772]: I1122 10:51:51.422561 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd84e05e-cfd6-46d5-bd23-30689addcd8b" path="/var/lib/kubelet/pods/fd84e05e-cfd6-46d5-bd23-30689addcd8b/volumes" Nov 22 10:51:51 crc kubenswrapper[4772]: I1122 10:51:51.766765 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prstc" event={"ID":"8912075a-c61b-4fa3-ac39-4782b79289f5","Type":"ContainerStarted","Data":"98c2b13d0d38b65090b86659c285ff3006648746870b34b06323865c93e9fcd9"} Nov 22 10:51:51 crc kubenswrapper[4772]: I1122 10:51:51.766815 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prstc" event={"ID":"8912075a-c61b-4fa3-ac39-4782b79289f5","Type":"ContainerStarted","Data":"5ecf69ce87fde69db45428ab6a9f2cd747f44d80ba5a75c81c4a9731d3ee1ee6"} Nov 22 10:51:51 crc kubenswrapper[4772]: I1122 10:51:51.766829 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prstc" event={"ID":"8912075a-c61b-4fa3-ac39-4782b79289f5","Type":"ContainerStarted","Data":"d4c51f06a4f225ec6792ba1fad42a268b826e72843609113fe424ab19a3f790d"} Nov 22 10:51:51 crc kubenswrapper[4772]: I1122 10:51:51.766844 4772 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prstc" event={"ID":"8912075a-c61b-4fa3-ac39-4782b79289f5","Type":"ContainerStarted","Data":"e576ee37ae66620b8de95aa42b6293099b627d3364ada800d15ef36cc8429daa"} Nov 22 10:51:51 crc kubenswrapper[4772]: I1122 10:51:51.766855 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prstc" event={"ID":"8912075a-c61b-4fa3-ac39-4782b79289f5","Type":"ContainerStarted","Data":"f7bb4dcf2496628f51415d6c7f125429622a26f93f4b30b215268be8ee58a76a"} Nov 22 10:51:51 crc kubenswrapper[4772]: I1122 10:51:51.766866 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prstc" event={"ID":"8912075a-c61b-4fa3-ac39-4782b79289f5","Type":"ContainerStarted","Data":"9ecfe55d0cfa01dfa8acbb3ec827c864e94e93eda12f2a09200d5ff6c945382a"} Nov 22 10:51:53 crc kubenswrapper[4772]: I1122 10:51:53.782115 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prstc" event={"ID":"8912075a-c61b-4fa3-ac39-4782b79289f5","Type":"ContainerStarted","Data":"6becf745e26c6d6596bf308ffe8be6fb67de7da97d2185519fe259e359d59324"} Nov 22 10:51:56 crc kubenswrapper[4772]: I1122 10:51:56.806094 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prstc" event={"ID":"8912075a-c61b-4fa3-ac39-4782b79289f5","Type":"ContainerStarted","Data":"ed49c5b01c40c1ca9e90c4ed0c48f6add97ae78ee955feaaed45b2e951375444"} Nov 22 10:51:56 crc kubenswrapper[4772]: I1122 10:51:56.806830 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-prstc" Nov 22 10:51:56 crc kubenswrapper[4772]: I1122 10:51:56.806867 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-prstc" Nov 22 10:51:56 crc kubenswrapper[4772]: I1122 10:51:56.848418 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-prstc" Nov 22 10:51:56 crc kubenswrapper[4772]: I1122 10:51:56.848956 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-prstc" podStartSLOduration=7.848931011 podStartE2EDuration="7.848931011s" podCreationTimestamp="2025-11-22 10:51:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:51:56.839944322 +0000 UTC m=+837.079388826" watchObservedRunningTime="2025-11-22 10:51:56.848931011 +0000 UTC m=+837.088375545" Nov 22 10:51:57 crc kubenswrapper[4772]: I1122 10:51:57.813893 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-prstc" Nov 22 10:51:57 crc kubenswrapper[4772]: I1122 10:51:57.843753 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-prstc" Nov 22 10:52:01 crc kubenswrapper[4772]: I1122 10:52:01.071500 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-swbhm"] Nov 22 10:52:01 crc kubenswrapper[4772]: I1122 10:52:01.073265 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-swbhm" Nov 22 10:52:01 crc kubenswrapper[4772]: I1122 10:52:01.075528 4772 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-nxxqs" Nov 22 10:52:01 crc kubenswrapper[4772]: I1122 10:52:01.075822 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Nov 22 10:52:01 crc kubenswrapper[4772]: I1122 10:52:01.077281 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Nov 22 10:52:01 crc kubenswrapper[4772]: I1122 10:52:01.077658 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Nov 22 10:52:01 crc kubenswrapper[4772]: I1122 10:52:01.082294 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-swbhm"] Nov 22 10:52:01 crc kubenswrapper[4772]: I1122 10:52:01.266898 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgrzn\" (UniqueName: \"kubernetes.io/projected/a0424d70-121b-4e88-ac19-da0f816c6625-kube-api-access-rgrzn\") pod \"crc-storage-crc-swbhm\" (UID: \"a0424d70-121b-4e88-ac19-da0f816c6625\") " pod="crc-storage/crc-storage-crc-swbhm" Nov 22 10:52:01 crc kubenswrapper[4772]: I1122 10:52:01.266965 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/a0424d70-121b-4e88-ac19-da0f816c6625-node-mnt\") pod \"crc-storage-crc-swbhm\" (UID: \"a0424d70-121b-4e88-ac19-da0f816c6625\") " pod="crc-storage/crc-storage-crc-swbhm" Nov 22 10:52:01 crc kubenswrapper[4772]: I1122 10:52:01.266998 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/a0424d70-121b-4e88-ac19-da0f816c6625-crc-storage\") pod \"crc-storage-crc-swbhm\" (UID: \"a0424d70-121b-4e88-ac19-da0f816c6625\") " pod="crc-storage/crc-storage-crc-swbhm" Nov 22 10:52:01 crc kubenswrapper[4772]: I1122 10:52:01.367963 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgrzn\" (UniqueName: \"kubernetes.io/projected/a0424d70-121b-4e88-ac19-da0f816c6625-kube-api-access-rgrzn\") pod \"crc-storage-crc-swbhm\" (UID: \"a0424d70-121b-4e88-ac19-da0f816c6625\") " pod="crc-storage/crc-storage-crc-swbhm" Nov 22 10:52:01 crc kubenswrapper[4772]: I1122 10:52:01.368007 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/a0424d70-121b-4e88-ac19-da0f816c6625-node-mnt\") pod \"crc-storage-crc-swbhm\" (UID: \"a0424d70-121b-4e88-ac19-da0f816c6625\") " pod="crc-storage/crc-storage-crc-swbhm" Nov 22 10:52:01 crc kubenswrapper[4772]: I1122 10:52:01.368035 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/a0424d70-121b-4e88-ac19-da0f816c6625-crc-storage\") pod \"crc-storage-crc-swbhm\" (UID: \"a0424d70-121b-4e88-ac19-da0f816c6625\") " pod="crc-storage/crc-storage-crc-swbhm" Nov 22 10:52:01 crc kubenswrapper[4772]: I1122 10:52:01.368293 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/a0424d70-121b-4e88-ac19-da0f816c6625-node-mnt\") pod \"crc-storage-crc-swbhm\" (UID: \"a0424d70-121b-4e88-ac19-da0f816c6625\") " 
pod="crc-storage/crc-storage-crc-swbhm" Nov 22 10:52:01 crc kubenswrapper[4772]: I1122 10:52:01.369955 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Nov 22 10:52:01 crc kubenswrapper[4772]: I1122 10:52:01.378797 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/a0424d70-121b-4e88-ac19-da0f816c6625-crc-storage\") pod \"crc-storage-crc-swbhm\" (UID: \"a0424d70-121b-4e88-ac19-da0f816c6625\") " pod="crc-storage/crc-storage-crc-swbhm" Nov 22 10:52:01 crc kubenswrapper[4772]: I1122 10:52:01.385689 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Nov 22 10:52:01 crc kubenswrapper[4772]: I1122 10:52:01.396168 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Nov 22 10:52:01 crc kubenswrapper[4772]: I1122 10:52:01.412101 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgrzn\" (UniqueName: \"kubernetes.io/projected/a0424d70-121b-4e88-ac19-da0f816c6625-kube-api-access-rgrzn\") pod \"crc-storage-crc-swbhm\" (UID: \"a0424d70-121b-4e88-ac19-da0f816c6625\") " pod="crc-storage/crc-storage-crc-swbhm" Nov 22 10:52:01 crc kubenswrapper[4772]: I1122 10:52:01.656783 4772 scope.go:117] "RemoveContainer" containerID="3325c0083f66336c643bd867f49df279967efccdbeadef4c588e1e43fbe1c13c" Nov 22 10:52:01 crc kubenswrapper[4772]: I1122 10:52:01.704230 4772 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-nxxqs" Nov 22 10:52:01 crc kubenswrapper[4772]: I1122 10:52:01.712884 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-swbhm" Nov 22 10:52:01 crc kubenswrapper[4772]: I1122 10:52:01.837646 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-s4mvm_d73fd58d-561a-4b16-9f9d-49ae966edb24/kube-multus/2.log" Nov 22 10:52:02 crc kubenswrapper[4772]: I1122 10:52:02.079034 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-swbhm"] Nov 22 10:52:02 crc kubenswrapper[4772]: W1122 10:52:02.084998 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda0424d70_121b_4e88_ac19_da0f816c6625.slice/crio-d68442b6e9718feb130fd91bddf4f8189dc13b626647e030180d78bfc31e6546 WatchSource:0}: Error finding container d68442b6e9718feb130fd91bddf4f8189dc13b626647e030180d78bfc31e6546: Status 404 returned error can't find the container with id d68442b6e9718feb130fd91bddf4f8189dc13b626647e030180d78bfc31e6546 Nov 22 10:52:02 crc kubenswrapper[4772]: I1122 10:52:02.087158 4772 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 10:52:02 crc kubenswrapper[4772]: I1122 10:52:02.843792 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-swbhm" event={"ID":"a0424d70-121b-4e88-ac19-da0f816c6625","Type":"ContainerStarted","Data":"d68442b6e9718feb130fd91bddf4f8189dc13b626647e030180d78bfc31e6546"} Nov 22 10:52:03 crc kubenswrapper[4772]: I1122 10:52:03.852451 4772 generic.go:334] "Generic (PLEG): container finished" podID="a0424d70-121b-4e88-ac19-da0f816c6625" containerID="a0aedd2e55e0259cc44970c6a4851de482dcf1e56fa55e36f617f075dff78870" exitCode=0 Nov 22 10:52:03 crc kubenswrapper[4772]: I1122 10:52:03.852548 4772 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-swbhm" event={"ID":"a0424d70-121b-4e88-ac19-da0f816c6625","Type":"ContainerDied","Data":"a0aedd2e55e0259cc44970c6a4851de482dcf1e56fa55e36f617f075dff78870"} Nov 22 10:52:05 crc kubenswrapper[4772]: I1122 10:52:05.101967 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-swbhm" Nov 22 10:52:05 crc kubenswrapper[4772]: I1122 10:52:05.214170 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/a0424d70-121b-4e88-ac19-da0f816c6625-node-mnt\") pod \"a0424d70-121b-4e88-ac19-da0f816c6625\" (UID: \"a0424d70-121b-4e88-ac19-da0f816c6625\") " Nov 22 10:52:05 crc kubenswrapper[4772]: I1122 10:52:05.214233 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgrzn\" (UniqueName: \"kubernetes.io/projected/a0424d70-121b-4e88-ac19-da0f816c6625-kube-api-access-rgrzn\") pod \"a0424d70-121b-4e88-ac19-da0f816c6625\" (UID: \"a0424d70-121b-4e88-ac19-da0f816c6625\") " Nov 22 10:52:05 crc kubenswrapper[4772]: I1122 10:52:05.214275 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/a0424d70-121b-4e88-ac19-da0f816c6625-crc-storage\") pod \"a0424d70-121b-4e88-ac19-da0f816c6625\" (UID: \"a0424d70-121b-4e88-ac19-da0f816c6625\") " Nov 22 10:52:05 crc kubenswrapper[4772]: I1122 10:52:05.214517 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a0424d70-121b-4e88-ac19-da0f816c6625-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "a0424d70-121b-4e88-ac19-da0f816c6625" (UID: "a0424d70-121b-4e88-ac19-da0f816c6625"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 10:52:05 crc kubenswrapper[4772]: I1122 10:52:05.215547 4772 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/a0424d70-121b-4e88-ac19-da0f816c6625-node-mnt\") on node \"crc\" DevicePath \"\"" Nov 22 10:52:05 crc kubenswrapper[4772]: I1122 10:52:05.221231 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0424d70-121b-4e88-ac19-da0f816c6625-kube-api-access-rgrzn" (OuterVolumeSpecName: "kube-api-access-rgrzn") pod "a0424d70-121b-4e88-ac19-da0f816c6625" (UID: "a0424d70-121b-4e88-ac19-da0f816c6625"). InnerVolumeSpecName "kube-api-access-rgrzn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:52:05 crc kubenswrapper[4772]: I1122 10:52:05.231536 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0424d70-121b-4e88-ac19-da0f816c6625-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "a0424d70-121b-4e88-ac19-da0f816c6625" (UID: "a0424d70-121b-4e88-ac19-da0f816c6625"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:52:05 crc kubenswrapper[4772]: I1122 10:52:05.316276 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgrzn\" (UniqueName: \"kubernetes.io/projected/a0424d70-121b-4e88-ac19-da0f816c6625-kube-api-access-rgrzn\") on node \"crc\" DevicePath \"\"" Nov 22 10:52:05 crc kubenswrapper[4772]: I1122 10:52:05.316305 4772 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/a0424d70-121b-4e88-ac19-da0f816c6625-crc-storage\") on node \"crc\" DevicePath \"\"" Nov 22 10:52:05 crc kubenswrapper[4772]: I1122 10:52:05.865159 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-swbhm" event={"ID":"a0424d70-121b-4e88-ac19-da0f816c6625","Type":"ContainerDied","Data":"d68442b6e9718feb130fd91bddf4f8189dc13b626647e030180d78bfc31e6546"} Nov 22 10:52:05 crc kubenswrapper[4772]: I1122 10:52:05.865233 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d68442b6e9718feb130fd91bddf4f8189dc13b626647e030180d78bfc31e6546" Nov 22 10:52:05 crc kubenswrapper[4772]: I1122 10:52:05.865192 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-swbhm" Nov 22 10:52:13 crc kubenswrapper[4772]: I1122 10:52:13.399744 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emccrf"] Nov 22 10:52:13 crc kubenswrapper[4772]: E1122 10:52:13.400527 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0424d70-121b-4e88-ac19-da0f816c6625" containerName="storage" Nov 22 10:52:13 crc kubenswrapper[4772]: I1122 10:52:13.400543 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0424d70-121b-4e88-ac19-da0f816c6625" containerName="storage" Nov 22 10:52:13 crc kubenswrapper[4772]: I1122 10:52:13.400669 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0424d70-121b-4e88-ac19-da0f816c6625" containerName="storage" Nov 22 10:52:13 crc kubenswrapper[4772]: I1122 10:52:13.401547 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emccrf" Nov 22 10:52:13 crc kubenswrapper[4772]: I1122 10:52:13.403554 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 22 10:52:13 crc kubenswrapper[4772]: I1122 10:52:13.410124 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emccrf"] Nov 22 10:52:13 crc kubenswrapper[4772]: I1122 10:52:13.410885 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a7219a3f-650e-4d7a-b44e-f48aeb8e710b-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emccrf\" (UID: \"a7219a3f-650e-4d7a-b44e-f48aeb8e710b\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emccrf" Nov 22 10:52:13 crc kubenswrapper[4772]: I1122 10:52:13.410965 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qrp5\" (UniqueName: \"kubernetes.io/projected/a7219a3f-650e-4d7a-b44e-f48aeb8e710b-kube-api-access-5qrp5\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emccrf\" (UID: \"a7219a3f-650e-4d7a-b44e-f48aeb8e710b\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emccrf" Nov 22 10:52:13 crc kubenswrapper[4772]: I1122 10:52:13.412286 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a7219a3f-650e-4d7a-b44e-f48aeb8e710b-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emccrf\" (UID: \"a7219a3f-650e-4d7a-b44e-f48aeb8e710b\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emccrf" Nov 22 10:52:13 crc kubenswrapper[4772]: I1122 10:52:13.512703 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a7219a3f-650e-4d7a-b44e-f48aeb8e710b-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emccrf\" (UID: \"a7219a3f-650e-4d7a-b44e-f48aeb8e710b\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emccrf" Nov 22 10:52:13 crc kubenswrapper[4772]: I1122 10:52:13.513033 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a7219a3f-650e-4d7a-b44e-f48aeb8e710b-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emccrf\" (UID: \"a7219a3f-650e-4d7a-b44e-f48aeb8e710b\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emccrf" Nov 22 10:52:13 crc kubenswrapper[4772]: I1122 10:52:13.513408 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qrp5\" (UniqueName: \"kubernetes.io/projected/a7219a3f-650e-4d7a-b44e-f48aeb8e710b-kube-api-access-5qrp5\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emccrf\" (UID: \"a7219a3f-650e-4d7a-b44e-f48aeb8e710b\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emccrf" Nov 22 10:52:13 crc kubenswrapper[4772]: I1122 10:52:13.513598 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/a7219a3f-650e-4d7a-b44e-f48aeb8e710b-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emccrf\" (UID: \"a7219a3f-650e-4d7a-b44e-f48aeb8e710b\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emccrf" Nov 22 10:52:13 crc kubenswrapper[4772]: I1122 10:52:13.513597 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a7219a3f-650e-4d7a-b44e-f48aeb8e710b-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emccrf\" (UID: \"a7219a3f-650e-4d7a-b44e-f48aeb8e710b\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emccrf" Nov 22 10:52:13 crc kubenswrapper[4772]: I1122 10:52:13.532805 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qrp5\" (UniqueName: \"kubernetes.io/projected/a7219a3f-650e-4d7a-b44e-f48aeb8e710b-kube-api-access-5qrp5\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emccrf\" (UID: \"a7219a3f-650e-4d7a-b44e-f48aeb8e710b\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emccrf" Nov 22 10:52:13 crc kubenswrapper[4772]: I1122 10:52:13.759615 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emccrf" Nov 22 10:52:14 crc kubenswrapper[4772]: I1122 10:52:14.141270 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emccrf"] Nov 22 10:52:14 crc kubenswrapper[4772]: W1122 10:52:14.148233 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda7219a3f_650e_4d7a_b44e_f48aeb8e710b.slice/crio-37544702edaf263728c4832fcd2fe00bed1417c80bc755469687ea925980f76a WatchSource:0}: Error finding container 37544702edaf263728c4832fcd2fe00bed1417c80bc755469687ea925980f76a: Status 404 returned error can't find the container with id 37544702edaf263728c4832fcd2fe00bed1417c80bc755469687ea925980f76a Nov 22 10:52:14 crc kubenswrapper[4772]: I1122 10:52:14.910492 4772 generic.go:334] "Generic (PLEG): container finished" podID="a7219a3f-650e-4d7a-b44e-f48aeb8e710b" containerID="994d105d2b15ad7d1e81bfd8ec8ddda584a6b03d15346f390e65e8f6452b09f7" exitCode=0 Nov 22 10:52:14 crc kubenswrapper[4772]: I1122 10:52:14.910549 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emccrf" event={"ID":"a7219a3f-650e-4d7a-b44e-f48aeb8e710b","Type":"ContainerDied","Data":"994d105d2b15ad7d1e81bfd8ec8ddda584a6b03d15346f390e65e8f6452b09f7"} Nov 22 10:52:14 crc kubenswrapper[4772]: I1122 10:52:14.910577 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emccrf" event={"ID":"a7219a3f-650e-4d7a-b44e-f48aeb8e710b","Type":"ContainerStarted","Data":"37544702edaf263728c4832fcd2fe00bed1417c80bc755469687ea925980f76a"} Nov 22 10:52:15 crc kubenswrapper[4772]: I1122 10:52:15.663392 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-c6jft"] Nov 22 10:52:15 crc kubenswrapper[4772]: I1122 10:52:15.669115 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-c6jft" Nov 22 10:52:15 crc kubenswrapper[4772]: I1122 10:52:15.672950 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c6jft"] Nov 22 10:52:15 crc kubenswrapper[4772]: I1122 10:52:15.839844 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6531269d-299f-4528-a40a-42976e6fc55e-utilities\") pod \"redhat-operators-c6jft\" (UID: \"6531269d-299f-4528-a40a-42976e6fc55e\") " pod="openshift-marketplace/redhat-operators-c6jft" Nov 22 10:52:15 crc kubenswrapper[4772]: I1122 10:52:15.839916 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6531269d-299f-4528-a40a-42976e6fc55e-catalog-content\") pod \"redhat-operators-c6jft\" (UID: \"6531269d-299f-4528-a40a-42976e6fc55e\") " pod="openshift-marketplace/redhat-operators-c6jft" Nov 22 10:52:15 crc kubenswrapper[4772]: I1122 10:52:15.839932 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrksx\" (UniqueName: \"kubernetes.io/projected/6531269d-299f-4528-a40a-42976e6fc55e-kube-api-access-nrksx\") pod \"redhat-operators-c6jft\" (UID: \"6531269d-299f-4528-a40a-42976e6fc55e\") " pod="openshift-marketplace/redhat-operators-c6jft" Nov 22 10:52:15 crc kubenswrapper[4772]: I1122 10:52:15.940884 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6531269d-299f-4528-a40a-42976e6fc55e-utilities\") pod \"redhat-operators-c6jft\" (UID: \"6531269d-299f-4528-a40a-42976e6fc55e\") " pod="openshift-marketplace/redhat-operators-c6jft" Nov 22 10:52:15 crc kubenswrapper[4772]: I1122 10:52:15.940990 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrksx\" (UniqueName: \"kubernetes.io/projected/6531269d-299f-4528-a40a-42976e6fc55e-kube-api-access-nrksx\") pod \"redhat-operators-c6jft\" (UID: \"6531269d-299f-4528-a40a-42976e6fc55e\") " pod="openshift-marketplace/redhat-operators-c6jft" Nov 22 10:52:15 crc kubenswrapper[4772]: I1122 10:52:15.941017 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6531269d-299f-4528-a40a-42976e6fc55e-catalog-content\") pod \"redhat-operators-c6jft\" (UID: \"6531269d-299f-4528-a40a-42976e6fc55e\") " pod="openshift-marketplace/redhat-operators-c6jft" Nov 22 10:52:15 crc kubenswrapper[4772]: I1122 10:52:15.941497 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6531269d-299f-4528-a40a-42976e6fc55e-catalog-content\") pod \"redhat-operators-c6jft\" (UID: \"6531269d-299f-4528-a40a-42976e6fc55e\") " pod="openshift-marketplace/redhat-operators-c6jft" Nov 22 10:52:15 crc kubenswrapper[4772]: I1122 10:52:15.941511 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6531269d-299f-4528-a40a-42976e6fc55e-utilities\") pod \"redhat-operators-c6jft\" (UID: \"6531269d-299f-4528-a40a-42976e6fc55e\") " pod="openshift-marketplace/redhat-operators-c6jft" Nov 22 10:52:15 crc kubenswrapper[4772]: I1122 10:52:15.963674 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-nrksx\" (UniqueName: \"kubernetes.io/projected/6531269d-299f-4528-a40a-42976e6fc55e-kube-api-access-nrksx\") pod \"redhat-operators-c6jft\" (UID: \"6531269d-299f-4528-a40a-42976e6fc55e\") " pod="openshift-marketplace/redhat-operators-c6jft" Nov 22 10:52:16 crc kubenswrapper[4772]: I1122 10:52:16.038661 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c6jft" Nov 22 10:52:16 crc kubenswrapper[4772]: I1122 10:52:16.431155 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c6jft"] Nov 22 10:52:16 crc kubenswrapper[4772]: I1122 10:52:16.921703 4772 generic.go:334] "Generic (PLEG): container finished" podID="6531269d-299f-4528-a40a-42976e6fc55e" containerID="6efe52b6cc1ab5bd427d99ffa7f5c3e7b9ac11b25f879f1e9ed3ee7f36033796" exitCode=0 Nov 22 10:52:16 crc kubenswrapper[4772]: I1122 10:52:16.921777 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c6jft" event={"ID":"6531269d-299f-4528-a40a-42976e6fc55e","Type":"ContainerDied","Data":"6efe52b6cc1ab5bd427d99ffa7f5c3e7b9ac11b25f879f1e9ed3ee7f36033796"} Nov 22 10:52:16 crc kubenswrapper[4772]: I1122 10:52:16.921806 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c6jft" event={"ID":"6531269d-299f-4528-a40a-42976e6fc55e","Type":"ContainerStarted","Data":"1e98424e2e0f7fb63970c3d2ae3bc4ea9bd354a2cd79fc7315c30eae39dccb6f"} Nov 22 10:52:16 crc kubenswrapper[4772]: I1122 10:52:16.923569 4772 generic.go:334] "Generic (PLEG): container finished" podID="a7219a3f-650e-4d7a-b44e-f48aeb8e710b" containerID="9e966786553b44c0094c9af2e0cb56893302b4312933c05eff161b5bae237587" exitCode=0 Nov 22 10:52:16 crc kubenswrapper[4772]: I1122 10:52:16.923597 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emccrf" event={"ID":"a7219a3f-650e-4d7a-b44e-f48aeb8e710b","Type":"ContainerDied","Data":"9e966786553b44c0094c9af2e0cb56893302b4312933c05eff161b5bae237587"} Nov 22 10:52:17 crc kubenswrapper[4772]: I1122 10:52:17.929924 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c6jft" event={"ID":"6531269d-299f-4528-a40a-42976e6fc55e","Type":"ContainerStarted","Data":"5a929927164088d2c27e7a0d23f81da74d3d86972d8ffbfe3ee12540bb8ee1b1"} Nov 22 10:52:17 crc kubenswrapper[4772]: I1122 10:52:17.931756 4772 generic.go:334] "Generic (PLEG): container finished" podID="a7219a3f-650e-4d7a-b44e-f48aeb8e710b" containerID="8416c6fcf94a785a57b5d876ea3429e37c58040b9eb758b9c36a85809f3eca98" exitCode=0 Nov 22 10:52:17 crc kubenswrapper[4772]: I1122 10:52:17.931781 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emccrf" event={"ID":"a7219a3f-650e-4d7a-b44e-f48aeb8e710b","Type":"ContainerDied","Data":"8416c6fcf94a785a57b5d876ea3429e37c58040b9eb758b9c36a85809f3eca98"} Nov 22 10:52:18 crc kubenswrapper[4772]: I1122 10:52:18.939246 4772 generic.go:334] "Generic (PLEG): container finished" podID="6531269d-299f-4528-a40a-42976e6fc55e" containerID="5a929927164088d2c27e7a0d23f81da74d3d86972d8ffbfe3ee12540bb8ee1b1" exitCode=0 Nov 22 10:52:18 crc kubenswrapper[4772]: I1122 10:52:18.939339 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c6jft" 
event={"ID":"6531269d-299f-4528-a40a-42976e6fc55e","Type":"ContainerDied","Data":"5a929927164088d2c27e7a0d23f81da74d3d86972d8ffbfe3ee12540bb8ee1b1"} Nov 22 10:52:19 crc kubenswrapper[4772]: I1122 10:52:19.173152 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emccrf" Nov 22 10:52:19 crc kubenswrapper[4772]: I1122 10:52:19.278499 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a7219a3f-650e-4d7a-b44e-f48aeb8e710b-util\") pod \"a7219a3f-650e-4d7a-b44e-f48aeb8e710b\" (UID: \"a7219a3f-650e-4d7a-b44e-f48aeb8e710b\") " Nov 22 10:52:19 crc kubenswrapper[4772]: I1122 10:52:19.278574 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qrp5\" (UniqueName: \"kubernetes.io/projected/a7219a3f-650e-4d7a-b44e-f48aeb8e710b-kube-api-access-5qrp5\") pod \"a7219a3f-650e-4d7a-b44e-f48aeb8e710b\" (UID: \"a7219a3f-650e-4d7a-b44e-f48aeb8e710b\") " Nov 22 10:52:19 crc kubenswrapper[4772]: I1122 10:52:19.278646 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a7219a3f-650e-4d7a-b44e-f48aeb8e710b-bundle\") pod \"a7219a3f-650e-4d7a-b44e-f48aeb8e710b\" (UID: \"a7219a3f-650e-4d7a-b44e-f48aeb8e710b\") " Nov 22 10:52:19 crc kubenswrapper[4772]: I1122 10:52:19.279119 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7219a3f-650e-4d7a-b44e-f48aeb8e710b-bundle" (OuterVolumeSpecName: "bundle") pod "a7219a3f-650e-4d7a-b44e-f48aeb8e710b" (UID: "a7219a3f-650e-4d7a-b44e-f48aeb8e710b"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 10:52:19 crc kubenswrapper[4772]: I1122 10:52:19.283913 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7219a3f-650e-4d7a-b44e-f48aeb8e710b-kube-api-access-5qrp5" (OuterVolumeSpecName: "kube-api-access-5qrp5") pod "a7219a3f-650e-4d7a-b44e-f48aeb8e710b" (UID: "a7219a3f-650e-4d7a-b44e-f48aeb8e710b"). InnerVolumeSpecName "kube-api-access-5qrp5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:52:19 crc kubenswrapper[4772]: I1122 10:52:19.292290 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7219a3f-650e-4d7a-b44e-f48aeb8e710b-util" (OuterVolumeSpecName: "util") pod "a7219a3f-650e-4d7a-b44e-f48aeb8e710b" (UID: "a7219a3f-650e-4d7a-b44e-f48aeb8e710b"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 10:52:19 crc kubenswrapper[4772]: I1122 10:52:19.380393 4772 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a7219a3f-650e-4d7a-b44e-f48aeb8e710b-util\") on node \"crc\" DevicePath \"\"" Nov 22 10:52:19 crc kubenswrapper[4772]: I1122 10:52:19.380422 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qrp5\" (UniqueName: \"kubernetes.io/projected/a7219a3f-650e-4d7a-b44e-f48aeb8e710b-kube-api-access-5qrp5\") on node \"crc\" DevicePath \"\"" Nov 22 10:52:19 crc kubenswrapper[4772]: I1122 10:52:19.380433 4772 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a7219a3f-650e-4d7a-b44e-f48aeb8e710b-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 10:52:19 crc kubenswrapper[4772]: I1122 10:52:19.965520 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c6jft" event={"ID":"6531269d-299f-4528-a40a-42976e6fc55e","Type":"ContainerStarted","Data":"11e09d32af2ef35b6af7304996479aa95eacd70f46584876db7e43e8eae8774b"} Nov 22 10:52:19 crc kubenswrapper[4772]: I1122 10:52:19.967985 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emccrf" event={"ID":"a7219a3f-650e-4d7a-b44e-f48aeb8e710b","Type":"ContainerDied","Data":"37544702edaf263728c4832fcd2fe00bed1417c80bc755469687ea925980f76a"} Nov 22 10:52:19 crc kubenswrapper[4772]: I1122 10:52:19.968129 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37544702edaf263728c4832fcd2fe00bed1417c80bc755469687ea925980f76a" Nov 22 10:52:19 crc kubenswrapper[4772]: I1122 10:52:19.968228 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emccrf" Nov 22 10:52:19 crc kubenswrapper[4772]: I1122 10:52:19.984481 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-prstc" Nov 22 10:52:19 crc kubenswrapper[4772]: I1122 10:52:19.988715 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-c6jft" podStartSLOduration=2.5671743989999998 podStartE2EDuration="4.988665426s" podCreationTimestamp="2025-11-22 10:52:15 +0000 UTC" firstStartedPulling="2025-11-22 10:52:16.924497447 +0000 UTC m=+857.163941961" lastFinishedPulling="2025-11-22 10:52:19.345988494 +0000 UTC m=+859.585432988" observedRunningTime="2025-11-22 10:52:19.986250064 +0000 UTC m=+860.225694558" watchObservedRunningTime="2025-11-22 10:52:19.988665426 +0000 UTC m=+860.228109940" Nov 22 10:52:21 crc kubenswrapper[4772]: I1122 10:52:21.753318 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-596nz"] Nov 22 10:52:21 crc kubenswrapper[4772]: E1122 10:52:21.753660 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7219a3f-650e-4d7a-b44e-f48aeb8e710b" containerName="extract" Nov 22 10:52:21 crc kubenswrapper[4772]: I1122 10:52:21.753677 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7219a3f-650e-4d7a-b44e-f48aeb8e710b" containerName="extract" Nov 22 10:52:21 crc kubenswrapper[4772]: E1122 10:52:21.753690 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7219a3f-650e-4d7a-b44e-f48aeb8e710b" containerName="pull" Nov 22 10:52:21 crc kubenswrapper[4772]: I1122 10:52:21.753698 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7219a3f-650e-4d7a-b44e-f48aeb8e710b" containerName="pull" Nov 22 10:52:21 crc kubenswrapper[4772]: E1122 10:52:21.753720 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7219a3f-650e-4d7a-b44e-f48aeb8e710b" containerName="util" Nov 22 10:52:21 crc kubenswrapper[4772]: I1122 10:52:21.753727 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7219a3f-650e-4d7a-b44e-f48aeb8e710b" containerName="util" Nov 22 10:52:21 crc kubenswrapper[4772]: I1122 10:52:21.753836 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7219a3f-650e-4d7a-b44e-f48aeb8e710b" containerName="extract" Nov 22 10:52:21 crc kubenswrapper[4772]: I1122 10:52:21.754272 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-596nz" Nov 22 10:52:21 crc kubenswrapper[4772]: I1122 10:52:21.756270 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Nov 22 10:52:21 crc kubenswrapper[4772]: I1122 10:52:21.756418 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Nov 22 10:52:21 crc kubenswrapper[4772]: I1122 10:52:21.759835 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-kfl77" Nov 22 10:52:21 crc kubenswrapper[4772]: I1122 10:52:21.769353 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-596nz"] Nov 22 10:52:21 crc kubenswrapper[4772]: I1122 10:52:21.906353 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slmqx\" (UniqueName: \"kubernetes.io/projected/5e1f7834-0fb0-4790-836b-5bb5ca61bec2-kube-api-access-slmqx\") pod \"nmstate-operator-557fdffb88-596nz\" (UID: \"5e1f7834-0fb0-4790-836b-5bb5ca61bec2\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-596nz" Nov 22 10:52:22 crc kubenswrapper[4772]: I1122 10:52:22.007748 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slmqx\" (UniqueName: \"kubernetes.io/projected/5e1f7834-0fb0-4790-836b-5bb5ca61bec2-kube-api-access-slmqx\") pod \"nmstate-operator-557fdffb88-596nz\" (UID: \"5e1f7834-0fb0-4790-836b-5bb5ca61bec2\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-596nz" Nov 22 10:52:22 crc kubenswrapper[4772]: I1122 10:52:22.039121 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slmqx\" (UniqueName: \"kubernetes.io/projected/5e1f7834-0fb0-4790-836b-5bb5ca61bec2-kube-api-access-slmqx\") pod \"nmstate-operator-557fdffb88-596nz\" (UID: \"5e1f7834-0fb0-4790-836b-5bb5ca61bec2\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-596nz" Nov 22 10:52:22 crc kubenswrapper[4772]: I1122 10:52:22.072465 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-596nz" Nov 22 10:52:22 crc kubenswrapper[4772]: I1122 10:52:22.482117 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-596nz"] Nov 22 10:52:22 crc kubenswrapper[4772]: I1122 10:52:22.981653 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-596nz" event={"ID":"5e1f7834-0fb0-4790-836b-5bb5ca61bec2","Type":"ContainerStarted","Data":"83a4ebfde3b05359c70f81314e4ff5cd1c1ef8bb508de842a961fbd873bbcabf"} Nov 22 10:52:25 crc kubenswrapper[4772]: I1122 10:52:25.997229 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-596nz" event={"ID":"5e1f7834-0fb0-4790-836b-5bb5ca61bec2","Type":"ContainerStarted","Data":"02a0b30c53727ab8a38941ae6dfa33bcb5aa66c736a8f56302fa771343d01e83"} Nov 22 10:52:26 crc kubenswrapper[4772]: I1122 10:52:26.014253 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-557fdffb88-596nz" podStartSLOduration=2.496652021 podStartE2EDuration="5.014233275s" podCreationTimestamp="2025-11-22 10:52:21 +0000 UTC" firstStartedPulling="2025-11-22 10:52:22.49254644 +0000 UTC m=+862.731990924" lastFinishedPulling="2025-11-22 10:52:25.010127684 +0000 UTC m=+865.249572178" observedRunningTime="2025-11-22 10:52:26.010378896 +0000 UTC m=+866.249823420" watchObservedRunningTime="2025-11-22 10:52:26.014233275 +0000 UTC m=+866.253677789" Nov 22 10:52:26 crc kubenswrapper[4772]: I1122 10:52:26.039070 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-c6jft" Nov 22 10:52:26 crc kubenswrapper[4772]: I1122 10:52:26.039142 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-c6jft" Nov 22 10:52:26 crc kubenswrapper[4772]: I1122 10:52:26.081716 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-c6jft" Nov 22 10:52:26 crc kubenswrapper[4772]: I1122 10:52:26.243939 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vsgxf"] Nov 22 10:52:26 crc kubenswrapper[4772]: I1122 10:52:26.245210 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vsgxf" Nov 22 10:52:26 crc kubenswrapper[4772]: I1122 10:52:26.259272 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vsgxf"] Nov 22 10:52:26 crc kubenswrapper[4772]: I1122 10:52:26.393875 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h6lk\" (UniqueName: \"kubernetes.io/projected/1a50ad9a-4d55-4bf4-88f7-8c3595712616-kube-api-access-7h6lk\") pod \"community-operators-vsgxf\" (UID: \"1a50ad9a-4d55-4bf4-88f7-8c3595712616\") " pod="openshift-marketplace/community-operators-vsgxf" Nov 22 10:52:26 crc kubenswrapper[4772]: I1122 10:52:26.393936 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a50ad9a-4d55-4bf4-88f7-8c3595712616-utilities\") pod \"community-operators-vsgxf\" (UID: \"1a50ad9a-4d55-4bf4-88f7-8c3595712616\") " pod="openshift-marketplace/community-operators-vsgxf" Nov 22 10:52:26 crc kubenswrapper[4772]: I1122 10:52:26.394221 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a50ad9a-4d55-4bf4-88f7-8c3595712616-catalog-content\") pod \"community-operators-vsgxf\" (UID: \"1a50ad9a-4d55-4bf4-88f7-8c3595712616\") " pod="openshift-marketplace/community-operators-vsgxf" Nov 22 10:52:26 crc kubenswrapper[4772]: I1122 10:52:26.495306 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a50ad9a-4d55-4bf4-88f7-8c3595712616-utilities\") pod \"community-operators-vsgxf\" (UID: \"1a50ad9a-4d55-4bf4-88f7-8c3595712616\") " pod="openshift-marketplace/community-operators-vsgxf" Nov 22 10:52:26 crc kubenswrapper[4772]: I1122 10:52:26.495409 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a50ad9a-4d55-4bf4-88f7-8c3595712616-catalog-content\") pod \"community-operators-vsgxf\" (UID: \"1a50ad9a-4d55-4bf4-88f7-8c3595712616\") " pod="openshift-marketplace/community-operators-vsgxf" Nov 22 10:52:26 crc kubenswrapper[4772]: I1122 10:52:26.495504 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7h6lk\" (UniqueName: \"kubernetes.io/projected/1a50ad9a-4d55-4bf4-88f7-8c3595712616-kube-api-access-7h6lk\") pod \"community-operators-vsgxf\" (UID: \"1a50ad9a-4d55-4bf4-88f7-8c3595712616\") " pod="openshift-marketplace/community-operators-vsgxf" Nov 22 10:52:26 crc kubenswrapper[4772]: I1122 10:52:26.495835 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a50ad9a-4d55-4bf4-88f7-8c3595712616-utilities\") pod \"community-operators-vsgxf\" (UID: \"1a50ad9a-4d55-4bf4-88f7-8c3595712616\") " pod="openshift-marketplace/community-operators-vsgxf" Nov 22 10:52:26 crc kubenswrapper[4772]: I1122 10:52:26.495954 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a50ad9a-4d55-4bf4-88f7-8c3595712616-catalog-content\") pod \"community-operators-vsgxf\" (UID: \"1a50ad9a-4d55-4bf4-88f7-8c3595712616\") " pod="openshift-marketplace/community-operators-vsgxf" Nov 22 10:52:26 crc kubenswrapper[4772]: I1122 10:52:26.515328 4772 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-7h6lk\" (UniqueName: \"kubernetes.io/projected/1a50ad9a-4d55-4bf4-88f7-8c3595712616-kube-api-access-7h6lk\") pod \"community-operators-vsgxf\" (UID: \"1a50ad9a-4d55-4bf4-88f7-8c3595712616\") " pod="openshift-marketplace/community-operators-vsgxf" Nov 22 10:52:26 crc kubenswrapper[4772]: I1122 10:52:26.594925 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vsgxf" Nov 22 10:52:26 crc kubenswrapper[4772]: I1122 10:52:26.880030 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vsgxf"] Nov 22 10:52:26 crc kubenswrapper[4772]: W1122 10:52:26.887395 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a50ad9a_4d55_4bf4_88f7_8c3595712616.slice/crio-f2ed7126396e5e42c06d0c0e6d1293cdf4572ca59a1843ec31ead1f1d13d947e WatchSource:0}: Error finding container f2ed7126396e5e42c06d0c0e6d1293cdf4572ca59a1843ec31ead1f1d13d947e: Status 404 returned error can't find the container with id f2ed7126396e5e42c06d0c0e6d1293cdf4572ca59a1843ec31ead1f1d13d947e Nov 22 10:52:26 crc kubenswrapper[4772]: I1122 10:52:26.970157 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-mdxp4"] Nov 22 10:52:26 crc kubenswrapper[4772]: I1122 10:52:26.971461 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-mdxp4" Nov 22 10:52:26 crc kubenswrapper[4772]: I1122 10:52:26.977149 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-wgbvb" Nov 22 10:52:26 crc kubenswrapper[4772]: I1122 10:52:26.978089 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-5lsvv"] Nov 22 10:52:26 crc kubenswrapper[4772]: I1122 10:52:26.978948 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-5lsvv" Nov 22 10:52:26 crc kubenswrapper[4772]: I1122 10:52:26.980587 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Nov 22 10:52:27 crc kubenswrapper[4772]: I1122 10:52:27.007768 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/b41327fc-c766-4258-8252-da74057be64a-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-5lsvv\" (UID: \"b41327fc-c766-4258-8252-da74057be64a\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-5lsvv" Nov 22 10:52:27 crc kubenswrapper[4772]: I1122 10:52:27.007938 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b5lh\" (UniqueName: \"kubernetes.io/projected/f7639cc0-dabf-4a96-90bc-dc4e3eda5f5b-kube-api-access-2b5lh\") pod \"nmstate-metrics-5dcf9c57c5-mdxp4\" (UID: \"f7639cc0-dabf-4a96-90bc-dc4e3eda5f5b\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-mdxp4" Nov 22 10:52:27 crc kubenswrapper[4772]: I1122 10:52:27.007981 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9lfm\" (UniqueName: \"kubernetes.io/projected/b41327fc-c766-4258-8252-da74057be64a-kube-api-access-h9lfm\") pod \"nmstate-webhook-6b89b748d8-5lsvv\" (UID: \"b41327fc-c766-4258-8252-da74057be64a\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-5lsvv" Nov 22 10:52:27 crc kubenswrapper[4772]: I1122 10:52:27.008601 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vsgxf" event={"ID":"1a50ad9a-4d55-4bf4-88f7-8c3595712616","Type":"ContainerStarted","Data":"f2ed7126396e5e42c06d0c0e6d1293cdf4572ca59a1843ec31ead1f1d13d947e"} Nov 22 10:52:27 crc kubenswrapper[4772]: I1122 10:52:27.026187 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-9lvgv"] Nov 22 10:52:27 crc kubenswrapper[4772]: I1122 10:52:27.027011 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-9lvgv" Nov 22 10:52:27 crc kubenswrapper[4772]: I1122 10:52:27.027505 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-5lsvv"] Nov 22 10:52:27 crc kubenswrapper[4772]: I1122 10:52:27.059216 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-mdxp4"] Nov 22 10:52:27 crc kubenswrapper[4772]: I1122 10:52:27.062761 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-c6jft" Nov 22 10:52:27 crc kubenswrapper[4772]: I1122 10:52:27.109610 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/b41327fc-c766-4258-8252-da74057be64a-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-5lsvv\" (UID: \"b41327fc-c766-4258-8252-da74057be64a\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-5lsvv" Nov 22 10:52:27 crc kubenswrapper[4772]: I1122 10:52:27.109684 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/5a949a00-9ae8-4c80-a23b-e4627333061a-dbus-socket\") pod \"nmstate-handler-9lvgv\" (UID: \"5a949a00-9ae8-4c80-a23b-e4627333061a\") " pod="openshift-nmstate/nmstate-handler-9lvgv" Nov 22 10:52:27 crc kubenswrapper[4772]: I1122 10:52:27.109714 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/5a949a00-9ae8-4c80-a23b-e4627333061a-ovs-socket\") pod \"nmstate-handler-9lvgv\" (UID: \"5a949a00-9ae8-4c80-a23b-e4627333061a\") " pod="openshift-nmstate/nmstate-handler-9lvgv" Nov 22 10:52:27 crc kubenswrapper[4772]: I1122 10:52:27.109739 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2b5lh\" (UniqueName: \"kubernetes.io/projected/f7639cc0-dabf-4a96-90bc-dc4e3eda5f5b-kube-api-access-2b5lh\") pod \"nmstate-metrics-5dcf9c57c5-mdxp4\" (UID: \"f7639cc0-dabf-4a96-90bc-dc4e3eda5f5b\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-mdxp4" Nov 22 10:52:27 crc kubenswrapper[4772]: I1122 10:52:27.109770 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f995v\" (UniqueName: \"kubernetes.io/projected/5a949a00-9ae8-4c80-a23b-e4627333061a-kube-api-access-f995v\") pod \"nmstate-handler-9lvgv\" (UID: \"5a949a00-9ae8-4c80-a23b-e4627333061a\") " pod="openshift-nmstate/nmstate-handler-9lvgv" Nov 22 10:52:27 crc kubenswrapper[4772]: E1122 10:52:27.109915 4772 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Nov 22 10:52:27 crc kubenswrapper[4772]: E1122 10:52:27.109989 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b41327fc-c766-4258-8252-da74057be64a-tls-key-pair podName:b41327fc-c766-4258-8252-da74057be64a nodeName:}" failed. No retries permitted until 2025-11-22 10:52:27.609970359 +0000 UTC m=+867.849414853 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/b41327fc-c766-4258-8252-da74057be64a-tls-key-pair") pod "nmstate-webhook-6b89b748d8-5lsvv" (UID: "b41327fc-c766-4258-8252-da74057be64a") : secret "openshift-nmstate-webhook" not found Nov 22 10:52:27 crc kubenswrapper[4772]: I1122 10:52:27.110180 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/5a949a00-9ae8-4c80-a23b-e4627333061a-nmstate-lock\") pod \"nmstate-handler-9lvgv\" (UID: \"5a949a00-9ae8-4c80-a23b-e4627333061a\") " pod="openshift-nmstate/nmstate-handler-9lvgv" Nov 22 10:52:27 crc kubenswrapper[4772]: I1122 10:52:27.110366 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9lfm\" (UniqueName: \"kubernetes.io/projected/b41327fc-c766-4258-8252-da74057be64a-kube-api-access-h9lfm\") pod \"nmstate-webhook-6b89b748d8-5lsvv\" (UID: \"b41327fc-c766-4258-8252-da74057be64a\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-5lsvv" Nov 22 10:52:27 crc kubenswrapper[4772]: I1122 10:52:27.135062 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2b5lh\" (UniqueName: \"kubernetes.io/projected/f7639cc0-dabf-4a96-90bc-dc4e3eda5f5b-kube-api-access-2b5lh\") pod \"nmstate-metrics-5dcf9c57c5-mdxp4\" (UID: \"f7639cc0-dabf-4a96-90bc-dc4e3eda5f5b\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-mdxp4" Nov 22 10:52:27 crc kubenswrapper[4772]: I1122 10:52:27.136207 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-r7k6f"] Nov 22 10:52:27 crc kubenswrapper[4772]: I1122 10:52:27.136904 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-r7k6f" Nov 22 10:52:27 crc kubenswrapper[4772]: I1122 10:52:27.140991 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9lfm\" (UniqueName: \"kubernetes.io/projected/b41327fc-c766-4258-8252-da74057be64a-kube-api-access-h9lfm\") pod \"nmstate-webhook-6b89b748d8-5lsvv\" (UID: \"b41327fc-c766-4258-8252-da74057be64a\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-5lsvv" Nov 22 10:52:27 crc kubenswrapper[4772]: I1122 10:52:27.152665 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Nov 22 10:52:27 crc kubenswrapper[4772]: I1122 10:52:27.152976 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-d9t7m" Nov 22 10:52:27 crc kubenswrapper[4772]: I1122 10:52:27.153156 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Nov 22 10:52:27 crc kubenswrapper[4772]: I1122 10:52:27.157998 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-r7k6f"] Nov 22 10:52:27 crc kubenswrapper[4772]: I1122 10:52:27.212170 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/5a949a00-9ae8-4c80-a23b-e4627333061a-dbus-socket\") pod \"nmstate-handler-9lvgv\" (UID: \"5a949a00-9ae8-4c80-a23b-e4627333061a\") " pod="openshift-nmstate/nmstate-handler-9lvgv" Nov 22 10:52:27 crc kubenswrapper[4772]: I1122 10:52:27.212233 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/5a949a00-9ae8-4c80-a23b-e4627333061a-ovs-socket\") pod \"nmstate-handler-9lvgv\" (UID: \"5a949a00-9ae8-4c80-a23b-e4627333061a\") " pod="openshift-nmstate/nmstate-handler-9lvgv" Nov 22 10:52:27 crc kubenswrapper[4772]: I1122 10:52:27.212266 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/afb51de5-31d1-4cb9-8590-ab7a8a74360a-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-r7k6f\" (UID: \"afb51de5-31d1-4cb9-8590-ab7a8a74360a\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-r7k6f" Nov 22 10:52:27 crc kubenswrapper[4772]: I1122 10:52:27.212297 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f995v\" (UniqueName: \"kubernetes.io/projected/5a949a00-9ae8-4c80-a23b-e4627333061a-kube-api-access-f995v\") pod \"nmstate-handler-9lvgv\" (UID: \"5a949a00-9ae8-4c80-a23b-e4627333061a\") " pod="openshift-nmstate/nmstate-handler-9lvgv" Nov 22 10:52:27 crc kubenswrapper[4772]: I1122 10:52:27.212328 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/5a949a00-9ae8-4c80-a23b-e4627333061a-nmstate-lock\") pod \"nmstate-handler-9lvgv\" (UID: \"5a949a00-9ae8-4c80-a23b-e4627333061a\") " pod="openshift-nmstate/nmstate-handler-9lvgv" Nov 22 10:52:27 crc kubenswrapper[4772]: I1122 10:52:27.212342 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/5a949a00-9ae8-4c80-a23b-e4627333061a-ovs-socket\") pod \"nmstate-handler-9lvgv\" (UID: \"5a949a00-9ae8-4c80-a23b-e4627333061a\") " pod="openshift-nmstate/nmstate-handler-9lvgv" Nov 22 10:52:27 
crc kubenswrapper[4772]: I1122 10:52:27.212386 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qh57k\" (UniqueName: \"kubernetes.io/projected/afb51de5-31d1-4cb9-8590-ab7a8a74360a-kube-api-access-qh57k\") pod \"nmstate-console-plugin-5874bd7bc5-r7k6f\" (UID: \"afb51de5-31d1-4cb9-8590-ab7a8a74360a\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-r7k6f" Nov 22 10:52:27 crc kubenswrapper[4772]: I1122 10:52:27.212424 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/afb51de5-31d1-4cb9-8590-ab7a8a74360a-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-r7k6f\" (UID: \"afb51de5-31d1-4cb9-8590-ab7a8a74360a\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-r7k6f" Nov 22 10:52:27 crc kubenswrapper[4772]: I1122 10:52:27.212588 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/5a949a00-9ae8-4c80-a23b-e4627333061a-nmstate-lock\") pod \"nmstate-handler-9lvgv\" (UID: \"5a949a00-9ae8-4c80-a23b-e4627333061a\") " pod="openshift-nmstate/nmstate-handler-9lvgv" Nov 22 10:52:27 crc kubenswrapper[4772]: I1122 10:52:27.212653 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/5a949a00-9ae8-4c80-a23b-e4627333061a-dbus-socket\") pod \"nmstate-handler-9lvgv\" (UID: \"5a949a00-9ae8-4c80-a23b-e4627333061a\") " pod="openshift-nmstate/nmstate-handler-9lvgv" Nov 22 10:52:27 crc kubenswrapper[4772]: I1122 10:52:27.233844 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f995v\" (UniqueName: \"kubernetes.io/projected/5a949a00-9ae8-4c80-a23b-e4627333061a-kube-api-access-f995v\") pod \"nmstate-handler-9lvgv\" (UID: \"5a949a00-9ae8-4c80-a23b-e4627333061a\") " pod="openshift-nmstate/nmstate-handler-9lvgv" Nov 22 10:52:27 crc kubenswrapper[4772]: I1122 10:52:27.295713 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-mdxp4" Nov 22 10:52:27 crc kubenswrapper[4772]: I1122 10:52:27.313898 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/afb51de5-31d1-4cb9-8590-ab7a8a74360a-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-r7k6f\" (UID: \"afb51de5-31d1-4cb9-8590-ab7a8a74360a\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-r7k6f" Nov 22 10:52:27 crc kubenswrapper[4772]: I1122 10:52:27.313969 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qh57k\" (UniqueName: \"kubernetes.io/projected/afb51de5-31d1-4cb9-8590-ab7a8a74360a-kube-api-access-qh57k\") pod \"nmstate-console-plugin-5874bd7bc5-r7k6f\" (UID: \"afb51de5-31d1-4cb9-8590-ab7a8a74360a\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-r7k6f" Nov 22 10:52:27 crc kubenswrapper[4772]: I1122 10:52:27.314001 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/afb51de5-31d1-4cb9-8590-ab7a8a74360a-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-r7k6f\" (UID: \"afb51de5-31d1-4cb9-8590-ab7a8a74360a\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-r7k6f" Nov 22 10:52:27 crc kubenswrapper[4772]: I1122 10:52:27.314802 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/afb51de5-31d1-4cb9-8590-ab7a8a74360a-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-r7k6f\" (UID: \"afb51de5-31d1-4cb9-8590-ab7a8a74360a\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-r7k6f" Nov 22 10:52:27 crc kubenswrapper[4772]: E1122 10:52:27.314880 4772 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Nov 22 10:52:27 crc kubenswrapper[4772]: E1122 10:52:27.314923 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/afb51de5-31d1-4cb9-8590-ab7a8a74360a-plugin-serving-cert podName:afb51de5-31d1-4cb9-8590-ab7a8a74360a nodeName:}" failed. No retries permitted until 2025-11-22 10:52:27.814910082 +0000 UTC m=+868.054354576 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/afb51de5-31d1-4cb9-8590-ab7a8a74360a-plugin-serving-cert") pod "nmstate-console-plugin-5874bd7bc5-r7k6f" (UID: "afb51de5-31d1-4cb9-8590-ab7a8a74360a") : secret "plugin-serving-cert" not found Nov 22 10:52:27 crc kubenswrapper[4772]: I1122 10:52:27.347379 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-9lvgv" Nov 22 10:52:27 crc kubenswrapper[4772]: I1122 10:52:27.348916 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qh57k\" (UniqueName: \"kubernetes.io/projected/afb51de5-31d1-4cb9-8590-ab7a8a74360a-kube-api-access-qh57k\") pod \"nmstate-console-plugin-5874bd7bc5-r7k6f\" (UID: \"afb51de5-31d1-4cb9-8590-ab7a8a74360a\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-r7k6f" Nov 22 10:52:27 crc kubenswrapper[4772]: I1122 10:52:27.352808 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-657b696dd9-fqg5w"] Nov 22 10:52:27 crc kubenswrapper[4772]: I1122 10:52:27.353487 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-657b696dd9-fqg5w" Nov 22 10:52:27 crc kubenswrapper[4772]: I1122 10:52:27.370154 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-657b696dd9-fqg5w"] Nov 22 10:52:27 crc kubenswrapper[4772]: W1122 10:52:27.407482 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a949a00_9ae8_4c80_a23b_e4627333061a.slice/crio-594cd3dead106b0009ce3e39d7e04f65e3c197b5f303a6a15e2aac856b505426 WatchSource:0}: Error finding container 594cd3dead106b0009ce3e39d7e04f65e3c197b5f303a6a15e2aac856b505426: Status 404 returned error can't find the container with id 594cd3dead106b0009ce3e39d7e04f65e3c197b5f303a6a15e2aac856b505426 Nov 22 10:52:27 crc kubenswrapper[4772]: I1122 10:52:27.427652 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/39732bcc-2aa2-40da-86d0-fdd19efbb837-console-config\") pod \"console-657b696dd9-fqg5w\" (UID: \"39732bcc-2aa2-40da-86d0-fdd19efbb837\") " pod="openshift-console/console-657b696dd9-fqg5w" Nov 22 10:52:27 crc kubenswrapper[4772]: I1122 10:52:27.427708 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/39732bcc-2aa2-40da-86d0-fdd19efbb837-console-serving-cert\") pod \"console-657b696dd9-fqg5w\" (UID: \"39732bcc-2aa2-40da-86d0-fdd19efbb837\") " pod="openshift-console/console-657b696dd9-fqg5w" Nov 22 10:52:27 crc kubenswrapper[4772]: I1122 10:52:27.427749 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/39732bcc-2aa2-40da-86d0-fdd19efbb837-service-ca\") pod \"console-657b696dd9-fqg5w\" (UID: \"39732bcc-2aa2-40da-86d0-fdd19efbb837\") " pod="openshift-console/console-657b696dd9-fqg5w" Nov 22 10:52:27 crc kubenswrapper[4772]: I1122 10:52:27.427838 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8zv9\" (UniqueName: \"kubernetes.io/projected/39732bcc-2aa2-40da-86d0-fdd19efbb837-kube-api-access-r8zv9\") pod \"console-657b696dd9-fqg5w\" (UID: \"39732bcc-2aa2-40da-86d0-fdd19efbb837\") " pod="openshift-console/console-657b696dd9-fqg5w" Nov 22 10:52:27 crc kubenswrapper[4772]: I1122 10:52:27.427862 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/39732bcc-2aa2-40da-86d0-fdd19efbb837-oauth-serving-cert\") pod \"console-657b696dd9-fqg5w\" (UID: \"39732bcc-2aa2-40da-86d0-fdd19efbb837\") " pod="openshift-console/console-657b696dd9-fqg5w" Nov 22 10:52:27 crc kubenswrapper[4772]: I1122 10:52:27.427875 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/39732bcc-2aa2-40da-86d0-fdd19efbb837-console-oauth-config\") pod \"console-657b696dd9-fqg5w\" (UID: \"39732bcc-2aa2-40da-86d0-fdd19efbb837\") " pod="openshift-console/console-657b696dd9-fqg5w" Nov 22 10:52:27 crc kubenswrapper[4772]: I1122 10:52:27.427888 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39732bcc-2aa2-40da-86d0-fdd19efbb837-trusted-ca-bundle\") pod 
\"console-657b696dd9-fqg5w\" (UID: \"39732bcc-2aa2-40da-86d0-fdd19efbb837\") " pod="openshift-console/console-657b696dd9-fqg5w" Nov 22 10:52:27 crc kubenswrapper[4772]: I1122 10:52:27.529661 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8zv9\" (UniqueName: \"kubernetes.io/projected/39732bcc-2aa2-40da-86d0-fdd19efbb837-kube-api-access-r8zv9\") pod \"console-657b696dd9-fqg5w\" (UID: \"39732bcc-2aa2-40da-86d0-fdd19efbb837\") " pod="openshift-console/console-657b696dd9-fqg5w" Nov 22 10:52:27 crc kubenswrapper[4772]: I1122 10:52:27.529734 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/39732bcc-2aa2-40da-86d0-fdd19efbb837-console-oauth-config\") pod \"console-657b696dd9-fqg5w\" (UID: \"39732bcc-2aa2-40da-86d0-fdd19efbb837\") " pod="openshift-console/console-657b696dd9-fqg5w" Nov 22 10:52:27 crc kubenswrapper[4772]: I1122 10:52:27.529752 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39732bcc-2aa2-40da-86d0-fdd19efbb837-trusted-ca-bundle\") pod \"console-657b696dd9-fqg5w\" (UID: \"39732bcc-2aa2-40da-86d0-fdd19efbb837\") " pod="openshift-console/console-657b696dd9-fqg5w" Nov 22 10:52:27 crc kubenswrapper[4772]: I1122 10:52:27.529767 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/39732bcc-2aa2-40da-86d0-fdd19efbb837-oauth-serving-cert\") pod \"console-657b696dd9-fqg5w\" (UID: \"39732bcc-2aa2-40da-86d0-fdd19efbb837\") " pod="openshift-console/console-657b696dd9-fqg5w" Nov 22 10:52:27 crc kubenswrapper[4772]: I1122 10:52:27.529813 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/39732bcc-2aa2-40da-86d0-fdd19efbb837-console-config\") pod \"console-657b696dd9-fqg5w\" (UID: \"39732bcc-2aa2-40da-86d0-fdd19efbb837\") " pod="openshift-console/console-657b696dd9-fqg5w" Nov 22 10:52:27 crc kubenswrapper[4772]: I1122 10:52:27.529841 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/39732bcc-2aa2-40da-86d0-fdd19efbb837-console-serving-cert\") pod \"console-657b696dd9-fqg5w\" (UID: \"39732bcc-2aa2-40da-86d0-fdd19efbb837\") " pod="openshift-console/console-657b696dd9-fqg5w" Nov 22 10:52:27 crc kubenswrapper[4772]: I1122 10:52:27.529892 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/39732bcc-2aa2-40da-86d0-fdd19efbb837-service-ca\") pod \"console-657b696dd9-fqg5w\" (UID: \"39732bcc-2aa2-40da-86d0-fdd19efbb837\") " pod="openshift-console/console-657b696dd9-fqg5w" Nov 22 10:52:27 crc kubenswrapper[4772]: I1122 10:52:27.530857 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/39732bcc-2aa2-40da-86d0-fdd19efbb837-service-ca\") pod \"console-657b696dd9-fqg5w\" (UID: \"39732bcc-2aa2-40da-86d0-fdd19efbb837\") " pod="openshift-console/console-657b696dd9-fqg5w" Nov 22 10:52:27 crc kubenswrapper[4772]: I1122 10:52:27.530979 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/39732bcc-2aa2-40da-86d0-fdd19efbb837-console-config\") pod \"console-657b696dd9-fqg5w\" (UID: 
\"39732bcc-2aa2-40da-86d0-fdd19efbb837\") " pod="openshift-console/console-657b696dd9-fqg5w" Nov 22 10:52:27 crc kubenswrapper[4772]: I1122 10:52:27.531290 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/39732bcc-2aa2-40da-86d0-fdd19efbb837-oauth-serving-cert\") pod \"console-657b696dd9-fqg5w\" (UID: \"39732bcc-2aa2-40da-86d0-fdd19efbb837\") " pod="openshift-console/console-657b696dd9-fqg5w" Nov 22 10:52:27 crc kubenswrapper[4772]: I1122 10:52:27.531859 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39732bcc-2aa2-40da-86d0-fdd19efbb837-trusted-ca-bundle\") pod \"console-657b696dd9-fqg5w\" (UID: \"39732bcc-2aa2-40da-86d0-fdd19efbb837\") " pod="openshift-console/console-657b696dd9-fqg5w" Nov 22 10:52:27 crc kubenswrapper[4772]: I1122 10:52:27.537553 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/39732bcc-2aa2-40da-86d0-fdd19efbb837-console-serving-cert\") pod \"console-657b696dd9-fqg5w\" (UID: \"39732bcc-2aa2-40da-86d0-fdd19efbb837\") " pod="openshift-console/console-657b696dd9-fqg5w" Nov 22 10:52:27 crc kubenswrapper[4772]: I1122 10:52:27.539590 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/39732bcc-2aa2-40da-86d0-fdd19efbb837-console-oauth-config\") pod \"console-657b696dd9-fqg5w\" (UID: \"39732bcc-2aa2-40da-86d0-fdd19efbb837\") " pod="openshift-console/console-657b696dd9-fqg5w" Nov 22 10:52:27 crc kubenswrapper[4772]: I1122 10:52:27.547350 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8zv9\" (UniqueName: \"kubernetes.io/projected/39732bcc-2aa2-40da-86d0-fdd19efbb837-kube-api-access-r8zv9\") pod \"console-657b696dd9-fqg5w\" (UID: \"39732bcc-2aa2-40da-86d0-fdd19efbb837\") " pod="openshift-console/console-657b696dd9-fqg5w" Nov 22 10:52:27 crc kubenswrapper[4772]: I1122 10:52:27.631107 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/b41327fc-c766-4258-8252-da74057be64a-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-5lsvv\" (UID: \"b41327fc-c766-4258-8252-da74057be64a\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-5lsvv" Nov 22 10:52:27 crc kubenswrapper[4772]: I1122 10:52:27.634285 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/b41327fc-c766-4258-8252-da74057be64a-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-5lsvv\" (UID: \"b41327fc-c766-4258-8252-da74057be64a\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-5lsvv" Nov 22 10:52:27 crc kubenswrapper[4772]: I1122 10:52:27.709799 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-657b696dd9-fqg5w" Nov 22 10:52:27 crc kubenswrapper[4772]: I1122 10:52:27.725153 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-mdxp4"] Nov 22 10:52:27 crc kubenswrapper[4772]: W1122 10:52:27.730399 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7639cc0_dabf_4a96_90bc_dc4e3eda5f5b.slice/crio-a7643409adb9d51c1adb4d70175d4bcdcadffe4b84732746d8c9d092609b18ab WatchSource:0}: Error finding container a7643409adb9d51c1adb4d70175d4bcdcadffe4b84732746d8c9d092609b18ab: Status 404 returned error can't find the container with id a7643409adb9d51c1adb4d70175d4bcdcadffe4b84732746d8c9d092609b18ab Nov 22 10:52:27 crc kubenswrapper[4772]: I1122 10:52:27.833630 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/afb51de5-31d1-4cb9-8590-ab7a8a74360a-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-r7k6f\" (UID: \"afb51de5-31d1-4cb9-8590-ab7a8a74360a\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-r7k6f" Nov 22 10:52:27 crc kubenswrapper[4772]: I1122 10:52:27.837591 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/afb51de5-31d1-4cb9-8590-ab7a8a74360a-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-r7k6f\" (UID: \"afb51de5-31d1-4cb9-8590-ab7a8a74360a\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-r7k6f" Nov 22 10:52:27 crc kubenswrapper[4772]: I1122 10:52:27.902720 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-5lsvv" Nov 22 10:52:28 crc kubenswrapper[4772]: I1122 10:52:28.019983 4772 generic.go:334] "Generic (PLEG): container finished" podID="1a50ad9a-4d55-4bf4-88f7-8c3595712616" containerID="fabddf87e7eea777ef1134d75c187a242b89f0f96021f5c2729f474ea31b57a1" exitCode=0 Nov 22 10:52:28 crc kubenswrapper[4772]: I1122 10:52:28.020652 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vsgxf" event={"ID":"1a50ad9a-4d55-4bf4-88f7-8c3595712616","Type":"ContainerDied","Data":"fabddf87e7eea777ef1134d75c187a242b89f0f96021f5c2729f474ea31b57a1"} Nov 22 10:52:28 crc kubenswrapper[4772]: I1122 10:52:28.022815 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-mdxp4" event={"ID":"f7639cc0-dabf-4a96-90bc-dc4e3eda5f5b","Type":"ContainerStarted","Data":"a7643409adb9d51c1adb4d70175d4bcdcadffe4b84732746d8c9d092609b18ab"} Nov 22 10:52:28 crc kubenswrapper[4772]: I1122 10:52:28.024927 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-9lvgv" event={"ID":"5a949a00-9ae8-4c80-a23b-e4627333061a","Type":"ContainerStarted","Data":"594cd3dead106b0009ce3e39d7e04f65e3c197b5f303a6a15e2aac856b505426"} Nov 22 10:52:28 crc kubenswrapper[4772]: I1122 10:52:28.069609 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-r7k6f" Nov 22 10:52:28 crc kubenswrapper[4772]: I1122 10:52:28.086241 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-657b696dd9-fqg5w"] Nov 22 10:52:28 crc kubenswrapper[4772]: W1122 10:52:28.094192 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39732bcc_2aa2_40da_86d0_fdd19efbb837.slice/crio-906bf74d38f319889dce6b4b23705e772ae50499f9e9f785581114275d2215ce WatchSource:0}: Error finding container 906bf74d38f319889dce6b4b23705e772ae50499f9e9f785581114275d2215ce: Status 404 returned error can't find the container with id 906bf74d38f319889dce6b4b23705e772ae50499f9e9f785581114275d2215ce Nov 22 10:52:28 crc kubenswrapper[4772]: I1122 10:52:28.342799 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-5lsvv"] Nov 22 10:52:28 crc kubenswrapper[4772]: W1122 10:52:28.351790 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb41327fc_c766_4258_8252_da74057be64a.slice/crio-ddbb792c3b44d5fe042d4fbaf589f84bb14e8bb5d13366b41c99f9fd27614666 WatchSource:0}: Error finding container ddbb792c3b44d5fe042d4fbaf589f84bb14e8bb5d13366b41c99f9fd27614666: Status 404 returned error can't find the container with id ddbb792c3b44d5fe042d4fbaf589f84bb14e8bb5d13366b41c99f9fd27614666 Nov 22 10:52:28 crc kubenswrapper[4772]: I1122 10:52:28.453027 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-r7k6f"] Nov 22 10:52:28 crc kubenswrapper[4772]: W1122 10:52:28.463390 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podafb51de5_31d1_4cb9_8590_ab7a8a74360a.slice/crio-2c98eccbc2e0188d77d60f78c301ad73052878e63d3c47d85f66589a107904b3 WatchSource:0}: Error finding container 2c98eccbc2e0188d77d60f78c301ad73052878e63d3c47d85f66589a107904b3: Status 404 returned error can't find the container with id 2c98eccbc2e0188d77d60f78c301ad73052878e63d3c47d85f66589a107904b3 Nov 22 10:52:29 crc kubenswrapper[4772]: I1122 10:52:29.034864 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-657b696dd9-fqg5w" event={"ID":"39732bcc-2aa2-40da-86d0-fdd19efbb837","Type":"ContainerStarted","Data":"1c74daed5de1f0901244ee58955a8455d61c458c038257e03fea9db74ce988c3"} Nov 22 10:52:29 crc kubenswrapper[4772]: I1122 10:52:29.035342 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-657b696dd9-fqg5w" event={"ID":"39732bcc-2aa2-40da-86d0-fdd19efbb837","Type":"ContainerStarted","Data":"906bf74d38f319889dce6b4b23705e772ae50499f9e9f785581114275d2215ce"} Nov 22 10:52:29 crc kubenswrapper[4772]: I1122 10:52:29.035996 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-r7k6f" event={"ID":"afb51de5-31d1-4cb9-8590-ab7a8a74360a","Type":"ContainerStarted","Data":"2c98eccbc2e0188d77d60f78c301ad73052878e63d3c47d85f66589a107904b3"} Nov 22 10:52:29 crc kubenswrapper[4772]: I1122 10:52:29.037120 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-5lsvv" event={"ID":"b41327fc-c766-4258-8252-da74057be64a","Type":"ContainerStarted","Data":"ddbb792c3b44d5fe042d4fbaf589f84bb14e8bb5d13366b41c99f9fd27614666"} Nov 22 10:52:29 crc 
kubenswrapper[4772]: I1122 10:52:29.058191 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-657b696dd9-fqg5w" podStartSLOduration=2.058172945 podStartE2EDuration="2.058172945s" podCreationTimestamp="2025-11-22 10:52:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:52:29.055725822 +0000 UTC m=+869.295170316" watchObservedRunningTime="2025-11-22 10:52:29.058172945 +0000 UTC m=+869.297617439" Nov 22 10:52:29 crc kubenswrapper[4772]: I1122 10:52:29.640675 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-c6jft"] Nov 22 10:52:29 crc kubenswrapper[4772]: I1122 10:52:29.640909 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-c6jft" podUID="6531269d-299f-4528-a40a-42976e6fc55e" containerName="registry-server" containerID="cri-o://11e09d32af2ef35b6af7304996479aa95eacd70f46584876db7e43e8eae8774b" gracePeriod=2 Nov 22 10:52:30 crc kubenswrapper[4772]: I1122 10:52:30.029295 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c6jft" Nov 22 10:52:30 crc kubenswrapper[4772]: I1122 10:52:30.057440 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-5lsvv" event={"ID":"b41327fc-c766-4258-8252-da74057be64a","Type":"ContainerStarted","Data":"e64c07ef3777c3148570199b71710b91d5af5e5f0d8096b9a71ec2c5ac60369a"} Nov 22 10:52:30 crc kubenswrapper[4772]: I1122 10:52:30.057586 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-5lsvv" Nov 22 10:52:30 crc kubenswrapper[4772]: I1122 10:52:30.065939 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vsgxf" event={"ID":"1a50ad9a-4d55-4bf4-88f7-8c3595712616","Type":"ContainerStarted","Data":"d8e0c7512ecd03d2a77348448c72e3abeba7f6dceaca66a22c2de30e48e1114b"} Nov 22 10:52:30 crc kubenswrapper[4772]: I1122 10:52:30.068510 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6531269d-299f-4528-a40a-42976e6fc55e-utilities\") pod \"6531269d-299f-4528-a40a-42976e6fc55e\" (UID: \"6531269d-299f-4528-a40a-42976e6fc55e\") " Nov 22 10:52:30 crc kubenswrapper[4772]: I1122 10:52:30.068589 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrksx\" (UniqueName: \"kubernetes.io/projected/6531269d-299f-4528-a40a-42976e6fc55e-kube-api-access-nrksx\") pod \"6531269d-299f-4528-a40a-42976e6fc55e\" (UID: \"6531269d-299f-4528-a40a-42976e6fc55e\") " Nov 22 10:52:30 crc kubenswrapper[4772]: I1122 10:52:30.068640 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6531269d-299f-4528-a40a-42976e6fc55e-catalog-content\") pod \"6531269d-299f-4528-a40a-42976e6fc55e\" (UID: \"6531269d-299f-4528-a40a-42976e6fc55e\") " Nov 22 10:52:30 crc kubenswrapper[4772]: I1122 10:52:30.069644 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6531269d-299f-4528-a40a-42976e6fc55e-utilities" (OuterVolumeSpecName: "utilities") pod "6531269d-299f-4528-a40a-42976e6fc55e" (UID: "6531269d-299f-4528-a40a-42976e6fc55e"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 10:52:30 crc kubenswrapper[4772]: I1122 10:52:30.072306 4772 generic.go:334] "Generic (PLEG): container finished" podID="6531269d-299f-4528-a40a-42976e6fc55e" containerID="11e09d32af2ef35b6af7304996479aa95eacd70f46584876db7e43e8eae8774b" exitCode=0 Nov 22 10:52:30 crc kubenswrapper[4772]: I1122 10:52:30.072450 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c6jft" event={"ID":"6531269d-299f-4528-a40a-42976e6fc55e","Type":"ContainerDied","Data":"11e09d32af2ef35b6af7304996479aa95eacd70f46584876db7e43e8eae8774b"} Nov 22 10:52:30 crc kubenswrapper[4772]: I1122 10:52:30.072491 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c6jft" event={"ID":"6531269d-299f-4528-a40a-42976e6fc55e","Type":"ContainerDied","Data":"1e98424e2e0f7fb63970c3d2ae3bc4ea9bd354a2cd79fc7315c30eae39dccb6f"} Nov 22 10:52:30 crc kubenswrapper[4772]: I1122 10:52:30.072534 4772 scope.go:117] "RemoveContainer" containerID="11e09d32af2ef35b6af7304996479aa95eacd70f46584876db7e43e8eae8774b" Nov 22 10:52:30 crc kubenswrapper[4772]: I1122 10:52:30.072765 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c6jft" Nov 22 10:52:30 crc kubenswrapper[4772]: I1122 10:52:30.080204 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6531269d-299f-4528-a40a-42976e6fc55e-kube-api-access-nrksx" (OuterVolumeSpecName: "kube-api-access-nrksx") pod "6531269d-299f-4528-a40a-42976e6fc55e" (UID: "6531269d-299f-4528-a40a-42976e6fc55e"). InnerVolumeSpecName "kube-api-access-nrksx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:52:30 crc kubenswrapper[4772]: I1122 10:52:30.082166 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-5lsvv" podStartSLOduration=2.685283005 podStartE2EDuration="4.082140784s" podCreationTimestamp="2025-11-22 10:52:26 +0000 UTC" firstStartedPulling="2025-11-22 10:52:28.354143742 +0000 UTC m=+868.593588236" lastFinishedPulling="2025-11-22 10:52:29.751001521 +0000 UTC m=+869.990446015" observedRunningTime="2025-11-22 10:52:30.080623085 +0000 UTC m=+870.320067589" watchObservedRunningTime="2025-11-22 10:52:30.082140784 +0000 UTC m=+870.321585278" Nov 22 10:52:30 crc kubenswrapper[4772]: I1122 10:52:30.082724 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-mdxp4" event={"ID":"f7639cc0-dabf-4a96-90bc-dc4e3eda5f5b","Type":"ContainerStarted","Data":"25b9b3e5b93218e0c1b167a8f60be3bd77f3b52eac9b873048bed68920b57a04"} Nov 22 10:52:30 crc kubenswrapper[4772]: I1122 10:52:30.114178 4772 scope.go:117] "RemoveContainer" containerID="5a929927164088d2c27e7a0d23f81da74d3d86972d8ffbfe3ee12540bb8ee1b1" Nov 22 10:52:30 crc kubenswrapper[4772]: I1122 10:52:30.151161 4772 scope.go:117] "RemoveContainer" containerID="6efe52b6cc1ab5bd427d99ffa7f5c3e7b9ac11b25f879f1e9ed3ee7f36033796" Nov 22 10:52:30 crc kubenswrapper[4772]: I1122 10:52:30.170367 4772 scope.go:117] "RemoveContainer" containerID="11e09d32af2ef35b6af7304996479aa95eacd70f46584876db7e43e8eae8774b" Nov 22 10:52:30 crc kubenswrapper[4772]: I1122 10:52:30.171116 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrksx\" (UniqueName: 
\"kubernetes.io/projected/6531269d-299f-4528-a40a-42976e6fc55e-kube-api-access-nrksx\") on node \"crc\" DevicePath \"\"" Nov 22 10:52:30 crc kubenswrapper[4772]: I1122 10:52:30.171146 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6531269d-299f-4528-a40a-42976e6fc55e-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 10:52:30 crc kubenswrapper[4772]: E1122 10:52:30.173308 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11e09d32af2ef35b6af7304996479aa95eacd70f46584876db7e43e8eae8774b\": container with ID starting with 11e09d32af2ef35b6af7304996479aa95eacd70f46584876db7e43e8eae8774b not found: ID does not exist" containerID="11e09d32af2ef35b6af7304996479aa95eacd70f46584876db7e43e8eae8774b" Nov 22 10:52:30 crc kubenswrapper[4772]: I1122 10:52:30.173367 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11e09d32af2ef35b6af7304996479aa95eacd70f46584876db7e43e8eae8774b"} err="failed to get container status \"11e09d32af2ef35b6af7304996479aa95eacd70f46584876db7e43e8eae8774b\": rpc error: code = NotFound desc = could not find container \"11e09d32af2ef35b6af7304996479aa95eacd70f46584876db7e43e8eae8774b\": container with ID starting with 11e09d32af2ef35b6af7304996479aa95eacd70f46584876db7e43e8eae8774b not found: ID does not exist" Nov 22 10:52:30 crc kubenswrapper[4772]: I1122 10:52:30.173400 4772 scope.go:117] "RemoveContainer" containerID="5a929927164088d2c27e7a0d23f81da74d3d86972d8ffbfe3ee12540bb8ee1b1" Nov 22 10:52:30 crc kubenswrapper[4772]: E1122 10:52:30.173907 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a929927164088d2c27e7a0d23f81da74d3d86972d8ffbfe3ee12540bb8ee1b1\": container with ID starting with 5a929927164088d2c27e7a0d23f81da74d3d86972d8ffbfe3ee12540bb8ee1b1 not found: ID does not exist" containerID="5a929927164088d2c27e7a0d23f81da74d3d86972d8ffbfe3ee12540bb8ee1b1" Nov 22 10:52:30 crc kubenswrapper[4772]: I1122 10:52:30.173957 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a929927164088d2c27e7a0d23f81da74d3d86972d8ffbfe3ee12540bb8ee1b1"} err="failed to get container status \"5a929927164088d2c27e7a0d23f81da74d3d86972d8ffbfe3ee12540bb8ee1b1\": rpc error: code = NotFound desc = could not find container \"5a929927164088d2c27e7a0d23f81da74d3d86972d8ffbfe3ee12540bb8ee1b1\": container with ID starting with 5a929927164088d2c27e7a0d23f81da74d3d86972d8ffbfe3ee12540bb8ee1b1 not found: ID does not exist" Nov 22 10:52:30 crc kubenswrapper[4772]: I1122 10:52:30.173976 4772 scope.go:117] "RemoveContainer" containerID="6efe52b6cc1ab5bd427d99ffa7f5c3e7b9ac11b25f879f1e9ed3ee7f36033796" Nov 22 10:52:30 crc kubenswrapper[4772]: E1122 10:52:30.174556 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6efe52b6cc1ab5bd427d99ffa7f5c3e7b9ac11b25f879f1e9ed3ee7f36033796\": container with ID starting with 6efe52b6cc1ab5bd427d99ffa7f5c3e7b9ac11b25f879f1e9ed3ee7f36033796 not found: ID does not exist" containerID="6efe52b6cc1ab5bd427d99ffa7f5c3e7b9ac11b25f879f1e9ed3ee7f36033796" Nov 22 10:52:30 crc kubenswrapper[4772]: I1122 10:52:30.174581 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6efe52b6cc1ab5bd427d99ffa7f5c3e7b9ac11b25f879f1e9ed3ee7f36033796"} 
err="failed to get container status \"6efe52b6cc1ab5bd427d99ffa7f5c3e7b9ac11b25f879f1e9ed3ee7f36033796\": rpc error: code = NotFound desc = could not find container \"6efe52b6cc1ab5bd427d99ffa7f5c3e7b9ac11b25f879f1e9ed3ee7f36033796\": container with ID starting with 6efe52b6cc1ab5bd427d99ffa7f5c3e7b9ac11b25f879f1e9ed3ee7f36033796 not found: ID does not exist" Nov 22 10:52:30 crc kubenswrapper[4772]: I1122 10:52:30.191530 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6531269d-299f-4528-a40a-42976e6fc55e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6531269d-299f-4528-a40a-42976e6fc55e" (UID: "6531269d-299f-4528-a40a-42976e6fc55e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 10:52:30 crc kubenswrapper[4772]: I1122 10:52:30.272073 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6531269d-299f-4528-a40a-42976e6fc55e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 10:52:30 crc kubenswrapper[4772]: I1122 10:52:30.398286 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-c6jft"] Nov 22 10:52:30 crc kubenswrapper[4772]: I1122 10:52:30.403353 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-c6jft"] Nov 22 10:52:31 crc kubenswrapper[4772]: I1122 10:52:31.093288 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-9lvgv" event={"ID":"5a949a00-9ae8-4c80-a23b-e4627333061a","Type":"ContainerStarted","Data":"c26118afe290538e01426d717acfa33b44385348a3117d050fde28d56d0b19b8"} Nov 22 10:52:31 crc kubenswrapper[4772]: I1122 10:52:31.093657 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-9lvgv" Nov 22 10:52:31 crc kubenswrapper[4772]: I1122 10:52:31.097651 4772 generic.go:334] "Generic (PLEG): container finished" podID="1a50ad9a-4d55-4bf4-88f7-8c3595712616" containerID="d8e0c7512ecd03d2a77348448c72e3abeba7f6dceaca66a22c2de30e48e1114b" exitCode=0 Nov 22 10:52:31 crc kubenswrapper[4772]: I1122 10:52:31.097743 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vsgxf" event={"ID":"1a50ad9a-4d55-4bf4-88f7-8c3595712616","Type":"ContainerDied","Data":"d8e0c7512ecd03d2a77348448c72e3abeba7f6dceaca66a22c2de30e48e1114b"} Nov 22 10:52:31 crc kubenswrapper[4772]: I1122 10:52:31.103351 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-r7k6f" event={"ID":"afb51de5-31d1-4cb9-8590-ab7a8a74360a","Type":"ContainerStarted","Data":"dafe45e984374c8ebdb98c4d57377e6e211245e94505c91de3962e650abf7123"} Nov 22 10:52:31 crc kubenswrapper[4772]: I1122 10:52:31.127145 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-9lvgv" podStartSLOduration=2.787930689 podStartE2EDuration="5.127120479s" podCreationTimestamp="2025-11-22 10:52:26 +0000 UTC" firstStartedPulling="2025-11-22 10:52:27.410268342 +0000 UTC m=+867.649712836" lastFinishedPulling="2025-11-22 10:52:29.749458132 +0000 UTC m=+869.988902626" observedRunningTime="2025-11-22 10:52:31.110658468 +0000 UTC m=+871.350102982" watchObservedRunningTime="2025-11-22 10:52:31.127120479 +0000 UTC m=+871.366564983" Nov 22 10:52:31 crc kubenswrapper[4772]: I1122 10:52:31.137120 4772 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-r7k6f" podStartSLOduration=1.679313381 podStartE2EDuration="4.137101984s" podCreationTimestamp="2025-11-22 10:52:27 +0000 UTC" firstStartedPulling="2025-11-22 10:52:28.465799398 +0000 UTC m=+868.705243892" lastFinishedPulling="2025-11-22 10:52:30.923588001 +0000 UTC m=+871.163032495" observedRunningTime="2025-11-22 10:52:31.125353114 +0000 UTC m=+871.364797618" watchObservedRunningTime="2025-11-22 10:52:31.137101984 +0000 UTC m=+871.376546478" Nov 22 10:52:31 crc kubenswrapper[4772]: I1122 10:52:31.420576 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6531269d-299f-4528-a40a-42976e6fc55e" path="/var/lib/kubelet/pods/6531269d-299f-4528-a40a-42976e6fc55e/volumes" Nov 22 10:52:32 crc kubenswrapper[4772]: I1122 10:52:32.109905 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vsgxf" event={"ID":"1a50ad9a-4d55-4bf4-88f7-8c3595712616","Type":"ContainerStarted","Data":"b442ada30a5de1f5ba3d097adb4b4e5a0f1bdb93f8fc0cff4c7e18d2af2bdfbf"} Nov 22 10:52:32 crc kubenswrapper[4772]: I1122 10:52:32.112623 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-mdxp4" event={"ID":"f7639cc0-dabf-4a96-90bc-dc4e3eda5f5b","Type":"ContainerStarted","Data":"3fe0077be3208034a68773491314ecf140654fbaed9120bc37f9acbe46d7e369"} Nov 22 10:52:32 crc kubenswrapper[4772]: I1122 10:52:32.128972 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vsgxf" podStartSLOduration=2.238235236 podStartE2EDuration="6.128953022s" podCreationTimestamp="2025-11-22 10:52:26 +0000 UTC" firstStartedPulling="2025-11-22 10:52:28.021778348 +0000 UTC m=+868.261222842" lastFinishedPulling="2025-11-22 10:52:31.912496134 +0000 UTC m=+872.151940628" observedRunningTime="2025-11-22 10:52:32.126707145 +0000 UTC m=+872.366151659" watchObservedRunningTime="2025-11-22 10:52:32.128953022 +0000 UTC m=+872.368397526" Nov 22 10:52:32 crc kubenswrapper[4772]: I1122 10:52:32.143033 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-mdxp4" podStartSLOduration=1.962459771 podStartE2EDuration="6.143012192s" podCreationTimestamp="2025-11-22 10:52:26 +0000 UTC" firstStartedPulling="2025-11-22 10:52:27.732943078 +0000 UTC m=+867.972387572" lastFinishedPulling="2025-11-22 10:52:31.913495499 +0000 UTC m=+872.152939993" observedRunningTime="2025-11-22 10:52:32.141788 +0000 UTC m=+872.381232534" watchObservedRunningTime="2025-11-22 10:52:32.143012192 +0000 UTC m=+872.382456686" Nov 22 10:52:36 crc kubenswrapper[4772]: I1122 10:52:36.595102 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vsgxf" Nov 22 10:52:36 crc kubenswrapper[4772]: I1122 10:52:36.595186 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vsgxf" Nov 22 10:52:36 crc kubenswrapper[4772]: I1122 10:52:36.639713 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vsgxf" Nov 22 10:52:37 crc kubenswrapper[4772]: I1122 10:52:37.171530 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vsgxf" Nov 22 10:52:37 crc kubenswrapper[4772]: I1122 10:52:37.208229 4772 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vsgxf"] Nov 22 10:52:37 crc kubenswrapper[4772]: I1122 10:52:37.374003 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-9lvgv" Nov 22 10:52:37 crc kubenswrapper[4772]: I1122 10:52:37.711038 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-657b696dd9-fqg5w" Nov 22 10:52:37 crc kubenswrapper[4772]: I1122 10:52:37.712302 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-657b696dd9-fqg5w" Nov 22 10:52:37 crc kubenswrapper[4772]: I1122 10:52:37.716407 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-657b696dd9-fqg5w" Nov 22 10:52:38 crc kubenswrapper[4772]: I1122 10:52:38.147321 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-657b696dd9-fqg5w" Nov 22 10:52:38 crc kubenswrapper[4772]: I1122 10:52:38.202621 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-ld9hg"] Nov 22 10:52:39 crc kubenswrapper[4772]: I1122 10:52:39.147131 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vsgxf" podUID="1a50ad9a-4d55-4bf4-88f7-8c3595712616" containerName="registry-server" containerID="cri-o://b442ada30a5de1f5ba3d097adb4b4e5a0f1bdb93f8fc0cff4c7e18d2af2bdfbf" gracePeriod=2 Nov 22 10:52:39 crc kubenswrapper[4772]: I1122 10:52:39.521907 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vsgxf" Nov 22 10:52:39 crc kubenswrapper[4772]: I1122 10:52:39.625812 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a50ad9a-4d55-4bf4-88f7-8c3595712616-utilities\") pod \"1a50ad9a-4d55-4bf4-88f7-8c3595712616\" (UID: \"1a50ad9a-4d55-4bf4-88f7-8c3595712616\") " Nov 22 10:52:39 crc kubenswrapper[4772]: I1122 10:52:39.625883 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7h6lk\" (UniqueName: \"kubernetes.io/projected/1a50ad9a-4d55-4bf4-88f7-8c3595712616-kube-api-access-7h6lk\") pod \"1a50ad9a-4d55-4bf4-88f7-8c3595712616\" (UID: \"1a50ad9a-4d55-4bf4-88f7-8c3595712616\") " Nov 22 10:52:39 crc kubenswrapper[4772]: I1122 10:52:39.625985 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a50ad9a-4d55-4bf4-88f7-8c3595712616-catalog-content\") pod \"1a50ad9a-4d55-4bf4-88f7-8c3595712616\" (UID: \"1a50ad9a-4d55-4bf4-88f7-8c3595712616\") " Nov 22 10:52:39 crc kubenswrapper[4772]: I1122 10:52:39.626628 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a50ad9a-4d55-4bf4-88f7-8c3595712616-utilities" (OuterVolumeSpecName: "utilities") pod "1a50ad9a-4d55-4bf4-88f7-8c3595712616" (UID: "1a50ad9a-4d55-4bf4-88f7-8c3595712616"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 10:52:39 crc kubenswrapper[4772]: I1122 10:52:39.634408 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a50ad9a-4d55-4bf4-88f7-8c3595712616-kube-api-access-7h6lk" (OuterVolumeSpecName: "kube-api-access-7h6lk") pod "1a50ad9a-4d55-4bf4-88f7-8c3595712616" (UID: "1a50ad9a-4d55-4bf4-88f7-8c3595712616"). InnerVolumeSpecName "kube-api-access-7h6lk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:52:39 crc kubenswrapper[4772]: I1122 10:52:39.679551 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a50ad9a-4d55-4bf4-88f7-8c3595712616-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1a50ad9a-4d55-4bf4-88f7-8c3595712616" (UID: "1a50ad9a-4d55-4bf4-88f7-8c3595712616"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 10:52:39 crc kubenswrapper[4772]: I1122 10:52:39.727186 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a50ad9a-4d55-4bf4-88f7-8c3595712616-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 10:52:39 crc kubenswrapper[4772]: I1122 10:52:39.727221 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a50ad9a-4d55-4bf4-88f7-8c3595712616-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 10:52:39 crc kubenswrapper[4772]: I1122 10:52:39.727233 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7h6lk\" (UniqueName: \"kubernetes.io/projected/1a50ad9a-4d55-4bf4-88f7-8c3595712616-kube-api-access-7h6lk\") on node \"crc\" DevicePath \"\"" Nov 22 10:52:40 crc kubenswrapper[4772]: I1122 10:52:40.155690 4772 generic.go:334] "Generic (PLEG): container finished" podID="1a50ad9a-4d55-4bf4-88f7-8c3595712616" containerID="b442ada30a5de1f5ba3d097adb4b4e5a0f1bdb93f8fc0cff4c7e18d2af2bdfbf" exitCode=0 Nov 22 10:52:40 crc kubenswrapper[4772]: I1122 10:52:40.155958 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vsgxf" event={"ID":"1a50ad9a-4d55-4bf4-88f7-8c3595712616","Type":"ContainerDied","Data":"b442ada30a5de1f5ba3d097adb4b4e5a0f1bdb93f8fc0cff4c7e18d2af2bdfbf"} Nov 22 10:52:40 crc kubenswrapper[4772]: I1122 10:52:40.156024 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vsgxf" event={"ID":"1a50ad9a-4d55-4bf4-88f7-8c3595712616","Type":"ContainerDied","Data":"f2ed7126396e5e42c06d0c0e6d1293cdf4572ca59a1843ec31ead1f1d13d947e"} Nov 22 10:52:40 crc kubenswrapper[4772]: I1122 10:52:40.155976 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vsgxf" Nov 22 10:52:40 crc kubenswrapper[4772]: I1122 10:52:40.156082 4772 scope.go:117] "RemoveContainer" containerID="b442ada30a5de1f5ba3d097adb4b4e5a0f1bdb93f8fc0cff4c7e18d2af2bdfbf" Nov 22 10:52:40 crc kubenswrapper[4772]: I1122 10:52:40.170218 4772 scope.go:117] "RemoveContainer" containerID="d8e0c7512ecd03d2a77348448c72e3abeba7f6dceaca66a22c2de30e48e1114b" Nov 22 10:52:40 crc kubenswrapper[4772]: I1122 10:52:40.192315 4772 scope.go:117] "RemoveContainer" containerID="fabddf87e7eea777ef1134d75c187a242b89f0f96021f5c2729f474ea31b57a1" Nov 22 10:52:40 crc kubenswrapper[4772]: I1122 10:52:40.201835 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vsgxf"] Nov 22 10:52:40 crc kubenswrapper[4772]: I1122 10:52:40.207461 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vsgxf"] Nov 22 10:52:40 crc kubenswrapper[4772]: I1122 10:52:40.225532 4772 scope.go:117] "RemoveContainer" containerID="b442ada30a5de1f5ba3d097adb4b4e5a0f1bdb93f8fc0cff4c7e18d2af2bdfbf" Nov 22 10:52:40 crc kubenswrapper[4772]: E1122 10:52:40.226082 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b442ada30a5de1f5ba3d097adb4b4e5a0f1bdb93f8fc0cff4c7e18d2af2bdfbf\": container with ID starting with b442ada30a5de1f5ba3d097adb4b4e5a0f1bdb93f8fc0cff4c7e18d2af2bdfbf not found: ID does not exist" containerID="b442ada30a5de1f5ba3d097adb4b4e5a0f1bdb93f8fc0cff4c7e18d2af2bdfbf" Nov 22 10:52:40 crc kubenswrapper[4772]: I1122 10:52:40.226147 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b442ada30a5de1f5ba3d097adb4b4e5a0f1bdb93f8fc0cff4c7e18d2af2bdfbf"} err="failed to get container status \"b442ada30a5de1f5ba3d097adb4b4e5a0f1bdb93f8fc0cff4c7e18d2af2bdfbf\": rpc error: code = NotFound desc = could not find container \"b442ada30a5de1f5ba3d097adb4b4e5a0f1bdb93f8fc0cff4c7e18d2af2bdfbf\": container with ID starting with b442ada30a5de1f5ba3d097adb4b4e5a0f1bdb93f8fc0cff4c7e18d2af2bdfbf not found: ID does not exist" Nov 22 10:52:40 crc kubenswrapper[4772]: I1122 10:52:40.226170 4772 scope.go:117] "RemoveContainer" containerID="d8e0c7512ecd03d2a77348448c72e3abeba7f6dceaca66a22c2de30e48e1114b" Nov 22 10:52:40 crc kubenswrapper[4772]: E1122 10:52:40.226551 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8e0c7512ecd03d2a77348448c72e3abeba7f6dceaca66a22c2de30e48e1114b\": container with ID starting with d8e0c7512ecd03d2a77348448c72e3abeba7f6dceaca66a22c2de30e48e1114b not found: ID does not exist" containerID="d8e0c7512ecd03d2a77348448c72e3abeba7f6dceaca66a22c2de30e48e1114b" Nov 22 10:52:40 crc kubenswrapper[4772]: I1122 10:52:40.226601 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8e0c7512ecd03d2a77348448c72e3abeba7f6dceaca66a22c2de30e48e1114b"} err="failed to get container status \"d8e0c7512ecd03d2a77348448c72e3abeba7f6dceaca66a22c2de30e48e1114b\": rpc error: code = NotFound desc = could not find container \"d8e0c7512ecd03d2a77348448c72e3abeba7f6dceaca66a22c2de30e48e1114b\": container with ID starting with d8e0c7512ecd03d2a77348448c72e3abeba7f6dceaca66a22c2de30e48e1114b not found: ID does not exist" Nov 22 10:52:40 crc kubenswrapper[4772]: I1122 10:52:40.226635 4772 scope.go:117] "RemoveContainer" 
containerID="fabddf87e7eea777ef1134d75c187a242b89f0f96021f5c2729f474ea31b57a1" Nov 22 10:52:40 crc kubenswrapper[4772]: E1122 10:52:40.226986 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fabddf87e7eea777ef1134d75c187a242b89f0f96021f5c2729f474ea31b57a1\": container with ID starting with fabddf87e7eea777ef1134d75c187a242b89f0f96021f5c2729f474ea31b57a1 not found: ID does not exist" containerID="fabddf87e7eea777ef1134d75c187a242b89f0f96021f5c2729f474ea31b57a1" Nov 22 10:52:40 crc kubenswrapper[4772]: I1122 10:52:40.227011 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fabddf87e7eea777ef1134d75c187a242b89f0f96021f5c2729f474ea31b57a1"} err="failed to get container status \"fabddf87e7eea777ef1134d75c187a242b89f0f96021f5c2729f474ea31b57a1\": rpc error: code = NotFound desc = could not find container \"fabddf87e7eea777ef1134d75c187a242b89f0f96021f5c2729f474ea31b57a1\": container with ID starting with fabddf87e7eea777ef1134d75c187a242b89f0f96021f5c2729f474ea31b57a1 not found: ID does not exist" Nov 22 10:52:41 crc kubenswrapper[4772]: I1122 10:52:41.420418 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a50ad9a-4d55-4bf4-88f7-8c3595712616" path="/var/lib/kubelet/pods/1a50ad9a-4d55-4bf4-88f7-8c3595712616/volumes" Nov 22 10:52:47 crc kubenswrapper[4772]: I1122 10:52:47.909657 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-5lsvv" Nov 22 10:52:58 crc kubenswrapper[4772]: I1122 10:52:58.227109 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-d7x59"] Nov 22 10:52:58 crc kubenswrapper[4772]: E1122 10:52:58.227930 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a50ad9a-4d55-4bf4-88f7-8c3595712616" containerName="extract-content" Nov 22 10:52:58 crc kubenswrapper[4772]: I1122 10:52:58.227946 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a50ad9a-4d55-4bf4-88f7-8c3595712616" containerName="extract-content" Nov 22 10:52:58 crc kubenswrapper[4772]: E1122 10:52:58.227956 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a50ad9a-4d55-4bf4-88f7-8c3595712616" containerName="registry-server" Nov 22 10:52:58 crc kubenswrapper[4772]: I1122 10:52:58.227964 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a50ad9a-4d55-4bf4-88f7-8c3595712616" containerName="registry-server" Nov 22 10:52:58 crc kubenswrapper[4772]: E1122 10:52:58.227976 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6531269d-299f-4528-a40a-42976e6fc55e" containerName="registry-server" Nov 22 10:52:58 crc kubenswrapper[4772]: I1122 10:52:58.227983 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="6531269d-299f-4528-a40a-42976e6fc55e" containerName="registry-server" Nov 22 10:52:58 crc kubenswrapper[4772]: E1122 10:52:58.227998 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6531269d-299f-4528-a40a-42976e6fc55e" containerName="extract-utilities" Nov 22 10:52:58 crc kubenswrapper[4772]: I1122 10:52:58.228006 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="6531269d-299f-4528-a40a-42976e6fc55e" containerName="extract-utilities" Nov 22 10:52:58 crc kubenswrapper[4772]: E1122 10:52:58.228016 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a50ad9a-4d55-4bf4-88f7-8c3595712616" containerName="extract-utilities" Nov 
22 10:52:58 crc kubenswrapper[4772]: I1122 10:52:58.228023 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a50ad9a-4d55-4bf4-88f7-8c3595712616" containerName="extract-utilities" Nov 22 10:52:58 crc kubenswrapper[4772]: E1122 10:52:58.228034 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6531269d-299f-4528-a40a-42976e6fc55e" containerName="extract-content" Nov 22 10:52:58 crc kubenswrapper[4772]: I1122 10:52:58.228061 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="6531269d-299f-4528-a40a-42976e6fc55e" containerName="extract-content" Nov 22 10:52:58 crc kubenswrapper[4772]: I1122 10:52:58.228153 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a50ad9a-4d55-4bf4-88f7-8c3595712616" containerName="registry-server" Nov 22 10:52:58 crc kubenswrapper[4772]: I1122 10:52:58.228173 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="6531269d-299f-4528-a40a-42976e6fc55e" containerName="registry-server" Nov 22 10:52:58 crc kubenswrapper[4772]: I1122 10:52:58.228909 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d7x59" Nov 22 10:52:58 crc kubenswrapper[4772]: I1122 10:52:58.235501 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d7x59"] Nov 22 10:52:58 crc kubenswrapper[4772]: I1122 10:52:58.371324 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctjgg\" (UniqueName: \"kubernetes.io/projected/a58e9988-5e1e-4c49-884c-6d91fa8670d0-kube-api-access-ctjgg\") pod \"redhat-marketplace-d7x59\" (UID: \"a58e9988-5e1e-4c49-884c-6d91fa8670d0\") " pod="openshift-marketplace/redhat-marketplace-d7x59" Nov 22 10:52:58 crc kubenswrapper[4772]: I1122 10:52:58.371390 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a58e9988-5e1e-4c49-884c-6d91fa8670d0-catalog-content\") pod \"redhat-marketplace-d7x59\" (UID: \"a58e9988-5e1e-4c49-884c-6d91fa8670d0\") " pod="openshift-marketplace/redhat-marketplace-d7x59" Nov 22 10:52:58 crc kubenswrapper[4772]: I1122 10:52:58.371618 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a58e9988-5e1e-4c49-884c-6d91fa8670d0-utilities\") pod \"redhat-marketplace-d7x59\" (UID: \"a58e9988-5e1e-4c49-884c-6d91fa8670d0\") " pod="openshift-marketplace/redhat-marketplace-d7x59" Nov 22 10:52:58 crc kubenswrapper[4772]: I1122 10:52:58.472357 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a58e9988-5e1e-4c49-884c-6d91fa8670d0-catalog-content\") pod \"redhat-marketplace-d7x59\" (UID: \"a58e9988-5e1e-4c49-884c-6d91fa8670d0\") " pod="openshift-marketplace/redhat-marketplace-d7x59" Nov 22 10:52:58 crc kubenswrapper[4772]: I1122 10:52:58.472513 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a58e9988-5e1e-4c49-884c-6d91fa8670d0-utilities\") pod \"redhat-marketplace-d7x59\" (UID: \"a58e9988-5e1e-4c49-884c-6d91fa8670d0\") " pod="openshift-marketplace/redhat-marketplace-d7x59" Nov 22 10:52:58 crc kubenswrapper[4772]: I1122 10:52:58.472548 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctjgg\" 
(UniqueName: \"kubernetes.io/projected/a58e9988-5e1e-4c49-884c-6d91fa8670d0-kube-api-access-ctjgg\") pod \"redhat-marketplace-d7x59\" (UID: \"a58e9988-5e1e-4c49-884c-6d91fa8670d0\") " pod="openshift-marketplace/redhat-marketplace-d7x59" Nov 22 10:52:58 crc kubenswrapper[4772]: I1122 10:52:58.472856 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a58e9988-5e1e-4c49-884c-6d91fa8670d0-catalog-content\") pod \"redhat-marketplace-d7x59\" (UID: \"a58e9988-5e1e-4c49-884c-6d91fa8670d0\") " pod="openshift-marketplace/redhat-marketplace-d7x59" Nov 22 10:52:58 crc kubenswrapper[4772]: I1122 10:52:58.472956 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a58e9988-5e1e-4c49-884c-6d91fa8670d0-utilities\") pod \"redhat-marketplace-d7x59\" (UID: \"a58e9988-5e1e-4c49-884c-6d91fa8670d0\") " pod="openshift-marketplace/redhat-marketplace-d7x59" Nov 22 10:52:58 crc kubenswrapper[4772]: I1122 10:52:58.492880 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctjgg\" (UniqueName: \"kubernetes.io/projected/a58e9988-5e1e-4c49-884c-6d91fa8670d0-kube-api-access-ctjgg\") pod \"redhat-marketplace-d7x59\" (UID: \"a58e9988-5e1e-4c49-884c-6d91fa8670d0\") " pod="openshift-marketplace/redhat-marketplace-d7x59" Nov 22 10:52:58 crc kubenswrapper[4772]: I1122 10:52:58.559610 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d7x59" Nov 22 10:52:58 crc kubenswrapper[4772]: I1122 10:52:58.984511 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d7x59"] Nov 22 10:52:58 crc kubenswrapper[4772]: W1122 10:52:58.992394 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda58e9988_5e1e_4c49_884c_6d91fa8670d0.slice/crio-d1d32f478714236e8cccde8942de849225b55666eefa8c12ab28d504b9bb7961 WatchSource:0}: Error finding container d1d32f478714236e8cccde8942de849225b55666eefa8c12ab28d504b9bb7961: Status 404 returned error can't find the container with id d1d32f478714236e8cccde8942de849225b55666eefa8c12ab28d504b9bb7961 Nov 22 10:52:59 crc kubenswrapper[4772]: I1122 10:52:59.287195 4772 generic.go:334] "Generic (PLEG): container finished" podID="a58e9988-5e1e-4c49-884c-6d91fa8670d0" containerID="ddfb03064ac1dfa2491dbbdd7907a3e98402102f4c43ff93671c50342048d1dc" exitCode=0 Nov 22 10:52:59 crc kubenswrapper[4772]: I1122 10:52:59.287282 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d7x59" event={"ID":"a58e9988-5e1e-4c49-884c-6d91fa8670d0","Type":"ContainerDied","Data":"ddfb03064ac1dfa2491dbbdd7907a3e98402102f4c43ff93671c50342048d1dc"} Nov 22 10:52:59 crc kubenswrapper[4772]: I1122 10:52:59.287437 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d7x59" event={"ID":"a58e9988-5e1e-4c49-884c-6d91fa8670d0","Type":"ContainerStarted","Data":"d1d32f478714236e8cccde8942de849225b55666eefa8c12ab28d504b9bb7961"} Nov 22 10:53:00 crc kubenswrapper[4772]: I1122 10:53:00.294071 4772 generic.go:334] "Generic (PLEG): container finished" podID="a58e9988-5e1e-4c49-884c-6d91fa8670d0" containerID="7919c2b9be6a4376835f66c6037dccbba1d9538993b4dd62b449cb2441af27ed" exitCode=0 Nov 22 10:53:00 crc kubenswrapper[4772]: I1122 10:53:00.294262 4772 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d7x59" event={"ID":"a58e9988-5e1e-4c49-884c-6d91fa8670d0","Type":"ContainerDied","Data":"7919c2b9be6a4376835f66c6037dccbba1d9538993b4dd62b449cb2441af27ed"} Nov 22 10:53:01 crc kubenswrapper[4772]: I1122 10:53:01.284821 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xm5p8"] Nov 22 10:53:01 crc kubenswrapper[4772]: I1122 10:53:01.286183 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xm5p8" Nov 22 10:53:01 crc kubenswrapper[4772]: I1122 10:53:01.289221 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 22 10:53:01 crc kubenswrapper[4772]: I1122 10:53:01.295670 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xm5p8"] Nov 22 10:53:01 crc kubenswrapper[4772]: I1122 10:53:01.303135 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d7x59" event={"ID":"a58e9988-5e1e-4c49-884c-6d91fa8670d0","Type":"ContainerStarted","Data":"7677ef7387913207fd0ac69329c7ad3d32a3bdbebf28ab0ae1dd09561fc7df1a"} Nov 22 10:53:01 crc kubenswrapper[4772]: I1122 10:53:01.343698 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-d7x59" podStartSLOduration=1.93952111 podStartE2EDuration="3.343679776s" podCreationTimestamp="2025-11-22 10:52:58 +0000 UTC" firstStartedPulling="2025-11-22 10:52:59.289373975 +0000 UTC m=+899.528818469" lastFinishedPulling="2025-11-22 10:53:00.693532641 +0000 UTC m=+900.932977135" observedRunningTime="2025-11-22 10:53:01.34110736 +0000 UTC m=+901.580551854" watchObservedRunningTime="2025-11-22 10:53:01.343679776 +0000 UTC m=+901.583124270" Nov 22 10:53:01 crc kubenswrapper[4772]: I1122 10:53:01.408993 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d34f677f-77a5-4855-a8c8-3d561fa25a34-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xm5p8\" (UID: \"d34f677f-77a5-4855-a8c8-3d561fa25a34\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xm5p8" Nov 22 10:53:01 crc kubenswrapper[4772]: I1122 10:53:01.409136 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d34f677f-77a5-4855-a8c8-3d561fa25a34-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xm5p8\" (UID: \"d34f677f-77a5-4855-a8c8-3d561fa25a34\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xm5p8" Nov 22 10:53:01 crc kubenswrapper[4772]: I1122 10:53:01.409158 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5nm4\" (UniqueName: \"kubernetes.io/projected/d34f677f-77a5-4855-a8c8-3d561fa25a34-kube-api-access-m5nm4\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xm5p8\" (UID: \"d34f677f-77a5-4855-a8c8-3d561fa25a34\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xm5p8" Nov 22 10:53:01 crc kubenswrapper[4772]: I1122 10:53:01.430814 4772 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/certified-operators-7m72x"] Nov 22 10:53:01 crc kubenswrapper[4772]: I1122 10:53:01.435535 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7m72x" Nov 22 10:53:01 crc kubenswrapper[4772]: I1122 10:53:01.446932 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7m72x"] Nov 22 10:53:01 crc kubenswrapper[4772]: I1122 10:53:01.510592 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9e66e38-47f6-42ed-bed4-aa4d14d31ae6-catalog-content\") pod \"certified-operators-7m72x\" (UID: \"f9e66e38-47f6-42ed-bed4-aa4d14d31ae6\") " pod="openshift-marketplace/certified-operators-7m72x" Nov 22 10:53:01 crc kubenswrapper[4772]: I1122 10:53:01.510664 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d34f677f-77a5-4855-a8c8-3d561fa25a34-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xm5p8\" (UID: \"d34f677f-77a5-4855-a8c8-3d561fa25a34\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xm5p8" Nov 22 10:53:01 crc kubenswrapper[4772]: I1122 10:53:01.510769 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9e66e38-47f6-42ed-bed4-aa4d14d31ae6-utilities\") pod \"certified-operators-7m72x\" (UID: \"f9e66e38-47f6-42ed-bed4-aa4d14d31ae6\") " pod="openshift-marketplace/certified-operators-7m72x" Nov 22 10:53:01 crc kubenswrapper[4772]: I1122 10:53:01.510850 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6gvf\" (UniqueName: \"kubernetes.io/projected/f9e66e38-47f6-42ed-bed4-aa4d14d31ae6-kube-api-access-l6gvf\") pod \"certified-operators-7m72x\" (UID: \"f9e66e38-47f6-42ed-bed4-aa4d14d31ae6\") " pod="openshift-marketplace/certified-operators-7m72x" Nov 22 10:53:01 crc kubenswrapper[4772]: I1122 10:53:01.510872 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d34f677f-77a5-4855-a8c8-3d561fa25a34-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xm5p8\" (UID: \"d34f677f-77a5-4855-a8c8-3d561fa25a34\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xm5p8" Nov 22 10:53:01 crc kubenswrapper[4772]: I1122 10:53:01.510897 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5nm4\" (UniqueName: \"kubernetes.io/projected/d34f677f-77a5-4855-a8c8-3d561fa25a34-kube-api-access-m5nm4\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xm5p8\" (UID: \"d34f677f-77a5-4855-a8c8-3d561fa25a34\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xm5p8" Nov 22 10:53:01 crc kubenswrapper[4772]: I1122 10:53:01.511224 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d34f677f-77a5-4855-a8c8-3d561fa25a34-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xm5p8\" (UID: \"d34f677f-77a5-4855-a8c8-3d561fa25a34\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xm5p8" Nov 22 10:53:01 crc 
kubenswrapper[4772]: I1122 10:53:01.511476 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d34f677f-77a5-4855-a8c8-3d561fa25a34-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xm5p8\" (UID: \"d34f677f-77a5-4855-a8c8-3d561fa25a34\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xm5p8" Nov 22 10:53:01 crc kubenswrapper[4772]: I1122 10:53:01.535979 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5nm4\" (UniqueName: \"kubernetes.io/projected/d34f677f-77a5-4855-a8c8-3d561fa25a34-kube-api-access-m5nm4\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xm5p8\" (UID: \"d34f677f-77a5-4855-a8c8-3d561fa25a34\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xm5p8" Nov 22 10:53:01 crc kubenswrapper[4772]: I1122 10:53:01.608138 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 22 10:53:01 crc kubenswrapper[4772]: I1122 10:53:01.613026 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6gvf\" (UniqueName: \"kubernetes.io/projected/f9e66e38-47f6-42ed-bed4-aa4d14d31ae6-kube-api-access-l6gvf\") pod \"certified-operators-7m72x\" (UID: \"f9e66e38-47f6-42ed-bed4-aa4d14d31ae6\") " pod="openshift-marketplace/certified-operators-7m72x" Nov 22 10:53:01 crc kubenswrapper[4772]: I1122 10:53:01.613114 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9e66e38-47f6-42ed-bed4-aa4d14d31ae6-catalog-content\") pod \"certified-operators-7m72x\" (UID: \"f9e66e38-47f6-42ed-bed4-aa4d14d31ae6\") " pod="openshift-marketplace/certified-operators-7m72x" Nov 22 10:53:01 crc kubenswrapper[4772]: I1122 10:53:01.613179 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9e66e38-47f6-42ed-bed4-aa4d14d31ae6-utilities\") pod \"certified-operators-7m72x\" (UID: \"f9e66e38-47f6-42ed-bed4-aa4d14d31ae6\") " pod="openshift-marketplace/certified-operators-7m72x" Nov 22 10:53:01 crc kubenswrapper[4772]: I1122 10:53:01.613747 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9e66e38-47f6-42ed-bed4-aa4d14d31ae6-utilities\") pod \"certified-operators-7m72x\" (UID: \"f9e66e38-47f6-42ed-bed4-aa4d14d31ae6\") " pod="openshift-marketplace/certified-operators-7m72x" Nov 22 10:53:01 crc kubenswrapper[4772]: I1122 10:53:01.613939 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9e66e38-47f6-42ed-bed4-aa4d14d31ae6-catalog-content\") pod \"certified-operators-7m72x\" (UID: \"f9e66e38-47f6-42ed-bed4-aa4d14d31ae6\") " pod="openshift-marketplace/certified-operators-7m72x" Nov 22 10:53:01 crc kubenswrapper[4772]: I1122 10:53:01.616238 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xm5p8" Nov 22 10:53:01 crc kubenswrapper[4772]: I1122 10:53:01.643963 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6gvf\" (UniqueName: \"kubernetes.io/projected/f9e66e38-47f6-42ed-bed4-aa4d14d31ae6-kube-api-access-l6gvf\") pod \"certified-operators-7m72x\" (UID: \"f9e66e38-47f6-42ed-bed4-aa4d14d31ae6\") " pod="openshift-marketplace/certified-operators-7m72x" Nov 22 10:53:01 crc kubenswrapper[4772]: I1122 10:53:01.801182 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7m72x" Nov 22 10:53:01 crc kubenswrapper[4772]: I1122 10:53:01.885602 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xm5p8"] Nov 22 10:53:01 crc kubenswrapper[4772]: W1122 10:53:01.920862 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd34f677f_77a5_4855_a8c8_3d561fa25a34.slice/crio-4e76f00cb61c49d91e3ac15d5ae2bab18c33ee4e76641a793b78d9f65c7b0478 WatchSource:0}: Error finding container 4e76f00cb61c49d91e3ac15d5ae2bab18c33ee4e76641a793b78d9f65c7b0478: Status 404 returned error can't find the container with id 4e76f00cb61c49d91e3ac15d5ae2bab18c33ee4e76641a793b78d9f65c7b0478 Nov 22 10:53:02 crc kubenswrapper[4772]: I1122 10:53:02.299438 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7m72x"] Nov 22 10:53:02 crc kubenswrapper[4772]: I1122 10:53:02.315021 4772 generic.go:334] "Generic (PLEG): container finished" podID="d34f677f-77a5-4855-a8c8-3d561fa25a34" containerID="c6630c9f215abc4535cfb9d5c3e0c76f514b8390cf91efb387026e6a441b0069" exitCode=0 Nov 22 10:53:02 crc kubenswrapper[4772]: I1122 10:53:02.315096 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xm5p8" event={"ID":"d34f677f-77a5-4855-a8c8-3d561fa25a34","Type":"ContainerDied","Data":"c6630c9f215abc4535cfb9d5c3e0c76f514b8390cf91efb387026e6a441b0069"} Nov 22 10:53:02 crc kubenswrapper[4772]: I1122 10:53:02.315142 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xm5p8" event={"ID":"d34f677f-77a5-4855-a8c8-3d561fa25a34","Type":"ContainerStarted","Data":"4e76f00cb61c49d91e3ac15d5ae2bab18c33ee4e76641a793b78d9f65c7b0478"} Nov 22 10:53:03 crc kubenswrapper[4772]: I1122 10:53:03.267972 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-ld9hg" podUID="614def41-0349-470c-afca-e5c335fa8834" containerName="console" containerID="cri-o://28a8321b1018efbb9f2a7ff0557bf03068eb955c8842adf1621dceac92226357" gracePeriod=15 Nov 22 10:53:03 crc kubenswrapper[4772]: I1122 10:53:03.321746 4772 generic.go:334] "Generic (PLEG): container finished" podID="f9e66e38-47f6-42ed-bed4-aa4d14d31ae6" containerID="cb197ad73c41900b5306b78ff18c772aae3ea34dca49a5a494ad4c40b493fbea" exitCode=0 Nov 22 10:53:03 crc kubenswrapper[4772]: I1122 10:53:03.321786 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7m72x" event={"ID":"f9e66e38-47f6-42ed-bed4-aa4d14d31ae6","Type":"ContainerDied","Data":"cb197ad73c41900b5306b78ff18c772aae3ea34dca49a5a494ad4c40b493fbea"} Nov 22 
10:53:03 crc kubenswrapper[4772]: I1122 10:53:03.321810 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7m72x" event={"ID":"f9e66e38-47f6-42ed-bed4-aa4d14d31ae6","Type":"ContainerStarted","Data":"dcd7cd8411815508b9adb517def2568e02c8a1cc31124461a83bd3c9ed309500"} Nov 22 10:53:03 crc kubenswrapper[4772]: I1122 10:53:03.742382 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-ld9hg_614def41-0349-470c-afca-e5c335fa8834/console/0.log" Nov 22 10:53:03 crc kubenswrapper[4772]: I1122 10:53:03.742770 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-ld9hg" Nov 22 10:53:03 crc kubenswrapper[4772]: I1122 10:53:03.840097 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/614def41-0349-470c-afca-e5c335fa8834-console-config\") pod \"614def41-0349-470c-afca-e5c335fa8834\" (UID: \"614def41-0349-470c-afca-e5c335fa8834\") " Nov 22 10:53:03 crc kubenswrapper[4772]: I1122 10:53:03.840207 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/614def41-0349-470c-afca-e5c335fa8834-console-serving-cert\") pod \"614def41-0349-470c-afca-e5c335fa8834\" (UID: \"614def41-0349-470c-afca-e5c335fa8834\") " Nov 22 10:53:03 crc kubenswrapper[4772]: I1122 10:53:03.840257 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/614def41-0349-470c-afca-e5c335fa8834-service-ca\") pod \"614def41-0349-470c-afca-e5c335fa8834\" (UID: \"614def41-0349-470c-afca-e5c335fa8834\") " Nov 22 10:53:03 crc kubenswrapper[4772]: I1122 10:53:03.840288 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/614def41-0349-470c-afca-e5c335fa8834-oauth-serving-cert\") pod \"614def41-0349-470c-afca-e5c335fa8834\" (UID: \"614def41-0349-470c-afca-e5c335fa8834\") " Nov 22 10:53:03 crc kubenswrapper[4772]: I1122 10:53:03.840376 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/614def41-0349-470c-afca-e5c335fa8834-trusted-ca-bundle\") pod \"614def41-0349-470c-afca-e5c335fa8834\" (UID: \"614def41-0349-470c-afca-e5c335fa8834\") " Nov 22 10:53:03 crc kubenswrapper[4772]: I1122 10:53:03.840429 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6d99\" (UniqueName: \"kubernetes.io/projected/614def41-0349-470c-afca-e5c335fa8834-kube-api-access-c6d99\") pod \"614def41-0349-470c-afca-e5c335fa8834\" (UID: \"614def41-0349-470c-afca-e5c335fa8834\") " Nov 22 10:53:03 crc kubenswrapper[4772]: I1122 10:53:03.840470 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/614def41-0349-470c-afca-e5c335fa8834-console-oauth-config\") pod \"614def41-0349-470c-afca-e5c335fa8834\" (UID: \"614def41-0349-470c-afca-e5c335fa8834\") " Nov 22 10:53:03 crc kubenswrapper[4772]: I1122 10:53:03.841557 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/614def41-0349-470c-afca-e5c335fa8834-service-ca" (OuterVolumeSpecName: "service-ca") pod "614def41-0349-470c-afca-e5c335fa8834" (UID: 
"614def41-0349-470c-afca-e5c335fa8834"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:53:03 crc kubenswrapper[4772]: I1122 10:53:03.841690 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/614def41-0349-470c-afca-e5c335fa8834-console-config" (OuterVolumeSpecName: "console-config") pod "614def41-0349-470c-afca-e5c335fa8834" (UID: "614def41-0349-470c-afca-e5c335fa8834"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:53:03 crc kubenswrapper[4772]: I1122 10:53:03.841720 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/614def41-0349-470c-afca-e5c335fa8834-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "614def41-0349-470c-afca-e5c335fa8834" (UID: "614def41-0349-470c-afca-e5c335fa8834"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:53:03 crc kubenswrapper[4772]: I1122 10:53:03.842303 4772 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/614def41-0349-470c-afca-e5c335fa8834-console-config\") on node \"crc\" DevicePath \"\"" Nov 22 10:53:03 crc kubenswrapper[4772]: I1122 10:53:03.842332 4772 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/614def41-0349-470c-afca-e5c335fa8834-service-ca\") on node \"crc\" DevicePath \"\"" Nov 22 10:53:03 crc kubenswrapper[4772]: I1122 10:53:03.842349 4772 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/614def41-0349-470c-afca-e5c335fa8834-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 10:53:03 crc kubenswrapper[4772]: I1122 10:53:03.842512 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/614def41-0349-470c-afca-e5c335fa8834-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "614def41-0349-470c-afca-e5c335fa8834" (UID: "614def41-0349-470c-afca-e5c335fa8834"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:53:03 crc kubenswrapper[4772]: I1122 10:53:03.848153 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/614def41-0349-470c-afca-e5c335fa8834-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "614def41-0349-470c-afca-e5c335fa8834" (UID: "614def41-0349-470c-afca-e5c335fa8834"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:53:03 crc kubenswrapper[4772]: I1122 10:53:03.848634 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/614def41-0349-470c-afca-e5c335fa8834-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "614def41-0349-470c-afca-e5c335fa8834" (UID: "614def41-0349-470c-afca-e5c335fa8834"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:53:03 crc kubenswrapper[4772]: I1122 10:53:03.848672 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/614def41-0349-470c-afca-e5c335fa8834-kube-api-access-c6d99" (OuterVolumeSpecName: "kube-api-access-c6d99") pod "614def41-0349-470c-afca-e5c335fa8834" (UID: "614def41-0349-470c-afca-e5c335fa8834"). 
InnerVolumeSpecName "kube-api-access-c6d99". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:53:03 crc kubenswrapper[4772]: I1122 10:53:03.943691 4772 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/614def41-0349-470c-afca-e5c335fa8834-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 10:53:03 crc kubenswrapper[4772]: I1122 10:53:03.943741 4772 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/614def41-0349-470c-afca-e5c335fa8834-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 10:53:03 crc kubenswrapper[4772]: I1122 10:53:03.943759 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6d99\" (UniqueName: \"kubernetes.io/projected/614def41-0349-470c-afca-e5c335fa8834-kube-api-access-c6d99\") on node \"crc\" DevicePath \"\"" Nov 22 10:53:03 crc kubenswrapper[4772]: I1122 10:53:03.943808 4772 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/614def41-0349-470c-afca-e5c335fa8834-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 22 10:53:04 crc kubenswrapper[4772]: I1122 10:53:04.327754 4772 generic.go:334] "Generic (PLEG): container finished" podID="d34f677f-77a5-4855-a8c8-3d561fa25a34" containerID="45125138c72ecee114a3f0cd509fd87654d63ff40b85a50857386c883bf3b5fc" exitCode=0 Nov 22 10:53:04 crc kubenswrapper[4772]: I1122 10:53:04.327820 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xm5p8" event={"ID":"d34f677f-77a5-4855-a8c8-3d561fa25a34","Type":"ContainerDied","Data":"45125138c72ecee114a3f0cd509fd87654d63ff40b85a50857386c883bf3b5fc"} Nov 22 10:53:04 crc kubenswrapper[4772]: I1122 10:53:04.331968 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-ld9hg_614def41-0349-470c-afca-e5c335fa8834/console/0.log" Nov 22 10:53:04 crc kubenswrapper[4772]: I1122 10:53:04.332104 4772 generic.go:334] "Generic (PLEG): container finished" podID="614def41-0349-470c-afca-e5c335fa8834" containerID="28a8321b1018efbb9f2a7ff0557bf03068eb955c8842adf1621dceac92226357" exitCode=2 Nov 22 10:53:04 crc kubenswrapper[4772]: I1122 10:53:04.332193 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-ld9hg" event={"ID":"614def41-0349-470c-afca-e5c335fa8834","Type":"ContainerDied","Data":"28a8321b1018efbb9f2a7ff0557bf03068eb955c8842adf1621dceac92226357"} Nov 22 10:53:04 crc kubenswrapper[4772]: I1122 10:53:04.332238 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-ld9hg" event={"ID":"614def41-0349-470c-afca-e5c335fa8834","Type":"ContainerDied","Data":"5fbba05e20db75e660238020d921eefa08ce8357f6ac1f6adf53de501ff20b72"} Nov 22 10:53:04 crc kubenswrapper[4772]: I1122 10:53:04.332244 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-ld9hg" Nov 22 10:53:04 crc kubenswrapper[4772]: I1122 10:53:04.332267 4772 scope.go:117] "RemoveContainer" containerID="28a8321b1018efbb9f2a7ff0557bf03068eb955c8842adf1621dceac92226357" Nov 22 10:53:04 crc kubenswrapper[4772]: I1122 10:53:04.334910 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7m72x" event={"ID":"f9e66e38-47f6-42ed-bed4-aa4d14d31ae6","Type":"ContainerStarted","Data":"cbaf6f6dc099961d491ed73a7a8512b3d1775fad10b58fde6ef07dd76628466f"} Nov 22 10:53:04 crc kubenswrapper[4772]: I1122 10:53:04.359507 4772 scope.go:117] "RemoveContainer" containerID="28a8321b1018efbb9f2a7ff0557bf03068eb955c8842adf1621dceac92226357" Nov 22 10:53:04 crc kubenswrapper[4772]: E1122 10:53:04.359911 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28a8321b1018efbb9f2a7ff0557bf03068eb955c8842adf1621dceac92226357\": container with ID starting with 28a8321b1018efbb9f2a7ff0557bf03068eb955c8842adf1621dceac92226357 not found: ID does not exist" containerID="28a8321b1018efbb9f2a7ff0557bf03068eb955c8842adf1621dceac92226357" Nov 22 10:53:04 crc kubenswrapper[4772]: I1122 10:53:04.359969 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28a8321b1018efbb9f2a7ff0557bf03068eb955c8842adf1621dceac92226357"} err="failed to get container status \"28a8321b1018efbb9f2a7ff0557bf03068eb955c8842adf1621dceac92226357\": rpc error: code = NotFound desc = could not find container \"28a8321b1018efbb9f2a7ff0557bf03068eb955c8842adf1621dceac92226357\": container with ID starting with 28a8321b1018efbb9f2a7ff0557bf03068eb955c8842adf1621dceac92226357 not found: ID does not exist" Nov 22 10:53:04 crc kubenswrapper[4772]: I1122 10:53:04.382190 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-ld9hg"] Nov 22 10:53:04 crc kubenswrapper[4772]: I1122 10:53:04.385301 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-ld9hg"] Nov 22 10:53:05 crc kubenswrapper[4772]: I1122 10:53:05.350030 4772 generic.go:334] "Generic (PLEG): container finished" podID="d34f677f-77a5-4855-a8c8-3d561fa25a34" containerID="67b2db5a716f613d36de865056b44968fc4cae70b162aa5d077355e9b2e2235c" exitCode=0 Nov 22 10:53:05 crc kubenswrapper[4772]: I1122 10:53:05.350155 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xm5p8" event={"ID":"d34f677f-77a5-4855-a8c8-3d561fa25a34","Type":"ContainerDied","Data":"67b2db5a716f613d36de865056b44968fc4cae70b162aa5d077355e9b2e2235c"} Nov 22 10:53:05 crc kubenswrapper[4772]: I1122 10:53:05.355662 4772 generic.go:334] "Generic (PLEG): container finished" podID="f9e66e38-47f6-42ed-bed4-aa4d14d31ae6" containerID="cbaf6f6dc099961d491ed73a7a8512b3d1775fad10b58fde6ef07dd76628466f" exitCode=0 Nov 22 10:53:05 crc kubenswrapper[4772]: I1122 10:53:05.355706 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7m72x" event={"ID":"f9e66e38-47f6-42ed-bed4-aa4d14d31ae6","Type":"ContainerDied","Data":"cbaf6f6dc099961d491ed73a7a8512b3d1775fad10b58fde6ef07dd76628466f"} Nov 22 10:53:05 crc kubenswrapper[4772]: I1122 10:53:05.424448 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="614def41-0349-470c-afca-e5c335fa8834" 
path="/var/lib/kubelet/pods/614def41-0349-470c-afca-e5c335fa8834/volumes" Nov 22 10:53:06 crc kubenswrapper[4772]: I1122 10:53:06.629308 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xm5p8" Nov 22 10:53:06 crc kubenswrapper[4772]: I1122 10:53:06.686535 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d34f677f-77a5-4855-a8c8-3d561fa25a34-bundle\") pod \"d34f677f-77a5-4855-a8c8-3d561fa25a34\" (UID: \"d34f677f-77a5-4855-a8c8-3d561fa25a34\") " Nov 22 10:53:06 crc kubenswrapper[4772]: I1122 10:53:06.686619 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d34f677f-77a5-4855-a8c8-3d561fa25a34-util\") pod \"d34f677f-77a5-4855-a8c8-3d561fa25a34\" (UID: \"d34f677f-77a5-4855-a8c8-3d561fa25a34\") " Nov 22 10:53:06 crc kubenswrapper[4772]: I1122 10:53:06.686642 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5nm4\" (UniqueName: \"kubernetes.io/projected/d34f677f-77a5-4855-a8c8-3d561fa25a34-kube-api-access-m5nm4\") pod \"d34f677f-77a5-4855-a8c8-3d561fa25a34\" (UID: \"d34f677f-77a5-4855-a8c8-3d561fa25a34\") " Nov 22 10:53:06 crc kubenswrapper[4772]: I1122 10:53:06.688892 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d34f677f-77a5-4855-a8c8-3d561fa25a34-bundle" (OuterVolumeSpecName: "bundle") pod "d34f677f-77a5-4855-a8c8-3d561fa25a34" (UID: "d34f677f-77a5-4855-a8c8-3d561fa25a34"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 10:53:06 crc kubenswrapper[4772]: I1122 10:53:06.694282 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d34f677f-77a5-4855-a8c8-3d561fa25a34-kube-api-access-m5nm4" (OuterVolumeSpecName: "kube-api-access-m5nm4") pod "d34f677f-77a5-4855-a8c8-3d561fa25a34" (UID: "d34f677f-77a5-4855-a8c8-3d561fa25a34"). InnerVolumeSpecName "kube-api-access-m5nm4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:53:06 crc kubenswrapper[4772]: I1122 10:53:06.700472 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d34f677f-77a5-4855-a8c8-3d561fa25a34-util" (OuterVolumeSpecName: "util") pod "d34f677f-77a5-4855-a8c8-3d561fa25a34" (UID: "d34f677f-77a5-4855-a8c8-3d561fa25a34"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 10:53:06 crc kubenswrapper[4772]: I1122 10:53:06.788519 4772 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d34f677f-77a5-4855-a8c8-3d561fa25a34-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 10:53:06 crc kubenswrapper[4772]: I1122 10:53:06.788829 4772 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d34f677f-77a5-4855-a8c8-3d561fa25a34-util\") on node \"crc\" DevicePath \"\"" Nov 22 10:53:06 crc kubenswrapper[4772]: I1122 10:53:06.788841 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5nm4\" (UniqueName: \"kubernetes.io/projected/d34f677f-77a5-4855-a8c8-3d561fa25a34-kube-api-access-m5nm4\") on node \"crc\" DevicePath \"\"" Nov 22 10:53:07 crc kubenswrapper[4772]: I1122 10:53:07.372547 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xm5p8" event={"ID":"d34f677f-77a5-4855-a8c8-3d561fa25a34","Type":"ContainerDied","Data":"4e76f00cb61c49d91e3ac15d5ae2bab18c33ee4e76641a793b78d9f65c7b0478"} Nov 22 10:53:07 crc kubenswrapper[4772]: I1122 10:53:07.372595 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xm5p8" Nov 22 10:53:07 crc kubenswrapper[4772]: I1122 10:53:07.372609 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e76f00cb61c49d91e3ac15d5ae2bab18c33ee4e76641a793b78d9f65c7b0478" Nov 22 10:53:07 crc kubenswrapper[4772]: I1122 10:53:07.374647 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7m72x" event={"ID":"f9e66e38-47f6-42ed-bed4-aa4d14d31ae6","Type":"ContainerStarted","Data":"2f6da72c7958744452e53919df00abfaf14adf9044b69b1e9cab1fdb5faf8acb"} Nov 22 10:53:07 crc kubenswrapper[4772]: I1122 10:53:07.395666 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7m72x" podStartSLOduration=3.284754241 podStartE2EDuration="6.395634949s" podCreationTimestamp="2025-11-22 10:53:01 +0000 UTC" firstStartedPulling="2025-11-22 10:53:03.375628364 +0000 UTC m=+903.615072858" lastFinishedPulling="2025-11-22 10:53:06.486509072 +0000 UTC m=+906.725953566" observedRunningTime="2025-11-22 10:53:07.392941251 +0000 UTC m=+907.632385755" watchObservedRunningTime="2025-11-22 10:53:07.395634949 +0000 UTC m=+907.635079443" Nov 22 10:53:08 crc kubenswrapper[4772]: I1122 10:53:08.560693 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-d7x59" Nov 22 10:53:08 crc kubenswrapper[4772]: I1122 10:53:08.560782 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-d7x59" Nov 22 10:53:08 crc kubenswrapper[4772]: I1122 10:53:08.609850 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-d7x59" Nov 22 10:53:09 crc kubenswrapper[4772]: I1122 10:53:09.433302 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-d7x59" Nov 22 10:53:11 crc kubenswrapper[4772]: I1122 10:53:11.803021 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-7m72x" Nov 22 10:53:11 crc kubenswrapper[4772]: I1122 10:53:11.804352 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7m72x" Nov 22 10:53:11 crc kubenswrapper[4772]: I1122 10:53:11.846598 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7m72x" Nov 22 10:53:12 crc kubenswrapper[4772]: I1122 10:53:12.440121 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7m72x" Nov 22 10:53:14 crc kubenswrapper[4772]: I1122 10:53:14.618418 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-d7x59"] Nov 22 10:53:14 crc kubenswrapper[4772]: I1122 10:53:14.618774 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-d7x59" podUID="a58e9988-5e1e-4c49-884c-6d91fa8670d0" containerName="registry-server" containerID="cri-o://7677ef7387913207fd0ac69329c7ad3d32a3bdbebf28ab0ae1dd09561fc7df1a" gracePeriod=2 Nov 22 10:53:14 crc kubenswrapper[4772]: I1122 10:53:14.972720 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d7x59" Nov 22 10:53:15 crc kubenswrapper[4772]: I1122 10:53:15.097822 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a58e9988-5e1e-4c49-884c-6d91fa8670d0-catalog-content\") pod \"a58e9988-5e1e-4c49-884c-6d91fa8670d0\" (UID: \"a58e9988-5e1e-4c49-884c-6d91fa8670d0\") " Nov 22 10:53:15 crc kubenswrapper[4772]: I1122 10:53:15.097874 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a58e9988-5e1e-4c49-884c-6d91fa8670d0-utilities\") pod \"a58e9988-5e1e-4c49-884c-6d91fa8670d0\" (UID: \"a58e9988-5e1e-4c49-884c-6d91fa8670d0\") " Nov 22 10:53:15 crc kubenswrapper[4772]: I1122 10:53:15.097984 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ctjgg\" (UniqueName: \"kubernetes.io/projected/a58e9988-5e1e-4c49-884c-6d91fa8670d0-kube-api-access-ctjgg\") pod \"a58e9988-5e1e-4c49-884c-6d91fa8670d0\" (UID: \"a58e9988-5e1e-4c49-884c-6d91fa8670d0\") " Nov 22 10:53:15 crc kubenswrapper[4772]: I1122 10:53:15.099030 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a58e9988-5e1e-4c49-884c-6d91fa8670d0-utilities" (OuterVolumeSpecName: "utilities") pod "a58e9988-5e1e-4c49-884c-6d91fa8670d0" (UID: "a58e9988-5e1e-4c49-884c-6d91fa8670d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 10:53:15 crc kubenswrapper[4772]: I1122 10:53:15.103894 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a58e9988-5e1e-4c49-884c-6d91fa8670d0-kube-api-access-ctjgg" (OuterVolumeSpecName: "kube-api-access-ctjgg") pod "a58e9988-5e1e-4c49-884c-6d91fa8670d0" (UID: "a58e9988-5e1e-4c49-884c-6d91fa8670d0"). InnerVolumeSpecName "kube-api-access-ctjgg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:53:15 crc kubenswrapper[4772]: I1122 10:53:15.116780 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a58e9988-5e1e-4c49-884c-6d91fa8670d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a58e9988-5e1e-4c49-884c-6d91fa8670d0" (UID: "a58e9988-5e1e-4c49-884c-6d91fa8670d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 10:53:15 crc kubenswrapper[4772]: I1122 10:53:15.198872 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a58e9988-5e1e-4c49-884c-6d91fa8670d0-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 10:53:15 crc kubenswrapper[4772]: I1122 10:53:15.198907 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a58e9988-5e1e-4c49-884c-6d91fa8670d0-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 10:53:15 crc kubenswrapper[4772]: I1122 10:53:15.198916 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ctjgg\" (UniqueName: \"kubernetes.io/projected/a58e9988-5e1e-4c49-884c-6d91fa8670d0-kube-api-access-ctjgg\") on node \"crc\" DevicePath \"\"" Nov 22 10:53:15 crc kubenswrapper[4772]: I1122 10:53:15.418149 4772 generic.go:334] "Generic (PLEG): container finished" podID="a58e9988-5e1e-4c49-884c-6d91fa8670d0" containerID="7677ef7387913207fd0ac69329c7ad3d32a3bdbebf28ab0ae1dd09561fc7df1a" exitCode=0 Nov 22 10:53:15 crc kubenswrapper[4772]: I1122 10:53:15.418236 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d7x59" Nov 22 10:53:15 crc kubenswrapper[4772]: I1122 10:53:15.420426 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d7x59" event={"ID":"a58e9988-5e1e-4c49-884c-6d91fa8670d0","Type":"ContainerDied","Data":"7677ef7387913207fd0ac69329c7ad3d32a3bdbebf28ab0ae1dd09561fc7df1a"} Nov 22 10:53:15 crc kubenswrapper[4772]: I1122 10:53:15.420562 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7m72x"] Nov 22 10:53:15 crc kubenswrapper[4772]: I1122 10:53:15.420604 4772 scope.go:117] "RemoveContainer" containerID="7677ef7387913207fd0ac69329c7ad3d32a3bdbebf28ab0ae1dd09561fc7df1a" Nov 22 10:53:15 crc kubenswrapper[4772]: I1122 10:53:15.420640 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d7x59" event={"ID":"a58e9988-5e1e-4c49-884c-6d91fa8670d0","Type":"ContainerDied","Data":"d1d32f478714236e8cccde8942de849225b55666eefa8c12ab28d504b9bb7961"} Nov 22 10:53:15 crc kubenswrapper[4772]: I1122 10:53:15.421175 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7m72x" podUID="f9e66e38-47f6-42ed-bed4-aa4d14d31ae6" containerName="registry-server" containerID="cri-o://2f6da72c7958744452e53919df00abfaf14adf9044b69b1e9cab1fdb5faf8acb" gracePeriod=2 Nov 22 10:53:15 crc kubenswrapper[4772]: I1122 10:53:15.436418 4772 scope.go:117] "RemoveContainer" containerID="7919c2b9be6a4376835f66c6037dccbba1d9538993b4dd62b449cb2441af27ed" Nov 22 10:53:15 crc kubenswrapper[4772]: I1122 10:53:15.443943 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-d7x59"] Nov 22 10:53:15 crc kubenswrapper[4772]: I1122 10:53:15.455886 4772 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-d7x59"] Nov 22 10:53:15 crc kubenswrapper[4772]: I1122 10:53:15.476865 4772 scope.go:117] "RemoveContainer" containerID="ddfb03064ac1dfa2491dbbdd7907a3e98402102f4c43ff93671c50342048d1dc" Nov 22 10:53:15 crc kubenswrapper[4772]: I1122 10:53:15.556975 4772 scope.go:117] "RemoveContainer" containerID="7677ef7387913207fd0ac69329c7ad3d32a3bdbebf28ab0ae1dd09561fc7df1a" Nov 22 10:53:15 crc kubenswrapper[4772]: E1122 10:53:15.557464 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7677ef7387913207fd0ac69329c7ad3d32a3bdbebf28ab0ae1dd09561fc7df1a\": container with ID starting with 7677ef7387913207fd0ac69329c7ad3d32a3bdbebf28ab0ae1dd09561fc7df1a not found: ID does not exist" containerID="7677ef7387913207fd0ac69329c7ad3d32a3bdbebf28ab0ae1dd09561fc7df1a" Nov 22 10:53:15 crc kubenswrapper[4772]: I1122 10:53:15.557500 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7677ef7387913207fd0ac69329c7ad3d32a3bdbebf28ab0ae1dd09561fc7df1a"} err="failed to get container status \"7677ef7387913207fd0ac69329c7ad3d32a3bdbebf28ab0ae1dd09561fc7df1a\": rpc error: code = NotFound desc = could not find container \"7677ef7387913207fd0ac69329c7ad3d32a3bdbebf28ab0ae1dd09561fc7df1a\": container with ID starting with 7677ef7387913207fd0ac69329c7ad3d32a3bdbebf28ab0ae1dd09561fc7df1a not found: ID does not exist" Nov 22 10:53:15 crc kubenswrapper[4772]: I1122 10:53:15.557523 4772 scope.go:117] "RemoveContainer" containerID="7919c2b9be6a4376835f66c6037dccbba1d9538993b4dd62b449cb2441af27ed" Nov 22 10:53:15 crc kubenswrapper[4772]: E1122 10:53:15.557771 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7919c2b9be6a4376835f66c6037dccbba1d9538993b4dd62b449cb2441af27ed\": container with ID starting with 7919c2b9be6a4376835f66c6037dccbba1d9538993b4dd62b449cb2441af27ed not found: ID does not exist" containerID="7919c2b9be6a4376835f66c6037dccbba1d9538993b4dd62b449cb2441af27ed" Nov 22 10:53:15 crc kubenswrapper[4772]: I1122 10:53:15.557792 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7919c2b9be6a4376835f66c6037dccbba1d9538993b4dd62b449cb2441af27ed"} err="failed to get container status \"7919c2b9be6a4376835f66c6037dccbba1d9538993b4dd62b449cb2441af27ed\": rpc error: code = NotFound desc = could not find container \"7919c2b9be6a4376835f66c6037dccbba1d9538993b4dd62b449cb2441af27ed\": container with ID starting with 7919c2b9be6a4376835f66c6037dccbba1d9538993b4dd62b449cb2441af27ed not found: ID does not exist" Nov 22 10:53:15 crc kubenswrapper[4772]: I1122 10:53:15.557815 4772 scope.go:117] "RemoveContainer" containerID="ddfb03064ac1dfa2491dbbdd7907a3e98402102f4c43ff93671c50342048d1dc" Nov 22 10:53:15 crc kubenswrapper[4772]: E1122 10:53:15.558068 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddfb03064ac1dfa2491dbbdd7907a3e98402102f4c43ff93671c50342048d1dc\": container with ID starting with ddfb03064ac1dfa2491dbbdd7907a3e98402102f4c43ff93671c50342048d1dc not found: ID does not exist" containerID="ddfb03064ac1dfa2491dbbdd7907a3e98402102f4c43ff93671c50342048d1dc" Nov 22 10:53:15 crc kubenswrapper[4772]: I1122 10:53:15.558087 4772 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ddfb03064ac1dfa2491dbbdd7907a3e98402102f4c43ff93671c50342048d1dc"} err="failed to get container status \"ddfb03064ac1dfa2491dbbdd7907a3e98402102f4c43ff93671c50342048d1dc\": rpc error: code = NotFound desc = could not find container \"ddfb03064ac1dfa2491dbbdd7907a3e98402102f4c43ff93671c50342048d1dc\": container with ID starting with ddfb03064ac1dfa2491dbbdd7907a3e98402102f4c43ff93671c50342048d1dc not found: ID does not exist" Nov 22 10:53:15 crc kubenswrapper[4772]: I1122 10:53:15.718771 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7m72x" Nov 22 10:53:15 crc kubenswrapper[4772]: I1122 10:53:15.806556 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9e66e38-47f6-42ed-bed4-aa4d14d31ae6-catalog-content\") pod \"f9e66e38-47f6-42ed-bed4-aa4d14d31ae6\" (UID: \"f9e66e38-47f6-42ed-bed4-aa4d14d31ae6\") " Nov 22 10:53:15 crc kubenswrapper[4772]: I1122 10:53:15.806622 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9e66e38-47f6-42ed-bed4-aa4d14d31ae6-utilities\") pod \"f9e66e38-47f6-42ed-bed4-aa4d14d31ae6\" (UID: \"f9e66e38-47f6-42ed-bed4-aa4d14d31ae6\") " Nov 22 10:53:15 crc kubenswrapper[4772]: I1122 10:53:15.806661 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6gvf\" (UniqueName: \"kubernetes.io/projected/f9e66e38-47f6-42ed-bed4-aa4d14d31ae6-kube-api-access-l6gvf\") pod \"f9e66e38-47f6-42ed-bed4-aa4d14d31ae6\" (UID: \"f9e66e38-47f6-42ed-bed4-aa4d14d31ae6\") " Nov 22 10:53:15 crc kubenswrapper[4772]: I1122 10:53:15.807803 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9e66e38-47f6-42ed-bed4-aa4d14d31ae6-utilities" (OuterVolumeSpecName: "utilities") pod "f9e66e38-47f6-42ed-bed4-aa4d14d31ae6" (UID: "f9e66e38-47f6-42ed-bed4-aa4d14d31ae6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 10:53:15 crc kubenswrapper[4772]: I1122 10:53:15.811873 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9e66e38-47f6-42ed-bed4-aa4d14d31ae6-kube-api-access-l6gvf" (OuterVolumeSpecName: "kube-api-access-l6gvf") pod "f9e66e38-47f6-42ed-bed4-aa4d14d31ae6" (UID: "f9e66e38-47f6-42ed-bed4-aa4d14d31ae6"). InnerVolumeSpecName "kube-api-access-l6gvf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:53:15 crc kubenswrapper[4772]: I1122 10:53:15.871218 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9e66e38-47f6-42ed-bed4-aa4d14d31ae6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f9e66e38-47f6-42ed-bed4-aa4d14d31ae6" (UID: "f9e66e38-47f6-42ed-bed4-aa4d14d31ae6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 10:53:15 crc kubenswrapper[4772]: I1122 10:53:15.907939 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9e66e38-47f6-42ed-bed4-aa4d14d31ae6-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 10:53:15 crc kubenswrapper[4772]: I1122 10:53:15.907975 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9e66e38-47f6-42ed-bed4-aa4d14d31ae6-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 10:53:15 crc kubenswrapper[4772]: I1122 10:53:15.908000 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6gvf\" (UniqueName: \"kubernetes.io/projected/f9e66e38-47f6-42ed-bed4-aa4d14d31ae6-kube-api-access-l6gvf\") on node \"crc\" DevicePath \"\"" Nov 22 10:53:16 crc kubenswrapper[4772]: I1122 10:53:16.426024 4772 generic.go:334] "Generic (PLEG): container finished" podID="f9e66e38-47f6-42ed-bed4-aa4d14d31ae6" containerID="2f6da72c7958744452e53919df00abfaf14adf9044b69b1e9cab1fdb5faf8acb" exitCode=0 Nov 22 10:53:16 crc kubenswrapper[4772]: I1122 10:53:16.426103 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7m72x" Nov 22 10:53:16 crc kubenswrapper[4772]: I1122 10:53:16.426116 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7m72x" event={"ID":"f9e66e38-47f6-42ed-bed4-aa4d14d31ae6","Type":"ContainerDied","Data":"2f6da72c7958744452e53919df00abfaf14adf9044b69b1e9cab1fdb5faf8acb"} Nov 22 10:53:16 crc kubenswrapper[4772]: I1122 10:53:16.426164 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7m72x" event={"ID":"f9e66e38-47f6-42ed-bed4-aa4d14d31ae6","Type":"ContainerDied","Data":"dcd7cd8411815508b9adb517def2568e02c8a1cc31124461a83bd3c9ed309500"} Nov 22 10:53:16 crc kubenswrapper[4772]: I1122 10:53:16.426232 4772 scope.go:117] "RemoveContainer" containerID="2f6da72c7958744452e53919df00abfaf14adf9044b69b1e9cab1fdb5faf8acb" Nov 22 10:53:16 crc kubenswrapper[4772]: I1122 10:53:16.443853 4772 scope.go:117] "RemoveContainer" containerID="cbaf6f6dc099961d491ed73a7a8512b3d1775fad10b58fde6ef07dd76628466f" Nov 22 10:53:16 crc kubenswrapper[4772]: I1122 10:53:16.450323 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7m72x"] Nov 22 10:53:16 crc kubenswrapper[4772]: I1122 10:53:16.470900 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7m72x"] Nov 22 10:53:16 crc kubenswrapper[4772]: I1122 10:53:16.477861 4772 scope.go:117] "RemoveContainer" containerID="cb197ad73c41900b5306b78ff18c772aae3ea34dca49a5a494ad4c40b493fbea" Nov 22 10:53:16 crc kubenswrapper[4772]: I1122 10:53:16.494303 4772 scope.go:117] "RemoveContainer" containerID="2f6da72c7958744452e53919df00abfaf14adf9044b69b1e9cab1fdb5faf8acb" Nov 22 10:53:16 crc kubenswrapper[4772]: E1122 10:53:16.495169 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f6da72c7958744452e53919df00abfaf14adf9044b69b1e9cab1fdb5faf8acb\": container with ID starting with 2f6da72c7958744452e53919df00abfaf14adf9044b69b1e9cab1fdb5faf8acb not found: ID does not exist" containerID="2f6da72c7958744452e53919df00abfaf14adf9044b69b1e9cab1fdb5faf8acb" Nov 22 10:53:16 crc kubenswrapper[4772]: I1122 10:53:16.495198 
4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f6da72c7958744452e53919df00abfaf14adf9044b69b1e9cab1fdb5faf8acb"} err="failed to get container status \"2f6da72c7958744452e53919df00abfaf14adf9044b69b1e9cab1fdb5faf8acb\": rpc error: code = NotFound desc = could not find container \"2f6da72c7958744452e53919df00abfaf14adf9044b69b1e9cab1fdb5faf8acb\": container with ID starting with 2f6da72c7958744452e53919df00abfaf14adf9044b69b1e9cab1fdb5faf8acb not found: ID does not exist" Nov 22 10:53:16 crc kubenswrapper[4772]: I1122 10:53:16.495225 4772 scope.go:117] "RemoveContainer" containerID="cbaf6f6dc099961d491ed73a7a8512b3d1775fad10b58fde6ef07dd76628466f" Nov 22 10:53:16 crc kubenswrapper[4772]: E1122 10:53:16.495401 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbaf6f6dc099961d491ed73a7a8512b3d1775fad10b58fde6ef07dd76628466f\": container with ID starting with cbaf6f6dc099961d491ed73a7a8512b3d1775fad10b58fde6ef07dd76628466f not found: ID does not exist" containerID="cbaf6f6dc099961d491ed73a7a8512b3d1775fad10b58fde6ef07dd76628466f" Nov 22 10:53:16 crc kubenswrapper[4772]: I1122 10:53:16.495421 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbaf6f6dc099961d491ed73a7a8512b3d1775fad10b58fde6ef07dd76628466f"} err="failed to get container status \"cbaf6f6dc099961d491ed73a7a8512b3d1775fad10b58fde6ef07dd76628466f\": rpc error: code = NotFound desc = could not find container \"cbaf6f6dc099961d491ed73a7a8512b3d1775fad10b58fde6ef07dd76628466f\": container with ID starting with cbaf6f6dc099961d491ed73a7a8512b3d1775fad10b58fde6ef07dd76628466f not found: ID does not exist" Nov 22 10:53:16 crc kubenswrapper[4772]: I1122 10:53:16.495433 4772 scope.go:117] "RemoveContainer" containerID="cb197ad73c41900b5306b78ff18c772aae3ea34dca49a5a494ad4c40b493fbea" Nov 22 10:53:16 crc kubenswrapper[4772]: E1122 10:53:16.495605 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb197ad73c41900b5306b78ff18c772aae3ea34dca49a5a494ad4c40b493fbea\": container with ID starting with cb197ad73c41900b5306b78ff18c772aae3ea34dca49a5a494ad4c40b493fbea not found: ID does not exist" containerID="cb197ad73c41900b5306b78ff18c772aae3ea34dca49a5a494ad4c40b493fbea" Nov 22 10:53:16 crc kubenswrapper[4772]: I1122 10:53:16.495624 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb197ad73c41900b5306b78ff18c772aae3ea34dca49a5a494ad4c40b493fbea"} err="failed to get container status \"cb197ad73c41900b5306b78ff18c772aae3ea34dca49a5a494ad4c40b493fbea\": rpc error: code = NotFound desc = could not find container \"cb197ad73c41900b5306b78ff18c772aae3ea34dca49a5a494ad4c40b493fbea\": container with ID starting with cb197ad73c41900b5306b78ff18c772aae3ea34dca49a5a494ad4c40b493fbea not found: ID does not exist" Nov 22 10:53:17 crc kubenswrapper[4772]: I1122 10:53:17.302333 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-6c7dcf7d55-rgxxw"] Nov 22 10:53:17 crc kubenswrapper[4772]: E1122 10:53:17.304015 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9e66e38-47f6-42ed-bed4-aa4d14d31ae6" containerName="extract-content" Nov 22 10:53:17 crc kubenswrapper[4772]: I1122 10:53:17.304129 4772 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f9e66e38-47f6-42ed-bed4-aa4d14d31ae6" containerName="extract-content" Nov 22 10:53:17 crc kubenswrapper[4772]: E1122 10:53:17.304204 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a58e9988-5e1e-4c49-884c-6d91fa8670d0" containerName="extract-content" Nov 22 10:53:17 crc kubenswrapper[4772]: I1122 10:53:17.304259 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="a58e9988-5e1e-4c49-884c-6d91fa8670d0" containerName="extract-content" Nov 22 10:53:17 crc kubenswrapper[4772]: E1122 10:53:17.304316 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9e66e38-47f6-42ed-bed4-aa4d14d31ae6" containerName="extract-utilities" Nov 22 10:53:17 crc kubenswrapper[4772]: I1122 10:53:17.304383 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9e66e38-47f6-42ed-bed4-aa4d14d31ae6" containerName="extract-utilities" Nov 22 10:53:17 crc kubenswrapper[4772]: E1122 10:53:17.304444 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d34f677f-77a5-4855-a8c8-3d561fa25a34" containerName="pull" Nov 22 10:53:17 crc kubenswrapper[4772]: I1122 10:53:17.304500 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="d34f677f-77a5-4855-a8c8-3d561fa25a34" containerName="pull" Nov 22 10:53:17 crc kubenswrapper[4772]: E1122 10:53:17.304559 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d34f677f-77a5-4855-a8c8-3d561fa25a34" containerName="util" Nov 22 10:53:17 crc kubenswrapper[4772]: I1122 10:53:17.304608 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="d34f677f-77a5-4855-a8c8-3d561fa25a34" containerName="util" Nov 22 10:53:17 crc kubenswrapper[4772]: E1122 10:53:17.304667 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="614def41-0349-470c-afca-e5c335fa8834" containerName="console" Nov 22 10:53:17 crc kubenswrapper[4772]: I1122 10:53:17.304717 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="614def41-0349-470c-afca-e5c335fa8834" containerName="console" Nov 22 10:53:17 crc kubenswrapper[4772]: E1122 10:53:17.304769 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a58e9988-5e1e-4c49-884c-6d91fa8670d0" containerName="extract-utilities" Nov 22 10:53:17 crc kubenswrapper[4772]: I1122 10:53:17.304821 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="a58e9988-5e1e-4c49-884c-6d91fa8670d0" containerName="extract-utilities" Nov 22 10:53:17 crc kubenswrapper[4772]: E1122 10:53:17.304873 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9e66e38-47f6-42ed-bed4-aa4d14d31ae6" containerName="registry-server" Nov 22 10:53:17 crc kubenswrapper[4772]: I1122 10:53:17.304928 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9e66e38-47f6-42ed-bed4-aa4d14d31ae6" containerName="registry-server" Nov 22 10:53:17 crc kubenswrapper[4772]: E1122 10:53:17.304982 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a58e9988-5e1e-4c49-884c-6d91fa8670d0" containerName="registry-server" Nov 22 10:53:17 crc kubenswrapper[4772]: I1122 10:53:17.305030 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="a58e9988-5e1e-4c49-884c-6d91fa8670d0" containerName="registry-server" Nov 22 10:53:17 crc kubenswrapper[4772]: E1122 10:53:17.305101 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d34f677f-77a5-4855-a8c8-3d561fa25a34" containerName="extract" Nov 22 10:53:17 crc kubenswrapper[4772]: I1122 10:53:17.305270 4772 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d34f677f-77a5-4855-a8c8-3d561fa25a34" containerName="extract" Nov 22 10:53:17 crc kubenswrapper[4772]: I1122 10:53:17.305445 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9e66e38-47f6-42ed-bed4-aa4d14d31ae6" containerName="registry-server" Nov 22 10:53:17 crc kubenswrapper[4772]: I1122 10:53:17.305508 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="a58e9988-5e1e-4c49-884c-6d91fa8670d0" containerName="registry-server" Nov 22 10:53:17 crc kubenswrapper[4772]: I1122 10:53:17.305565 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="d34f677f-77a5-4855-a8c8-3d561fa25a34" containerName="extract" Nov 22 10:53:17 crc kubenswrapper[4772]: I1122 10:53:17.305619 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="614def41-0349-470c-afca-e5c335fa8834" containerName="console" Nov 22 10:53:17 crc kubenswrapper[4772]: I1122 10:53:17.306289 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6c7dcf7d55-rgxxw" Nov 22 10:53:17 crc kubenswrapper[4772]: I1122 10:53:17.311985 4772 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Nov 22 10:53:17 crc kubenswrapper[4772]: I1122 10:53:17.312541 4772 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Nov 22 10:53:17 crc kubenswrapper[4772]: I1122 10:53:17.312855 4772 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-ntsxl" Nov 22 10:53:17 crc kubenswrapper[4772]: I1122 10:53:17.313111 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Nov 22 10:53:17 crc kubenswrapper[4772]: I1122 10:53:17.313253 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Nov 22 10:53:17 crc kubenswrapper[4772]: I1122 10:53:17.328036 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6c7dcf7d55-rgxxw"] Nov 22 10:53:17 crc kubenswrapper[4772]: I1122 10:53:17.422067 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a58e9988-5e1e-4c49-884c-6d91fa8670d0" path="/var/lib/kubelet/pods/a58e9988-5e1e-4c49-884c-6d91fa8670d0/volumes" Nov 22 10:53:17 crc kubenswrapper[4772]: I1122 10:53:17.423230 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9e66e38-47f6-42ed-bed4-aa4d14d31ae6" path="/var/lib/kubelet/pods/f9e66e38-47f6-42ed-bed4-aa4d14d31ae6/volumes" Nov 22 10:53:17 crc kubenswrapper[4772]: I1122 10:53:17.425459 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b6738e49-fadc-4415-ad91-d6825e908eda-apiservice-cert\") pod \"metallb-operator-controller-manager-6c7dcf7d55-rgxxw\" (UID: \"b6738e49-fadc-4415-ad91-d6825e908eda\") " pod="metallb-system/metallb-operator-controller-manager-6c7dcf7d55-rgxxw" Nov 22 10:53:17 crc kubenswrapper[4772]: I1122 10:53:17.425577 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cnrv\" (UniqueName: \"kubernetes.io/projected/b6738e49-fadc-4415-ad91-d6825e908eda-kube-api-access-6cnrv\") pod \"metallb-operator-controller-manager-6c7dcf7d55-rgxxw\" (UID: \"b6738e49-fadc-4415-ad91-d6825e908eda\") " 
pod="metallb-system/metallb-operator-controller-manager-6c7dcf7d55-rgxxw" Nov 22 10:53:17 crc kubenswrapper[4772]: I1122 10:53:17.425688 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b6738e49-fadc-4415-ad91-d6825e908eda-webhook-cert\") pod \"metallb-operator-controller-manager-6c7dcf7d55-rgxxw\" (UID: \"b6738e49-fadc-4415-ad91-d6825e908eda\") " pod="metallb-system/metallb-operator-controller-manager-6c7dcf7d55-rgxxw" Nov 22 10:53:17 crc kubenswrapper[4772]: I1122 10:53:17.527064 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cnrv\" (UniqueName: \"kubernetes.io/projected/b6738e49-fadc-4415-ad91-d6825e908eda-kube-api-access-6cnrv\") pod \"metallb-operator-controller-manager-6c7dcf7d55-rgxxw\" (UID: \"b6738e49-fadc-4415-ad91-d6825e908eda\") " pod="metallb-system/metallb-operator-controller-manager-6c7dcf7d55-rgxxw" Nov 22 10:53:17 crc kubenswrapper[4772]: I1122 10:53:17.527326 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b6738e49-fadc-4415-ad91-d6825e908eda-apiservice-cert\") pod \"metallb-operator-controller-manager-6c7dcf7d55-rgxxw\" (UID: \"b6738e49-fadc-4415-ad91-d6825e908eda\") " pod="metallb-system/metallb-operator-controller-manager-6c7dcf7d55-rgxxw" Nov 22 10:53:17 crc kubenswrapper[4772]: I1122 10:53:17.527511 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b6738e49-fadc-4415-ad91-d6825e908eda-webhook-cert\") pod \"metallb-operator-controller-manager-6c7dcf7d55-rgxxw\" (UID: \"b6738e49-fadc-4415-ad91-d6825e908eda\") " pod="metallb-system/metallb-operator-controller-manager-6c7dcf7d55-rgxxw" Nov 22 10:53:17 crc kubenswrapper[4772]: I1122 10:53:17.533611 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b6738e49-fadc-4415-ad91-d6825e908eda-webhook-cert\") pod \"metallb-operator-controller-manager-6c7dcf7d55-rgxxw\" (UID: \"b6738e49-fadc-4415-ad91-d6825e908eda\") " pod="metallb-system/metallb-operator-controller-manager-6c7dcf7d55-rgxxw" Nov 22 10:53:17 crc kubenswrapper[4772]: I1122 10:53:17.534085 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b6738e49-fadc-4415-ad91-d6825e908eda-apiservice-cert\") pod \"metallb-operator-controller-manager-6c7dcf7d55-rgxxw\" (UID: \"b6738e49-fadc-4415-ad91-d6825e908eda\") " pod="metallb-system/metallb-operator-controller-manager-6c7dcf7d55-rgxxw" Nov 22 10:53:17 crc kubenswrapper[4772]: I1122 10:53:17.543885 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cnrv\" (UniqueName: \"kubernetes.io/projected/b6738e49-fadc-4415-ad91-d6825e908eda-kube-api-access-6cnrv\") pod \"metallb-operator-controller-manager-6c7dcf7d55-rgxxw\" (UID: \"b6738e49-fadc-4415-ad91-d6825e908eda\") " pod="metallb-system/metallb-operator-controller-manager-6c7dcf7d55-rgxxw" Nov 22 10:53:17 crc kubenswrapper[4772]: I1122 10:53:17.630626 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6c7dcf7d55-rgxxw" Nov 22 10:53:17 crc kubenswrapper[4772]: I1122 10:53:17.734871 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-6bb4f57cb7-6rncx"] Nov 22 10:53:17 crc kubenswrapper[4772]: I1122 10:53:17.735558 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6bb4f57cb7-6rncx" Nov 22 10:53:17 crc kubenswrapper[4772]: I1122 10:53:17.743527 4772 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Nov 22 10:53:17 crc kubenswrapper[4772]: I1122 10:53:17.743964 4772 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-2rg5g" Nov 22 10:53:17 crc kubenswrapper[4772]: I1122 10:53:17.744165 4772 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 22 10:53:17 crc kubenswrapper[4772]: I1122 10:53:17.765640 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6bb4f57cb7-6rncx"] Nov 22 10:53:17 crc kubenswrapper[4772]: I1122 10:53:17.934790 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6c7dcf7d55-rgxxw"] Nov 22 10:53:17 crc kubenswrapper[4772]: W1122 10:53:17.946187 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6738e49_fadc_4415_ad91_d6825e908eda.slice/crio-d9b44a95a86186c7d33a48f4402bd32bfcfe19313144cca061cfdb3144b496d2 WatchSource:0}: Error finding container d9b44a95a86186c7d33a48f4402bd32bfcfe19313144cca061cfdb3144b496d2: Status 404 returned error can't find the container with id d9b44a95a86186c7d33a48f4402bd32bfcfe19313144cca061cfdb3144b496d2 Nov 22 10:53:17 crc kubenswrapper[4772]: I1122 10:53:17.958768 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m92wv\" (UniqueName: \"kubernetes.io/projected/d3d9ca1d-f272-4a78-8da0-809487360415-kube-api-access-m92wv\") pod \"metallb-operator-webhook-server-6bb4f57cb7-6rncx\" (UID: \"d3d9ca1d-f272-4a78-8da0-809487360415\") " pod="metallb-system/metallb-operator-webhook-server-6bb4f57cb7-6rncx" Nov 22 10:53:17 crc kubenswrapper[4772]: I1122 10:53:17.958836 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d3d9ca1d-f272-4a78-8da0-809487360415-apiservice-cert\") pod \"metallb-operator-webhook-server-6bb4f57cb7-6rncx\" (UID: \"d3d9ca1d-f272-4a78-8da0-809487360415\") " pod="metallb-system/metallb-operator-webhook-server-6bb4f57cb7-6rncx" Nov 22 10:53:17 crc kubenswrapper[4772]: I1122 10:53:17.958855 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d3d9ca1d-f272-4a78-8da0-809487360415-webhook-cert\") pod \"metallb-operator-webhook-server-6bb4f57cb7-6rncx\" (UID: \"d3d9ca1d-f272-4a78-8da0-809487360415\") " pod="metallb-system/metallb-operator-webhook-server-6bb4f57cb7-6rncx" Nov 22 10:53:18 crc kubenswrapper[4772]: I1122 10:53:18.060632 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m92wv\" (UniqueName: 
\"kubernetes.io/projected/d3d9ca1d-f272-4a78-8da0-809487360415-kube-api-access-m92wv\") pod \"metallb-operator-webhook-server-6bb4f57cb7-6rncx\" (UID: \"d3d9ca1d-f272-4a78-8da0-809487360415\") " pod="metallb-system/metallb-operator-webhook-server-6bb4f57cb7-6rncx" Nov 22 10:53:18 crc kubenswrapper[4772]: I1122 10:53:18.060738 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d3d9ca1d-f272-4a78-8da0-809487360415-apiservice-cert\") pod \"metallb-operator-webhook-server-6bb4f57cb7-6rncx\" (UID: \"d3d9ca1d-f272-4a78-8da0-809487360415\") " pod="metallb-system/metallb-operator-webhook-server-6bb4f57cb7-6rncx" Nov 22 10:53:18 crc kubenswrapper[4772]: I1122 10:53:18.060761 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d3d9ca1d-f272-4a78-8da0-809487360415-webhook-cert\") pod \"metallb-operator-webhook-server-6bb4f57cb7-6rncx\" (UID: \"d3d9ca1d-f272-4a78-8da0-809487360415\") " pod="metallb-system/metallb-operator-webhook-server-6bb4f57cb7-6rncx" Nov 22 10:53:18 crc kubenswrapper[4772]: I1122 10:53:18.066300 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d3d9ca1d-f272-4a78-8da0-809487360415-apiservice-cert\") pod \"metallb-operator-webhook-server-6bb4f57cb7-6rncx\" (UID: \"d3d9ca1d-f272-4a78-8da0-809487360415\") " pod="metallb-system/metallb-operator-webhook-server-6bb4f57cb7-6rncx" Nov 22 10:53:18 crc kubenswrapper[4772]: I1122 10:53:18.066704 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d3d9ca1d-f272-4a78-8da0-809487360415-webhook-cert\") pod \"metallb-operator-webhook-server-6bb4f57cb7-6rncx\" (UID: \"d3d9ca1d-f272-4a78-8da0-809487360415\") " pod="metallb-system/metallb-operator-webhook-server-6bb4f57cb7-6rncx" Nov 22 10:53:18 crc kubenswrapper[4772]: I1122 10:53:18.078410 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m92wv\" (UniqueName: \"kubernetes.io/projected/d3d9ca1d-f272-4a78-8da0-809487360415-kube-api-access-m92wv\") pod \"metallb-operator-webhook-server-6bb4f57cb7-6rncx\" (UID: \"d3d9ca1d-f272-4a78-8da0-809487360415\") " pod="metallb-system/metallb-operator-webhook-server-6bb4f57cb7-6rncx" Nov 22 10:53:18 crc kubenswrapper[4772]: I1122 10:53:18.359136 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6bb4f57cb7-6rncx" Nov 22 10:53:18 crc kubenswrapper[4772]: I1122 10:53:18.443779 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6c7dcf7d55-rgxxw" event={"ID":"b6738e49-fadc-4415-ad91-d6825e908eda","Type":"ContainerStarted","Data":"d9b44a95a86186c7d33a48f4402bd32bfcfe19313144cca061cfdb3144b496d2"} Nov 22 10:53:18 crc kubenswrapper[4772]: I1122 10:53:18.580522 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6bb4f57cb7-6rncx"] Nov 22 10:53:18 crc kubenswrapper[4772]: W1122 10:53:18.588839 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3d9ca1d_f272_4a78_8da0_809487360415.slice/crio-5a2b49cfa6ede9fa8201f687400b2af7c215a6091b8258a03816406554a7307c WatchSource:0}: Error finding container 5a2b49cfa6ede9fa8201f687400b2af7c215a6091b8258a03816406554a7307c: Status 404 returned error can't find the container with id 5a2b49cfa6ede9fa8201f687400b2af7c215a6091b8258a03816406554a7307c Nov 22 10:53:19 crc kubenswrapper[4772]: I1122 10:53:19.450299 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6bb4f57cb7-6rncx" event={"ID":"d3d9ca1d-f272-4a78-8da0-809487360415","Type":"ContainerStarted","Data":"5a2b49cfa6ede9fa8201f687400b2af7c215a6091b8258a03816406554a7307c"} Nov 22 10:53:23 crc kubenswrapper[4772]: I1122 10:53:23.483472 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6c7dcf7d55-rgxxw" event={"ID":"b6738e49-fadc-4415-ad91-d6825e908eda","Type":"ContainerStarted","Data":"f2fa968707f6d60c3f3937c4d8e6d07d59548aba9c354bf801229a90c7e449cb"} Nov 22 10:53:23 crc kubenswrapper[4772]: I1122 10:53:23.484667 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6c7dcf7d55-rgxxw" Nov 22 10:53:23 crc kubenswrapper[4772]: I1122 10:53:23.486964 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6bb4f57cb7-6rncx" event={"ID":"d3d9ca1d-f272-4a78-8da0-809487360415","Type":"ContainerStarted","Data":"8006ca437a014deed466a195e8d3e5d5f168a9631d878ceadb42fccf4777b15b"} Nov 22 10:53:23 crc kubenswrapper[4772]: I1122 10:53:23.487489 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-6bb4f57cb7-6rncx" Nov 22 10:53:23 crc kubenswrapper[4772]: I1122 10:53:23.520299 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-6c7dcf7d55-rgxxw" podStartSLOduration=1.757987972 podStartE2EDuration="6.520270327s" podCreationTimestamp="2025-11-22 10:53:17 +0000 UTC" firstStartedPulling="2025-11-22 10:53:17.949261159 +0000 UTC m=+918.188705653" lastFinishedPulling="2025-11-22 10:53:22.711543514 +0000 UTC m=+922.950988008" observedRunningTime="2025-11-22 10:53:23.510078777 +0000 UTC m=+923.749523301" watchObservedRunningTime="2025-11-22 10:53:23.520270327 +0000 UTC m=+923.759714821" Nov 22 10:53:23 crc kubenswrapper[4772]: I1122 10:53:23.568608 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-6bb4f57cb7-6rncx" podStartSLOduration=2.434320155 podStartE2EDuration="6.568579136s" 
podCreationTimestamp="2025-11-22 10:53:17 +0000 UTC" firstStartedPulling="2025-11-22 10:53:18.59288597 +0000 UTC m=+918.832330464" lastFinishedPulling="2025-11-22 10:53:22.727144951 +0000 UTC m=+922.966589445" observedRunningTime="2025-11-22 10:53:23.564694638 +0000 UTC m=+923.804139132" watchObservedRunningTime="2025-11-22 10:53:23.568579136 +0000 UTC m=+923.808023630" Nov 22 10:53:31 crc kubenswrapper[4772]: I1122 10:53:31.540198 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 10:53:31 crc kubenswrapper[4772]: I1122 10:53:31.540748 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 10:53:38 crc kubenswrapper[4772]: I1122 10:53:38.363678 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-6bb4f57cb7-6rncx" Nov 22 10:53:57 crc kubenswrapper[4772]: I1122 10:53:57.633175 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6c7dcf7d55-rgxxw" Nov 22 10:53:58 crc kubenswrapper[4772]: I1122 10:53:58.348768 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-nx26w"] Nov 22 10:53:58 crc kubenswrapper[4772]: I1122 10:53:58.349838 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-nx26w" Nov 22 10:53:58 crc kubenswrapper[4772]: I1122 10:53:58.351603 4772 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Nov 22 10:53:58 crc kubenswrapper[4772]: I1122 10:53:58.351884 4772 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-scxft" Nov 22 10:53:58 crc kubenswrapper[4772]: I1122 10:53:58.359453 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-hj25m"] Nov 22 10:53:58 crc kubenswrapper[4772]: I1122 10:53:58.362012 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-hj25m" Nov 22 10:53:58 crc kubenswrapper[4772]: I1122 10:53:58.363795 4772 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Nov 22 10:53:58 crc kubenswrapper[4772]: I1122 10:53:58.363837 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Nov 22 10:53:58 crc kubenswrapper[4772]: I1122 10:53:58.363970 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-nx26w"] Nov 22 10:53:58 crc kubenswrapper[4772]: I1122 10:53:58.434468 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-wf9qr"] Nov 22 10:53:58 crc kubenswrapper[4772]: I1122 10:53:58.435495 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-wf9qr" Nov 22 10:53:58 crc kubenswrapper[4772]: I1122 10:53:58.438633 4772 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Nov 22 10:53:58 crc kubenswrapper[4772]: I1122 10:53:58.439520 4772 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Nov 22 10:53:58 crc kubenswrapper[4772]: I1122 10:53:58.439526 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Nov 22 10:53:58 crc kubenswrapper[4772]: I1122 10:53:58.439687 4772 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-gc5cz" Nov 22 10:53:58 crc kubenswrapper[4772]: I1122 10:53:58.447872 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6c7b4b5f48-s5jrv"] Nov 22 10:53:58 crc kubenswrapper[4772]: I1122 10:53:58.448885 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-s5jrv" Nov 22 10:53:58 crc kubenswrapper[4772]: I1122 10:53:58.450373 4772 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Nov 22 10:53:58 crc kubenswrapper[4772]: I1122 10:53:58.459838 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-s5jrv"] Nov 22 10:53:58 crc kubenswrapper[4772]: I1122 10:53:58.513760 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6ckc\" (UniqueName: \"kubernetes.io/projected/baf46c6b-a916-4f81-bfd6-447533e1fa95-kube-api-access-j6ckc\") pod \"frr-k8s-webhook-server-6998585d5-nx26w\" (UID: \"baf46c6b-a916-4f81-bfd6-447533e1fa95\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-nx26w" Nov 22 10:53:58 crc kubenswrapper[4772]: I1122 10:53:58.513839 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/0291a442-8b6c-4406-af22-55f24572ffe3-frr-startup\") pod \"frr-k8s-hj25m\" (UID: \"0291a442-8b6c-4406-af22-55f24572ffe3\") " pod="metallb-system/frr-k8s-hj25m" Nov 22 10:53:58 crc kubenswrapper[4772]: I1122 10:53:58.513864 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/0291a442-8b6c-4406-af22-55f24572ffe3-reloader\") pod \"frr-k8s-hj25m\" (UID: \"0291a442-8b6c-4406-af22-55f24572ffe3\") " pod="metallb-system/frr-k8s-hj25m" Nov 22 10:53:58 crc kubenswrapper[4772]: I1122 10:53:58.513939 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0291a442-8b6c-4406-af22-55f24572ffe3-metrics-certs\") pod \"frr-k8s-hj25m\" (UID: \"0291a442-8b6c-4406-af22-55f24572ffe3\") " pod="metallb-system/frr-k8s-hj25m" Nov 22 10:53:58 crc kubenswrapper[4772]: I1122 10:53:58.514016 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/0291a442-8b6c-4406-af22-55f24572ffe3-frr-conf\") pod \"frr-k8s-hj25m\" (UID: \"0291a442-8b6c-4406-af22-55f24572ffe3\") " pod="metallb-system/frr-k8s-hj25m" Nov 22 10:53:58 crc kubenswrapper[4772]: I1122 10:53:58.514073 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" 
(UniqueName: \"kubernetes.io/empty-dir/0291a442-8b6c-4406-af22-55f24572ffe3-metrics\") pod \"frr-k8s-hj25m\" (UID: \"0291a442-8b6c-4406-af22-55f24572ffe3\") " pod="metallb-system/frr-k8s-hj25m" Nov 22 10:53:58 crc kubenswrapper[4772]: I1122 10:53:58.514235 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/0291a442-8b6c-4406-af22-55f24572ffe3-frr-sockets\") pod \"frr-k8s-hj25m\" (UID: \"0291a442-8b6c-4406-af22-55f24572ffe3\") " pod="metallb-system/frr-k8s-hj25m" Nov 22 10:53:58 crc kubenswrapper[4772]: I1122 10:53:58.514284 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/baf46c6b-a916-4f81-bfd6-447533e1fa95-cert\") pod \"frr-k8s-webhook-server-6998585d5-nx26w\" (UID: \"baf46c6b-a916-4f81-bfd6-447533e1fa95\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-nx26w" Nov 22 10:53:58 crc kubenswrapper[4772]: I1122 10:53:58.514358 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnl4h\" (UniqueName: \"kubernetes.io/projected/0291a442-8b6c-4406-af22-55f24572ffe3-kube-api-access-rnl4h\") pod \"frr-k8s-hj25m\" (UID: \"0291a442-8b6c-4406-af22-55f24572ffe3\") " pod="metallb-system/frr-k8s-hj25m" Nov 22 10:53:58 crc kubenswrapper[4772]: I1122 10:53:58.615798 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/72c6d910-537c-4020-a0e0-a38fe68636ac-metrics-certs\") pod \"speaker-wf9qr\" (UID: \"72c6d910-537c-4020-a0e0-a38fe68636ac\") " pod="metallb-system/speaker-wf9qr" Nov 22 10:53:58 crc kubenswrapper[4772]: I1122 10:53:58.615855 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/0291a442-8b6c-4406-af22-55f24572ffe3-frr-sockets\") pod \"frr-k8s-hj25m\" (UID: \"0291a442-8b6c-4406-af22-55f24572ffe3\") " pod="metallb-system/frr-k8s-hj25m" Nov 22 10:53:58 crc kubenswrapper[4772]: I1122 10:53:58.615888 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/baf46c6b-a916-4f81-bfd6-447533e1fa95-cert\") pod \"frr-k8s-webhook-server-6998585d5-nx26w\" (UID: \"baf46c6b-a916-4f81-bfd6-447533e1fa95\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-nx26w" Nov 22 10:53:58 crc kubenswrapper[4772]: I1122 10:53:58.615989 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slmnj\" (UniqueName: \"kubernetes.io/projected/72c6d910-537c-4020-a0e0-a38fe68636ac-kube-api-access-slmnj\") pod \"speaker-wf9qr\" (UID: \"72c6d910-537c-4020-a0e0-a38fe68636ac\") " pod="metallb-system/speaker-wf9qr" Nov 22 10:53:58 crc kubenswrapper[4772]: I1122 10:53:58.616020 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnl4h\" (UniqueName: \"kubernetes.io/projected/0291a442-8b6c-4406-af22-55f24572ffe3-kube-api-access-rnl4h\") pod \"frr-k8s-hj25m\" (UID: \"0291a442-8b6c-4406-af22-55f24572ffe3\") " pod="metallb-system/frr-k8s-hj25m" Nov 22 10:53:58 crc kubenswrapper[4772]: I1122 10:53:58.616414 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/21aaa49e-824d-4f20-ad4d-aea95671788e-metrics-certs\") pod 
\"controller-6c7b4b5f48-s5jrv\" (UID: \"21aaa49e-824d-4f20-ad4d-aea95671788e\") " pod="metallb-system/controller-6c7b4b5f48-s5jrv" Nov 22 10:53:58 crc kubenswrapper[4772]: I1122 10:53:58.616496 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/72c6d910-537c-4020-a0e0-a38fe68636ac-metallb-excludel2\") pod \"speaker-wf9qr\" (UID: \"72c6d910-537c-4020-a0e0-a38fe68636ac\") " pod="metallb-system/speaker-wf9qr" Nov 22 10:53:58 crc kubenswrapper[4772]: I1122 10:53:58.616330 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/0291a442-8b6c-4406-af22-55f24572ffe3-frr-sockets\") pod \"frr-k8s-hj25m\" (UID: \"0291a442-8b6c-4406-af22-55f24572ffe3\") " pod="metallb-system/frr-k8s-hj25m" Nov 22 10:53:58 crc kubenswrapper[4772]: I1122 10:53:58.616569 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/72c6d910-537c-4020-a0e0-a38fe68636ac-memberlist\") pod \"speaker-wf9qr\" (UID: \"72c6d910-537c-4020-a0e0-a38fe68636ac\") " pod="metallb-system/speaker-wf9qr" Nov 22 10:53:58 crc kubenswrapper[4772]: I1122 10:53:58.616605 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6ckc\" (UniqueName: \"kubernetes.io/projected/baf46c6b-a916-4f81-bfd6-447533e1fa95-kube-api-access-j6ckc\") pod \"frr-k8s-webhook-server-6998585d5-nx26w\" (UID: \"baf46c6b-a916-4f81-bfd6-447533e1fa95\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-nx26w" Nov 22 10:53:58 crc kubenswrapper[4772]: I1122 10:53:58.616664 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/0291a442-8b6c-4406-af22-55f24572ffe3-frr-startup\") pod \"frr-k8s-hj25m\" (UID: \"0291a442-8b6c-4406-af22-55f24572ffe3\") " pod="metallb-system/frr-k8s-hj25m" Nov 22 10:53:58 crc kubenswrapper[4772]: I1122 10:53:58.616709 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/0291a442-8b6c-4406-af22-55f24572ffe3-reloader\") pod \"frr-k8s-hj25m\" (UID: \"0291a442-8b6c-4406-af22-55f24572ffe3\") " pod="metallb-system/frr-k8s-hj25m" Nov 22 10:53:58 crc kubenswrapper[4772]: I1122 10:53:58.616997 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/0291a442-8b6c-4406-af22-55f24572ffe3-reloader\") pod \"frr-k8s-hj25m\" (UID: \"0291a442-8b6c-4406-af22-55f24572ffe3\") " pod="metallb-system/frr-k8s-hj25m" Nov 22 10:53:58 crc kubenswrapper[4772]: I1122 10:53:58.617774 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/0291a442-8b6c-4406-af22-55f24572ffe3-frr-startup\") pod \"frr-k8s-hj25m\" (UID: \"0291a442-8b6c-4406-af22-55f24572ffe3\") " pod="metallb-system/frr-k8s-hj25m" Nov 22 10:53:58 crc kubenswrapper[4772]: I1122 10:53:58.616736 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/21aaa49e-824d-4f20-ad4d-aea95671788e-cert\") pod \"controller-6c7b4b5f48-s5jrv\" (UID: \"21aaa49e-824d-4f20-ad4d-aea95671788e\") " pod="metallb-system/controller-6c7b4b5f48-s5jrv" Nov 22 10:53:58 crc kubenswrapper[4772]: I1122 10:53:58.617947 4772 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5tv5\" (UniqueName: \"kubernetes.io/projected/21aaa49e-824d-4f20-ad4d-aea95671788e-kube-api-access-d5tv5\") pod \"controller-6c7b4b5f48-s5jrv\" (UID: \"21aaa49e-824d-4f20-ad4d-aea95671788e\") " pod="metallb-system/controller-6c7b4b5f48-s5jrv" Nov 22 10:53:58 crc kubenswrapper[4772]: I1122 10:53:58.617984 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0291a442-8b6c-4406-af22-55f24572ffe3-metrics-certs\") pod \"frr-k8s-hj25m\" (UID: \"0291a442-8b6c-4406-af22-55f24572ffe3\") " pod="metallb-system/frr-k8s-hj25m" Nov 22 10:53:58 crc kubenswrapper[4772]: I1122 10:53:58.618007 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/0291a442-8b6c-4406-af22-55f24572ffe3-frr-conf\") pod \"frr-k8s-hj25m\" (UID: \"0291a442-8b6c-4406-af22-55f24572ffe3\") " pod="metallb-system/frr-k8s-hj25m" Nov 22 10:53:58 crc kubenswrapper[4772]: I1122 10:53:58.618030 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/0291a442-8b6c-4406-af22-55f24572ffe3-metrics\") pod \"frr-k8s-hj25m\" (UID: \"0291a442-8b6c-4406-af22-55f24572ffe3\") " pod="metallb-system/frr-k8s-hj25m" Nov 22 10:53:58 crc kubenswrapper[4772]: I1122 10:53:58.618290 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/0291a442-8b6c-4406-af22-55f24572ffe3-frr-conf\") pod \"frr-k8s-hj25m\" (UID: \"0291a442-8b6c-4406-af22-55f24572ffe3\") " pod="metallb-system/frr-k8s-hj25m" Nov 22 10:53:58 crc kubenswrapper[4772]: I1122 10:53:58.618435 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/0291a442-8b6c-4406-af22-55f24572ffe3-metrics\") pod \"frr-k8s-hj25m\" (UID: \"0291a442-8b6c-4406-af22-55f24572ffe3\") " pod="metallb-system/frr-k8s-hj25m" Nov 22 10:53:58 crc kubenswrapper[4772]: I1122 10:53:58.621453 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0291a442-8b6c-4406-af22-55f24572ffe3-metrics-certs\") pod \"frr-k8s-hj25m\" (UID: \"0291a442-8b6c-4406-af22-55f24572ffe3\") " pod="metallb-system/frr-k8s-hj25m" Nov 22 10:53:58 crc kubenswrapper[4772]: I1122 10:53:58.621472 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/baf46c6b-a916-4f81-bfd6-447533e1fa95-cert\") pod \"frr-k8s-webhook-server-6998585d5-nx26w\" (UID: \"baf46c6b-a916-4f81-bfd6-447533e1fa95\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-nx26w" Nov 22 10:53:58 crc kubenswrapper[4772]: I1122 10:53:58.641821 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnl4h\" (UniqueName: \"kubernetes.io/projected/0291a442-8b6c-4406-af22-55f24572ffe3-kube-api-access-rnl4h\") pod \"frr-k8s-hj25m\" (UID: \"0291a442-8b6c-4406-af22-55f24572ffe3\") " pod="metallb-system/frr-k8s-hj25m" Nov 22 10:53:58 crc kubenswrapper[4772]: I1122 10:53:58.646569 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6ckc\" (UniqueName: \"kubernetes.io/projected/baf46c6b-a916-4f81-bfd6-447533e1fa95-kube-api-access-j6ckc\") pod \"frr-k8s-webhook-server-6998585d5-nx26w\" (UID: 
\"baf46c6b-a916-4f81-bfd6-447533e1fa95\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-nx26w" Nov 22 10:53:58 crc kubenswrapper[4772]: I1122 10:53:58.668492 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-nx26w" Nov 22 10:53:58 crc kubenswrapper[4772]: I1122 10:53:58.695878 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-hj25m" Nov 22 10:53:58 crc kubenswrapper[4772]: I1122 10:53:58.718756 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/21aaa49e-824d-4f20-ad4d-aea95671788e-cert\") pod \"controller-6c7b4b5f48-s5jrv\" (UID: \"21aaa49e-824d-4f20-ad4d-aea95671788e\") " pod="metallb-system/controller-6c7b4b5f48-s5jrv" Nov 22 10:53:58 crc kubenswrapper[4772]: I1122 10:53:58.719038 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5tv5\" (UniqueName: \"kubernetes.io/projected/21aaa49e-824d-4f20-ad4d-aea95671788e-kube-api-access-d5tv5\") pod \"controller-6c7b4b5f48-s5jrv\" (UID: \"21aaa49e-824d-4f20-ad4d-aea95671788e\") " pod="metallb-system/controller-6c7b4b5f48-s5jrv" Nov 22 10:53:58 crc kubenswrapper[4772]: I1122 10:53:58.719135 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/72c6d910-537c-4020-a0e0-a38fe68636ac-metrics-certs\") pod \"speaker-wf9qr\" (UID: \"72c6d910-537c-4020-a0e0-a38fe68636ac\") " pod="metallb-system/speaker-wf9qr" Nov 22 10:53:58 crc kubenswrapper[4772]: I1122 10:53:58.719176 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slmnj\" (UniqueName: \"kubernetes.io/projected/72c6d910-537c-4020-a0e0-a38fe68636ac-kube-api-access-slmnj\") pod \"speaker-wf9qr\" (UID: \"72c6d910-537c-4020-a0e0-a38fe68636ac\") " pod="metallb-system/speaker-wf9qr" Nov 22 10:53:58 crc kubenswrapper[4772]: I1122 10:53:58.719225 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/21aaa49e-824d-4f20-ad4d-aea95671788e-metrics-certs\") pod \"controller-6c7b4b5f48-s5jrv\" (UID: \"21aaa49e-824d-4f20-ad4d-aea95671788e\") " pod="metallb-system/controller-6c7b4b5f48-s5jrv" Nov 22 10:53:58 crc kubenswrapper[4772]: I1122 10:53:58.719251 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/72c6d910-537c-4020-a0e0-a38fe68636ac-metallb-excludel2\") pod \"speaker-wf9qr\" (UID: \"72c6d910-537c-4020-a0e0-a38fe68636ac\") " pod="metallb-system/speaker-wf9qr" Nov 22 10:53:58 crc kubenswrapper[4772]: I1122 10:53:58.719273 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/72c6d910-537c-4020-a0e0-a38fe68636ac-memberlist\") pod \"speaker-wf9qr\" (UID: \"72c6d910-537c-4020-a0e0-a38fe68636ac\") " pod="metallb-system/speaker-wf9qr" Nov 22 10:53:58 crc kubenswrapper[4772]: E1122 10:53:58.719381 4772 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 22 10:53:58 crc kubenswrapper[4772]: E1122 10:53:58.719431 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/72c6d910-537c-4020-a0e0-a38fe68636ac-memberlist podName:72c6d910-537c-4020-a0e0-a38fe68636ac nodeName:}" failed. 
No retries permitted until 2025-11-22 10:53:59.219412765 +0000 UTC m=+959.458857259 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/72c6d910-537c-4020-a0e0-a38fe68636ac-memberlist") pod "speaker-wf9qr" (UID: "72c6d910-537c-4020-a0e0-a38fe68636ac") : secret "metallb-memberlist" not found Nov 22 10:53:58 crc kubenswrapper[4772]: I1122 10:53:58.720438 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/72c6d910-537c-4020-a0e0-a38fe68636ac-metallb-excludel2\") pod \"speaker-wf9qr\" (UID: \"72c6d910-537c-4020-a0e0-a38fe68636ac\") " pod="metallb-system/speaker-wf9qr" Nov 22 10:53:58 crc kubenswrapper[4772]: I1122 10:53:58.722837 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/72c6d910-537c-4020-a0e0-a38fe68636ac-metrics-certs\") pod \"speaker-wf9qr\" (UID: \"72c6d910-537c-4020-a0e0-a38fe68636ac\") " pod="metallb-system/speaker-wf9qr" Nov 22 10:53:58 crc kubenswrapper[4772]: I1122 10:53:58.723727 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/21aaa49e-824d-4f20-ad4d-aea95671788e-metrics-certs\") pod \"controller-6c7b4b5f48-s5jrv\" (UID: \"21aaa49e-824d-4f20-ad4d-aea95671788e\") " pod="metallb-system/controller-6c7b4b5f48-s5jrv" Nov 22 10:53:58 crc kubenswrapper[4772]: I1122 10:53:58.723841 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/21aaa49e-824d-4f20-ad4d-aea95671788e-cert\") pod \"controller-6c7b4b5f48-s5jrv\" (UID: \"21aaa49e-824d-4f20-ad4d-aea95671788e\") " pod="metallb-system/controller-6c7b4b5f48-s5jrv" Nov 22 10:53:58 crc kubenswrapper[4772]: I1122 10:53:58.734975 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5tv5\" (UniqueName: \"kubernetes.io/projected/21aaa49e-824d-4f20-ad4d-aea95671788e-kube-api-access-d5tv5\") pod \"controller-6c7b4b5f48-s5jrv\" (UID: \"21aaa49e-824d-4f20-ad4d-aea95671788e\") " pod="metallb-system/controller-6c7b4b5f48-s5jrv" Nov 22 10:53:58 crc kubenswrapper[4772]: I1122 10:53:58.735374 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slmnj\" (UniqueName: \"kubernetes.io/projected/72c6d910-537c-4020-a0e0-a38fe68636ac-kube-api-access-slmnj\") pod \"speaker-wf9qr\" (UID: \"72c6d910-537c-4020-a0e0-a38fe68636ac\") " pod="metallb-system/speaker-wf9qr" Nov 22 10:53:58 crc kubenswrapper[4772]: I1122 10:53:58.761114 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-s5jrv" Nov 22 10:53:58 crc kubenswrapper[4772]: I1122 10:53:58.924562 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-nx26w"] Nov 22 10:53:59 crc kubenswrapper[4772]: I1122 10:53:59.232021 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/72c6d910-537c-4020-a0e0-a38fe68636ac-memberlist\") pod \"speaker-wf9qr\" (UID: \"72c6d910-537c-4020-a0e0-a38fe68636ac\") " pod="metallb-system/speaker-wf9qr" Nov 22 10:53:59 crc kubenswrapper[4772]: E1122 10:53:59.232225 4772 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 22 10:53:59 crc kubenswrapper[4772]: E1122 10:53:59.232448 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/72c6d910-537c-4020-a0e0-a38fe68636ac-memberlist podName:72c6d910-537c-4020-a0e0-a38fe68636ac nodeName:}" failed. No retries permitted until 2025-11-22 10:54:00.232432252 +0000 UTC m=+960.471876746 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/72c6d910-537c-4020-a0e0-a38fe68636ac-memberlist") pod "speaker-wf9qr" (UID: "72c6d910-537c-4020-a0e0-a38fe68636ac") : secret "metallb-memberlist" not found Nov 22 10:53:59 crc kubenswrapper[4772]: I1122 10:53:59.274062 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-s5jrv"] Nov 22 10:53:59 crc kubenswrapper[4772]: W1122 10:53:59.282713 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21aaa49e_824d_4f20_ad4d_aea95671788e.slice/crio-1a811d3bb72e3e007f4805e8db98ac44affa3a8d44ae8522e6f25332fffdc9ee WatchSource:0}: Error finding container 1a811d3bb72e3e007f4805e8db98ac44affa3a8d44ae8522e6f25332fffdc9ee: Status 404 returned error can't find the container with id 1a811d3bb72e3e007f4805e8db98ac44affa3a8d44ae8522e6f25332fffdc9ee Nov 22 10:53:59 crc kubenswrapper[4772]: I1122 10:53:59.682426 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-s5jrv" event={"ID":"21aaa49e-824d-4f20-ad4d-aea95671788e","Type":"ContainerStarted","Data":"27948e5080a4ebb3a73ad9403ef7a91b63e999095aa2311d918732110ab27cce"} Nov 22 10:53:59 crc kubenswrapper[4772]: I1122 10:53:59.682481 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-s5jrv" event={"ID":"21aaa49e-824d-4f20-ad4d-aea95671788e","Type":"ContainerStarted","Data":"038b6c5dc044ebbd40bb17d2c4b3a93be30f0417d6fdcb34fe159f4327c64311"} Nov 22 10:53:59 crc kubenswrapper[4772]: I1122 10:53:59.682497 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-s5jrv" event={"ID":"21aaa49e-824d-4f20-ad4d-aea95671788e","Type":"ContainerStarted","Data":"1a811d3bb72e3e007f4805e8db98ac44affa3a8d44ae8522e6f25332fffdc9ee"} Nov 22 10:53:59 crc kubenswrapper[4772]: I1122 10:53:59.682638 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6c7b4b5f48-s5jrv" Nov 22 10:53:59 crc kubenswrapper[4772]: I1122 10:53:59.683668 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-nx26w" 
event={"ID":"baf46c6b-a916-4f81-bfd6-447533e1fa95","Type":"ContainerStarted","Data":"4e0f340ccff258d37360184770a4ac3060a5d6e8e5022589f01d9d8787cfb04d"} Nov 22 10:53:59 crc kubenswrapper[4772]: I1122 10:53:59.685923 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hj25m" event={"ID":"0291a442-8b6c-4406-af22-55f24572ffe3","Type":"ContainerStarted","Data":"037dd1c2328b33dc79f087294de329044864299ecaebb6f1385575c3980085c9"} Nov 22 10:53:59 crc kubenswrapper[4772]: I1122 10:53:59.703840 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6c7b4b5f48-s5jrv" podStartSLOduration=1.703758568 podStartE2EDuration="1.703758568s" podCreationTimestamp="2025-11-22 10:53:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:53:59.699364746 +0000 UTC m=+959.938809240" watchObservedRunningTime="2025-11-22 10:53:59.703758568 +0000 UTC m=+959.943203062" Nov 22 10:54:00 crc kubenswrapper[4772]: I1122 10:54:00.250502 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/72c6d910-537c-4020-a0e0-a38fe68636ac-memberlist\") pod \"speaker-wf9qr\" (UID: \"72c6d910-537c-4020-a0e0-a38fe68636ac\") " pod="metallb-system/speaker-wf9qr" Nov 22 10:54:00 crc kubenswrapper[4772]: I1122 10:54:00.261912 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/72c6d910-537c-4020-a0e0-a38fe68636ac-memberlist\") pod \"speaker-wf9qr\" (UID: \"72c6d910-537c-4020-a0e0-a38fe68636ac\") " pod="metallb-system/speaker-wf9qr" Nov 22 10:54:00 crc kubenswrapper[4772]: I1122 10:54:00.547630 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-wf9qr" Nov 22 10:54:00 crc kubenswrapper[4772]: I1122 10:54:00.702425 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-wf9qr" event={"ID":"72c6d910-537c-4020-a0e0-a38fe68636ac","Type":"ContainerStarted","Data":"b45643699040f8199a8f955b041718df05e60cc8e96512b23f4101642e74bb90"} Nov 22 10:54:01 crc kubenswrapper[4772]: I1122 10:54:01.533314 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 10:54:01 crc kubenswrapper[4772]: I1122 10:54:01.533383 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 10:54:01 crc kubenswrapper[4772]: I1122 10:54:01.709995 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-wf9qr" event={"ID":"72c6d910-537c-4020-a0e0-a38fe68636ac","Type":"ContainerStarted","Data":"0fcaa876c3107d53c5919dace953bf05e213b26e12ca6466daf8eed7e210574b"} Nov 22 10:54:01 crc kubenswrapper[4772]: I1122 10:54:01.710035 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-wf9qr" event={"ID":"72c6d910-537c-4020-a0e0-a38fe68636ac","Type":"ContainerStarted","Data":"b40e0cf39a9fcb89c29150a0fd31dc01c965fbc329c2af96f8b916f932118749"} Nov 22 10:54:01 crc kubenswrapper[4772]: I1122 10:54:01.710795 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-wf9qr" Nov 22 10:54:01 crc kubenswrapper[4772]: I1122 10:54:01.735685 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-wf9qr" podStartSLOduration=3.735669171 podStartE2EDuration="3.735669171s" podCreationTimestamp="2025-11-22 10:53:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:54:01.733417704 +0000 UTC m=+961.972862218" watchObservedRunningTime="2025-11-22 10:54:01.735669171 +0000 UTC m=+961.975113665" Nov 22 10:54:05 crc kubenswrapper[4772]: I1122 10:54:05.735013 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-nx26w" event={"ID":"baf46c6b-a916-4f81-bfd6-447533e1fa95","Type":"ContainerStarted","Data":"58b5d2afb9a3233d700473ef444a6b100568200d2d08ba73ed8734b3adb0f9a7"} Nov 22 10:54:05 crc kubenswrapper[4772]: I1122 10:54:05.735657 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-6998585d5-nx26w" Nov 22 10:54:05 crc kubenswrapper[4772]: I1122 10:54:05.737194 4772 generic.go:334] "Generic (PLEG): container finished" podID="0291a442-8b6c-4406-af22-55f24572ffe3" containerID="453b179734b3caf397cbe36ff8917b219effd310b17d6a4d58562b0cc0d3e73d" exitCode=0 Nov 22 10:54:05 crc kubenswrapper[4772]: I1122 10:54:05.737235 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hj25m" event={"ID":"0291a442-8b6c-4406-af22-55f24572ffe3","Type":"ContainerDied","Data":"453b179734b3caf397cbe36ff8917b219effd310b17d6a4d58562b0cc0d3e73d"} Nov 22 10:54:05 crc 
kubenswrapper[4772]: I1122 10:54:05.750149 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-6998585d5-nx26w" podStartSLOduration=1.570808634 podStartE2EDuration="7.750130593s" podCreationTimestamp="2025-11-22 10:53:58 +0000 UTC" firstStartedPulling="2025-11-22 10:53:58.935014352 +0000 UTC m=+959.174458846" lastFinishedPulling="2025-11-22 10:54:05.114336311 +0000 UTC m=+965.353780805" observedRunningTime="2025-11-22 10:54:05.747601178 +0000 UTC m=+965.987045672" watchObservedRunningTime="2025-11-22 10:54:05.750130593 +0000 UTC m=+965.989575107" Nov 22 10:54:06 crc kubenswrapper[4772]: I1122 10:54:06.744517 4772 generic.go:334] "Generic (PLEG): container finished" podID="0291a442-8b6c-4406-af22-55f24572ffe3" containerID="4309b494d771857abe9ab2b37bf685fc54e5e2eb83f9d695756aa240b04a7b36" exitCode=0 Nov 22 10:54:06 crc kubenswrapper[4772]: I1122 10:54:06.744593 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hj25m" event={"ID":"0291a442-8b6c-4406-af22-55f24572ffe3","Type":"ContainerDied","Data":"4309b494d771857abe9ab2b37bf685fc54e5e2eb83f9d695756aa240b04a7b36"} Nov 22 10:54:07 crc kubenswrapper[4772]: I1122 10:54:07.753197 4772 generic.go:334] "Generic (PLEG): container finished" podID="0291a442-8b6c-4406-af22-55f24572ffe3" containerID="42bc6c1de868bcee2c228f85b19dcc7d116a55cf6d59050e1f5fcfa38ef8fab9" exitCode=0 Nov 22 10:54:07 crc kubenswrapper[4772]: I1122 10:54:07.753303 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hj25m" event={"ID":"0291a442-8b6c-4406-af22-55f24572ffe3","Type":"ContainerDied","Data":"42bc6c1de868bcee2c228f85b19dcc7d116a55cf6d59050e1f5fcfa38ef8fab9"} Nov 22 10:54:08 crc kubenswrapper[4772]: I1122 10:54:08.772752 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hj25m" event={"ID":"0291a442-8b6c-4406-af22-55f24572ffe3","Type":"ContainerStarted","Data":"f83f00a40e77e063dd3467a000be495a3234b6f187f6017fd17f7d28ad61435b"} Nov 22 10:54:08 crc kubenswrapper[4772]: I1122 10:54:08.773178 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-hj25m" Nov 22 10:54:08 crc kubenswrapper[4772]: I1122 10:54:08.773192 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hj25m" event={"ID":"0291a442-8b6c-4406-af22-55f24572ffe3","Type":"ContainerStarted","Data":"ff726b139846e8a9c5bfbe98c4973597c0b3dc0f61f535b9ea1151b119c0b907"} Nov 22 10:54:08 crc kubenswrapper[4772]: I1122 10:54:08.773203 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hj25m" event={"ID":"0291a442-8b6c-4406-af22-55f24572ffe3","Type":"ContainerStarted","Data":"b601af4d50e59ba3576e8d8b961bbb94f1a1e7aabcab4a1a52d3947014700185"} Nov 22 10:54:08 crc kubenswrapper[4772]: I1122 10:54:08.773212 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hj25m" event={"ID":"0291a442-8b6c-4406-af22-55f24572ffe3","Type":"ContainerStarted","Data":"c291103881099e8dd87fe6eef7b7d1d5cc9dfecb3a3e80aefb611eb0030242cc"} Nov 22 10:54:08 crc kubenswrapper[4772]: I1122 10:54:08.773221 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hj25m" event={"ID":"0291a442-8b6c-4406-af22-55f24572ffe3","Type":"ContainerStarted","Data":"0231b0c85e8fa335e6ad4398e0a8b8c99b129589b527c8373521371fa2b20715"} Nov 22 10:54:08 crc kubenswrapper[4772]: I1122 10:54:08.773228 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/frr-k8s-hj25m" event={"ID":"0291a442-8b6c-4406-af22-55f24572ffe3","Type":"ContainerStarted","Data":"d7ca5135e0f82388889c0e0d06c88be1447ba317997ed336ecc49c95525d8a48"} Nov 22 10:54:08 crc kubenswrapper[4772]: I1122 10:54:08.811515 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-hj25m" podStartSLOduration=4.583322115 podStartE2EDuration="10.811473356s" podCreationTimestamp="2025-11-22 10:53:58 +0000 UTC" firstStartedPulling="2025-11-22 10:53:58.906755363 +0000 UTC m=+959.146199847" lastFinishedPulling="2025-11-22 10:54:05.134906594 +0000 UTC m=+965.374351088" observedRunningTime="2025-11-22 10:54:08.79865279 +0000 UTC m=+969.038097294" watchObservedRunningTime="2025-11-22 10:54:08.811473356 +0000 UTC m=+969.050917850" Nov 22 10:54:10 crc kubenswrapper[4772]: I1122 10:54:10.551232 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-wf9qr" Nov 22 10:54:12 crc kubenswrapper[4772]: I1122 10:54:12.283705 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931amhlqk"] Nov 22 10:54:12 crc kubenswrapper[4772]: I1122 10:54:12.285674 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931amhlqk" Nov 22 10:54:12 crc kubenswrapper[4772]: I1122 10:54:12.288222 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 22 10:54:12 crc kubenswrapper[4772]: I1122 10:54:12.303992 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931amhlqk"] Nov 22 10:54:12 crc kubenswrapper[4772]: I1122 10:54:12.448502 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q58ft\" (UniqueName: \"kubernetes.io/projected/55a36e51-acb5-44e1-9394-a1280c867770-kube-api-access-q58ft\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931amhlqk\" (UID: \"55a36e51-acb5-44e1-9394-a1280c867770\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931amhlqk" Nov 22 10:54:12 crc kubenswrapper[4772]: I1122 10:54:12.448559 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/55a36e51-acb5-44e1-9394-a1280c867770-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931amhlqk\" (UID: \"55a36e51-acb5-44e1-9394-a1280c867770\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931amhlqk" Nov 22 10:54:12 crc kubenswrapper[4772]: I1122 10:54:12.448600 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/55a36e51-acb5-44e1-9394-a1280c867770-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931amhlqk\" (UID: \"55a36e51-acb5-44e1-9394-a1280c867770\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931amhlqk" Nov 22 10:54:12 crc kubenswrapper[4772]: I1122 10:54:12.549864 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q58ft\" (UniqueName: \"kubernetes.io/projected/55a36e51-acb5-44e1-9394-a1280c867770-kube-api-access-q58ft\") pod 
\"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931amhlqk\" (UID: \"55a36e51-acb5-44e1-9394-a1280c867770\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931amhlqk" Nov 22 10:54:12 crc kubenswrapper[4772]: I1122 10:54:12.549927 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/55a36e51-acb5-44e1-9394-a1280c867770-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931amhlqk\" (UID: \"55a36e51-acb5-44e1-9394-a1280c867770\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931amhlqk" Nov 22 10:54:12 crc kubenswrapper[4772]: I1122 10:54:12.549959 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/55a36e51-acb5-44e1-9394-a1280c867770-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931amhlqk\" (UID: \"55a36e51-acb5-44e1-9394-a1280c867770\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931amhlqk" Nov 22 10:54:12 crc kubenswrapper[4772]: I1122 10:54:12.550519 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/55a36e51-acb5-44e1-9394-a1280c867770-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931amhlqk\" (UID: \"55a36e51-acb5-44e1-9394-a1280c867770\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931amhlqk" Nov 22 10:54:12 crc kubenswrapper[4772]: I1122 10:54:12.550536 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/55a36e51-acb5-44e1-9394-a1280c867770-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931amhlqk\" (UID: \"55a36e51-acb5-44e1-9394-a1280c867770\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931amhlqk" Nov 22 10:54:12 crc kubenswrapper[4772]: I1122 10:54:12.567647 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q58ft\" (UniqueName: \"kubernetes.io/projected/55a36e51-acb5-44e1-9394-a1280c867770-kube-api-access-q58ft\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931amhlqk\" (UID: \"55a36e51-acb5-44e1-9394-a1280c867770\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931amhlqk" Nov 22 10:54:12 crc kubenswrapper[4772]: I1122 10:54:12.635411 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931amhlqk" Nov 22 10:54:13 crc kubenswrapper[4772]: I1122 10:54:13.018653 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931amhlqk"] Nov 22 10:54:13 crc kubenswrapper[4772]: W1122 10:54:13.025235 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55a36e51_acb5_44e1_9394_a1280c867770.slice/crio-500ce1f76988d14682c8524a23a5ea574e9ddcb209ce9ec58b7e7c1bf5aa0ead WatchSource:0}: Error finding container 500ce1f76988d14682c8524a23a5ea574e9ddcb209ce9ec58b7e7c1bf5aa0ead: Status 404 returned error can't find the container with id 500ce1f76988d14682c8524a23a5ea574e9ddcb209ce9ec58b7e7c1bf5aa0ead Nov 22 10:54:13 crc kubenswrapper[4772]: I1122 10:54:13.696900 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-hj25m" Nov 22 10:54:13 crc kubenswrapper[4772]: I1122 10:54:13.735531 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-hj25m" Nov 22 10:54:13 crc kubenswrapper[4772]: I1122 10:54:13.802182 4772 generic.go:334] "Generic (PLEG): container finished" podID="55a36e51-acb5-44e1-9394-a1280c867770" containerID="d3e864814c0754b908b41036c7bd00ed62ce70700b33011df928fb953999eadb" exitCode=0 Nov 22 10:54:13 crc kubenswrapper[4772]: I1122 10:54:13.802423 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931amhlqk" event={"ID":"55a36e51-acb5-44e1-9394-a1280c867770","Type":"ContainerDied","Data":"d3e864814c0754b908b41036c7bd00ed62ce70700b33011df928fb953999eadb"} Nov 22 10:54:13 crc kubenswrapper[4772]: I1122 10:54:13.802454 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931amhlqk" event={"ID":"55a36e51-acb5-44e1-9394-a1280c867770","Type":"ContainerStarted","Data":"500ce1f76988d14682c8524a23a5ea574e9ddcb209ce9ec58b7e7c1bf5aa0ead"} Nov 22 10:54:17 crc kubenswrapper[4772]: I1122 10:54:17.829793 4772 generic.go:334] "Generic (PLEG): container finished" podID="55a36e51-acb5-44e1-9394-a1280c867770" containerID="1e6179d952bfef3cad02731f18a5c0c2666cb8d2c1f3ad53bdd9c2ddf92c73f4" exitCode=0 Nov 22 10:54:17 crc kubenswrapper[4772]: I1122 10:54:17.829836 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931amhlqk" event={"ID":"55a36e51-acb5-44e1-9394-a1280c867770","Type":"ContainerDied","Data":"1e6179d952bfef3cad02731f18a5c0c2666cb8d2c1f3ad53bdd9c2ddf92c73f4"} Nov 22 10:54:18 crc kubenswrapper[4772]: I1122 10:54:18.674344 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-6998585d5-nx26w" Nov 22 10:54:18 crc kubenswrapper[4772]: I1122 10:54:18.700212 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-hj25m" Nov 22 10:54:18 crc kubenswrapper[4772]: I1122 10:54:18.764704 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6c7b4b5f48-s5jrv" Nov 22 10:54:18 crc kubenswrapper[4772]: I1122 10:54:18.837888 4772 generic.go:334] "Generic (PLEG): container finished" podID="55a36e51-acb5-44e1-9394-a1280c867770" 
containerID="9aac4d70ab28a153befc022b4cbbac5414ba60e8022338f9f135112f37fdd526" exitCode=0 Nov 22 10:54:18 crc kubenswrapper[4772]: I1122 10:54:18.837982 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931amhlqk" event={"ID":"55a36e51-acb5-44e1-9394-a1280c867770","Type":"ContainerDied","Data":"9aac4d70ab28a153befc022b4cbbac5414ba60e8022338f9f135112f37fdd526"} Nov 22 10:54:20 crc kubenswrapper[4772]: I1122 10:54:20.165432 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931amhlqk" Nov 22 10:54:20 crc kubenswrapper[4772]: I1122 10:54:20.357921 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/55a36e51-acb5-44e1-9394-a1280c867770-util\") pod \"55a36e51-acb5-44e1-9394-a1280c867770\" (UID: \"55a36e51-acb5-44e1-9394-a1280c867770\") " Nov 22 10:54:20 crc kubenswrapper[4772]: I1122 10:54:20.358312 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/55a36e51-acb5-44e1-9394-a1280c867770-bundle\") pod \"55a36e51-acb5-44e1-9394-a1280c867770\" (UID: \"55a36e51-acb5-44e1-9394-a1280c867770\") " Nov 22 10:54:20 crc kubenswrapper[4772]: I1122 10:54:20.358545 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q58ft\" (UniqueName: \"kubernetes.io/projected/55a36e51-acb5-44e1-9394-a1280c867770-kube-api-access-q58ft\") pod \"55a36e51-acb5-44e1-9394-a1280c867770\" (UID: \"55a36e51-acb5-44e1-9394-a1280c867770\") " Nov 22 10:54:20 crc kubenswrapper[4772]: I1122 10:54:20.359506 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55a36e51-acb5-44e1-9394-a1280c867770-bundle" (OuterVolumeSpecName: "bundle") pod "55a36e51-acb5-44e1-9394-a1280c867770" (UID: "55a36e51-acb5-44e1-9394-a1280c867770"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 10:54:20 crc kubenswrapper[4772]: I1122 10:54:20.364736 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55a36e51-acb5-44e1-9394-a1280c867770-kube-api-access-q58ft" (OuterVolumeSpecName: "kube-api-access-q58ft") pod "55a36e51-acb5-44e1-9394-a1280c867770" (UID: "55a36e51-acb5-44e1-9394-a1280c867770"). InnerVolumeSpecName "kube-api-access-q58ft". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:54:20 crc kubenswrapper[4772]: I1122 10:54:20.368418 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55a36e51-acb5-44e1-9394-a1280c867770-util" (OuterVolumeSpecName: "util") pod "55a36e51-acb5-44e1-9394-a1280c867770" (UID: "55a36e51-acb5-44e1-9394-a1280c867770"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 10:54:20 crc kubenswrapper[4772]: I1122 10:54:20.460037 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q58ft\" (UniqueName: \"kubernetes.io/projected/55a36e51-acb5-44e1-9394-a1280c867770-kube-api-access-q58ft\") on node \"crc\" DevicePath \"\"" Nov 22 10:54:20 crc kubenswrapper[4772]: I1122 10:54:20.460108 4772 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/55a36e51-acb5-44e1-9394-a1280c867770-util\") on node \"crc\" DevicePath \"\"" Nov 22 10:54:20 crc kubenswrapper[4772]: I1122 10:54:20.460120 4772 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/55a36e51-acb5-44e1-9394-a1280c867770-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 10:54:20 crc kubenswrapper[4772]: I1122 10:54:20.853323 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931amhlqk" event={"ID":"55a36e51-acb5-44e1-9394-a1280c867770","Type":"ContainerDied","Data":"500ce1f76988d14682c8524a23a5ea574e9ddcb209ce9ec58b7e7c1bf5aa0ead"} Nov 22 10:54:20 crc kubenswrapper[4772]: I1122 10:54:20.853416 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931amhlqk" Nov 22 10:54:20 crc kubenswrapper[4772]: I1122 10:54:20.853417 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="500ce1f76988d14682c8524a23a5ea574e9ddcb209ce9ec58b7e7c1bf5aa0ead" Nov 22 10:54:25 crc kubenswrapper[4772]: I1122 10:54:25.895828 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-qhc54"] Nov 22 10:54:25 crc kubenswrapper[4772]: E1122 10:54:25.896618 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55a36e51-acb5-44e1-9394-a1280c867770" containerName="util" Nov 22 10:54:25 crc kubenswrapper[4772]: I1122 10:54:25.896634 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="55a36e51-acb5-44e1-9394-a1280c867770" containerName="util" Nov 22 10:54:25 crc kubenswrapper[4772]: E1122 10:54:25.896648 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55a36e51-acb5-44e1-9394-a1280c867770" containerName="pull" Nov 22 10:54:25 crc kubenswrapper[4772]: I1122 10:54:25.896655 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="55a36e51-acb5-44e1-9394-a1280c867770" containerName="pull" Nov 22 10:54:25 crc kubenswrapper[4772]: E1122 10:54:25.896668 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55a36e51-acb5-44e1-9394-a1280c867770" containerName="extract" Nov 22 10:54:25 crc kubenswrapper[4772]: I1122 10:54:25.896674 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="55a36e51-acb5-44e1-9394-a1280c867770" containerName="extract" Nov 22 10:54:25 crc kubenswrapper[4772]: I1122 10:54:25.896775 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="55a36e51-acb5-44e1-9394-a1280c867770" containerName="extract" Nov 22 10:54:25 crc kubenswrapper[4772]: I1122 10:54:25.897205 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-qhc54" Nov 22 10:54:25 crc kubenswrapper[4772]: I1122 10:54:25.900121 4772 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-7dlls" Nov 22 10:54:25 crc kubenswrapper[4772]: I1122 10:54:25.900831 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Nov 22 10:54:25 crc kubenswrapper[4772]: I1122 10:54:25.909015 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Nov 22 10:54:25 crc kubenswrapper[4772]: I1122 10:54:25.914409 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-qhc54"] Nov 22 10:54:25 crc kubenswrapper[4772]: I1122 10:54:25.934374 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8d4e568d-a191-4225-b448-b0c9a0f7ca73-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-qhc54\" (UID: \"8d4e568d-a191-4225-b448-b0c9a0f7ca73\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-qhc54" Nov 22 10:54:25 crc kubenswrapper[4772]: I1122 10:54:25.934424 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prlwj\" (UniqueName: \"kubernetes.io/projected/8d4e568d-a191-4225-b448-b0c9a0f7ca73-kube-api-access-prlwj\") pod \"cert-manager-operator-controller-manager-64cf6dff88-qhc54\" (UID: \"8d4e568d-a191-4225-b448-b0c9a0f7ca73\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-qhc54" Nov 22 10:54:26 crc kubenswrapper[4772]: I1122 10:54:26.035159 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8d4e568d-a191-4225-b448-b0c9a0f7ca73-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-qhc54\" (UID: \"8d4e568d-a191-4225-b448-b0c9a0f7ca73\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-qhc54" Nov 22 10:54:26 crc kubenswrapper[4772]: I1122 10:54:26.035222 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prlwj\" (UniqueName: \"kubernetes.io/projected/8d4e568d-a191-4225-b448-b0c9a0f7ca73-kube-api-access-prlwj\") pod \"cert-manager-operator-controller-manager-64cf6dff88-qhc54\" (UID: \"8d4e568d-a191-4225-b448-b0c9a0f7ca73\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-qhc54" Nov 22 10:54:26 crc kubenswrapper[4772]: I1122 10:54:26.035666 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8d4e568d-a191-4225-b448-b0c9a0f7ca73-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-qhc54\" (UID: \"8d4e568d-a191-4225-b448-b0c9a0f7ca73\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-qhc54" Nov 22 10:54:26 crc kubenswrapper[4772]: I1122 10:54:26.052804 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prlwj\" (UniqueName: \"kubernetes.io/projected/8d4e568d-a191-4225-b448-b0c9a0f7ca73-kube-api-access-prlwj\") pod \"cert-manager-operator-controller-manager-64cf6dff88-qhc54\" (UID: \"8d4e568d-a191-4225-b448-b0c9a0f7ca73\") " 
pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-qhc54" Nov 22 10:54:26 crc kubenswrapper[4772]: I1122 10:54:26.211515 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-qhc54" Nov 22 10:54:26 crc kubenswrapper[4772]: I1122 10:54:26.505845 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-qhc54"] Nov 22 10:54:26 crc kubenswrapper[4772]: W1122 10:54:26.519154 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d4e568d_a191_4225_b448_b0c9a0f7ca73.slice/crio-a8e64c4a113503f2184796a3d72cbe9cd22d6316b05ab24f39b17586d23b96ca WatchSource:0}: Error finding container a8e64c4a113503f2184796a3d72cbe9cd22d6316b05ab24f39b17586d23b96ca: Status 404 returned error can't find the container with id a8e64c4a113503f2184796a3d72cbe9cd22d6316b05ab24f39b17586d23b96ca Nov 22 10:54:26 crc kubenswrapper[4772]: I1122 10:54:26.884755 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-qhc54" event={"ID":"8d4e568d-a191-4225-b448-b0c9a0f7ca73","Type":"ContainerStarted","Data":"a8e64c4a113503f2184796a3d72cbe9cd22d6316b05ab24f39b17586d23b96ca"} Nov 22 10:54:31 crc kubenswrapper[4772]: I1122 10:54:31.533573 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 10:54:31 crc kubenswrapper[4772]: I1122 10:54:31.534182 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 10:54:31 crc kubenswrapper[4772]: I1122 10:54:31.534234 4772 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" Nov 22 10:54:31 crc kubenswrapper[4772]: I1122 10:54:31.534880 4772 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6ed5ce78086a642e7415af1fd3d7071bae3e10a61431b8613ee77406e828d8f3"} pod="openshift-machine-config-operator/machine-config-daemon-wwshd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 10:54:31 crc kubenswrapper[4772]: I1122 10:54:31.534931 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" containerID="cri-o://6ed5ce78086a642e7415af1fd3d7071bae3e10a61431b8613ee77406e828d8f3" gracePeriod=600 Nov 22 10:54:31 crc kubenswrapper[4772]: I1122 10:54:31.918896 4772 generic.go:334] "Generic (PLEG): container finished" podID="2386c238-461f-4956-940f-ac3c26eb052e" containerID="6ed5ce78086a642e7415af1fd3d7071bae3e10a61431b8613ee77406e828d8f3" exitCode=0 Nov 22 10:54:31 crc kubenswrapper[4772]: I1122 10:54:31.918977 4772 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" event={"ID":"2386c238-461f-4956-940f-ac3c26eb052e","Type":"ContainerDied","Data":"6ed5ce78086a642e7415af1fd3d7071bae3e10a61431b8613ee77406e828d8f3"} Nov 22 10:54:31 crc kubenswrapper[4772]: I1122 10:54:31.919030 4772 scope.go:117] "RemoveContainer" containerID="1bc014ba5a352c64bb1f584ebb6c9325985805800f901b3f27190486054a5e50" Nov 22 10:54:33 crc kubenswrapper[4772]: I1122 10:54:33.939690 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" event={"ID":"2386c238-461f-4956-940f-ac3c26eb052e","Type":"ContainerStarted","Data":"a69ae46ae795f0c272467d20a88d4d3efbdd2e5ec86370c20bc8c57f8ee1677e"} Nov 22 10:54:33 crc kubenswrapper[4772]: I1122 10:54:33.942648 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-qhc54" event={"ID":"8d4e568d-a191-4225-b448-b0c9a0f7ca73","Type":"ContainerStarted","Data":"2906b7db220376b2d6fec303d817ddad7b1b3eb17c3203a7ece67c678814802a"} Nov 22 10:54:33 crc kubenswrapper[4772]: I1122 10:54:33.985622 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-qhc54" podStartSLOduration=2.123371859 podStartE2EDuration="8.985600828s" podCreationTimestamp="2025-11-22 10:54:25 +0000 UTC" firstStartedPulling="2025-11-22 10:54:26.529167776 +0000 UTC m=+986.768612270" lastFinishedPulling="2025-11-22 10:54:33.391396745 +0000 UTC m=+993.630841239" observedRunningTime="2025-11-22 10:54:33.983000212 +0000 UTC m=+994.222444706" watchObservedRunningTime="2025-11-22 10:54:33.985600828 +0000 UTC m=+994.225045322" Nov 22 10:54:36 crc kubenswrapper[4772]: I1122 10:54:36.816999 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-dzcx9"] Nov 22 10:54:36 crc kubenswrapper[4772]: I1122 10:54:36.818442 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-dzcx9" Nov 22 10:54:36 crc kubenswrapper[4772]: I1122 10:54:36.820572 4772 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-rb4gc" Nov 22 10:54:36 crc kubenswrapper[4772]: I1122 10:54:36.822764 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Nov 22 10:54:36 crc kubenswrapper[4772]: I1122 10:54:36.827459 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-dzcx9"] Nov 22 10:54:36 crc kubenswrapper[4772]: I1122 10:54:36.827667 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Nov 22 10:54:36 crc kubenswrapper[4772]: I1122 10:54:36.995338 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/feafcc29-726d-4953-ba51-85da91d3cc77-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-dzcx9\" (UID: \"feafcc29-726d-4953-ba51-85da91d3cc77\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-dzcx9" Nov 22 10:54:36 crc kubenswrapper[4772]: I1122 10:54:36.995717 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtz2p\" (UniqueName: \"kubernetes.io/projected/feafcc29-726d-4953-ba51-85da91d3cc77-kube-api-access-gtz2p\") pod \"cert-manager-webhook-f4fb5df64-dzcx9\" (UID: \"feafcc29-726d-4953-ba51-85da91d3cc77\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-dzcx9" Nov 22 10:54:37 crc kubenswrapper[4772]: I1122 10:54:37.097099 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/feafcc29-726d-4953-ba51-85da91d3cc77-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-dzcx9\" (UID: \"feafcc29-726d-4953-ba51-85da91d3cc77\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-dzcx9" Nov 22 10:54:37 crc kubenswrapper[4772]: I1122 10:54:37.097182 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtz2p\" (UniqueName: \"kubernetes.io/projected/feafcc29-726d-4953-ba51-85da91d3cc77-kube-api-access-gtz2p\") pod \"cert-manager-webhook-f4fb5df64-dzcx9\" (UID: \"feafcc29-726d-4953-ba51-85da91d3cc77\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-dzcx9" Nov 22 10:54:37 crc kubenswrapper[4772]: I1122 10:54:37.116953 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/feafcc29-726d-4953-ba51-85da91d3cc77-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-dzcx9\" (UID: \"feafcc29-726d-4953-ba51-85da91d3cc77\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-dzcx9" Nov 22 10:54:37 crc kubenswrapper[4772]: I1122 10:54:37.117022 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtz2p\" (UniqueName: \"kubernetes.io/projected/feafcc29-726d-4953-ba51-85da91d3cc77-kube-api-access-gtz2p\") pod \"cert-manager-webhook-f4fb5df64-dzcx9\" (UID: \"feafcc29-726d-4953-ba51-85da91d3cc77\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-dzcx9" Nov 22 10:54:37 crc kubenswrapper[4772]: I1122 10:54:37.140659 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-dzcx9" Nov 22 10:54:37 crc kubenswrapper[4772]: I1122 10:54:37.513404 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-dzcx9"] Nov 22 10:54:37 crc kubenswrapper[4772]: I1122 10:54:37.963553 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-dzcx9" event={"ID":"feafcc29-726d-4953-ba51-85da91d3cc77","Type":"ContainerStarted","Data":"b4b7763066db32bbbeeac9baebbf21f8e51b55c2aa51b8fedf792699f117cd8d"} Nov 22 10:54:40 crc kubenswrapper[4772]: I1122 10:54:40.403537 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-h2qvt"] Nov 22 10:54:40 crc kubenswrapper[4772]: I1122 10:54:40.404533 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-h2qvt" Nov 22 10:54:40 crc kubenswrapper[4772]: I1122 10:54:40.409126 4772 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-d5l7d" Nov 22 10:54:40 crc kubenswrapper[4772]: I1122 10:54:40.410076 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-h2qvt"] Nov 22 10:54:40 crc kubenswrapper[4772]: I1122 10:54:40.572135 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9b18c846-e06c-4518-af8b-3e279bae816c-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-h2qvt\" (UID: \"9b18c846-e06c-4518-af8b-3e279bae816c\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-h2qvt" Nov 22 10:54:40 crc kubenswrapper[4772]: I1122 10:54:40.572361 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4dlm\" (UniqueName: \"kubernetes.io/projected/9b18c846-e06c-4518-af8b-3e279bae816c-kube-api-access-x4dlm\") pod \"cert-manager-cainjector-855d9ccff4-h2qvt\" (UID: \"9b18c846-e06c-4518-af8b-3e279bae816c\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-h2qvt" Nov 22 10:54:40 crc kubenswrapper[4772]: I1122 10:54:40.674038 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4dlm\" (UniqueName: \"kubernetes.io/projected/9b18c846-e06c-4518-af8b-3e279bae816c-kube-api-access-x4dlm\") pod \"cert-manager-cainjector-855d9ccff4-h2qvt\" (UID: \"9b18c846-e06c-4518-af8b-3e279bae816c\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-h2qvt" Nov 22 10:54:40 crc kubenswrapper[4772]: I1122 10:54:40.674115 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9b18c846-e06c-4518-af8b-3e279bae816c-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-h2qvt\" (UID: \"9b18c846-e06c-4518-af8b-3e279bae816c\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-h2qvt" Nov 22 10:54:40 crc kubenswrapper[4772]: I1122 10:54:40.695770 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4dlm\" (UniqueName: \"kubernetes.io/projected/9b18c846-e06c-4518-af8b-3e279bae816c-kube-api-access-x4dlm\") pod \"cert-manager-cainjector-855d9ccff4-h2qvt\" (UID: \"9b18c846-e06c-4518-af8b-3e279bae816c\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-h2qvt" Nov 22 10:54:40 crc kubenswrapper[4772]: I1122 10:54:40.711091 4772 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9b18c846-e06c-4518-af8b-3e279bae816c-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-h2qvt\" (UID: \"9b18c846-e06c-4518-af8b-3e279bae816c\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-h2qvt" Nov 22 10:54:40 crc kubenswrapper[4772]: I1122 10:54:40.740193 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-h2qvt" Nov 22 10:54:44 crc kubenswrapper[4772]: I1122 10:54:44.553101 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-h2qvt"] Nov 22 10:54:44 crc kubenswrapper[4772]: W1122 10:54:44.556577 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b18c846_e06c_4518_af8b_3e279bae816c.slice/crio-b0930b92277cdbba488b0485c759bac48d0f6c5bca952e1413454b9245a33a25 WatchSource:0}: Error finding container b0930b92277cdbba488b0485c759bac48d0f6c5bca952e1413454b9245a33a25: Status 404 returned error can't find the container with id b0930b92277cdbba488b0485c759bac48d0f6c5bca952e1413454b9245a33a25 Nov 22 10:54:45 crc kubenswrapper[4772]: I1122 10:54:45.004798 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-dzcx9" event={"ID":"feafcc29-726d-4953-ba51-85da91d3cc77","Type":"ContainerStarted","Data":"481760298ad8b7582d9a6bf7f79a2884495c67fd0cdd0faeea33c5554819382e"} Nov 22 10:54:45 crc kubenswrapper[4772]: I1122 10:54:45.005118 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-f4fb5df64-dzcx9" Nov 22 10:54:45 crc kubenswrapper[4772]: I1122 10:54:45.006936 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-h2qvt" event={"ID":"9b18c846-e06c-4518-af8b-3e279bae816c","Type":"ContainerStarted","Data":"5cf3f11a813a45a8e34415d7e21371825c3f38c48d61218d833c4c65ac2c165b"} Nov 22 10:54:45 crc kubenswrapper[4772]: I1122 10:54:45.007066 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-h2qvt" event={"ID":"9b18c846-e06c-4518-af8b-3e279bae816c","Type":"ContainerStarted","Data":"b0930b92277cdbba488b0485c759bac48d0f6c5bca952e1413454b9245a33a25"} Nov 22 10:54:45 crc kubenswrapper[4772]: I1122 10:54:45.022840 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-f4fb5df64-dzcx9" podStartSLOduration=2.2939657589999998 podStartE2EDuration="9.022822324s" podCreationTimestamp="2025-11-22 10:54:36 +0000 UTC" firstStartedPulling="2025-11-22 10:54:37.527175284 +0000 UTC m=+997.766619778" lastFinishedPulling="2025-11-22 10:54:44.256031849 +0000 UTC m=+1004.495476343" observedRunningTime="2025-11-22 10:54:45.018609307 +0000 UTC m=+1005.258053801" watchObservedRunningTime="2025-11-22 10:54:45.022822324 +0000 UTC m=+1005.262266818" Nov 22 10:54:45 crc kubenswrapper[4772]: I1122 10:54:45.032909 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-855d9ccff4-h2qvt" podStartSLOduration=5.03289137 podStartE2EDuration="5.03289137s" podCreationTimestamp="2025-11-22 10:54:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:54:45.030722945 +0000 UTC 
m=+1005.270167439" watchObservedRunningTime="2025-11-22 10:54:45.03289137 +0000 UTC m=+1005.272335864" Nov 22 10:54:47 crc kubenswrapper[4772]: I1122 10:54:47.856927 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-86cb77c54b-nttkb"] Nov 22 10:54:47 crc kubenswrapper[4772]: I1122 10:54:47.857950 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-nttkb" Nov 22 10:54:47 crc kubenswrapper[4772]: I1122 10:54:47.860451 4772 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-5jrx4" Nov 22 10:54:47 crc kubenswrapper[4772]: I1122 10:54:47.864817 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-nttkb"] Nov 22 10:54:47 crc kubenswrapper[4772]: I1122 10:54:47.883787 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6548e9e4-cb28-45c6-ba13-67d5abd67149-bound-sa-token\") pod \"cert-manager-86cb77c54b-nttkb\" (UID: \"6548e9e4-cb28-45c6-ba13-67d5abd67149\") " pod="cert-manager/cert-manager-86cb77c54b-nttkb" Nov 22 10:54:47 crc kubenswrapper[4772]: I1122 10:54:47.883866 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrmv6\" (UniqueName: \"kubernetes.io/projected/6548e9e4-cb28-45c6-ba13-67d5abd67149-kube-api-access-mrmv6\") pod \"cert-manager-86cb77c54b-nttkb\" (UID: \"6548e9e4-cb28-45c6-ba13-67d5abd67149\") " pod="cert-manager/cert-manager-86cb77c54b-nttkb" Nov 22 10:54:47 crc kubenswrapper[4772]: I1122 10:54:47.984308 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrmv6\" (UniqueName: \"kubernetes.io/projected/6548e9e4-cb28-45c6-ba13-67d5abd67149-kube-api-access-mrmv6\") pod \"cert-manager-86cb77c54b-nttkb\" (UID: \"6548e9e4-cb28-45c6-ba13-67d5abd67149\") " pod="cert-manager/cert-manager-86cb77c54b-nttkb" Nov 22 10:54:47 crc kubenswrapper[4772]: I1122 10:54:47.984383 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6548e9e4-cb28-45c6-ba13-67d5abd67149-bound-sa-token\") pod \"cert-manager-86cb77c54b-nttkb\" (UID: \"6548e9e4-cb28-45c6-ba13-67d5abd67149\") " pod="cert-manager/cert-manager-86cb77c54b-nttkb" Nov 22 10:54:48 crc kubenswrapper[4772]: I1122 10:54:48.001700 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6548e9e4-cb28-45c6-ba13-67d5abd67149-bound-sa-token\") pod \"cert-manager-86cb77c54b-nttkb\" (UID: \"6548e9e4-cb28-45c6-ba13-67d5abd67149\") " pod="cert-manager/cert-manager-86cb77c54b-nttkb" Nov 22 10:54:48 crc kubenswrapper[4772]: I1122 10:54:48.001856 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrmv6\" (UniqueName: \"kubernetes.io/projected/6548e9e4-cb28-45c6-ba13-67d5abd67149-kube-api-access-mrmv6\") pod \"cert-manager-86cb77c54b-nttkb\" (UID: \"6548e9e4-cb28-45c6-ba13-67d5abd67149\") " pod="cert-manager/cert-manager-86cb77c54b-nttkb" Nov 22 10:54:48 crc kubenswrapper[4772]: I1122 10:54:48.178704 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-nttkb" Nov 22 10:54:48 crc kubenswrapper[4772]: I1122 10:54:48.565175 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-nttkb"] Nov 22 10:54:48 crc kubenswrapper[4772]: W1122 10:54:48.573906 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6548e9e4_cb28_45c6_ba13_67d5abd67149.slice/crio-a7a4603ff83ac33db479cb6b2ed5c37484550ef7cee58bb300d667c24815b4c9 WatchSource:0}: Error finding container a7a4603ff83ac33db479cb6b2ed5c37484550ef7cee58bb300d667c24815b4c9: Status 404 returned error can't find the container with id a7a4603ff83ac33db479cb6b2ed5c37484550ef7cee58bb300d667c24815b4c9 Nov 22 10:54:49 crc kubenswrapper[4772]: I1122 10:54:49.028916 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-nttkb" event={"ID":"6548e9e4-cb28-45c6-ba13-67d5abd67149","Type":"ContainerStarted","Data":"efeccf87502c2906072d04f71eef09a34066c97c673659bcdcaab5dc52046901"} Nov 22 10:54:49 crc kubenswrapper[4772]: I1122 10:54:49.029455 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-nttkb" event={"ID":"6548e9e4-cb28-45c6-ba13-67d5abd67149","Type":"ContainerStarted","Data":"a7a4603ff83ac33db479cb6b2ed5c37484550ef7cee58bb300d667c24815b4c9"} Nov 22 10:54:49 crc kubenswrapper[4772]: I1122 10:54:49.045842 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-86cb77c54b-nttkb" podStartSLOduration=2.045815103 podStartE2EDuration="2.045815103s" podCreationTimestamp="2025-11-22 10:54:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:54:49.042672833 +0000 UTC m=+1009.282117337" watchObservedRunningTime="2025-11-22 10:54:49.045815103 +0000 UTC m=+1009.285259597" Nov 22 10:54:52 crc kubenswrapper[4772]: I1122 10:54:52.145243 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-f4fb5df64-dzcx9" Nov 22 10:54:55 crc kubenswrapper[4772]: I1122 10:54:55.959676 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-vp6cs"] Nov 22 10:54:55 crc kubenswrapper[4772]: I1122 10:54:55.960698 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-vp6cs" Nov 22 10:54:55 crc kubenswrapper[4772]: I1122 10:54:55.962536 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Nov 22 10:54:55 crc kubenswrapper[4772]: I1122 10:54:55.962897 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-zrgmp" Nov 22 10:54:55 crc kubenswrapper[4772]: I1122 10:54:55.962923 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Nov 22 10:54:55 crc kubenswrapper[4772]: I1122 10:54:55.974379 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-vp6cs"] Nov 22 10:54:55 crc kubenswrapper[4772]: I1122 10:54:55.986655 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xltvl\" (UniqueName: \"kubernetes.io/projected/7a9e39c5-c3ea-43c2-9551-f45d68ff5c44-kube-api-access-xltvl\") pod \"openstack-operator-index-vp6cs\" (UID: \"7a9e39c5-c3ea-43c2-9551-f45d68ff5c44\") " pod="openstack-operators/openstack-operator-index-vp6cs" Nov 22 10:54:56 crc kubenswrapper[4772]: I1122 10:54:56.087925 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xltvl\" (UniqueName: \"kubernetes.io/projected/7a9e39c5-c3ea-43c2-9551-f45d68ff5c44-kube-api-access-xltvl\") pod \"openstack-operator-index-vp6cs\" (UID: \"7a9e39c5-c3ea-43c2-9551-f45d68ff5c44\") " pod="openstack-operators/openstack-operator-index-vp6cs" Nov 22 10:54:56 crc kubenswrapper[4772]: I1122 10:54:56.106161 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xltvl\" (UniqueName: \"kubernetes.io/projected/7a9e39c5-c3ea-43c2-9551-f45d68ff5c44-kube-api-access-xltvl\") pod \"openstack-operator-index-vp6cs\" (UID: \"7a9e39c5-c3ea-43c2-9551-f45d68ff5c44\") " pod="openstack-operators/openstack-operator-index-vp6cs" Nov 22 10:54:56 crc kubenswrapper[4772]: I1122 10:54:56.323252 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-vp6cs" Nov 22 10:54:56 crc kubenswrapper[4772]: I1122 10:54:56.714722 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-vp6cs"] Nov 22 10:54:57 crc kubenswrapper[4772]: I1122 10:54:57.077135 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vp6cs" event={"ID":"7a9e39c5-c3ea-43c2-9551-f45d68ff5c44","Type":"ContainerStarted","Data":"e7d239189cf3dd382bfba13c3733f2b43639882bb1eb29d6bae1c1942d25263c"} Nov 22 10:54:59 crc kubenswrapper[4772]: I1122 10:54:59.138350 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-vp6cs"] Nov 22 10:54:59 crc kubenswrapper[4772]: I1122 10:54:59.741497 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-9vtrh"] Nov 22 10:54:59 crc kubenswrapper[4772]: I1122 10:54:59.742684 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-9vtrh" Nov 22 10:54:59 crc kubenswrapper[4772]: I1122 10:54:59.756162 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-9vtrh"] Nov 22 10:54:59 crc kubenswrapper[4772]: I1122 10:54:59.838065 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lp7m\" (UniqueName: \"kubernetes.io/projected/0f9e1f7d-d872-45c9-92dc-5617ee96ed08-kube-api-access-6lp7m\") pod \"openstack-operator-index-9vtrh\" (UID: \"0f9e1f7d-d872-45c9-92dc-5617ee96ed08\") " pod="openstack-operators/openstack-operator-index-9vtrh" Nov 22 10:54:59 crc kubenswrapper[4772]: I1122 10:54:59.939305 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lp7m\" (UniqueName: \"kubernetes.io/projected/0f9e1f7d-d872-45c9-92dc-5617ee96ed08-kube-api-access-6lp7m\") pod \"openstack-operator-index-9vtrh\" (UID: \"0f9e1f7d-d872-45c9-92dc-5617ee96ed08\") " pod="openstack-operators/openstack-operator-index-9vtrh" Nov 22 10:54:59 crc kubenswrapper[4772]: I1122 10:54:59.956318 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lp7m\" (UniqueName: \"kubernetes.io/projected/0f9e1f7d-d872-45c9-92dc-5617ee96ed08-kube-api-access-6lp7m\") pod \"openstack-operator-index-9vtrh\" (UID: \"0f9e1f7d-d872-45c9-92dc-5617ee96ed08\") " pod="openstack-operators/openstack-operator-index-9vtrh" Nov 22 10:55:00 crc kubenswrapper[4772]: I1122 10:55:00.097540 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vp6cs" event={"ID":"7a9e39c5-c3ea-43c2-9551-f45d68ff5c44","Type":"ContainerStarted","Data":"f32a954cf567fa7231530dbb4c894882a7f6c935bd30811bc921cc17181483ae"} Nov 22 10:55:00 crc kubenswrapper[4772]: I1122 10:55:00.097632 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-vp6cs" podUID="7a9e39c5-c3ea-43c2-9551-f45d68ff5c44" containerName="registry-server" containerID="cri-o://f32a954cf567fa7231530dbb4c894882a7f6c935bd30811bc921cc17181483ae" gracePeriod=2 Nov 22 10:55:00 crc kubenswrapper[4772]: I1122 10:55:00.111822 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-vp6cs" podStartSLOduration=2.813693604 podStartE2EDuration="5.111805852s" podCreationTimestamp="2025-11-22 10:54:55 +0000 UTC" firstStartedPulling="2025-11-22 10:54:56.733843171 +0000 UTC m=+1016.973287665" lastFinishedPulling="2025-11-22 10:54:59.031955419 +0000 UTC m=+1019.271399913" observedRunningTime="2025-11-22 10:55:00.109949185 +0000 UTC m=+1020.349393679" watchObservedRunningTime="2025-11-22 10:55:00.111805852 +0000 UTC m=+1020.351250366" Nov 22 10:55:00 crc kubenswrapper[4772]: I1122 10:55:00.145798 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-9vtrh" Nov 22 10:55:00 crc kubenswrapper[4772]: I1122 10:55:00.495293 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-vp6cs" Nov 22 10:55:00 crc kubenswrapper[4772]: W1122 10:55:00.523412 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f9e1f7d_d872_45c9_92dc_5617ee96ed08.slice/crio-a996d1a4bb021d49d90e85abc6594e764c80744c0d66f3a044e6a96391e3e2c4 WatchSource:0}: Error finding container a996d1a4bb021d49d90e85abc6594e764c80744c0d66f3a044e6a96391e3e2c4: Status 404 returned error can't find the container with id a996d1a4bb021d49d90e85abc6594e764c80744c0d66f3a044e6a96391e3e2c4 Nov 22 10:55:00 crc kubenswrapper[4772]: I1122 10:55:00.529096 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-9vtrh"] Nov 22 10:55:00 crc kubenswrapper[4772]: I1122 10:55:00.545541 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xltvl\" (UniqueName: \"kubernetes.io/projected/7a9e39c5-c3ea-43c2-9551-f45d68ff5c44-kube-api-access-xltvl\") pod \"7a9e39c5-c3ea-43c2-9551-f45d68ff5c44\" (UID: \"7a9e39c5-c3ea-43c2-9551-f45d68ff5c44\") " Nov 22 10:55:00 crc kubenswrapper[4772]: I1122 10:55:00.550887 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a9e39c5-c3ea-43c2-9551-f45d68ff5c44-kube-api-access-xltvl" (OuterVolumeSpecName: "kube-api-access-xltvl") pod "7a9e39c5-c3ea-43c2-9551-f45d68ff5c44" (UID: "7a9e39c5-c3ea-43c2-9551-f45d68ff5c44"). InnerVolumeSpecName "kube-api-access-xltvl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:55:00 crc kubenswrapper[4772]: I1122 10:55:00.646411 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xltvl\" (UniqueName: \"kubernetes.io/projected/7a9e39c5-c3ea-43c2-9551-f45d68ff5c44-kube-api-access-xltvl\") on node \"crc\" DevicePath \"\"" Nov 22 10:55:01 crc kubenswrapper[4772]: I1122 10:55:01.105026 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-9vtrh" event={"ID":"0f9e1f7d-d872-45c9-92dc-5617ee96ed08","Type":"ContainerStarted","Data":"ff01acc6abff3e83d0efda2f085210125e34e71c03d9ca4f6cc73c08e78645b8"} Nov 22 10:55:01 crc kubenswrapper[4772]: I1122 10:55:01.105478 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-9vtrh" event={"ID":"0f9e1f7d-d872-45c9-92dc-5617ee96ed08","Type":"ContainerStarted","Data":"a996d1a4bb021d49d90e85abc6594e764c80744c0d66f3a044e6a96391e3e2c4"} Nov 22 10:55:01 crc kubenswrapper[4772]: I1122 10:55:01.106161 4772 generic.go:334] "Generic (PLEG): container finished" podID="7a9e39c5-c3ea-43c2-9551-f45d68ff5c44" containerID="f32a954cf567fa7231530dbb4c894882a7f6c935bd30811bc921cc17181483ae" exitCode=0 Nov 22 10:55:01 crc kubenswrapper[4772]: I1122 10:55:01.106190 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vp6cs" event={"ID":"7a9e39c5-c3ea-43c2-9551-f45d68ff5c44","Type":"ContainerDied","Data":"f32a954cf567fa7231530dbb4c894882a7f6c935bd30811bc921cc17181483ae"} Nov 22 10:55:01 crc kubenswrapper[4772]: I1122 10:55:01.106207 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vp6cs" event={"ID":"7a9e39c5-c3ea-43c2-9551-f45d68ff5c44","Type":"ContainerDied","Data":"e7d239189cf3dd382bfba13c3733f2b43639882bb1eb29d6bae1c1942d25263c"} Nov 22 10:55:01 crc kubenswrapper[4772]: I1122 10:55:01.106223 4772 scope.go:117] 
"RemoveContainer" containerID="f32a954cf567fa7231530dbb4c894882a7f6c935bd30811bc921cc17181483ae" Nov 22 10:55:01 crc kubenswrapper[4772]: I1122 10:55:01.106249 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-vp6cs" Nov 22 10:55:01 crc kubenswrapper[4772]: I1122 10:55:01.129122 4772 scope.go:117] "RemoveContainer" containerID="f32a954cf567fa7231530dbb4c894882a7f6c935bd30811bc921cc17181483ae" Nov 22 10:55:01 crc kubenswrapper[4772]: E1122 10:55:01.129548 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f32a954cf567fa7231530dbb4c894882a7f6c935bd30811bc921cc17181483ae\": container with ID starting with f32a954cf567fa7231530dbb4c894882a7f6c935bd30811bc921cc17181483ae not found: ID does not exist" containerID="f32a954cf567fa7231530dbb4c894882a7f6c935bd30811bc921cc17181483ae" Nov 22 10:55:01 crc kubenswrapper[4772]: I1122 10:55:01.129601 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f32a954cf567fa7231530dbb4c894882a7f6c935bd30811bc921cc17181483ae"} err="failed to get container status \"f32a954cf567fa7231530dbb4c894882a7f6c935bd30811bc921cc17181483ae\": rpc error: code = NotFound desc = could not find container \"f32a954cf567fa7231530dbb4c894882a7f6c935bd30811bc921cc17181483ae\": container with ID starting with f32a954cf567fa7231530dbb4c894882a7f6c935bd30811bc921cc17181483ae not found: ID does not exist" Nov 22 10:55:01 crc kubenswrapper[4772]: I1122 10:55:01.132323 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-9vtrh" podStartSLOduration=2.069335671 podStartE2EDuration="2.132300894s" podCreationTimestamp="2025-11-22 10:54:59 +0000 UTC" firstStartedPulling="2025-11-22 10:55:00.52938192 +0000 UTC m=+1020.768826414" lastFinishedPulling="2025-11-22 10:55:00.592347123 +0000 UTC m=+1020.831791637" observedRunningTime="2025-11-22 10:55:01.123746896 +0000 UTC m=+1021.363191440" watchObservedRunningTime="2025-11-22 10:55:01.132300894 +0000 UTC m=+1021.371745388" Nov 22 10:55:01 crc kubenswrapper[4772]: I1122 10:55:01.155720 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-vp6cs"] Nov 22 10:55:01 crc kubenswrapper[4772]: I1122 10:55:01.164418 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-vp6cs"] Nov 22 10:55:01 crc kubenswrapper[4772]: I1122 10:55:01.422671 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a9e39c5-c3ea-43c2-9551-f45d68ff5c44" path="/var/lib/kubelet/pods/7a9e39c5-c3ea-43c2-9551-f45d68ff5c44/volumes" Nov 22 10:55:10 crc kubenswrapper[4772]: I1122 10:55:10.145924 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-9vtrh" Nov 22 10:55:10 crc kubenswrapper[4772]: I1122 10:55:10.146551 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-9vtrh" Nov 22 10:55:10 crc kubenswrapper[4772]: I1122 10:55:10.173212 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-9vtrh" Nov 22 10:55:10 crc kubenswrapper[4772]: I1122 10:55:10.202866 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-9vtrh" Nov 22 
10:55:15 crc kubenswrapper[4772]: I1122 10:55:15.872655 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/1bf3d28a711035aae8e0af644764edd86da0d97631b5988225039dced6r58v4"] Nov 22 10:55:15 crc kubenswrapper[4772]: E1122 10:55:15.877590 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a9e39c5-c3ea-43c2-9551-f45d68ff5c44" containerName="registry-server" Nov 22 10:55:15 crc kubenswrapper[4772]: I1122 10:55:15.877638 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a9e39c5-c3ea-43c2-9551-f45d68ff5c44" containerName="registry-server" Nov 22 10:55:15 crc kubenswrapper[4772]: I1122 10:55:15.877910 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a9e39c5-c3ea-43c2-9551-f45d68ff5c44" containerName="registry-server" Nov 22 10:55:15 crc kubenswrapper[4772]: I1122 10:55:15.879406 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/1bf3d28a711035aae8e0af644764edd86da0d97631b5988225039dced6r58v4"] Nov 22 10:55:15 crc kubenswrapper[4772]: I1122 10:55:15.879536 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/1bf3d28a711035aae8e0af644764edd86da0d97631b5988225039dced6r58v4" Nov 22 10:55:15 crc kubenswrapper[4772]: I1122 10:55:15.883185 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-nt9nn" Nov 22 10:55:15 crc kubenswrapper[4772]: I1122 10:55:15.947816 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/66f7c9f6-e1b5-4e01-956f-568be1f1fec3-bundle\") pod \"1bf3d28a711035aae8e0af644764edd86da0d97631b5988225039dced6r58v4\" (UID: \"66f7c9f6-e1b5-4e01-956f-568be1f1fec3\") " pod="openstack-operators/1bf3d28a711035aae8e0af644764edd86da0d97631b5988225039dced6r58v4" Nov 22 10:55:15 crc kubenswrapper[4772]: I1122 10:55:15.947942 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/66f7c9f6-e1b5-4e01-956f-568be1f1fec3-util\") pod \"1bf3d28a711035aae8e0af644764edd86da0d97631b5988225039dced6r58v4\" (UID: \"66f7c9f6-e1b5-4e01-956f-568be1f1fec3\") " pod="openstack-operators/1bf3d28a711035aae8e0af644764edd86da0d97631b5988225039dced6r58v4" Nov 22 10:55:15 crc kubenswrapper[4772]: I1122 10:55:15.947986 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtfcr\" (UniqueName: \"kubernetes.io/projected/66f7c9f6-e1b5-4e01-956f-568be1f1fec3-kube-api-access-xtfcr\") pod \"1bf3d28a711035aae8e0af644764edd86da0d97631b5988225039dced6r58v4\" (UID: \"66f7c9f6-e1b5-4e01-956f-568be1f1fec3\") " pod="openstack-operators/1bf3d28a711035aae8e0af644764edd86da0d97631b5988225039dced6r58v4" Nov 22 10:55:16 crc kubenswrapper[4772]: I1122 10:55:16.049486 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/66f7c9f6-e1b5-4e01-956f-568be1f1fec3-bundle\") pod \"1bf3d28a711035aae8e0af644764edd86da0d97631b5988225039dced6r58v4\" (UID: \"66f7c9f6-e1b5-4e01-956f-568be1f1fec3\") " pod="openstack-operators/1bf3d28a711035aae8e0af644764edd86da0d97631b5988225039dced6r58v4" Nov 22 10:55:16 crc kubenswrapper[4772]: I1122 10:55:16.049569 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/66f7c9f6-e1b5-4e01-956f-568be1f1fec3-util\") pod 
\"1bf3d28a711035aae8e0af644764edd86da0d97631b5988225039dced6r58v4\" (UID: \"66f7c9f6-e1b5-4e01-956f-568be1f1fec3\") " pod="openstack-operators/1bf3d28a711035aae8e0af644764edd86da0d97631b5988225039dced6r58v4" Nov 22 10:55:16 crc kubenswrapper[4772]: I1122 10:55:16.049604 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtfcr\" (UniqueName: \"kubernetes.io/projected/66f7c9f6-e1b5-4e01-956f-568be1f1fec3-kube-api-access-xtfcr\") pod \"1bf3d28a711035aae8e0af644764edd86da0d97631b5988225039dced6r58v4\" (UID: \"66f7c9f6-e1b5-4e01-956f-568be1f1fec3\") " pod="openstack-operators/1bf3d28a711035aae8e0af644764edd86da0d97631b5988225039dced6r58v4" Nov 22 10:55:16 crc kubenswrapper[4772]: I1122 10:55:16.050098 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/66f7c9f6-e1b5-4e01-956f-568be1f1fec3-bundle\") pod \"1bf3d28a711035aae8e0af644764edd86da0d97631b5988225039dced6r58v4\" (UID: \"66f7c9f6-e1b5-4e01-956f-568be1f1fec3\") " pod="openstack-operators/1bf3d28a711035aae8e0af644764edd86da0d97631b5988225039dced6r58v4" Nov 22 10:55:16 crc kubenswrapper[4772]: I1122 10:55:16.050392 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/66f7c9f6-e1b5-4e01-956f-568be1f1fec3-util\") pod \"1bf3d28a711035aae8e0af644764edd86da0d97631b5988225039dced6r58v4\" (UID: \"66f7c9f6-e1b5-4e01-956f-568be1f1fec3\") " pod="openstack-operators/1bf3d28a711035aae8e0af644764edd86da0d97631b5988225039dced6r58v4" Nov 22 10:55:16 crc kubenswrapper[4772]: I1122 10:55:16.072004 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtfcr\" (UniqueName: \"kubernetes.io/projected/66f7c9f6-e1b5-4e01-956f-568be1f1fec3-kube-api-access-xtfcr\") pod \"1bf3d28a711035aae8e0af644764edd86da0d97631b5988225039dced6r58v4\" (UID: \"66f7c9f6-e1b5-4e01-956f-568be1f1fec3\") " pod="openstack-operators/1bf3d28a711035aae8e0af644764edd86da0d97631b5988225039dced6r58v4" Nov 22 10:55:16 crc kubenswrapper[4772]: I1122 10:55:16.210210 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/1bf3d28a711035aae8e0af644764edd86da0d97631b5988225039dced6r58v4" Nov 22 10:55:16 crc kubenswrapper[4772]: I1122 10:55:16.647126 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/1bf3d28a711035aae8e0af644764edd86da0d97631b5988225039dced6r58v4"] Nov 22 10:55:17 crc kubenswrapper[4772]: I1122 10:55:17.204597 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1bf3d28a711035aae8e0af644764edd86da0d97631b5988225039dced6r58v4" event={"ID":"66f7c9f6-e1b5-4e01-956f-568be1f1fec3","Type":"ContainerStarted","Data":"aa4f8511277da3df4e3d9d689222e996d2ae399746fa41c077bb9b4bd3c0bc8e"} Nov 22 10:55:20 crc kubenswrapper[4772]: I1122 10:55:20.227669 4772 generic.go:334] "Generic (PLEG): container finished" podID="66f7c9f6-e1b5-4e01-956f-568be1f1fec3" containerID="2ca39e3fac17a9106f10ec2065289053a3d21e0dd9d6fa43973093132b9464ec" exitCode=0 Nov 22 10:55:20 crc kubenswrapper[4772]: I1122 10:55:20.227896 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1bf3d28a711035aae8e0af644764edd86da0d97631b5988225039dced6r58v4" event={"ID":"66f7c9f6-e1b5-4e01-956f-568be1f1fec3","Type":"ContainerDied","Data":"2ca39e3fac17a9106f10ec2065289053a3d21e0dd9d6fa43973093132b9464ec"} Nov 22 10:55:27 crc kubenswrapper[4772]: I1122 10:55:27.278749 4772 generic.go:334] "Generic (PLEG): container finished" podID="66f7c9f6-e1b5-4e01-956f-568be1f1fec3" containerID="a3313d3e991a9aac46e977fa6495f356ea116626bb28cdaaff09bbc2ff0689d7" exitCode=0 Nov 22 10:55:27 crc kubenswrapper[4772]: I1122 10:55:27.278815 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1bf3d28a711035aae8e0af644764edd86da0d97631b5988225039dced6r58v4" event={"ID":"66f7c9f6-e1b5-4e01-956f-568be1f1fec3","Type":"ContainerDied","Data":"a3313d3e991a9aac46e977fa6495f356ea116626bb28cdaaff09bbc2ff0689d7"} Nov 22 10:55:28 crc kubenswrapper[4772]: I1122 10:55:28.287237 4772 generic.go:334] "Generic (PLEG): container finished" podID="66f7c9f6-e1b5-4e01-956f-568be1f1fec3" containerID="6d809e37f57805c3d6d505c66cc8869d96021143dd5987f39dc090c235332a0e" exitCode=0 Nov 22 10:55:28 crc kubenswrapper[4772]: I1122 10:55:28.287318 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1bf3d28a711035aae8e0af644764edd86da0d97631b5988225039dced6r58v4" event={"ID":"66f7c9f6-e1b5-4e01-956f-568be1f1fec3","Type":"ContainerDied","Data":"6d809e37f57805c3d6d505c66cc8869d96021143dd5987f39dc090c235332a0e"} Nov 22 10:55:29 crc kubenswrapper[4772]: I1122 10:55:29.515439 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/1bf3d28a711035aae8e0af644764edd86da0d97631b5988225039dced6r58v4" Nov 22 10:55:29 crc kubenswrapper[4772]: I1122 10:55:29.625619 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/66f7c9f6-e1b5-4e01-956f-568be1f1fec3-bundle\") pod \"66f7c9f6-e1b5-4e01-956f-568be1f1fec3\" (UID: \"66f7c9f6-e1b5-4e01-956f-568be1f1fec3\") " Nov 22 10:55:29 crc kubenswrapper[4772]: I1122 10:55:29.625707 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/66f7c9f6-e1b5-4e01-956f-568be1f1fec3-util\") pod \"66f7c9f6-e1b5-4e01-956f-568be1f1fec3\" (UID: \"66f7c9f6-e1b5-4e01-956f-568be1f1fec3\") " Nov 22 10:55:29 crc kubenswrapper[4772]: I1122 10:55:29.625740 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtfcr\" (UniqueName: \"kubernetes.io/projected/66f7c9f6-e1b5-4e01-956f-568be1f1fec3-kube-api-access-xtfcr\") pod \"66f7c9f6-e1b5-4e01-956f-568be1f1fec3\" (UID: \"66f7c9f6-e1b5-4e01-956f-568be1f1fec3\") " Nov 22 10:55:29 crc kubenswrapper[4772]: I1122 10:55:29.626384 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66f7c9f6-e1b5-4e01-956f-568be1f1fec3-bundle" (OuterVolumeSpecName: "bundle") pod "66f7c9f6-e1b5-4e01-956f-568be1f1fec3" (UID: "66f7c9f6-e1b5-4e01-956f-568be1f1fec3"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 10:55:29 crc kubenswrapper[4772]: I1122 10:55:29.632185 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66f7c9f6-e1b5-4e01-956f-568be1f1fec3-kube-api-access-xtfcr" (OuterVolumeSpecName: "kube-api-access-xtfcr") pod "66f7c9f6-e1b5-4e01-956f-568be1f1fec3" (UID: "66f7c9f6-e1b5-4e01-956f-568be1f1fec3"). InnerVolumeSpecName "kube-api-access-xtfcr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:55:29 crc kubenswrapper[4772]: I1122 10:55:29.635509 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66f7c9f6-e1b5-4e01-956f-568be1f1fec3-util" (OuterVolumeSpecName: "util") pod "66f7c9f6-e1b5-4e01-956f-568be1f1fec3" (UID: "66f7c9f6-e1b5-4e01-956f-568be1f1fec3"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 10:55:29 crc kubenswrapper[4772]: I1122 10:55:29.727739 4772 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/66f7c9f6-e1b5-4e01-956f-568be1f1fec3-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 10:55:29 crc kubenswrapper[4772]: I1122 10:55:29.727768 4772 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/66f7c9f6-e1b5-4e01-956f-568be1f1fec3-util\") on node \"crc\" DevicePath \"\"" Nov 22 10:55:29 crc kubenswrapper[4772]: I1122 10:55:29.727782 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtfcr\" (UniqueName: \"kubernetes.io/projected/66f7c9f6-e1b5-4e01-956f-568be1f1fec3-kube-api-access-xtfcr\") on node \"crc\" DevicePath \"\"" Nov 22 10:55:30 crc kubenswrapper[4772]: I1122 10:55:30.303963 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1bf3d28a711035aae8e0af644764edd86da0d97631b5988225039dced6r58v4" event={"ID":"66f7c9f6-e1b5-4e01-956f-568be1f1fec3","Type":"ContainerDied","Data":"aa4f8511277da3df4e3d9d689222e996d2ae399746fa41c077bb9b4bd3c0bc8e"} Nov 22 10:55:30 crc kubenswrapper[4772]: I1122 10:55:30.304019 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa4f8511277da3df4e3d9d689222e996d2ae399746fa41c077bb9b4bd3c0bc8e" Nov 22 10:55:30 crc kubenswrapper[4772]: I1122 10:55:30.304110 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/1bf3d28a711035aae8e0af644764edd86da0d97631b5988225039dced6r58v4" Nov 22 10:55:33 crc kubenswrapper[4772]: I1122 10:55:33.269504 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-6d45d44995-lh69p"] Nov 22 10:55:33 crc kubenswrapper[4772]: E1122 10:55:33.270287 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66f7c9f6-e1b5-4e01-956f-568be1f1fec3" containerName="pull" Nov 22 10:55:33 crc kubenswrapper[4772]: I1122 10:55:33.270305 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="66f7c9f6-e1b5-4e01-956f-568be1f1fec3" containerName="pull" Nov 22 10:55:33 crc kubenswrapper[4772]: E1122 10:55:33.270325 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66f7c9f6-e1b5-4e01-956f-568be1f1fec3" containerName="util" Nov 22 10:55:33 crc kubenswrapper[4772]: I1122 10:55:33.270334 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="66f7c9f6-e1b5-4e01-956f-568be1f1fec3" containerName="util" Nov 22 10:55:33 crc kubenswrapper[4772]: E1122 10:55:33.270345 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66f7c9f6-e1b5-4e01-956f-568be1f1fec3" containerName="extract" Nov 22 10:55:33 crc kubenswrapper[4772]: I1122 10:55:33.270357 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="66f7c9f6-e1b5-4e01-956f-568be1f1fec3" containerName="extract" Nov 22 10:55:33 crc kubenswrapper[4772]: I1122 10:55:33.270507 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="66f7c9f6-e1b5-4e01-956f-568be1f1fec3" containerName="extract" Nov 22 10:55:33 crc kubenswrapper[4772]: I1122 10:55:33.271328 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-6d45d44995-lh69p" Nov 22 10:55:33 crc kubenswrapper[4772]: I1122 10:55:33.274478 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-hvpvc" Nov 22 10:55:33 crc kubenswrapper[4772]: I1122 10:55:33.302389 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-6d45d44995-lh69p"] Nov 22 10:55:33 crc kubenswrapper[4772]: I1122 10:55:33.376128 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv2w9\" (UniqueName: \"kubernetes.io/projected/f091bcc0-d8a3-4795-b81e-21ec2a91958e-kube-api-access-cv2w9\") pod \"openstack-operator-controller-operator-6d45d44995-lh69p\" (UID: \"f091bcc0-d8a3-4795-b81e-21ec2a91958e\") " pod="openstack-operators/openstack-operator-controller-operator-6d45d44995-lh69p" Nov 22 10:55:33 crc kubenswrapper[4772]: I1122 10:55:33.476974 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cv2w9\" (UniqueName: \"kubernetes.io/projected/f091bcc0-d8a3-4795-b81e-21ec2a91958e-kube-api-access-cv2w9\") pod \"openstack-operator-controller-operator-6d45d44995-lh69p\" (UID: \"f091bcc0-d8a3-4795-b81e-21ec2a91958e\") " pod="openstack-operators/openstack-operator-controller-operator-6d45d44995-lh69p" Nov 22 10:55:33 crc kubenswrapper[4772]: I1122 10:55:33.501883 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cv2w9\" (UniqueName: \"kubernetes.io/projected/f091bcc0-d8a3-4795-b81e-21ec2a91958e-kube-api-access-cv2w9\") pod \"openstack-operator-controller-operator-6d45d44995-lh69p\" (UID: \"f091bcc0-d8a3-4795-b81e-21ec2a91958e\") " pod="openstack-operators/openstack-operator-controller-operator-6d45d44995-lh69p" Nov 22 10:55:33 crc kubenswrapper[4772]: I1122 10:55:33.590989 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-6d45d44995-lh69p" Nov 22 10:55:33 crc kubenswrapper[4772]: I1122 10:55:33.828709 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-6d45d44995-lh69p"] Nov 22 10:55:34 crc kubenswrapper[4772]: I1122 10:55:34.326705 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-6d45d44995-lh69p" event={"ID":"f091bcc0-d8a3-4795-b81e-21ec2a91958e","Type":"ContainerStarted","Data":"6a09cbddbf270cd1eba45c6fedc2ad3e9ba09bf414785e89da4e00c49b1b8ad1"} Nov 22 10:55:39 crc kubenswrapper[4772]: I1122 10:55:39.359985 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-6d45d44995-lh69p" event={"ID":"f091bcc0-d8a3-4795-b81e-21ec2a91958e","Type":"ContainerStarted","Data":"daf907800d66902e330ab8bd3b2091b13a85fbf8770bcb5ae0234bd0b2a5d38d"} Nov 22 10:55:42 crc kubenswrapper[4772]: I1122 10:55:42.379135 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-6d45d44995-lh69p" event={"ID":"f091bcc0-d8a3-4795-b81e-21ec2a91958e","Type":"ContainerStarted","Data":"ca38d22d78d9b81a4eb22931be5ccf4ec61dff02bcf94287488525b41453d6f5"} Nov 22 10:55:42 crc kubenswrapper[4772]: I1122 10:55:42.379812 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-6d45d44995-lh69p" Nov 22 10:55:42 crc kubenswrapper[4772]: I1122 10:55:42.406923 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-6d45d44995-lh69p" podStartSLOduration=1.8640390230000001 podStartE2EDuration="9.406904491s" podCreationTimestamp="2025-11-22 10:55:33 +0000 UTC" firstStartedPulling="2025-11-22 10:55:33.842770405 +0000 UTC m=+1054.082214899" lastFinishedPulling="2025-11-22 10:55:41.385635873 +0000 UTC m=+1061.625080367" observedRunningTime="2025-11-22 10:55:42.405472665 +0000 UTC m=+1062.644917179" watchObservedRunningTime="2025-11-22 10:55:42.406904491 +0000 UTC m=+1062.646348995" Nov 22 10:55:43 crc kubenswrapper[4772]: I1122 10:55:43.387167 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-6d45d44995-lh69p" Nov 22 10:56:14 crc kubenswrapper[4772]: I1122 10:56:14.705261 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-5bfbbb859d-bjmld"] Nov 22 10:56:14 crc kubenswrapper[4772]: I1122 10:56:14.706780 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-5bfbbb859d-bjmld" Nov 22 10:56:14 crc kubenswrapper[4772]: I1122 10:56:14.708417 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-tmvfp" Nov 22 10:56:14 crc kubenswrapper[4772]: I1122 10:56:14.710387 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-748967c98-kx6h5"] Nov 22 10:56:14 crc kubenswrapper[4772]: I1122 10:56:14.711334 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-748967c98-kx6h5" Nov 22 10:56:14 crc kubenswrapper[4772]: I1122 10:56:14.713220 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-dsb88" Nov 22 10:56:14 crc kubenswrapper[4772]: I1122 10:56:14.721755 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-5bfbbb859d-bjmld"] Nov 22 10:56:14 crc kubenswrapper[4772]: I1122 10:56:14.727996 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6788cc6d75-9bnwv"] Nov 22 10:56:14 crc kubenswrapper[4772]: I1122 10:56:14.729284 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6788cc6d75-9bnwv" Nov 22 10:56:14 crc kubenswrapper[4772]: I1122 10:56:14.731025 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-f6dzn" Nov 22 10:56:14 crc kubenswrapper[4772]: I1122 10:56:14.732788 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-748967c98-kx6h5"] Nov 22 10:56:14 crc kubenswrapper[4772]: I1122 10:56:14.743944 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6788cc6d75-9bnwv"] Nov 22 10:56:14 crc kubenswrapper[4772]: I1122 10:56:14.760672 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-6f95d84fd6-bwjrb"] Nov 22 10:56:14 crc kubenswrapper[4772]: I1122 10:56:14.761604 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-6f95d84fd6-bwjrb" Nov 22 10:56:14 crc kubenswrapper[4772]: I1122 10:56:14.764941 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-sn5mp" Nov 22 10:56:14 crc kubenswrapper[4772]: I1122 10:56:14.780418 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-6f95d84fd6-bwjrb"] Nov 22 10:56:14 crc kubenswrapper[4772]: I1122 10:56:14.787858 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-698d6fd7d6-x46c7"] Nov 22 10:56:14 crc kubenswrapper[4772]: I1122 10:56:14.789032 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-698d6fd7d6-x46c7" Nov 22 10:56:14 crc kubenswrapper[4772]: I1122 10:56:14.792303 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-5w5ld" Nov 22 10:56:14 crc kubenswrapper[4772]: I1122 10:56:14.809884 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-698d6fd7d6-x46c7"] Nov 22 10:56:14 crc kubenswrapper[4772]: I1122 10:56:14.882894 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-7d5d9fd47f-xk9c8"] Nov 22 10:56:14 crc kubenswrapper[4772]: I1122 10:56:14.884073 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tn7v\" (UniqueName: \"kubernetes.io/projected/19381059-85ea-461e-baca-f0f511fdb677-kube-api-access-2tn7v\") pod \"barbican-operator-controller-manager-5bfbbb859d-bjmld\" (UID: \"19381059-85ea-461e-baca-f0f511fdb677\") " pod="openstack-operators/barbican-operator-controller-manager-5bfbbb859d-bjmld" Nov 22 10:56:14 crc kubenswrapper[4772]: I1122 10:56:14.884274 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dz2rd\" (UniqueName: \"kubernetes.io/projected/b90351ff-d9b5-4d42-b7ef-915a5bd4251d-kube-api-access-dz2rd\") pod \"glance-operator-controller-manager-6f95d84fd6-bwjrb\" (UID: \"b90351ff-d9b5-4d42-b7ef-915a5bd4251d\") " pod="openstack-operators/glance-operator-controller-manager-6f95d84fd6-bwjrb" Nov 22 10:56:14 crc kubenswrapper[4772]: I1122 10:56:14.884319 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-7d5d9fd47f-xk9c8" Nov 22 10:56:14 crc kubenswrapper[4772]: I1122 10:56:14.884416 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwmf9\" (UniqueName: \"kubernetes.io/projected/89c6d87d-3d72-43b9-a56e-bb94322cc856-kube-api-access-bwmf9\") pod \"designate-operator-controller-manager-6788cc6d75-9bnwv\" (UID: \"89c6d87d-3d72-43b9-a56e-bb94322cc856\") " pod="openstack-operators/designate-operator-controller-manager-6788cc6d75-9bnwv" Nov 22 10:56:14 crc kubenswrapper[4772]: I1122 10:56:14.884554 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcd8q\" (UniqueName: \"kubernetes.io/projected/8085093b-cd6b-4ef5-9935-82eb224499c2-kube-api-access-bcd8q\") pod \"cinder-operator-controller-manager-748967c98-kx6h5\" (UID: \"8085093b-cd6b-4ef5-9935-82eb224499c2\") " pod="openstack-operators/cinder-operator-controller-manager-748967c98-kx6h5" Nov 22 10:56:14 crc kubenswrapper[4772]: I1122 10:56:14.884683 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-878tj\" (UniqueName: \"kubernetes.io/projected/2abf13f1-7bd8-4ea6-85a3-5c5658de7f48-kube-api-access-878tj\") pod \"heat-operator-controller-manager-698d6fd7d6-x46c7\" (UID: \"2abf13f1-7bd8-4ea6-85a3-5c5658de7f48\") " pod="openstack-operators/heat-operator-controller-manager-698d6fd7d6-x46c7" Nov 22 10:56:14 crc kubenswrapper[4772]: I1122 10:56:14.887155 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-5q88k" Nov 22 10:56:14 crc kubenswrapper[4772]: 
I1122 10:56:14.924111 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-7d5d9fd47f-xk9c8"] Nov 22 10:56:14 crc kubenswrapper[4772]: I1122 10:56:14.936122 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-54485f899-nhqm7"] Nov 22 10:56:14 crc kubenswrapper[4772]: I1122 10:56:14.937522 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-54485f899-nhqm7" Nov 22 10:56:14 crc kubenswrapper[4772]: I1122 10:56:14.943973 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-6c55d8d69b-w8q6b"] Nov 22 10:56:14 crc kubenswrapper[4772]: I1122 10:56:14.944591 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-kp2jr" Nov 22 10:56:14 crc kubenswrapper[4772]: I1122 10:56:14.945101 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-6c55d8d69b-w8q6b" Nov 22 10:56:14 crc kubenswrapper[4772]: I1122 10:56:14.947062 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-tcgx9" Nov 22 10:56:14 crc kubenswrapper[4772]: I1122 10:56:14.947218 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Nov 22 10:56:14 crc kubenswrapper[4772]: I1122 10:56:14.979863 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-54485f899-nhqm7"] Nov 22 10:56:14 crc kubenswrapper[4772]: I1122 10:56:14.986912 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-878tj\" (UniqueName: \"kubernetes.io/projected/2abf13f1-7bd8-4ea6-85a3-5c5658de7f48-kube-api-access-878tj\") pod \"heat-operator-controller-manager-698d6fd7d6-x46c7\" (UID: \"2abf13f1-7bd8-4ea6-85a3-5c5658de7f48\") " pod="openstack-operators/heat-operator-controller-manager-698d6fd7d6-x46c7" Nov 22 10:56:14 crc kubenswrapper[4772]: I1122 10:56:14.986972 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m584t\" (UniqueName: \"kubernetes.io/projected/ede58bb8-1f1d-4637-a89a-5075266ea932-kube-api-access-m584t\") pod \"horizon-operator-controller-manager-7d5d9fd47f-xk9c8\" (UID: \"ede58bb8-1f1d-4637-a89a-5075266ea932\") " pod="openstack-operators/horizon-operator-controller-manager-7d5d9fd47f-xk9c8" Nov 22 10:56:14 crc kubenswrapper[4772]: I1122 10:56:14.986993 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwphs\" (UniqueName: \"kubernetes.io/projected/7d08d7e3-35a3-4dd2-b1c5-50980b9c20a9-kube-api-access-cwphs\") pod \"infra-operator-controller-manager-6c55d8d69b-w8q6b\" (UID: \"7d08d7e3-35a3-4dd2-b1c5-50980b9c20a9\") " pod="openstack-operators/infra-operator-controller-manager-6c55d8d69b-w8q6b" Nov 22 10:56:14 crc kubenswrapper[4772]: I1122 10:56:14.987036 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tn7v\" (UniqueName: \"kubernetes.io/projected/19381059-85ea-461e-baca-f0f511fdb677-kube-api-access-2tn7v\") pod \"barbican-operator-controller-manager-5bfbbb859d-bjmld\" (UID: 
\"19381059-85ea-461e-baca-f0f511fdb677\") " pod="openstack-operators/barbican-operator-controller-manager-5bfbbb859d-bjmld" Nov 22 10:56:14 crc kubenswrapper[4772]: I1122 10:56:14.987562 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7d08d7e3-35a3-4dd2-b1c5-50980b9c20a9-cert\") pod \"infra-operator-controller-manager-6c55d8d69b-w8q6b\" (UID: \"7d08d7e3-35a3-4dd2-b1c5-50980b9c20a9\") " pod="openstack-operators/infra-operator-controller-manager-6c55d8d69b-w8q6b" Nov 22 10:56:14 crc kubenswrapper[4772]: I1122 10:56:14.987753 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dz2rd\" (UniqueName: \"kubernetes.io/projected/b90351ff-d9b5-4d42-b7ef-915a5bd4251d-kube-api-access-dz2rd\") pod \"glance-operator-controller-manager-6f95d84fd6-bwjrb\" (UID: \"b90351ff-d9b5-4d42-b7ef-915a5bd4251d\") " pod="openstack-operators/glance-operator-controller-manager-6f95d84fd6-bwjrb" Nov 22 10:56:14 crc kubenswrapper[4772]: I1122 10:56:14.987814 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwmf9\" (UniqueName: \"kubernetes.io/projected/89c6d87d-3d72-43b9-a56e-bb94322cc856-kube-api-access-bwmf9\") pod \"designate-operator-controller-manager-6788cc6d75-9bnwv\" (UID: \"89c6d87d-3d72-43b9-a56e-bb94322cc856\") " pod="openstack-operators/designate-operator-controller-manager-6788cc6d75-9bnwv" Nov 22 10:56:14 crc kubenswrapper[4772]: I1122 10:56:14.987853 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcd8q\" (UniqueName: \"kubernetes.io/projected/8085093b-cd6b-4ef5-9935-82eb224499c2-kube-api-access-bcd8q\") pod \"cinder-operator-controller-manager-748967c98-kx6h5\" (UID: \"8085093b-cd6b-4ef5-9935-82eb224499c2\") " pod="openstack-operators/cinder-operator-controller-manager-748967c98-kx6h5" Nov 22 10:56:14 crc kubenswrapper[4772]: I1122 10:56:14.987902 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmktf\" (UniqueName: \"kubernetes.io/projected/66c75ee1-78ca-448b-a2f0-0946014f82ff-kube-api-access-pmktf\") pod \"ironic-operator-controller-manager-54485f899-nhqm7\" (UID: \"66c75ee1-78ca-448b-a2f0-0946014f82ff\") " pod="openstack-operators/ironic-operator-controller-manager-54485f899-nhqm7" Nov 22 10:56:14 crc kubenswrapper[4772]: I1122 10:56:14.990157 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-79cc9d59f5-rvc7z"] Nov 22 10:56:14 crc kubenswrapper[4772]: I1122 10:56:14.991817 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-79cc9d59f5-rvc7z" Nov 22 10:56:14 crc kubenswrapper[4772]: I1122 10:56:14.998125 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-7n2gq" Nov 22 10:56:14 crc kubenswrapper[4772]: I1122 10:56:14.998633 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-6c55d8d69b-w8q6b"] Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.009900 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-79cc9d59f5-rvc7z"] Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.015967 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-878tj\" (UniqueName: \"kubernetes.io/projected/2abf13f1-7bd8-4ea6-85a3-5c5658de7f48-kube-api-access-878tj\") pod \"heat-operator-controller-manager-698d6fd7d6-x46c7\" (UID: \"2abf13f1-7bd8-4ea6-85a3-5c5658de7f48\") " pod="openstack-operators/heat-operator-controller-manager-698d6fd7d6-x46c7" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.018682 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcd8q\" (UniqueName: \"kubernetes.io/projected/8085093b-cd6b-4ef5-9935-82eb224499c2-kube-api-access-bcd8q\") pod \"cinder-operator-controller-manager-748967c98-kx6h5\" (UID: \"8085093b-cd6b-4ef5-9935-82eb224499c2\") " pod="openstack-operators/cinder-operator-controller-manager-748967c98-kx6h5" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.019450 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-58879495c-wwt28"] Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.020690 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-58879495c-wwt28" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.025298 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-646fd589f9-jpzwd"] Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.025434 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dz2rd\" (UniqueName: \"kubernetes.io/projected/b90351ff-d9b5-4d42-b7ef-915a5bd4251d-kube-api-access-dz2rd\") pod \"glance-operator-controller-manager-6f95d84fd6-bwjrb\" (UID: \"b90351ff-d9b5-4d42-b7ef-915a5bd4251d\") " pod="openstack-operators/glance-operator-controller-manager-6f95d84fd6-bwjrb" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.026471 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-646fd589f9-jpzwd" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.035672 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tn7v\" (UniqueName: \"kubernetes.io/projected/19381059-85ea-461e-baca-f0f511fdb677-kube-api-access-2tn7v\") pod \"barbican-operator-controller-manager-5bfbbb859d-bjmld\" (UID: \"19381059-85ea-461e-baca-f0f511fdb677\") " pod="openstack-operators/barbican-operator-controller-manager-5bfbbb859d-bjmld" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.036543 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-q48sz" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.036835 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-rsh96" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.053162 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-748967c98-kx6h5" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.066902 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-64d7c556cd-8772n"] Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.067476 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwmf9\" (UniqueName: \"kubernetes.io/projected/89c6d87d-3d72-43b9-a56e-bb94322cc856-kube-api-access-bwmf9\") pod \"designate-operator-controller-manager-6788cc6d75-9bnwv\" (UID: \"89c6d87d-3d72-43b9-a56e-bb94322cc856\") " pod="openstack-operators/designate-operator-controller-manager-6788cc6d75-9bnwv" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.068154 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-8772n" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.070753 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-mnznz" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.087396 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-646fd589f9-jpzwd"] Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.088477 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-687rk\" (UniqueName: \"kubernetes.io/projected/bc5e47a3-441e-4076-b726-99bb8cd36d95-kube-api-access-687rk\") pod \"manila-operator-controller-manager-646fd589f9-jpzwd\" (UID: \"bc5e47a3-441e-4076-b726-99bb8cd36d95\") " pod="openstack-operators/manila-operator-controller-manager-646fd589f9-jpzwd" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.088522 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxcm4\" (UniqueName: \"kubernetes.io/projected/8044f4e1-088e-4e18-a9c4-a35265e4b62a-kube-api-access-vxcm4\") pod \"keystone-operator-controller-manager-79cc9d59f5-rvc7z\" (UID: \"8044f4e1-088e-4e18-a9c4-a35265e4b62a\") " pod="openstack-operators/keystone-operator-controller-manager-79cc9d59f5-rvc7z" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.088768 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-6f95d84fd6-bwjrb" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.088852 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmktf\" (UniqueName: \"kubernetes.io/projected/66c75ee1-78ca-448b-a2f0-0946014f82ff-kube-api-access-pmktf\") pod \"ironic-operator-controller-manager-54485f899-nhqm7\" (UID: \"66c75ee1-78ca-448b-a2f0-0946014f82ff\") " pod="openstack-operators/ironic-operator-controller-manager-54485f899-nhqm7" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.088885 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m584t\" (UniqueName: \"kubernetes.io/projected/ede58bb8-1f1d-4637-a89a-5075266ea932-kube-api-access-m584t\") pod \"horizon-operator-controller-manager-7d5d9fd47f-xk9c8\" (UID: \"ede58bb8-1f1d-4637-a89a-5075266ea932\") " pod="openstack-operators/horizon-operator-controller-manager-7d5d9fd47f-xk9c8" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.088906 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwphs\" (UniqueName: \"kubernetes.io/projected/7d08d7e3-35a3-4dd2-b1c5-50980b9c20a9-kube-api-access-cwphs\") pod \"infra-operator-controller-manager-6c55d8d69b-w8q6b\" (UID: \"7d08d7e3-35a3-4dd2-b1c5-50980b9c20a9\") " pod="openstack-operators/infra-operator-controller-manager-6c55d8d69b-w8q6b" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.088935 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvlnd\" (UniqueName: \"kubernetes.io/projected/58546700-44a8-45ad-bbbc-1ee40a090fd7-kube-api-access-mvlnd\") pod \"neutron-operator-controller-manager-58879495c-wwt28\" (UID: \"58546700-44a8-45ad-bbbc-1ee40a090fd7\") " pod="openstack-operators/neutron-operator-controller-manager-58879495c-wwt28" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.089080 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7d08d7e3-35a3-4dd2-b1c5-50980b9c20a9-cert\") pod \"infra-operator-controller-manager-6c55d8d69b-w8q6b\" (UID: \"7d08d7e3-35a3-4dd2-b1c5-50980b9c20a9\") " pod="openstack-operators/infra-operator-controller-manager-6c55d8d69b-w8q6b" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.089111 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2kz8\" (UniqueName: \"kubernetes.io/projected/09182dc6-4ee5-4ad8-9298-f13a7037ac9b-kube-api-access-q2kz8\") pod \"mariadb-operator-controller-manager-64d7c556cd-8772n\" (UID: \"09182dc6-4ee5-4ad8-9298-f13a7037ac9b\") " pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-8772n" Nov 22 10:56:15 crc kubenswrapper[4772]: E1122 10:56:15.089230 4772 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 22 10:56:15 crc kubenswrapper[4772]: E1122 10:56:15.089285 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d08d7e3-35a3-4dd2-b1c5-50980b9c20a9-cert podName:7d08d7e3-35a3-4dd2-b1c5-50980b9c20a9 nodeName:}" failed. No retries permitted until 2025-11-22 10:56:15.589267582 +0000 UTC m=+1095.828712076 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7d08d7e3-35a3-4dd2-b1c5-50980b9c20a9-cert") pod "infra-operator-controller-manager-6c55d8d69b-w8q6b" (UID: "7d08d7e3-35a3-4dd2-b1c5-50980b9c20a9") : secret "infra-operator-webhook-server-cert" not found Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.099944 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-58879495c-wwt28"] Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.108146 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m584t\" (UniqueName: \"kubernetes.io/projected/ede58bb8-1f1d-4637-a89a-5075266ea932-kube-api-access-m584t\") pod \"horizon-operator-controller-manager-7d5d9fd47f-xk9c8\" (UID: \"ede58bb8-1f1d-4637-a89a-5075266ea932\") " pod="openstack-operators/horizon-operator-controller-manager-7d5d9fd47f-xk9c8" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.108325 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-79d658b66d-cgk6m"] Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.108809 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-698d6fd7d6-x46c7" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.109220 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-d5fb87cb8-v8z2p"] Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.109910 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-d5fb87cb8-v8z2p" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.110696 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-79d658b66d-cgk6m" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.112209 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-2s77n" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.112559 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwphs\" (UniqueName: \"kubernetes.io/projected/7d08d7e3-35a3-4dd2-b1c5-50980b9c20a9-kube-api-access-cwphs\") pod \"infra-operator-controller-manager-6c55d8d69b-w8q6b\" (UID: \"7d08d7e3-35a3-4dd2-b1c5-50980b9c20a9\") " pod="openstack-operators/infra-operator-controller-manager-6c55d8d69b-w8q6b" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.116815 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-p5b4x" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.123283 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-64d7c556cd-8772n"] Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.128995 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-79d658b66d-cgk6m"] Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.135244 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-d5fb87cb8-v8z2p"] Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.135954 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmktf\" (UniqueName: \"kubernetes.io/projected/66c75ee1-78ca-448b-a2f0-0946014f82ff-kube-api-access-pmktf\") pod \"ironic-operator-controller-manager-54485f899-nhqm7\" (UID: \"66c75ee1-78ca-448b-a2f0-0946014f82ff\") " pod="openstack-operators/ironic-operator-controller-manager-54485f899-nhqm7" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.154877 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-7r9qs"] Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.155958 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-7r9qs" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.158032 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-xm22r" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.162843 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-867d87977b-jxm8x"] Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.164140 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-867d87977b-jxm8x" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.168815 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-xwrz4" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.180558 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-nn7dj"] Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.181544 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-nn7dj" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.183161 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-7r9qs"] Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.184861 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.188080 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-7j5sp" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.189819 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8dsf\" (UniqueName: \"kubernetes.io/projected/1d228e15-a43a-4b2a-b8a5-958a6ce484a7-kube-api-access-g8dsf\") pod \"nova-operator-controller-manager-79d658b66d-cgk6m\" (UID: \"1d228e15-a43a-4b2a-b8a5-958a6ce484a7\") " pod="openstack-operators/nova-operator-controller-manager-79d658b66d-cgk6m" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.189858 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ed653c9f-8aac-4989-bc46-169893057f90-cert\") pod \"openstack-baremetal-operator-controller-manager-77868f484-nn7dj\" (UID: \"ed653c9f-8aac-4989-bc46-169893057f90\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-nn7dj" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.189890 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdw8c\" (UniqueName: \"kubernetes.io/projected/ed653c9f-8aac-4989-bc46-169893057f90-kube-api-access-zdw8c\") pod \"openstack-baremetal-operator-controller-manager-77868f484-nn7dj\" (UID: \"ed653c9f-8aac-4989-bc46-169893057f90\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-nn7dj" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.189916 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvlnd\" (UniqueName: \"kubernetes.io/projected/58546700-44a8-45ad-bbbc-1ee40a090fd7-kube-api-access-mvlnd\") pod \"neutron-operator-controller-manager-58879495c-wwt28\" (UID: \"58546700-44a8-45ad-bbbc-1ee40a090fd7\") " pod="openstack-operators/neutron-operator-controller-manager-58879495c-wwt28" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.189954 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l68nz\" (UniqueName: \"kubernetes.io/projected/db103267-50bd-4819-a33e-90a787ddb249-kube-api-access-l68nz\") pod \"ovn-operator-controller-manager-5b67cfc8fb-7r9qs\" (UID: \"db103267-50bd-4819-a33e-90a787ddb249\") " pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-7r9qs" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.189980 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2kz8\" (UniqueName: \"kubernetes.io/projected/09182dc6-4ee5-4ad8-9298-f13a7037ac9b-kube-api-access-q2kz8\") pod \"mariadb-operator-controller-manager-64d7c556cd-8772n\" (UID: \"09182dc6-4ee5-4ad8-9298-f13a7037ac9b\") " 
pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-8772n" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.190010 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-687rk\" (UniqueName: \"kubernetes.io/projected/bc5e47a3-441e-4076-b726-99bb8cd36d95-kube-api-access-687rk\") pod \"manila-operator-controller-manager-646fd589f9-jpzwd\" (UID: \"bc5e47a3-441e-4076-b726-99bb8cd36d95\") " pod="openstack-operators/manila-operator-controller-manager-646fd589f9-jpzwd" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.190120 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrzjc\" (UniqueName: \"kubernetes.io/projected/1037af09-c926-409c-9732-26cf293cc210-kube-api-access-zrzjc\") pod \"placement-operator-controller-manager-867d87977b-jxm8x\" (UID: \"1037af09-c926-409c-9732-26cf293cc210\") " pod="openstack-operators/placement-operator-controller-manager-867d87977b-jxm8x" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.190139 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxcm4\" (UniqueName: \"kubernetes.io/projected/8044f4e1-088e-4e18-a9c4-a35265e4b62a-kube-api-access-vxcm4\") pod \"keystone-operator-controller-manager-79cc9d59f5-rvc7z\" (UID: \"8044f4e1-088e-4e18-a9c4-a35265e4b62a\") " pod="openstack-operators/keystone-operator-controller-manager-79cc9d59f5-rvc7z" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.190175 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8w9gf\" (UniqueName: \"kubernetes.io/projected/7f92807d-a811-4354-a9b3-4efe75db8096-kube-api-access-8w9gf\") pod \"octavia-operator-controller-manager-d5fb87cb8-v8z2p\" (UID: \"7f92807d-a811-4354-a9b3-4efe75db8096\") " pod="openstack-operators/octavia-operator-controller-manager-d5fb87cb8-v8z2p" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.193033 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-867d87977b-jxm8x"] Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.208396 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-7d5d9fd47f-xk9c8" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.216577 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxcm4\" (UniqueName: \"kubernetes.io/projected/8044f4e1-088e-4e18-a9c4-a35265e4b62a-kube-api-access-vxcm4\") pod \"keystone-operator-controller-manager-79cc9d59f5-rvc7z\" (UID: \"8044f4e1-088e-4e18-a9c4-a35265e4b62a\") " pod="openstack-operators/keystone-operator-controller-manager-79cc9d59f5-rvc7z" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.218072 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-687rk\" (UniqueName: \"kubernetes.io/projected/bc5e47a3-441e-4076-b726-99bb8cd36d95-kube-api-access-687rk\") pod \"manila-operator-controller-manager-646fd589f9-jpzwd\" (UID: \"bc5e47a3-441e-4076-b726-99bb8cd36d95\") " pod="openstack-operators/manila-operator-controller-manager-646fd589f9-jpzwd" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.221892 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2kz8\" (UniqueName: \"kubernetes.io/projected/09182dc6-4ee5-4ad8-9298-f13a7037ac9b-kube-api-access-q2kz8\") pod \"mariadb-operator-controller-manager-64d7c556cd-8772n\" (UID: \"09182dc6-4ee5-4ad8-9298-f13a7037ac9b\") " pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-8772n" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.227786 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvlnd\" (UniqueName: \"kubernetes.io/projected/58546700-44a8-45ad-bbbc-1ee40a090fd7-kube-api-access-mvlnd\") pod \"neutron-operator-controller-manager-58879495c-wwt28\" (UID: \"58546700-44a8-45ad-bbbc-1ee40a090fd7\") " pod="openstack-operators/neutron-operator-controller-manager-58879495c-wwt28" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.238778 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-8772n" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.264362 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-54485f899-nhqm7" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.281917 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-nn7dj"] Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.292449 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l68nz\" (UniqueName: \"kubernetes.io/projected/db103267-50bd-4819-a33e-90a787ddb249-kube-api-access-l68nz\") pod \"ovn-operator-controller-manager-5b67cfc8fb-7r9qs\" (UID: \"db103267-50bd-4819-a33e-90a787ddb249\") " pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-7r9qs" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.292553 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrzjc\" (UniqueName: \"kubernetes.io/projected/1037af09-c926-409c-9732-26cf293cc210-kube-api-access-zrzjc\") pod \"placement-operator-controller-manager-867d87977b-jxm8x\" (UID: \"1037af09-c926-409c-9732-26cf293cc210\") " pod="openstack-operators/placement-operator-controller-manager-867d87977b-jxm8x" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.292587 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8w9gf\" (UniqueName: \"kubernetes.io/projected/7f92807d-a811-4354-a9b3-4efe75db8096-kube-api-access-8w9gf\") pod \"octavia-operator-controller-manager-d5fb87cb8-v8z2p\" (UID: \"7f92807d-a811-4354-a9b3-4efe75db8096\") " pod="openstack-operators/octavia-operator-controller-manager-d5fb87cb8-v8z2p" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.292614 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8dsf\" (UniqueName: \"kubernetes.io/projected/1d228e15-a43a-4b2a-b8a5-958a6ce484a7-kube-api-access-g8dsf\") pod \"nova-operator-controller-manager-79d658b66d-cgk6m\" (UID: \"1d228e15-a43a-4b2a-b8a5-958a6ce484a7\") " pod="openstack-operators/nova-operator-controller-manager-79d658b66d-cgk6m" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.292643 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ed653c9f-8aac-4989-bc46-169893057f90-cert\") pod \"openstack-baremetal-operator-controller-manager-77868f484-nn7dj\" (UID: \"ed653c9f-8aac-4989-bc46-169893057f90\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-nn7dj" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.292678 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdw8c\" (UniqueName: \"kubernetes.io/projected/ed653c9f-8aac-4989-bc46-169893057f90-kube-api-access-zdw8c\") pod \"openstack-baremetal-operator-controller-manager-77868f484-nn7dj\" (UID: \"ed653c9f-8aac-4989-bc46-169893057f90\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-nn7dj" Nov 22 10:56:15 crc kubenswrapper[4772]: E1122 10:56:15.294419 4772 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 22 10:56:15 crc kubenswrapper[4772]: E1122 10:56:15.294523 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed653c9f-8aac-4989-bc46-169893057f90-cert podName:ed653c9f-8aac-4989-bc46-169893057f90 
nodeName:}" failed. No retries permitted until 2025-11-22 10:56:15.794495184 +0000 UTC m=+1096.033939678 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ed653c9f-8aac-4989-bc46-169893057f90-cert") pod "openstack-baremetal-operator-controller-manager-77868f484-nn7dj" (UID: "ed653c9f-8aac-4989-bc46-169893057f90") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.338931 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l68nz\" (UniqueName: \"kubernetes.io/projected/db103267-50bd-4819-a33e-90a787ddb249-kube-api-access-l68nz\") pod \"ovn-operator-controller-manager-5b67cfc8fb-7r9qs\" (UID: \"db103267-50bd-4819-a33e-90a787ddb249\") " pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-7r9qs" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.344452 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-5bfbbb859d-bjmld" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.349984 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdw8c\" (UniqueName: \"kubernetes.io/projected/ed653c9f-8aac-4989-bc46-169893057f90-kube-api-access-zdw8c\") pod \"openstack-baremetal-operator-controller-manager-77868f484-nn7dj\" (UID: \"ed653c9f-8aac-4989-bc46-169893057f90\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-nn7dj" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.350937 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrzjc\" (UniqueName: \"kubernetes.io/projected/1037af09-c926-409c-9732-26cf293cc210-kube-api-access-zrzjc\") pod \"placement-operator-controller-manager-867d87977b-jxm8x\" (UID: \"1037af09-c926-409c-9732-26cf293cc210\") " pod="openstack-operators/placement-operator-controller-manager-867d87977b-jxm8x" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.356854 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8w9gf\" (UniqueName: \"kubernetes.io/projected/7f92807d-a811-4354-a9b3-4efe75db8096-kube-api-access-8w9gf\") pod \"octavia-operator-controller-manager-d5fb87cb8-v8z2p\" (UID: \"7f92807d-a811-4354-a9b3-4efe75db8096\") " pod="openstack-operators/octavia-operator-controller-manager-d5fb87cb8-v8z2p" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.366572 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6788cc6d75-9bnwv" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.368428 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-79cc9d59f5-rvc7z" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.370110 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8dsf\" (UniqueName: \"kubernetes.io/projected/1d228e15-a43a-4b2a-b8a5-958a6ce484a7-kube-api-access-g8dsf\") pod \"nova-operator-controller-manager-79d658b66d-cgk6m\" (UID: \"1d228e15-a43a-4b2a-b8a5-958a6ce484a7\") " pod="openstack-operators/nova-operator-controller-manager-79d658b66d-cgk6m" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.376558 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-7r9qs" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.381251 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-8f6687c44-zv2pz"] Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.385617 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-8f6687c44-zv2pz" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.388145 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-258hk" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.419935 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-8f6687c44-zv2pz"] Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.421072 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-867d87977b-jxm8x" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.445095 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-58879495c-wwt28" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.462258 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-646fd589f9-jpzwd" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.464412 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-695797c565-wr5mj"] Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.465348 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-695797c565-wr5mj"] Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.465419 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-695797c565-wr5mj" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.468611 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-pnwwh" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.495259 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scpzs\" (UniqueName: \"kubernetes.io/projected/0e43cbd9-c19e-4747-b833-39529cfa3d9d-kube-api-access-scpzs\") pod \"swift-operator-controller-manager-8f6687c44-zv2pz\" (UID: \"0e43cbd9-c19e-4747-b833-39529cfa3d9d\") " pod="openstack-operators/swift-operator-controller-manager-8f6687c44-zv2pz" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.504201 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-77db6bf9c-2d4vx"] Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.505947 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-77db6bf9c-2d4vx" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.513440 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-k8qkh" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.520839 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-77db6bf9c-2d4vx"] Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.545160 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6b56b8849f-wblft"] Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.546260 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6b56b8849f-wblft" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.550897 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-2ml2j" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.559488 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6b56b8849f-wblft"] Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.576430 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-d5fb87cb8-v8z2p" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.597411 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6gb8\" (UniqueName: \"kubernetes.io/projected/bc363239-f347-4379-8fb3-e499b555a263-kube-api-access-b6gb8\") pod \"test-operator-controller-manager-77db6bf9c-2d4vx\" (UID: \"bc363239-f347-4379-8fb3-e499b555a263\") " pod="openstack-operators/test-operator-controller-manager-77db6bf9c-2d4vx" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.597483 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7d08d7e3-35a3-4dd2-b1c5-50980b9c20a9-cert\") pod \"infra-operator-controller-manager-6c55d8d69b-w8q6b\" (UID: \"7d08d7e3-35a3-4dd2-b1c5-50980b9c20a9\") " pod="openstack-operators/infra-operator-controller-manager-6c55d8d69b-w8q6b" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.597502 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scpzs\" (UniqueName: \"kubernetes.io/projected/0e43cbd9-c19e-4747-b833-39529cfa3d9d-kube-api-access-scpzs\") pod \"swift-operator-controller-manager-8f6687c44-zv2pz\" (UID: \"0e43cbd9-c19e-4747-b833-39529cfa3d9d\") " pod="openstack-operators/swift-operator-controller-manager-8f6687c44-zv2pz" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.597550 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlpsp\" (UniqueName: \"kubernetes.io/projected/8456f23d-dc5a-4ddf-b853-46fcc56593e8-kube-api-access-wlpsp\") pod \"telemetry-operator-controller-manager-695797c565-wr5mj\" (UID: \"8456f23d-dc5a-4ddf-b853-46fcc56593e8\") " pod="openstack-operators/telemetry-operator-controller-manager-695797c565-wr5mj" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.602710 4772 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/openstack-operator-controller-manager-7f4bc68b84-vxcp7"] Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.603267 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7d08d7e3-35a3-4dd2-b1c5-50980b9c20a9-cert\") pod \"infra-operator-controller-manager-6c55d8d69b-w8q6b\" (UID: \"7d08d7e3-35a3-4dd2-b1c5-50980b9c20a9\") " pod="openstack-operators/infra-operator-controller-manager-6c55d8d69b-w8q6b" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.604440 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7f4bc68b84-vxcp7" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.610681 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.611243 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-f69wg" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.611859 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7f4bc68b84-vxcp7"] Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.649837 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-79d658b66d-cgk6m" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.656230 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-2s8qw"] Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.657136 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-2s8qw" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.674884 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-2s8qw"] Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.676349 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scpzs\" (UniqueName: \"kubernetes.io/projected/0e43cbd9-c19e-4747-b833-39529cfa3d9d-kube-api-access-scpzs\") pod \"swift-operator-controller-manager-8f6687c44-zv2pz\" (UID: \"0e43cbd9-c19e-4747-b833-39529cfa3d9d\") " pod="openstack-operators/swift-operator-controller-manager-8f6687c44-zv2pz" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.697725 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-7xxs9" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.701613 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdpzt\" (UniqueName: \"kubernetes.io/projected/2ec76c17-6475-4349-8aaa-47c8b6caa08e-kube-api-access-hdpzt\") pod \"openstack-operator-controller-manager-7f4bc68b84-vxcp7\" (UID: \"2ec76c17-6475-4349-8aaa-47c8b6caa08e\") " pod="openstack-operators/openstack-operator-controller-manager-7f4bc68b84-vxcp7" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.701679 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6gb8\" (UniqueName: \"kubernetes.io/projected/bc363239-f347-4379-8fb3-e499b555a263-kube-api-access-b6gb8\") pod \"test-operator-controller-manager-77db6bf9c-2d4vx\" (UID: \"bc363239-f347-4379-8fb3-e499b555a263\") " pod="openstack-operators/test-operator-controller-manager-77db6bf9c-2d4vx" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.701698 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64dg2\" (UniqueName: \"kubernetes.io/projected/e7a30218-add6-4170-948c-b6b9f8b960c8-kube-api-access-64dg2\") pod \"watcher-operator-controller-manager-6b56b8849f-wblft\" (UID: \"e7a30218-add6-4170-948c-b6b9f8b960c8\") " pod="openstack-operators/watcher-operator-controller-manager-6b56b8849f-wblft" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.701746 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlpsp\" (UniqueName: \"kubernetes.io/projected/8456f23d-dc5a-4ddf-b853-46fcc56593e8-kube-api-access-wlpsp\") pod \"telemetry-operator-controller-manager-695797c565-wr5mj\" (UID: \"8456f23d-dc5a-4ddf-b853-46fcc56593e8\") " pod="openstack-operators/telemetry-operator-controller-manager-695797c565-wr5mj" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.701771 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ec76c17-6475-4349-8aaa-47c8b6caa08e-cert\") pod \"openstack-operator-controller-manager-7f4bc68b84-vxcp7\" (UID: \"2ec76c17-6475-4349-8aaa-47c8b6caa08e\") " pod="openstack-operators/openstack-operator-controller-manager-7f4bc68b84-vxcp7" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.736694 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6gb8\" (UniqueName: \"kubernetes.io/projected/bc363239-f347-4379-8fb3-e499b555a263-kube-api-access-b6gb8\") 
pod \"test-operator-controller-manager-77db6bf9c-2d4vx\" (UID: \"bc363239-f347-4379-8fb3-e499b555a263\") " pod="openstack-operators/test-operator-controller-manager-77db6bf9c-2d4vx" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.767836 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlpsp\" (UniqueName: \"kubernetes.io/projected/8456f23d-dc5a-4ddf-b853-46fcc56593e8-kube-api-access-wlpsp\") pod \"telemetry-operator-controller-manager-695797c565-wr5mj\" (UID: \"8456f23d-dc5a-4ddf-b853-46fcc56593e8\") " pod="openstack-operators/telemetry-operator-controller-manager-695797c565-wr5mj" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.778631 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-748967c98-kx6h5"] Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.788643 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-8f6687c44-zv2pz" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.803542 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ec76c17-6475-4349-8aaa-47c8b6caa08e-cert\") pod \"openstack-operator-controller-manager-7f4bc68b84-vxcp7\" (UID: \"2ec76c17-6475-4349-8aaa-47c8b6caa08e\") " pod="openstack-operators/openstack-operator-controller-manager-7f4bc68b84-vxcp7" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.803616 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdpzt\" (UniqueName: \"kubernetes.io/projected/2ec76c17-6475-4349-8aaa-47c8b6caa08e-kube-api-access-hdpzt\") pod \"openstack-operator-controller-manager-7f4bc68b84-vxcp7\" (UID: \"2ec76c17-6475-4349-8aaa-47c8b6caa08e\") " pod="openstack-operators/openstack-operator-controller-manager-7f4bc68b84-vxcp7" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.803646 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ed653c9f-8aac-4989-bc46-169893057f90-cert\") pod \"openstack-baremetal-operator-controller-manager-77868f484-nn7dj\" (UID: \"ed653c9f-8aac-4989-bc46-169893057f90\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-nn7dj" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.803692 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64dg2\" (UniqueName: \"kubernetes.io/projected/e7a30218-add6-4170-948c-b6b9f8b960c8-kube-api-access-64dg2\") pod \"watcher-operator-controller-manager-6b56b8849f-wblft\" (UID: \"e7a30218-add6-4170-948c-b6b9f8b960c8\") " pod="openstack-operators/watcher-operator-controller-manager-6b56b8849f-wblft" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.803731 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwgv7\" (UniqueName: \"kubernetes.io/projected/7823e2ab-1a7a-4a3f-9749-04c705f4336e-kube-api-access-xwgv7\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-2s8qw\" (UID: \"7823e2ab-1a7a-4a3f-9749-04c705f4336e\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-2s8qw" Nov 22 10:56:15 crc kubenswrapper[4772]: E1122 10:56:15.803869 4772 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 22 10:56:15 crc kubenswrapper[4772]: 
E1122 10:56:15.803925 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ec76c17-6475-4349-8aaa-47c8b6caa08e-cert podName:2ec76c17-6475-4349-8aaa-47c8b6caa08e nodeName:}" failed. No retries permitted until 2025-11-22 10:56:16.303902692 +0000 UTC m=+1096.543347196 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2ec76c17-6475-4349-8aaa-47c8b6caa08e-cert") pod "openstack-operator-controller-manager-7f4bc68b84-vxcp7" (UID: "2ec76c17-6475-4349-8aaa-47c8b6caa08e") : secret "webhook-server-cert" not found Nov 22 10:56:15 crc kubenswrapper[4772]: E1122 10:56:15.804371 4772 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 22 10:56:15 crc kubenswrapper[4772]: E1122 10:56:15.804400 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed653c9f-8aac-4989-bc46-169893057f90-cert podName:ed653c9f-8aac-4989-bc46-169893057f90 nodeName:}" failed. No retries permitted until 2025-11-22 10:56:16.804390794 +0000 UTC m=+1097.043835288 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ed653c9f-8aac-4989-bc46-169893057f90-cert") pod "openstack-baremetal-operator-controller-manager-77868f484-nn7dj" (UID: "ed653c9f-8aac-4989-bc46-169893057f90") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.815906 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-695797c565-wr5mj" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.833440 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdpzt\" (UniqueName: \"kubernetes.io/projected/2ec76c17-6475-4349-8aaa-47c8b6caa08e-kube-api-access-hdpzt\") pod \"openstack-operator-controller-manager-7f4bc68b84-vxcp7\" (UID: \"2ec76c17-6475-4349-8aaa-47c8b6caa08e\") " pod="openstack-operators/openstack-operator-controller-manager-7f4bc68b84-vxcp7" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.846650 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-698d6fd7d6-x46c7"] Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.851296 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-77db6bf9c-2d4vx" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.860777 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64dg2\" (UniqueName: \"kubernetes.io/projected/e7a30218-add6-4170-948c-b6b9f8b960c8-kube-api-access-64dg2\") pod \"watcher-operator-controller-manager-6b56b8849f-wblft\" (UID: \"e7a30218-add6-4170-948c-b6b9f8b960c8\") " pod="openstack-operators/watcher-operator-controller-manager-6b56b8849f-wblft" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.873804 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-6f95d84fd6-bwjrb"] Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.875188 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-6c55d8d69b-w8q6b" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.904808 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwgv7\" (UniqueName: \"kubernetes.io/projected/7823e2ab-1a7a-4a3f-9749-04c705f4336e-kube-api-access-xwgv7\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-2s8qw\" (UID: \"7823e2ab-1a7a-4a3f-9749-04c705f4336e\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-2s8qw" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.932373 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwgv7\" (UniqueName: \"kubernetes.io/projected/7823e2ab-1a7a-4a3f-9749-04c705f4336e-kube-api-access-xwgv7\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-2s8qw\" (UID: \"7823e2ab-1a7a-4a3f-9749-04c705f4336e\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-2s8qw" Nov 22 10:56:15 crc kubenswrapper[4772]: I1122 10:56:15.980020 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6b56b8849f-wblft" Nov 22 10:56:16 crc kubenswrapper[4772]: I1122 10:56:16.187456 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-2s8qw" Nov 22 10:56:16 crc kubenswrapper[4772]: I1122 10:56:16.309905 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ec76c17-6475-4349-8aaa-47c8b6caa08e-cert\") pod \"openstack-operator-controller-manager-7f4bc68b84-vxcp7\" (UID: \"2ec76c17-6475-4349-8aaa-47c8b6caa08e\") " pod="openstack-operators/openstack-operator-controller-manager-7f4bc68b84-vxcp7" Nov 22 10:56:16 crc kubenswrapper[4772]: I1122 10:56:16.330969 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ec76c17-6475-4349-8aaa-47c8b6caa08e-cert\") pod \"openstack-operator-controller-manager-7f4bc68b84-vxcp7\" (UID: \"2ec76c17-6475-4349-8aaa-47c8b6caa08e\") " pod="openstack-operators/openstack-operator-controller-manager-7f4bc68b84-vxcp7" Nov 22 10:56:16 crc kubenswrapper[4772]: I1122 10:56:16.379896 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7f4bc68b84-vxcp7" Nov 22 10:56:16 crc kubenswrapper[4772]: I1122 10:56:16.548161 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-79cc9d59f5-rvc7z"] Nov 22 10:56:16 crc kubenswrapper[4772]: W1122 10:56:16.553576 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8044f4e1_088e_4e18_a9c4_a35265e4b62a.slice/crio-2f520112ff29a50eebf962604a1fc14fc10b2279422a3a779b6b716be1307526 WatchSource:0}: Error finding container 2f520112ff29a50eebf962604a1fc14fc10b2279422a3a779b6b716be1307526: Status 404 returned error can't find the container with id 2f520112ff29a50eebf962604a1fc14fc10b2279422a3a779b6b716be1307526 Nov 22 10:56:16 crc kubenswrapper[4772]: I1122 10:56:16.554977 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-5bfbbb859d-bjmld"] Nov 22 10:56:16 crc kubenswrapper[4772]: I1122 10:56:16.574764 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-54485f899-nhqm7"] Nov 22 10:56:16 crc kubenswrapper[4772]: I1122 10:56:16.581775 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-64d7c556cd-8772n"] Nov 22 10:56:16 crc kubenswrapper[4772]: I1122 10:56:16.601596 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-7d5d9fd47f-xk9c8"] Nov 22 10:56:16 crc kubenswrapper[4772]: I1122 10:56:16.612221 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-7r9qs"] Nov 22 10:56:16 crc kubenswrapper[4772]: I1122 10:56:16.615552 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-5bfbbb859d-bjmld" event={"ID":"19381059-85ea-461e-baca-f0f511fdb677","Type":"ContainerStarted","Data":"1982edf93572432b4e8358ed8fd34cbb90521e69a84514ed803a25de0f1db0fc"} Nov 22 10:56:16 crc kubenswrapper[4772]: I1122 10:56:16.636376 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-54485f899-nhqm7" event={"ID":"66c75ee1-78ca-448b-a2f0-0946014f82ff","Type":"ContainerStarted","Data":"55d9cda022a97ed01efaf8c7e7fcd5e8b570673eb3eabb98d6cdaf762869490e"} Nov 22 10:56:16 crc kubenswrapper[4772]: I1122 10:56:16.641838 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-867d87977b-jxm8x"] Nov 22 10:56:16 crc kubenswrapper[4772]: I1122 10:56:16.642192 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-7r9qs" event={"ID":"db103267-50bd-4819-a33e-90a787ddb249","Type":"ContainerStarted","Data":"317a2f29207aad84fb3bef2e27e86edda7caee19b74113d72da1de9c22a2ba7f"} Nov 22 10:56:16 crc kubenswrapper[4772]: I1122 10:56:16.645348 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-7d5d9fd47f-xk9c8" event={"ID":"ede58bb8-1f1d-4637-a89a-5075266ea932","Type":"ContainerStarted","Data":"a18534f8bc6ee741b247382835718086b92eae841fc95d73823f9801043c46dd"} Nov 22 10:56:16 crc kubenswrapper[4772]: I1122 10:56:16.646682 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/neutron-operator-controller-manager-58879495c-wwt28"] Nov 22 10:56:16 crc kubenswrapper[4772]: I1122 10:56:16.652766 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-698d6fd7d6-x46c7" event={"ID":"2abf13f1-7bd8-4ea6-85a3-5c5658de7f48","Type":"ContainerStarted","Data":"b2bd77395fd57c5e180a9cfabec68c8b994f16fd0988d6a22b61d16555d65959"} Nov 22 10:56:16 crc kubenswrapper[4772]: I1122 10:56:16.655411 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-748967c98-kx6h5" event={"ID":"8085093b-cd6b-4ef5-9935-82eb224499c2","Type":"ContainerStarted","Data":"8ec3677cd8e010d19c8f0853cac379dcb88d4c4590475bb87bbed809b72e3a43"} Nov 22 10:56:16 crc kubenswrapper[4772]: W1122 10:56:16.661060 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58546700_44a8_45ad_bbbc_1ee40a090fd7.slice/crio-4a08e1506f5d4fb9f61766a70d38dd93572d379228e70c4a02488cc80b8b3cff WatchSource:0}: Error finding container 4a08e1506f5d4fb9f61766a70d38dd93572d379228e70c4a02488cc80b8b3cff: Status 404 returned error can't find the container with id 4a08e1506f5d4fb9f61766a70d38dd93572d379228e70c4a02488cc80b8b3cff Nov 22 10:56:16 crc kubenswrapper[4772]: I1122 10:56:16.661110 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-6c55d8d69b-w8q6b"] Nov 22 10:56:16 crc kubenswrapper[4772]: I1122 10:56:16.666193 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-695797c565-wr5mj"] Nov 22 10:56:16 crc kubenswrapper[4772]: I1122 10:56:16.666746 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-6f95d84fd6-bwjrb" event={"ID":"b90351ff-d9b5-4d42-b7ef-915a5bd4251d","Type":"ContainerStarted","Data":"e59876f7a491a26063c3f88be03d4861b82511b2815b3a8ab07528f092cb79ef"} Nov 22 10:56:16 crc kubenswrapper[4772]: W1122 10:56:16.668799 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89c6d87d_3d72_43b9_a56e_bb94322cc856.slice/crio-450ca3ff6ca457905d476aa1301ad4a2c67281f3c40f1252b9e0b8b9a56f7ce8 WatchSource:0}: Error finding container 450ca3ff6ca457905d476aa1301ad4a2c67281f3c40f1252b9e0b8b9a56f7ce8: Status 404 returned error can't find the container with id 450ca3ff6ca457905d476aa1301ad4a2c67281f3c40f1252b9e0b8b9a56f7ce8 Nov 22 10:56:16 crc kubenswrapper[4772]: I1122 10:56:16.670673 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-8772n" event={"ID":"09182dc6-4ee5-4ad8-9298-f13a7037ac9b","Type":"ContainerStarted","Data":"a8a472f5d8b339b6dcded33af8b17769907db1b087f8b0942693543278ef6aac"} Nov 22 10:56:16 crc kubenswrapper[4772]: I1122 10:56:16.670743 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6788cc6d75-9bnwv"] Nov 22 10:56:16 crc kubenswrapper[4772]: I1122 10:56:16.672855 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-79cc9d59f5-rvc7z" event={"ID":"8044f4e1-088e-4e18-a9c4-a35265e4b62a","Type":"ContainerStarted","Data":"2f520112ff29a50eebf962604a1fc14fc10b2279422a3a779b6b716be1307526"} Nov 22 10:56:16 crc kubenswrapper[4772]: E1122 
10:56:16.675405 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:8aaaf8bb0a81358ee196af922d534c9b3f6bb47b27f4283087f7e0254638a671,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bwmf9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-6788cc6d75-9bnwv_openstack-operators(89c6d87d-3d72-43b9-a56e-bb94322cc856): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 22 10:56:16 crc kubenswrapper[4772]: I1122 10:56:16.808663 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-2s8qw"] Nov 22 10:56:16 crc kubenswrapper[4772]: I1122 10:56:16.819106 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ed653c9f-8aac-4989-bc46-169893057f90-cert\") pod \"openstack-baremetal-operator-controller-manager-77868f484-nn7dj\" (UID: \"ed653c9f-8aac-4989-bc46-169893057f90\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-nn7dj" Nov 22 10:56:16 crc kubenswrapper[4772]: W1122 10:56:16.824295 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7823e2ab_1a7a_4a3f_9749_04c705f4336e.slice/crio-8688597ac0fab66f99309c013e02d7a4ae0bb6e7a6a1e0a1934b675418f3b84e WatchSource:0}: Error finding container 8688597ac0fab66f99309c013e02d7a4ae0bb6e7a6a1e0a1934b675418f3b84e: Status 404 returned error can't find the 
container with id 8688597ac0fab66f99309c013e02d7a4ae0bb6e7a6a1e0a1934b675418f3b84e Nov 22 10:56:16 crc kubenswrapper[4772]: I1122 10:56:16.835516 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ed653c9f-8aac-4989-bc46-169893057f90-cert\") pod \"openstack-baremetal-operator-controller-manager-77868f484-nn7dj\" (UID: \"ed653c9f-8aac-4989-bc46-169893057f90\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-nn7dj" Nov 22 10:56:16 crc kubenswrapper[4772]: W1122 10:56:16.835628 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc5e47a3_441e_4076_b726_99bb8cd36d95.slice/crio-c2620fbbfa4ae3585cd8f6caa8ce31fb65319e4c7edaa909b90461a8a313bf72 WatchSource:0}: Error finding container c2620fbbfa4ae3585cd8f6caa8ce31fb65319e4c7edaa909b90461a8a313bf72: Status 404 returned error can't find the container with id c2620fbbfa4ae3585cd8f6caa8ce31fb65319e4c7edaa909b90461a8a313bf72 Nov 22 10:56:16 crc kubenswrapper[4772]: I1122 10:56:16.841470 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-646fd589f9-jpzwd"] Nov 22 10:56:16 crc kubenswrapper[4772]: E1122 10:56:16.842349 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:d1fab4998e5f0faf94295eeaebfbf6801921d50497fbfc5331a888b207831486,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-687rk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-646fd589f9-jpzwd_openstack-operators(bc5e47a3-441e-4076-b726-99bb8cd36d95): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 22 10:56:16 crc kubenswrapper[4772]: I1122 10:56:16.851453 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-d5fb87cb8-v8z2p"] Nov 22 10:56:16 crc kubenswrapper[4772]: W1122 10:56:16.852795 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f92807d_a811_4354_a9b3_4efe75db8096.slice/crio-eaa010c490f895f61e52c5dc98cff8cdf392ebbb0f7c371c7ce24fe4ac8562b7 WatchSource:0}: Error finding container eaa010c490f895f61e52c5dc98cff8cdf392ebbb0f7c371c7ce24fe4ac8562b7: Status 404 returned error can't find the container with id eaa010c490f895f61e52c5dc98cff8cdf392ebbb0f7c371c7ce24fe4ac8562b7 Nov 22 10:56:16 crc kubenswrapper[4772]: I1122 10:56:16.856269 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-77db6bf9c-2d4vx"] Nov 22 10:56:16 crc kubenswrapper[4772]: E1122 10:56:16.858671 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:5245e851b4476baecd4173eca3e8669ac09ec69d36ad1ebc3a0f867713cbc14b,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8w9gf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-d5fb87cb8-v8z2p_openstack-operators(7f92807d-a811-4354-a9b3-4efe75db8096): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 22 10:56:16 crc kubenswrapper[4772]: I1122 10:56:16.862740 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6b56b8849f-wblft"] Nov 22 10:56:16 crc kubenswrapper[4772]: E1122 10:56:16.864642 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:624b77b1b44f5e72a6c7d5910b04eb8070c499f83dcf364fb9dc5f2f8cb83c85,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b6gb8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
test-operator-controller-manager-77db6bf9c-2d4vx_openstack-operators(bc363239-f347-4379-8fb3-e499b555a263): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 22 10:56:16 crc kubenswrapper[4772]: I1122 10:56:16.869074 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-79d658b66d-cgk6m"] Nov 22 10:56:16 crc kubenswrapper[4772]: I1122 10:56:16.873752 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-8f6687c44-zv2pz"] Nov 22 10:56:16 crc kubenswrapper[4772]: E1122 10:56:16.875875 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:debe5d6d29a007374b270b0e114e69b2136eee61dabab8576baf4010c951edb9,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g8dsf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-79d658b66d-cgk6m_openstack-operators(1d228e15-a43a-4b2a-b8a5-958a6ce484a7): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 22 10:56:16 crc kubenswrapper[4772]: E1122 10:56:16.876494 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:f076b8d9e85881d9c3cb5272b13db7f5e05d2e9da884c17b677a844112831907,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 
--leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-scpzs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-8f6687c44-zv2pz_openstack-operators(0e43cbd9-c19e-4747-b833-39529cfa3d9d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 22 10:56:16 crc kubenswrapper[4772]: E1122 10:56:16.876566 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:1988aaf9cd245150cda123aaaa21718ccb552c47f1623b7d68804f13c47f2c6a,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-64dg2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-6b56b8849f-wblft_openstack-operators(e7a30218-add6-4170-948c-b6b9f8b960c8): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 22 10:56:16 crc kubenswrapper[4772]: I1122 10:56:16.958631 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-nn7dj" Nov 22 10:56:16 crc kubenswrapper[4772]: I1122 10:56:16.997535 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7f4bc68b84-vxcp7"] Nov 22 10:56:17 crc kubenswrapper[4772]: I1122 10:56:17.214228 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-nn7dj"] Nov 22 10:56:17 crc kubenswrapper[4772]: E1122 10:56:17.256660 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-79d658b66d-cgk6m" podUID="1d228e15-a43a-4b2a-b8a5-958a6ce484a7" Nov 22 10:56:17 crc kubenswrapper[4772]: E1122 10:56:17.256800 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-6b56b8849f-wblft" podUID="e7a30218-add6-4170-948c-b6b9f8b960c8" Nov 22 10:56:17 crc kubenswrapper[4772]: E1122 10:56:17.257998 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-77db6bf9c-2d4vx" podUID="bc363239-f347-4379-8fb3-e499b555a263" Nov 22 10:56:17 crc kubenswrapper[4772]: E1122 10:56:17.258066 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/designate-operator-controller-manager-6788cc6d75-9bnwv" podUID="89c6d87d-3d72-43b9-a56e-bb94322cc856" Nov 22 10:56:17 crc kubenswrapper[4772]: E1122 10:56:17.258258 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/manila-operator-controller-manager-646fd589f9-jpzwd" 
podUID="bc5e47a3-441e-4076-b726-99bb8cd36d95" Nov 22 10:56:17 crc kubenswrapper[4772]: E1122 10:56:17.258390 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-d5fb87cb8-v8z2p" podUID="7f92807d-a811-4354-a9b3-4efe75db8096" Nov 22 10:56:17 crc kubenswrapper[4772]: E1122 10:56:17.280238 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-8f6687c44-zv2pz" podUID="0e43cbd9-c19e-4747-b833-39529cfa3d9d" Nov 22 10:56:17 crc kubenswrapper[4772]: I1122 10:56:17.708899 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-8f6687c44-zv2pz" event={"ID":"0e43cbd9-c19e-4747-b833-39529cfa3d9d","Type":"ContainerStarted","Data":"e084915c010c84546acf5d3517bf5710fa2b7e70ae9327ae492b7b0f02575b4f"} Nov 22 10:56:17 crc kubenswrapper[4772]: I1122 10:56:17.709279 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-8f6687c44-zv2pz" event={"ID":"0e43cbd9-c19e-4747-b833-39529cfa3d9d","Type":"ContainerStarted","Data":"5dcad4f01baaf78f4f352bc6fe4b0b169386487ead12336545a8024550b79346"} Nov 22 10:56:17 crc kubenswrapper[4772]: E1122 10:56:17.711370 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:f076b8d9e85881d9c3cb5272b13db7f5e05d2e9da884c17b677a844112831907\\\"\"" pod="openstack-operators/swift-operator-controller-manager-8f6687c44-zv2pz" podUID="0e43cbd9-c19e-4747-b833-39529cfa3d9d" Nov 22 10:56:17 crc kubenswrapper[4772]: I1122 10:56:17.747670 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-646fd589f9-jpzwd" event={"ID":"bc5e47a3-441e-4076-b726-99bb8cd36d95","Type":"ContainerStarted","Data":"faa56986ede73e2f84397a3ca1fb092096de2d27234e9d069fddcd642c91b9ae"} Nov 22 10:56:17 crc kubenswrapper[4772]: I1122 10:56:17.747721 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-646fd589f9-jpzwd" event={"ID":"bc5e47a3-441e-4076-b726-99bb8cd36d95","Type":"ContainerStarted","Data":"c2620fbbfa4ae3585cd8f6caa8ce31fb65319e4c7edaa909b90461a8a313bf72"} Nov 22 10:56:17 crc kubenswrapper[4772]: E1122 10:56:17.759302 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:d1fab4998e5f0faf94295eeaebfbf6801921d50497fbfc5331a888b207831486\\\"\"" pod="openstack-operators/manila-operator-controller-manager-646fd589f9-jpzwd" podUID="bc5e47a3-441e-4076-b726-99bb8cd36d95" Nov 22 10:56:17 crc kubenswrapper[4772]: I1122 10:56:17.763247 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79d658b66d-cgk6m" event={"ID":"1d228e15-a43a-4b2a-b8a5-958a6ce484a7","Type":"ContainerStarted","Data":"a40f19392ffc45308049250686c676ab395367bc9bb68588c584a772516d50d4"} Nov 22 10:56:17 crc kubenswrapper[4772]: I1122 10:56:17.763289 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/nova-operator-controller-manager-79d658b66d-cgk6m" event={"ID":"1d228e15-a43a-4b2a-b8a5-958a6ce484a7","Type":"ContainerStarted","Data":"52fd9b3d3e825444f49c4b5f73daec18ce0acdc1f32f1e4970d37f9ebd20d4bd"} Nov 22 10:56:17 crc kubenswrapper[4772]: E1122 10:56:17.768152 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:debe5d6d29a007374b270b0e114e69b2136eee61dabab8576baf4010c951edb9\\\"\"" pod="openstack-operators/nova-operator-controller-manager-79d658b66d-cgk6m" podUID="1d228e15-a43a-4b2a-b8a5-958a6ce484a7" Nov 22 10:56:17 crc kubenswrapper[4772]: I1122 10:56:17.777668 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6b56b8849f-wblft" event={"ID":"e7a30218-add6-4170-948c-b6b9f8b960c8","Type":"ContainerStarted","Data":"c684a01c24fa9cf192ae21b7159a4c3a01c1d16a18afe1365d0a1ce369bec702"} Nov 22 10:56:17 crc kubenswrapper[4772]: I1122 10:56:17.777745 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6b56b8849f-wblft" event={"ID":"e7a30218-add6-4170-948c-b6b9f8b960c8","Type":"ContainerStarted","Data":"02f396fb3017dd073397e4ba8f8f350808be318df1dd40b66dd0d9b0078f2e08"} Nov 22 10:56:17 crc kubenswrapper[4772]: E1122 10:56:17.780593 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:1988aaf9cd245150cda123aaaa21718ccb552c47f1623b7d68804f13c47f2c6a\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-6b56b8849f-wblft" podUID="e7a30218-add6-4170-948c-b6b9f8b960c8" Nov 22 10:56:17 crc kubenswrapper[4772]: I1122 10:56:17.805804 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-58879495c-wwt28" event={"ID":"58546700-44a8-45ad-bbbc-1ee40a090fd7","Type":"ContainerStarted","Data":"4a08e1506f5d4fb9f61766a70d38dd93572d379228e70c4a02488cc80b8b3cff"} Nov 22 10:56:17 crc kubenswrapper[4772]: I1122 10:56:17.846630 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-nn7dj" event={"ID":"ed653c9f-8aac-4989-bc46-169893057f90","Type":"ContainerStarted","Data":"cb9eaaff93636676ea7dcf9942e5d1f4c0aa29fb952a9a748539631276252ab0"} Nov 22 10:56:17 crc kubenswrapper[4772]: I1122 10:56:17.853987 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6788cc6d75-9bnwv" event={"ID":"89c6d87d-3d72-43b9-a56e-bb94322cc856","Type":"ContainerStarted","Data":"9c09b07e67b257f4692840bf404ab3e33fe84fc736f28316c717d31520197d00"} Nov 22 10:56:17 crc kubenswrapper[4772]: I1122 10:56:17.854040 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6788cc6d75-9bnwv" event={"ID":"89c6d87d-3d72-43b9-a56e-bb94322cc856","Type":"ContainerStarted","Data":"450ca3ff6ca457905d476aa1301ad4a2c67281f3c40f1252b9e0b8b9a56f7ce8"} Nov 22 10:56:17 crc kubenswrapper[4772]: E1122 10:56:17.855958 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/designate-operator@sha256:8aaaf8bb0a81358ee196af922d534c9b3f6bb47b27f4283087f7e0254638a671\\\"\"" pod="openstack-operators/designate-operator-controller-manager-6788cc6d75-9bnwv" podUID="89c6d87d-3d72-43b9-a56e-bb94322cc856" Nov 22 10:56:17 crc kubenswrapper[4772]: I1122 10:56:17.858904 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-77db6bf9c-2d4vx" event={"ID":"bc363239-f347-4379-8fb3-e499b555a263","Type":"ContainerStarted","Data":"c182a250969c8441e0d2c4f9b5b51ad3696935ca958d99bb9a47733f2c866fa2"} Nov 22 10:56:17 crc kubenswrapper[4772]: I1122 10:56:17.858958 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-77db6bf9c-2d4vx" event={"ID":"bc363239-f347-4379-8fb3-e499b555a263","Type":"ContainerStarted","Data":"25eda5c87870446e09abd836b160f24e44529e4d6dae2cf63eb97bd60fc92450"} Nov 22 10:56:17 crc kubenswrapper[4772]: E1122 10:56:17.861419 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:624b77b1b44f5e72a6c7d5910b04eb8070c499f83dcf364fb9dc5f2f8cb83c85\\\"\"" pod="openstack-operators/test-operator-controller-manager-77db6bf9c-2d4vx" podUID="bc363239-f347-4379-8fb3-e499b555a263" Nov 22 10:56:17 crc kubenswrapper[4772]: I1122 10:56:17.883173 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-d5fb87cb8-v8z2p" event={"ID":"7f92807d-a811-4354-a9b3-4efe75db8096","Type":"ContainerStarted","Data":"2bf679f806cec23e4efd60445160f67a9cbf7dc04e4537fa91c1e4e133bd534a"} Nov 22 10:56:17 crc kubenswrapper[4772]: I1122 10:56:17.883219 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-d5fb87cb8-v8z2p" event={"ID":"7f92807d-a811-4354-a9b3-4efe75db8096","Type":"ContainerStarted","Data":"eaa010c490f895f61e52c5dc98cff8cdf392ebbb0f7c371c7ce24fe4ac8562b7"} Nov 22 10:56:17 crc kubenswrapper[4772]: E1122 10:56:17.884926 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:5245e851b4476baecd4173eca3e8669ac09ec69d36ad1ebc3a0f867713cbc14b\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-d5fb87cb8-v8z2p" podUID="7f92807d-a811-4354-a9b3-4efe75db8096" Nov 22 10:56:17 crc kubenswrapper[4772]: I1122 10:56:17.915490 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7f4bc68b84-vxcp7" event={"ID":"2ec76c17-6475-4349-8aaa-47c8b6caa08e","Type":"ContainerStarted","Data":"7735b4f1e24a8e68808738001213afc8279e592e6c9396dfa3527aa21b1020b8"} Nov 22 10:56:17 crc kubenswrapper[4772]: I1122 10:56:17.915544 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7f4bc68b84-vxcp7" event={"ID":"2ec76c17-6475-4349-8aaa-47c8b6caa08e","Type":"ContainerStarted","Data":"2571917ec301c5ae63f4bc373608c42dea3a31ed59114ddacb173ddf031fe492"} Nov 22 10:56:17 crc kubenswrapper[4772]: I1122 10:56:17.915555 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7f4bc68b84-vxcp7" 
event={"ID":"2ec76c17-6475-4349-8aaa-47c8b6caa08e","Type":"ContainerStarted","Data":"8e575a33abc2ced374c4f2f563241da111eec0c3149e778fcff0223d5225006a"} Nov 22 10:56:17 crc kubenswrapper[4772]: I1122 10:56:17.916486 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-7f4bc68b84-vxcp7" Nov 22 10:56:17 crc kubenswrapper[4772]: I1122 10:56:17.934933 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-867d87977b-jxm8x" event={"ID":"1037af09-c926-409c-9732-26cf293cc210","Type":"ContainerStarted","Data":"0da3948ab9a9720a83f290a9bb6b55ff8fe7b006b8bed83c10a8ae3f85ab8858"} Nov 22 10:56:17 crc kubenswrapper[4772]: I1122 10:56:17.937935 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-2s8qw" event={"ID":"7823e2ab-1a7a-4a3f-9749-04c705f4336e","Type":"ContainerStarted","Data":"8688597ac0fab66f99309c013e02d7a4ae0bb6e7a6a1e0a1934b675418f3b84e"} Nov 22 10:56:17 crc kubenswrapper[4772]: I1122 10:56:17.939714 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-695797c565-wr5mj" event={"ID":"8456f23d-dc5a-4ddf-b853-46fcc56593e8","Type":"ContainerStarted","Data":"a3009b087554e46cd7c1c2876af8a781275104162481b34809003b2b11453acb"} Nov 22 10:56:17 crc kubenswrapper[4772]: I1122 10:56:17.941300 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6c55d8d69b-w8q6b" event={"ID":"7d08d7e3-35a3-4dd2-b1c5-50980b9c20a9","Type":"ContainerStarted","Data":"2d8a27f8cd8e0a79e9491ad7f6521723655883c3099fcb2a1d200da664a678d5"} Nov 22 10:56:18 crc kubenswrapper[4772]: E1122 10:56:18.956471 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:d1fab4998e5f0faf94295eeaebfbf6801921d50497fbfc5331a888b207831486\\\"\"" pod="openstack-operators/manila-operator-controller-manager-646fd589f9-jpzwd" podUID="bc5e47a3-441e-4076-b726-99bb8cd36d95" Nov 22 10:56:18 crc kubenswrapper[4772]: E1122 10:56:18.957108 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:debe5d6d29a007374b270b0e114e69b2136eee61dabab8576baf4010c951edb9\\\"\"" pod="openstack-operators/nova-operator-controller-manager-79d658b66d-cgk6m" podUID="1d228e15-a43a-4b2a-b8a5-958a6ce484a7" Nov 22 10:56:18 crc kubenswrapper[4772]: E1122 10:56:18.957198 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:624b77b1b44f5e72a6c7d5910b04eb8070c499f83dcf364fb9dc5f2f8cb83c85\\\"\"" pod="openstack-operators/test-operator-controller-manager-77db6bf9c-2d4vx" podUID="bc363239-f347-4379-8fb3-e499b555a263" Nov 22 10:56:18 crc kubenswrapper[4772]: E1122 10:56:18.963020 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:5245e851b4476baecd4173eca3e8669ac09ec69d36ad1ebc3a0f867713cbc14b\\\"\"" 
pod="openstack-operators/octavia-operator-controller-manager-d5fb87cb8-v8z2p" podUID="7f92807d-a811-4354-a9b3-4efe75db8096" Nov 22 10:56:18 crc kubenswrapper[4772]: E1122 10:56:18.963320 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:f076b8d9e85881d9c3cb5272b13db7f5e05d2e9da884c17b677a844112831907\\\"\"" pod="openstack-operators/swift-operator-controller-manager-8f6687c44-zv2pz" podUID="0e43cbd9-c19e-4747-b833-39529cfa3d9d" Nov 22 10:56:18 crc kubenswrapper[4772]: E1122 10:56:18.963339 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:1988aaf9cd245150cda123aaaa21718ccb552c47f1623b7d68804f13c47f2c6a\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-6b56b8849f-wblft" podUID="e7a30218-add6-4170-948c-b6b9f8b960c8" Nov 22 10:56:18 crc kubenswrapper[4772]: E1122 10:56:18.963416 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:8aaaf8bb0a81358ee196af922d534c9b3f6bb47b27f4283087f7e0254638a671\\\"\"" pod="openstack-operators/designate-operator-controller-manager-6788cc6d75-9bnwv" podUID="89c6d87d-3d72-43b9-a56e-bb94322cc856" Nov 22 10:56:18 crc kubenswrapper[4772]: I1122 10:56:18.970868 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-7f4bc68b84-vxcp7" podStartSLOduration=3.9708460949999997 podStartE2EDuration="3.970846095s" podCreationTimestamp="2025-11-22 10:56:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:56:17.96687193 +0000 UTC m=+1098.206316424" watchObservedRunningTime="2025-11-22 10:56:18.970846095 +0000 UTC m=+1099.210290579" Nov 22 10:56:26 crc kubenswrapper[4772]: I1122 10:56:26.386793 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-7f4bc68b84-vxcp7" Nov 22 10:56:29 crc kubenswrapper[4772]: I1122 10:56:29.027785 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-79cc9d59f5-rvc7z" event={"ID":"8044f4e1-088e-4e18-a9c4-a35265e4b62a","Type":"ContainerStarted","Data":"1255a774de6a7ef0dd18c5ab5408b56fc279a128a0478af10967bce40d0ed783"} Nov 22 10:56:29 crc kubenswrapper[4772]: I1122 10:56:29.043353 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-nn7dj" event={"ID":"ed653c9f-8aac-4989-bc46-169893057f90","Type":"ContainerStarted","Data":"058cb230897d598549ee1ee3d0ee6c59218d5282537bf25bf953c5bc5ca948a0"} Nov 22 10:56:29 crc kubenswrapper[4772]: I1122 10:56:29.051715 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-748967c98-kx6h5" event={"ID":"8085093b-cd6b-4ef5-9935-82eb224499c2","Type":"ContainerStarted","Data":"fd3922062f2f823ad9adf6e8c718dd613c631d124667db535b77ae2fe7c5998e"} Nov 22 10:56:29 crc kubenswrapper[4772]: I1122 10:56:29.059606 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-8772n" event={"ID":"09182dc6-4ee5-4ad8-9298-f13a7037ac9b","Type":"ContainerStarted","Data":"7c0041922316637e59d82b12bfd4e64ee0cfe5aec4a054a1ae54112c5a8edf6d"} Nov 22 10:56:29 crc kubenswrapper[4772]: I1122 10:56:29.071096 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-695797c565-wr5mj" event={"ID":"8456f23d-dc5a-4ddf-b853-46fcc56593e8","Type":"ContainerStarted","Data":"55724e793a495669068405cc288fca47198dd06487f63d60a0e1959a9ace3be0"} Nov 22 10:56:29 crc kubenswrapper[4772]: I1122 10:56:29.081310 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-698d6fd7d6-x46c7" event={"ID":"2abf13f1-7bd8-4ea6-85a3-5c5658de7f48","Type":"ContainerStarted","Data":"cbc003754262e9e45c205eb8ed7ff0aabfdc404e05da3949c80abec7a2eff16b"} Nov 22 10:56:29 crc kubenswrapper[4772]: I1122 10:56:29.091023 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-5bfbbb859d-bjmld" event={"ID":"19381059-85ea-461e-baca-f0f511fdb677","Type":"ContainerStarted","Data":"d902f7600cc1528792dc5964324b1bfb0a38bd48d9d0a8b9d605b67de883497f"} Nov 22 10:56:29 crc kubenswrapper[4772]: I1122 10:56:29.109698 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-7r9qs" event={"ID":"db103267-50bd-4819-a33e-90a787ddb249","Type":"ContainerStarted","Data":"bfe8dde87fc80868f40587c7ecbc19cc9bab6ae477a59f2aefba9e1a7d1c80fe"} Nov 22 10:56:29 crc kubenswrapper[4772]: I1122 10:56:29.115771 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-2s8qw" event={"ID":"7823e2ab-1a7a-4a3f-9749-04c705f4336e","Type":"ContainerStarted","Data":"915e287fb3b1576288bb4ec5fbf6f04f1ebb47b89b48299b483553fe03f03f7f"} Nov 22 10:56:29 crc kubenswrapper[4772]: I1122 10:56:29.133643 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-7d5d9fd47f-xk9c8" event={"ID":"ede58bb8-1f1d-4637-a89a-5075266ea932","Type":"ContainerStarted","Data":"e32fe14859147e2ede584ca9babc2a051a52b3944af835a48fb7072aec00e5aa"} Nov 22 10:56:29 crc kubenswrapper[4772]: I1122 10:56:29.141146 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-58879495c-wwt28" event={"ID":"58546700-44a8-45ad-bbbc-1ee40a090fd7","Type":"ContainerStarted","Data":"9c17c6a19ba24373d79de56faab0eecaee63686cc2d66a3c7752b6cc34d0ef18"} Nov 22 10:56:29 crc kubenswrapper[4772]: I1122 10:56:29.142228 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-2s8qw" podStartSLOduration=2.444322376 podStartE2EDuration="14.142210183s" podCreationTimestamp="2025-11-22 10:56:15 +0000 UTC" firstStartedPulling="2025-11-22 10:56:16.83888406 +0000 UTC m=+1097.078328554" lastFinishedPulling="2025-11-22 10:56:28.536771867 +0000 UTC m=+1108.776216361" observedRunningTime="2025-11-22 10:56:29.139939515 +0000 UTC m=+1109.379384019" watchObservedRunningTime="2025-11-22 10:56:29.142210183 +0000 UTC m=+1109.381654677" Nov 22 10:56:29 crc kubenswrapper[4772]: I1122 10:56:29.155307 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-6f95d84fd6-bwjrb" 
event={"ID":"b90351ff-d9b5-4d42-b7ef-915a5bd4251d","Type":"ContainerStarted","Data":"c4018d4f48d21e88f59d05f09c8c3c40cd95c2b926beaac5b3c46d959d587dac"} Nov 22 10:56:30 crc kubenswrapper[4772]: I1122 10:56:30.186384 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-nn7dj" event={"ID":"ed653c9f-8aac-4989-bc46-169893057f90","Type":"ContainerStarted","Data":"14461bf0d667f41d6ec89cb6f0adc9a16d5f6ff57ba87ce2d57d3854867c7754"} Nov 22 10:56:30 crc kubenswrapper[4772]: I1122 10:56:30.186886 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-nn7dj" Nov 22 10:56:30 crc kubenswrapper[4772]: I1122 10:56:30.205344 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-7r9qs" event={"ID":"db103267-50bd-4819-a33e-90a787ddb249","Type":"ContainerStarted","Data":"ce099a4395a2e2b9ebf86ae15ca328c699e794fa6e215895c64c7f27007b8b43"} Nov 22 10:56:30 crc kubenswrapper[4772]: I1122 10:56:30.205996 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-7r9qs" Nov 22 10:56:30 crc kubenswrapper[4772]: I1122 10:56:30.207997 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-8772n" event={"ID":"09182dc6-4ee5-4ad8-9298-f13a7037ac9b","Type":"ContainerStarted","Data":"42eda226a114aef06944096a192dd41181f1cfe5d4d08fae34c1d037403df36d"} Nov 22 10:56:30 crc kubenswrapper[4772]: I1122 10:56:30.208354 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-8772n" Nov 22 10:56:30 crc kubenswrapper[4772]: I1122 10:56:30.211806 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-695797c565-wr5mj" event={"ID":"8456f23d-dc5a-4ddf-b853-46fcc56593e8","Type":"ContainerStarted","Data":"372fa66e4d2c9cffd8638b615b6cfb802bb7e97d52fcf0092d34bd958be9b15b"} Nov 22 10:56:30 crc kubenswrapper[4772]: I1122 10:56:30.212215 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-695797c565-wr5mj" Nov 22 10:56:30 crc kubenswrapper[4772]: I1122 10:56:30.213859 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-6f95d84fd6-bwjrb" event={"ID":"b90351ff-d9b5-4d42-b7ef-915a5bd4251d","Type":"ContainerStarted","Data":"10bb3a679a21fa18bac730545a379f931bb8c603c105eb871dfd2ab673fd7d47"} Nov 22 10:56:30 crc kubenswrapper[4772]: I1122 10:56:30.214242 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-6f95d84fd6-bwjrb" Nov 22 10:56:30 crc kubenswrapper[4772]: I1122 10:56:30.219882 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-54485f899-nhqm7" event={"ID":"66c75ee1-78ca-448b-a2f0-0946014f82ff","Type":"ContainerStarted","Data":"19f95ec4e7b471ce050aa130fe6b28b00fe2b7201e30b388c7124667a8a39d47"} Nov 22 10:56:30 crc kubenswrapper[4772]: I1122 10:56:30.219923 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-54485f899-nhqm7" 
event={"ID":"66c75ee1-78ca-448b-a2f0-0946014f82ff","Type":"ContainerStarted","Data":"da3c80bb54c2694f6d0ac960b3a563b4887c7029d237013575fb5bc90d099c6e"} Nov 22 10:56:30 crc kubenswrapper[4772]: I1122 10:56:30.220528 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-54485f899-nhqm7" Nov 22 10:56:30 crc kubenswrapper[4772]: I1122 10:56:30.223701 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-nn7dj" podStartSLOduration=5.006155359 podStartE2EDuration="16.223685432s" podCreationTimestamp="2025-11-22 10:56:14 +0000 UTC" firstStartedPulling="2025-11-22 10:56:17.23664366 +0000 UTC m=+1097.476088154" lastFinishedPulling="2025-11-22 10:56:28.454173733 +0000 UTC m=+1108.693618227" observedRunningTime="2025-11-22 10:56:30.220781567 +0000 UTC m=+1110.460226061" watchObservedRunningTime="2025-11-22 10:56:30.223685432 +0000 UTC m=+1110.463129916" Nov 22 10:56:30 crc kubenswrapper[4772]: I1122 10:56:30.231629 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-7d5d9fd47f-xk9c8" event={"ID":"ede58bb8-1f1d-4637-a89a-5075266ea932","Type":"ContainerStarted","Data":"0f638e3f2fdf8154fb4ebf8d2af9d9522acfbabf29424f149f40098d82daa2fa"} Nov 22 10:56:30 crc kubenswrapper[4772]: I1122 10:56:30.232300 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-7d5d9fd47f-xk9c8" Nov 22 10:56:30 crc kubenswrapper[4772]: I1122 10:56:30.234024 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-748967c98-kx6h5" event={"ID":"8085093b-cd6b-4ef5-9935-82eb224499c2","Type":"ContainerStarted","Data":"259d1ed4e3a4a528cb52eda22a8e2645d26413b82f27c9ebb6f30c96b0ac04c2"} Nov 22 10:56:30 crc kubenswrapper[4772]: I1122 10:56:30.234424 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-748967c98-kx6h5" Nov 22 10:56:30 crc kubenswrapper[4772]: I1122 10:56:30.236137 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-5bfbbb859d-bjmld" event={"ID":"19381059-85ea-461e-baca-f0f511fdb677","Type":"ContainerStarted","Data":"06a29981e3993692897bcdcce4d6ace663728d7bc13c0b9fce28b26d711006e5"} Nov 22 10:56:30 crc kubenswrapper[4772]: I1122 10:56:30.236540 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-5bfbbb859d-bjmld" Nov 22 10:56:30 crc kubenswrapper[4772]: I1122 10:56:30.239009 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-867d87977b-jxm8x" event={"ID":"1037af09-c926-409c-9732-26cf293cc210","Type":"ContainerStarted","Data":"8d6df7ee464467819dffcfa2592d0571f31f0472a68af81e650a167dd417910f"} Nov 22 10:56:30 crc kubenswrapper[4772]: I1122 10:56:30.239037 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-867d87977b-jxm8x" Nov 22 10:56:30 crc kubenswrapper[4772]: I1122 10:56:30.240559 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6c55d8d69b-w8q6b" 
event={"ID":"7d08d7e3-35a3-4dd2-b1c5-50980b9c20a9","Type":"ContainerStarted","Data":"17e5ef1220b92c0372e68c637451f8876a72bb5fbd3bfd80fd9fee8e58e2618a"} Nov 22 10:56:30 crc kubenswrapper[4772]: I1122 10:56:30.240586 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6c55d8d69b-w8q6b" event={"ID":"7d08d7e3-35a3-4dd2-b1c5-50980b9c20a9","Type":"ContainerStarted","Data":"94cb5f208c9dc92472d01d219f9d901ae6fead748c877120bcaf287cdf02dc2d"} Nov 22 10:56:30 crc kubenswrapper[4772]: I1122 10:56:30.240967 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-6c55d8d69b-w8q6b" Nov 22 10:56:30 crc kubenswrapper[4772]: I1122 10:56:30.243512 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-79cc9d59f5-rvc7z" event={"ID":"8044f4e1-088e-4e18-a9c4-a35265e4b62a","Type":"ContainerStarted","Data":"05d923f1dcb15baa99dfec85f519d20c536bd193afc144a6f7fbd18c9806e834"} Nov 22 10:56:30 crc kubenswrapper[4772]: I1122 10:56:30.244193 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-79cc9d59f5-rvc7z" Nov 22 10:56:30 crc kubenswrapper[4772]: I1122 10:56:30.247764 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-58879495c-wwt28" event={"ID":"58546700-44a8-45ad-bbbc-1ee40a090fd7","Type":"ContainerStarted","Data":"c65a926e6b8a289b14ff036de397c8cf4abc1122d30d59a34dc7ae8007ec8cd4"} Nov 22 10:56:30 crc kubenswrapper[4772]: I1122 10:56:30.248346 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-58879495c-wwt28" Nov 22 10:56:30 crc kubenswrapper[4772]: I1122 10:56:30.250507 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-698d6fd7d6-x46c7" event={"ID":"2abf13f1-7bd8-4ea6-85a3-5c5658de7f48","Type":"ContainerStarted","Data":"4a2c28d8c78ce6637e1a68c27645314ddb86adea4ef4b0b5b9f723edfc90b407"} Nov 22 10:56:30 crc kubenswrapper[4772]: I1122 10:56:30.250536 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-698d6fd7d6-x46c7" Nov 22 10:56:30 crc kubenswrapper[4772]: I1122 10:56:30.252736 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-8772n" podStartSLOduration=4.378370432 podStartE2EDuration="16.252716255s" podCreationTimestamp="2025-11-22 10:56:14 +0000 UTC" firstStartedPulling="2025-11-22 10:56:16.596821195 +0000 UTC m=+1096.836265689" lastFinishedPulling="2025-11-22 10:56:28.471167018 +0000 UTC m=+1108.710611512" observedRunningTime="2025-11-22 10:56:30.249617875 +0000 UTC m=+1110.489062369" watchObservedRunningTime="2025-11-22 10:56:30.252716255 +0000 UTC m=+1110.492160749" Nov 22 10:56:30 crc kubenswrapper[4772]: I1122 10:56:30.273928 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-54485f899-nhqm7" podStartSLOduration=4.386570621 podStartE2EDuration="16.273911087s" podCreationTimestamp="2025-11-22 10:56:14 +0000 UTC" firstStartedPulling="2025-11-22 10:56:16.584799957 +0000 UTC m=+1096.824244451" lastFinishedPulling="2025-11-22 10:56:28.472140423 +0000 UTC m=+1108.711584917" 
observedRunningTime="2025-11-22 10:56:30.273282341 +0000 UTC m=+1110.512726835" watchObservedRunningTime="2025-11-22 10:56:30.273911087 +0000 UTC m=+1110.513355581" Nov 22 10:56:30 crc kubenswrapper[4772]: I1122 10:56:30.303436 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-695797c565-wr5mj" podStartSLOduration=3.486866018 podStartE2EDuration="15.303419362s" podCreationTimestamp="2025-11-22 10:56:15 +0000 UTC" firstStartedPulling="2025-11-22 10:56:16.654382128 +0000 UTC m=+1096.893826622" lastFinishedPulling="2025-11-22 10:56:28.470935472 +0000 UTC m=+1108.710379966" observedRunningTime="2025-11-22 10:56:30.297243654 +0000 UTC m=+1110.536688148" watchObservedRunningTime="2025-11-22 10:56:30.303419362 +0000 UTC m=+1110.542863846" Nov 22 10:56:30 crc kubenswrapper[4772]: I1122 10:56:30.326471 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-6f95d84fd6-bwjrb" podStartSLOduration=3.8468966 podStartE2EDuration="16.326456362s" podCreationTimestamp="2025-11-22 10:56:14 +0000 UTC" firstStartedPulling="2025-11-22 10:56:15.974505058 +0000 UTC m=+1096.213949552" lastFinishedPulling="2025-11-22 10:56:28.4540648 +0000 UTC m=+1108.693509314" observedRunningTime="2025-11-22 10:56:30.321997218 +0000 UTC m=+1110.561441712" watchObservedRunningTime="2025-11-22 10:56:30.326456362 +0000 UTC m=+1110.565900856" Nov 22 10:56:30 crc kubenswrapper[4772]: I1122 10:56:30.379701 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-748967c98-kx6h5" podStartSLOduration=3.825110602 podStartE2EDuration="16.379677714s" podCreationTimestamp="2025-11-22 10:56:14 +0000 UTC" firstStartedPulling="2025-11-22 10:56:15.916696789 +0000 UTC m=+1096.156141283" lastFinishedPulling="2025-11-22 10:56:28.471263901 +0000 UTC m=+1108.710708395" observedRunningTime="2025-11-22 10:56:30.376547534 +0000 UTC m=+1110.615992028" watchObservedRunningTime="2025-11-22 10:56:30.379677714 +0000 UTC m=+1110.619122208" Nov 22 10:56:30 crc kubenswrapper[4772]: I1122 10:56:30.382406 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-7r9qs" podStartSLOduration=4.486421247 podStartE2EDuration="16.382393754s" podCreationTimestamp="2025-11-22 10:56:14 +0000 UTC" firstStartedPulling="2025-11-22 10:56:16.609663223 +0000 UTC m=+1096.849107717" lastFinishedPulling="2025-11-22 10:56:28.50563574 +0000 UTC m=+1108.745080224" observedRunningTime="2025-11-22 10:56:30.346029393 +0000 UTC m=+1110.585473887" watchObservedRunningTime="2025-11-22 10:56:30.382393754 +0000 UTC m=+1110.621838248" Nov 22 10:56:30 crc kubenswrapper[4772]: I1122 10:56:30.415431 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-58879495c-wwt28" podStartSLOduration=4.613208483 podStartE2EDuration="16.415412359s" podCreationTimestamp="2025-11-22 10:56:14 +0000 UTC" firstStartedPulling="2025-11-22 10:56:16.669332251 +0000 UTC m=+1096.908776745" lastFinishedPulling="2025-11-22 10:56:28.471536117 +0000 UTC m=+1108.710980621" observedRunningTime="2025-11-22 10:56:30.401757789 +0000 UTC m=+1110.641202283" watchObservedRunningTime="2025-11-22 10:56:30.415412359 +0000 UTC m=+1110.654856863" Nov 22 10:56:30 crc kubenswrapper[4772]: I1122 10:56:30.430307 4772 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-79cc9d59f5-rvc7z" podStartSLOduration=4.535713868 podStartE2EDuration="16.430288219s" podCreationTimestamp="2025-11-22 10:56:14 +0000 UTC" firstStartedPulling="2025-11-22 10:56:16.55874868 +0000 UTC m=+1096.798193174" lastFinishedPulling="2025-11-22 10:56:28.453323031 +0000 UTC m=+1108.692767525" observedRunningTime="2025-11-22 10:56:30.429059558 +0000 UTC m=+1110.668504062" watchObservedRunningTime="2025-11-22 10:56:30.430288219 +0000 UTC m=+1110.669732713" Nov 22 10:56:30 crc kubenswrapper[4772]: I1122 10:56:30.455592 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-867d87977b-jxm8x" podStartSLOduration=4.6559722059999995 podStartE2EDuration="16.455572826s" podCreationTimestamp="2025-11-22 10:56:14 +0000 UTC" firstStartedPulling="2025-11-22 10:56:16.65446607 +0000 UTC m=+1096.893910564" lastFinishedPulling="2025-11-22 10:56:28.45406669 +0000 UTC m=+1108.693511184" observedRunningTime="2025-11-22 10:56:30.453914644 +0000 UTC m=+1110.693359138" watchObservedRunningTime="2025-11-22 10:56:30.455572826 +0000 UTC m=+1110.695017320" Nov 22 10:56:30 crc kubenswrapper[4772]: I1122 10:56:30.482926 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-698d6fd7d6-x46c7" podStartSLOduration=4.008015274 podStartE2EDuration="16.482896106s" podCreationTimestamp="2025-11-22 10:56:14 +0000 UTC" firstStartedPulling="2025-11-22 10:56:15.996282546 +0000 UTC m=+1096.235727030" lastFinishedPulling="2025-11-22 10:56:28.471163368 +0000 UTC m=+1108.710607862" observedRunningTime="2025-11-22 10:56:30.475099576 +0000 UTC m=+1110.714544070" watchObservedRunningTime="2025-11-22 10:56:30.482896106 +0000 UTC m=+1110.722340610" Nov 22 10:56:30 crc kubenswrapper[4772]: I1122 10:56:30.499385 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-6c55d8d69b-w8q6b" podStartSLOduration=4.679662362 podStartE2EDuration="16.499370387s" podCreationTimestamp="2025-11-22 10:56:14 +0000 UTC" firstStartedPulling="2025-11-22 10:56:16.651476654 +0000 UTC m=+1096.890921148" lastFinishedPulling="2025-11-22 10:56:28.471184679 +0000 UTC m=+1108.710629173" observedRunningTime="2025-11-22 10:56:30.494176144 +0000 UTC m=+1110.733620638" watchObservedRunningTime="2025-11-22 10:56:30.499370387 +0000 UTC m=+1110.738814881" Nov 22 10:56:30 crc kubenswrapper[4772]: I1122 10:56:30.524047 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-7d5d9fd47f-xk9c8" podStartSLOduration=4.661117737 podStartE2EDuration="16.524024707s" podCreationTimestamp="2025-11-22 10:56:14 +0000 UTC" firstStartedPulling="2025-11-22 10:56:16.610021253 +0000 UTC m=+1096.849465747" lastFinishedPulling="2025-11-22 10:56:28.472928223 +0000 UTC m=+1108.712372717" observedRunningTime="2025-11-22 10:56:30.517954432 +0000 UTC m=+1110.757398926" watchObservedRunningTime="2025-11-22 10:56:30.524024707 +0000 UTC m=+1110.763469201" Nov 22 10:56:31 crc kubenswrapper[4772]: I1122 10:56:31.258799 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-867d87977b-jxm8x" 
event={"ID":"1037af09-c926-409c-9732-26cf293cc210","Type":"ContainerStarted","Data":"963786c0a0f6ae0f6d815c82f17f0785395fbe4256e63881ee4018fd72a54245"} Nov 22 10:56:35 crc kubenswrapper[4772]: I1122 10:56:35.056540 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-748967c98-kx6h5" Nov 22 10:56:35 crc kubenswrapper[4772]: I1122 10:56:35.075578 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-5bfbbb859d-bjmld" podStartSLOduration=9.194713167 podStartE2EDuration="21.075561597s" podCreationTimestamp="2025-11-22 10:56:14 +0000 UTC" firstStartedPulling="2025-11-22 10:56:16.590354829 +0000 UTC m=+1096.829799323" lastFinishedPulling="2025-11-22 10:56:28.471203259 +0000 UTC m=+1108.710647753" observedRunningTime="2025-11-22 10:56:30.549744396 +0000 UTC m=+1110.789188900" watchObservedRunningTime="2025-11-22 10:56:35.075561597 +0000 UTC m=+1115.315006081" Nov 22 10:56:35 crc kubenswrapper[4772]: I1122 10:56:35.100415 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-6f95d84fd6-bwjrb" Nov 22 10:56:35 crc kubenswrapper[4772]: I1122 10:56:35.120021 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-698d6fd7d6-x46c7" Nov 22 10:56:35 crc kubenswrapper[4772]: I1122 10:56:35.212064 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-7d5d9fd47f-xk9c8" Nov 22 10:56:35 crc kubenswrapper[4772]: I1122 10:56:35.252699 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-8772n" Nov 22 10:56:35 crc kubenswrapper[4772]: I1122 10:56:35.276692 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-54485f899-nhqm7" Nov 22 10:56:35 crc kubenswrapper[4772]: I1122 10:56:35.346768 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-5bfbbb859d-bjmld" Nov 22 10:56:35 crc kubenswrapper[4772]: I1122 10:56:35.372571 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-79cc9d59f5-rvc7z" Nov 22 10:56:35 crc kubenswrapper[4772]: I1122 10:56:35.380304 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-7r9qs" Nov 22 10:56:35 crc kubenswrapper[4772]: I1122 10:56:35.431374 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-867d87977b-jxm8x" Nov 22 10:56:35 crc kubenswrapper[4772]: I1122 10:56:35.452239 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-58879495c-wwt28" Nov 22 10:56:35 crc kubenswrapper[4772]: I1122 10:56:35.818358 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-695797c565-wr5mj" Nov 22 10:56:35 crc kubenswrapper[4772]: I1122 10:56:35.882282 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/infra-operator-controller-manager-6c55d8d69b-w8q6b" Nov 22 10:56:36 crc kubenswrapper[4772]: I1122 10:56:36.967908 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-nn7dj" Nov 22 10:56:47 crc kubenswrapper[4772]: I1122 10:56:47.379062 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-646fd589f9-jpzwd" event={"ID":"bc5e47a3-441e-4076-b726-99bb8cd36d95","Type":"ContainerStarted","Data":"260ef1c37aea4aaa0c9627eef3470b206a7857d41907cff89cc22a661fa63733"} Nov 22 10:56:47 crc kubenswrapper[4772]: I1122 10:56:47.379767 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-646fd589f9-jpzwd" Nov 22 10:56:47 crc kubenswrapper[4772]: I1122 10:56:47.381280 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-d5fb87cb8-v8z2p" event={"ID":"7f92807d-a811-4354-a9b3-4efe75db8096","Type":"ContainerStarted","Data":"6c5f6c2e2bfce40b72f1a11c4421828e948f25ade9902bf83f163d9cf564c6f9"} Nov 22 10:56:47 crc kubenswrapper[4772]: I1122 10:56:47.381444 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-d5fb87cb8-v8z2p" Nov 22 10:56:47 crc kubenswrapper[4772]: I1122 10:56:47.383919 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79d658b66d-cgk6m" event={"ID":"1d228e15-a43a-4b2a-b8a5-958a6ce484a7","Type":"ContainerStarted","Data":"7af50707f3c575e9a1aa04c8a540fde2c950001b66a98599cc504c4a75eee3bf"} Nov 22 10:56:47 crc kubenswrapper[4772]: I1122 10:56:47.384144 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-79d658b66d-cgk6m" Nov 22 10:56:47 crc kubenswrapper[4772]: I1122 10:56:47.385914 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-77db6bf9c-2d4vx" event={"ID":"bc363239-f347-4379-8fb3-e499b555a263","Type":"ContainerStarted","Data":"155e4b0c8c2b51f57a79eb50098da3683bcdd8f71f7d0e666fc51ad461263617"} Nov 22 10:56:47 crc kubenswrapper[4772]: I1122 10:56:47.386134 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-77db6bf9c-2d4vx" Nov 22 10:56:47 crc kubenswrapper[4772]: I1122 10:56:47.387886 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6788cc6d75-9bnwv" event={"ID":"89c6d87d-3d72-43b9-a56e-bb94322cc856","Type":"ContainerStarted","Data":"a2bd549ed5807db071ae5c11a2650703ac820593bee0df8a3ede1129e9bbde5c"} Nov 22 10:56:47 crc kubenswrapper[4772]: I1122 10:56:47.388001 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6788cc6d75-9bnwv" Nov 22 10:56:47 crc kubenswrapper[4772]: I1122 10:56:47.389790 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6b56b8849f-wblft" event={"ID":"e7a30218-add6-4170-948c-b6b9f8b960c8","Type":"ContainerStarted","Data":"729bca713b5060fe964de6cd1f606dff1befcc846a41f7c86bef01e06681e52b"} Nov 22 10:56:47 crc kubenswrapper[4772]: I1122 10:56:47.389957 4772 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-6b56b8849f-wblft" Nov 22 10:56:47 crc kubenswrapper[4772]: I1122 10:56:47.391781 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-8f6687c44-zv2pz" event={"ID":"0e43cbd9-c19e-4747-b833-39529cfa3d9d","Type":"ContainerStarted","Data":"7ce2ae820d8eec059f4b3ccb8ff11f5c64ca11abc16836b4152da6ee59b13373"} Nov 22 10:56:47 crc kubenswrapper[4772]: I1122 10:56:47.391946 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-8f6687c44-zv2pz" Nov 22 10:56:47 crc kubenswrapper[4772]: I1122 10:56:47.403993 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-646fd589f9-jpzwd" podStartSLOduration=3.657760648 podStartE2EDuration="33.403968994s" podCreationTimestamp="2025-11-22 10:56:14 +0000 UTC" firstStartedPulling="2025-11-22 10:56:16.842219475 +0000 UTC m=+1097.081663969" lastFinishedPulling="2025-11-22 10:56:46.588427821 +0000 UTC m=+1126.827872315" observedRunningTime="2025-11-22 10:56:47.401995414 +0000 UTC m=+1127.641439908" watchObservedRunningTime="2025-11-22 10:56:47.403968994 +0000 UTC m=+1127.643413498" Nov 22 10:56:47 crc kubenswrapper[4772]: I1122 10:56:47.419888 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-79d658b66d-cgk6m" podStartSLOduration=3.707009128 podStartE2EDuration="33.419872151s" podCreationTimestamp="2025-11-22 10:56:14 +0000 UTC" firstStartedPulling="2025-11-22 10:56:16.875731263 +0000 UTC m=+1097.115175757" lastFinishedPulling="2025-11-22 10:56:46.588594286 +0000 UTC m=+1126.828038780" observedRunningTime="2025-11-22 10:56:47.417459319 +0000 UTC m=+1127.656903823" watchObservedRunningTime="2025-11-22 10:56:47.419872151 +0000 UTC m=+1127.659316645" Nov 22 10:56:47 crc kubenswrapper[4772]: I1122 10:56:47.432829 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-77db6bf9c-2d4vx" podStartSLOduration=2.708588798 podStartE2EDuration="32.432815812s" podCreationTimestamp="2025-11-22 10:56:15 +0000 UTC" firstStartedPulling="2025-11-22 10:56:16.864524486 +0000 UTC m=+1097.103968970" lastFinishedPulling="2025-11-22 10:56:46.58875149 +0000 UTC m=+1126.828195984" observedRunningTime="2025-11-22 10:56:47.430555755 +0000 UTC m=+1127.670000239" watchObservedRunningTime="2025-11-22 10:56:47.432815812 +0000 UTC m=+1127.672260306" Nov 22 10:56:47 crc kubenswrapper[4772]: I1122 10:56:47.447562 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6788cc6d75-9bnwv" podStartSLOduration=3.511849964 podStartE2EDuration="33.447543379s" podCreationTimestamp="2025-11-22 10:56:14 +0000 UTC" firstStartedPulling="2025-11-22 10:56:16.675278043 +0000 UTC m=+1096.914722537" lastFinishedPulling="2025-11-22 10:56:46.610971458 +0000 UTC m=+1126.850415952" observedRunningTime="2025-11-22 10:56:47.44641197 +0000 UTC m=+1127.685856484" watchObservedRunningTime="2025-11-22 10:56:47.447543379 +0000 UTC m=+1127.686987863" Nov 22 10:56:47 crc kubenswrapper[4772]: I1122 10:56:47.485472 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-8f6687c44-zv2pz" podStartSLOduration=2.773365748 
podStartE2EDuration="32.4854574s" podCreationTimestamp="2025-11-22 10:56:15 +0000 UTC" firstStartedPulling="2025-11-22 10:56:16.876337149 +0000 UTC m=+1097.115781643" lastFinishedPulling="2025-11-22 10:56:46.588428801 +0000 UTC m=+1126.827873295" observedRunningTime="2025-11-22 10:56:47.465264103 +0000 UTC m=+1127.704708597" watchObservedRunningTime="2025-11-22 10:56:47.4854574 +0000 UTC m=+1127.724901894" Nov 22 10:56:47 crc kubenswrapper[4772]: I1122 10:56:47.487260 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-d5fb87cb8-v8z2p" podStartSLOduration=3.75708191 podStartE2EDuration="33.487255466s" podCreationTimestamp="2025-11-22 10:56:14 +0000 UTC" firstStartedPulling="2025-11-22 10:56:16.858237005 +0000 UTC m=+1097.097681499" lastFinishedPulling="2025-11-22 10:56:46.588410561 +0000 UTC m=+1126.827855055" observedRunningTime="2025-11-22 10:56:47.482409722 +0000 UTC m=+1127.721854226" watchObservedRunningTime="2025-11-22 10:56:47.487255466 +0000 UTC m=+1127.726699960" Nov 22 10:56:47 crc kubenswrapper[4772]: I1122 10:56:47.505259 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-6b56b8849f-wblft" podStartSLOduration=2.785686582 podStartE2EDuration="32.505237576s" podCreationTimestamp="2025-11-22 10:56:15 +0000 UTC" firstStartedPulling="2025-11-22 10:56:16.876478582 +0000 UTC m=+1097.115923076" lastFinishedPulling="2025-11-22 10:56:46.596029576 +0000 UTC m=+1126.835474070" observedRunningTime="2025-11-22 10:56:47.502362632 +0000 UTC m=+1127.741807126" watchObservedRunningTime="2025-11-22 10:56:47.505237576 +0000 UTC m=+1127.744682070" Nov 22 10:56:55 crc kubenswrapper[4772]: I1122 10:56:55.373503 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6788cc6d75-9bnwv" Nov 22 10:56:55 crc kubenswrapper[4772]: I1122 10:56:55.465515 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-646fd589f9-jpzwd" Nov 22 10:56:55 crc kubenswrapper[4772]: I1122 10:56:55.583441 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-d5fb87cb8-v8z2p" Nov 22 10:56:55 crc kubenswrapper[4772]: I1122 10:56:55.659331 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-79d658b66d-cgk6m" Nov 22 10:56:55 crc kubenswrapper[4772]: I1122 10:56:55.792095 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-8f6687c44-zv2pz" Nov 22 10:56:55 crc kubenswrapper[4772]: I1122 10:56:55.855105 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-77db6bf9c-2d4vx" Nov 22 10:56:55 crc kubenswrapper[4772]: I1122 10:56:55.983400 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-6b56b8849f-wblft" Nov 22 10:57:01 crc kubenswrapper[4772]: I1122 10:57:01.532878 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Nov 22 10:57:01 crc kubenswrapper[4772]: I1122 10:57:01.533379 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 10:57:11 crc kubenswrapper[4772]: I1122 10:57:11.531585 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-9rnjj"] Nov 22 10:57:11 crc kubenswrapper[4772]: I1122 10:57:11.533667 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-9rnjj" Nov 22 10:57:11 crc kubenswrapper[4772]: I1122 10:57:11.541735 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Nov 22 10:57:11 crc kubenswrapper[4772]: I1122 10:57:11.541851 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Nov 22 10:57:11 crc kubenswrapper[4772]: I1122 10:57:11.542025 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-7wb7r" Nov 22 10:57:11 crc kubenswrapper[4772]: I1122 10:57:11.542101 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Nov 22 10:57:11 crc kubenswrapper[4772]: I1122 10:57:11.555848 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-9rnjj"] Nov 22 10:57:11 crc kubenswrapper[4772]: I1122 10:57:11.614493 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fc4jp\" (UniqueName: \"kubernetes.io/projected/502784b4-25b0-4473-9ec8-2ed73a0b34f2-kube-api-access-fc4jp\") pod \"dnsmasq-dns-675f4bcbfc-9rnjj\" (UID: \"502784b4-25b0-4473-9ec8-2ed73a0b34f2\") " pod="openstack/dnsmasq-dns-675f4bcbfc-9rnjj" Nov 22 10:57:11 crc kubenswrapper[4772]: I1122 10:57:11.614545 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/502784b4-25b0-4473-9ec8-2ed73a0b34f2-config\") pod \"dnsmasq-dns-675f4bcbfc-9rnjj\" (UID: \"502784b4-25b0-4473-9ec8-2ed73a0b34f2\") " pod="openstack/dnsmasq-dns-675f4bcbfc-9rnjj" Nov 22 10:57:11 crc kubenswrapper[4772]: I1122 10:57:11.615018 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-g6tpn"] Nov 22 10:57:11 crc kubenswrapper[4772]: I1122 10:57:11.616424 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-g6tpn" Nov 22 10:57:11 crc kubenswrapper[4772]: I1122 10:57:11.619912 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Nov 22 10:57:11 crc kubenswrapper[4772]: I1122 10:57:11.639013 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-g6tpn"] Nov 22 10:57:11 crc kubenswrapper[4772]: I1122 10:57:11.716032 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f499ba6-29a1-45c0-b3c6-40576fa9f90b-config\") pod \"dnsmasq-dns-78dd6ddcc-g6tpn\" (UID: \"3f499ba6-29a1-45c0-b3c6-40576fa9f90b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g6tpn" Nov 22 10:57:11 crc kubenswrapper[4772]: I1122 10:57:11.716259 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kg4f6\" (UniqueName: \"kubernetes.io/projected/3f499ba6-29a1-45c0-b3c6-40576fa9f90b-kube-api-access-kg4f6\") pod \"dnsmasq-dns-78dd6ddcc-g6tpn\" (UID: \"3f499ba6-29a1-45c0-b3c6-40576fa9f90b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g6tpn" Nov 22 10:57:11 crc kubenswrapper[4772]: I1122 10:57:11.716376 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f499ba6-29a1-45c0-b3c6-40576fa9f90b-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-g6tpn\" (UID: \"3f499ba6-29a1-45c0-b3c6-40576fa9f90b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g6tpn" Nov 22 10:57:11 crc kubenswrapper[4772]: I1122 10:57:11.716507 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fc4jp\" (UniqueName: \"kubernetes.io/projected/502784b4-25b0-4473-9ec8-2ed73a0b34f2-kube-api-access-fc4jp\") pod \"dnsmasq-dns-675f4bcbfc-9rnjj\" (UID: \"502784b4-25b0-4473-9ec8-2ed73a0b34f2\") " pod="openstack/dnsmasq-dns-675f4bcbfc-9rnjj" Nov 22 10:57:11 crc kubenswrapper[4772]: I1122 10:57:11.716609 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/502784b4-25b0-4473-9ec8-2ed73a0b34f2-config\") pod \"dnsmasq-dns-675f4bcbfc-9rnjj\" (UID: \"502784b4-25b0-4473-9ec8-2ed73a0b34f2\") " pod="openstack/dnsmasq-dns-675f4bcbfc-9rnjj" Nov 22 10:57:11 crc kubenswrapper[4772]: I1122 10:57:11.717551 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/502784b4-25b0-4473-9ec8-2ed73a0b34f2-config\") pod \"dnsmasq-dns-675f4bcbfc-9rnjj\" (UID: \"502784b4-25b0-4473-9ec8-2ed73a0b34f2\") " pod="openstack/dnsmasq-dns-675f4bcbfc-9rnjj" Nov 22 10:57:11 crc kubenswrapper[4772]: I1122 10:57:11.744317 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fc4jp\" (UniqueName: \"kubernetes.io/projected/502784b4-25b0-4473-9ec8-2ed73a0b34f2-kube-api-access-fc4jp\") pod \"dnsmasq-dns-675f4bcbfc-9rnjj\" (UID: \"502784b4-25b0-4473-9ec8-2ed73a0b34f2\") " pod="openstack/dnsmasq-dns-675f4bcbfc-9rnjj" Nov 22 10:57:11 crc kubenswrapper[4772]: I1122 10:57:11.818110 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f499ba6-29a1-45c0-b3c6-40576fa9f90b-config\") pod \"dnsmasq-dns-78dd6ddcc-g6tpn\" (UID: \"3f499ba6-29a1-45c0-b3c6-40576fa9f90b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g6tpn" Nov 22 10:57:11 crc kubenswrapper[4772]: I1122 
10:57:11.818432 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kg4f6\" (UniqueName: \"kubernetes.io/projected/3f499ba6-29a1-45c0-b3c6-40576fa9f90b-kube-api-access-kg4f6\") pod \"dnsmasq-dns-78dd6ddcc-g6tpn\" (UID: \"3f499ba6-29a1-45c0-b3c6-40576fa9f90b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g6tpn" Nov 22 10:57:11 crc kubenswrapper[4772]: I1122 10:57:11.818452 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f499ba6-29a1-45c0-b3c6-40576fa9f90b-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-g6tpn\" (UID: \"3f499ba6-29a1-45c0-b3c6-40576fa9f90b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g6tpn" Nov 22 10:57:11 crc kubenswrapper[4772]: I1122 10:57:11.819175 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f499ba6-29a1-45c0-b3c6-40576fa9f90b-config\") pod \"dnsmasq-dns-78dd6ddcc-g6tpn\" (UID: \"3f499ba6-29a1-45c0-b3c6-40576fa9f90b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g6tpn" Nov 22 10:57:11 crc kubenswrapper[4772]: I1122 10:57:11.819184 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f499ba6-29a1-45c0-b3c6-40576fa9f90b-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-g6tpn\" (UID: \"3f499ba6-29a1-45c0-b3c6-40576fa9f90b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g6tpn" Nov 22 10:57:11 crc kubenswrapper[4772]: I1122 10:57:11.834646 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kg4f6\" (UniqueName: \"kubernetes.io/projected/3f499ba6-29a1-45c0-b3c6-40576fa9f90b-kube-api-access-kg4f6\") pod \"dnsmasq-dns-78dd6ddcc-g6tpn\" (UID: \"3f499ba6-29a1-45c0-b3c6-40576fa9f90b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g6tpn" Nov 22 10:57:11 crc kubenswrapper[4772]: I1122 10:57:11.861436 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-9rnjj" Nov 22 10:57:11 crc kubenswrapper[4772]: I1122 10:57:11.930756 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-g6tpn" Nov 22 10:57:12 crc kubenswrapper[4772]: I1122 10:57:12.317783 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-9rnjj"] Nov 22 10:57:12 crc kubenswrapper[4772]: I1122 10:57:12.321580 4772 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 10:57:12 crc kubenswrapper[4772]: I1122 10:57:12.395804 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-g6tpn"] Nov 22 10:57:12 crc kubenswrapper[4772]: W1122 10:57:12.398419 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3f499ba6_29a1_45c0_b3c6_40576fa9f90b.slice/crio-3360d3bf6df816d033e3c810c6c57411f0679bac5e77ad41b08f4a7613053f7b WatchSource:0}: Error finding container 3360d3bf6df816d033e3c810c6c57411f0679bac5e77ad41b08f4a7613053f7b: Status 404 returned error can't find the container with id 3360d3bf6df816d033e3c810c6c57411f0679bac5e77ad41b08f4a7613053f7b Nov 22 10:57:12 crc kubenswrapper[4772]: I1122 10:57:12.576144 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-g6tpn" event={"ID":"3f499ba6-29a1-45c0-b3c6-40576fa9f90b","Type":"ContainerStarted","Data":"3360d3bf6df816d033e3c810c6c57411f0679bac5e77ad41b08f4a7613053f7b"} Nov 22 10:57:12 crc kubenswrapper[4772]: I1122 10:57:12.577701 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-9rnjj" event={"ID":"502784b4-25b0-4473-9ec8-2ed73a0b34f2","Type":"ContainerStarted","Data":"3c81211dac2378ad4d58fb066d3742e095b095b385ea1504510889596c462d9f"} Nov 22 10:57:14 crc kubenswrapper[4772]: I1122 10:57:14.452079 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-9rnjj"] Nov 22 10:57:14 crc kubenswrapper[4772]: I1122 10:57:14.504724 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-t5gk7"] Nov 22 10:57:14 crc kubenswrapper[4772]: I1122 10:57:14.506212 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-t5gk7" Nov 22 10:57:14 crc kubenswrapper[4772]: I1122 10:57:14.531285 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-t5gk7"] Nov 22 10:57:14 crc kubenswrapper[4772]: I1122 10:57:14.653869 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b480cbda-2219-4c23-a15c-abcc0a584417-dns-svc\") pod \"dnsmasq-dns-666b6646f7-t5gk7\" (UID: \"b480cbda-2219-4c23-a15c-abcc0a584417\") " pod="openstack/dnsmasq-dns-666b6646f7-t5gk7" Nov 22 10:57:14 crc kubenswrapper[4772]: I1122 10:57:14.654107 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b480cbda-2219-4c23-a15c-abcc0a584417-config\") pod \"dnsmasq-dns-666b6646f7-t5gk7\" (UID: \"b480cbda-2219-4c23-a15c-abcc0a584417\") " pod="openstack/dnsmasq-dns-666b6646f7-t5gk7" Nov 22 10:57:14 crc kubenswrapper[4772]: I1122 10:57:14.654187 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w69n4\" (UniqueName: \"kubernetes.io/projected/b480cbda-2219-4c23-a15c-abcc0a584417-kube-api-access-w69n4\") pod \"dnsmasq-dns-666b6646f7-t5gk7\" (UID: \"b480cbda-2219-4c23-a15c-abcc0a584417\") " pod="openstack/dnsmasq-dns-666b6646f7-t5gk7" Nov 22 10:57:14 crc kubenswrapper[4772]: I1122 10:57:14.755007 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b480cbda-2219-4c23-a15c-abcc0a584417-dns-svc\") pod \"dnsmasq-dns-666b6646f7-t5gk7\" (UID: \"b480cbda-2219-4c23-a15c-abcc0a584417\") " pod="openstack/dnsmasq-dns-666b6646f7-t5gk7" Nov 22 10:57:14 crc kubenswrapper[4772]: I1122 10:57:14.755094 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b480cbda-2219-4c23-a15c-abcc0a584417-config\") pod \"dnsmasq-dns-666b6646f7-t5gk7\" (UID: \"b480cbda-2219-4c23-a15c-abcc0a584417\") " pod="openstack/dnsmasq-dns-666b6646f7-t5gk7" Nov 22 10:57:14 crc kubenswrapper[4772]: I1122 10:57:14.755118 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w69n4\" (UniqueName: \"kubernetes.io/projected/b480cbda-2219-4c23-a15c-abcc0a584417-kube-api-access-w69n4\") pod \"dnsmasq-dns-666b6646f7-t5gk7\" (UID: \"b480cbda-2219-4c23-a15c-abcc0a584417\") " pod="openstack/dnsmasq-dns-666b6646f7-t5gk7" Nov 22 10:57:14 crc kubenswrapper[4772]: I1122 10:57:14.756163 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b480cbda-2219-4c23-a15c-abcc0a584417-dns-svc\") pod \"dnsmasq-dns-666b6646f7-t5gk7\" (UID: \"b480cbda-2219-4c23-a15c-abcc0a584417\") " pod="openstack/dnsmasq-dns-666b6646f7-t5gk7" Nov 22 10:57:14 crc kubenswrapper[4772]: I1122 10:57:14.756674 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b480cbda-2219-4c23-a15c-abcc0a584417-config\") pod \"dnsmasq-dns-666b6646f7-t5gk7\" (UID: \"b480cbda-2219-4c23-a15c-abcc0a584417\") " pod="openstack/dnsmasq-dns-666b6646f7-t5gk7" Nov 22 10:57:14 crc kubenswrapper[4772]: I1122 10:57:14.791096 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w69n4\" (UniqueName: 
\"kubernetes.io/projected/b480cbda-2219-4c23-a15c-abcc0a584417-kube-api-access-w69n4\") pod \"dnsmasq-dns-666b6646f7-t5gk7\" (UID: \"b480cbda-2219-4c23-a15c-abcc0a584417\") " pod="openstack/dnsmasq-dns-666b6646f7-t5gk7" Nov 22 10:57:14 crc kubenswrapper[4772]: I1122 10:57:14.826405 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-t5gk7" Nov 22 10:57:14 crc kubenswrapper[4772]: I1122 10:57:14.906698 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-g6tpn"] Nov 22 10:57:14 crc kubenswrapper[4772]: I1122 10:57:14.945411 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-stbd7"] Nov 22 10:57:14 crc kubenswrapper[4772]: I1122 10:57:14.951342 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-stbd7" Nov 22 10:57:14 crc kubenswrapper[4772]: I1122 10:57:14.996442 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-stbd7"] Nov 22 10:57:15 crc kubenswrapper[4772]: I1122 10:57:15.072891 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9cd13b77-de2f-4a35-8e87-88f2ed802fc5-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-stbd7\" (UID: \"9cd13b77-de2f-4a35-8e87-88f2ed802fc5\") " pod="openstack/dnsmasq-dns-57d769cc4f-stbd7" Nov 22 10:57:15 crc kubenswrapper[4772]: I1122 10:57:15.073266 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tz677\" (UniqueName: \"kubernetes.io/projected/9cd13b77-de2f-4a35-8e87-88f2ed802fc5-kube-api-access-tz677\") pod \"dnsmasq-dns-57d769cc4f-stbd7\" (UID: \"9cd13b77-de2f-4a35-8e87-88f2ed802fc5\") " pod="openstack/dnsmasq-dns-57d769cc4f-stbd7" Nov 22 10:57:15 crc kubenswrapper[4772]: I1122 10:57:15.073305 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cd13b77-de2f-4a35-8e87-88f2ed802fc5-config\") pod \"dnsmasq-dns-57d769cc4f-stbd7\" (UID: \"9cd13b77-de2f-4a35-8e87-88f2ed802fc5\") " pod="openstack/dnsmasq-dns-57d769cc4f-stbd7" Nov 22 10:57:15 crc kubenswrapper[4772]: I1122 10:57:15.179435 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9cd13b77-de2f-4a35-8e87-88f2ed802fc5-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-stbd7\" (UID: \"9cd13b77-de2f-4a35-8e87-88f2ed802fc5\") " pod="openstack/dnsmasq-dns-57d769cc4f-stbd7" Nov 22 10:57:15 crc kubenswrapper[4772]: I1122 10:57:15.179479 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tz677\" (UniqueName: \"kubernetes.io/projected/9cd13b77-de2f-4a35-8e87-88f2ed802fc5-kube-api-access-tz677\") pod \"dnsmasq-dns-57d769cc4f-stbd7\" (UID: \"9cd13b77-de2f-4a35-8e87-88f2ed802fc5\") " pod="openstack/dnsmasq-dns-57d769cc4f-stbd7" Nov 22 10:57:15 crc kubenswrapper[4772]: I1122 10:57:15.179543 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cd13b77-de2f-4a35-8e87-88f2ed802fc5-config\") pod \"dnsmasq-dns-57d769cc4f-stbd7\" (UID: \"9cd13b77-de2f-4a35-8e87-88f2ed802fc5\") " pod="openstack/dnsmasq-dns-57d769cc4f-stbd7" Nov 22 10:57:15 crc kubenswrapper[4772]: I1122 10:57:15.180297 4772 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9cd13b77-de2f-4a35-8e87-88f2ed802fc5-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-stbd7\" (UID: \"9cd13b77-de2f-4a35-8e87-88f2ed802fc5\") " pod="openstack/dnsmasq-dns-57d769cc4f-stbd7" Nov 22 10:57:15 crc kubenswrapper[4772]: I1122 10:57:15.180569 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cd13b77-de2f-4a35-8e87-88f2ed802fc5-config\") pod \"dnsmasq-dns-57d769cc4f-stbd7\" (UID: \"9cd13b77-de2f-4a35-8e87-88f2ed802fc5\") " pod="openstack/dnsmasq-dns-57d769cc4f-stbd7" Nov 22 10:57:15 crc kubenswrapper[4772]: I1122 10:57:15.205246 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tz677\" (UniqueName: \"kubernetes.io/projected/9cd13b77-de2f-4a35-8e87-88f2ed802fc5-kube-api-access-tz677\") pod \"dnsmasq-dns-57d769cc4f-stbd7\" (UID: \"9cd13b77-de2f-4a35-8e87-88f2ed802fc5\") " pod="openstack/dnsmasq-dns-57d769cc4f-stbd7" Nov 22 10:57:15 crc kubenswrapper[4772]: I1122 10:57:15.329485 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-stbd7" Nov 22 10:57:15 crc kubenswrapper[4772]: W1122 10:57:15.394638 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb480cbda_2219_4c23_a15c_abcc0a584417.slice/crio-d6f256dd410fdb362e5061086b06689dce080883567bd7c5cee184efc4c50951 WatchSource:0}: Error finding container d6f256dd410fdb362e5061086b06689dce080883567bd7c5cee184efc4c50951: Status 404 returned error can't find the container with id d6f256dd410fdb362e5061086b06689dce080883567bd7c5cee184efc4c50951 Nov 22 10:57:15 crc kubenswrapper[4772]: I1122 10:57:15.398648 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-t5gk7"] Nov 22 10:57:15 crc kubenswrapper[4772]: I1122 10:57:15.616290 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-t5gk7" event={"ID":"b480cbda-2219-4c23-a15c-abcc0a584417","Type":"ContainerStarted","Data":"d6f256dd410fdb362e5061086b06689dce080883567bd7c5cee184efc4c50951"} Nov 22 10:57:15 crc kubenswrapper[4772]: I1122 10:57:15.713664 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 22 10:57:15 crc kubenswrapper[4772]: I1122 10:57:15.714893 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 22 10:57:15 crc kubenswrapper[4772]: I1122 10:57:15.723306 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 22 10:57:15 crc kubenswrapper[4772]: I1122 10:57:15.723605 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 22 10:57:15 crc kubenswrapper[4772]: I1122 10:57:15.723749 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 22 10:57:15 crc kubenswrapper[4772]: I1122 10:57:15.723882 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 22 10:57:15 crc kubenswrapper[4772]: I1122 10:57:15.724016 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 22 10:57:15 crc kubenswrapper[4772]: I1122 10:57:15.724181 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 22 10:57:15 crc kubenswrapper[4772]: I1122 10:57:15.724322 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-qdkm8" Nov 22 10:57:15 crc kubenswrapper[4772]: I1122 10:57:15.731988 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 22 10:57:15 crc kubenswrapper[4772]: I1122 10:57:15.791227 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea\") " pod="openstack/rabbitmq-server-0" Nov 22 10:57:15 crc kubenswrapper[4772]: I1122 10:57:15.791269 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea\") " pod="openstack/rabbitmq-server-0" Nov 22 10:57:15 crc kubenswrapper[4772]: I1122 10:57:15.791342 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea-config-data\") pod \"rabbitmq-server-0\" (UID: \"468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea\") " pod="openstack/rabbitmq-server-0" Nov 22 10:57:15 crc kubenswrapper[4772]: I1122 10:57:15.791366 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea-pod-info\") pod \"rabbitmq-server-0\" (UID: \"468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea\") " pod="openstack/rabbitmq-server-0" Nov 22 10:57:15 crc kubenswrapper[4772]: I1122 10:57:15.791397 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea\") " pod="openstack/rabbitmq-server-0" Nov 22 10:57:15 crc kubenswrapper[4772]: I1122 10:57:15.791418 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea\") " pod="openstack/rabbitmq-server-0" Nov 22 10:57:15 crc kubenswrapper[4772]: I1122 10:57:15.791437 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea\") " pod="openstack/rabbitmq-server-0" Nov 22 10:57:15 crc kubenswrapper[4772]: I1122 10:57:15.791473 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea\") " pod="openstack/rabbitmq-server-0" Nov 22 10:57:15 crc kubenswrapper[4772]: I1122 10:57:15.791497 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea\") " pod="openstack/rabbitmq-server-0" Nov 22 10:57:15 crc kubenswrapper[4772]: I1122 10:57:15.791522 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea-server-conf\") pod \"rabbitmq-server-0\" (UID: \"468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea\") " pod="openstack/rabbitmq-server-0" Nov 22 10:57:15 crc kubenswrapper[4772]: I1122 10:57:15.791539 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ld7l6\" (UniqueName: \"kubernetes.io/projected/468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea-kube-api-access-ld7l6\") pod \"rabbitmq-server-0\" (UID: \"468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea\") " pod="openstack/rabbitmq-server-0" Nov 22 10:57:15 crc kubenswrapper[4772]: I1122 10:57:15.825282 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-stbd7"] Nov 22 10:57:15 crc kubenswrapper[4772]: I1122 10:57:15.893252 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea\") " pod="openstack/rabbitmq-server-0" Nov 22 10:57:15 crc kubenswrapper[4772]: I1122 10:57:15.893355 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea\") " pod="openstack/rabbitmq-server-0" Nov 22 10:57:15 crc kubenswrapper[4772]: I1122 10:57:15.893412 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea\") " pod="openstack/rabbitmq-server-0" Nov 22 10:57:15 crc kubenswrapper[4772]: I1122 10:57:15.893485 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea\") " pod="openstack/rabbitmq-server-0" Nov 22 10:57:15 crc kubenswrapper[4772]: I1122 10:57:15.893513 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea\") " pod="openstack/rabbitmq-server-0" Nov 22 10:57:15 crc kubenswrapper[4772]: I1122 10:57:15.893540 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea-server-conf\") pod \"rabbitmq-server-0\" (UID: \"468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea\") " pod="openstack/rabbitmq-server-0" Nov 22 10:57:15 crc kubenswrapper[4772]: I1122 10:57:15.893557 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ld7l6\" (UniqueName: \"kubernetes.io/projected/468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea-kube-api-access-ld7l6\") pod \"rabbitmq-server-0\" (UID: \"468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea\") " pod="openstack/rabbitmq-server-0" Nov 22 10:57:15 crc kubenswrapper[4772]: I1122 10:57:15.893588 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea\") " pod="openstack/rabbitmq-server-0" Nov 22 10:57:15 crc kubenswrapper[4772]: I1122 10:57:15.893616 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea\") " pod="openstack/rabbitmq-server-0" Nov 22 10:57:15 crc kubenswrapper[4772]: I1122 10:57:15.893646 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea-config-data\") pod \"rabbitmq-server-0\" (UID: \"468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea\") " pod="openstack/rabbitmq-server-0" Nov 22 10:57:15 crc kubenswrapper[4772]: I1122 10:57:15.893663 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea-pod-info\") pod \"rabbitmq-server-0\" (UID: \"468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea\") " pod="openstack/rabbitmq-server-0" Nov 22 10:57:15 crc kubenswrapper[4772]: I1122 10:57:15.894074 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea\") " pod="openstack/rabbitmq-server-0" Nov 22 10:57:15 crc kubenswrapper[4772]: I1122 10:57:15.894075 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea\") " pod="openstack/rabbitmq-server-0" Nov 22 10:57:15 crc 
kubenswrapper[4772]: I1122 10:57:15.894268 4772 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Nov 22 10:57:15 crc kubenswrapper[4772]: I1122 10:57:15.895418 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea\") " pod="openstack/rabbitmq-server-0" Nov 22 10:57:15 crc kubenswrapper[4772]: I1122 10:57:15.896576 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea-config-data\") pod \"rabbitmq-server-0\" (UID: \"468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea\") " pod="openstack/rabbitmq-server-0" Nov 22 10:57:15 crc kubenswrapper[4772]: I1122 10:57:15.902810 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea\") " pod="openstack/rabbitmq-server-0" Nov 22 10:57:15 crc kubenswrapper[4772]: I1122 10:57:15.906997 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea-server-conf\") pod \"rabbitmq-server-0\" (UID: \"468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea\") " pod="openstack/rabbitmq-server-0" Nov 22 10:57:15 crc kubenswrapper[4772]: I1122 10:57:15.907166 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea\") " pod="openstack/rabbitmq-server-0" Nov 22 10:57:15 crc kubenswrapper[4772]: I1122 10:57:15.908022 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea\") " pod="openstack/rabbitmq-server-0" Nov 22 10:57:15 crc kubenswrapper[4772]: I1122 10:57:15.908498 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea-pod-info\") pod \"rabbitmq-server-0\" (UID: \"468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea\") " pod="openstack/rabbitmq-server-0" Nov 22 10:57:15 crc kubenswrapper[4772]: I1122 10:57:15.912802 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ld7l6\" (UniqueName: \"kubernetes.io/projected/468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea-kube-api-access-ld7l6\") pod \"rabbitmq-server-0\" (UID: \"468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea\") " pod="openstack/rabbitmq-server-0" Nov 22 10:57:15 crc kubenswrapper[4772]: I1122 10:57:15.943567 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea\") " 
pod="openstack/rabbitmq-server-0" Nov 22 10:57:16 crc kubenswrapper[4772]: I1122 10:57:16.066203 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 22 10:57:16 crc kubenswrapper[4772]: I1122 10:57:16.068345 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 22 10:57:16 crc kubenswrapper[4772]: I1122 10:57:16.069549 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 22 10:57:16 crc kubenswrapper[4772]: I1122 10:57:16.072903 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 22 10:57:16 crc kubenswrapper[4772]: I1122 10:57:16.072910 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 22 10:57:16 crc kubenswrapper[4772]: I1122 10:57:16.072959 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 22 10:57:16 crc kubenswrapper[4772]: I1122 10:57:16.073104 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 22 10:57:16 crc kubenswrapper[4772]: I1122 10:57:16.074396 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 22 10:57:16 crc kubenswrapper[4772]: I1122 10:57:16.076398 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 22 10:57:16 crc kubenswrapper[4772]: I1122 10:57:16.076583 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-4hgwj" Nov 22 10:57:16 crc kubenswrapper[4772]: I1122 10:57:16.084946 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 22 10:57:16 crc kubenswrapper[4772]: I1122 10:57:16.199232 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5ce19f6b-73e1-48b9-810a-f9d97a14fe7b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5ce19f6b-73e1-48b9-810a-f9d97a14fe7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 10:57:16 crc kubenswrapper[4772]: I1122 10:57:16.199298 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5ce19f6b-73e1-48b9-810a-f9d97a14fe7b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"5ce19f6b-73e1-48b9-810a-f9d97a14fe7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 10:57:16 crc kubenswrapper[4772]: I1122 10:57:16.199330 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5ce19f6b-73e1-48b9-810a-f9d97a14fe7b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"5ce19f6b-73e1-48b9-810a-f9d97a14fe7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 10:57:16 crc kubenswrapper[4772]: I1122 10:57:16.199359 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5ce19f6b-73e1-48b9-810a-f9d97a14fe7b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"5ce19f6b-73e1-48b9-810a-f9d97a14fe7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 10:57:16 crc kubenswrapper[4772]: I1122 
10:57:16.199380 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfb6z\" (UniqueName: \"kubernetes.io/projected/5ce19f6b-73e1-48b9-810a-f9d97a14fe7b-kube-api-access-mfb6z\") pod \"rabbitmq-cell1-server-0\" (UID: \"5ce19f6b-73e1-48b9-810a-f9d97a14fe7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 10:57:16 crc kubenswrapper[4772]: I1122 10:57:16.199409 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5ce19f6b-73e1-48b9-810a-f9d97a14fe7b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"5ce19f6b-73e1-48b9-810a-f9d97a14fe7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 10:57:16 crc kubenswrapper[4772]: I1122 10:57:16.199484 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5ce19f6b-73e1-48b9-810a-f9d97a14fe7b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5ce19f6b-73e1-48b9-810a-f9d97a14fe7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 10:57:16 crc kubenswrapper[4772]: I1122 10:57:16.199529 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5ce19f6b-73e1-48b9-810a-f9d97a14fe7b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"5ce19f6b-73e1-48b9-810a-f9d97a14fe7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 10:57:16 crc kubenswrapper[4772]: I1122 10:57:16.199560 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5ce19f6b-73e1-48b9-810a-f9d97a14fe7b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"5ce19f6b-73e1-48b9-810a-f9d97a14fe7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 10:57:16 crc kubenswrapper[4772]: I1122 10:57:16.199622 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"5ce19f6b-73e1-48b9-810a-f9d97a14fe7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 10:57:16 crc kubenswrapper[4772]: I1122 10:57:16.201951 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5ce19f6b-73e1-48b9-810a-f9d97a14fe7b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"5ce19f6b-73e1-48b9-810a-f9d97a14fe7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 10:57:16 crc kubenswrapper[4772]: I1122 10:57:16.303343 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5ce19f6b-73e1-48b9-810a-f9d97a14fe7b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5ce19f6b-73e1-48b9-810a-f9d97a14fe7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 10:57:16 crc kubenswrapper[4772]: I1122 10:57:16.303664 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5ce19f6b-73e1-48b9-810a-f9d97a14fe7b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"5ce19f6b-73e1-48b9-810a-f9d97a14fe7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 10:57:16 crc kubenswrapper[4772]: I1122 10:57:16.303693 4772 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5ce19f6b-73e1-48b9-810a-f9d97a14fe7b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"5ce19f6b-73e1-48b9-810a-f9d97a14fe7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 10:57:16 crc kubenswrapper[4772]: I1122 10:57:16.303715 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"5ce19f6b-73e1-48b9-810a-f9d97a14fe7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 10:57:16 crc kubenswrapper[4772]: I1122 10:57:16.303748 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5ce19f6b-73e1-48b9-810a-f9d97a14fe7b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"5ce19f6b-73e1-48b9-810a-f9d97a14fe7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 10:57:16 crc kubenswrapper[4772]: I1122 10:57:16.303769 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5ce19f6b-73e1-48b9-810a-f9d97a14fe7b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5ce19f6b-73e1-48b9-810a-f9d97a14fe7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 10:57:16 crc kubenswrapper[4772]: I1122 10:57:16.303795 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5ce19f6b-73e1-48b9-810a-f9d97a14fe7b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"5ce19f6b-73e1-48b9-810a-f9d97a14fe7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 10:57:16 crc kubenswrapper[4772]: I1122 10:57:16.303814 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5ce19f6b-73e1-48b9-810a-f9d97a14fe7b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"5ce19f6b-73e1-48b9-810a-f9d97a14fe7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 10:57:16 crc kubenswrapper[4772]: I1122 10:57:16.303831 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5ce19f6b-73e1-48b9-810a-f9d97a14fe7b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"5ce19f6b-73e1-48b9-810a-f9d97a14fe7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 10:57:16 crc kubenswrapper[4772]: I1122 10:57:16.303847 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfb6z\" (UniqueName: \"kubernetes.io/projected/5ce19f6b-73e1-48b9-810a-f9d97a14fe7b-kube-api-access-mfb6z\") pod \"rabbitmq-cell1-server-0\" (UID: \"5ce19f6b-73e1-48b9-810a-f9d97a14fe7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 10:57:16 crc kubenswrapper[4772]: I1122 10:57:16.303867 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5ce19f6b-73e1-48b9-810a-f9d97a14fe7b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"5ce19f6b-73e1-48b9-810a-f9d97a14fe7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 10:57:16 crc kubenswrapper[4772]: I1122 10:57:16.304560 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/5ce19f6b-73e1-48b9-810a-f9d97a14fe7b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"5ce19f6b-73e1-48b9-810a-f9d97a14fe7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 10:57:16 crc kubenswrapper[4772]: I1122 10:57:16.304850 4772 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"5ce19f6b-73e1-48b9-810a-f9d97a14fe7b\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/rabbitmq-cell1-server-0" Nov 22 10:57:16 crc kubenswrapper[4772]: I1122 10:57:16.305150 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5ce19f6b-73e1-48b9-810a-f9d97a14fe7b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"5ce19f6b-73e1-48b9-810a-f9d97a14fe7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 10:57:16 crc kubenswrapper[4772]: I1122 10:57:16.305644 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5ce19f6b-73e1-48b9-810a-f9d97a14fe7b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5ce19f6b-73e1-48b9-810a-f9d97a14fe7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 10:57:16 crc kubenswrapper[4772]: I1122 10:57:16.306842 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5ce19f6b-73e1-48b9-810a-f9d97a14fe7b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"5ce19f6b-73e1-48b9-810a-f9d97a14fe7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 10:57:16 crc kubenswrapper[4772]: I1122 10:57:16.307807 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5ce19f6b-73e1-48b9-810a-f9d97a14fe7b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5ce19f6b-73e1-48b9-810a-f9d97a14fe7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 10:57:16 crc kubenswrapper[4772]: I1122 10:57:16.313072 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5ce19f6b-73e1-48b9-810a-f9d97a14fe7b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"5ce19f6b-73e1-48b9-810a-f9d97a14fe7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 10:57:16 crc kubenswrapper[4772]: I1122 10:57:16.313096 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5ce19f6b-73e1-48b9-810a-f9d97a14fe7b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"5ce19f6b-73e1-48b9-810a-f9d97a14fe7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 10:57:16 crc kubenswrapper[4772]: I1122 10:57:16.313586 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5ce19f6b-73e1-48b9-810a-f9d97a14fe7b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"5ce19f6b-73e1-48b9-810a-f9d97a14fe7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 10:57:16 crc kubenswrapper[4772]: I1122 10:57:16.315973 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5ce19f6b-73e1-48b9-810a-f9d97a14fe7b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"5ce19f6b-73e1-48b9-810a-f9d97a14fe7b\") " 
pod="openstack/rabbitmq-cell1-server-0" Nov 22 10:57:16 crc kubenswrapper[4772]: I1122 10:57:16.320217 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfb6z\" (UniqueName: \"kubernetes.io/projected/5ce19f6b-73e1-48b9-810a-f9d97a14fe7b-kube-api-access-mfb6z\") pod \"rabbitmq-cell1-server-0\" (UID: \"5ce19f6b-73e1-48b9-810a-f9d97a14fe7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 10:57:16 crc kubenswrapper[4772]: I1122 10:57:16.337585 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"5ce19f6b-73e1-48b9-810a-f9d97a14fe7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 10:57:16 crc kubenswrapper[4772]: I1122 10:57:16.403181 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 22 10:57:16 crc kubenswrapper[4772]: I1122 10:57:16.593479 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 22 10:57:16 crc kubenswrapper[4772]: I1122 10:57:16.624515 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-stbd7" event={"ID":"9cd13b77-de2f-4a35-8e87-88f2ed802fc5","Type":"ContainerStarted","Data":"41aa7a4e28fa9f3d55f240ceeda1aba389c4c2584048c846261290e883d94523"} Nov 22 10:57:16 crc kubenswrapper[4772]: I1122 10:57:16.625824 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea","Type":"ContainerStarted","Data":"771eca904cfa49996304858353ea6f3211f3d2940eaef953cb857e5108369f92"} Nov 22 10:57:16 crc kubenswrapper[4772]: I1122 10:57:16.816981 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 22 10:57:16 crc kubenswrapper[4772]: W1122 10:57:16.831591 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ce19f6b_73e1_48b9_810a_f9d97a14fe7b.slice/crio-7dfba9745ae45e12db2392840e83c2d00d1c654dbb39032ca5fbad4ebe6fdf68 WatchSource:0}: Error finding container 7dfba9745ae45e12db2392840e83c2d00d1c654dbb39032ca5fbad4ebe6fdf68: Status 404 returned error can't find the container with id 7dfba9745ae45e12db2392840e83c2d00d1c654dbb39032ca5fbad4ebe6fdf68 Nov 22 10:57:17 crc kubenswrapper[4772]: I1122 10:57:17.031327 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Nov 22 10:57:17 crc kubenswrapper[4772]: I1122 10:57:17.032493 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Nov 22 10:57:17 crc kubenswrapper[4772]: I1122 10:57:17.035375 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Nov 22 10:57:17 crc kubenswrapper[4772]: I1122 10:57:17.035430 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Nov 22 10:57:17 crc kubenswrapper[4772]: I1122 10:57:17.036338 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 22 10:57:17 crc kubenswrapper[4772]: I1122 10:57:17.039031 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-cvml9" Nov 22 10:57:17 crc kubenswrapper[4772]: I1122 10:57:17.040782 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Nov 22 10:57:17 crc kubenswrapper[4772]: I1122 10:57:17.044684 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Nov 22 10:57:17 crc kubenswrapper[4772]: I1122 10:57:17.054330 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 22 10:57:17 crc kubenswrapper[4772]: I1122 10:57:17.122212 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9a852d5-2258-45b4-9076-95740059eecd-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"c9a852d5-2258-45b4-9076-95740059eecd\") " pod="openstack/openstack-galera-0" Nov 22 10:57:17 crc kubenswrapper[4772]: I1122 10:57:17.122261 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c9a852d5-2258-45b4-9076-95740059eecd-config-data-generated\") pod \"openstack-galera-0\" (UID: \"c9a852d5-2258-45b4-9076-95740059eecd\") " pod="openstack/openstack-galera-0" Nov 22 10:57:17 crc kubenswrapper[4772]: I1122 10:57:17.122355 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9a852d5-2258-45b4-9076-95740059eecd-operator-scripts\") pod \"openstack-galera-0\" (UID: \"c9a852d5-2258-45b4-9076-95740059eecd\") " pod="openstack/openstack-galera-0" Nov 22 10:57:17 crc kubenswrapper[4772]: I1122 10:57:17.122411 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/c9a852d5-2258-45b4-9076-95740059eecd-secrets\") pod \"openstack-galera-0\" (UID: \"c9a852d5-2258-45b4-9076-95740059eecd\") " pod="openstack/openstack-galera-0" Nov 22 10:57:17 crc kubenswrapper[4772]: I1122 10:57:17.122465 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"c9a852d5-2258-45b4-9076-95740059eecd\") " pod="openstack/openstack-galera-0" Nov 22 10:57:17 crc kubenswrapper[4772]: I1122 10:57:17.122561 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c9a852d5-2258-45b4-9076-95740059eecd-config-data-default\") pod \"openstack-galera-0\" (UID: \"c9a852d5-2258-45b4-9076-95740059eecd\") " pod="openstack/openstack-galera-0" Nov 22 10:57:17 crc 
kubenswrapper[4772]: I1122 10:57:17.122614 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c9a852d5-2258-45b4-9076-95740059eecd-kolla-config\") pod \"openstack-galera-0\" (UID: \"c9a852d5-2258-45b4-9076-95740059eecd\") " pod="openstack/openstack-galera-0" Nov 22 10:57:17 crc kubenswrapper[4772]: I1122 10:57:17.122694 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9a852d5-2258-45b4-9076-95740059eecd-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"c9a852d5-2258-45b4-9076-95740059eecd\") " pod="openstack/openstack-galera-0" Nov 22 10:57:17 crc kubenswrapper[4772]: I1122 10:57:17.122728 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfws4\" (UniqueName: \"kubernetes.io/projected/c9a852d5-2258-45b4-9076-95740059eecd-kube-api-access-vfws4\") pod \"openstack-galera-0\" (UID: \"c9a852d5-2258-45b4-9076-95740059eecd\") " pod="openstack/openstack-galera-0" Nov 22 10:57:17 crc kubenswrapper[4772]: I1122 10:57:17.224294 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9a852d5-2258-45b4-9076-95740059eecd-operator-scripts\") pod \"openstack-galera-0\" (UID: \"c9a852d5-2258-45b4-9076-95740059eecd\") " pod="openstack/openstack-galera-0" Nov 22 10:57:17 crc kubenswrapper[4772]: I1122 10:57:17.224364 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/c9a852d5-2258-45b4-9076-95740059eecd-secrets\") pod \"openstack-galera-0\" (UID: \"c9a852d5-2258-45b4-9076-95740059eecd\") " pod="openstack/openstack-galera-0" Nov 22 10:57:17 crc kubenswrapper[4772]: I1122 10:57:17.224394 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"c9a852d5-2258-45b4-9076-95740059eecd\") " pod="openstack/openstack-galera-0" Nov 22 10:57:17 crc kubenswrapper[4772]: I1122 10:57:17.224408 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c9a852d5-2258-45b4-9076-95740059eecd-kolla-config\") pod \"openstack-galera-0\" (UID: \"c9a852d5-2258-45b4-9076-95740059eecd\") " pod="openstack/openstack-galera-0" Nov 22 10:57:17 crc kubenswrapper[4772]: I1122 10:57:17.224426 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c9a852d5-2258-45b4-9076-95740059eecd-config-data-default\") pod \"openstack-galera-0\" (UID: \"c9a852d5-2258-45b4-9076-95740059eecd\") " pod="openstack/openstack-galera-0" Nov 22 10:57:17 crc kubenswrapper[4772]: I1122 10:57:17.224458 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9a852d5-2258-45b4-9076-95740059eecd-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"c9a852d5-2258-45b4-9076-95740059eecd\") " pod="openstack/openstack-galera-0" Nov 22 10:57:17 crc kubenswrapper[4772]: I1122 10:57:17.224476 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfws4\" (UniqueName: 
\"kubernetes.io/projected/c9a852d5-2258-45b4-9076-95740059eecd-kube-api-access-vfws4\") pod \"openstack-galera-0\" (UID: \"c9a852d5-2258-45b4-9076-95740059eecd\") " pod="openstack/openstack-galera-0" Nov 22 10:57:17 crc kubenswrapper[4772]: I1122 10:57:17.224511 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9a852d5-2258-45b4-9076-95740059eecd-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"c9a852d5-2258-45b4-9076-95740059eecd\") " pod="openstack/openstack-galera-0" Nov 22 10:57:17 crc kubenswrapper[4772]: I1122 10:57:17.224532 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c9a852d5-2258-45b4-9076-95740059eecd-config-data-generated\") pod \"openstack-galera-0\" (UID: \"c9a852d5-2258-45b4-9076-95740059eecd\") " pod="openstack/openstack-galera-0" Nov 22 10:57:17 crc kubenswrapper[4772]: I1122 10:57:17.224837 4772 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"c9a852d5-2258-45b4-9076-95740059eecd\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/openstack-galera-0" Nov 22 10:57:17 crc kubenswrapper[4772]: I1122 10:57:17.224980 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c9a852d5-2258-45b4-9076-95740059eecd-config-data-generated\") pod \"openstack-galera-0\" (UID: \"c9a852d5-2258-45b4-9076-95740059eecd\") " pod="openstack/openstack-galera-0" Nov 22 10:57:17 crc kubenswrapper[4772]: I1122 10:57:17.225627 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c9a852d5-2258-45b4-9076-95740059eecd-kolla-config\") pod \"openstack-galera-0\" (UID: \"c9a852d5-2258-45b4-9076-95740059eecd\") " pod="openstack/openstack-galera-0" Nov 22 10:57:17 crc kubenswrapper[4772]: I1122 10:57:17.225769 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9a852d5-2258-45b4-9076-95740059eecd-operator-scripts\") pod \"openstack-galera-0\" (UID: \"c9a852d5-2258-45b4-9076-95740059eecd\") " pod="openstack/openstack-galera-0" Nov 22 10:57:17 crc kubenswrapper[4772]: I1122 10:57:17.226393 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c9a852d5-2258-45b4-9076-95740059eecd-config-data-default\") pod \"openstack-galera-0\" (UID: \"c9a852d5-2258-45b4-9076-95740059eecd\") " pod="openstack/openstack-galera-0" Nov 22 10:57:17 crc kubenswrapper[4772]: I1122 10:57:17.232976 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9a852d5-2258-45b4-9076-95740059eecd-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"c9a852d5-2258-45b4-9076-95740059eecd\") " pod="openstack/openstack-galera-0" Nov 22 10:57:17 crc kubenswrapper[4772]: I1122 10:57:17.233252 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9a852d5-2258-45b4-9076-95740059eecd-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"c9a852d5-2258-45b4-9076-95740059eecd\") " pod="openstack/openstack-galera-0" Nov 22 10:57:17 
crc kubenswrapper[4772]: I1122 10:57:17.236535 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/c9a852d5-2258-45b4-9076-95740059eecd-secrets\") pod \"openstack-galera-0\" (UID: \"c9a852d5-2258-45b4-9076-95740059eecd\") " pod="openstack/openstack-galera-0" Nov 22 10:57:17 crc kubenswrapper[4772]: I1122 10:57:17.289075 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfws4\" (UniqueName: \"kubernetes.io/projected/c9a852d5-2258-45b4-9076-95740059eecd-kube-api-access-vfws4\") pod \"openstack-galera-0\" (UID: \"c9a852d5-2258-45b4-9076-95740059eecd\") " pod="openstack/openstack-galera-0" Nov 22 10:57:17 crc kubenswrapper[4772]: I1122 10:57:17.293238 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"c9a852d5-2258-45b4-9076-95740059eecd\") " pod="openstack/openstack-galera-0" Nov 22 10:57:17 crc kubenswrapper[4772]: I1122 10:57:17.349429 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 22 10:57:17 crc kubenswrapper[4772]: I1122 10:57:17.632717 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"5ce19f6b-73e1-48b9-810a-f9d97a14fe7b","Type":"ContainerStarted","Data":"7dfba9745ae45e12db2392840e83c2d00d1c654dbb39032ca5fbad4ebe6fdf68"} Nov 22 10:57:17 crc kubenswrapper[4772]: I1122 10:57:17.835679 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 22 10:57:17 crc kubenswrapper[4772]: W1122 10:57:17.842209 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9a852d5_2258_45b4_9076_95740059eecd.slice/crio-7cfebb4275ff9eec3c648197da8639ff10c44b3148b4fdcfcb116d3a32506487 WatchSource:0}: Error finding container 7cfebb4275ff9eec3c648197da8639ff10c44b3148b4fdcfcb116d3a32506487: Status 404 returned error can't find the container with id 7cfebb4275ff9eec3c648197da8639ff10c44b3148b4fdcfcb116d3a32506487 Nov 22 10:57:18 crc kubenswrapper[4772]: I1122 10:57:18.586375 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 22 10:57:18 crc kubenswrapper[4772]: I1122 10:57:18.587856 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 22 10:57:18 crc kubenswrapper[4772]: I1122 10:57:18.590502 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Nov 22 10:57:18 crc kubenswrapper[4772]: I1122 10:57:18.590562 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Nov 22 10:57:18 crc kubenswrapper[4772]: I1122 10:57:18.590843 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-7g4l7" Nov 22 10:57:18 crc kubenswrapper[4772]: I1122 10:57:18.591830 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Nov 22 10:57:18 crc kubenswrapper[4772]: I1122 10:57:18.596037 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 22 10:57:18 crc kubenswrapper[4772]: I1122 10:57:18.644057 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"c9a852d5-2258-45b4-9076-95740059eecd","Type":"ContainerStarted","Data":"7cfebb4275ff9eec3c648197da8639ff10c44b3148b4fdcfcb116d3a32506487"} Nov 22 10:57:18 crc kubenswrapper[4772]: I1122 10:57:18.654037 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4828519-a6ad-4851-b9c2-134a12f373ac-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"c4828519-a6ad-4851-b9c2-134a12f373ac\") " pod="openstack/openstack-cell1-galera-0" Nov 22 10:57:18 crc kubenswrapper[4772]: I1122 10:57:18.654437 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c4828519-a6ad-4851-b9c2-134a12f373ac-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"c4828519-a6ad-4851-b9c2-134a12f373ac\") " pod="openstack/openstack-cell1-galera-0" Nov 22 10:57:18 crc kubenswrapper[4772]: I1122 10:57:18.654574 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4828519-a6ad-4851-b9c2-134a12f373ac-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"c4828519-a6ad-4851-b9c2-134a12f373ac\") " pod="openstack/openstack-cell1-galera-0" Nov 22 10:57:18 crc kubenswrapper[4772]: I1122 10:57:18.654669 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"c4828519-a6ad-4851-b9c2-134a12f373ac\") " pod="openstack/openstack-cell1-galera-0" Nov 22 10:57:18 crc kubenswrapper[4772]: I1122 10:57:18.654764 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c4828519-a6ad-4851-b9c2-134a12f373ac-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"c4828519-a6ad-4851-b9c2-134a12f373ac\") " pod="openstack/openstack-cell1-galera-0" Nov 22 10:57:18 crc kubenswrapper[4772]: I1122 10:57:18.654856 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/c4828519-a6ad-4851-b9c2-134a12f373ac-secrets\") pod \"openstack-cell1-galera-0\" (UID: 
\"c4828519-a6ad-4851-b9c2-134a12f373ac\") " pod="openstack/openstack-cell1-galera-0" Nov 22 10:57:18 crc kubenswrapper[4772]: I1122 10:57:18.655028 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2265k\" (UniqueName: \"kubernetes.io/projected/c4828519-a6ad-4851-b9c2-134a12f373ac-kube-api-access-2265k\") pod \"openstack-cell1-galera-0\" (UID: \"c4828519-a6ad-4851-b9c2-134a12f373ac\") " pod="openstack/openstack-cell1-galera-0" Nov 22 10:57:18 crc kubenswrapper[4772]: I1122 10:57:18.655140 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4828519-a6ad-4851-b9c2-134a12f373ac-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"c4828519-a6ad-4851-b9c2-134a12f373ac\") " pod="openstack/openstack-cell1-galera-0" Nov 22 10:57:18 crc kubenswrapper[4772]: I1122 10:57:18.655235 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c4828519-a6ad-4851-b9c2-134a12f373ac-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"c4828519-a6ad-4851-b9c2-134a12f373ac\") " pod="openstack/openstack-cell1-galera-0" Nov 22 10:57:18 crc kubenswrapper[4772]: I1122 10:57:18.758517 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"c4828519-a6ad-4851-b9c2-134a12f373ac\") " pod="openstack/openstack-cell1-galera-0" Nov 22 10:57:18 crc kubenswrapper[4772]: I1122 10:57:18.758581 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c4828519-a6ad-4851-b9c2-134a12f373ac-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"c4828519-a6ad-4851-b9c2-134a12f373ac\") " pod="openstack/openstack-cell1-galera-0" Nov 22 10:57:18 crc kubenswrapper[4772]: I1122 10:57:18.758612 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/c4828519-a6ad-4851-b9c2-134a12f373ac-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"c4828519-a6ad-4851-b9c2-134a12f373ac\") " pod="openstack/openstack-cell1-galera-0" Nov 22 10:57:18 crc kubenswrapper[4772]: I1122 10:57:18.758688 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2265k\" (UniqueName: \"kubernetes.io/projected/c4828519-a6ad-4851-b9c2-134a12f373ac-kube-api-access-2265k\") pod \"openstack-cell1-galera-0\" (UID: \"c4828519-a6ad-4851-b9c2-134a12f373ac\") " pod="openstack/openstack-cell1-galera-0" Nov 22 10:57:18 crc kubenswrapper[4772]: I1122 10:57:18.758726 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4828519-a6ad-4851-b9c2-134a12f373ac-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"c4828519-a6ad-4851-b9c2-134a12f373ac\") " pod="openstack/openstack-cell1-galera-0" Nov 22 10:57:18 crc kubenswrapper[4772]: I1122 10:57:18.758766 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c4828519-a6ad-4851-b9c2-134a12f373ac-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"c4828519-a6ad-4851-b9c2-134a12f373ac\") " 
pod="openstack/openstack-cell1-galera-0" Nov 22 10:57:18 crc kubenswrapper[4772]: I1122 10:57:18.758830 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4828519-a6ad-4851-b9c2-134a12f373ac-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"c4828519-a6ad-4851-b9c2-134a12f373ac\") " pod="openstack/openstack-cell1-galera-0" Nov 22 10:57:18 crc kubenswrapper[4772]: I1122 10:57:18.758855 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c4828519-a6ad-4851-b9c2-134a12f373ac-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"c4828519-a6ad-4851-b9c2-134a12f373ac\") " pod="openstack/openstack-cell1-galera-0" Nov 22 10:57:18 crc kubenswrapper[4772]: I1122 10:57:18.758895 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4828519-a6ad-4851-b9c2-134a12f373ac-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"c4828519-a6ad-4851-b9c2-134a12f373ac\") " pod="openstack/openstack-cell1-galera-0" Nov 22 10:57:18 crc kubenswrapper[4772]: I1122 10:57:18.759619 4772 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"c4828519-a6ad-4851-b9c2-134a12f373ac\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/openstack-cell1-galera-0" Nov 22 10:57:18 crc kubenswrapper[4772]: I1122 10:57:18.760232 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c4828519-a6ad-4851-b9c2-134a12f373ac-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"c4828519-a6ad-4851-b9c2-134a12f373ac\") " pod="openstack/openstack-cell1-galera-0" Nov 22 10:57:18 crc kubenswrapper[4772]: I1122 10:57:18.760315 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c4828519-a6ad-4851-b9c2-134a12f373ac-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"c4828519-a6ad-4851-b9c2-134a12f373ac\") " pod="openstack/openstack-cell1-galera-0" Nov 22 10:57:18 crc kubenswrapper[4772]: I1122 10:57:18.760814 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c4828519-a6ad-4851-b9c2-134a12f373ac-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"c4828519-a6ad-4851-b9c2-134a12f373ac\") " pod="openstack/openstack-cell1-galera-0" Nov 22 10:57:18 crc kubenswrapper[4772]: I1122 10:57:18.761017 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4828519-a6ad-4851-b9c2-134a12f373ac-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"c4828519-a6ad-4851-b9c2-134a12f373ac\") " pod="openstack/openstack-cell1-galera-0" Nov 22 10:57:18 crc kubenswrapper[4772]: I1122 10:57:18.768289 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4828519-a6ad-4851-b9c2-134a12f373ac-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"c4828519-a6ad-4851-b9c2-134a12f373ac\") " pod="openstack/openstack-cell1-galera-0" Nov 22 10:57:18 crc kubenswrapper[4772]: I1122 
10:57:18.768552 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4828519-a6ad-4851-b9c2-134a12f373ac-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"c4828519-a6ad-4851-b9c2-134a12f373ac\") " pod="openstack/openstack-cell1-galera-0" Nov 22 10:57:18 crc kubenswrapper[4772]: I1122 10:57:18.772499 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/c4828519-a6ad-4851-b9c2-134a12f373ac-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"c4828519-a6ad-4851-b9c2-134a12f373ac\") " pod="openstack/openstack-cell1-galera-0" Nov 22 10:57:18 crc kubenswrapper[4772]: I1122 10:57:18.779340 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2265k\" (UniqueName: \"kubernetes.io/projected/c4828519-a6ad-4851-b9c2-134a12f373ac-kube-api-access-2265k\") pod \"openstack-cell1-galera-0\" (UID: \"c4828519-a6ad-4851-b9c2-134a12f373ac\") " pod="openstack/openstack-cell1-galera-0" Nov 22 10:57:18 crc kubenswrapper[4772]: I1122 10:57:18.793478 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"c4828519-a6ad-4851-b9c2-134a12f373ac\") " pod="openstack/openstack-cell1-galera-0" Nov 22 10:57:18 crc kubenswrapper[4772]: I1122 10:57:18.859306 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Nov 22 10:57:18 crc kubenswrapper[4772]: I1122 10:57:18.862117 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 22 10:57:18 crc kubenswrapper[4772]: I1122 10:57:18.868263 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Nov 22 10:57:18 crc kubenswrapper[4772]: I1122 10:57:18.868276 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Nov 22 10:57:18 crc kubenswrapper[4772]: I1122 10:57:18.868718 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-g99kl" Nov 22 10:57:18 crc kubenswrapper[4772]: I1122 10:57:18.879706 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 22 10:57:18 crc kubenswrapper[4772]: I1122 10:57:18.920915 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 22 10:57:18 crc kubenswrapper[4772]: I1122 10:57:18.962941 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxdh7\" (UniqueName: \"kubernetes.io/projected/3fbd9ebd-2c62-4336-9946-792e4b3c83db-kube-api-access-wxdh7\") pod \"memcached-0\" (UID: \"3fbd9ebd-2c62-4336-9946-792e4b3c83db\") " pod="openstack/memcached-0" Nov 22 10:57:18 crc kubenswrapper[4772]: I1122 10:57:18.963010 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fbd9ebd-2c62-4336-9946-792e4b3c83db-combined-ca-bundle\") pod \"memcached-0\" (UID: \"3fbd9ebd-2c62-4336-9946-792e4b3c83db\") " pod="openstack/memcached-0" Nov 22 10:57:18 crc kubenswrapper[4772]: I1122 10:57:18.963093 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/3fbd9ebd-2c62-4336-9946-792e4b3c83db-memcached-tls-certs\") pod \"memcached-0\" (UID: \"3fbd9ebd-2c62-4336-9946-792e4b3c83db\") " pod="openstack/memcached-0" Nov 22 10:57:18 crc kubenswrapper[4772]: I1122 10:57:18.963170 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3fbd9ebd-2c62-4336-9946-792e4b3c83db-kolla-config\") pod \"memcached-0\" (UID: \"3fbd9ebd-2c62-4336-9946-792e4b3c83db\") " pod="openstack/memcached-0" Nov 22 10:57:18 crc kubenswrapper[4772]: I1122 10:57:18.963229 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3fbd9ebd-2c62-4336-9946-792e4b3c83db-config-data\") pod \"memcached-0\" (UID: \"3fbd9ebd-2c62-4336-9946-792e4b3c83db\") " pod="openstack/memcached-0" Nov 22 10:57:19 crc kubenswrapper[4772]: I1122 10:57:19.064490 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3fbd9ebd-2c62-4336-9946-792e4b3c83db-config-data\") pod \"memcached-0\" (UID: \"3fbd9ebd-2c62-4336-9946-792e4b3c83db\") " pod="openstack/memcached-0" Nov 22 10:57:19 crc kubenswrapper[4772]: I1122 10:57:19.065618 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxdh7\" (UniqueName: \"kubernetes.io/projected/3fbd9ebd-2c62-4336-9946-792e4b3c83db-kube-api-access-wxdh7\") pod \"memcached-0\" (UID: \"3fbd9ebd-2c62-4336-9946-792e4b3c83db\") " pod="openstack/memcached-0" Nov 22 10:57:19 crc kubenswrapper[4772]: I1122 10:57:19.065774 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fbd9ebd-2c62-4336-9946-792e4b3c83db-combined-ca-bundle\") pod \"memcached-0\" (UID: \"3fbd9ebd-2c62-4336-9946-792e4b3c83db\") " pod="openstack/memcached-0" Nov 22 10:57:19 crc kubenswrapper[4772]: I1122 10:57:19.065907 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/3fbd9ebd-2c62-4336-9946-792e4b3c83db-memcached-tls-certs\") pod \"memcached-0\" (UID: \"3fbd9ebd-2c62-4336-9946-792e4b3c83db\") " pod="openstack/memcached-0" Nov 22 10:57:19 crc kubenswrapper[4772]: I1122 10:57:19.066155 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" 
(UniqueName: \"kubernetes.io/configmap/3fbd9ebd-2c62-4336-9946-792e4b3c83db-kolla-config\") pod \"memcached-0\" (UID: \"3fbd9ebd-2c62-4336-9946-792e4b3c83db\") " pod="openstack/memcached-0" Nov 22 10:57:19 crc kubenswrapper[4772]: I1122 10:57:19.065462 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3fbd9ebd-2c62-4336-9946-792e4b3c83db-config-data\") pod \"memcached-0\" (UID: \"3fbd9ebd-2c62-4336-9946-792e4b3c83db\") " pod="openstack/memcached-0" Nov 22 10:57:19 crc kubenswrapper[4772]: I1122 10:57:19.067329 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3fbd9ebd-2c62-4336-9946-792e4b3c83db-kolla-config\") pod \"memcached-0\" (UID: \"3fbd9ebd-2c62-4336-9946-792e4b3c83db\") " pod="openstack/memcached-0" Nov 22 10:57:19 crc kubenswrapper[4772]: I1122 10:57:19.072661 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/3fbd9ebd-2c62-4336-9946-792e4b3c83db-memcached-tls-certs\") pod \"memcached-0\" (UID: \"3fbd9ebd-2c62-4336-9946-792e4b3c83db\") " pod="openstack/memcached-0" Nov 22 10:57:19 crc kubenswrapper[4772]: I1122 10:57:19.073433 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fbd9ebd-2c62-4336-9946-792e4b3c83db-combined-ca-bundle\") pod \"memcached-0\" (UID: \"3fbd9ebd-2c62-4336-9946-792e4b3c83db\") " pod="openstack/memcached-0" Nov 22 10:57:19 crc kubenswrapper[4772]: I1122 10:57:19.086990 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxdh7\" (UniqueName: \"kubernetes.io/projected/3fbd9ebd-2c62-4336-9946-792e4b3c83db-kube-api-access-wxdh7\") pod \"memcached-0\" (UID: \"3fbd9ebd-2c62-4336-9946-792e4b3c83db\") " pod="openstack/memcached-0" Nov 22 10:57:19 crc kubenswrapper[4772]: I1122 10:57:19.185266 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 22 10:57:20 crc kubenswrapper[4772]: I1122 10:57:20.496968 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 22 10:57:20 crc kubenswrapper[4772]: I1122 10:57:20.499767 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 22 10:57:20 crc kubenswrapper[4772]: I1122 10:57:20.502236 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-fcj58" Nov 22 10:57:20 crc kubenswrapper[4772]: I1122 10:57:20.509743 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 22 10:57:20 crc kubenswrapper[4772]: I1122 10:57:20.587316 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnrjf\" (UniqueName: \"kubernetes.io/projected/f92e46ad-ac39-4ede-9cd6-6c2fcdf71efd-kube-api-access-pnrjf\") pod \"kube-state-metrics-0\" (UID: \"f92e46ad-ac39-4ede-9cd6-6c2fcdf71efd\") " pod="openstack/kube-state-metrics-0" Nov 22 10:57:20 crc kubenswrapper[4772]: I1122 10:57:20.688676 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnrjf\" (UniqueName: \"kubernetes.io/projected/f92e46ad-ac39-4ede-9cd6-6c2fcdf71efd-kube-api-access-pnrjf\") pod \"kube-state-metrics-0\" (UID: \"f92e46ad-ac39-4ede-9cd6-6c2fcdf71efd\") " pod="openstack/kube-state-metrics-0" Nov 22 10:57:20 crc kubenswrapper[4772]: I1122 10:57:20.720725 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnrjf\" (UniqueName: \"kubernetes.io/projected/f92e46ad-ac39-4ede-9cd6-6c2fcdf71efd-kube-api-access-pnrjf\") pod \"kube-state-metrics-0\" (UID: \"f92e46ad-ac39-4ede-9cd6-6c2fcdf71efd\") " pod="openstack/kube-state-metrics-0" Nov 22 10:57:20 crc kubenswrapper[4772]: I1122 10:57:20.842128 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.068321 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-267ms"] Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.070210 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-267ms" Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.072885 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-mjrdk" Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.073178 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.075229 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.082036 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-qvtmm"] Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.083631 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-qvtmm" Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.087559 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-267ms"] Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.123561 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-qvtmm"] Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.236513 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/5eaf9da0-a00f-4251-ae11-31ccc3e237e1-etc-ovs\") pod \"ovn-controller-ovs-qvtmm\" (UID: \"5eaf9da0-a00f-4251-ae11-31ccc3e237e1\") " pod="openstack/ovn-controller-ovs-qvtmm" Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.236610 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5eaf9da0-a00f-4251-ae11-31ccc3e237e1-var-log\") pod \"ovn-controller-ovs-qvtmm\" (UID: \"5eaf9da0-a00f-4251-ae11-31ccc3e237e1\") " pod="openstack/ovn-controller-ovs-qvtmm" Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.236732 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5eaf9da0-a00f-4251-ae11-31ccc3e237e1-var-run\") pod \"ovn-controller-ovs-qvtmm\" (UID: \"5eaf9da0-a00f-4251-ae11-31ccc3e237e1\") " pod="openstack/ovn-controller-ovs-qvtmm" Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.236782 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/25144c09-6edb-4bd3-89b2-99db486e733b-var-run\") pod \"ovn-controller-267ms\" (UID: \"25144c09-6edb-4bd3-89b2-99db486e733b\") " pod="openstack/ovn-controller-267ms" Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.236811 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqvzv\" (UniqueName: \"kubernetes.io/projected/5eaf9da0-a00f-4251-ae11-31ccc3e237e1-kube-api-access-bqvzv\") pod \"ovn-controller-ovs-qvtmm\" (UID: \"5eaf9da0-a00f-4251-ae11-31ccc3e237e1\") " pod="openstack/ovn-controller-ovs-qvtmm" Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.236854 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25144c09-6edb-4bd3-89b2-99db486e733b-combined-ca-bundle\") pod \"ovn-controller-267ms\" (UID: \"25144c09-6edb-4bd3-89b2-99db486e733b\") " pod="openstack/ovn-controller-267ms" Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.236900 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/25144c09-6edb-4bd3-89b2-99db486e733b-scripts\") pod \"ovn-controller-267ms\" (UID: \"25144c09-6edb-4bd3-89b2-99db486e733b\") " pod="openstack/ovn-controller-267ms" Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.236930 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/25144c09-6edb-4bd3-89b2-99db486e733b-ovn-controller-tls-certs\") pod \"ovn-controller-267ms\" (UID: \"25144c09-6edb-4bd3-89b2-99db486e733b\") " pod="openstack/ovn-controller-267ms" Nov 22 10:57:24 crc 
kubenswrapper[4772]: I1122 10:57:24.236982 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5eaf9da0-a00f-4251-ae11-31ccc3e237e1-scripts\") pod \"ovn-controller-ovs-qvtmm\" (UID: \"5eaf9da0-a00f-4251-ae11-31ccc3e237e1\") " pod="openstack/ovn-controller-ovs-qvtmm" Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.237006 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/5eaf9da0-a00f-4251-ae11-31ccc3e237e1-var-lib\") pod \"ovn-controller-ovs-qvtmm\" (UID: \"5eaf9da0-a00f-4251-ae11-31ccc3e237e1\") " pod="openstack/ovn-controller-ovs-qvtmm" Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.237127 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/25144c09-6edb-4bd3-89b2-99db486e733b-var-run-ovn\") pod \"ovn-controller-267ms\" (UID: \"25144c09-6edb-4bd3-89b2-99db486e733b\") " pod="openstack/ovn-controller-267ms" Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.237158 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4bfb\" (UniqueName: \"kubernetes.io/projected/25144c09-6edb-4bd3-89b2-99db486e733b-kube-api-access-c4bfb\") pod \"ovn-controller-267ms\" (UID: \"25144c09-6edb-4bd3-89b2-99db486e733b\") " pod="openstack/ovn-controller-267ms" Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.237186 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/25144c09-6edb-4bd3-89b2-99db486e733b-var-log-ovn\") pod \"ovn-controller-267ms\" (UID: \"25144c09-6edb-4bd3-89b2-99db486e733b\") " pod="openstack/ovn-controller-267ms" Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.339004 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/25144c09-6edb-4bd3-89b2-99db486e733b-var-run-ovn\") pod \"ovn-controller-267ms\" (UID: \"25144c09-6edb-4bd3-89b2-99db486e733b\") " pod="openstack/ovn-controller-267ms" Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.339067 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4bfb\" (UniqueName: \"kubernetes.io/projected/25144c09-6edb-4bd3-89b2-99db486e733b-kube-api-access-c4bfb\") pod \"ovn-controller-267ms\" (UID: \"25144c09-6edb-4bd3-89b2-99db486e733b\") " pod="openstack/ovn-controller-267ms" Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.339099 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/25144c09-6edb-4bd3-89b2-99db486e733b-var-log-ovn\") pod \"ovn-controller-267ms\" (UID: \"25144c09-6edb-4bd3-89b2-99db486e733b\") " pod="openstack/ovn-controller-267ms" Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.339150 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/5eaf9da0-a00f-4251-ae11-31ccc3e237e1-etc-ovs\") pod \"ovn-controller-ovs-qvtmm\" (UID: \"5eaf9da0-a00f-4251-ae11-31ccc3e237e1\") " pod="openstack/ovn-controller-ovs-qvtmm" Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.339182 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5eaf9da0-a00f-4251-ae11-31ccc3e237e1-var-log\") pod \"ovn-controller-ovs-qvtmm\" (UID: \"5eaf9da0-a00f-4251-ae11-31ccc3e237e1\") " pod="openstack/ovn-controller-ovs-qvtmm" Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.339488 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5eaf9da0-a00f-4251-ae11-31ccc3e237e1-var-run\") pod \"ovn-controller-ovs-qvtmm\" (UID: \"5eaf9da0-a00f-4251-ae11-31ccc3e237e1\") " pod="openstack/ovn-controller-ovs-qvtmm" Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.339505 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/25144c09-6edb-4bd3-89b2-99db486e733b-var-run\") pod \"ovn-controller-267ms\" (UID: \"25144c09-6edb-4bd3-89b2-99db486e733b\") " pod="openstack/ovn-controller-267ms" Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.339520 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqvzv\" (UniqueName: \"kubernetes.io/projected/5eaf9da0-a00f-4251-ae11-31ccc3e237e1-kube-api-access-bqvzv\") pod \"ovn-controller-ovs-qvtmm\" (UID: \"5eaf9da0-a00f-4251-ae11-31ccc3e237e1\") " pod="openstack/ovn-controller-ovs-qvtmm" Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.339541 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25144c09-6edb-4bd3-89b2-99db486e733b-combined-ca-bundle\") pod \"ovn-controller-267ms\" (UID: \"25144c09-6edb-4bd3-89b2-99db486e733b\") " pod="openstack/ovn-controller-267ms" Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.339570 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/25144c09-6edb-4bd3-89b2-99db486e733b-scripts\") pod \"ovn-controller-267ms\" (UID: \"25144c09-6edb-4bd3-89b2-99db486e733b\") " pod="openstack/ovn-controller-267ms" Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.339587 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/25144c09-6edb-4bd3-89b2-99db486e733b-ovn-controller-tls-certs\") pod \"ovn-controller-267ms\" (UID: \"25144c09-6edb-4bd3-89b2-99db486e733b\") " pod="openstack/ovn-controller-267ms" Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.339608 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5eaf9da0-a00f-4251-ae11-31ccc3e237e1-scripts\") pod \"ovn-controller-ovs-qvtmm\" (UID: \"5eaf9da0-a00f-4251-ae11-31ccc3e237e1\") " pod="openstack/ovn-controller-ovs-qvtmm" Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.339626 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/5eaf9da0-a00f-4251-ae11-31ccc3e237e1-var-lib\") pod \"ovn-controller-ovs-qvtmm\" (UID: \"5eaf9da0-a00f-4251-ae11-31ccc3e237e1\") " pod="openstack/ovn-controller-ovs-qvtmm" Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.340822 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/5eaf9da0-a00f-4251-ae11-31ccc3e237e1-etc-ovs\") pod \"ovn-controller-ovs-qvtmm\" (UID: \"5eaf9da0-a00f-4251-ae11-31ccc3e237e1\") " 
pod="openstack/ovn-controller-ovs-qvtmm" Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.341029 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5eaf9da0-a00f-4251-ae11-31ccc3e237e1-var-run\") pod \"ovn-controller-ovs-qvtmm\" (UID: \"5eaf9da0-a00f-4251-ae11-31ccc3e237e1\") " pod="openstack/ovn-controller-ovs-qvtmm" Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.341358 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/25144c09-6edb-4bd3-89b2-99db486e733b-var-log-ovn\") pod \"ovn-controller-267ms\" (UID: \"25144c09-6edb-4bd3-89b2-99db486e733b\") " pod="openstack/ovn-controller-267ms" Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.342263 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/25144c09-6edb-4bd3-89b2-99db486e733b-scripts\") pod \"ovn-controller-267ms\" (UID: \"25144c09-6edb-4bd3-89b2-99db486e733b\") " pod="openstack/ovn-controller-267ms" Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.342644 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5eaf9da0-a00f-4251-ae11-31ccc3e237e1-scripts\") pod \"ovn-controller-ovs-qvtmm\" (UID: \"5eaf9da0-a00f-4251-ae11-31ccc3e237e1\") " pod="openstack/ovn-controller-ovs-qvtmm" Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.342794 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/25144c09-6edb-4bd3-89b2-99db486e733b-var-run-ovn\") pod \"ovn-controller-267ms\" (UID: \"25144c09-6edb-4bd3-89b2-99db486e733b\") " pod="openstack/ovn-controller-267ms" Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.342808 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5eaf9da0-a00f-4251-ae11-31ccc3e237e1-var-log\") pod \"ovn-controller-ovs-qvtmm\" (UID: \"5eaf9da0-a00f-4251-ae11-31ccc3e237e1\") " pod="openstack/ovn-controller-ovs-qvtmm" Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.342842 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/25144c09-6edb-4bd3-89b2-99db486e733b-var-run\") pod \"ovn-controller-267ms\" (UID: \"25144c09-6edb-4bd3-89b2-99db486e733b\") " pod="openstack/ovn-controller-267ms" Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.342880 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/5eaf9da0-a00f-4251-ae11-31ccc3e237e1-var-lib\") pod \"ovn-controller-ovs-qvtmm\" (UID: \"5eaf9da0-a00f-4251-ae11-31ccc3e237e1\") " pod="openstack/ovn-controller-ovs-qvtmm" Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.344619 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/25144c09-6edb-4bd3-89b2-99db486e733b-ovn-controller-tls-certs\") pod \"ovn-controller-267ms\" (UID: \"25144c09-6edb-4bd3-89b2-99db486e733b\") " pod="openstack/ovn-controller-267ms" Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.356149 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25144c09-6edb-4bd3-89b2-99db486e733b-combined-ca-bundle\") pod 
\"ovn-controller-267ms\" (UID: \"25144c09-6edb-4bd3-89b2-99db486e733b\") " pod="openstack/ovn-controller-267ms" Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.359467 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqvzv\" (UniqueName: \"kubernetes.io/projected/5eaf9da0-a00f-4251-ae11-31ccc3e237e1-kube-api-access-bqvzv\") pod \"ovn-controller-ovs-qvtmm\" (UID: \"5eaf9da0-a00f-4251-ae11-31ccc3e237e1\") " pod="openstack/ovn-controller-ovs-qvtmm" Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.360907 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4bfb\" (UniqueName: \"kubernetes.io/projected/25144c09-6edb-4bd3-89b2-99db486e733b-kube-api-access-c4bfb\") pod \"ovn-controller-267ms\" (UID: \"25144c09-6edb-4bd3-89b2-99db486e733b\") " pod="openstack/ovn-controller-267ms" Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.400647 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-267ms" Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.416835 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-qvtmm" Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.710274 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.712872 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.715211 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.715964 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.716179 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.716334 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-pc4ms" Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.716631 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.719763 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.848683 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4ce40448-07b1-492e-bb7c-48aaf2bb3ce9-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"4ce40448-07b1-492e-bb7c-48aaf2bb3ce9\") " pod="openstack/ovsdbserver-nb-0" Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.848762 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ce40448-07b1-492e-bb7c-48aaf2bb3ce9-config\") pod \"ovsdbserver-nb-0\" (UID: \"4ce40448-07b1-492e-bb7c-48aaf2bb3ce9\") " pod="openstack/ovsdbserver-nb-0" Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.848794 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/4ce40448-07b1-492e-bb7c-48aaf2bb3ce9-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4ce40448-07b1-492e-bb7c-48aaf2bb3ce9\") " pod="openstack/ovsdbserver-nb-0" Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.848822 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ce40448-07b1-492e-bb7c-48aaf2bb3ce9-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4ce40448-07b1-492e-bb7c-48aaf2bb3ce9\") " pod="openstack/ovsdbserver-nb-0" Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.849020 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdncq\" (UniqueName: \"kubernetes.io/projected/4ce40448-07b1-492e-bb7c-48aaf2bb3ce9-kube-api-access-cdncq\") pod \"ovsdbserver-nb-0\" (UID: \"4ce40448-07b1-492e-bb7c-48aaf2bb3ce9\") " pod="openstack/ovsdbserver-nb-0" Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.849116 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"4ce40448-07b1-492e-bb7c-48aaf2bb3ce9\") " pod="openstack/ovsdbserver-nb-0" Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.849191 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ce40448-07b1-492e-bb7c-48aaf2bb3ce9-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"4ce40448-07b1-492e-bb7c-48aaf2bb3ce9\") " pod="openstack/ovsdbserver-nb-0" Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.849461 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4ce40448-07b1-492e-bb7c-48aaf2bb3ce9-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"4ce40448-07b1-492e-bb7c-48aaf2bb3ce9\") " pod="openstack/ovsdbserver-nb-0" Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.950714 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4ce40448-07b1-492e-bb7c-48aaf2bb3ce9-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"4ce40448-07b1-492e-bb7c-48aaf2bb3ce9\") " pod="openstack/ovsdbserver-nb-0" Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.950786 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4ce40448-07b1-492e-bb7c-48aaf2bb3ce9-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"4ce40448-07b1-492e-bb7c-48aaf2bb3ce9\") " pod="openstack/ovsdbserver-nb-0" Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.950840 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ce40448-07b1-492e-bb7c-48aaf2bb3ce9-config\") pod \"ovsdbserver-nb-0\" (UID: \"4ce40448-07b1-492e-bb7c-48aaf2bb3ce9\") " pod="openstack/ovsdbserver-nb-0" Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.950864 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ce40448-07b1-492e-bb7c-48aaf2bb3ce9-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4ce40448-07b1-492e-bb7c-48aaf2bb3ce9\") " 
pod="openstack/ovsdbserver-nb-0" Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.950902 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ce40448-07b1-492e-bb7c-48aaf2bb3ce9-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4ce40448-07b1-492e-bb7c-48aaf2bb3ce9\") " pod="openstack/ovsdbserver-nb-0" Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.950948 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdncq\" (UniqueName: \"kubernetes.io/projected/4ce40448-07b1-492e-bb7c-48aaf2bb3ce9-kube-api-access-cdncq\") pod \"ovsdbserver-nb-0\" (UID: \"4ce40448-07b1-492e-bb7c-48aaf2bb3ce9\") " pod="openstack/ovsdbserver-nb-0" Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.950988 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"4ce40448-07b1-492e-bb7c-48aaf2bb3ce9\") " pod="openstack/ovsdbserver-nb-0" Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.951066 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ce40448-07b1-492e-bb7c-48aaf2bb3ce9-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"4ce40448-07b1-492e-bb7c-48aaf2bb3ce9\") " pod="openstack/ovsdbserver-nb-0" Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.951260 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4ce40448-07b1-492e-bb7c-48aaf2bb3ce9-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"4ce40448-07b1-492e-bb7c-48aaf2bb3ce9\") " pod="openstack/ovsdbserver-nb-0" Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.951778 4772 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"4ce40448-07b1-492e-bb7c-48aaf2bb3ce9\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/ovsdbserver-nb-0" Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.952015 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4ce40448-07b1-492e-bb7c-48aaf2bb3ce9-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"4ce40448-07b1-492e-bb7c-48aaf2bb3ce9\") " pod="openstack/ovsdbserver-nb-0" Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.952147 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ce40448-07b1-492e-bb7c-48aaf2bb3ce9-config\") pod \"ovsdbserver-nb-0\" (UID: \"4ce40448-07b1-492e-bb7c-48aaf2bb3ce9\") " pod="openstack/ovsdbserver-nb-0" Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.964737 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ce40448-07b1-492e-bb7c-48aaf2bb3ce9-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"4ce40448-07b1-492e-bb7c-48aaf2bb3ce9\") " pod="openstack/ovsdbserver-nb-0" Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.965129 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ce40448-07b1-492e-bb7c-48aaf2bb3ce9-metrics-certs-tls-certs\") pod 
\"ovsdbserver-nb-0\" (UID: \"4ce40448-07b1-492e-bb7c-48aaf2bb3ce9\") " pod="openstack/ovsdbserver-nb-0" Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.971240 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ce40448-07b1-492e-bb7c-48aaf2bb3ce9-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4ce40448-07b1-492e-bb7c-48aaf2bb3ce9\") " pod="openstack/ovsdbserver-nb-0" Nov 22 10:57:24 crc kubenswrapper[4772]: I1122 10:57:24.973821 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdncq\" (UniqueName: \"kubernetes.io/projected/4ce40448-07b1-492e-bb7c-48aaf2bb3ce9-kube-api-access-cdncq\") pod \"ovsdbserver-nb-0\" (UID: \"4ce40448-07b1-492e-bb7c-48aaf2bb3ce9\") " pod="openstack/ovsdbserver-nb-0" Nov 22 10:57:25 crc kubenswrapper[4772]: I1122 10:57:25.019265 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"4ce40448-07b1-492e-bb7c-48aaf2bb3ce9\") " pod="openstack/ovsdbserver-nb-0" Nov 22 10:57:25 crc kubenswrapper[4772]: I1122 10:57:25.038407 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 22 10:57:28 crc kubenswrapper[4772]: I1122 10:57:28.137455 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 22 10:57:28 crc kubenswrapper[4772]: I1122 10:57:28.139374 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 22 10:57:28 crc kubenswrapper[4772]: I1122 10:57:28.145727 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-dr662" Nov 22 10:57:28 crc kubenswrapper[4772]: I1122 10:57:28.146850 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Nov 22 10:57:28 crc kubenswrapper[4772]: I1122 10:57:28.146882 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 22 10:57:28 crc kubenswrapper[4772]: I1122 10:57:28.146979 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Nov 22 10:57:28 crc kubenswrapper[4772]: I1122 10:57:28.147242 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Nov 22 10:57:28 crc kubenswrapper[4772]: I1122 10:57:28.306274 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62770fd6-1000-4477-ac95-7a4eaa489732-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"62770fd6-1000-4477-ac95-7a4eaa489732\") " pod="openstack/ovsdbserver-sb-0" Nov 22 10:57:28 crc kubenswrapper[4772]: I1122 10:57:28.306365 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"62770fd6-1000-4477-ac95-7a4eaa489732\") " pod="openstack/ovsdbserver-sb-0" Nov 22 10:57:28 crc kubenswrapper[4772]: I1122 10:57:28.306651 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d52kl\" (UniqueName: 
\"kubernetes.io/projected/62770fd6-1000-4477-ac95-7a4eaa489732-kube-api-access-d52kl\") pod \"ovsdbserver-sb-0\" (UID: \"62770fd6-1000-4477-ac95-7a4eaa489732\") " pod="openstack/ovsdbserver-sb-0" Nov 22 10:57:28 crc kubenswrapper[4772]: I1122 10:57:28.306805 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62770fd6-1000-4477-ac95-7a4eaa489732-config\") pod \"ovsdbserver-sb-0\" (UID: \"62770fd6-1000-4477-ac95-7a4eaa489732\") " pod="openstack/ovsdbserver-sb-0" Nov 22 10:57:28 crc kubenswrapper[4772]: I1122 10:57:28.307028 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/62770fd6-1000-4477-ac95-7a4eaa489732-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"62770fd6-1000-4477-ac95-7a4eaa489732\") " pod="openstack/ovsdbserver-sb-0" Nov 22 10:57:28 crc kubenswrapper[4772]: I1122 10:57:28.307088 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/62770fd6-1000-4477-ac95-7a4eaa489732-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"62770fd6-1000-4477-ac95-7a4eaa489732\") " pod="openstack/ovsdbserver-sb-0" Nov 22 10:57:28 crc kubenswrapper[4772]: I1122 10:57:28.307113 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/62770fd6-1000-4477-ac95-7a4eaa489732-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"62770fd6-1000-4477-ac95-7a4eaa489732\") " pod="openstack/ovsdbserver-sb-0" Nov 22 10:57:28 crc kubenswrapper[4772]: I1122 10:57:28.307176 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/62770fd6-1000-4477-ac95-7a4eaa489732-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"62770fd6-1000-4477-ac95-7a4eaa489732\") " pod="openstack/ovsdbserver-sb-0" Nov 22 10:57:28 crc kubenswrapper[4772]: I1122 10:57:28.408406 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62770fd6-1000-4477-ac95-7a4eaa489732-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"62770fd6-1000-4477-ac95-7a4eaa489732\") " pod="openstack/ovsdbserver-sb-0" Nov 22 10:57:28 crc kubenswrapper[4772]: I1122 10:57:28.408474 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"62770fd6-1000-4477-ac95-7a4eaa489732\") " pod="openstack/ovsdbserver-sb-0" Nov 22 10:57:28 crc kubenswrapper[4772]: I1122 10:57:28.408495 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d52kl\" (UniqueName: \"kubernetes.io/projected/62770fd6-1000-4477-ac95-7a4eaa489732-kube-api-access-d52kl\") pod \"ovsdbserver-sb-0\" (UID: \"62770fd6-1000-4477-ac95-7a4eaa489732\") " pod="openstack/ovsdbserver-sb-0" Nov 22 10:57:28 crc kubenswrapper[4772]: I1122 10:57:28.408539 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62770fd6-1000-4477-ac95-7a4eaa489732-config\") pod \"ovsdbserver-sb-0\" (UID: \"62770fd6-1000-4477-ac95-7a4eaa489732\") " pod="openstack/ovsdbserver-sb-0" 
Nov 22 10:57:28 crc kubenswrapper[4772]: I1122 10:57:28.408597 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/62770fd6-1000-4477-ac95-7a4eaa489732-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"62770fd6-1000-4477-ac95-7a4eaa489732\") " pod="openstack/ovsdbserver-sb-0" Nov 22 10:57:28 crc kubenswrapper[4772]: I1122 10:57:28.408619 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/62770fd6-1000-4477-ac95-7a4eaa489732-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"62770fd6-1000-4477-ac95-7a4eaa489732\") " pod="openstack/ovsdbserver-sb-0" Nov 22 10:57:28 crc kubenswrapper[4772]: I1122 10:57:28.408637 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/62770fd6-1000-4477-ac95-7a4eaa489732-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"62770fd6-1000-4477-ac95-7a4eaa489732\") " pod="openstack/ovsdbserver-sb-0" Nov 22 10:57:28 crc kubenswrapper[4772]: I1122 10:57:28.408662 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/62770fd6-1000-4477-ac95-7a4eaa489732-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"62770fd6-1000-4477-ac95-7a4eaa489732\") " pod="openstack/ovsdbserver-sb-0" Nov 22 10:57:28 crc kubenswrapper[4772]: I1122 10:57:28.409342 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/62770fd6-1000-4477-ac95-7a4eaa489732-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"62770fd6-1000-4477-ac95-7a4eaa489732\") " pod="openstack/ovsdbserver-sb-0" Nov 22 10:57:28 crc kubenswrapper[4772]: I1122 10:57:28.410129 4772 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"62770fd6-1000-4477-ac95-7a4eaa489732\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/ovsdbserver-sb-0" Nov 22 10:57:28 crc kubenswrapper[4772]: I1122 10:57:28.410304 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/62770fd6-1000-4477-ac95-7a4eaa489732-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"62770fd6-1000-4477-ac95-7a4eaa489732\") " pod="openstack/ovsdbserver-sb-0" Nov 22 10:57:28 crc kubenswrapper[4772]: I1122 10:57:28.410665 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62770fd6-1000-4477-ac95-7a4eaa489732-config\") pod \"ovsdbserver-sb-0\" (UID: \"62770fd6-1000-4477-ac95-7a4eaa489732\") " pod="openstack/ovsdbserver-sb-0" Nov 22 10:57:28 crc kubenswrapper[4772]: I1122 10:57:28.415548 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/62770fd6-1000-4477-ac95-7a4eaa489732-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"62770fd6-1000-4477-ac95-7a4eaa489732\") " pod="openstack/ovsdbserver-sb-0" Nov 22 10:57:28 crc kubenswrapper[4772]: I1122 10:57:28.416656 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/62770fd6-1000-4477-ac95-7a4eaa489732-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" 
(UID: \"62770fd6-1000-4477-ac95-7a4eaa489732\") " pod="openstack/ovsdbserver-sb-0" Nov 22 10:57:28 crc kubenswrapper[4772]: I1122 10:57:28.426854 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d52kl\" (UniqueName: \"kubernetes.io/projected/62770fd6-1000-4477-ac95-7a4eaa489732-kube-api-access-d52kl\") pod \"ovsdbserver-sb-0\" (UID: \"62770fd6-1000-4477-ac95-7a4eaa489732\") " pod="openstack/ovsdbserver-sb-0" Nov 22 10:57:28 crc kubenswrapper[4772]: I1122 10:57:28.443025 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62770fd6-1000-4477-ac95-7a4eaa489732-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"62770fd6-1000-4477-ac95-7a4eaa489732\") " pod="openstack/ovsdbserver-sb-0" Nov 22 10:57:28 crc kubenswrapper[4772]: I1122 10:57:28.449094 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"62770fd6-1000-4477-ac95-7a4eaa489732\") " pod="openstack/ovsdbserver-sb-0" Nov 22 10:57:28 crc kubenswrapper[4772]: I1122 10:57:28.477992 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 22 10:57:31 crc kubenswrapper[4772]: E1122 10:57:31.256339 4772 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 22 10:57:31 crc kubenswrapper[4772]: E1122 10:57:31.257778 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kg4f6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-g6tpn_openstack(3f499ba6-29a1-45c0-b3c6-40576fa9f90b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 10:57:31 crc kubenswrapper[4772]: E1122 10:57:31.259111 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-g6tpn" podUID="3f499ba6-29a1-45c0-b3c6-40576fa9f90b" Nov 22 10:57:31 crc kubenswrapper[4772]: I1122 10:57:31.533214 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 10:57:31 crc kubenswrapper[4772]: I1122 10:57:31.533293 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 10:57:32 crc kubenswrapper[4772]: I1122 10:57:32.522561 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-g6tpn" Nov 22 10:57:32 crc kubenswrapper[4772]: I1122 10:57:32.583607 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f499ba6-29a1-45c0-b3c6-40576fa9f90b-config\") pod \"3f499ba6-29a1-45c0-b3c6-40576fa9f90b\" (UID: \"3f499ba6-29a1-45c0-b3c6-40576fa9f90b\") " Nov 22 10:57:32 crc kubenswrapper[4772]: I1122 10:57:32.583910 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f499ba6-29a1-45c0-b3c6-40576fa9f90b-dns-svc\") pod \"3f499ba6-29a1-45c0-b3c6-40576fa9f90b\" (UID: \"3f499ba6-29a1-45c0-b3c6-40576fa9f90b\") " Nov 22 10:57:32 crc kubenswrapper[4772]: I1122 10:57:32.583958 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kg4f6\" (UniqueName: \"kubernetes.io/projected/3f499ba6-29a1-45c0-b3c6-40576fa9f90b-kube-api-access-kg4f6\") pod \"3f499ba6-29a1-45c0-b3c6-40576fa9f90b\" (UID: \"3f499ba6-29a1-45c0-b3c6-40576fa9f90b\") " Nov 22 10:57:32 crc kubenswrapper[4772]: I1122 10:57:32.584232 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f499ba6-29a1-45c0-b3c6-40576fa9f90b-config" (OuterVolumeSpecName: "config") pod "3f499ba6-29a1-45c0-b3c6-40576fa9f90b" (UID: "3f499ba6-29a1-45c0-b3c6-40576fa9f90b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:57:32 crc kubenswrapper[4772]: I1122 10:57:32.584261 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f499ba6-29a1-45c0-b3c6-40576fa9f90b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3f499ba6-29a1-45c0-b3c6-40576fa9f90b" (UID: "3f499ba6-29a1-45c0-b3c6-40576fa9f90b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:57:32 crc kubenswrapper[4772]: I1122 10:57:32.584438 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f499ba6-29a1-45c0-b3c6-40576fa9f90b-config\") on node \"crc\" DevicePath \"\"" Nov 22 10:57:32 crc kubenswrapper[4772]: I1122 10:57:32.584458 4772 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f499ba6-29a1-45c0-b3c6-40576fa9f90b-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 10:57:32 crc kubenswrapper[4772]: I1122 10:57:32.593722 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f499ba6-29a1-45c0-b3c6-40576fa9f90b-kube-api-access-kg4f6" (OuterVolumeSpecName: "kube-api-access-kg4f6") pod "3f499ba6-29a1-45c0-b3c6-40576fa9f90b" (UID: "3f499ba6-29a1-45c0-b3c6-40576fa9f90b"). InnerVolumeSpecName "kube-api-access-kg4f6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:57:32 crc kubenswrapper[4772]: I1122 10:57:32.686289 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kg4f6\" (UniqueName: \"kubernetes.io/projected/3f499ba6-29a1-45c0-b3c6-40576fa9f90b-kube-api-access-kg4f6\") on node \"crc\" DevicePath \"\"" Nov 22 10:57:32 crc kubenswrapper[4772]: I1122 10:57:32.820455 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-g6tpn" event={"ID":"3f499ba6-29a1-45c0-b3c6-40576fa9f90b","Type":"ContainerDied","Data":"3360d3bf6df816d033e3c810c6c57411f0679bac5e77ad41b08f4a7613053f7b"} Nov 22 10:57:32 crc kubenswrapper[4772]: I1122 10:57:32.820544 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-g6tpn" Nov 22 10:57:32 crc kubenswrapper[4772]: I1122 10:57:32.894597 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-g6tpn"] Nov 22 10:57:32 crc kubenswrapper[4772]: E1122 10:57:32.901616 4772 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 22 10:57:32 crc kubenswrapper[4772]: E1122 10:57:32.901841 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fc4jp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-9rnjj_openstack(502784b4-25b0-4473-9ec8-2ed73a0b34f2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 10:57:32 
crc kubenswrapper[4772]: E1122 10:57:32.905683 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-9rnjj" podUID="502784b4-25b0-4473-9ec8-2ed73a0b34f2" Nov 22 10:57:32 crc kubenswrapper[4772]: I1122 10:57:32.921725 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-g6tpn"] Nov 22 10:57:32 crc kubenswrapper[4772]: I1122 10:57:32.992165 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 22 10:57:32 crc kubenswrapper[4772]: I1122 10:57:32.998720 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 22 10:57:33 crc kubenswrapper[4772]: I1122 10:57:33.081769 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 22 10:57:33 crc kubenswrapper[4772]: I1122 10:57:33.361971 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-267ms"] Nov 22 10:57:33 crc kubenswrapper[4772]: I1122 10:57:33.366911 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 22 10:57:33 crc kubenswrapper[4772]: I1122 10:57:33.422096 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f499ba6-29a1-45c0-b3c6-40576fa9f90b" path="/var/lib/kubelet/pods/3f499ba6-29a1-45c0-b3c6-40576fa9f90b/volumes" Nov 22 10:57:33 crc kubenswrapper[4772]: I1122 10:57:33.443713 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 22 10:57:34 crc kubenswrapper[4772]: I1122 10:57:34.488735 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-qvtmm"] Nov 22 10:57:35 crc kubenswrapper[4772]: W1122 10:57:35.438373 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3fbd9ebd_2c62_4336_9946_792e4b3c83db.slice/crio-61f7b5d99d2452de6e4d8656605e6960b108cae6255016243acdf0a52cab0bb3 WatchSource:0}: Error finding container 61f7b5d99d2452de6e4d8656605e6960b108cae6255016243acdf0a52cab0bb3: Status 404 returned error can't find the container with id 61f7b5d99d2452de6e4d8656605e6960b108cae6255016243acdf0a52cab0bb3 Nov 22 10:57:35 crc kubenswrapper[4772]: W1122 10:57:35.442677 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62770fd6_1000_4477_ac95_7a4eaa489732.slice/crio-09491c40dac74dd8b3e4ad14b3220ce685e25da19709537bae0eb2fa8ee9cebb WatchSource:0}: Error finding container 09491c40dac74dd8b3e4ad14b3220ce685e25da19709537bae0eb2fa8ee9cebb: Status 404 returned error can't find the container with id 09491c40dac74dd8b3e4ad14b3220ce685e25da19709537bae0eb2fa8ee9cebb Nov 22 10:57:35 crc kubenswrapper[4772]: I1122 10:57:35.643920 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-9rnjj" Nov 22 10:57:35 crc kubenswrapper[4772]: I1122 10:57:35.738242 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fc4jp\" (UniqueName: \"kubernetes.io/projected/502784b4-25b0-4473-9ec8-2ed73a0b34f2-kube-api-access-fc4jp\") pod \"502784b4-25b0-4473-9ec8-2ed73a0b34f2\" (UID: \"502784b4-25b0-4473-9ec8-2ed73a0b34f2\") " Nov 22 10:57:35 crc kubenswrapper[4772]: I1122 10:57:35.740801 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/502784b4-25b0-4473-9ec8-2ed73a0b34f2-config\") pod \"502784b4-25b0-4473-9ec8-2ed73a0b34f2\" (UID: \"502784b4-25b0-4473-9ec8-2ed73a0b34f2\") " Nov 22 10:57:35 crc kubenswrapper[4772]: I1122 10:57:35.741463 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/502784b4-25b0-4473-9ec8-2ed73a0b34f2-config" (OuterVolumeSpecName: "config") pod "502784b4-25b0-4473-9ec8-2ed73a0b34f2" (UID: "502784b4-25b0-4473-9ec8-2ed73a0b34f2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:57:35 crc kubenswrapper[4772]: I1122 10:57:35.742121 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/502784b4-25b0-4473-9ec8-2ed73a0b34f2-kube-api-access-fc4jp" (OuterVolumeSpecName: "kube-api-access-fc4jp") pod "502784b4-25b0-4473-9ec8-2ed73a0b34f2" (UID: "502784b4-25b0-4473-9ec8-2ed73a0b34f2"). InnerVolumeSpecName "kube-api-access-fc4jp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:57:35 crc kubenswrapper[4772]: I1122 10:57:35.842820 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fc4jp\" (UniqueName: \"kubernetes.io/projected/502784b4-25b0-4473-9ec8-2ed73a0b34f2-kube-api-access-fc4jp\") on node \"crc\" DevicePath \"\"" Nov 22 10:57:35 crc kubenswrapper[4772]: I1122 10:57:35.842852 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/502784b4-25b0-4473-9ec8-2ed73a0b34f2-config\") on node \"crc\" DevicePath \"\"" Nov 22 10:57:35 crc kubenswrapper[4772]: I1122 10:57:35.851126 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-9rnjj" event={"ID":"502784b4-25b0-4473-9ec8-2ed73a0b34f2","Type":"ContainerDied","Data":"3c81211dac2378ad4d58fb066d3742e095b095b385ea1504510889596c462d9f"} Nov 22 10:57:35 crc kubenswrapper[4772]: I1122 10:57:35.851602 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-9rnjj" Nov 22 10:57:35 crc kubenswrapper[4772]: I1122 10:57:35.854752 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c4828519-a6ad-4851-b9c2-134a12f373ac","Type":"ContainerStarted","Data":"86d874a41642369dce4dd58887083ba2b388bcb17f891ead4338f7e5766be95a"} Nov 22 10:57:35 crc kubenswrapper[4772]: I1122 10:57:35.859138 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"3fbd9ebd-2c62-4336-9946-792e4b3c83db","Type":"ContainerStarted","Data":"61f7b5d99d2452de6e4d8656605e6960b108cae6255016243acdf0a52cab0bb3"} Nov 22 10:57:35 crc kubenswrapper[4772]: I1122 10:57:35.861024 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-qvtmm" event={"ID":"5eaf9da0-a00f-4251-ae11-31ccc3e237e1","Type":"ContainerStarted","Data":"2f1c4527ea91dc9820b66da4572441c5dffdde6775cce4e11bab19a3a1e11f20"} Nov 22 10:57:35 crc kubenswrapper[4772]: I1122 10:57:35.862375 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"4ce40448-07b1-492e-bb7c-48aaf2bb3ce9","Type":"ContainerStarted","Data":"e9a91d5e5afc84cce47e4a33ec4add6c8a0eaf6d89cea22ece11f3feb6d26602"} Nov 22 10:57:35 crc kubenswrapper[4772]: I1122 10:57:35.865114 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-267ms" event={"ID":"25144c09-6edb-4bd3-89b2-99db486e733b","Type":"ContainerStarted","Data":"8d98079661eb31b43e8762d65ae663b2a8e031a99edf213a3ada95232b4f871b"} Nov 22 10:57:35 crc kubenswrapper[4772]: I1122 10:57:35.866574 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f92e46ad-ac39-4ede-9cd6-6c2fcdf71efd","Type":"ContainerStarted","Data":"e7c7365e9d44ae7aa920b9cd504a77ed9634760a8b4498e92a5b939ff58df2c5"} Nov 22 10:57:35 crc kubenswrapper[4772]: I1122 10:57:35.867800 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"62770fd6-1000-4477-ac95-7a4eaa489732","Type":"ContainerStarted","Data":"09491c40dac74dd8b3e4ad14b3220ce685e25da19709537bae0eb2fa8ee9cebb"} Nov 22 10:57:35 crc kubenswrapper[4772]: I1122 10:57:35.931127 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-9rnjj"] Nov 22 10:57:35 crc kubenswrapper[4772]: I1122 10:57:35.937497 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-9rnjj"] Nov 22 10:57:36 crc kubenswrapper[4772]: I1122 10:57:36.877657 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"c9a852d5-2258-45b4-9076-95740059eecd","Type":"ContainerStarted","Data":"5bfc019a03efa0f0e143e3abbd577a79a47df39696c6f4570d51c4637ca56d81"} Nov 22 10:57:36 crc kubenswrapper[4772]: I1122 10:57:36.879892 4772 generic.go:334] "Generic (PLEG): container finished" podID="9cd13b77-de2f-4a35-8e87-88f2ed802fc5" containerID="f713e9f1704792ab1074d7d9ded0eb4b2734a0a959aa4497cbdb88686cc8dfd8" exitCode=0 Nov 22 10:57:36 crc kubenswrapper[4772]: I1122 10:57:36.880230 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-stbd7" event={"ID":"9cd13b77-de2f-4a35-8e87-88f2ed802fc5","Type":"ContainerDied","Data":"f713e9f1704792ab1074d7d9ded0eb4b2734a0a959aa4497cbdb88686cc8dfd8"} Nov 22 10:57:36 crc kubenswrapper[4772]: I1122 10:57:36.883539 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/openstack-cell1-galera-0" event={"ID":"c4828519-a6ad-4851-b9c2-134a12f373ac","Type":"ContainerStarted","Data":"016163b669ebde3aeefc4073aae297eb488d84964eb236594d2492e583139946"} Nov 22 10:57:36 crc kubenswrapper[4772]: I1122 10:57:36.885944 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"5ce19f6b-73e1-48b9-810a-f9d97a14fe7b","Type":"ContainerStarted","Data":"615036dea9b9e2690ff9781db1af8c7a6e8ede28c160b447b036e61977da8a12"} Nov 22 10:57:36 crc kubenswrapper[4772]: I1122 10:57:36.888307 4772 generic.go:334] "Generic (PLEG): container finished" podID="b480cbda-2219-4c23-a15c-abcc0a584417" containerID="84f782a78539a1554e149735df2b363a470a26484d0efce59c4094cfdc0e7e98" exitCode=0 Nov 22 10:57:36 crc kubenswrapper[4772]: I1122 10:57:36.888372 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-t5gk7" event={"ID":"b480cbda-2219-4c23-a15c-abcc0a584417","Type":"ContainerDied","Data":"84f782a78539a1554e149735df2b363a470a26484d0efce59c4094cfdc0e7e98"} Nov 22 10:57:37 crc kubenswrapper[4772]: I1122 10:57:37.427915 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="502784b4-25b0-4473-9ec8-2ed73a0b34f2" path="/var/lib/kubelet/pods/502784b4-25b0-4473-9ec8-2ed73a0b34f2/volumes" Nov 22 10:57:37 crc kubenswrapper[4772]: I1122 10:57:37.898910 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea","Type":"ContainerStarted","Data":"e7defe2138029a1b0c0f3a9b0ab82ba765b27b46bc0e907fdcfcb9894a4cef37"} Nov 22 10:57:40 crc kubenswrapper[4772]: I1122 10:57:40.920417 4772 generic.go:334] "Generic (PLEG): container finished" podID="c4828519-a6ad-4851-b9c2-134a12f373ac" containerID="016163b669ebde3aeefc4073aae297eb488d84964eb236594d2492e583139946" exitCode=0 Nov 22 10:57:40 crc kubenswrapper[4772]: I1122 10:57:40.920493 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c4828519-a6ad-4851-b9c2-134a12f373ac","Type":"ContainerDied","Data":"016163b669ebde3aeefc4073aae297eb488d84964eb236594d2492e583139946"} Nov 22 10:57:40 crc kubenswrapper[4772]: I1122 10:57:40.925355 4772 generic.go:334] "Generic (PLEG): container finished" podID="c9a852d5-2258-45b4-9076-95740059eecd" containerID="5bfc019a03efa0f0e143e3abbd577a79a47df39696c6f4570d51c4637ca56d81" exitCode=0 Nov 22 10:57:40 crc kubenswrapper[4772]: I1122 10:57:40.925402 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"c9a852d5-2258-45b4-9076-95740059eecd","Type":"ContainerDied","Data":"5bfc019a03efa0f0e143e3abbd577a79a47df39696c6f4570d51c4637ca56d81"} Nov 22 10:57:49 crc kubenswrapper[4772]: E1122 10:57:49.987452 4772 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Nov 22 10:57:49 crc kubenswrapper[4772]: E1122 10:57:49.988031 4772 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Nov 22 10:57:49 crc kubenswrapper[4772]: E1122 10:57:49.988190 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pnrjf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(f92e46ad-ac39-4ede-9cd6-6c2fcdf71efd): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 22 10:57:49 crc kubenswrapper[4772]: E1122 10:57:49.991413 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="f92e46ad-ac39-4ede-9cd6-6c2fcdf71efd" Nov 22 10:57:50 crc kubenswrapper[4772]: I1122 10:57:50.026423 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-qvtmm" event={"ID":"5eaf9da0-a00f-4251-ae11-31ccc3e237e1","Type":"ContainerStarted","Data":"8c396a5866de12ccf9e258373d3e13a6be2d4d04b4db9ac1007cdb810eeaa0d5"} Nov 22 10:57:50 crc kubenswrapper[4772]: I1122 10:57:50.028484 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-267ms" event={"ID":"25144c09-6edb-4bd3-89b2-99db486e733b","Type":"ContainerStarted","Data":"35b6749f76efa9aad573074ef76caffc7e6b722245c95ae6b0c92a029ed2f9ca"} Nov 22 10:57:50 crc kubenswrapper[4772]: I1122 10:57:50.030564 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-stbd7" event={"ID":"9cd13b77-de2f-4a35-8e87-88f2ed802fc5","Type":"ContainerStarted","Data":"05939232b78f9a0da99da6b01ef1cc5c610734a7c3e4b148c04b691e00fee0f9"} Nov 22 10:57:50 crc kubenswrapper[4772]: 
I1122 10:57:50.034164 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c4828519-a6ad-4851-b9c2-134a12f373ac","Type":"ContainerStarted","Data":"98cde3628a695695947f910d4b7bcf78b5a6744c0635767c30d7327e627bd98b"} Nov 22 10:57:50 crc kubenswrapper[4772]: I1122 10:57:50.037520 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-t5gk7" event={"ID":"b480cbda-2219-4c23-a15c-abcc0a584417","Type":"ContainerStarted","Data":"103fb3454e6a998373a744c597d1f55f4eadad672abca3471c857a9652174978"} Nov 22 10:57:50 crc kubenswrapper[4772]: I1122 10:57:50.039872 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"c9a852d5-2258-45b4-9076-95740059eecd","Type":"ContainerStarted","Data":"fb76271eaf5dc62949aa9d4033c28c01d565faf421a33ec480046d5a9c4ab96d"} Nov 22 10:57:50 crc kubenswrapper[4772]: I1122 10:57:50.041460 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"3fbd9ebd-2c62-4336-9946-792e4b3c83db","Type":"ContainerStarted","Data":"98e7f008c22226185b9fc4da3d836b571692bf43deee9c747a6c68730a3601a5"} Nov 22 10:57:50 crc kubenswrapper[4772]: I1122 10:57:50.043626 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"4ce40448-07b1-492e-bb7c-48aaf2bb3ce9","Type":"ContainerStarted","Data":"c85d9df2828594224901e788e87c8476ae3cdb0ddfb531bc6c92887e8f178a94"} Nov 22 10:57:50 crc kubenswrapper[4772]: I1122 10:57:50.045523 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"62770fd6-1000-4477-ac95-7a4eaa489732","Type":"ContainerStarted","Data":"2d37779b5504f0db6d4ca7ce06d1ae18228fa452bc3e7a1ebaf2109ab95e3d37"} Nov 22 10:57:50 crc kubenswrapper[4772]: E1122 10:57:50.048882 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="f92e46ad-ac39-4ede-9cd6-6c2fcdf71efd" Nov 22 10:57:51 crc kubenswrapper[4772]: I1122 10:57:51.054292 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Nov 22 10:57:51 crc kubenswrapper[4772]: I1122 10:57:51.074985 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=28.323434627 podStartE2EDuration="33.074951464s" podCreationTimestamp="2025-11-22 10:57:18 +0000 UTC" firstStartedPulling="2025-11-22 10:57:35.441927808 +0000 UTC m=+1175.681372302" lastFinishedPulling="2025-11-22 10:57:40.193444645 +0000 UTC m=+1180.432889139" observedRunningTime="2025-11-22 10:57:51.0696631 +0000 UTC m=+1191.309107584" watchObservedRunningTime="2025-11-22 10:57:51.074951464 +0000 UTC m=+1191.314395958" Nov 22 10:57:52 crc kubenswrapper[4772]: I1122 10:57:52.063247 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-267ms" Nov 22 10:57:52 crc kubenswrapper[4772]: I1122 10:57:52.089608 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=34.571424882 podStartE2EDuration="35.089591732s" podCreationTimestamp="2025-11-22 10:57:17 +0000 UTC" firstStartedPulling="2025-11-22 10:57:35.428387833 +0000 UTC m=+1175.667832327" lastFinishedPulling="2025-11-22 10:57:35.946554673 +0000 
UTC m=+1176.185999177" observedRunningTime="2025-11-22 10:57:52.085834176 +0000 UTC m=+1192.325278670" watchObservedRunningTime="2025-11-22 10:57:52.089591732 +0000 UTC m=+1192.329036226" Nov 22 10:57:52 crc kubenswrapper[4772]: I1122 10:57:52.113437 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-267ms" podStartSLOduration=23.230489682 podStartE2EDuration="28.11342129s" podCreationTimestamp="2025-11-22 10:57:24 +0000 UTC" firstStartedPulling="2025-11-22 10:57:35.442515423 +0000 UTC m=+1175.681959917" lastFinishedPulling="2025-11-22 10:57:40.325447031 +0000 UTC m=+1180.564891525" observedRunningTime="2025-11-22 10:57:52.104508653 +0000 UTC m=+1192.343953147" watchObservedRunningTime="2025-11-22 10:57:52.11342129 +0000 UTC m=+1192.352865774" Nov 22 10:57:52 crc kubenswrapper[4772]: I1122 10:57:52.156293 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-t5gk7" podStartSLOduration=20.954865129 podStartE2EDuration="38.156267782s" podCreationTimestamp="2025-11-22 10:57:14 +0000 UTC" firstStartedPulling="2025-11-22 10:57:15.397161345 +0000 UTC m=+1155.636605839" lastFinishedPulling="2025-11-22 10:57:32.598563998 +0000 UTC m=+1172.838008492" observedRunningTime="2025-11-22 10:57:52.139164966 +0000 UTC m=+1192.378609480" watchObservedRunningTime="2025-11-22 10:57:52.156267782 +0000 UTC m=+1192.395712286" Nov 22 10:57:52 crc kubenswrapper[4772]: I1122 10:57:52.161155 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-stbd7" podStartSLOduration=18.298703107 podStartE2EDuration="38.161136526s" podCreationTimestamp="2025-11-22 10:57:14 +0000 UTC" firstStartedPulling="2025-11-22 10:57:15.8454932 +0000 UTC m=+1156.084937694" lastFinishedPulling="2025-11-22 10:57:35.707926629 +0000 UTC m=+1175.947371113" observedRunningTime="2025-11-22 10:57:52.15499421 +0000 UTC m=+1192.394438714" watchObservedRunningTime="2025-11-22 10:57:52.161136526 +0000 UTC m=+1192.400581020" Nov 22 10:57:52 crc kubenswrapper[4772]: I1122 10:57:52.192615 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=18.334476475 podStartE2EDuration="36.192597548s" podCreationTimestamp="2025-11-22 10:57:16 +0000 UTC" firstStartedPulling="2025-11-22 10:57:17.843761492 +0000 UTC m=+1158.083205986" lastFinishedPulling="2025-11-22 10:57:35.701882575 +0000 UTC m=+1175.941327059" observedRunningTime="2025-11-22 10:57:52.182626684 +0000 UTC m=+1192.422071198" watchObservedRunningTime="2025-11-22 10:57:52.192597548 +0000 UTC m=+1192.432042042" Nov 22 10:57:54 crc kubenswrapper[4772]: I1122 10:57:54.079113 4772 generic.go:334] "Generic (PLEG): container finished" podID="5eaf9da0-a00f-4251-ae11-31ccc3e237e1" containerID="8c396a5866de12ccf9e258373d3e13a6be2d4d04b4db9ac1007cdb810eeaa0d5" exitCode=0 Nov 22 10:57:54 crc kubenswrapper[4772]: I1122 10:57:54.079155 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-qvtmm" event={"ID":"5eaf9da0-a00f-4251-ae11-31ccc3e237e1","Type":"ContainerDied","Data":"8c396a5866de12ccf9e258373d3e13a6be2d4d04b4db9ac1007cdb810eeaa0d5"} Nov 22 10:57:54 crc kubenswrapper[4772]: I1122 10:57:54.187745 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Nov 22 10:57:54 crc kubenswrapper[4772]: I1122 10:57:54.827360 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-666b6646f7-t5gk7" Nov 22 10:57:54 crc kubenswrapper[4772]: I1122 10:57:54.829165 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-666b6646f7-t5gk7" Nov 22 10:57:55 crc kubenswrapper[4772]: I1122 10:57:55.331498 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-stbd7" Nov 22 10:57:55 crc kubenswrapper[4772]: I1122 10:57:55.332457 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-stbd7" Nov 22 10:57:55 crc kubenswrapper[4772]: I1122 10:57:55.390478 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-t5gk7"] Nov 22 10:57:56 crc kubenswrapper[4772]: I1122 10:57:56.094878 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-qvtmm" event={"ID":"5eaf9da0-a00f-4251-ae11-31ccc3e237e1","Type":"ContainerStarted","Data":"b35066231a06831ac6cbdd40d94a47f17dbd0bd89c978d8091d18097c6bdc840"} Nov 22 10:57:56 crc kubenswrapper[4772]: I1122 10:57:56.095269 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-t5gk7" podUID="b480cbda-2219-4c23-a15c-abcc0a584417" containerName="dnsmasq-dns" containerID="cri-o://103fb3454e6a998373a744c597d1f55f4eadad672abca3471c857a9652174978" gracePeriod=10 Nov 22 10:57:57 crc kubenswrapper[4772]: I1122 10:57:57.103631 4772 generic.go:334] "Generic (PLEG): container finished" podID="b480cbda-2219-4c23-a15c-abcc0a584417" containerID="103fb3454e6a998373a744c597d1f55f4eadad672abca3471c857a9652174978" exitCode=0 Nov 22 10:57:57 crc kubenswrapper[4772]: I1122 10:57:57.103833 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-t5gk7" event={"ID":"b480cbda-2219-4c23-a15c-abcc0a584417","Type":"ContainerDied","Data":"103fb3454e6a998373a744c597d1f55f4eadad672abca3471c857a9652174978"} Nov 22 10:57:57 crc kubenswrapper[4772]: I1122 10:57:57.106832 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-qvtmm" event={"ID":"5eaf9da0-a00f-4251-ae11-31ccc3e237e1","Type":"ContainerStarted","Data":"b453a8cf27cf323a7ca0a34df6781dcd755a821c7866c6dbdecdad4ea153f3ea"} Nov 22 10:57:57 crc kubenswrapper[4772]: I1122 10:57:57.107091 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-qvtmm" Nov 22 10:57:57 crc kubenswrapper[4772]: I1122 10:57:57.107125 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-qvtmm" Nov 22 10:57:57 crc kubenswrapper[4772]: I1122 10:57:57.132155 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-qvtmm" podStartSLOduration=28.338021575 podStartE2EDuration="33.132131619s" podCreationTimestamp="2025-11-22 10:57:24 +0000 UTC" firstStartedPulling="2025-11-22 10:57:35.464227746 +0000 UTC m=+1175.703672250" lastFinishedPulling="2025-11-22 10:57:40.25833778 +0000 UTC m=+1180.497782294" observedRunningTime="2025-11-22 10:57:57.12627762 +0000 UTC m=+1197.365722124" watchObservedRunningTime="2025-11-22 10:57:57.132131619 +0000 UTC m=+1197.371576113" Nov 22 10:57:57 crc kubenswrapper[4772]: I1122 10:57:57.349613 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Nov 22 10:57:57 crc kubenswrapper[4772]: I1122 10:57:57.349672 4772 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/openstack-galera-0" Nov 22 10:57:57 crc kubenswrapper[4772]: I1122 10:57:57.524953 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-t5gk7" Nov 22 10:57:57 crc kubenswrapper[4772]: I1122 10:57:57.601363 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b480cbda-2219-4c23-a15c-abcc0a584417-dns-svc\") pod \"b480cbda-2219-4c23-a15c-abcc0a584417\" (UID: \"b480cbda-2219-4c23-a15c-abcc0a584417\") " Nov 22 10:57:57 crc kubenswrapper[4772]: I1122 10:57:57.601594 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b480cbda-2219-4c23-a15c-abcc0a584417-config\") pod \"b480cbda-2219-4c23-a15c-abcc0a584417\" (UID: \"b480cbda-2219-4c23-a15c-abcc0a584417\") " Nov 22 10:57:57 crc kubenswrapper[4772]: I1122 10:57:57.601687 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w69n4\" (UniqueName: \"kubernetes.io/projected/b480cbda-2219-4c23-a15c-abcc0a584417-kube-api-access-w69n4\") pod \"b480cbda-2219-4c23-a15c-abcc0a584417\" (UID: \"b480cbda-2219-4c23-a15c-abcc0a584417\") " Nov 22 10:57:57 crc kubenswrapper[4772]: I1122 10:57:57.607157 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b480cbda-2219-4c23-a15c-abcc0a584417-kube-api-access-w69n4" (OuterVolumeSpecName: "kube-api-access-w69n4") pod "b480cbda-2219-4c23-a15c-abcc0a584417" (UID: "b480cbda-2219-4c23-a15c-abcc0a584417"). InnerVolumeSpecName "kube-api-access-w69n4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:57:57 crc kubenswrapper[4772]: I1122 10:57:57.641911 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b480cbda-2219-4c23-a15c-abcc0a584417-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b480cbda-2219-4c23-a15c-abcc0a584417" (UID: "b480cbda-2219-4c23-a15c-abcc0a584417"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:57:57 crc kubenswrapper[4772]: I1122 10:57:57.645710 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b480cbda-2219-4c23-a15c-abcc0a584417-config" (OuterVolumeSpecName: "config") pod "b480cbda-2219-4c23-a15c-abcc0a584417" (UID: "b480cbda-2219-4c23-a15c-abcc0a584417"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:57:57 crc kubenswrapper[4772]: I1122 10:57:57.703169 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b480cbda-2219-4c23-a15c-abcc0a584417-config\") on node \"crc\" DevicePath \"\"" Nov 22 10:57:57 crc kubenswrapper[4772]: I1122 10:57:57.703201 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w69n4\" (UniqueName: \"kubernetes.io/projected/b480cbda-2219-4c23-a15c-abcc0a584417-kube-api-access-w69n4\") on node \"crc\" DevicePath \"\"" Nov 22 10:57:57 crc kubenswrapper[4772]: I1122 10:57:57.703215 4772 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b480cbda-2219-4c23-a15c-abcc0a584417-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 10:57:58 crc kubenswrapper[4772]: I1122 10:57:58.116485 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-t5gk7" event={"ID":"b480cbda-2219-4c23-a15c-abcc0a584417","Type":"ContainerDied","Data":"d6f256dd410fdb362e5061086b06689dce080883567bd7c5cee184efc4c50951"} Nov 22 10:57:58 crc kubenswrapper[4772]: I1122 10:57:58.116523 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-t5gk7" Nov 22 10:57:58 crc kubenswrapper[4772]: I1122 10:57:58.116553 4772 scope.go:117] "RemoveContainer" containerID="103fb3454e6a998373a744c597d1f55f4eadad672abca3471c857a9652174978" Nov 22 10:57:58 crc kubenswrapper[4772]: I1122 10:57:58.143127 4772 scope.go:117] "RemoveContainer" containerID="84f782a78539a1554e149735df2b363a470a26484d0efce59c4094cfdc0e7e98" Nov 22 10:57:58 crc kubenswrapper[4772]: I1122 10:57:58.149526 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-t5gk7"] Nov 22 10:57:58 crc kubenswrapper[4772]: I1122 10:57:58.155678 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-t5gk7"] Nov 22 10:57:58 crc kubenswrapper[4772]: I1122 10:57:58.921558 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Nov 22 10:57:58 crc kubenswrapper[4772]: I1122 10:57:58.921911 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Nov 22 10:57:59 crc kubenswrapper[4772]: I1122 10:57:59.423080 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b480cbda-2219-4c23-a15c-abcc0a584417" path="/var/lib/kubelet/pods/b480cbda-2219-4c23-a15c-abcc0a584417/volumes" Nov 22 10:58:00 crc kubenswrapper[4772]: I1122 10:58:00.847095 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-lznk4"] Nov 22 10:58:00 crc kubenswrapper[4772]: E1122 10:58:00.847703 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b480cbda-2219-4c23-a15c-abcc0a584417" containerName="init" Nov 22 10:58:00 crc kubenswrapper[4772]: I1122 10:58:00.847714 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="b480cbda-2219-4c23-a15c-abcc0a584417" containerName="init" Nov 22 10:58:00 crc kubenswrapper[4772]: E1122 10:58:00.847735 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b480cbda-2219-4c23-a15c-abcc0a584417" containerName="dnsmasq-dns" Nov 22 10:58:00 crc kubenswrapper[4772]: I1122 10:58:00.847742 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="b480cbda-2219-4c23-a15c-abcc0a584417" 
containerName="dnsmasq-dns" Nov 22 10:58:00 crc kubenswrapper[4772]: I1122 10:58:00.847886 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="b480cbda-2219-4c23-a15c-abcc0a584417" containerName="dnsmasq-dns" Nov 22 10:58:00 crc kubenswrapper[4772]: I1122 10:58:00.848716 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-lznk4" Nov 22 10:58:00 crc kubenswrapper[4772]: I1122 10:58:00.872291 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-lznk4"] Nov 22 10:58:00 crc kubenswrapper[4772]: I1122 10:58:00.949345 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqg75\" (UniqueName: \"kubernetes.io/projected/e02ee18c-0680-4728-a8a2-7c3734a3fc13-kube-api-access-hqg75\") pod \"dnsmasq-dns-7cb5889db5-lznk4\" (UID: \"e02ee18c-0680-4728-a8a2-7c3734a3fc13\") " pod="openstack/dnsmasq-dns-7cb5889db5-lznk4" Nov 22 10:58:00 crc kubenswrapper[4772]: I1122 10:58:00.949516 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e02ee18c-0680-4728-a8a2-7c3734a3fc13-config\") pod \"dnsmasq-dns-7cb5889db5-lznk4\" (UID: \"e02ee18c-0680-4728-a8a2-7c3734a3fc13\") " pod="openstack/dnsmasq-dns-7cb5889db5-lznk4" Nov 22 10:58:00 crc kubenswrapper[4772]: I1122 10:58:00.949558 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e02ee18c-0680-4728-a8a2-7c3734a3fc13-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-lznk4\" (UID: \"e02ee18c-0680-4728-a8a2-7c3734a3fc13\") " pod="openstack/dnsmasq-dns-7cb5889db5-lznk4" Nov 22 10:58:01 crc kubenswrapper[4772]: I1122 10:58:01.051219 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e02ee18c-0680-4728-a8a2-7c3734a3fc13-config\") pod \"dnsmasq-dns-7cb5889db5-lznk4\" (UID: \"e02ee18c-0680-4728-a8a2-7c3734a3fc13\") " pod="openstack/dnsmasq-dns-7cb5889db5-lznk4" Nov 22 10:58:01 crc kubenswrapper[4772]: I1122 10:58:01.051274 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e02ee18c-0680-4728-a8a2-7c3734a3fc13-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-lznk4\" (UID: \"e02ee18c-0680-4728-a8a2-7c3734a3fc13\") " pod="openstack/dnsmasq-dns-7cb5889db5-lznk4" Nov 22 10:58:01 crc kubenswrapper[4772]: I1122 10:58:01.051317 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqg75\" (UniqueName: \"kubernetes.io/projected/e02ee18c-0680-4728-a8a2-7c3734a3fc13-kube-api-access-hqg75\") pod \"dnsmasq-dns-7cb5889db5-lznk4\" (UID: \"e02ee18c-0680-4728-a8a2-7c3734a3fc13\") " pod="openstack/dnsmasq-dns-7cb5889db5-lznk4" Nov 22 10:58:01 crc kubenswrapper[4772]: I1122 10:58:01.052313 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e02ee18c-0680-4728-a8a2-7c3734a3fc13-config\") pod \"dnsmasq-dns-7cb5889db5-lznk4\" (UID: \"e02ee18c-0680-4728-a8a2-7c3734a3fc13\") " pod="openstack/dnsmasq-dns-7cb5889db5-lznk4" Nov 22 10:58:01 crc kubenswrapper[4772]: I1122 10:58:01.052334 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e02ee18c-0680-4728-a8a2-7c3734a3fc13-dns-svc\") pod 
\"dnsmasq-dns-7cb5889db5-lznk4\" (UID: \"e02ee18c-0680-4728-a8a2-7c3734a3fc13\") " pod="openstack/dnsmasq-dns-7cb5889db5-lznk4" Nov 22 10:58:01 crc kubenswrapper[4772]: I1122 10:58:01.086703 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqg75\" (UniqueName: \"kubernetes.io/projected/e02ee18c-0680-4728-a8a2-7c3734a3fc13-kube-api-access-hqg75\") pod \"dnsmasq-dns-7cb5889db5-lznk4\" (UID: \"e02ee18c-0680-4728-a8a2-7c3734a3fc13\") " pod="openstack/dnsmasq-dns-7cb5889db5-lznk4" Nov 22 10:58:01 crc kubenswrapper[4772]: I1122 10:58:01.173756 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-lznk4" Nov 22 10:58:01 crc kubenswrapper[4772]: I1122 10:58:01.533459 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 10:58:01 crc kubenswrapper[4772]: I1122 10:58:01.533757 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 10:58:01 crc kubenswrapper[4772]: I1122 10:58:01.533801 4772 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" Nov 22 10:58:01 crc kubenswrapper[4772]: I1122 10:58:01.534509 4772 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a69ae46ae795f0c272467d20a88d4d3efbdd2e5ec86370c20bc8c57f8ee1677e"} pod="openshift-machine-config-operator/machine-config-daemon-wwshd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 10:58:01 crc kubenswrapper[4772]: I1122 10:58:01.534590 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" containerID="cri-o://a69ae46ae795f0c272467d20a88d4d3efbdd2e5ec86370c20bc8c57f8ee1677e" gracePeriod=600 Nov 22 10:58:01 crc kubenswrapper[4772]: I1122 10:58:01.951981 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Nov 22 10:58:01 crc kubenswrapper[4772]: I1122 10:58:01.960611 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Nov 22 10:58:01 crc kubenswrapper[4772]: I1122 10:58:01.967778 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Nov 22 10:58:01 crc kubenswrapper[4772]: I1122 10:58:01.967808 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Nov 22 10:58:01 crc kubenswrapper[4772]: I1122 10:58:01.969263 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Nov 22 10:58:01 crc kubenswrapper[4772]: I1122 10:58:01.969970 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Nov 22 10:58:01 crc kubenswrapper[4772]: I1122 10:58:01.970693 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-tpw2s" Nov 22 10:58:02 crc kubenswrapper[4772]: I1122 10:58:02.066516 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"354e52a7-830a-43a1-ad15-a13fe2a07222\") " pod="openstack/swift-storage-0" Nov 22 10:58:02 crc kubenswrapper[4772]: I1122 10:58:02.066580 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/354e52a7-830a-43a1-ad15-a13fe2a07222-cache\") pod \"swift-storage-0\" (UID: \"354e52a7-830a-43a1-ad15-a13fe2a07222\") " pod="openstack/swift-storage-0" Nov 22 10:58:02 crc kubenswrapper[4772]: I1122 10:58:02.066609 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/354e52a7-830a-43a1-ad15-a13fe2a07222-etc-swift\") pod \"swift-storage-0\" (UID: \"354e52a7-830a-43a1-ad15-a13fe2a07222\") " pod="openstack/swift-storage-0" Nov 22 10:58:02 crc kubenswrapper[4772]: I1122 10:58:02.066666 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kl22s\" (UniqueName: \"kubernetes.io/projected/354e52a7-830a-43a1-ad15-a13fe2a07222-kube-api-access-kl22s\") pod \"swift-storage-0\" (UID: \"354e52a7-830a-43a1-ad15-a13fe2a07222\") " pod="openstack/swift-storage-0" Nov 22 10:58:02 crc kubenswrapper[4772]: I1122 10:58:02.066865 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/354e52a7-830a-43a1-ad15-a13fe2a07222-lock\") pod \"swift-storage-0\" (UID: \"354e52a7-830a-43a1-ad15-a13fe2a07222\") " pod="openstack/swift-storage-0" Nov 22 10:58:02 crc kubenswrapper[4772]: I1122 10:58:02.167928 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/354e52a7-830a-43a1-ad15-a13fe2a07222-lock\") pod \"swift-storage-0\" (UID: \"354e52a7-830a-43a1-ad15-a13fe2a07222\") " pod="openstack/swift-storage-0" Nov 22 10:58:02 crc kubenswrapper[4772]: I1122 10:58:02.168019 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"354e52a7-830a-43a1-ad15-a13fe2a07222\") " pod="openstack/swift-storage-0" Nov 22 10:58:02 crc kubenswrapper[4772]: I1122 10:58:02.168094 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cache\" (UniqueName: \"kubernetes.io/empty-dir/354e52a7-830a-43a1-ad15-a13fe2a07222-cache\") pod \"swift-storage-0\" (UID: \"354e52a7-830a-43a1-ad15-a13fe2a07222\") " pod="openstack/swift-storage-0" Nov 22 10:58:02 crc kubenswrapper[4772]: I1122 10:58:02.168116 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/354e52a7-830a-43a1-ad15-a13fe2a07222-etc-swift\") pod \"swift-storage-0\" (UID: \"354e52a7-830a-43a1-ad15-a13fe2a07222\") " pod="openstack/swift-storage-0" Nov 22 10:58:02 crc kubenswrapper[4772]: I1122 10:58:02.168160 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kl22s\" (UniqueName: \"kubernetes.io/projected/354e52a7-830a-43a1-ad15-a13fe2a07222-kube-api-access-kl22s\") pod \"swift-storage-0\" (UID: \"354e52a7-830a-43a1-ad15-a13fe2a07222\") " pod="openstack/swift-storage-0" Nov 22 10:58:02 crc kubenswrapper[4772]: I1122 10:58:02.168788 4772 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"354e52a7-830a-43a1-ad15-a13fe2a07222\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/swift-storage-0" Nov 22 10:58:02 crc kubenswrapper[4772]: I1122 10:58:02.169109 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/354e52a7-830a-43a1-ad15-a13fe2a07222-lock\") pod \"swift-storage-0\" (UID: \"354e52a7-830a-43a1-ad15-a13fe2a07222\") " pod="openstack/swift-storage-0" Nov 22 10:58:02 crc kubenswrapper[4772]: E1122 10:58:02.169205 4772 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 22 10:58:02 crc kubenswrapper[4772]: E1122 10:58:02.169238 4772 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 22 10:58:02 crc kubenswrapper[4772]: E1122 10:58:02.169288 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/354e52a7-830a-43a1-ad15-a13fe2a07222-etc-swift podName:354e52a7-830a-43a1-ad15-a13fe2a07222 nodeName:}" failed. No retries permitted until 2025-11-22 10:58:02.669271219 +0000 UTC m=+1202.908715713 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/354e52a7-830a-43a1-ad15-a13fe2a07222-etc-swift") pod "swift-storage-0" (UID: "354e52a7-830a-43a1-ad15-a13fe2a07222") : configmap "swift-ring-files" not found Nov 22 10:58:02 crc kubenswrapper[4772]: I1122 10:58:02.169591 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/354e52a7-830a-43a1-ad15-a13fe2a07222-cache\") pod \"swift-storage-0\" (UID: \"354e52a7-830a-43a1-ad15-a13fe2a07222\") " pod="openstack/swift-storage-0" Nov 22 10:58:02 crc kubenswrapper[4772]: I1122 10:58:02.188171 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"354e52a7-830a-43a1-ad15-a13fe2a07222\") " pod="openstack/swift-storage-0" Nov 22 10:58:02 crc kubenswrapper[4772]: I1122 10:58:02.188353 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kl22s\" (UniqueName: \"kubernetes.io/projected/354e52a7-830a-43a1-ad15-a13fe2a07222-kube-api-access-kl22s\") pod \"swift-storage-0\" (UID: \"354e52a7-830a-43a1-ad15-a13fe2a07222\") " pod="openstack/swift-storage-0" Nov 22 10:58:02 crc kubenswrapper[4772]: I1122 10:58:02.460473 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-4gl45"] Nov 22 10:58:02 crc kubenswrapper[4772]: I1122 10:58:02.461712 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-4gl45" Nov 22 10:58:02 crc kubenswrapper[4772]: I1122 10:58:02.463505 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 22 10:58:02 crc kubenswrapper[4772]: I1122 10:58:02.464058 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Nov 22 10:58:02 crc kubenswrapper[4772]: I1122 10:58:02.464479 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Nov 22 10:58:02 crc kubenswrapper[4772]: I1122 10:58:02.472179 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3-etc-swift\") pod \"swift-ring-rebalance-4gl45\" (UID: \"ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3\") " pod="openstack/swift-ring-rebalance-4gl45" Nov 22 10:58:02 crc kubenswrapper[4772]: I1122 10:58:02.472221 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3-scripts\") pod \"swift-ring-rebalance-4gl45\" (UID: \"ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3\") " pod="openstack/swift-ring-rebalance-4gl45" Nov 22 10:58:02 crc kubenswrapper[4772]: I1122 10:58:02.472261 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbdbv\" (UniqueName: \"kubernetes.io/projected/ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3-kube-api-access-zbdbv\") pod \"swift-ring-rebalance-4gl45\" (UID: \"ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3\") " pod="openstack/swift-ring-rebalance-4gl45" Nov 22 10:58:02 crc kubenswrapper[4772]: I1122 10:58:02.472324 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: 
\"kubernetes.io/configmap/ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3-ring-data-devices\") pod \"swift-ring-rebalance-4gl45\" (UID: \"ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3\") " pod="openstack/swift-ring-rebalance-4gl45" Nov 22 10:58:02 crc kubenswrapper[4772]: I1122 10:58:02.472358 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3-dispersionconf\") pod \"swift-ring-rebalance-4gl45\" (UID: \"ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3\") " pod="openstack/swift-ring-rebalance-4gl45" Nov 22 10:58:02 crc kubenswrapper[4772]: I1122 10:58:02.472373 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3-swiftconf\") pod \"swift-ring-rebalance-4gl45\" (UID: \"ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3\") " pod="openstack/swift-ring-rebalance-4gl45" Nov 22 10:58:02 crc kubenswrapper[4772]: I1122 10:58:02.472396 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3-combined-ca-bundle\") pod \"swift-ring-rebalance-4gl45\" (UID: \"ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3\") " pod="openstack/swift-ring-rebalance-4gl45" Nov 22 10:58:02 crc kubenswrapper[4772]: I1122 10:58:02.478569 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-4gl45"] Nov 22 10:58:02 crc kubenswrapper[4772]: I1122 10:58:02.573644 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3-etc-swift\") pod \"swift-ring-rebalance-4gl45\" (UID: \"ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3\") " pod="openstack/swift-ring-rebalance-4gl45" Nov 22 10:58:02 crc kubenswrapper[4772]: I1122 10:58:02.573726 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3-scripts\") pod \"swift-ring-rebalance-4gl45\" (UID: \"ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3\") " pod="openstack/swift-ring-rebalance-4gl45" Nov 22 10:58:02 crc kubenswrapper[4772]: I1122 10:58:02.573770 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbdbv\" (UniqueName: \"kubernetes.io/projected/ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3-kube-api-access-zbdbv\") pod \"swift-ring-rebalance-4gl45\" (UID: \"ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3\") " pod="openstack/swift-ring-rebalance-4gl45" Nov 22 10:58:02 crc kubenswrapper[4772]: I1122 10:58:02.573834 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3-ring-data-devices\") pod \"swift-ring-rebalance-4gl45\" (UID: \"ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3\") " pod="openstack/swift-ring-rebalance-4gl45" Nov 22 10:58:02 crc kubenswrapper[4772]: I1122 10:58:02.573872 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3-dispersionconf\") pod \"swift-ring-rebalance-4gl45\" (UID: \"ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3\") " pod="openstack/swift-ring-rebalance-4gl45" Nov 22 10:58:02 crc kubenswrapper[4772]: I1122 
10:58:02.573890 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3-swiftconf\") pod \"swift-ring-rebalance-4gl45\" (UID: \"ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3\") " pod="openstack/swift-ring-rebalance-4gl45" Nov 22 10:58:02 crc kubenswrapper[4772]: I1122 10:58:02.573915 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3-combined-ca-bundle\") pod \"swift-ring-rebalance-4gl45\" (UID: \"ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3\") " pod="openstack/swift-ring-rebalance-4gl45" Nov 22 10:58:02 crc kubenswrapper[4772]: I1122 10:58:02.574959 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3-etc-swift\") pod \"swift-ring-rebalance-4gl45\" (UID: \"ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3\") " pod="openstack/swift-ring-rebalance-4gl45" Nov 22 10:58:02 crc kubenswrapper[4772]: I1122 10:58:02.577178 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3-ring-data-devices\") pod \"swift-ring-rebalance-4gl45\" (UID: \"ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3\") " pod="openstack/swift-ring-rebalance-4gl45" Nov 22 10:58:02 crc kubenswrapper[4772]: I1122 10:58:02.580186 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3-swiftconf\") pod \"swift-ring-rebalance-4gl45\" (UID: \"ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3\") " pod="openstack/swift-ring-rebalance-4gl45" Nov 22 10:58:02 crc kubenswrapper[4772]: I1122 10:58:02.581386 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3-combined-ca-bundle\") pod \"swift-ring-rebalance-4gl45\" (UID: \"ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3\") " pod="openstack/swift-ring-rebalance-4gl45" Nov 22 10:58:02 crc kubenswrapper[4772]: I1122 10:58:02.593434 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbdbv\" (UniqueName: \"kubernetes.io/projected/ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3-kube-api-access-zbdbv\") pod \"swift-ring-rebalance-4gl45\" (UID: \"ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3\") " pod="openstack/swift-ring-rebalance-4gl45" Nov 22 10:58:02 crc kubenswrapper[4772]: I1122 10:58:02.601359 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3-dispersionconf\") pod \"swift-ring-rebalance-4gl45\" (UID: \"ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3\") " pod="openstack/swift-ring-rebalance-4gl45" Nov 22 10:58:02 crc kubenswrapper[4772]: I1122 10:58:02.612369 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3-scripts\") pod \"swift-ring-rebalance-4gl45\" (UID: \"ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3\") " pod="openstack/swift-ring-rebalance-4gl45" Nov 22 10:58:02 crc kubenswrapper[4772]: I1122 10:58:02.707687 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/354e52a7-830a-43a1-ad15-a13fe2a07222-etc-swift\") pod \"swift-storage-0\" (UID: \"354e52a7-830a-43a1-ad15-a13fe2a07222\") " pod="openstack/swift-storage-0" Nov 22 10:58:02 crc kubenswrapper[4772]: E1122 10:58:02.707850 4772 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 22 10:58:02 crc kubenswrapper[4772]: E1122 10:58:02.707879 4772 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 22 10:58:02 crc kubenswrapper[4772]: E1122 10:58:02.707934 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/354e52a7-830a-43a1-ad15-a13fe2a07222-etc-swift podName:354e52a7-830a-43a1-ad15-a13fe2a07222 nodeName:}" failed. No retries permitted until 2025-11-22 10:58:03.707917722 +0000 UTC m=+1203.947362216 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/354e52a7-830a-43a1-ad15-a13fe2a07222-etc-swift") pod "swift-storage-0" (UID: "354e52a7-830a-43a1-ad15-a13fe2a07222") : configmap "swift-ring-files" not found Nov 22 10:58:02 crc kubenswrapper[4772]: I1122 10:58:02.779381 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-4gl45" Nov 22 10:58:03 crc kubenswrapper[4772]: I1122 10:58:03.165647 4772 generic.go:334] "Generic (PLEG): container finished" podID="2386c238-461f-4956-940f-ac3c26eb052e" containerID="a69ae46ae795f0c272467d20a88d4d3efbdd2e5ec86370c20bc8c57f8ee1677e" exitCode=0 Nov 22 10:58:03 crc kubenswrapper[4772]: I1122 10:58:03.165715 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" event={"ID":"2386c238-461f-4956-940f-ac3c26eb052e","Type":"ContainerDied","Data":"a69ae46ae795f0c272467d20a88d4d3efbdd2e5ec86370c20bc8c57f8ee1677e"} Nov 22 10:58:03 crc kubenswrapper[4772]: I1122 10:58:03.165788 4772 scope.go:117] "RemoveContainer" containerID="6ed5ce78086a642e7415af1fd3d7071bae3e10a61431b8613ee77406e828d8f3" Nov 22 10:58:03 crc kubenswrapper[4772]: I1122 10:58:03.425573 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Nov 22 10:58:03 crc kubenswrapper[4772]: I1122 10:58:03.486248 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Nov 22 10:58:03 crc kubenswrapper[4772]: I1122 10:58:03.732715 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/354e52a7-830a-43a1-ad15-a13fe2a07222-etc-swift\") pod \"swift-storage-0\" (UID: \"354e52a7-830a-43a1-ad15-a13fe2a07222\") " pod="openstack/swift-storage-0" Nov 22 10:58:03 crc kubenswrapper[4772]: E1122 10:58:03.732882 4772 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 22 10:58:03 crc kubenswrapper[4772]: E1122 10:58:03.732999 4772 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 22 10:58:03 crc kubenswrapper[4772]: E1122 10:58:03.733067 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/354e52a7-830a-43a1-ad15-a13fe2a07222-etc-swift podName:354e52a7-830a-43a1-ad15-a13fe2a07222 nodeName:}" failed. 
No retries permitted until 2025-11-22 10:58:05.733037666 +0000 UTC m=+1205.972482160 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/354e52a7-830a-43a1-ad15-a13fe2a07222-etc-swift") pod "swift-storage-0" (UID: "354e52a7-830a-43a1-ad15-a13fe2a07222") : configmap "swift-ring-files" not found Nov 22 10:58:03 crc kubenswrapper[4772]: I1122 10:58:03.922935 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-lznk4"] Nov 22 10:58:03 crc kubenswrapper[4772]: W1122 10:58:03.932898 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode02ee18c_0680_4728_a8a2_7c3734a3fc13.slice/crio-26b5c08ca78868faea084591a5866aac2064b557e2cc2a69bcea48c890b7ea4c WatchSource:0}: Error finding container 26b5c08ca78868faea084591a5866aac2064b557e2cc2a69bcea48c890b7ea4c: Status 404 returned error can't find the container with id 26b5c08ca78868faea084591a5866aac2064b557e2cc2a69bcea48c890b7ea4c Nov 22 10:58:04 crc kubenswrapper[4772]: I1122 10:58:04.189393 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-lznk4" event={"ID":"e02ee18c-0680-4728-a8a2-7c3734a3fc13","Type":"ContainerStarted","Data":"26b5c08ca78868faea084591a5866aac2064b557e2cc2a69bcea48c890b7ea4c"} Nov 22 10:58:04 crc kubenswrapper[4772]: I1122 10:58:04.192907 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" event={"ID":"2386c238-461f-4956-940f-ac3c26eb052e","Type":"ContainerStarted","Data":"95c963c954cabecad461116172cc9ea88ff81fed386024819164050fa8d713ce"} Nov 22 10:58:04 crc kubenswrapper[4772]: I1122 10:58:04.214526 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-4gl45"] Nov 22 10:58:04 crc kubenswrapper[4772]: W1122 10:58:04.223663 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac5f43d3_c6db_43c9_bb90_1d22bbe94aa3.slice/crio-4d6b9ded73c873b9715f17d1d4c70f230b89ca04a799f214e6b83fd49ee5b6c6 WatchSource:0}: Error finding container 4d6b9ded73c873b9715f17d1d4c70f230b89ca04a799f214e6b83fd49ee5b6c6: Status 404 returned error can't find the container with id 4d6b9ded73c873b9715f17d1d4c70f230b89ca04a799f214e6b83fd49ee5b6c6 Nov 22 10:58:04 crc kubenswrapper[4772]: I1122 10:58:04.438518 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-7x5tj"] Nov 22 10:58:04 crc kubenswrapper[4772]: I1122 10:58:04.440025 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-7x5tj" Nov 22 10:58:04 crc kubenswrapper[4772]: I1122 10:58:04.451102 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-7x5tj"] Nov 22 10:58:04 crc kubenswrapper[4772]: I1122 10:58:04.550832 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9twrc\" (UniqueName: \"kubernetes.io/projected/bbef192e-2ee8-4ff4-aad2-4eb3f61e39e2-kube-api-access-9twrc\") pod \"glance-db-create-7x5tj\" (UID: \"bbef192e-2ee8-4ff4-aad2-4eb3f61e39e2\") " pod="openstack/glance-db-create-7x5tj" Nov 22 10:58:04 crc kubenswrapper[4772]: I1122 10:58:04.652215 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9twrc\" (UniqueName: \"kubernetes.io/projected/bbef192e-2ee8-4ff4-aad2-4eb3f61e39e2-kube-api-access-9twrc\") pod \"glance-db-create-7x5tj\" (UID: \"bbef192e-2ee8-4ff4-aad2-4eb3f61e39e2\") " pod="openstack/glance-db-create-7x5tj" Nov 22 10:58:04 crc kubenswrapper[4772]: I1122 10:58:04.672579 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9twrc\" (UniqueName: \"kubernetes.io/projected/bbef192e-2ee8-4ff4-aad2-4eb3f61e39e2-kube-api-access-9twrc\") pod \"glance-db-create-7x5tj\" (UID: \"bbef192e-2ee8-4ff4-aad2-4eb3f61e39e2\") " pod="openstack/glance-db-create-7x5tj" Nov 22 10:58:04 crc kubenswrapper[4772]: I1122 10:58:04.833818 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-7x5tj" Nov 22 10:58:05 crc kubenswrapper[4772]: I1122 10:58:05.202445 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f92e46ad-ac39-4ede-9cd6-6c2fcdf71efd","Type":"ContainerStarted","Data":"dabf2af73310ade5ca6d91856e667382ab1c35581ac6b0df456c91a5469c6edd"} Nov 22 10:58:05 crc kubenswrapper[4772]: I1122 10:58:05.202913 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 22 10:58:05 crc kubenswrapper[4772]: I1122 10:58:05.204569 4772 generic.go:334] "Generic (PLEG): container finished" podID="e02ee18c-0680-4728-a8a2-7c3734a3fc13" containerID="da4304fa2c3cd7e486ae5c649fc6d341534c29930b453dc307648e79258767be" exitCode=0 Nov 22 10:58:05 crc kubenswrapper[4772]: I1122 10:58:05.204633 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-lznk4" event={"ID":"e02ee18c-0680-4728-a8a2-7c3734a3fc13","Type":"ContainerDied","Data":"da4304fa2c3cd7e486ae5c649fc6d341534c29930b453dc307648e79258767be"} Nov 22 10:58:05 crc kubenswrapper[4772]: I1122 10:58:05.206535 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"62770fd6-1000-4477-ac95-7a4eaa489732","Type":"ContainerStarted","Data":"ccf4d7895ba000a2440b35a71263cfe3dbaecea5399c4efbf9b39db555947784"} Nov 22 10:58:05 crc kubenswrapper[4772]: I1122 10:58:05.208422 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-4gl45" event={"ID":"ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3","Type":"ContainerStarted","Data":"4d6b9ded73c873b9715f17d1d4c70f230b89ca04a799f214e6b83fd49ee5b6c6"} Nov 22 10:58:05 crc kubenswrapper[4772]: I1122 10:58:05.211819 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"4ce40448-07b1-492e-bb7c-48aaf2bb3ce9","Type":"ContainerStarted","Data":"a439d4617a333c8788694a3ee6b6ab83b29b0e870445341c5c8b15c85d68a363"} Nov 22 
10:58:05 crc kubenswrapper[4772]: I1122 10:58:05.219175 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=16.709555226 podStartE2EDuration="45.219157615s" podCreationTimestamp="2025-11-22 10:57:20 +0000 UTC" firstStartedPulling="2025-11-22 10:57:35.429640635 +0000 UTC m=+1175.669085129" lastFinishedPulling="2025-11-22 10:58:03.939243024 +0000 UTC m=+1204.178687518" observedRunningTime="2025-11-22 10:58:05.217190415 +0000 UTC m=+1205.456634909" watchObservedRunningTime="2025-11-22 10:58:05.219157615 +0000 UTC m=+1205.458602099" Nov 22 10:58:05 crc kubenswrapper[4772]: I1122 10:58:05.280777 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=13.781404368 podStartE2EDuration="42.280753935s" podCreationTimestamp="2025-11-22 10:57:23 +0000 UTC" firstStartedPulling="2025-11-22 10:57:35.434437567 +0000 UTC m=+1175.673882061" lastFinishedPulling="2025-11-22 10:58:03.933787144 +0000 UTC m=+1204.173231628" observedRunningTime="2025-11-22 10:58:05.270177316 +0000 UTC m=+1205.509621810" watchObservedRunningTime="2025-11-22 10:58:05.280753935 +0000 UTC m=+1205.520198439" Nov 22 10:58:05 crc kubenswrapper[4772]: W1122 10:58:05.286520 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbbef192e_2ee8_4ff4_aad2_4eb3f61e39e2.slice/crio-4bb18b3631e1e091ff18c3734a88a2296b1cceecb163873fd64d1396c5e28df8 WatchSource:0}: Error finding container 4bb18b3631e1e091ff18c3734a88a2296b1cceecb163873fd64d1396c5e28df8: Status 404 returned error can't find the container with id 4bb18b3631e1e091ff18c3734a88a2296b1cceecb163873fd64d1396c5e28df8 Nov 22 10:58:05 crc kubenswrapper[4772]: I1122 10:58:05.292916 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-7x5tj"] Nov 22 10:58:05 crc kubenswrapper[4772]: I1122 10:58:05.297842 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=9.897398644 podStartE2EDuration="38.29782317s" podCreationTimestamp="2025-11-22 10:57:27 +0000 UTC" firstStartedPulling="2025-11-22 10:57:35.447324085 +0000 UTC m=+1175.686768579" lastFinishedPulling="2025-11-22 10:58:03.847748611 +0000 UTC m=+1204.087193105" observedRunningTime="2025-11-22 10:58:05.290202096 +0000 UTC m=+1205.529646590" watchObservedRunningTime="2025-11-22 10:58:05.29782317 +0000 UTC m=+1205.537267664" Nov 22 10:58:05 crc kubenswrapper[4772]: I1122 10:58:05.768951 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/354e52a7-830a-43a1-ad15-a13fe2a07222-etc-swift\") pod \"swift-storage-0\" (UID: \"354e52a7-830a-43a1-ad15-a13fe2a07222\") " pod="openstack/swift-storage-0" Nov 22 10:58:05 crc kubenswrapper[4772]: E1122 10:58:05.769202 4772 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 22 10:58:05 crc kubenswrapper[4772]: E1122 10:58:05.769481 4772 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 22 10:58:05 crc kubenswrapper[4772]: E1122 10:58:05.769548 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/354e52a7-830a-43a1-ad15-a13fe2a07222-etc-swift podName:354e52a7-830a-43a1-ad15-a13fe2a07222 nodeName:}" failed. 
No retries permitted until 2025-11-22 10:58:09.769528676 +0000 UTC m=+1210.008973180 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/354e52a7-830a-43a1-ad15-a13fe2a07222-etc-swift") pod "swift-storage-0" (UID: "354e52a7-830a-43a1-ad15-a13fe2a07222") : configmap "swift-ring-files" not found Nov 22 10:58:06 crc kubenswrapper[4772]: I1122 10:58:06.220622 4772 generic.go:334] "Generic (PLEG): container finished" podID="bbef192e-2ee8-4ff4-aad2-4eb3f61e39e2" containerID="5a27c5f251697b251ad3b6e3e178de4fa4415aca1c15a4e7c11f74e3fb032c2d" exitCode=0 Nov 22 10:58:06 crc kubenswrapper[4772]: I1122 10:58:06.220670 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-7x5tj" event={"ID":"bbef192e-2ee8-4ff4-aad2-4eb3f61e39e2","Type":"ContainerDied","Data":"5a27c5f251697b251ad3b6e3e178de4fa4415aca1c15a4e7c11f74e3fb032c2d"} Nov 22 10:58:06 crc kubenswrapper[4772]: I1122 10:58:06.220730 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-7x5tj" event={"ID":"bbef192e-2ee8-4ff4-aad2-4eb3f61e39e2","Type":"ContainerStarted","Data":"4bb18b3631e1e091ff18c3734a88a2296b1cceecb163873fd64d1396c5e28df8"} Nov 22 10:58:06 crc kubenswrapper[4772]: I1122 10:58:06.222421 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-lznk4" event={"ID":"e02ee18c-0680-4728-a8a2-7c3734a3fc13","Type":"ContainerStarted","Data":"01f6a1de58f612610223e95584911a264a026d65d31638cc8eded40ea0eb2e44"} Nov 22 10:58:06 crc kubenswrapper[4772]: I1122 10:58:06.252055 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7cb5889db5-lznk4" podStartSLOduration=6.252025928 podStartE2EDuration="6.252025928s" podCreationTimestamp="2025-11-22 10:58:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:58:06.248013155 +0000 UTC m=+1206.487457649" watchObservedRunningTime="2025-11-22 10:58:06.252025928 +0000 UTC m=+1206.491470422" Nov 22 10:58:07 crc kubenswrapper[4772]: I1122 10:58:07.002055 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Nov 22 10:58:07 crc kubenswrapper[4772]: I1122 10:58:07.039220 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Nov 22 10:58:07 crc kubenswrapper[4772]: I1122 10:58:07.051136 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Nov 22 10:58:07 crc kubenswrapper[4772]: I1122 10:58:07.087011 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Nov 22 10:58:07 crc kubenswrapper[4772]: I1122 10:58:07.229985 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7cb5889db5-lznk4" Nov 22 10:58:07 crc kubenswrapper[4772]: I1122 10:58:07.230029 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Nov 22 10:58:07 crc kubenswrapper[4772]: I1122 10:58:07.268898 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Nov 22 10:58:07 crc kubenswrapper[4772]: I1122 10:58:07.480306 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Nov 22 10:58:07 crc kubenswrapper[4772]: I1122 
10:58:07.531337 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-lznk4"] Nov 22 10:58:07 crc kubenswrapper[4772]: I1122 10:58:07.550222 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Nov 22 10:58:07 crc kubenswrapper[4772]: I1122 10:58:07.561635 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-bxrgk"] Nov 22 10:58:07 crc kubenswrapper[4772]: I1122 10:58:07.563222 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6f696b9-bxrgk" Nov 22 10:58:07 crc kubenswrapper[4772]: I1122 10:58:07.565505 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Nov 22 10:58:07 crc kubenswrapper[4772]: I1122 10:58:07.604359 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-bxrgk"] Nov 22 10:58:07 crc kubenswrapper[4772]: I1122 10:58:07.646583 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-rh9km"] Nov 22 10:58:07 crc kubenswrapper[4772]: I1122 10:58:07.648214 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-rh9km" Nov 22 10:58:07 crc kubenswrapper[4772]: I1122 10:58:07.653060 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Nov 22 10:58:07 crc kubenswrapper[4772]: I1122 10:58:07.663126 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-rh9km"] Nov 22 10:58:07 crc kubenswrapper[4772]: I1122 10:58:07.707640 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7ca85a8-325b-476b-adac-a99e6417333a-config\") pod \"dnsmasq-dns-74f6f696b9-bxrgk\" (UID: \"b7ca85a8-325b-476b-adac-a99e6417333a\") " pod="openstack/dnsmasq-dns-74f6f696b9-bxrgk" Nov 22 10:58:07 crc kubenswrapper[4772]: I1122 10:58:07.707697 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b7ca85a8-325b-476b-adac-a99e6417333a-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6f696b9-bxrgk\" (UID: \"b7ca85a8-325b-476b-adac-a99e6417333a\") " pod="openstack/dnsmasq-dns-74f6f696b9-bxrgk" Nov 22 10:58:07 crc kubenswrapper[4772]: I1122 10:58:07.707719 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/6f8983d0-bcff-45de-b158-351e12a0b0f3-ovs-rundir\") pod \"ovn-controller-metrics-rh9km\" (UID: \"6f8983d0-bcff-45de-b158-351e12a0b0f3\") " pod="openstack/ovn-controller-metrics-rh9km" Nov 22 10:58:07 crc kubenswrapper[4772]: I1122 10:58:07.707762 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f8983d0-bcff-45de-b158-351e12a0b0f3-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-rh9km\" (UID: \"6f8983d0-bcff-45de-b158-351e12a0b0f3\") " pod="openstack/ovn-controller-metrics-rh9km" Nov 22 10:58:07 crc kubenswrapper[4772]: I1122 10:58:07.707781 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7ca85a8-325b-476b-adac-a99e6417333a-dns-svc\") pod 
\"dnsmasq-dns-74f6f696b9-bxrgk\" (UID: \"b7ca85a8-325b-476b-adac-a99e6417333a\") " pod="openstack/dnsmasq-dns-74f6f696b9-bxrgk" Nov 22 10:58:07 crc kubenswrapper[4772]: I1122 10:58:07.707803 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/6f8983d0-bcff-45de-b158-351e12a0b0f3-ovn-rundir\") pod \"ovn-controller-metrics-rh9km\" (UID: \"6f8983d0-bcff-45de-b158-351e12a0b0f3\") " pod="openstack/ovn-controller-metrics-rh9km" Nov 22 10:58:07 crc kubenswrapper[4772]: I1122 10:58:07.707823 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjnc8\" (UniqueName: \"kubernetes.io/projected/6f8983d0-bcff-45de-b158-351e12a0b0f3-kube-api-access-kjnc8\") pod \"ovn-controller-metrics-rh9km\" (UID: \"6f8983d0-bcff-45de-b158-351e12a0b0f3\") " pod="openstack/ovn-controller-metrics-rh9km" Nov 22 10:58:07 crc kubenswrapper[4772]: I1122 10:58:07.707854 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f8983d0-bcff-45de-b158-351e12a0b0f3-config\") pod \"ovn-controller-metrics-rh9km\" (UID: \"6f8983d0-bcff-45de-b158-351e12a0b0f3\") " pod="openstack/ovn-controller-metrics-rh9km" Nov 22 10:58:07 crc kubenswrapper[4772]: I1122 10:58:07.707877 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8565m\" (UniqueName: \"kubernetes.io/projected/b7ca85a8-325b-476b-adac-a99e6417333a-kube-api-access-8565m\") pod \"dnsmasq-dns-74f6f696b9-bxrgk\" (UID: \"b7ca85a8-325b-476b-adac-a99e6417333a\") " pod="openstack/dnsmasq-dns-74f6f696b9-bxrgk" Nov 22 10:58:07 crc kubenswrapper[4772]: I1122 10:58:07.707907 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f8983d0-bcff-45de-b158-351e12a0b0f3-combined-ca-bundle\") pod \"ovn-controller-metrics-rh9km\" (UID: \"6f8983d0-bcff-45de-b158-351e12a0b0f3\") " pod="openstack/ovn-controller-metrics-rh9km" Nov 22 10:58:07 crc kubenswrapper[4772]: I1122 10:58:07.809353 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f8983d0-bcff-45de-b158-351e12a0b0f3-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-rh9km\" (UID: \"6f8983d0-bcff-45de-b158-351e12a0b0f3\") " pod="openstack/ovn-controller-metrics-rh9km" Nov 22 10:58:07 crc kubenswrapper[4772]: I1122 10:58:07.809398 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7ca85a8-325b-476b-adac-a99e6417333a-dns-svc\") pod \"dnsmasq-dns-74f6f696b9-bxrgk\" (UID: \"b7ca85a8-325b-476b-adac-a99e6417333a\") " pod="openstack/dnsmasq-dns-74f6f696b9-bxrgk" Nov 22 10:58:07 crc kubenswrapper[4772]: I1122 10:58:07.809426 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/6f8983d0-bcff-45de-b158-351e12a0b0f3-ovn-rundir\") pod \"ovn-controller-metrics-rh9km\" (UID: \"6f8983d0-bcff-45de-b158-351e12a0b0f3\") " pod="openstack/ovn-controller-metrics-rh9km" Nov 22 10:58:07 crc kubenswrapper[4772]: I1122 10:58:07.809456 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjnc8\" (UniqueName: 
\"kubernetes.io/projected/6f8983d0-bcff-45de-b158-351e12a0b0f3-kube-api-access-kjnc8\") pod \"ovn-controller-metrics-rh9km\" (UID: \"6f8983d0-bcff-45de-b158-351e12a0b0f3\") " pod="openstack/ovn-controller-metrics-rh9km" Nov 22 10:58:07 crc kubenswrapper[4772]: I1122 10:58:07.809503 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f8983d0-bcff-45de-b158-351e12a0b0f3-config\") pod \"ovn-controller-metrics-rh9km\" (UID: \"6f8983d0-bcff-45de-b158-351e12a0b0f3\") " pod="openstack/ovn-controller-metrics-rh9km" Nov 22 10:58:07 crc kubenswrapper[4772]: I1122 10:58:07.809539 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8565m\" (UniqueName: \"kubernetes.io/projected/b7ca85a8-325b-476b-adac-a99e6417333a-kube-api-access-8565m\") pod \"dnsmasq-dns-74f6f696b9-bxrgk\" (UID: \"b7ca85a8-325b-476b-adac-a99e6417333a\") " pod="openstack/dnsmasq-dns-74f6f696b9-bxrgk" Nov 22 10:58:07 crc kubenswrapper[4772]: I1122 10:58:07.809582 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f8983d0-bcff-45de-b158-351e12a0b0f3-combined-ca-bundle\") pod \"ovn-controller-metrics-rh9km\" (UID: \"6f8983d0-bcff-45de-b158-351e12a0b0f3\") " pod="openstack/ovn-controller-metrics-rh9km" Nov 22 10:58:07 crc kubenswrapper[4772]: I1122 10:58:07.809630 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7ca85a8-325b-476b-adac-a99e6417333a-config\") pod \"dnsmasq-dns-74f6f696b9-bxrgk\" (UID: \"b7ca85a8-325b-476b-adac-a99e6417333a\") " pod="openstack/dnsmasq-dns-74f6f696b9-bxrgk" Nov 22 10:58:07 crc kubenswrapper[4772]: I1122 10:58:07.809660 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b7ca85a8-325b-476b-adac-a99e6417333a-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6f696b9-bxrgk\" (UID: \"b7ca85a8-325b-476b-adac-a99e6417333a\") " pod="openstack/dnsmasq-dns-74f6f696b9-bxrgk" Nov 22 10:58:07 crc kubenswrapper[4772]: I1122 10:58:07.809684 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/6f8983d0-bcff-45de-b158-351e12a0b0f3-ovs-rundir\") pod \"ovn-controller-metrics-rh9km\" (UID: \"6f8983d0-bcff-45de-b158-351e12a0b0f3\") " pod="openstack/ovn-controller-metrics-rh9km" Nov 22 10:58:07 crc kubenswrapper[4772]: I1122 10:58:07.810021 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/6f8983d0-bcff-45de-b158-351e12a0b0f3-ovs-rundir\") pod \"ovn-controller-metrics-rh9km\" (UID: \"6f8983d0-bcff-45de-b158-351e12a0b0f3\") " pod="openstack/ovn-controller-metrics-rh9km" Nov 22 10:58:07 crc kubenswrapper[4772]: I1122 10:58:07.810123 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/6f8983d0-bcff-45de-b158-351e12a0b0f3-ovn-rundir\") pod \"ovn-controller-metrics-rh9km\" (UID: \"6f8983d0-bcff-45de-b158-351e12a0b0f3\") " pod="openstack/ovn-controller-metrics-rh9km" Nov 22 10:58:07 crc kubenswrapper[4772]: I1122 10:58:07.810813 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f8983d0-bcff-45de-b158-351e12a0b0f3-config\") pod \"ovn-controller-metrics-rh9km\" (UID: 
\"6f8983d0-bcff-45de-b158-351e12a0b0f3\") " pod="openstack/ovn-controller-metrics-rh9km" Nov 22 10:58:07 crc kubenswrapper[4772]: I1122 10:58:07.811042 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b7ca85a8-325b-476b-adac-a99e6417333a-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6f696b9-bxrgk\" (UID: \"b7ca85a8-325b-476b-adac-a99e6417333a\") " pod="openstack/dnsmasq-dns-74f6f696b9-bxrgk" Nov 22 10:58:07 crc kubenswrapper[4772]: I1122 10:58:07.811177 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7ca85a8-325b-476b-adac-a99e6417333a-config\") pod \"dnsmasq-dns-74f6f696b9-bxrgk\" (UID: \"b7ca85a8-325b-476b-adac-a99e6417333a\") " pod="openstack/dnsmasq-dns-74f6f696b9-bxrgk" Nov 22 10:58:07 crc kubenswrapper[4772]: I1122 10:58:07.811195 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7ca85a8-325b-476b-adac-a99e6417333a-dns-svc\") pod \"dnsmasq-dns-74f6f696b9-bxrgk\" (UID: \"b7ca85a8-325b-476b-adac-a99e6417333a\") " pod="openstack/dnsmasq-dns-74f6f696b9-bxrgk" Nov 22 10:58:07 crc kubenswrapper[4772]: I1122 10:58:07.816473 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f8983d0-bcff-45de-b158-351e12a0b0f3-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-rh9km\" (UID: \"6f8983d0-bcff-45de-b158-351e12a0b0f3\") " pod="openstack/ovn-controller-metrics-rh9km" Nov 22 10:58:07 crc kubenswrapper[4772]: I1122 10:58:07.821840 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f8983d0-bcff-45de-b158-351e12a0b0f3-combined-ca-bundle\") pod \"ovn-controller-metrics-rh9km\" (UID: \"6f8983d0-bcff-45de-b158-351e12a0b0f3\") " pod="openstack/ovn-controller-metrics-rh9km" Nov 22 10:58:07 crc kubenswrapper[4772]: I1122 10:58:07.824494 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjnc8\" (UniqueName: \"kubernetes.io/projected/6f8983d0-bcff-45de-b158-351e12a0b0f3-kube-api-access-kjnc8\") pod \"ovn-controller-metrics-rh9km\" (UID: \"6f8983d0-bcff-45de-b158-351e12a0b0f3\") " pod="openstack/ovn-controller-metrics-rh9km" Nov 22 10:58:07 crc kubenswrapper[4772]: I1122 10:58:07.834920 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8565m\" (UniqueName: \"kubernetes.io/projected/b7ca85a8-325b-476b-adac-a99e6417333a-kube-api-access-8565m\") pod \"dnsmasq-dns-74f6f696b9-bxrgk\" (UID: \"b7ca85a8-325b-476b-adac-a99e6417333a\") " pod="openstack/dnsmasq-dns-74f6f696b9-bxrgk" Nov 22 10:58:07 crc kubenswrapper[4772]: I1122 10:58:07.888622 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6f696b9-bxrgk" Nov 22 10:58:07 crc kubenswrapper[4772]: I1122 10:58:07.936309 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-bxrgk"] Nov 22 10:58:07 crc kubenswrapper[4772]: I1122 10:58:07.966287 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-clgxb"] Nov 22 10:58:07 crc kubenswrapper[4772]: I1122 10:58:07.968535 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-clgxb" Nov 22 10:58:07 crc kubenswrapper[4772]: I1122 10:58:07.970956 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Nov 22 10:58:07 crc kubenswrapper[4772]: I1122 10:58:07.980545 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-rh9km" Nov 22 10:58:07 crc kubenswrapper[4772]: I1122 10:58:07.980887 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-clgxb"] Nov 22 10:58:08 crc kubenswrapper[4772]: I1122 10:58:08.012866 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqltv\" (UniqueName: \"kubernetes.io/projected/1909bd69-e033-40b5-90f0-03e4450145fb-kube-api-access-kqltv\") pod \"dnsmasq-dns-698758b865-clgxb\" (UID: \"1909bd69-e033-40b5-90f0-03e4450145fb\") " pod="openstack/dnsmasq-dns-698758b865-clgxb" Nov 22 10:58:08 crc kubenswrapper[4772]: I1122 10:58:08.012918 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1909bd69-e033-40b5-90f0-03e4450145fb-dns-svc\") pod \"dnsmasq-dns-698758b865-clgxb\" (UID: \"1909bd69-e033-40b5-90f0-03e4450145fb\") " pod="openstack/dnsmasq-dns-698758b865-clgxb" Nov 22 10:58:08 crc kubenswrapper[4772]: I1122 10:58:08.012940 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1909bd69-e033-40b5-90f0-03e4450145fb-config\") pod \"dnsmasq-dns-698758b865-clgxb\" (UID: \"1909bd69-e033-40b5-90f0-03e4450145fb\") " pod="openstack/dnsmasq-dns-698758b865-clgxb" Nov 22 10:58:08 crc kubenswrapper[4772]: I1122 10:58:08.013149 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1909bd69-e033-40b5-90f0-03e4450145fb-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-clgxb\" (UID: \"1909bd69-e033-40b5-90f0-03e4450145fb\") " pod="openstack/dnsmasq-dns-698758b865-clgxb" Nov 22 10:58:08 crc kubenswrapper[4772]: I1122 10:58:08.013274 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1909bd69-e033-40b5-90f0-03e4450145fb-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-clgxb\" (UID: \"1909bd69-e033-40b5-90f0-03e4450145fb\") " pod="openstack/dnsmasq-dns-698758b865-clgxb" Nov 22 10:58:08 crc kubenswrapper[4772]: I1122 10:58:08.019925 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-7x5tj" Nov 22 10:58:08 crc kubenswrapper[4772]: I1122 10:58:08.114387 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9twrc\" (UniqueName: \"kubernetes.io/projected/bbef192e-2ee8-4ff4-aad2-4eb3f61e39e2-kube-api-access-9twrc\") pod \"bbef192e-2ee8-4ff4-aad2-4eb3f61e39e2\" (UID: \"bbef192e-2ee8-4ff4-aad2-4eb3f61e39e2\") " Nov 22 10:58:08 crc kubenswrapper[4772]: I1122 10:58:08.114699 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1909bd69-e033-40b5-90f0-03e4450145fb-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-clgxb\" (UID: \"1909bd69-e033-40b5-90f0-03e4450145fb\") " pod="openstack/dnsmasq-dns-698758b865-clgxb" Nov 22 10:58:08 crc kubenswrapper[4772]: I1122 10:58:08.114753 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1909bd69-e033-40b5-90f0-03e4450145fb-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-clgxb\" (UID: \"1909bd69-e033-40b5-90f0-03e4450145fb\") " pod="openstack/dnsmasq-dns-698758b865-clgxb" Nov 22 10:58:08 crc kubenswrapper[4772]: I1122 10:58:08.114836 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqltv\" (UniqueName: \"kubernetes.io/projected/1909bd69-e033-40b5-90f0-03e4450145fb-kube-api-access-kqltv\") pod \"dnsmasq-dns-698758b865-clgxb\" (UID: \"1909bd69-e033-40b5-90f0-03e4450145fb\") " pod="openstack/dnsmasq-dns-698758b865-clgxb" Nov 22 10:58:08 crc kubenswrapper[4772]: I1122 10:58:08.114868 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1909bd69-e033-40b5-90f0-03e4450145fb-dns-svc\") pod \"dnsmasq-dns-698758b865-clgxb\" (UID: \"1909bd69-e033-40b5-90f0-03e4450145fb\") " pod="openstack/dnsmasq-dns-698758b865-clgxb" Nov 22 10:58:08 crc kubenswrapper[4772]: I1122 10:58:08.114899 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1909bd69-e033-40b5-90f0-03e4450145fb-config\") pod \"dnsmasq-dns-698758b865-clgxb\" (UID: \"1909bd69-e033-40b5-90f0-03e4450145fb\") " pod="openstack/dnsmasq-dns-698758b865-clgxb" Nov 22 10:58:08 crc kubenswrapper[4772]: I1122 10:58:08.117764 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1909bd69-e033-40b5-90f0-03e4450145fb-dns-svc\") pod \"dnsmasq-dns-698758b865-clgxb\" (UID: \"1909bd69-e033-40b5-90f0-03e4450145fb\") " pod="openstack/dnsmasq-dns-698758b865-clgxb" Nov 22 10:58:08 crc kubenswrapper[4772]: I1122 10:58:08.117887 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1909bd69-e033-40b5-90f0-03e4450145fb-config\") pod \"dnsmasq-dns-698758b865-clgxb\" (UID: \"1909bd69-e033-40b5-90f0-03e4450145fb\") " pod="openstack/dnsmasq-dns-698758b865-clgxb" Nov 22 10:58:08 crc kubenswrapper[4772]: I1122 10:58:08.118405 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1909bd69-e033-40b5-90f0-03e4450145fb-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-clgxb\" (UID: \"1909bd69-e033-40b5-90f0-03e4450145fb\") " pod="openstack/dnsmasq-dns-698758b865-clgxb" Nov 22 10:58:08 crc kubenswrapper[4772]: I1122 10:58:08.118578 4772 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1909bd69-e033-40b5-90f0-03e4450145fb-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-clgxb\" (UID: \"1909bd69-e033-40b5-90f0-03e4450145fb\") " pod="openstack/dnsmasq-dns-698758b865-clgxb" Nov 22 10:58:08 crc kubenswrapper[4772]: I1122 10:58:08.120400 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbef192e-2ee8-4ff4-aad2-4eb3f61e39e2-kube-api-access-9twrc" (OuterVolumeSpecName: "kube-api-access-9twrc") pod "bbef192e-2ee8-4ff4-aad2-4eb3f61e39e2" (UID: "bbef192e-2ee8-4ff4-aad2-4eb3f61e39e2"). InnerVolumeSpecName "kube-api-access-9twrc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:58:08 crc kubenswrapper[4772]: I1122 10:58:08.135197 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqltv\" (UniqueName: \"kubernetes.io/projected/1909bd69-e033-40b5-90f0-03e4450145fb-kube-api-access-kqltv\") pod \"dnsmasq-dns-698758b865-clgxb\" (UID: \"1909bd69-e033-40b5-90f0-03e4450145fb\") " pod="openstack/dnsmasq-dns-698758b865-clgxb" Nov 22 10:58:08 crc kubenswrapper[4772]: I1122 10:58:08.216789 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9twrc\" (UniqueName: \"kubernetes.io/projected/bbef192e-2ee8-4ff4-aad2-4eb3f61e39e2-kube-api-access-9twrc\") on node \"crc\" DevicePath \"\"" Nov 22 10:58:08 crc kubenswrapper[4772]: I1122 10:58:08.239959 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-7x5tj" Nov 22 10:58:08 crc kubenswrapper[4772]: I1122 10:58:08.242376 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-7x5tj" event={"ID":"bbef192e-2ee8-4ff4-aad2-4eb3f61e39e2","Type":"ContainerDied","Data":"4bb18b3631e1e091ff18c3734a88a2296b1cceecb163873fd64d1396c5e28df8"} Nov 22 10:58:08 crc kubenswrapper[4772]: I1122 10:58:08.242412 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4bb18b3631e1e091ff18c3734a88a2296b1cceecb163873fd64d1396c5e28df8" Nov 22 10:58:08 crc kubenswrapper[4772]: I1122 10:58:08.242435 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Nov 22 10:58:08 crc kubenswrapper[4772]: I1122 10:58:08.293357 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Nov 22 10:58:08 crc kubenswrapper[4772]: I1122 10:58:08.337016 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-clgxb" Nov 22 10:58:08 crc kubenswrapper[4772]: I1122 10:58:08.543940 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-bxrgk"] Nov 22 10:58:08 crc kubenswrapper[4772]: I1122 10:58:08.609932 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Nov 22 10:58:08 crc kubenswrapper[4772]: E1122 10:58:08.610649 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbef192e-2ee8-4ff4-aad2-4eb3f61e39e2" containerName="mariadb-database-create" Nov 22 10:58:08 crc kubenswrapper[4772]: I1122 10:58:08.610747 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbef192e-2ee8-4ff4-aad2-4eb3f61e39e2" containerName="mariadb-database-create" Nov 22 10:58:08 crc kubenswrapper[4772]: I1122 10:58:08.611195 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbef192e-2ee8-4ff4-aad2-4eb3f61e39e2" containerName="mariadb-database-create" Nov 22 10:58:08 crc kubenswrapper[4772]: I1122 10:58:08.616959 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Nov 22 10:58:08 crc kubenswrapper[4772]: I1122 10:58:08.620997 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-4zgcj" Nov 22 10:58:08 crc kubenswrapper[4772]: I1122 10:58:08.621483 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Nov 22 10:58:08 crc kubenswrapper[4772]: I1122 10:58:08.621662 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Nov 22 10:58:08 crc kubenswrapper[4772]: I1122 10:58:08.621812 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Nov 22 10:58:08 crc kubenswrapper[4772]: I1122 10:58:08.633504 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 22 10:58:08 crc kubenswrapper[4772]: I1122 10:58:08.677613 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-rh9km"] Nov 22 10:58:08 crc kubenswrapper[4772]: I1122 10:58:08.730673 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f74827a-8354-492b-b09d-350768ba912d-config\") pod \"ovn-northd-0\" (UID: \"4f74827a-8354-492b-b09d-350768ba912d\") " pod="openstack/ovn-northd-0" Nov 22 10:58:08 crc kubenswrapper[4772]: I1122 10:58:08.730861 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4f74827a-8354-492b-b09d-350768ba912d-scripts\") pod \"ovn-northd-0\" (UID: \"4f74827a-8354-492b-b09d-350768ba912d\") " pod="openstack/ovn-northd-0" Nov 22 10:58:08 crc kubenswrapper[4772]: I1122 10:58:08.730901 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f74827a-8354-492b-b09d-350768ba912d-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"4f74827a-8354-492b-b09d-350768ba912d\") " pod="openstack/ovn-northd-0" Nov 22 10:58:08 crc kubenswrapper[4772]: I1122 10:58:08.730919 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/4f74827a-8354-492b-b09d-350768ba912d-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"4f74827a-8354-492b-b09d-350768ba912d\") " pod="openstack/ovn-northd-0" Nov 22 10:58:08 crc kubenswrapper[4772]: I1122 10:58:08.730938 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f74827a-8354-492b-b09d-350768ba912d-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"4f74827a-8354-492b-b09d-350768ba912d\") " pod="openstack/ovn-northd-0" Nov 22 10:58:08 crc kubenswrapper[4772]: I1122 10:58:08.730960 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjcvn\" (UniqueName: \"kubernetes.io/projected/4f74827a-8354-492b-b09d-350768ba912d-kube-api-access-rjcvn\") pod \"ovn-northd-0\" (UID: \"4f74827a-8354-492b-b09d-350768ba912d\") " pod="openstack/ovn-northd-0" Nov 22 10:58:08 crc kubenswrapper[4772]: I1122 10:58:08.731119 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4f74827a-8354-492b-b09d-350768ba912d-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"4f74827a-8354-492b-b09d-350768ba912d\") " pod="openstack/ovn-northd-0" Nov 22 10:58:08 crc kubenswrapper[4772]: I1122 10:58:08.785181 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-fphqf"] Nov 22 10:58:08 crc kubenswrapper[4772]: I1122 10:58:08.789622 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-fphqf" Nov 22 10:58:08 crc kubenswrapper[4772]: I1122 10:58:08.797663 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-fphqf"] Nov 22 10:58:08 crc kubenswrapper[4772]: I1122 10:58:08.832326 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f74827a-8354-492b-b09d-350768ba912d-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"4f74827a-8354-492b-b09d-350768ba912d\") " pod="openstack/ovn-northd-0" Nov 22 10:58:08 crc kubenswrapper[4772]: I1122 10:58:08.832359 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f74827a-8354-492b-b09d-350768ba912d-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"4f74827a-8354-492b-b09d-350768ba912d\") " pod="openstack/ovn-northd-0" Nov 22 10:58:08 crc kubenswrapper[4772]: I1122 10:58:08.832377 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f74827a-8354-492b-b09d-350768ba912d-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"4f74827a-8354-492b-b09d-350768ba912d\") " pod="openstack/ovn-northd-0" Nov 22 10:58:08 crc kubenswrapper[4772]: I1122 10:58:08.832405 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjcvn\" (UniqueName: \"kubernetes.io/projected/4f74827a-8354-492b-b09d-350768ba912d-kube-api-access-rjcvn\") pod \"ovn-northd-0\" (UID: \"4f74827a-8354-492b-b09d-350768ba912d\") " pod="openstack/ovn-northd-0" Nov 22 10:58:08 crc kubenswrapper[4772]: I1122 10:58:08.832434 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4f74827a-8354-492b-b09d-350768ba912d-ovn-rundir\") pod 
\"ovn-northd-0\" (UID: \"4f74827a-8354-492b-b09d-350768ba912d\") " pod="openstack/ovn-northd-0" Nov 22 10:58:08 crc kubenswrapper[4772]: I1122 10:58:08.832570 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f74827a-8354-492b-b09d-350768ba912d-config\") pod \"ovn-northd-0\" (UID: \"4f74827a-8354-492b-b09d-350768ba912d\") " pod="openstack/ovn-northd-0" Nov 22 10:58:08 crc kubenswrapper[4772]: I1122 10:58:08.832626 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6jbb\" (UniqueName: \"kubernetes.io/projected/ae389e1d-b9a4-4ba1-a746-023c65c68e15-kube-api-access-s6jbb\") pod \"keystone-db-create-fphqf\" (UID: \"ae389e1d-b9a4-4ba1-a746-023c65c68e15\") " pod="openstack/keystone-db-create-fphqf" Nov 22 10:58:08 crc kubenswrapper[4772]: I1122 10:58:08.832673 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4f74827a-8354-492b-b09d-350768ba912d-scripts\") pod \"ovn-northd-0\" (UID: \"4f74827a-8354-492b-b09d-350768ba912d\") " pod="openstack/ovn-northd-0" Nov 22 10:58:08 crc kubenswrapper[4772]: I1122 10:58:08.834787 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4f74827a-8354-492b-b09d-350768ba912d-scripts\") pod \"ovn-northd-0\" (UID: \"4f74827a-8354-492b-b09d-350768ba912d\") " pod="openstack/ovn-northd-0" Nov 22 10:58:08 crc kubenswrapper[4772]: I1122 10:58:08.835001 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4f74827a-8354-492b-b09d-350768ba912d-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"4f74827a-8354-492b-b09d-350768ba912d\") " pod="openstack/ovn-northd-0" Nov 22 10:58:08 crc kubenswrapper[4772]: I1122 10:58:08.837240 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f74827a-8354-492b-b09d-350768ba912d-config\") pod \"ovn-northd-0\" (UID: \"4f74827a-8354-492b-b09d-350768ba912d\") " pod="openstack/ovn-northd-0" Nov 22 10:58:08 crc kubenswrapper[4772]: I1122 10:58:08.840085 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f74827a-8354-492b-b09d-350768ba912d-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"4f74827a-8354-492b-b09d-350768ba912d\") " pod="openstack/ovn-northd-0" Nov 22 10:58:08 crc kubenswrapper[4772]: I1122 10:58:08.840290 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f74827a-8354-492b-b09d-350768ba912d-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"4f74827a-8354-492b-b09d-350768ba912d\") " pod="openstack/ovn-northd-0" Nov 22 10:58:08 crc kubenswrapper[4772]: I1122 10:58:08.844942 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f74827a-8354-492b-b09d-350768ba912d-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"4f74827a-8354-492b-b09d-350768ba912d\") " pod="openstack/ovn-northd-0" Nov 22 10:58:08 crc kubenswrapper[4772]: I1122 10:58:08.862470 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjcvn\" (UniqueName: \"kubernetes.io/projected/4f74827a-8354-492b-b09d-350768ba912d-kube-api-access-rjcvn\") pod 
\"ovn-northd-0\" (UID: \"4f74827a-8354-492b-b09d-350768ba912d\") " pod="openstack/ovn-northd-0" Nov 22 10:58:08 crc kubenswrapper[4772]: I1122 10:58:08.934839 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6jbb\" (UniqueName: \"kubernetes.io/projected/ae389e1d-b9a4-4ba1-a746-023c65c68e15-kube-api-access-s6jbb\") pod \"keystone-db-create-fphqf\" (UID: \"ae389e1d-b9a4-4ba1-a746-023c65c68e15\") " pod="openstack/keystone-db-create-fphqf" Nov 22 10:58:08 crc kubenswrapper[4772]: I1122 10:58:08.951685 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6jbb\" (UniqueName: \"kubernetes.io/projected/ae389e1d-b9a4-4ba1-a746-023c65c68e15-kube-api-access-s6jbb\") pod \"keystone-db-create-fphqf\" (UID: \"ae389e1d-b9a4-4ba1-a746-023c65c68e15\") " pod="openstack/keystone-db-create-fphqf" Nov 22 10:58:08 crc kubenswrapper[4772]: I1122 10:58:08.976095 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-clgxb"] Nov 22 10:58:08 crc kubenswrapper[4772]: I1122 10:58:08.976106 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Nov 22 10:58:08 crc kubenswrapper[4772]: W1122 10:58:08.986223 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1909bd69_e033_40b5_90f0_03e4450145fb.slice/crio-b609d3b257083b7ee74f66c25df81bac338c2653548e1212f20decf48ac2d433 WatchSource:0}: Error finding container b609d3b257083b7ee74f66c25df81bac338c2653548e1212f20decf48ac2d433: Status 404 returned error can't find the container with id b609d3b257083b7ee74f66c25df81bac338c2653548e1212f20decf48ac2d433 Nov 22 10:58:09 crc kubenswrapper[4772]: I1122 10:58:09.111442 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-fphqf" Nov 22 10:58:09 crc kubenswrapper[4772]: I1122 10:58:09.131894 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-9846d"] Nov 22 10:58:09 crc kubenswrapper[4772]: I1122 10:58:09.134792 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-9846d" Nov 22 10:58:09 crc kubenswrapper[4772]: I1122 10:58:09.138011 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kl28p\" (UniqueName: \"kubernetes.io/projected/6851a02e-cce0-44b6-af9a-f6ab928ddbe1-kube-api-access-kl28p\") pod \"placement-db-create-9846d\" (UID: \"6851a02e-cce0-44b6-af9a-f6ab928ddbe1\") " pod="openstack/placement-db-create-9846d" Nov 22 10:58:09 crc kubenswrapper[4772]: I1122 10:58:09.155140 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-9846d"] Nov 22 10:58:09 crc kubenswrapper[4772]: I1122 10:58:09.240245 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kl28p\" (UniqueName: \"kubernetes.io/projected/6851a02e-cce0-44b6-af9a-f6ab928ddbe1-kube-api-access-kl28p\") pod \"placement-db-create-9846d\" (UID: \"6851a02e-cce0-44b6-af9a-f6ab928ddbe1\") " pod="openstack/placement-db-create-9846d" Nov 22 10:58:09 crc kubenswrapper[4772]: I1122 10:58:09.261190 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kl28p\" (UniqueName: \"kubernetes.io/projected/6851a02e-cce0-44b6-af9a-f6ab928ddbe1-kube-api-access-kl28p\") pod \"placement-db-create-9846d\" (UID: \"6851a02e-cce0-44b6-af9a-f6ab928ddbe1\") " pod="openstack/placement-db-create-9846d" Nov 22 10:58:09 crc kubenswrapper[4772]: I1122 10:58:09.261781 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-clgxb" event={"ID":"1909bd69-e033-40b5-90f0-03e4450145fb","Type":"ContainerStarted","Data":"b609d3b257083b7ee74f66c25df81bac338c2653548e1212f20decf48ac2d433"} Nov 22 10:58:09 crc kubenswrapper[4772]: I1122 10:58:09.269463 4772 generic.go:334] "Generic (PLEG): container finished" podID="b7ca85a8-325b-476b-adac-a99e6417333a" containerID="a2bff9294c6512529c40cad10adfde21a027a18b1df22e001db82b427e0e607a" exitCode=0 Nov 22 10:58:09 crc kubenswrapper[4772]: I1122 10:58:09.269514 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6f696b9-bxrgk" event={"ID":"b7ca85a8-325b-476b-adac-a99e6417333a","Type":"ContainerDied","Data":"a2bff9294c6512529c40cad10adfde21a027a18b1df22e001db82b427e0e607a"} Nov 22 10:58:09 crc kubenswrapper[4772]: I1122 10:58:09.269580 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6f696b9-bxrgk" event={"ID":"b7ca85a8-325b-476b-adac-a99e6417333a","Type":"ContainerStarted","Data":"527ef49014bc409b10051d41750456fb311763eaadded702c3851805aa9d2a1f"} Nov 22 10:58:09 crc kubenswrapper[4772]: I1122 10:58:09.272938 4772 generic.go:334] "Generic (PLEG): container finished" podID="468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea" containerID="e7defe2138029a1b0c0f3a9b0ab82ba765b27b46bc0e907fdcfcb9894a4cef37" exitCode=0 Nov 22 10:58:09 crc kubenswrapper[4772]: I1122 10:58:09.273001 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea","Type":"ContainerDied","Data":"e7defe2138029a1b0c0f3a9b0ab82ba765b27b46bc0e907fdcfcb9894a4cef37"} Nov 22 10:58:09 crc kubenswrapper[4772]: I1122 10:58:09.277968 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-4gl45" event={"ID":"ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3","Type":"ContainerStarted","Data":"78a2384ff1affb6057d0b82a09784a3ce79ec68a40fd733f4edb45b280fa6098"} Nov 22 10:58:09 crc kubenswrapper[4772]: I1122 
10:58:09.284993 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-rh9km" event={"ID":"6f8983d0-bcff-45de-b158-351e12a0b0f3","Type":"ContainerStarted","Data":"a28ec3d3a2caa07f24ab430c4d4ad0be9e899c0e714b45a649f92e7bbae39317"} Nov 22 10:58:09 crc kubenswrapper[4772]: I1122 10:58:09.298200 4772 generic.go:334] "Generic (PLEG): container finished" podID="5ce19f6b-73e1-48b9-810a-f9d97a14fe7b" containerID="615036dea9b9e2690ff9781db1af8c7a6e8ede28c160b447b036e61977da8a12" exitCode=0 Nov 22 10:58:09 crc kubenswrapper[4772]: I1122 10:58:09.299283 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7cb5889db5-lznk4" podUID="e02ee18c-0680-4728-a8a2-7c3734a3fc13" containerName="dnsmasq-dns" containerID="cri-o://01f6a1de58f612610223e95584911a264a026d65d31638cc8eded40ea0eb2e44" gracePeriod=10 Nov 22 10:58:09 crc kubenswrapper[4772]: I1122 10:58:09.299614 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"5ce19f6b-73e1-48b9-810a-f9d97a14fe7b","Type":"ContainerDied","Data":"615036dea9b9e2690ff9781db1af8c7a6e8ede28c160b447b036e61977da8a12"} Nov 22 10:58:09 crc kubenswrapper[4772]: I1122 10:58:09.336920 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-4gl45" podStartSLOduration=3.510386649 podStartE2EDuration="7.336900055s" podCreationTimestamp="2025-11-22 10:58:02 +0000 UTC" firstStartedPulling="2025-11-22 10:58:04.22662743 +0000 UTC m=+1204.466071924" lastFinishedPulling="2025-11-22 10:58:08.053140836 +0000 UTC m=+1208.292585330" observedRunningTime="2025-11-22 10:58:09.329720622 +0000 UTC m=+1209.569165116" watchObservedRunningTime="2025-11-22 10:58:09.336900055 +0000 UTC m=+1209.576344549" Nov 22 10:58:09 crc kubenswrapper[4772]: I1122 10:58:09.458121 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-9846d" Nov 22 10:58:09 crc kubenswrapper[4772]: I1122 10:58:09.472138 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 22 10:58:09 crc kubenswrapper[4772]: I1122 10:58:09.676597 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-fphqf"] Nov 22 10:58:09 crc kubenswrapper[4772]: I1122 10:58:09.682313 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6f696b9-bxrgk" Nov 22 10:58:09 crc kubenswrapper[4772]: I1122 10:58:09.749927 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7ca85a8-325b-476b-adac-a99e6417333a-config\") pod \"b7ca85a8-325b-476b-adac-a99e6417333a\" (UID: \"b7ca85a8-325b-476b-adac-a99e6417333a\") " Nov 22 10:58:09 crc kubenswrapper[4772]: I1122 10:58:09.749997 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b7ca85a8-325b-476b-adac-a99e6417333a-ovsdbserver-nb\") pod \"b7ca85a8-325b-476b-adac-a99e6417333a\" (UID: \"b7ca85a8-325b-476b-adac-a99e6417333a\") " Nov 22 10:58:09 crc kubenswrapper[4772]: I1122 10:58:09.750069 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7ca85a8-325b-476b-adac-a99e6417333a-dns-svc\") pod \"b7ca85a8-325b-476b-adac-a99e6417333a\" (UID: \"b7ca85a8-325b-476b-adac-a99e6417333a\") " Nov 22 10:58:09 crc kubenswrapper[4772]: I1122 10:58:09.750142 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8565m\" (UniqueName: \"kubernetes.io/projected/b7ca85a8-325b-476b-adac-a99e6417333a-kube-api-access-8565m\") pod \"b7ca85a8-325b-476b-adac-a99e6417333a\" (UID: \"b7ca85a8-325b-476b-adac-a99e6417333a\") " Nov 22 10:58:09 crc kubenswrapper[4772]: I1122 10:58:09.759443 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7ca85a8-325b-476b-adac-a99e6417333a-kube-api-access-8565m" (OuterVolumeSpecName: "kube-api-access-8565m") pod "b7ca85a8-325b-476b-adac-a99e6417333a" (UID: "b7ca85a8-325b-476b-adac-a99e6417333a"). InnerVolumeSpecName "kube-api-access-8565m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:58:09 crc kubenswrapper[4772]: I1122 10:58:09.781794 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7ca85a8-325b-476b-adac-a99e6417333a-config" (OuterVolumeSpecName: "config") pod "b7ca85a8-325b-476b-adac-a99e6417333a" (UID: "b7ca85a8-325b-476b-adac-a99e6417333a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:58:09 crc kubenswrapper[4772]: I1122 10:58:09.808491 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7ca85a8-325b-476b-adac-a99e6417333a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b7ca85a8-325b-476b-adac-a99e6417333a" (UID: "b7ca85a8-325b-476b-adac-a99e6417333a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:58:09 crc kubenswrapper[4772]: I1122 10:58:09.828765 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7ca85a8-325b-476b-adac-a99e6417333a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b7ca85a8-325b-476b-adac-a99e6417333a" (UID: "b7ca85a8-325b-476b-adac-a99e6417333a"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:58:09 crc kubenswrapper[4772]: I1122 10:58:09.852222 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/354e52a7-830a-43a1-ad15-a13fe2a07222-etc-swift\") pod \"swift-storage-0\" (UID: \"354e52a7-830a-43a1-ad15-a13fe2a07222\") " pod="openstack/swift-storage-0" Nov 22 10:58:09 crc kubenswrapper[4772]: I1122 10:58:09.852380 4772 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7ca85a8-325b-476b-adac-a99e6417333a-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 10:58:09 crc kubenswrapper[4772]: I1122 10:58:09.852399 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8565m\" (UniqueName: \"kubernetes.io/projected/b7ca85a8-325b-476b-adac-a99e6417333a-kube-api-access-8565m\") on node \"crc\" DevicePath \"\"" Nov 22 10:58:09 crc kubenswrapper[4772]: I1122 10:58:09.852413 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7ca85a8-325b-476b-adac-a99e6417333a-config\") on node \"crc\" DevicePath \"\"" Nov 22 10:58:09 crc kubenswrapper[4772]: I1122 10:58:09.852426 4772 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b7ca85a8-325b-476b-adac-a99e6417333a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 10:58:09 crc kubenswrapper[4772]: E1122 10:58:09.852545 4772 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 22 10:58:09 crc kubenswrapper[4772]: E1122 10:58:09.852562 4772 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 22 10:58:09 crc kubenswrapper[4772]: E1122 10:58:09.852613 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/354e52a7-830a-43a1-ad15-a13fe2a07222-etc-swift podName:354e52a7-830a-43a1-ad15-a13fe2a07222 nodeName:}" failed. No retries permitted until 2025-11-22 10:58:17.852593823 +0000 UTC m=+1218.092038317 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/354e52a7-830a-43a1-ad15-a13fe2a07222-etc-swift") pod "swift-storage-0" (UID: "354e52a7-830a-43a1-ad15-a13fe2a07222") : configmap "swift-ring-files" not found Nov 22 10:58:09 crc kubenswrapper[4772]: I1122 10:58:09.916624 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-lznk4" Nov 22 10:58:10 crc kubenswrapper[4772]: I1122 10:58:10.018884 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-9846d"] Nov 22 10:58:10 crc kubenswrapper[4772]: W1122 10:58:10.030599 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6851a02e_cce0_44b6_af9a_f6ab928ddbe1.slice/crio-12b8dfe7d137033215dad8a3bacbedbda951c54df3cd80c79ba18185c716b8ae WatchSource:0}: Error finding container 12b8dfe7d137033215dad8a3bacbedbda951c54df3cd80c79ba18185c716b8ae: Status 404 returned error can't find the container with id 12b8dfe7d137033215dad8a3bacbedbda951c54df3cd80c79ba18185c716b8ae Nov 22 10:58:10 crc kubenswrapper[4772]: I1122 10:58:10.061278 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e02ee18c-0680-4728-a8a2-7c3734a3fc13-dns-svc\") pod \"e02ee18c-0680-4728-a8a2-7c3734a3fc13\" (UID: \"e02ee18c-0680-4728-a8a2-7c3734a3fc13\") " Nov 22 10:58:10 crc kubenswrapper[4772]: I1122 10:58:10.061400 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqg75\" (UniqueName: \"kubernetes.io/projected/e02ee18c-0680-4728-a8a2-7c3734a3fc13-kube-api-access-hqg75\") pod \"e02ee18c-0680-4728-a8a2-7c3734a3fc13\" (UID: \"e02ee18c-0680-4728-a8a2-7c3734a3fc13\") " Nov 22 10:58:10 crc kubenswrapper[4772]: I1122 10:58:10.061544 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e02ee18c-0680-4728-a8a2-7c3734a3fc13-config\") pod \"e02ee18c-0680-4728-a8a2-7c3734a3fc13\" (UID: \"e02ee18c-0680-4728-a8a2-7c3734a3fc13\") " Nov 22 10:58:10 crc kubenswrapper[4772]: I1122 10:58:10.074944 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e02ee18c-0680-4728-a8a2-7c3734a3fc13-kube-api-access-hqg75" (OuterVolumeSpecName: "kube-api-access-hqg75") pod "e02ee18c-0680-4728-a8a2-7c3734a3fc13" (UID: "e02ee18c-0680-4728-a8a2-7c3734a3fc13"). InnerVolumeSpecName "kube-api-access-hqg75". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:58:10 crc kubenswrapper[4772]: I1122 10:58:10.115760 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e02ee18c-0680-4728-a8a2-7c3734a3fc13-config" (OuterVolumeSpecName: "config") pod "e02ee18c-0680-4728-a8a2-7c3734a3fc13" (UID: "e02ee18c-0680-4728-a8a2-7c3734a3fc13"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:58:10 crc kubenswrapper[4772]: I1122 10:58:10.158784 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e02ee18c-0680-4728-a8a2-7c3734a3fc13-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e02ee18c-0680-4728-a8a2-7c3734a3fc13" (UID: "e02ee18c-0680-4728-a8a2-7c3734a3fc13"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:58:10 crc kubenswrapper[4772]: I1122 10:58:10.165433 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqg75\" (UniqueName: \"kubernetes.io/projected/e02ee18c-0680-4728-a8a2-7c3734a3fc13-kube-api-access-hqg75\") on node \"crc\" DevicePath \"\"" Nov 22 10:58:10 crc kubenswrapper[4772]: I1122 10:58:10.165466 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e02ee18c-0680-4728-a8a2-7c3734a3fc13-config\") on node \"crc\" DevicePath \"\"" Nov 22 10:58:10 crc kubenswrapper[4772]: I1122 10:58:10.165477 4772 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e02ee18c-0680-4728-a8a2-7c3734a3fc13-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 10:58:10 crc kubenswrapper[4772]: I1122 10:58:10.306995 4772 generic.go:334] "Generic (PLEG): container finished" podID="1909bd69-e033-40b5-90f0-03e4450145fb" containerID="c1b7b816b6b055180114c0036a913f70778ce3c3b898e951b91bc80d451cbbbd" exitCode=0 Nov 22 10:58:10 crc kubenswrapper[4772]: I1122 10:58:10.307064 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-clgxb" event={"ID":"1909bd69-e033-40b5-90f0-03e4450145fb","Type":"ContainerDied","Data":"c1b7b816b6b055180114c0036a913f70778ce3c3b898e951b91bc80d451cbbbd"} Nov 22 10:58:10 crc kubenswrapper[4772]: I1122 10:58:10.312359 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6f696b9-bxrgk" event={"ID":"b7ca85a8-325b-476b-adac-a99e6417333a","Type":"ContainerDied","Data":"527ef49014bc409b10051d41750456fb311763eaadded702c3851805aa9d2a1f"} Nov 22 10:58:10 crc kubenswrapper[4772]: I1122 10:58:10.312417 4772 scope.go:117] "RemoveContainer" containerID="a2bff9294c6512529c40cad10adfde21a027a18b1df22e001db82b427e0e607a" Nov 22 10:58:10 crc kubenswrapper[4772]: I1122 10:58:10.312561 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6f696b9-bxrgk" Nov 22 10:58:10 crc kubenswrapper[4772]: I1122 10:58:10.317877 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea","Type":"ContainerStarted","Data":"96e2450010f46499b0808158113b617f9b05995cffcb394c9f26383aeac1a85f"} Nov 22 10:58:10 crc kubenswrapper[4772]: I1122 10:58:10.318569 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 22 10:58:10 crc kubenswrapper[4772]: I1122 10:58:10.320432 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-9846d" event={"ID":"6851a02e-cce0-44b6-af9a-f6ab928ddbe1","Type":"ContainerStarted","Data":"c6c1399307d1d09aaa50d08905f62ddc620cbd59c21efd5f9dba60c8c7eef11e"} Nov 22 10:58:10 crc kubenswrapper[4772]: I1122 10:58:10.320461 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-9846d" event={"ID":"6851a02e-cce0-44b6-af9a-f6ab928ddbe1","Type":"ContainerStarted","Data":"12b8dfe7d137033215dad8a3bacbedbda951c54df3cd80c79ba18185c716b8ae"} Nov 22 10:58:10 crc kubenswrapper[4772]: I1122 10:58:10.322219 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-rh9km" event={"ID":"6f8983d0-bcff-45de-b158-351e12a0b0f3","Type":"ContainerStarted","Data":"909a2fa4b2deee761e0eb8564a7f465913c706c02a0b3226886d10405cca066b"} Nov 22 10:58:10 crc kubenswrapper[4772]: I1122 10:58:10.340420 4772 generic.go:334] "Generic (PLEG): container finished" podID="e02ee18c-0680-4728-a8a2-7c3734a3fc13" containerID="01f6a1de58f612610223e95584911a264a026d65d31638cc8eded40ea0eb2e44" exitCode=0 Nov 22 10:58:10 crc kubenswrapper[4772]: I1122 10:58:10.340497 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-lznk4" event={"ID":"e02ee18c-0680-4728-a8a2-7c3734a3fc13","Type":"ContainerDied","Data":"01f6a1de58f612610223e95584911a264a026d65d31638cc8eded40ea0eb2e44"} Nov 22 10:58:10 crc kubenswrapper[4772]: I1122 10:58:10.340528 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-lznk4" event={"ID":"e02ee18c-0680-4728-a8a2-7c3734a3fc13","Type":"ContainerDied","Data":"26b5c08ca78868faea084591a5866aac2064b557e2cc2a69bcea48c890b7ea4c"} Nov 22 10:58:10 crc kubenswrapper[4772]: I1122 10:58:10.340548 4772 scope.go:117] "RemoveContainer" containerID="01f6a1de58f612610223e95584911a264a026d65d31638cc8eded40ea0eb2e44" Nov 22 10:58:10 crc kubenswrapper[4772]: I1122 10:58:10.340680 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-lznk4" Nov 22 10:58:10 crc kubenswrapper[4772]: I1122 10:58:10.356941 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"4f74827a-8354-492b-b09d-350768ba912d","Type":"ContainerStarted","Data":"af7b8a2cc9f093f027c6ab19ddc6451bc602844205a3f0d9d50640b5c2990e6a"} Nov 22 10:58:10 crc kubenswrapper[4772]: I1122 10:58:10.360566 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"5ce19f6b-73e1-48b9-810a-f9d97a14fe7b","Type":"ContainerStarted","Data":"6d2c4827d4cb49d5883df31ba20e437493bf839f2d63f4c0a3fdbedc0e23ec2c"} Nov 22 10:58:10 crc kubenswrapper[4772]: I1122 10:58:10.361395 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 22 10:58:10 crc kubenswrapper[4772]: I1122 10:58:10.362830 4772 generic.go:334] "Generic (PLEG): container finished" podID="ae389e1d-b9a4-4ba1-a746-023c65c68e15" containerID="fd5d4abc41639900eb146687503de734a02b0f4b820d1f2e011edaa75a39ea55" exitCode=0 Nov 22 10:58:10 crc kubenswrapper[4772]: I1122 10:58:10.363407 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-fphqf" event={"ID":"ae389e1d-b9a4-4ba1-a746-023c65c68e15","Type":"ContainerDied","Data":"fd5d4abc41639900eb146687503de734a02b0f4b820d1f2e011edaa75a39ea55"} Nov 22 10:58:10 crc kubenswrapper[4772]: I1122 10:58:10.363433 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-fphqf" event={"ID":"ae389e1d-b9a4-4ba1-a746-023c65c68e15","Type":"ContainerStarted","Data":"4a48ad2ba0a731b2b4abe624621c4d8450a54cae889b5246d306fa8377112198"} Nov 22 10:58:10 crc kubenswrapper[4772]: I1122 10:58:10.364272 4772 scope.go:117] "RemoveContainer" containerID="da4304fa2c3cd7e486ae5c649fc6d341534c29930b453dc307648e79258767be" Nov 22 10:58:10 crc kubenswrapper[4772]: I1122 10:58:10.370019 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=40.341936364 podStartE2EDuration="56.369997944s" podCreationTimestamp="2025-11-22 10:57:14 +0000 UTC" firstStartedPulling="2025-11-22 10:57:16.585314965 +0000 UTC m=+1156.824759459" lastFinishedPulling="2025-11-22 10:57:32.613376545 +0000 UTC m=+1172.852821039" observedRunningTime="2025-11-22 10:58:10.366393362 +0000 UTC m=+1210.605837866" watchObservedRunningTime="2025-11-22 10:58:10.369997944 +0000 UTC m=+1210.609442438" Nov 22 10:58:10 crc kubenswrapper[4772]: I1122 10:58:10.390580 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-rh9km" podStartSLOduration=3.390562428 podStartE2EDuration="3.390562428s" podCreationTimestamp="2025-11-22 10:58:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:58:10.386740451 +0000 UTC m=+1210.626184945" watchObservedRunningTime="2025-11-22 10:58:10.390562428 +0000 UTC m=+1210.630006922" Nov 22 10:58:10 crc kubenswrapper[4772]: I1122 10:58:10.398234 4772 scope.go:117] "RemoveContainer" containerID="01f6a1de58f612610223e95584911a264a026d65d31638cc8eded40ea0eb2e44" Nov 22 10:58:10 crc kubenswrapper[4772]: E1122 10:58:10.401936 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01f6a1de58f612610223e95584911a264a026d65d31638cc8eded40ea0eb2e44\": container with ID 
starting with 01f6a1de58f612610223e95584911a264a026d65d31638cc8eded40ea0eb2e44 not found: ID does not exist" containerID="01f6a1de58f612610223e95584911a264a026d65d31638cc8eded40ea0eb2e44" Nov 22 10:58:10 crc kubenswrapper[4772]: I1122 10:58:10.401983 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01f6a1de58f612610223e95584911a264a026d65d31638cc8eded40ea0eb2e44"} err="failed to get container status \"01f6a1de58f612610223e95584911a264a026d65d31638cc8eded40ea0eb2e44\": rpc error: code = NotFound desc = could not find container \"01f6a1de58f612610223e95584911a264a026d65d31638cc8eded40ea0eb2e44\": container with ID starting with 01f6a1de58f612610223e95584911a264a026d65d31638cc8eded40ea0eb2e44 not found: ID does not exist" Nov 22 10:58:10 crc kubenswrapper[4772]: I1122 10:58:10.402009 4772 scope.go:117] "RemoveContainer" containerID="da4304fa2c3cd7e486ae5c649fc6d341534c29930b453dc307648e79258767be" Nov 22 10:58:10 crc kubenswrapper[4772]: E1122 10:58:10.402509 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da4304fa2c3cd7e486ae5c649fc6d341534c29930b453dc307648e79258767be\": container with ID starting with da4304fa2c3cd7e486ae5c649fc6d341534c29930b453dc307648e79258767be not found: ID does not exist" containerID="da4304fa2c3cd7e486ae5c649fc6d341534c29930b453dc307648e79258767be" Nov 22 10:58:10 crc kubenswrapper[4772]: I1122 10:58:10.402534 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da4304fa2c3cd7e486ae5c649fc6d341534c29930b453dc307648e79258767be"} err="failed to get container status \"da4304fa2c3cd7e486ae5c649fc6d341534c29930b453dc307648e79258767be\": rpc error: code = NotFound desc = could not find container \"da4304fa2c3cd7e486ae5c649fc6d341534c29930b453dc307648e79258767be\": container with ID starting with da4304fa2c3cd7e486ae5c649fc6d341534c29930b453dc307648e79258767be not found: ID does not exist" Nov 22 10:58:10 crc kubenswrapper[4772]: I1122 10:58:10.420828 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-9846d" podStartSLOduration=1.420809209 podStartE2EDuration="1.420809209s" podCreationTimestamp="2025-11-22 10:58:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:58:10.415545645 +0000 UTC m=+1210.654990139" watchObservedRunningTime="2025-11-22 10:58:10.420809209 +0000 UTC m=+1210.660253703" Nov 22 10:58:10 crc kubenswrapper[4772]: I1122 10:58:10.490556 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-bxrgk"] Nov 22 10:58:10 crc kubenswrapper[4772]: I1122 10:58:10.497737 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-bxrgk"] Nov 22 10:58:10 crc kubenswrapper[4772]: I1122 10:58:10.512127 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-lznk4"] Nov 22 10:58:10 crc kubenswrapper[4772]: I1122 10:58:10.516790 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-lznk4"] Nov 22 10:58:10 crc kubenswrapper[4772]: I1122 10:58:10.540756 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=39.798732533 podStartE2EDuration="55.540740707s" podCreationTimestamp="2025-11-22 10:57:15 +0000 UTC" 
firstStartedPulling="2025-11-22 10:57:16.833137047 +0000 UTC m=+1157.072581541" lastFinishedPulling="2025-11-22 10:57:32.575145221 +0000 UTC m=+1172.814589715" observedRunningTime="2025-11-22 10:58:10.536638772 +0000 UTC m=+1210.776083266" watchObservedRunningTime="2025-11-22 10:58:10.540740707 +0000 UTC m=+1210.780185191" Nov 22 10:58:10 crc kubenswrapper[4772]: I1122 10:58:10.852870 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 22 10:58:11 crc kubenswrapper[4772]: I1122 10:58:11.376250 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-clgxb" event={"ID":"1909bd69-e033-40b5-90f0-03e4450145fb","Type":"ContainerStarted","Data":"456e29bca2f06f23d536a9e206d271ef6b8ca54495f688e208283901957f924c"} Nov 22 10:58:11 crc kubenswrapper[4772]: I1122 10:58:11.376384 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-clgxb" Nov 22 10:58:11 crc kubenswrapper[4772]: I1122 10:58:11.382787 4772 generic.go:334] "Generic (PLEG): container finished" podID="6851a02e-cce0-44b6-af9a-f6ab928ddbe1" containerID="c6c1399307d1d09aaa50d08905f62ddc620cbd59c21efd5f9dba60c8c7eef11e" exitCode=0 Nov 22 10:58:11 crc kubenswrapper[4772]: I1122 10:58:11.383597 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-9846d" event={"ID":"6851a02e-cce0-44b6-af9a-f6ab928ddbe1","Type":"ContainerDied","Data":"c6c1399307d1d09aaa50d08905f62ddc620cbd59c21efd5f9dba60c8c7eef11e"} Nov 22 10:58:11 crc kubenswrapper[4772]: I1122 10:58:11.410865 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-clgxb" podStartSLOduration=4.410844539 podStartE2EDuration="4.410844539s" podCreationTimestamp="2025-11-22 10:58:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:58:11.406777705 +0000 UTC m=+1211.646222199" watchObservedRunningTime="2025-11-22 10:58:11.410844539 +0000 UTC m=+1211.650289033" Nov 22 10:58:11 crc kubenswrapper[4772]: I1122 10:58:11.436232 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7ca85a8-325b-476b-adac-a99e6417333a" path="/var/lib/kubelet/pods/b7ca85a8-325b-476b-adac-a99e6417333a/volumes" Nov 22 10:58:11 crc kubenswrapper[4772]: I1122 10:58:11.437178 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e02ee18c-0680-4728-a8a2-7c3734a3fc13" path="/var/lib/kubelet/pods/e02ee18c-0680-4728-a8a2-7c3734a3fc13/volumes" Nov 22 10:58:11 crc kubenswrapper[4772]: I1122 10:58:11.680985 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-fphqf" Nov 22 10:58:11 crc kubenswrapper[4772]: I1122 10:58:11.796151 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6jbb\" (UniqueName: \"kubernetes.io/projected/ae389e1d-b9a4-4ba1-a746-023c65c68e15-kube-api-access-s6jbb\") pod \"ae389e1d-b9a4-4ba1-a746-023c65c68e15\" (UID: \"ae389e1d-b9a4-4ba1-a746-023c65c68e15\") " Nov 22 10:58:11 crc kubenswrapper[4772]: I1122 10:58:11.800265 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae389e1d-b9a4-4ba1-a746-023c65c68e15-kube-api-access-s6jbb" (OuterVolumeSpecName: "kube-api-access-s6jbb") pod "ae389e1d-b9a4-4ba1-a746-023c65c68e15" (UID: "ae389e1d-b9a4-4ba1-a746-023c65c68e15"). 
InnerVolumeSpecName "kube-api-access-s6jbb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:58:11 crc kubenswrapper[4772]: I1122 10:58:11.898929 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s6jbb\" (UniqueName: \"kubernetes.io/projected/ae389e1d-b9a4-4ba1-a746-023c65c68e15-kube-api-access-s6jbb\") on node \"crc\" DevicePath \"\"" Nov 22 10:58:12 crc kubenswrapper[4772]: I1122 10:58:12.393458 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"4f74827a-8354-492b-b09d-350768ba912d","Type":"ContainerStarted","Data":"56a525a9356c41405cf6232508a4af9e3b589cb3a12221c13f572a5936890d76"} Nov 22 10:58:12 crc kubenswrapper[4772]: I1122 10:58:12.393516 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"4f74827a-8354-492b-b09d-350768ba912d","Type":"ContainerStarted","Data":"bf3d996a1f9a43837e5f7fe9fc44fb298b29ccf7aa7545bca094b47b3b199409"} Nov 22 10:58:12 crc kubenswrapper[4772]: I1122 10:58:12.395397 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-fphqf" event={"ID":"ae389e1d-b9a4-4ba1-a746-023c65c68e15","Type":"ContainerDied","Data":"4a48ad2ba0a731b2b4abe624621c4d8450a54cae889b5246d306fa8377112198"} Nov 22 10:58:12 crc kubenswrapper[4772]: I1122 10:58:12.395443 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a48ad2ba0a731b2b4abe624621c4d8450a54cae889b5246d306fa8377112198" Nov 22 10:58:12 crc kubenswrapper[4772]: I1122 10:58:12.395532 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-fphqf" Nov 22 10:58:12 crc kubenswrapper[4772]: I1122 10:58:12.434261 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.689504309 podStartE2EDuration="4.43423781s" podCreationTimestamp="2025-11-22 10:58:08 +0000 UTC" firstStartedPulling="2025-11-22 10:58:09.52027721 +0000 UTC m=+1209.759721714" lastFinishedPulling="2025-11-22 10:58:11.265010721 +0000 UTC m=+1211.504455215" observedRunningTime="2025-11-22 10:58:12.422310556 +0000 UTC m=+1212.661755050" watchObservedRunningTime="2025-11-22 10:58:12.43423781 +0000 UTC m=+1212.673682324" Nov 22 10:58:12 crc kubenswrapper[4772]: I1122 10:58:12.678174 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-9846d" Nov 22 10:58:12 crc kubenswrapper[4772]: I1122 10:58:12.814904 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kl28p\" (UniqueName: \"kubernetes.io/projected/6851a02e-cce0-44b6-af9a-f6ab928ddbe1-kube-api-access-kl28p\") pod \"6851a02e-cce0-44b6-af9a-f6ab928ddbe1\" (UID: \"6851a02e-cce0-44b6-af9a-f6ab928ddbe1\") " Nov 22 10:58:12 crc kubenswrapper[4772]: I1122 10:58:12.820829 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6851a02e-cce0-44b6-af9a-f6ab928ddbe1-kube-api-access-kl28p" (OuterVolumeSpecName: "kube-api-access-kl28p") pod "6851a02e-cce0-44b6-af9a-f6ab928ddbe1" (UID: "6851a02e-cce0-44b6-af9a-f6ab928ddbe1"). InnerVolumeSpecName "kube-api-access-kl28p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:58:12 crc kubenswrapper[4772]: I1122 10:58:12.917200 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kl28p\" (UniqueName: \"kubernetes.io/projected/6851a02e-cce0-44b6-af9a-f6ab928ddbe1-kube-api-access-kl28p\") on node \"crc\" DevicePath \"\"" Nov 22 10:58:13 crc kubenswrapper[4772]: I1122 10:58:13.404154 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-9846d" event={"ID":"6851a02e-cce0-44b6-af9a-f6ab928ddbe1","Type":"ContainerDied","Data":"12b8dfe7d137033215dad8a3bacbedbda951c54df3cd80c79ba18185c716b8ae"} Nov 22 10:58:13 crc kubenswrapper[4772]: I1122 10:58:13.404195 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-9846d" Nov 22 10:58:13 crc kubenswrapper[4772]: I1122 10:58:13.404212 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12b8dfe7d137033215dad8a3bacbedbda951c54df3cd80c79ba18185c716b8ae" Nov 22 10:58:13 crc kubenswrapper[4772]: I1122 10:58:13.404314 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Nov 22 10:58:14 crc kubenswrapper[4772]: I1122 10:58:14.428313 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-e71c-account-create-kr4nm"] Nov 22 10:58:14 crc kubenswrapper[4772]: E1122 10:58:14.428911 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e02ee18c-0680-4728-a8a2-7c3734a3fc13" containerName="dnsmasq-dns" Nov 22 10:58:14 crc kubenswrapper[4772]: I1122 10:58:14.428925 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="e02ee18c-0680-4728-a8a2-7c3734a3fc13" containerName="dnsmasq-dns" Nov 22 10:58:14 crc kubenswrapper[4772]: E1122 10:58:14.428943 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e02ee18c-0680-4728-a8a2-7c3734a3fc13" containerName="init" Nov 22 10:58:14 crc kubenswrapper[4772]: I1122 10:58:14.428950 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="e02ee18c-0680-4728-a8a2-7c3734a3fc13" containerName="init" Nov 22 10:58:14 crc kubenswrapper[4772]: E1122 10:58:14.428958 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7ca85a8-325b-476b-adac-a99e6417333a" containerName="init" Nov 22 10:58:14 crc kubenswrapper[4772]: I1122 10:58:14.428965 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7ca85a8-325b-476b-adac-a99e6417333a" containerName="init" Nov 22 10:58:14 crc kubenswrapper[4772]: E1122 10:58:14.428974 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae389e1d-b9a4-4ba1-a746-023c65c68e15" containerName="mariadb-database-create" Nov 22 10:58:14 crc kubenswrapper[4772]: I1122 10:58:14.428980 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae389e1d-b9a4-4ba1-a746-023c65c68e15" containerName="mariadb-database-create" Nov 22 10:58:14 crc kubenswrapper[4772]: E1122 10:58:14.428987 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6851a02e-cce0-44b6-af9a-f6ab928ddbe1" containerName="mariadb-database-create" Nov 22 10:58:14 crc kubenswrapper[4772]: I1122 10:58:14.428992 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="6851a02e-cce0-44b6-af9a-f6ab928ddbe1" containerName="mariadb-database-create" Nov 22 10:58:14 crc kubenswrapper[4772]: I1122 10:58:14.429157 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="6851a02e-cce0-44b6-af9a-f6ab928ddbe1" containerName="mariadb-database-create" 
Nov 22 10:58:14 crc kubenswrapper[4772]: I1122 10:58:14.429192 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="e02ee18c-0680-4728-a8a2-7c3734a3fc13" containerName="dnsmasq-dns" Nov 22 10:58:14 crc kubenswrapper[4772]: I1122 10:58:14.429201 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7ca85a8-325b-476b-adac-a99e6417333a" containerName="init" Nov 22 10:58:14 crc kubenswrapper[4772]: I1122 10:58:14.429220 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae389e1d-b9a4-4ba1-a746-023c65c68e15" containerName="mariadb-database-create" Nov 22 10:58:14 crc kubenswrapper[4772]: I1122 10:58:14.429724 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e71c-account-create-kr4nm" Nov 22 10:58:14 crc kubenswrapper[4772]: I1122 10:58:14.431946 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Nov 22 10:58:14 crc kubenswrapper[4772]: I1122 10:58:14.443564 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-e71c-account-create-kr4nm"] Nov 22 10:58:14 crc kubenswrapper[4772]: I1122 10:58:14.543204 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6k47k\" (UniqueName: \"kubernetes.io/projected/866e2068-dd82-486f-b191-c39d54a86533-kube-api-access-6k47k\") pod \"glance-e71c-account-create-kr4nm\" (UID: \"866e2068-dd82-486f-b191-c39d54a86533\") " pod="openstack/glance-e71c-account-create-kr4nm" Nov 22 10:58:14 crc kubenswrapper[4772]: I1122 10:58:14.645728 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6k47k\" (UniqueName: \"kubernetes.io/projected/866e2068-dd82-486f-b191-c39d54a86533-kube-api-access-6k47k\") pod \"glance-e71c-account-create-kr4nm\" (UID: \"866e2068-dd82-486f-b191-c39d54a86533\") " pod="openstack/glance-e71c-account-create-kr4nm" Nov 22 10:58:14 crc kubenswrapper[4772]: I1122 10:58:14.683907 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6k47k\" (UniqueName: \"kubernetes.io/projected/866e2068-dd82-486f-b191-c39d54a86533-kube-api-access-6k47k\") pod \"glance-e71c-account-create-kr4nm\" (UID: \"866e2068-dd82-486f-b191-c39d54a86533\") " pod="openstack/glance-e71c-account-create-kr4nm" Nov 22 10:58:14 crc kubenswrapper[4772]: I1122 10:58:14.747373 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-e71c-account-create-kr4nm" Nov 22 10:58:15 crc kubenswrapper[4772]: I1122 10:58:15.180950 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-e71c-account-create-kr4nm"] Nov 22 10:58:15 crc kubenswrapper[4772]: I1122 10:58:15.433093 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e71c-account-create-kr4nm" event={"ID":"866e2068-dd82-486f-b191-c39d54a86533","Type":"ContainerStarted","Data":"e9e2f1aa5bf5dd9cccc1d42a14535044b27820cd63bab8e86a8a458fb129e183"} Nov 22 10:58:16 crc kubenswrapper[4772]: I1122 10:58:16.426219 4772 generic.go:334] "Generic (PLEG): container finished" podID="866e2068-dd82-486f-b191-c39d54a86533" containerID="9918d8c3fe9277db5faff872fe942b44186f5d9bd5dda4e9135f2737495ab101" exitCode=0 Nov 22 10:58:16 crc kubenswrapper[4772]: I1122 10:58:16.426270 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e71c-account-create-kr4nm" event={"ID":"866e2068-dd82-486f-b191-c39d54a86533","Type":"ContainerDied","Data":"9918d8c3fe9277db5faff872fe942b44186f5d9bd5dda4e9135f2737495ab101"} Nov 22 10:58:17 crc kubenswrapper[4772]: I1122 10:58:17.438077 4772 generic.go:334] "Generic (PLEG): container finished" podID="ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3" containerID="78a2384ff1affb6057d0b82a09784a3ce79ec68a40fd733f4edb45b280fa6098" exitCode=0 Nov 22 10:58:17 crc kubenswrapper[4772]: I1122 10:58:17.438161 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-4gl45" event={"ID":"ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3","Type":"ContainerDied","Data":"78a2384ff1affb6057d0b82a09784a3ce79ec68a40fd733f4edb45b280fa6098"} Nov 22 10:58:17 crc kubenswrapper[4772]: I1122 10:58:17.802983 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e71c-account-create-kr4nm" Nov 22 10:58:17 crc kubenswrapper[4772]: I1122 10:58:17.899799 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6k47k\" (UniqueName: \"kubernetes.io/projected/866e2068-dd82-486f-b191-c39d54a86533-kube-api-access-6k47k\") pod \"866e2068-dd82-486f-b191-c39d54a86533\" (UID: \"866e2068-dd82-486f-b191-c39d54a86533\") " Nov 22 10:58:17 crc kubenswrapper[4772]: I1122 10:58:17.900084 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/354e52a7-830a-43a1-ad15-a13fe2a07222-etc-swift\") pod \"swift-storage-0\" (UID: \"354e52a7-830a-43a1-ad15-a13fe2a07222\") " pod="openstack/swift-storage-0" Nov 22 10:58:17 crc kubenswrapper[4772]: I1122 10:58:17.905355 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/866e2068-dd82-486f-b191-c39d54a86533-kube-api-access-6k47k" (OuterVolumeSpecName: "kube-api-access-6k47k") pod "866e2068-dd82-486f-b191-c39d54a86533" (UID: "866e2068-dd82-486f-b191-c39d54a86533"). InnerVolumeSpecName "kube-api-access-6k47k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:58:17 crc kubenswrapper[4772]: I1122 10:58:17.907877 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/354e52a7-830a-43a1-ad15-a13fe2a07222-etc-swift\") pod \"swift-storage-0\" (UID: \"354e52a7-830a-43a1-ad15-a13fe2a07222\") " pod="openstack/swift-storage-0" Nov 22 10:58:18 crc kubenswrapper[4772]: I1122 10:58:18.001322 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6k47k\" (UniqueName: \"kubernetes.io/projected/866e2068-dd82-486f-b191-c39d54a86533-kube-api-access-6k47k\") on node \"crc\" DevicePath \"\"" Nov 22 10:58:18 crc kubenswrapper[4772]: I1122 10:58:18.185474 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Nov 22 10:58:18 crc kubenswrapper[4772]: I1122 10:58:18.339250 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-clgxb" Nov 22 10:58:18 crc kubenswrapper[4772]: I1122 10:58:18.405689 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-stbd7"] Nov 22 10:58:18 crc kubenswrapper[4772]: I1122 10:58:18.406285 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-stbd7" podUID="9cd13b77-de2f-4a35-8e87-88f2ed802fc5" containerName="dnsmasq-dns" containerID="cri-o://05939232b78f9a0da99da6b01ef1cc5c610734a7c3e4b148c04b691e00fee0f9" gracePeriod=10 Nov 22 10:58:18 crc kubenswrapper[4772]: I1122 10:58:18.447009 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e71c-account-create-kr4nm" Nov 22 10:58:18 crc kubenswrapper[4772]: I1122 10:58:18.447298 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e71c-account-create-kr4nm" event={"ID":"866e2068-dd82-486f-b191-c39d54a86533","Type":"ContainerDied","Data":"e9e2f1aa5bf5dd9cccc1d42a14535044b27820cd63bab8e86a8a458fb129e183"} Nov 22 10:58:18 crc kubenswrapper[4772]: I1122 10:58:18.447336 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9e2f1aa5bf5dd9cccc1d42a14535044b27820cd63bab8e86a8a458fb129e183" Nov 22 10:58:18 crc kubenswrapper[4772]: I1122 10:58:18.779797 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Nov 22 10:58:18 crc kubenswrapper[4772]: W1122 10:58:18.787429 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod354e52a7_830a_43a1_ad15_a13fe2a07222.slice/crio-9fc3b7ecf199fcf85e664ba067edbc8415aed4c149de9e4cb2b2d5bf7ab8f75d WatchSource:0}: Error finding container 9fc3b7ecf199fcf85e664ba067edbc8415aed4c149de9e4cb2b2d5bf7ab8f75d: Status 404 returned error can't find the container with id 9fc3b7ecf199fcf85e664ba067edbc8415aed4c149de9e4cb2b2d5bf7ab8f75d Nov 22 10:58:18 crc kubenswrapper[4772]: I1122 10:58:18.874644 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-4gl45" Nov 22 10:58:18 crc kubenswrapper[4772]: I1122 10:58:18.921293 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7a1c-account-create-x6bkv"] Nov 22 10:58:18 crc kubenswrapper[4772]: E1122 10:58:18.921640 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3" containerName="swift-ring-rebalance" Nov 22 10:58:18 crc kubenswrapper[4772]: I1122 10:58:18.921657 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3" containerName="swift-ring-rebalance" Nov 22 10:58:18 crc kubenswrapper[4772]: E1122 10:58:18.921695 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="866e2068-dd82-486f-b191-c39d54a86533" containerName="mariadb-account-create" Nov 22 10:58:18 crc kubenswrapper[4772]: I1122 10:58:18.921701 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="866e2068-dd82-486f-b191-c39d54a86533" containerName="mariadb-account-create" Nov 22 10:58:18 crc kubenswrapper[4772]: I1122 10:58:18.921985 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="866e2068-dd82-486f-b191-c39d54a86533" containerName="mariadb-account-create" Nov 22 10:58:18 crc kubenswrapper[4772]: I1122 10:58:18.922012 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3" containerName="swift-ring-rebalance" Nov 22 10:58:18 crc kubenswrapper[4772]: I1122 10:58:18.922761 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7a1c-account-create-x6bkv" Nov 22 10:58:18 crc kubenswrapper[4772]: I1122 10:58:18.926247 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Nov 22 10:58:18 crc kubenswrapper[4772]: I1122 10:58:18.928921 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7a1c-account-create-x6bkv"] Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.019301 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3-dispersionconf\") pod \"ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3\" (UID: \"ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3\") " Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.019353 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3-swiftconf\") pod \"ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3\" (UID: \"ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3\") " Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.019455 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbdbv\" (UniqueName: \"kubernetes.io/projected/ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3-kube-api-access-zbdbv\") pod \"ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3\" (UID: \"ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3\") " Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.019524 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3-ring-data-devices\") pod \"ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3\" (UID: \"ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3\") " Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.019560 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3-combined-ca-bundle\") pod \"ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3\" (UID: \"ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3\") " Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.019583 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3-scripts\") pod \"ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3\" (UID: \"ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3\") " Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.019602 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3-etc-swift\") pod \"ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3\" (UID: \"ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3\") " Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.020567 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3" (UID: "ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.020654 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3" (UID: "ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.024787 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3-kube-api-access-zbdbv" (OuterVolumeSpecName: "kube-api-access-zbdbv") pod "ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3" (UID: "ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3"). InnerVolumeSpecName "kube-api-access-zbdbv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.026829 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-stbd7" Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.029663 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3" (UID: "ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.040551 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3-scripts" (OuterVolumeSpecName: "scripts") pod "ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3" (UID: "ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.043373 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3" (UID: "ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.063197 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3" (UID: "ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.121710 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-676pw\" (UniqueName: \"kubernetes.io/projected/f0a6ce78-ec31-4452-8a8b-e07a29d72200-kube-api-access-676pw\") pod \"keystone-7a1c-account-create-x6bkv\" (UID: \"f0a6ce78-ec31-4452-8a8b-e07a29d72200\") " pod="openstack/keystone-7a1c-account-create-x6bkv" Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.121776 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zbdbv\" (UniqueName: \"kubernetes.io/projected/ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3-kube-api-access-zbdbv\") on node \"crc\" DevicePath \"\"" Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.121791 4772 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3-ring-data-devices\") on node \"crc\" DevicePath \"\"" Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.121848 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.121882 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.121891 4772 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3-etc-swift\") on node \"crc\" DevicePath \"\"" Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.121900 4772 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3-dispersionconf\") on node \"crc\" DevicePath \"\"" Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.121909 4772 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3-swiftconf\") on node \"crc\" DevicePath \"\"" Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.222853 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cd13b77-de2f-4a35-8e87-88f2ed802fc5-config\") pod \"9cd13b77-de2f-4a35-8e87-88f2ed802fc5\" (UID: 
\"9cd13b77-de2f-4a35-8e87-88f2ed802fc5\") " Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.222926 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tz677\" (UniqueName: \"kubernetes.io/projected/9cd13b77-de2f-4a35-8e87-88f2ed802fc5-kube-api-access-tz677\") pod \"9cd13b77-de2f-4a35-8e87-88f2ed802fc5\" (UID: \"9cd13b77-de2f-4a35-8e87-88f2ed802fc5\") " Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.222971 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9cd13b77-de2f-4a35-8e87-88f2ed802fc5-dns-svc\") pod \"9cd13b77-de2f-4a35-8e87-88f2ed802fc5\" (UID: \"9cd13b77-de2f-4a35-8e87-88f2ed802fc5\") " Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.223408 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-676pw\" (UniqueName: \"kubernetes.io/projected/f0a6ce78-ec31-4452-8a8b-e07a29d72200-kube-api-access-676pw\") pod \"keystone-7a1c-account-create-x6bkv\" (UID: \"f0a6ce78-ec31-4452-8a8b-e07a29d72200\") " pod="openstack/keystone-7a1c-account-create-x6bkv" Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.228449 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cd13b77-de2f-4a35-8e87-88f2ed802fc5-kube-api-access-tz677" (OuterVolumeSpecName: "kube-api-access-tz677") pod "9cd13b77-de2f-4a35-8e87-88f2ed802fc5" (UID: "9cd13b77-de2f-4a35-8e87-88f2ed802fc5"). InnerVolumeSpecName "kube-api-access-tz677". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.235035 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-8972-account-create-7lpfn"] Nov 22 10:58:19 crc kubenswrapper[4772]: E1122 10:58:19.235424 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cd13b77-de2f-4a35-8e87-88f2ed802fc5" containerName="dnsmasq-dns" Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.235442 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cd13b77-de2f-4a35-8e87-88f2ed802fc5" containerName="dnsmasq-dns" Nov 22 10:58:19 crc kubenswrapper[4772]: E1122 10:58:19.235563 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cd13b77-de2f-4a35-8e87-88f2ed802fc5" containerName="init" Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.235604 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cd13b77-de2f-4a35-8e87-88f2ed802fc5" containerName="init" Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.235770 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cd13b77-de2f-4a35-8e87-88f2ed802fc5" containerName="dnsmasq-dns" Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.236306 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-8972-account-create-7lpfn" Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.237923 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.244463 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8972-account-create-7lpfn"] Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.261225 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-676pw\" (UniqueName: \"kubernetes.io/projected/f0a6ce78-ec31-4452-8a8b-e07a29d72200-kube-api-access-676pw\") pod \"keystone-7a1c-account-create-x6bkv\" (UID: \"f0a6ce78-ec31-4452-8a8b-e07a29d72200\") " pod="openstack/keystone-7a1c-account-create-x6bkv" Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.273453 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9cd13b77-de2f-4a35-8e87-88f2ed802fc5-config" (OuterVolumeSpecName: "config") pod "9cd13b77-de2f-4a35-8e87-88f2ed802fc5" (UID: "9cd13b77-de2f-4a35-8e87-88f2ed802fc5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.273596 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9cd13b77-de2f-4a35-8e87-88f2ed802fc5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9cd13b77-de2f-4a35-8e87-88f2ed802fc5" (UID: "9cd13b77-de2f-4a35-8e87-88f2ed802fc5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.324822 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cd13b77-de2f-4a35-8e87-88f2ed802fc5-config\") on node \"crc\" DevicePath \"\"" Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.325697 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tz677\" (UniqueName: \"kubernetes.io/projected/9cd13b77-de2f-4a35-8e87-88f2ed802fc5-kube-api-access-tz677\") on node \"crc\" DevicePath \"\"" Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.325759 4772 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9cd13b77-de2f-4a35-8e87-88f2ed802fc5-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.327121 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7a1c-account-create-x6bkv" Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.427467 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnbpw\" (UniqueName: \"kubernetes.io/projected/a5abd352-83cb-40b7-9f68-c5635c1b5066-kube-api-access-gnbpw\") pod \"placement-8972-account-create-7lpfn\" (UID: \"a5abd352-83cb-40b7-9f68-c5635c1b5066\") " pod="openstack/placement-8972-account-create-7lpfn" Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.460721 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-267ms" podUID="25144c09-6edb-4bd3-89b2-99db486e733b" containerName="ovn-controller" probeResult="failure" output=< Nov 22 10:58:19 crc kubenswrapper[4772]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 22 10:58:19 crc kubenswrapper[4772]: > Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.461560 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"354e52a7-830a-43a1-ad15-a13fe2a07222","Type":"ContainerStarted","Data":"9fc3b7ecf199fcf85e664ba067edbc8415aed4c149de9e4cb2b2d5bf7ab8f75d"} Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.475005 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-4gl45" Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.475984 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-4gl45" event={"ID":"ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3","Type":"ContainerDied","Data":"4d6b9ded73c873b9715f17d1d4c70f230b89ca04a799f214e6b83fd49ee5b6c6"} Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.476017 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d6b9ded73c873b9715f17d1d4c70f230b89ca04a799f214e6b83fd49ee5b6c6" Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.479548 4772 generic.go:334] "Generic (PLEG): container finished" podID="9cd13b77-de2f-4a35-8e87-88f2ed802fc5" containerID="05939232b78f9a0da99da6b01ef1cc5c610734a7c3e4b148c04b691e00fee0f9" exitCode=0 Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.479592 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-stbd7" event={"ID":"9cd13b77-de2f-4a35-8e87-88f2ed802fc5","Type":"ContainerDied","Data":"05939232b78f9a0da99da6b01ef1cc5c610734a7c3e4b148c04b691e00fee0f9"} Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.479620 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-stbd7" event={"ID":"9cd13b77-de2f-4a35-8e87-88f2ed802fc5","Type":"ContainerDied","Data":"41aa7a4e28fa9f3d55f240ceeda1aba389c4c2584048c846261290e883d94523"} Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.479642 4772 scope.go:117] "RemoveContainer" containerID="05939232b78f9a0da99da6b01ef1cc5c610734a7c3e4b148c04b691e00fee0f9" Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.479636 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-stbd7" Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.511213 4772 scope.go:117] "RemoveContainer" containerID="f713e9f1704792ab1074d7d9ded0eb4b2734a0a959aa4497cbdb88686cc8dfd8" Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.516197 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-stbd7"] Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.529527 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-stbd7"] Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.530346 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnbpw\" (UniqueName: \"kubernetes.io/projected/a5abd352-83cb-40b7-9f68-c5635c1b5066-kube-api-access-gnbpw\") pod \"placement-8972-account-create-7lpfn\" (UID: \"a5abd352-83cb-40b7-9f68-c5635c1b5066\") " pod="openstack/placement-8972-account-create-7lpfn" Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.551587 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-mr98f"] Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.552649 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-mr98f" Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.553556 4772 scope.go:117] "RemoveContainer" containerID="05939232b78f9a0da99da6b01ef1cc5c610734a7c3e4b148c04b691e00fee0f9" Nov 22 10:58:19 crc kubenswrapper[4772]: E1122 10:58:19.554189 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05939232b78f9a0da99da6b01ef1cc5c610734a7c3e4b148c04b691e00fee0f9\": container with ID starting with 05939232b78f9a0da99da6b01ef1cc5c610734a7c3e4b148c04b691e00fee0f9 not found: ID does not exist" containerID="05939232b78f9a0da99da6b01ef1cc5c610734a7c3e4b148c04b691e00fee0f9" Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.554242 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05939232b78f9a0da99da6b01ef1cc5c610734a7c3e4b148c04b691e00fee0f9"} err="failed to get container status \"05939232b78f9a0da99da6b01ef1cc5c610734a7c3e4b148c04b691e00fee0f9\": rpc error: code = NotFound desc = could not find container \"05939232b78f9a0da99da6b01ef1cc5c610734a7c3e4b148c04b691e00fee0f9\": container with ID starting with 05939232b78f9a0da99da6b01ef1cc5c610734a7c3e4b148c04b691e00fee0f9 not found: ID does not exist" Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.554282 4772 scope.go:117] "RemoveContainer" containerID="f713e9f1704792ab1074d7d9ded0eb4b2734a0a959aa4497cbdb88686cc8dfd8" Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.554977 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnbpw\" (UniqueName: \"kubernetes.io/projected/a5abd352-83cb-40b7-9f68-c5635c1b5066-kube-api-access-gnbpw\") pod \"placement-8972-account-create-7lpfn\" (UID: \"a5abd352-83cb-40b7-9f68-c5635c1b5066\") " pod="openstack/placement-8972-account-create-7lpfn" Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.557503 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.557560 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-9hdhd" Nov 22 10:58:19 crc kubenswrapper[4772]: E1122 10:58:19.558647 4772 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f713e9f1704792ab1074d7d9ded0eb4b2734a0a959aa4497cbdb88686cc8dfd8\": container with ID starting with f713e9f1704792ab1074d7d9ded0eb4b2734a0a959aa4497cbdb88686cc8dfd8 not found: ID does not exist" containerID="f713e9f1704792ab1074d7d9ded0eb4b2734a0a959aa4497cbdb88686cc8dfd8" Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.558689 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f713e9f1704792ab1074d7d9ded0eb4b2734a0a959aa4497cbdb88686cc8dfd8"} err="failed to get container status \"f713e9f1704792ab1074d7d9ded0eb4b2734a0a959aa4497cbdb88686cc8dfd8\": rpc error: code = NotFound desc = could not find container \"f713e9f1704792ab1074d7d9ded0eb4b2734a0a959aa4497cbdb88686cc8dfd8\": container with ID starting with f713e9f1704792ab1074d7d9ded0eb4b2734a0a959aa4497cbdb88686cc8dfd8 not found: ID does not exist" Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.562593 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-mr98f"] Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.738766 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c352191-5a61-4dc6-ba16-6c82cb0fdedf-combined-ca-bundle\") pod \"glance-db-sync-mr98f\" (UID: \"1c352191-5a61-4dc6-ba16-6c82cb0fdedf\") " pod="openstack/glance-db-sync-mr98f" Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.738847 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5vl2\" (UniqueName: \"kubernetes.io/projected/1c352191-5a61-4dc6-ba16-6c82cb0fdedf-kube-api-access-m5vl2\") pod \"glance-db-sync-mr98f\" (UID: \"1c352191-5a61-4dc6-ba16-6c82cb0fdedf\") " pod="openstack/glance-db-sync-mr98f" Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.738912 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c352191-5a61-4dc6-ba16-6c82cb0fdedf-config-data\") pod \"glance-db-sync-mr98f\" (UID: \"1c352191-5a61-4dc6-ba16-6c82cb0fdedf\") " pod="openstack/glance-db-sync-mr98f" Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.738962 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1c352191-5a61-4dc6-ba16-6c82cb0fdedf-db-sync-config-data\") pod \"glance-db-sync-mr98f\" (UID: \"1c352191-5a61-4dc6-ba16-6c82cb0fdedf\") " pod="openstack/glance-db-sync-mr98f" Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.806801 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7a1c-account-create-x6bkv"] Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.840403 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5vl2\" (UniqueName: \"kubernetes.io/projected/1c352191-5a61-4dc6-ba16-6c82cb0fdedf-kube-api-access-m5vl2\") pod \"glance-db-sync-mr98f\" (UID: \"1c352191-5a61-4dc6-ba16-6c82cb0fdedf\") " pod="openstack/glance-db-sync-mr98f" Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.840561 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c352191-5a61-4dc6-ba16-6c82cb0fdedf-config-data\") pod 
\"glance-db-sync-mr98f\" (UID: \"1c352191-5a61-4dc6-ba16-6c82cb0fdedf\") " pod="openstack/glance-db-sync-mr98f" Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.840660 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1c352191-5a61-4dc6-ba16-6c82cb0fdedf-db-sync-config-data\") pod \"glance-db-sync-mr98f\" (UID: \"1c352191-5a61-4dc6-ba16-6c82cb0fdedf\") " pod="openstack/glance-db-sync-mr98f" Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.840766 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c352191-5a61-4dc6-ba16-6c82cb0fdedf-combined-ca-bundle\") pod \"glance-db-sync-mr98f\" (UID: \"1c352191-5a61-4dc6-ba16-6c82cb0fdedf\") " pod="openstack/glance-db-sync-mr98f" Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.844834 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1c352191-5a61-4dc6-ba16-6c82cb0fdedf-db-sync-config-data\") pod \"glance-db-sync-mr98f\" (UID: \"1c352191-5a61-4dc6-ba16-6c82cb0fdedf\") " pod="openstack/glance-db-sync-mr98f" Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.852622 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8972-account-create-7lpfn" Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.854073 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c352191-5a61-4dc6-ba16-6c82cb0fdedf-combined-ca-bundle\") pod \"glance-db-sync-mr98f\" (UID: \"1c352191-5a61-4dc6-ba16-6c82cb0fdedf\") " pod="openstack/glance-db-sync-mr98f" Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.855577 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c352191-5a61-4dc6-ba16-6c82cb0fdedf-config-data\") pod \"glance-db-sync-mr98f\" (UID: \"1c352191-5a61-4dc6-ba16-6c82cb0fdedf\") " pod="openstack/glance-db-sync-mr98f" Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.857794 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5vl2\" (UniqueName: \"kubernetes.io/projected/1c352191-5a61-4dc6-ba16-6c82cb0fdedf-kube-api-access-m5vl2\") pod \"glance-db-sync-mr98f\" (UID: \"1c352191-5a61-4dc6-ba16-6c82cb0fdedf\") " pod="openstack/glance-db-sync-mr98f" Nov 22 10:58:19 crc kubenswrapper[4772]: I1122 10:58:19.869696 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-mr98f" Nov 22 10:58:20 crc kubenswrapper[4772]: W1122 10:58:20.101204 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0a6ce78_ec31_4452_8a8b_e07a29d72200.slice/crio-13314e2d5fade18f02f47fc53076c94b1551d85c4efb4cb739d4b1905b82e005 WatchSource:0}: Error finding container 13314e2d5fade18f02f47fc53076c94b1551d85c4efb4cb739d4b1905b82e005: Status 404 returned error can't find the container with id 13314e2d5fade18f02f47fc53076c94b1551d85c4efb4cb739d4b1905b82e005 Nov 22 10:58:20 crc kubenswrapper[4772]: I1122 10:58:20.487636 4772 generic.go:334] "Generic (PLEG): container finished" podID="f0a6ce78-ec31-4452-8a8b-e07a29d72200" containerID="eac157ae6a2d31f928615483a5dac165500ea2cfda55fe2b11467adfdab1e36b" exitCode=0 Nov 22 10:58:20 crc kubenswrapper[4772]: I1122 10:58:20.487688 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7a1c-account-create-x6bkv" event={"ID":"f0a6ce78-ec31-4452-8a8b-e07a29d72200","Type":"ContainerDied","Data":"eac157ae6a2d31f928615483a5dac165500ea2cfda55fe2b11467adfdab1e36b"} Nov 22 10:58:20 crc kubenswrapper[4772]: I1122 10:58:20.488015 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7a1c-account-create-x6bkv" event={"ID":"f0a6ce78-ec31-4452-8a8b-e07a29d72200","Type":"ContainerStarted","Data":"13314e2d5fade18f02f47fc53076c94b1551d85c4efb4cb739d4b1905b82e005"} Nov 22 10:58:20 crc kubenswrapper[4772]: I1122 10:58:20.610355 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8972-account-create-7lpfn"] Nov 22 10:58:20 crc kubenswrapper[4772]: W1122 10:58:20.615242 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda5abd352_83cb_40b7_9f68_c5635c1b5066.slice/crio-4de3ec0ee5a339c59ed8543fd162dad7e42f45f2e98c69422ec5a6d81ebe15a9 WatchSource:0}: Error finding container 4de3ec0ee5a339c59ed8543fd162dad7e42f45f2e98c69422ec5a6d81ebe15a9: Status 404 returned error can't find the container with id 4de3ec0ee5a339c59ed8543fd162dad7e42f45f2e98c69422ec5a6d81ebe15a9 Nov 22 10:58:20 crc kubenswrapper[4772]: I1122 10:58:20.617806 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-mr98f"] Nov 22 10:58:21 crc kubenswrapper[4772]: I1122 10:58:21.463324 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cd13b77-de2f-4a35-8e87-88f2ed802fc5" path="/var/lib/kubelet/pods/9cd13b77-de2f-4a35-8e87-88f2ed802fc5/volumes" Nov 22 10:58:21 crc kubenswrapper[4772]: I1122 10:58:21.507254 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-mr98f" event={"ID":"1c352191-5a61-4dc6-ba16-6c82cb0fdedf","Type":"ContainerStarted","Data":"44177f0432bc00beb55f9bb4ca979c22c959548480cf05acd0c31ca53f378d66"} Nov 22 10:58:21 crc kubenswrapper[4772]: I1122 10:58:21.509135 4772 generic.go:334] "Generic (PLEG): container finished" podID="a5abd352-83cb-40b7-9f68-c5635c1b5066" containerID="56a8b954082c6812db97a25ba9fd58695f46ad8ff64c97e92509686b817d5837" exitCode=0 Nov 22 10:58:21 crc kubenswrapper[4772]: I1122 10:58:21.509213 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8972-account-create-7lpfn" event={"ID":"a5abd352-83cb-40b7-9f68-c5635c1b5066","Type":"ContainerDied","Data":"56a8b954082c6812db97a25ba9fd58695f46ad8ff64c97e92509686b817d5837"} Nov 22 10:58:21 crc kubenswrapper[4772]: I1122 
10:58:21.509255 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8972-account-create-7lpfn" event={"ID":"a5abd352-83cb-40b7-9f68-c5635c1b5066","Type":"ContainerStarted","Data":"4de3ec0ee5a339c59ed8543fd162dad7e42f45f2e98c69422ec5a6d81ebe15a9"} Nov 22 10:58:21 crc kubenswrapper[4772]: I1122 10:58:21.519244 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"354e52a7-830a-43a1-ad15-a13fe2a07222","Type":"ContainerStarted","Data":"e643a9463572f69ee79ae91f043796a6db50894ddf2c847ea4435b5d3f1f8d4b"} Nov 22 10:58:21 crc kubenswrapper[4772]: I1122 10:58:21.519295 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"354e52a7-830a-43a1-ad15-a13fe2a07222","Type":"ContainerStarted","Data":"05273857f1de6f10d05b451d131edb562b0d717aa0fb09e91569b386fad68432"} Nov 22 10:58:21 crc kubenswrapper[4772]: I1122 10:58:21.519307 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"354e52a7-830a-43a1-ad15-a13fe2a07222","Type":"ContainerStarted","Data":"6c148c107a290e32467067529f8845e2cb396a10c79f269871d8b6dfe85c8538"} Nov 22 10:58:21 crc kubenswrapper[4772]: I1122 10:58:21.857035 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7a1c-account-create-x6bkv" Nov 22 10:58:21 crc kubenswrapper[4772]: I1122 10:58:21.973895 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-676pw\" (UniqueName: \"kubernetes.io/projected/f0a6ce78-ec31-4452-8a8b-e07a29d72200-kube-api-access-676pw\") pod \"f0a6ce78-ec31-4452-8a8b-e07a29d72200\" (UID: \"f0a6ce78-ec31-4452-8a8b-e07a29d72200\") " Nov 22 10:58:21 crc kubenswrapper[4772]: I1122 10:58:21.979541 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0a6ce78-ec31-4452-8a8b-e07a29d72200-kube-api-access-676pw" (OuterVolumeSpecName: "kube-api-access-676pw") pod "f0a6ce78-ec31-4452-8a8b-e07a29d72200" (UID: "f0a6ce78-ec31-4452-8a8b-e07a29d72200"). InnerVolumeSpecName "kube-api-access-676pw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:58:22 crc kubenswrapper[4772]: I1122 10:58:22.075955 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-676pw\" (UniqueName: \"kubernetes.io/projected/f0a6ce78-ec31-4452-8a8b-e07a29d72200-kube-api-access-676pw\") on node \"crc\" DevicePath \"\"" Nov 22 10:58:22 crc kubenswrapper[4772]: I1122 10:58:22.527964 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"354e52a7-830a-43a1-ad15-a13fe2a07222","Type":"ContainerStarted","Data":"6aeb1397a245f1d928583c71247392a935b287c4678959e266ee42e4285547dd"} Nov 22 10:58:22 crc kubenswrapper[4772]: I1122 10:58:22.534904 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7a1c-account-create-x6bkv" Nov 22 10:58:22 crc kubenswrapper[4772]: I1122 10:58:22.535401 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7a1c-account-create-x6bkv" event={"ID":"f0a6ce78-ec31-4452-8a8b-e07a29d72200","Type":"ContainerDied","Data":"13314e2d5fade18f02f47fc53076c94b1551d85c4efb4cb739d4b1905b82e005"} Nov 22 10:58:22 crc kubenswrapper[4772]: I1122 10:58:22.535462 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13314e2d5fade18f02f47fc53076c94b1551d85c4efb4cb739d4b1905b82e005" Nov 22 10:58:22 crc kubenswrapper[4772]: I1122 10:58:22.868147 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8972-account-create-7lpfn" Nov 22 10:58:22 crc kubenswrapper[4772]: I1122 10:58:22.994855 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnbpw\" (UniqueName: \"kubernetes.io/projected/a5abd352-83cb-40b7-9f68-c5635c1b5066-kube-api-access-gnbpw\") pod \"a5abd352-83cb-40b7-9f68-c5635c1b5066\" (UID: \"a5abd352-83cb-40b7-9f68-c5635c1b5066\") " Nov 22 10:58:22 crc kubenswrapper[4772]: I1122 10:58:22.999606 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5abd352-83cb-40b7-9f68-c5635c1b5066-kube-api-access-gnbpw" (OuterVolumeSpecName: "kube-api-access-gnbpw") pod "a5abd352-83cb-40b7-9f68-c5635c1b5066" (UID: "a5abd352-83cb-40b7-9f68-c5635c1b5066"). InnerVolumeSpecName "kube-api-access-gnbpw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:58:23 crc kubenswrapper[4772]: I1122 10:58:23.097233 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnbpw\" (UniqueName: \"kubernetes.io/projected/a5abd352-83cb-40b7-9f68-c5635c1b5066-kube-api-access-gnbpw\") on node \"crc\" DevicePath \"\"" Nov 22 10:58:23 crc kubenswrapper[4772]: I1122 10:58:23.551058 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8972-account-create-7lpfn" event={"ID":"a5abd352-83cb-40b7-9f68-c5635c1b5066","Type":"ContainerDied","Data":"4de3ec0ee5a339c59ed8543fd162dad7e42f45f2e98c69422ec5a6d81ebe15a9"} Nov 22 10:58:23 crc kubenswrapper[4772]: I1122 10:58:23.551403 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4de3ec0ee5a339c59ed8543fd162dad7e42f45f2e98c69422ec5a6d81ebe15a9" Nov 22 10:58:23 crc kubenswrapper[4772]: I1122 10:58:23.551460 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-8972-account-create-7lpfn" Nov 22 10:58:23 crc kubenswrapper[4772]: I1122 10:58:23.557885 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"354e52a7-830a-43a1-ad15-a13fe2a07222","Type":"ContainerStarted","Data":"dd0dd2fc38d88b49b28baf152ee2dced29cbe3336d9498d4ade9ee3c9adf12ee"} Nov 22 10:58:23 crc kubenswrapper[4772]: I1122 10:58:23.557914 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"354e52a7-830a-43a1-ad15-a13fe2a07222","Type":"ContainerStarted","Data":"71199f24f24db6b2a98a516ab206a62b13cd49e1da1c3a11e4c911c568e4f32b"} Nov 22 10:58:23 crc kubenswrapper[4772]: I1122 10:58:23.557925 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"354e52a7-830a-43a1-ad15-a13fe2a07222","Type":"ContainerStarted","Data":"d99d87346c3370f3f3fba6fb00e3db55bfc3eab865e86ac6c50b67b5247c7837"} Nov 22 10:58:24 crc kubenswrapper[4772]: I1122 10:58:24.049282 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Nov 22 10:58:24 crc kubenswrapper[4772]: I1122 10:58:24.582609 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-267ms" podUID="25144c09-6edb-4bd3-89b2-99db486e733b" containerName="ovn-controller" probeResult="failure" output=< Nov 22 10:58:24 crc kubenswrapper[4772]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 22 10:58:24 crc kubenswrapper[4772]: > Nov 22 10:58:24 crc kubenswrapper[4772]: I1122 10:58:24.590118 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"354e52a7-830a-43a1-ad15-a13fe2a07222","Type":"ContainerStarted","Data":"04dbfb695ea067220096d958b7c9f722332cd1273836354417eb1f3cad0efc67"} Nov 22 10:58:26 crc kubenswrapper[4772]: I1122 10:58:26.073241 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 22 10:58:26 crc kubenswrapper[4772]: I1122 10:58:26.348269 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-6b5c6"] Nov 22 10:58:26 crc kubenswrapper[4772]: E1122 10:58:26.348676 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0a6ce78-ec31-4452-8a8b-e07a29d72200" containerName="mariadb-account-create" Nov 22 10:58:26 crc kubenswrapper[4772]: I1122 10:58:26.348701 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0a6ce78-ec31-4452-8a8b-e07a29d72200" containerName="mariadb-account-create" Nov 22 10:58:26 crc kubenswrapper[4772]: E1122 10:58:26.348730 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5abd352-83cb-40b7-9f68-c5635c1b5066" containerName="mariadb-account-create" Nov 22 10:58:26 crc kubenswrapper[4772]: I1122 10:58:26.348739 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5abd352-83cb-40b7-9f68-c5635c1b5066" containerName="mariadb-account-create" Nov 22 10:58:26 crc kubenswrapper[4772]: I1122 10:58:26.348938 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5abd352-83cb-40b7-9f68-c5635c1b5066" containerName="mariadb-account-create" Nov 22 10:58:26 crc kubenswrapper[4772]: I1122 10:58:26.348967 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0a6ce78-ec31-4452-8a8b-e07a29d72200" containerName="mariadb-account-create" Nov 22 10:58:26 crc kubenswrapper[4772]: I1122 10:58:26.349798 4772 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/cinder-db-create-6b5c6" Nov 22 10:58:26 crc kubenswrapper[4772]: I1122 10:58:26.360790 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-6b5c6"] Nov 22 10:58:26 crc kubenswrapper[4772]: I1122 10:58:26.408329 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 22 10:58:26 crc kubenswrapper[4772]: I1122 10:58:26.451653 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmx2q\" (UniqueName: \"kubernetes.io/projected/d583d5fa-33d5-4225-8d1b-2f389e2f35ea-kube-api-access-pmx2q\") pod \"cinder-db-create-6b5c6\" (UID: \"d583d5fa-33d5-4225-8d1b-2f389e2f35ea\") " pod="openstack/cinder-db-create-6b5c6" Nov 22 10:58:26 crc kubenswrapper[4772]: I1122 10:58:26.458162 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-jqkv7"] Nov 22 10:58:26 crc kubenswrapper[4772]: I1122 10:58:26.459231 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-jqkv7" Nov 22 10:58:26 crc kubenswrapper[4772]: I1122 10:58:26.467978 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-jqkv7"] Nov 22 10:58:26 crc kubenswrapper[4772]: I1122 10:58:26.553643 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmx2q\" (UniqueName: \"kubernetes.io/projected/d583d5fa-33d5-4225-8d1b-2f389e2f35ea-kube-api-access-pmx2q\") pod \"cinder-db-create-6b5c6\" (UID: \"d583d5fa-33d5-4225-8d1b-2f389e2f35ea\") " pod="openstack/cinder-db-create-6b5c6" Nov 22 10:58:26 crc kubenswrapper[4772]: I1122 10:58:26.554931 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhmkf\" (UniqueName: \"kubernetes.io/projected/c81b0bbf-5434-4a0d-94b1-8d2129c846b8-kube-api-access-jhmkf\") pod \"barbican-db-create-jqkv7\" (UID: \"c81b0bbf-5434-4a0d-94b1-8d2129c846b8\") " pod="openstack/barbican-db-create-jqkv7" Nov 22 10:58:26 crc kubenswrapper[4772]: I1122 10:58:26.584196 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmx2q\" (UniqueName: \"kubernetes.io/projected/d583d5fa-33d5-4225-8d1b-2f389e2f35ea-kube-api-access-pmx2q\") pod \"cinder-db-create-6b5c6\" (UID: \"d583d5fa-33d5-4225-8d1b-2f389e2f35ea\") " pod="openstack/cinder-db-create-6b5c6" Nov 22 10:58:26 crc kubenswrapper[4772]: I1122 10:58:26.656427 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhmkf\" (UniqueName: \"kubernetes.io/projected/c81b0bbf-5434-4a0d-94b1-8d2129c846b8-kube-api-access-jhmkf\") pod \"barbican-db-create-jqkv7\" (UID: \"c81b0bbf-5434-4a0d-94b1-8d2129c846b8\") " pod="openstack/barbican-db-create-jqkv7" Nov 22 10:58:26 crc kubenswrapper[4772]: I1122 10:58:26.671674 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-46ff9"] Nov 22 10:58:26 crc kubenswrapper[4772]: I1122 10:58:26.672931 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-46ff9" Nov 22 10:58:26 crc kubenswrapper[4772]: I1122 10:58:26.680316 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-46ff9"] Nov 22 10:58:26 crc kubenswrapper[4772]: I1122 10:58:26.687584 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-6b5c6" Nov 22 10:58:26 crc kubenswrapper[4772]: I1122 10:58:26.700654 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhmkf\" (UniqueName: \"kubernetes.io/projected/c81b0bbf-5434-4a0d-94b1-8d2129c846b8-kube-api-access-jhmkf\") pod \"barbican-db-create-jqkv7\" (UID: \"c81b0bbf-5434-4a0d-94b1-8d2129c846b8\") " pod="openstack/barbican-db-create-jqkv7" Nov 22 10:58:26 crc kubenswrapper[4772]: I1122 10:58:26.758589 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5psz2\" (UniqueName: \"kubernetes.io/projected/3c8ec99a-e207-4183-b904-60921f754abf-kube-api-access-5psz2\") pod \"neutron-db-create-46ff9\" (UID: \"3c8ec99a-e207-4183-b904-60921f754abf\") " pod="openstack/neutron-db-create-46ff9" Nov 22 10:58:26 crc kubenswrapper[4772]: I1122 10:58:26.778273 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-jqkv7" Nov 22 10:58:26 crc kubenswrapper[4772]: I1122 10:58:26.841034 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-hqp6h"] Nov 22 10:58:26 crc kubenswrapper[4772]: I1122 10:58:26.842326 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-hqp6h" Nov 22 10:58:26 crc kubenswrapper[4772]: I1122 10:58:26.845148 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-x8vg6" Nov 22 10:58:26 crc kubenswrapper[4772]: I1122 10:58:26.845408 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 22 10:58:26 crc kubenswrapper[4772]: I1122 10:58:26.845615 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 22 10:58:26 crc kubenswrapper[4772]: I1122 10:58:26.846515 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 22 10:58:26 crc kubenswrapper[4772]: I1122 10:58:26.849536 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-hqp6h"] Nov 22 10:58:26 crc kubenswrapper[4772]: I1122 10:58:26.859805 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5psz2\" (UniqueName: \"kubernetes.io/projected/3c8ec99a-e207-4183-b904-60921f754abf-kube-api-access-5psz2\") pod \"neutron-db-create-46ff9\" (UID: \"3c8ec99a-e207-4183-b904-60921f754abf\") " pod="openstack/neutron-db-create-46ff9" Nov 22 10:58:26 crc kubenswrapper[4772]: I1122 10:58:26.875720 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5psz2\" (UniqueName: \"kubernetes.io/projected/3c8ec99a-e207-4183-b904-60921f754abf-kube-api-access-5psz2\") pod \"neutron-db-create-46ff9\" (UID: \"3c8ec99a-e207-4183-b904-60921f754abf\") " pod="openstack/neutron-db-create-46ff9" Nov 22 10:58:26 crc kubenswrapper[4772]: I1122 10:58:26.960943 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9e8aace-e7f4-4ca7-902b-c20672965240-config-data\") pod \"keystone-db-sync-hqp6h\" (UID: \"f9e8aace-e7f4-4ca7-902b-c20672965240\") " pod="openstack/keystone-db-sync-hqp6h" Nov 22 10:58:26 crc kubenswrapper[4772]: I1122 10:58:26.961118 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f9e8aace-e7f4-4ca7-902b-c20672965240-combined-ca-bundle\") pod \"keystone-db-sync-hqp6h\" (UID: \"f9e8aace-e7f4-4ca7-902b-c20672965240\") " pod="openstack/keystone-db-sync-hqp6h" Nov 22 10:58:26 crc kubenswrapper[4772]: I1122 10:58:26.961169 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njjmm\" (UniqueName: \"kubernetes.io/projected/f9e8aace-e7f4-4ca7-902b-c20672965240-kube-api-access-njjmm\") pod \"keystone-db-sync-hqp6h\" (UID: \"f9e8aace-e7f4-4ca7-902b-c20672965240\") " pod="openstack/keystone-db-sync-hqp6h" Nov 22 10:58:26 crc kubenswrapper[4772]: I1122 10:58:26.994225 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-46ff9" Nov 22 10:58:27 crc kubenswrapper[4772]: I1122 10:58:27.063040 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9e8aace-e7f4-4ca7-902b-c20672965240-combined-ca-bundle\") pod \"keystone-db-sync-hqp6h\" (UID: \"f9e8aace-e7f4-4ca7-902b-c20672965240\") " pod="openstack/keystone-db-sync-hqp6h" Nov 22 10:58:27 crc kubenswrapper[4772]: I1122 10:58:27.063146 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njjmm\" (UniqueName: \"kubernetes.io/projected/f9e8aace-e7f4-4ca7-902b-c20672965240-kube-api-access-njjmm\") pod \"keystone-db-sync-hqp6h\" (UID: \"f9e8aace-e7f4-4ca7-902b-c20672965240\") " pod="openstack/keystone-db-sync-hqp6h" Nov 22 10:58:27 crc kubenswrapper[4772]: I1122 10:58:27.063200 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9e8aace-e7f4-4ca7-902b-c20672965240-config-data\") pod \"keystone-db-sync-hqp6h\" (UID: \"f9e8aace-e7f4-4ca7-902b-c20672965240\") " pod="openstack/keystone-db-sync-hqp6h" Nov 22 10:58:27 crc kubenswrapper[4772]: I1122 10:58:27.067392 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9e8aace-e7f4-4ca7-902b-c20672965240-combined-ca-bundle\") pod \"keystone-db-sync-hqp6h\" (UID: \"f9e8aace-e7f4-4ca7-902b-c20672965240\") " pod="openstack/keystone-db-sync-hqp6h" Nov 22 10:58:27 crc kubenswrapper[4772]: I1122 10:58:27.076527 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9e8aace-e7f4-4ca7-902b-c20672965240-config-data\") pod \"keystone-db-sync-hqp6h\" (UID: \"f9e8aace-e7f4-4ca7-902b-c20672965240\") " pod="openstack/keystone-db-sync-hqp6h" Nov 22 10:58:27 crc kubenswrapper[4772]: I1122 10:58:27.079412 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njjmm\" (UniqueName: \"kubernetes.io/projected/f9e8aace-e7f4-4ca7-902b-c20672965240-kube-api-access-njjmm\") pod \"keystone-db-sync-hqp6h\" (UID: \"f9e8aace-e7f4-4ca7-902b-c20672965240\") " pod="openstack/keystone-db-sync-hqp6h" Nov 22 10:58:27 crc kubenswrapper[4772]: I1122 10:58:27.228463 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-hqp6h" Nov 22 10:58:29 crc kubenswrapper[4772]: I1122 10:58:29.439240 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-267ms" podUID="25144c09-6edb-4bd3-89b2-99db486e733b" containerName="ovn-controller" probeResult="failure" output=< Nov 22 10:58:29 crc kubenswrapper[4772]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 22 10:58:29 crc kubenswrapper[4772]: > Nov 22 10:58:29 crc kubenswrapper[4772]: I1122 10:58:29.459776 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-qvtmm" Nov 22 10:58:29 crc kubenswrapper[4772]: I1122 10:58:29.461706 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-qvtmm" Nov 22 10:58:29 crc kubenswrapper[4772]: I1122 10:58:29.687085 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-267ms-config-fmvhg"] Nov 22 10:58:29 crc kubenswrapper[4772]: I1122 10:58:29.688181 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-267ms-config-fmvhg" Nov 22 10:58:29 crc kubenswrapper[4772]: I1122 10:58:29.692685 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Nov 22 10:58:29 crc kubenswrapper[4772]: I1122 10:58:29.746584 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-267ms-config-fmvhg"] Nov 22 10:58:29 crc kubenswrapper[4772]: I1122 10:58:29.810302 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/178ad79e-55ab-4c85-958a-2b9c60c0dfa3-additional-scripts\") pod \"ovn-controller-267ms-config-fmvhg\" (UID: \"178ad79e-55ab-4c85-958a-2b9c60c0dfa3\") " pod="openstack/ovn-controller-267ms-config-fmvhg" Nov 22 10:58:29 crc kubenswrapper[4772]: I1122 10:58:29.810357 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/178ad79e-55ab-4c85-958a-2b9c60c0dfa3-scripts\") pod \"ovn-controller-267ms-config-fmvhg\" (UID: \"178ad79e-55ab-4c85-958a-2b9c60c0dfa3\") " pod="openstack/ovn-controller-267ms-config-fmvhg" Nov 22 10:58:29 crc kubenswrapper[4772]: I1122 10:58:29.810391 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/178ad79e-55ab-4c85-958a-2b9c60c0dfa3-var-run\") pod \"ovn-controller-267ms-config-fmvhg\" (UID: \"178ad79e-55ab-4c85-958a-2b9c60c0dfa3\") " pod="openstack/ovn-controller-267ms-config-fmvhg" Nov 22 10:58:29 crc kubenswrapper[4772]: I1122 10:58:29.810466 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhq42\" (UniqueName: \"kubernetes.io/projected/178ad79e-55ab-4c85-958a-2b9c60c0dfa3-kube-api-access-qhq42\") pod \"ovn-controller-267ms-config-fmvhg\" (UID: \"178ad79e-55ab-4c85-958a-2b9c60c0dfa3\") " pod="openstack/ovn-controller-267ms-config-fmvhg" Nov 22 10:58:29 crc kubenswrapper[4772]: I1122 10:58:29.810628 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/178ad79e-55ab-4c85-958a-2b9c60c0dfa3-var-log-ovn\") pod \"ovn-controller-267ms-config-fmvhg\" (UID: 
\"178ad79e-55ab-4c85-958a-2b9c60c0dfa3\") " pod="openstack/ovn-controller-267ms-config-fmvhg" Nov 22 10:58:29 crc kubenswrapper[4772]: I1122 10:58:29.810786 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/178ad79e-55ab-4c85-958a-2b9c60c0dfa3-var-run-ovn\") pod \"ovn-controller-267ms-config-fmvhg\" (UID: \"178ad79e-55ab-4c85-958a-2b9c60c0dfa3\") " pod="openstack/ovn-controller-267ms-config-fmvhg" Nov 22 10:58:29 crc kubenswrapper[4772]: I1122 10:58:29.912466 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/178ad79e-55ab-4c85-958a-2b9c60c0dfa3-var-run-ovn\") pod \"ovn-controller-267ms-config-fmvhg\" (UID: \"178ad79e-55ab-4c85-958a-2b9c60c0dfa3\") " pod="openstack/ovn-controller-267ms-config-fmvhg" Nov 22 10:58:29 crc kubenswrapper[4772]: I1122 10:58:29.912568 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/178ad79e-55ab-4c85-958a-2b9c60c0dfa3-additional-scripts\") pod \"ovn-controller-267ms-config-fmvhg\" (UID: \"178ad79e-55ab-4c85-958a-2b9c60c0dfa3\") " pod="openstack/ovn-controller-267ms-config-fmvhg" Nov 22 10:58:29 crc kubenswrapper[4772]: I1122 10:58:29.912594 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/178ad79e-55ab-4c85-958a-2b9c60c0dfa3-scripts\") pod \"ovn-controller-267ms-config-fmvhg\" (UID: \"178ad79e-55ab-4c85-958a-2b9c60c0dfa3\") " pod="openstack/ovn-controller-267ms-config-fmvhg" Nov 22 10:58:29 crc kubenswrapper[4772]: I1122 10:58:29.912620 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/178ad79e-55ab-4c85-958a-2b9c60c0dfa3-var-run\") pod \"ovn-controller-267ms-config-fmvhg\" (UID: \"178ad79e-55ab-4c85-958a-2b9c60c0dfa3\") " pod="openstack/ovn-controller-267ms-config-fmvhg" Nov 22 10:58:29 crc kubenswrapper[4772]: I1122 10:58:29.912648 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhq42\" (UniqueName: \"kubernetes.io/projected/178ad79e-55ab-4c85-958a-2b9c60c0dfa3-kube-api-access-qhq42\") pod \"ovn-controller-267ms-config-fmvhg\" (UID: \"178ad79e-55ab-4c85-958a-2b9c60c0dfa3\") " pod="openstack/ovn-controller-267ms-config-fmvhg" Nov 22 10:58:29 crc kubenswrapper[4772]: I1122 10:58:29.912690 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/178ad79e-55ab-4c85-958a-2b9c60c0dfa3-var-log-ovn\") pod \"ovn-controller-267ms-config-fmvhg\" (UID: \"178ad79e-55ab-4c85-958a-2b9c60c0dfa3\") " pod="openstack/ovn-controller-267ms-config-fmvhg" Nov 22 10:58:29 crc kubenswrapper[4772]: I1122 10:58:29.912826 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/178ad79e-55ab-4c85-958a-2b9c60c0dfa3-var-log-ovn\") pod \"ovn-controller-267ms-config-fmvhg\" (UID: \"178ad79e-55ab-4c85-958a-2b9c60c0dfa3\") " pod="openstack/ovn-controller-267ms-config-fmvhg" Nov 22 10:58:29 crc kubenswrapper[4772]: I1122 10:58:29.912822 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/178ad79e-55ab-4c85-958a-2b9c60c0dfa3-var-run-ovn\") pod \"ovn-controller-267ms-config-fmvhg\" 
(UID: \"178ad79e-55ab-4c85-958a-2b9c60c0dfa3\") " pod="openstack/ovn-controller-267ms-config-fmvhg" Nov 22 10:58:29 crc kubenswrapper[4772]: I1122 10:58:29.912875 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/178ad79e-55ab-4c85-958a-2b9c60c0dfa3-var-run\") pod \"ovn-controller-267ms-config-fmvhg\" (UID: \"178ad79e-55ab-4c85-958a-2b9c60c0dfa3\") " pod="openstack/ovn-controller-267ms-config-fmvhg" Nov 22 10:58:29 crc kubenswrapper[4772]: I1122 10:58:29.913659 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/178ad79e-55ab-4c85-958a-2b9c60c0dfa3-additional-scripts\") pod \"ovn-controller-267ms-config-fmvhg\" (UID: \"178ad79e-55ab-4c85-958a-2b9c60c0dfa3\") " pod="openstack/ovn-controller-267ms-config-fmvhg" Nov 22 10:58:29 crc kubenswrapper[4772]: I1122 10:58:29.914884 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/178ad79e-55ab-4c85-958a-2b9c60c0dfa3-scripts\") pod \"ovn-controller-267ms-config-fmvhg\" (UID: \"178ad79e-55ab-4c85-958a-2b9c60c0dfa3\") " pod="openstack/ovn-controller-267ms-config-fmvhg" Nov 22 10:58:29 crc kubenswrapper[4772]: I1122 10:58:29.934045 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhq42\" (UniqueName: \"kubernetes.io/projected/178ad79e-55ab-4c85-958a-2b9c60c0dfa3-kube-api-access-qhq42\") pod \"ovn-controller-267ms-config-fmvhg\" (UID: \"178ad79e-55ab-4c85-958a-2b9c60c0dfa3\") " pod="openstack/ovn-controller-267ms-config-fmvhg" Nov 22 10:58:30 crc kubenswrapper[4772]: I1122 10:58:30.014419 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-267ms-config-fmvhg" Nov 22 10:58:34 crc kubenswrapper[4772]: I1122 10:58:34.432873 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-267ms" podUID="25144c09-6edb-4bd3-89b2-99db486e733b" containerName="ovn-controller" probeResult="failure" output=< Nov 22 10:58:34 crc kubenswrapper[4772]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 22 10:58:34 crc kubenswrapper[4772]: > Nov 22 10:58:39 crc kubenswrapper[4772]: I1122 10:58:39.437453 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-267ms" podUID="25144c09-6edb-4bd3-89b2-99db486e733b" containerName="ovn-controller" probeResult="failure" output=< Nov 22 10:58:39 crc kubenswrapper[4772]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 22 10:58:39 crc kubenswrapper[4772]: > Nov 22 10:58:40 crc kubenswrapper[4772]: E1122 10:58:40.773721 4772 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Nov 22 10:58:40 crc kubenswrapper[4772]: E1122 10:58:40.775254 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m5vl2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-mr98f_openstack(1c352191-5a61-4dc6-ba16-6c82cb0fdedf): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 10:58:40 crc kubenswrapper[4772]: E1122 10:58:40.776567 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-mr98f" podUID="1c352191-5a61-4dc6-ba16-6c82cb0fdedf" Nov 22 10:58:41 crc kubenswrapper[4772]: I1122 10:58:41.392119 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-jqkv7"] Nov 22 10:58:41 crc kubenswrapper[4772]: W1122 10:58:41.516162 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd583d5fa_33d5_4225_8d1b_2f389e2f35ea.slice/crio-3293de38b2298e11a6c78ae45c65822bca201f0ae056c630fd023be0a71ebd09 WatchSource:0}: Error finding container 3293de38b2298e11a6c78ae45c65822bca201f0ae056c630fd023be0a71ebd09: Status 404 returned error can't find the container with id 3293de38b2298e11a6c78ae45c65822bca201f0ae056c630fd023be0a71ebd09 Nov 22 10:58:41 crc kubenswrapper[4772]: I1122 10:58:41.524752 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-6b5c6"] Nov 22 10:58:41 crc kubenswrapper[4772]: W1122 10:58:41.526595 4772 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9e8aace_e7f4_4ca7_902b_c20672965240.slice/crio-7a8c727b07bd7558e4b60f34689841b1702505af5e4ad47a7422b232b33e93dd WatchSource:0}: Error finding container 7a8c727b07bd7558e4b60f34689841b1702505af5e4ad47a7422b232b33e93dd: Status 404 returned error can't find the container with id 7a8c727b07bd7558e4b60f34689841b1702505af5e4ad47a7422b232b33e93dd Nov 22 10:58:41 crc kubenswrapper[4772]: I1122 10:58:41.535845 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-46ff9"] Nov 22 10:58:41 crc kubenswrapper[4772]: W1122 10:58:41.538094 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c8ec99a_e207_4183_b904_60921f754abf.slice/crio-77a1da976ed2c4cb02b1842d08af2cd03e3fb0a3eca08ee9d92a5a6bb7bafdd8 WatchSource:0}: Error finding container 77a1da976ed2c4cb02b1842d08af2cd03e3fb0a3eca08ee9d92a5a6bb7bafdd8: Status 404 returned error can't find the container with id 77a1da976ed2c4cb02b1842d08af2cd03e3fb0a3eca08ee9d92a5a6bb7bafdd8 Nov 22 10:58:41 crc kubenswrapper[4772]: I1122 10:58:41.545189 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-hqp6h"] Nov 22 10:58:41 crc kubenswrapper[4772]: W1122 10:58:41.551064 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod178ad79e_55ab_4c85_958a_2b9c60c0dfa3.slice/crio-6dbd626028cbdaccfeec608154ea43234fe5eaa8927b741a3e6226da7efa1166 WatchSource:0}: Error finding container 6dbd626028cbdaccfeec608154ea43234fe5eaa8927b741a3e6226da7efa1166: Status 404 returned error can't find the container with id 6dbd626028cbdaccfeec608154ea43234fe5eaa8927b741a3e6226da7efa1166 Nov 22 10:58:41 crc kubenswrapper[4772]: I1122 10:58:41.553472 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-267ms-config-fmvhg"] Nov 22 10:58:41 crc kubenswrapper[4772]: I1122 10:58:41.736162 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-jqkv7" event={"ID":"c81b0bbf-5434-4a0d-94b1-8d2129c846b8","Type":"ContainerStarted","Data":"1efd25228bdca15ca962ec5d158a7c336f5ca3b27f4d790623d3e14efd466569"} Nov 22 10:58:41 crc kubenswrapper[4772]: I1122 10:58:41.737836 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-hqp6h" event={"ID":"f9e8aace-e7f4-4ca7-902b-c20672965240","Type":"ContainerStarted","Data":"7a8c727b07bd7558e4b60f34689841b1702505af5e4ad47a7422b232b33e93dd"} Nov 22 10:58:41 crc kubenswrapper[4772]: I1122 10:58:41.738747 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-6b5c6" event={"ID":"d583d5fa-33d5-4225-8d1b-2f389e2f35ea","Type":"ContainerStarted","Data":"3293de38b2298e11a6c78ae45c65822bca201f0ae056c630fd023be0a71ebd09"} Nov 22 10:58:41 crc kubenswrapper[4772]: I1122 10:58:41.739580 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-46ff9" event={"ID":"3c8ec99a-e207-4183-b904-60921f754abf","Type":"ContainerStarted","Data":"77a1da976ed2c4cb02b1842d08af2cd03e3fb0a3eca08ee9d92a5a6bb7bafdd8"} Nov 22 10:58:41 crc kubenswrapper[4772]: I1122 10:58:41.740714 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-267ms-config-fmvhg" event={"ID":"178ad79e-55ab-4c85-958a-2b9c60c0dfa3","Type":"ContainerStarted","Data":"6dbd626028cbdaccfeec608154ea43234fe5eaa8927b741a3e6226da7efa1166"} Nov 22 
10:58:41 crc kubenswrapper[4772]: E1122 10:58:41.742604 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-mr98f" podUID="1c352191-5a61-4dc6-ba16-6c82cb0fdedf" Nov 22 10:58:43 crc kubenswrapper[4772]: E1122 10:58:43.210789 4772 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-swift-object:current-podified" Nov 22 10:58:43 crc kubenswrapper[4772]: E1122 10:58:43.211313 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:object-server,Image:quay.io/podified-antelope-centos9/openstack-swift-object:current-podified,Command:[/usr/bin/swift-object-server /etc/swift/object-server.conf.d -v],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:object,HostPort:0,ContainerPort:6200,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5b7h56h9dh94h67bh697h95h55hbh555h556h675h5fdh57dh579h5fbh64fh5c9h687hb6h678h5d4h549h54h98h8ch564h5bh5bch55dhc8hf8q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:swift,ReadOnly:false,MountPath:/srv/node/pv,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-swift,ReadOnly:false,MountPath:/etc/swift,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cache,ReadOnly:false,MountPath:/var/cache/swift,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lock,ReadOnly:false,MountPath:/var/lock,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kl22s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42445,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-storage-0_openstack(354e52a7-830a-43a1-ad15-a13fe2a07222): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 10:58:43 crc kubenswrapper[4772]: E1122 10:58:43.471709 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"object-server\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"object-replicator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-object:current-podified\\\"\", failed to \"StartContainer\" for \"object-auditor\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-object:current-podified\\\"\", failed 
to \"StartContainer\" for \"object-updater\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-object:current-podified\\\"\", failed to \"StartContainer\" for \"rsync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-object:current-podified\\\"\", failed to \"StartContainer\" for \"swift-recon-cron\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-object:current-podified\\\"\"]" pod="openstack/swift-storage-0" podUID="354e52a7-830a-43a1-ad15-a13fe2a07222" Nov 22 10:58:43 crc kubenswrapper[4772]: I1122 10:58:43.760376 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-6b5c6" event={"ID":"d583d5fa-33d5-4225-8d1b-2f389e2f35ea","Type":"ContainerStarted","Data":"cdc8584e7921c9e3e8e90db22f83100d41cfbf8055a0f4ffb89bf16b487bb3f9"} Nov 22 10:58:43 crc kubenswrapper[4772]: I1122 10:58:43.762316 4772 generic.go:334] "Generic (PLEG): container finished" podID="3c8ec99a-e207-4183-b904-60921f754abf" containerID="317d3b7bdcfc600aafe9a478674845c1d53093f09801f5b8a951fd84549db2a1" exitCode=0 Nov 22 10:58:43 crc kubenswrapper[4772]: I1122 10:58:43.762465 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-46ff9" event={"ID":"3c8ec99a-e207-4183-b904-60921f754abf","Type":"ContainerDied","Data":"317d3b7bdcfc600aafe9a478674845c1d53093f09801f5b8a951fd84549db2a1"} Nov 22 10:58:43 crc kubenswrapper[4772]: I1122 10:58:43.772488 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"354e52a7-830a-43a1-ad15-a13fe2a07222","Type":"ContainerStarted","Data":"4bb4a8713445b470473fcae6ce67f357ca8b02ad81def05ee9cb94ebea5ebf50"} Nov 22 10:58:43 crc kubenswrapper[4772]: I1122 10:58:43.775414 4772 generic.go:334] "Generic (PLEG): container finished" podID="178ad79e-55ab-4c85-958a-2b9c60c0dfa3" containerID="b7f72ba6b3c20b7717ac1f766f0474861db16d73d4738ef463c67b77caf605a8" exitCode=0 Nov 22 10:58:43 crc kubenswrapper[4772]: I1122 10:58:43.775651 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-267ms-config-fmvhg" event={"ID":"178ad79e-55ab-4c85-958a-2b9c60c0dfa3","Type":"ContainerDied","Data":"b7f72ba6b3c20b7717ac1f766f0474861db16d73d4738ef463c67b77caf605a8"} Nov 22 10:58:43 crc kubenswrapper[4772]: I1122 10:58:43.778994 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-jqkv7" event={"ID":"c81b0bbf-5434-4a0d-94b1-8d2129c846b8","Type":"ContainerStarted","Data":"497163a333b7d91b1c734f1d6710d16007dc40d4076faa300e1f8ea20b04136f"} Nov 22 10:58:43 crc kubenswrapper[4772]: E1122 10:58:43.782963 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"object-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-object:current-podified\\\"\", failed to \"StartContainer\" for \"object-replicator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-object:current-podified\\\"\", failed to \"StartContainer\" for \"object-auditor\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-object:current-podified\\\"\", failed to \"StartContainer\" for \"object-updater\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-object:current-podified\\\"\", failed to 
\"StartContainer\" for \"rsync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-object:current-podified\\\"\", failed to \"StartContainer\" for \"swift-recon-cron\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-object:current-podified\\\"\"]" pod="openstack/swift-storage-0" podUID="354e52a7-830a-43a1-ad15-a13fe2a07222" Nov 22 10:58:43 crc kubenswrapper[4772]: I1122 10:58:43.863458 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-jqkv7" podStartSLOduration=17.863439004 podStartE2EDuration="17.863439004s" podCreationTimestamp="2025-11-22 10:58:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:58:43.862597992 +0000 UTC m=+1244.102042486" watchObservedRunningTime="2025-11-22 10:58:43.863439004 +0000 UTC m=+1244.102883488" Nov 22 10:58:44 crc kubenswrapper[4772]: I1122 10:58:44.436372 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-267ms" Nov 22 10:58:44 crc kubenswrapper[4772]: I1122 10:58:44.795090 4772 generic.go:334] "Generic (PLEG): container finished" podID="c81b0bbf-5434-4a0d-94b1-8d2129c846b8" containerID="497163a333b7d91b1c734f1d6710d16007dc40d4076faa300e1f8ea20b04136f" exitCode=0 Nov 22 10:58:44 crc kubenswrapper[4772]: I1122 10:58:44.795510 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-jqkv7" event={"ID":"c81b0bbf-5434-4a0d-94b1-8d2129c846b8","Type":"ContainerDied","Data":"497163a333b7d91b1c734f1d6710d16007dc40d4076faa300e1f8ea20b04136f"} Nov 22 10:58:44 crc kubenswrapper[4772]: I1122 10:58:44.801160 4772 generic.go:334] "Generic (PLEG): container finished" podID="d583d5fa-33d5-4225-8d1b-2f389e2f35ea" containerID="cdc8584e7921c9e3e8e90db22f83100d41cfbf8055a0f4ffb89bf16b487bb3f9" exitCode=0 Nov 22 10:58:44 crc kubenswrapper[4772]: I1122 10:58:44.801246 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-6b5c6" event={"ID":"d583d5fa-33d5-4225-8d1b-2f389e2f35ea","Type":"ContainerDied","Data":"cdc8584e7921c9e3e8e90db22f83100d41cfbf8055a0f4ffb89bf16b487bb3f9"} Nov 22 10:58:44 crc kubenswrapper[4772]: E1122 10:58:44.816014 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"object-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-object:current-podified\\\"\", failed to \"StartContainer\" for \"object-replicator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-object:current-podified\\\"\", failed to \"StartContainer\" for \"object-auditor\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-object:current-podified\\\"\", failed to \"StartContainer\" for \"object-updater\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-object:current-podified\\\"\", failed to \"StartContainer\" for \"rsync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-object:current-podified\\\"\", failed to \"StartContainer\" for \"swift-recon-cron\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-object:current-podified\\\"\"]" 
pod="openstack/swift-storage-0" podUID="354e52a7-830a-43a1-ad15-a13fe2a07222" Nov 22 10:58:45 crc kubenswrapper[4772]: E1122 10:58:45.156712 4772 kubelet_node_status.go:756] "Failed to set some node status fields" err="failed to validate nodeIP: route ip+net: no such network interface" node="crc" Nov 22 10:58:47 crc kubenswrapper[4772]: I1122 10:58:47.715857 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-46ff9" Nov 22 10:58:47 crc kubenswrapper[4772]: I1122 10:58:47.726872 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-267ms-config-fmvhg" Nov 22 10:58:47 crc kubenswrapper[4772]: I1122 10:58:47.866960 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/178ad79e-55ab-4c85-958a-2b9c60c0dfa3-scripts\") pod \"178ad79e-55ab-4c85-958a-2b9c60c0dfa3\" (UID: \"178ad79e-55ab-4c85-958a-2b9c60c0dfa3\") " Nov 22 10:58:47 crc kubenswrapper[4772]: I1122 10:58:47.867083 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5psz2\" (UniqueName: \"kubernetes.io/projected/3c8ec99a-e207-4183-b904-60921f754abf-kube-api-access-5psz2\") pod \"3c8ec99a-e207-4183-b904-60921f754abf\" (UID: \"3c8ec99a-e207-4183-b904-60921f754abf\") " Nov 22 10:58:47 crc kubenswrapper[4772]: I1122 10:58:47.867127 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhq42\" (UniqueName: \"kubernetes.io/projected/178ad79e-55ab-4c85-958a-2b9c60c0dfa3-kube-api-access-qhq42\") pod \"178ad79e-55ab-4c85-958a-2b9c60c0dfa3\" (UID: \"178ad79e-55ab-4c85-958a-2b9c60c0dfa3\") " Nov 22 10:58:47 crc kubenswrapper[4772]: I1122 10:58:47.868275 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/178ad79e-55ab-4c85-958a-2b9c60c0dfa3-var-log-ovn\") pod \"178ad79e-55ab-4c85-958a-2b9c60c0dfa3\" (UID: \"178ad79e-55ab-4c85-958a-2b9c60c0dfa3\") " Nov 22 10:58:47 crc kubenswrapper[4772]: I1122 10:58:47.868306 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/178ad79e-55ab-4c85-958a-2b9c60c0dfa3-var-run\") pod \"178ad79e-55ab-4c85-958a-2b9c60c0dfa3\" (UID: \"178ad79e-55ab-4c85-958a-2b9c60c0dfa3\") " Nov 22 10:58:47 crc kubenswrapper[4772]: I1122 10:58:47.868338 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/178ad79e-55ab-4c85-958a-2b9c60c0dfa3-var-run-ovn\") pod \"178ad79e-55ab-4c85-958a-2b9c60c0dfa3\" (UID: \"178ad79e-55ab-4c85-958a-2b9c60c0dfa3\") " Nov 22 10:58:47 crc kubenswrapper[4772]: I1122 10:58:47.868362 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/178ad79e-55ab-4c85-958a-2b9c60c0dfa3-additional-scripts\") pod \"178ad79e-55ab-4c85-958a-2b9c60c0dfa3\" (UID: \"178ad79e-55ab-4c85-958a-2b9c60c0dfa3\") " Nov 22 10:58:47 crc kubenswrapper[4772]: I1122 10:58:47.868664 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/178ad79e-55ab-4c85-958a-2b9c60c0dfa3-scripts" (OuterVolumeSpecName: "scripts") pod "178ad79e-55ab-4c85-958a-2b9c60c0dfa3" (UID: "178ad79e-55ab-4c85-958a-2b9c60c0dfa3"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:58:47 crc kubenswrapper[4772]: I1122 10:58:47.868755 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/178ad79e-55ab-4c85-958a-2b9c60c0dfa3-var-run" (OuterVolumeSpecName: "var-run") pod "178ad79e-55ab-4c85-958a-2b9c60c0dfa3" (UID: "178ad79e-55ab-4c85-958a-2b9c60c0dfa3"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 10:58:47 crc kubenswrapper[4772]: I1122 10:58:47.869031 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/178ad79e-55ab-4c85-958a-2b9c60c0dfa3-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "178ad79e-55ab-4c85-958a-2b9c60c0dfa3" (UID: "178ad79e-55ab-4c85-958a-2b9c60c0dfa3"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 10:58:47 crc kubenswrapper[4772]: I1122 10:58:47.869036 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/178ad79e-55ab-4c85-958a-2b9c60c0dfa3-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "178ad79e-55ab-4c85-958a-2b9c60c0dfa3" (UID: "178ad79e-55ab-4c85-958a-2b9c60c0dfa3"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 10:58:47 crc kubenswrapper[4772]: I1122 10:58:47.869218 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/178ad79e-55ab-4c85-958a-2b9c60c0dfa3-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "178ad79e-55ab-4c85-958a-2b9c60c0dfa3" (UID: "178ad79e-55ab-4c85-958a-2b9c60c0dfa3"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:58:47 crc kubenswrapper[4772]: I1122 10:58:47.869992 4772 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/178ad79e-55ab-4c85-958a-2b9c60c0dfa3-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 22 10:58:47 crc kubenswrapper[4772]: I1122 10:58:47.870023 4772 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/178ad79e-55ab-4c85-958a-2b9c60c0dfa3-var-run\") on node \"crc\" DevicePath \"\"" Nov 22 10:58:47 crc kubenswrapper[4772]: I1122 10:58:47.870036 4772 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/178ad79e-55ab-4c85-958a-2b9c60c0dfa3-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 22 10:58:47 crc kubenswrapper[4772]: I1122 10:58:47.870070 4772 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/178ad79e-55ab-4c85-958a-2b9c60c0dfa3-additional-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 10:58:47 crc kubenswrapper[4772]: I1122 10:58:47.870084 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/178ad79e-55ab-4c85-958a-2b9c60c0dfa3-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 10:58:47 crc kubenswrapper[4772]: I1122 10:58:47.871700 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c8ec99a-e207-4183-b904-60921f754abf-kube-api-access-5psz2" (OuterVolumeSpecName: "kube-api-access-5psz2") pod "3c8ec99a-e207-4183-b904-60921f754abf" (UID: "3c8ec99a-e207-4183-b904-60921f754abf"). InnerVolumeSpecName "kube-api-access-5psz2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:58:47 crc kubenswrapper[4772]: I1122 10:58:47.871733 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-267ms-config-fmvhg" Nov 22 10:58:47 crc kubenswrapper[4772]: I1122 10:58:47.871750 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-267ms-config-fmvhg" event={"ID":"178ad79e-55ab-4c85-958a-2b9c60c0dfa3","Type":"ContainerDied","Data":"6dbd626028cbdaccfeec608154ea43234fe5eaa8927b741a3e6226da7efa1166"} Nov 22 10:58:47 crc kubenswrapper[4772]: I1122 10:58:47.871917 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6dbd626028cbdaccfeec608154ea43234fe5eaa8927b741a3e6226da7efa1166" Nov 22 10:58:47 crc kubenswrapper[4772]: I1122 10:58:47.873381 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/178ad79e-55ab-4c85-958a-2b9c60c0dfa3-kube-api-access-qhq42" (OuterVolumeSpecName: "kube-api-access-qhq42") pod "178ad79e-55ab-4c85-958a-2b9c60c0dfa3" (UID: "178ad79e-55ab-4c85-958a-2b9c60c0dfa3"). InnerVolumeSpecName "kube-api-access-qhq42". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:58:47 crc kubenswrapper[4772]: I1122 10:58:47.873756 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-jqkv7" event={"ID":"c81b0bbf-5434-4a0d-94b1-8d2129c846b8","Type":"ContainerDied","Data":"1efd25228bdca15ca962ec5d158a7c336f5ca3b27f4d790623d3e14efd466569"} Nov 22 10:58:47 crc kubenswrapper[4772]: I1122 10:58:47.873786 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1efd25228bdca15ca962ec5d158a7c336f5ca3b27f4d790623d3e14efd466569" Nov 22 10:58:47 crc kubenswrapper[4772]: I1122 10:58:47.875655 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-6b5c6" event={"ID":"d583d5fa-33d5-4225-8d1b-2f389e2f35ea","Type":"ContainerDied","Data":"3293de38b2298e11a6c78ae45c65822bca201f0ae056c630fd023be0a71ebd09"} Nov 22 10:58:47 crc kubenswrapper[4772]: I1122 10:58:47.875680 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3293de38b2298e11a6c78ae45c65822bca201f0ae056c630fd023be0a71ebd09" Nov 22 10:58:47 crc kubenswrapper[4772]: I1122 10:58:47.876880 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-46ff9" event={"ID":"3c8ec99a-e207-4183-b904-60921f754abf","Type":"ContainerDied","Data":"77a1da976ed2c4cb02b1842d08af2cd03e3fb0a3eca08ee9d92a5a6bb7bafdd8"} Nov 22 10:58:47 crc kubenswrapper[4772]: I1122 10:58:47.876904 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77a1da976ed2c4cb02b1842d08af2cd03e3fb0a3eca08ee9d92a5a6bb7bafdd8" Nov 22 10:58:47 crc kubenswrapper[4772]: I1122 10:58:47.876947 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-46ff9" Nov 22 10:58:47 crc kubenswrapper[4772]: I1122 10:58:47.892481 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-6b5c6" Nov 22 10:58:47 crc kubenswrapper[4772]: I1122 10:58:47.941756 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-jqkv7" Nov 22 10:58:47 crc kubenswrapper[4772]: I1122 10:58:47.971315 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmx2q\" (UniqueName: \"kubernetes.io/projected/d583d5fa-33d5-4225-8d1b-2f389e2f35ea-kube-api-access-pmx2q\") pod \"d583d5fa-33d5-4225-8d1b-2f389e2f35ea\" (UID: \"d583d5fa-33d5-4225-8d1b-2f389e2f35ea\") " Nov 22 10:58:47 crc kubenswrapper[4772]: I1122 10:58:47.971734 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5psz2\" (UniqueName: \"kubernetes.io/projected/3c8ec99a-e207-4183-b904-60921f754abf-kube-api-access-5psz2\") on node \"crc\" DevicePath \"\"" Nov 22 10:58:47 crc kubenswrapper[4772]: I1122 10:58:47.971749 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhq42\" (UniqueName: \"kubernetes.io/projected/178ad79e-55ab-4c85-958a-2b9c60c0dfa3-kube-api-access-qhq42\") on node \"crc\" DevicePath \"\"" Nov 22 10:58:47 crc kubenswrapper[4772]: I1122 10:58:47.978118 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d583d5fa-33d5-4225-8d1b-2f389e2f35ea-kube-api-access-pmx2q" (OuterVolumeSpecName: "kube-api-access-pmx2q") pod "d583d5fa-33d5-4225-8d1b-2f389e2f35ea" (UID: "d583d5fa-33d5-4225-8d1b-2f389e2f35ea"). InnerVolumeSpecName "kube-api-access-pmx2q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:58:48 crc kubenswrapper[4772]: I1122 10:58:48.072307 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhmkf\" (UniqueName: \"kubernetes.io/projected/c81b0bbf-5434-4a0d-94b1-8d2129c846b8-kube-api-access-jhmkf\") pod \"c81b0bbf-5434-4a0d-94b1-8d2129c846b8\" (UID: \"c81b0bbf-5434-4a0d-94b1-8d2129c846b8\") " Nov 22 10:58:48 crc kubenswrapper[4772]: I1122 10:58:48.072853 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmx2q\" (UniqueName: \"kubernetes.io/projected/d583d5fa-33d5-4225-8d1b-2f389e2f35ea-kube-api-access-pmx2q\") on node \"crc\" DevicePath \"\"" Nov 22 10:58:48 crc kubenswrapper[4772]: I1122 10:58:48.075749 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c81b0bbf-5434-4a0d-94b1-8d2129c846b8-kube-api-access-jhmkf" (OuterVolumeSpecName: "kube-api-access-jhmkf") pod "c81b0bbf-5434-4a0d-94b1-8d2129c846b8" (UID: "c81b0bbf-5434-4a0d-94b1-8d2129c846b8"). InnerVolumeSpecName "kube-api-access-jhmkf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:58:48 crc kubenswrapper[4772]: I1122 10:58:48.174763 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhmkf\" (UniqueName: \"kubernetes.io/projected/c81b0bbf-5434-4a0d-94b1-8d2129c846b8-kube-api-access-jhmkf\") on node \"crc\" DevicePath \"\"" Nov 22 10:58:48 crc kubenswrapper[4772]: I1122 10:58:48.822566 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-267ms-config-fmvhg"] Nov 22 10:58:48 crc kubenswrapper[4772]: I1122 10:58:48.829585 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-267ms-config-fmvhg"] Nov 22 10:58:48 crc kubenswrapper[4772]: I1122 10:58:48.887214 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-hqp6h" event={"ID":"f9e8aace-e7f4-4ca7-902b-c20672965240","Type":"ContainerStarted","Data":"65405fae45c3265128be3caa906524cdbe420cae719d3d586309d72fac170304"} Nov 22 10:58:48 crc kubenswrapper[4772]: I1122 10:58:48.887238 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-jqkv7" Nov 22 10:58:48 crc kubenswrapper[4772]: I1122 10:58:48.887277 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-6b5c6" Nov 22 10:58:48 crc kubenswrapper[4772]: I1122 10:58:48.910310 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-hqp6h" podStartSLOduration=16.725958215 podStartE2EDuration="22.910293912s" podCreationTimestamp="2025-11-22 10:58:26 +0000 UTC" firstStartedPulling="2025-11-22 10:58:41.529693406 +0000 UTC m=+1241.769137900" lastFinishedPulling="2025-11-22 10:58:47.714029103 +0000 UTC m=+1247.953473597" observedRunningTime="2025-11-22 10:58:48.908936927 +0000 UTC m=+1249.148381421" watchObservedRunningTime="2025-11-22 10:58:48.910293912 +0000 UTC m=+1249.149738406" Nov 22 10:58:48 crc kubenswrapper[4772]: I1122 10:58:48.988115 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-267ms-config-zmwn4"] Nov 22 10:58:48 crc kubenswrapper[4772]: E1122 10:58:48.988540 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d583d5fa-33d5-4225-8d1b-2f389e2f35ea" containerName="mariadb-database-create" Nov 22 10:58:48 crc kubenswrapper[4772]: I1122 10:58:48.988553 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="d583d5fa-33d5-4225-8d1b-2f389e2f35ea" containerName="mariadb-database-create" Nov 22 10:58:48 crc kubenswrapper[4772]: E1122 10:58:48.988568 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="178ad79e-55ab-4c85-958a-2b9c60c0dfa3" containerName="ovn-config" Nov 22 10:58:48 crc kubenswrapper[4772]: I1122 10:58:48.988574 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="178ad79e-55ab-4c85-958a-2b9c60c0dfa3" containerName="ovn-config" Nov 22 10:58:48 crc kubenswrapper[4772]: E1122 10:58:48.988588 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c81b0bbf-5434-4a0d-94b1-8d2129c846b8" containerName="mariadb-database-create" Nov 22 10:58:48 crc kubenswrapper[4772]: I1122 10:58:48.988596 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="c81b0bbf-5434-4a0d-94b1-8d2129c846b8" containerName="mariadb-database-create" Nov 22 10:58:48 crc kubenswrapper[4772]: E1122 10:58:48.988608 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c8ec99a-e207-4183-b904-60921f754abf" 
containerName="mariadb-database-create" Nov 22 10:58:48 crc kubenswrapper[4772]: I1122 10:58:48.988615 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c8ec99a-e207-4183-b904-60921f754abf" containerName="mariadb-database-create" Nov 22 10:58:48 crc kubenswrapper[4772]: I1122 10:58:48.988765 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="178ad79e-55ab-4c85-958a-2b9c60c0dfa3" containerName="ovn-config" Nov 22 10:58:48 crc kubenswrapper[4772]: I1122 10:58:48.988777 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c8ec99a-e207-4183-b904-60921f754abf" containerName="mariadb-database-create" Nov 22 10:58:48 crc kubenswrapper[4772]: I1122 10:58:48.988796 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="d583d5fa-33d5-4225-8d1b-2f389e2f35ea" containerName="mariadb-database-create" Nov 22 10:58:48 crc kubenswrapper[4772]: I1122 10:58:48.988808 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="c81b0bbf-5434-4a0d-94b1-8d2129c846b8" containerName="mariadb-database-create" Nov 22 10:58:48 crc kubenswrapper[4772]: I1122 10:58:48.989353 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-267ms-config-zmwn4" Nov 22 10:58:48 crc kubenswrapper[4772]: I1122 10:58:48.997968 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Nov 22 10:58:49 crc kubenswrapper[4772]: I1122 10:58:49.055140 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-267ms-config-zmwn4"] Nov 22 10:58:49 crc kubenswrapper[4772]: I1122 10:58:49.088889 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/816c2741-d7a5-428e-8539-227169d57ce0-scripts\") pod \"ovn-controller-267ms-config-zmwn4\" (UID: \"816c2741-d7a5-428e-8539-227169d57ce0\") " pod="openstack/ovn-controller-267ms-config-zmwn4" Nov 22 10:58:49 crc kubenswrapper[4772]: I1122 10:58:49.088987 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/816c2741-d7a5-428e-8539-227169d57ce0-var-log-ovn\") pod \"ovn-controller-267ms-config-zmwn4\" (UID: \"816c2741-d7a5-428e-8539-227169d57ce0\") " pod="openstack/ovn-controller-267ms-config-zmwn4" Nov 22 10:58:49 crc kubenswrapper[4772]: I1122 10:58:49.089018 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8sgd\" (UniqueName: \"kubernetes.io/projected/816c2741-d7a5-428e-8539-227169d57ce0-kube-api-access-q8sgd\") pod \"ovn-controller-267ms-config-zmwn4\" (UID: \"816c2741-d7a5-428e-8539-227169d57ce0\") " pod="openstack/ovn-controller-267ms-config-zmwn4" Nov 22 10:58:49 crc kubenswrapper[4772]: I1122 10:58:49.089074 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/816c2741-d7a5-428e-8539-227169d57ce0-var-run\") pod \"ovn-controller-267ms-config-zmwn4\" (UID: \"816c2741-d7a5-428e-8539-227169d57ce0\") " pod="openstack/ovn-controller-267ms-config-zmwn4" Nov 22 10:58:49 crc kubenswrapper[4772]: I1122 10:58:49.089168 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/816c2741-d7a5-428e-8539-227169d57ce0-var-run-ovn\") pod 
\"ovn-controller-267ms-config-zmwn4\" (UID: \"816c2741-d7a5-428e-8539-227169d57ce0\") " pod="openstack/ovn-controller-267ms-config-zmwn4" Nov 22 10:58:49 crc kubenswrapper[4772]: I1122 10:58:49.089190 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/816c2741-d7a5-428e-8539-227169d57ce0-additional-scripts\") pod \"ovn-controller-267ms-config-zmwn4\" (UID: \"816c2741-d7a5-428e-8539-227169d57ce0\") " pod="openstack/ovn-controller-267ms-config-zmwn4" Nov 22 10:58:49 crc kubenswrapper[4772]: I1122 10:58:49.190185 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/816c2741-d7a5-428e-8539-227169d57ce0-var-run-ovn\") pod \"ovn-controller-267ms-config-zmwn4\" (UID: \"816c2741-d7a5-428e-8539-227169d57ce0\") " pod="openstack/ovn-controller-267ms-config-zmwn4" Nov 22 10:58:49 crc kubenswrapper[4772]: I1122 10:58:49.190238 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/816c2741-d7a5-428e-8539-227169d57ce0-additional-scripts\") pod \"ovn-controller-267ms-config-zmwn4\" (UID: \"816c2741-d7a5-428e-8539-227169d57ce0\") " pod="openstack/ovn-controller-267ms-config-zmwn4" Nov 22 10:58:49 crc kubenswrapper[4772]: I1122 10:58:49.190270 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/816c2741-d7a5-428e-8539-227169d57ce0-scripts\") pod \"ovn-controller-267ms-config-zmwn4\" (UID: \"816c2741-d7a5-428e-8539-227169d57ce0\") " pod="openstack/ovn-controller-267ms-config-zmwn4" Nov 22 10:58:49 crc kubenswrapper[4772]: I1122 10:58:49.190339 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/816c2741-d7a5-428e-8539-227169d57ce0-var-log-ovn\") pod \"ovn-controller-267ms-config-zmwn4\" (UID: \"816c2741-d7a5-428e-8539-227169d57ce0\") " pod="openstack/ovn-controller-267ms-config-zmwn4" Nov 22 10:58:49 crc kubenswrapper[4772]: I1122 10:58:49.190380 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8sgd\" (UniqueName: \"kubernetes.io/projected/816c2741-d7a5-428e-8539-227169d57ce0-kube-api-access-q8sgd\") pod \"ovn-controller-267ms-config-zmwn4\" (UID: \"816c2741-d7a5-428e-8539-227169d57ce0\") " pod="openstack/ovn-controller-267ms-config-zmwn4" Nov 22 10:58:49 crc kubenswrapper[4772]: I1122 10:58:49.190435 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/816c2741-d7a5-428e-8539-227169d57ce0-var-run\") pod \"ovn-controller-267ms-config-zmwn4\" (UID: \"816c2741-d7a5-428e-8539-227169d57ce0\") " pod="openstack/ovn-controller-267ms-config-zmwn4" Nov 22 10:58:49 crc kubenswrapper[4772]: I1122 10:58:49.190522 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/816c2741-d7a5-428e-8539-227169d57ce0-var-log-ovn\") pod \"ovn-controller-267ms-config-zmwn4\" (UID: \"816c2741-d7a5-428e-8539-227169d57ce0\") " pod="openstack/ovn-controller-267ms-config-zmwn4" Nov 22 10:58:49 crc kubenswrapper[4772]: I1122 10:58:49.190522 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/816c2741-d7a5-428e-8539-227169d57ce0-var-run\") pod \"ovn-controller-267ms-config-zmwn4\" (UID: \"816c2741-d7a5-428e-8539-227169d57ce0\") " pod="openstack/ovn-controller-267ms-config-zmwn4" Nov 22 10:58:49 crc kubenswrapper[4772]: I1122 10:58:49.190584 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/816c2741-d7a5-428e-8539-227169d57ce0-var-run-ovn\") pod \"ovn-controller-267ms-config-zmwn4\" (UID: \"816c2741-d7a5-428e-8539-227169d57ce0\") " pod="openstack/ovn-controller-267ms-config-zmwn4" Nov 22 10:58:49 crc kubenswrapper[4772]: I1122 10:58:49.191152 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/816c2741-d7a5-428e-8539-227169d57ce0-additional-scripts\") pod \"ovn-controller-267ms-config-zmwn4\" (UID: \"816c2741-d7a5-428e-8539-227169d57ce0\") " pod="openstack/ovn-controller-267ms-config-zmwn4" Nov 22 10:58:49 crc kubenswrapper[4772]: I1122 10:58:49.192411 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/816c2741-d7a5-428e-8539-227169d57ce0-scripts\") pod \"ovn-controller-267ms-config-zmwn4\" (UID: \"816c2741-d7a5-428e-8539-227169d57ce0\") " pod="openstack/ovn-controller-267ms-config-zmwn4" Nov 22 10:58:49 crc kubenswrapper[4772]: I1122 10:58:49.207930 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8sgd\" (UniqueName: \"kubernetes.io/projected/816c2741-d7a5-428e-8539-227169d57ce0-kube-api-access-q8sgd\") pod \"ovn-controller-267ms-config-zmwn4\" (UID: \"816c2741-d7a5-428e-8539-227169d57ce0\") " pod="openstack/ovn-controller-267ms-config-zmwn4" Nov 22 10:58:49 crc kubenswrapper[4772]: I1122 10:58:49.337315 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-267ms-config-zmwn4" Nov 22 10:58:49 crc kubenswrapper[4772]: I1122 10:58:49.447641 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="178ad79e-55ab-4c85-958a-2b9c60c0dfa3" path="/var/lib/kubelet/pods/178ad79e-55ab-4c85-958a-2b9c60c0dfa3/volumes" Nov 22 10:58:49 crc kubenswrapper[4772]: I1122 10:58:49.799229 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-267ms-config-zmwn4"] Nov 22 10:58:49 crc kubenswrapper[4772]: I1122 10:58:49.894974 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-267ms-config-zmwn4" event={"ID":"816c2741-d7a5-428e-8539-227169d57ce0","Type":"ContainerStarted","Data":"238b0912bb46a8fbb8d0491dd452ab3f68ee82e2f1b09baaac5cf4216e1afae1"} Nov 22 10:58:50 crc kubenswrapper[4772]: I1122 10:58:50.903850 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-267ms-config-zmwn4" event={"ID":"816c2741-d7a5-428e-8539-227169d57ce0","Type":"ContainerStarted","Data":"cfe11706a0850a328cd5f9163468f6f13ff412c16e00bdd5d4c3a82bca60e5d7"} Nov 22 10:58:51 crc kubenswrapper[4772]: I1122 10:58:51.919012 4772 generic.go:334] "Generic (PLEG): container finished" podID="816c2741-d7a5-428e-8539-227169d57ce0" containerID="cfe11706a0850a328cd5f9163468f6f13ff412c16e00bdd5d4c3a82bca60e5d7" exitCode=0 Nov 22 10:58:51 crc kubenswrapper[4772]: I1122 10:58:51.919105 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-267ms-config-zmwn4" event={"ID":"816c2741-d7a5-428e-8539-227169d57ce0","Type":"ContainerDied","Data":"cfe11706a0850a328cd5f9163468f6f13ff412c16e00bdd5d4c3a82bca60e5d7"} Nov 22 10:58:52 crc kubenswrapper[4772]: I1122 10:58:52.928392 4772 generic.go:334] "Generic (PLEG): container finished" podID="f9e8aace-e7f4-4ca7-902b-c20672965240" containerID="65405fae45c3265128be3caa906524cdbe420cae719d3d586309d72fac170304" exitCode=0 Nov 22 10:58:52 crc kubenswrapper[4772]: I1122 10:58:52.928596 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-hqp6h" event={"ID":"f9e8aace-e7f4-4ca7-902b-c20672965240","Type":"ContainerDied","Data":"65405fae45c3265128be3caa906524cdbe420cae719d3d586309d72fac170304"} Nov 22 10:58:53 crc kubenswrapper[4772]: I1122 10:58:53.223162 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-267ms-config-zmwn4" Nov 22 10:58:53 crc kubenswrapper[4772]: I1122 10:58:53.356103 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/816c2741-d7a5-428e-8539-227169d57ce0-var-run\") pod \"816c2741-d7a5-428e-8539-227169d57ce0\" (UID: \"816c2741-d7a5-428e-8539-227169d57ce0\") " Nov 22 10:58:53 crc kubenswrapper[4772]: I1122 10:58:53.356201 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8sgd\" (UniqueName: \"kubernetes.io/projected/816c2741-d7a5-428e-8539-227169d57ce0-kube-api-access-q8sgd\") pod \"816c2741-d7a5-428e-8539-227169d57ce0\" (UID: \"816c2741-d7a5-428e-8539-227169d57ce0\") " Nov 22 10:58:53 crc kubenswrapper[4772]: I1122 10:58:53.356239 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/816c2741-d7a5-428e-8539-227169d57ce0-scripts\") pod \"816c2741-d7a5-428e-8539-227169d57ce0\" (UID: \"816c2741-d7a5-428e-8539-227169d57ce0\") " Nov 22 10:58:53 crc kubenswrapper[4772]: I1122 10:58:53.356241 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/816c2741-d7a5-428e-8539-227169d57ce0-var-run" (OuterVolumeSpecName: "var-run") pod "816c2741-d7a5-428e-8539-227169d57ce0" (UID: "816c2741-d7a5-428e-8539-227169d57ce0"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 10:58:53 crc kubenswrapper[4772]: I1122 10:58:53.356283 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/816c2741-d7a5-428e-8539-227169d57ce0-var-log-ovn\") pod \"816c2741-d7a5-428e-8539-227169d57ce0\" (UID: \"816c2741-d7a5-428e-8539-227169d57ce0\") " Nov 22 10:58:53 crc kubenswrapper[4772]: I1122 10:58:53.356333 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/816c2741-d7a5-428e-8539-227169d57ce0-var-run-ovn\") pod \"816c2741-d7a5-428e-8539-227169d57ce0\" (UID: \"816c2741-d7a5-428e-8539-227169d57ce0\") " Nov 22 10:58:53 crc kubenswrapper[4772]: I1122 10:58:53.356454 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/816c2741-d7a5-428e-8539-227169d57ce0-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "816c2741-d7a5-428e-8539-227169d57ce0" (UID: "816c2741-d7a5-428e-8539-227169d57ce0"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 10:58:53 crc kubenswrapper[4772]: I1122 10:58:53.356504 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/816c2741-d7a5-428e-8539-227169d57ce0-additional-scripts\") pod \"816c2741-d7a5-428e-8539-227169d57ce0\" (UID: \"816c2741-d7a5-428e-8539-227169d57ce0\") " Nov 22 10:58:53 crc kubenswrapper[4772]: I1122 10:58:53.356546 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/816c2741-d7a5-428e-8539-227169d57ce0-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "816c2741-d7a5-428e-8539-227169d57ce0" (UID: "816c2741-d7a5-428e-8539-227169d57ce0"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 10:58:53 crc kubenswrapper[4772]: I1122 10:58:53.356829 4772 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/816c2741-d7a5-428e-8539-227169d57ce0-var-run\") on node \"crc\" DevicePath \"\"" Nov 22 10:58:53 crc kubenswrapper[4772]: I1122 10:58:53.356849 4772 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/816c2741-d7a5-428e-8539-227169d57ce0-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 22 10:58:53 crc kubenswrapper[4772]: I1122 10:58:53.356858 4772 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/816c2741-d7a5-428e-8539-227169d57ce0-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 22 10:58:53 crc kubenswrapper[4772]: I1122 10:58:53.357275 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/816c2741-d7a5-428e-8539-227169d57ce0-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "816c2741-d7a5-428e-8539-227169d57ce0" (UID: "816c2741-d7a5-428e-8539-227169d57ce0"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:58:53 crc kubenswrapper[4772]: I1122 10:58:53.357576 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/816c2741-d7a5-428e-8539-227169d57ce0-scripts" (OuterVolumeSpecName: "scripts") pod "816c2741-d7a5-428e-8539-227169d57ce0" (UID: "816c2741-d7a5-428e-8539-227169d57ce0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:58:53 crc kubenswrapper[4772]: I1122 10:58:53.361755 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/816c2741-d7a5-428e-8539-227169d57ce0-kube-api-access-q8sgd" (OuterVolumeSpecName: "kube-api-access-q8sgd") pod "816c2741-d7a5-428e-8539-227169d57ce0" (UID: "816c2741-d7a5-428e-8539-227169d57ce0"). InnerVolumeSpecName "kube-api-access-q8sgd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:58:53 crc kubenswrapper[4772]: I1122 10:58:53.458463 4772 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/816c2741-d7a5-428e-8539-227169d57ce0-additional-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 10:58:53 crc kubenswrapper[4772]: I1122 10:58:53.458494 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8sgd\" (UniqueName: \"kubernetes.io/projected/816c2741-d7a5-428e-8539-227169d57ce0-kube-api-access-q8sgd\") on node \"crc\" DevicePath \"\"" Nov 22 10:58:53 crc kubenswrapper[4772]: I1122 10:58:53.458505 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/816c2741-d7a5-428e-8539-227169d57ce0-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 10:58:53 crc kubenswrapper[4772]: I1122 10:58:53.940910 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-267ms-config-zmwn4" Nov 22 10:58:53 crc kubenswrapper[4772]: I1122 10:58:53.940927 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-267ms-config-zmwn4" event={"ID":"816c2741-d7a5-428e-8539-227169d57ce0","Type":"ContainerDied","Data":"238b0912bb46a8fbb8d0491dd452ab3f68ee82e2f1b09baaac5cf4216e1afae1"} Nov 22 10:58:53 crc kubenswrapper[4772]: I1122 10:58:53.941275 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="238b0912bb46a8fbb8d0491dd452ab3f68ee82e2f1b09baaac5cf4216e1afae1" Nov 22 10:58:54 crc kubenswrapper[4772]: I1122 10:58:54.216090 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-hqp6h" Nov 22 10:58:54 crc kubenswrapper[4772]: I1122 10:58:54.283102 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njjmm\" (UniqueName: \"kubernetes.io/projected/f9e8aace-e7f4-4ca7-902b-c20672965240-kube-api-access-njjmm\") pod \"f9e8aace-e7f4-4ca7-902b-c20672965240\" (UID: \"f9e8aace-e7f4-4ca7-902b-c20672965240\") " Nov 22 10:58:54 crc kubenswrapper[4772]: I1122 10:58:54.283224 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9e8aace-e7f4-4ca7-902b-c20672965240-config-data\") pod \"f9e8aace-e7f4-4ca7-902b-c20672965240\" (UID: \"f9e8aace-e7f4-4ca7-902b-c20672965240\") " Nov 22 10:58:54 crc kubenswrapper[4772]: I1122 10:58:54.283298 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9e8aace-e7f4-4ca7-902b-c20672965240-combined-ca-bundle\") pod \"f9e8aace-e7f4-4ca7-902b-c20672965240\" (UID: \"f9e8aace-e7f4-4ca7-902b-c20672965240\") " Nov 22 10:58:54 crc kubenswrapper[4772]: I1122 10:58:54.289165 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9e8aace-e7f4-4ca7-902b-c20672965240-kube-api-access-njjmm" (OuterVolumeSpecName: "kube-api-access-njjmm") pod "f9e8aace-e7f4-4ca7-902b-c20672965240" (UID: "f9e8aace-e7f4-4ca7-902b-c20672965240"). InnerVolumeSpecName "kube-api-access-njjmm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:58:54 crc kubenswrapper[4772]: I1122 10:58:54.304107 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-267ms-config-zmwn4"] Nov 22 10:58:54 crc kubenswrapper[4772]: I1122 10:58:54.323063 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-267ms-config-zmwn4"] Nov 22 10:58:54 crc kubenswrapper[4772]: I1122 10:58:54.331135 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9e8aace-e7f4-4ca7-902b-c20672965240-config-data" (OuterVolumeSpecName: "config-data") pod "f9e8aace-e7f4-4ca7-902b-c20672965240" (UID: "f9e8aace-e7f4-4ca7-902b-c20672965240"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:58:54 crc kubenswrapper[4772]: I1122 10:58:54.333744 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9e8aace-e7f4-4ca7-902b-c20672965240-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f9e8aace-e7f4-4ca7-902b-c20672965240" (UID: "f9e8aace-e7f4-4ca7-902b-c20672965240"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:58:54 crc kubenswrapper[4772]: I1122 10:58:54.384709 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njjmm\" (UniqueName: \"kubernetes.io/projected/f9e8aace-e7f4-4ca7-902b-c20672965240-kube-api-access-njjmm\") on node \"crc\" DevicePath \"\"" Nov 22 10:58:54 crc kubenswrapper[4772]: I1122 10:58:54.384739 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9e8aace-e7f4-4ca7-902b-c20672965240-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 10:58:54 crc kubenswrapper[4772]: I1122 10:58:54.384750 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9e8aace-e7f4-4ca7-902b-c20672965240-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 10:58:54 crc kubenswrapper[4772]: I1122 10:58:54.950368 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-hqp6h" event={"ID":"f9e8aace-e7f4-4ca7-902b-c20672965240","Type":"ContainerDied","Data":"7a8c727b07bd7558e4b60f34689841b1702505af5e4ad47a7422b232b33e93dd"} Nov 22 10:58:54 crc kubenswrapper[4772]: I1122 10:58:54.950676 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a8c727b07bd7558e4b60f34689841b1702505af5e4ad47a7422b232b33e93dd" Nov 22 10:58:54 crc kubenswrapper[4772]: I1122 10:58:54.950419 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-hqp6h" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.195488 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-hvdw9"] Nov 22 10:58:55 crc kubenswrapper[4772]: E1122 10:58:55.196204 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="816c2741-d7a5-428e-8539-227169d57ce0" containerName="ovn-config" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.196220 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="816c2741-d7a5-428e-8539-227169d57ce0" containerName="ovn-config" Nov 22 10:58:55 crc kubenswrapper[4772]: E1122 10:58:55.196244 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9e8aace-e7f4-4ca7-902b-c20672965240" containerName="keystone-db-sync" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.196250 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9e8aace-e7f4-4ca7-902b-c20672965240" containerName="keystone-db-sync" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.196420 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9e8aace-e7f4-4ca7-902b-c20672965240" containerName="keystone-db-sync" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.196431 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="816c2741-d7a5-428e-8539-227169d57ce0" containerName="ovn-config" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.197303 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f877ddd87-hvdw9" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.209343 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-hvdw9"] Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.216848 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-7pxjz"] Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.217926 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-7pxjz" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.224071 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.227280 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.227394 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.227799 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-x8vg6" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.265771 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-7pxjz"] Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.301128 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1a6bea34-a591-4e26-8ed1-cc6d65b3aa18-ovsdbserver-sb\") pod \"dnsmasq-dns-f877ddd87-hvdw9\" (UID: \"1a6bea34-a591-4e26-8ed1-cc6d65b3aa18\") " pod="openstack/dnsmasq-dns-f877ddd87-hvdw9" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.301268 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7689e48-58a0-4195-9639-4891994d4327-combined-ca-bundle\") pod \"keystone-bootstrap-7pxjz\" (UID: \"b7689e48-58a0-4195-9639-4891994d4327\") " pod="openstack/keystone-bootstrap-7pxjz" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.301327 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l27gc\" (UniqueName: \"kubernetes.io/projected/1a6bea34-a591-4e26-8ed1-cc6d65b3aa18-kube-api-access-l27gc\") pod \"dnsmasq-dns-f877ddd87-hvdw9\" (UID: \"1a6bea34-a591-4e26-8ed1-cc6d65b3aa18\") " pod="openstack/dnsmasq-dns-f877ddd87-hvdw9" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.301352 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj2w7\" (UniqueName: \"kubernetes.io/projected/b7689e48-58a0-4195-9639-4891994d4327-kube-api-access-rj2w7\") pod \"keystone-bootstrap-7pxjz\" (UID: \"b7689e48-58a0-4195-9639-4891994d4327\") " pod="openstack/keystone-bootstrap-7pxjz" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.301383 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1a6bea34-a591-4e26-8ed1-cc6d65b3aa18-ovsdbserver-nb\") pod \"dnsmasq-dns-f877ddd87-hvdw9\" (UID: \"1a6bea34-a591-4e26-8ed1-cc6d65b3aa18\") " pod="openstack/dnsmasq-dns-f877ddd87-hvdw9" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.301423 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a6bea34-a591-4e26-8ed1-cc6d65b3aa18-config\") pod \"dnsmasq-dns-f877ddd87-hvdw9\" (UID: \"1a6bea34-a591-4e26-8ed1-cc6d65b3aa18\") " pod="openstack/dnsmasq-dns-f877ddd87-hvdw9" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.301447 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/b7689e48-58a0-4195-9639-4891994d4327-scripts\") pod \"keystone-bootstrap-7pxjz\" (UID: \"b7689e48-58a0-4195-9639-4891994d4327\") " pod="openstack/keystone-bootstrap-7pxjz" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.301477 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b7689e48-58a0-4195-9639-4891994d4327-credential-keys\") pod \"keystone-bootstrap-7pxjz\" (UID: \"b7689e48-58a0-4195-9639-4891994d4327\") " pod="openstack/keystone-bootstrap-7pxjz" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.301514 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7689e48-58a0-4195-9639-4891994d4327-config-data\") pod \"keystone-bootstrap-7pxjz\" (UID: \"b7689e48-58a0-4195-9639-4891994d4327\") " pod="openstack/keystone-bootstrap-7pxjz" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.301543 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1a6bea34-a591-4e26-8ed1-cc6d65b3aa18-dns-svc\") pod \"dnsmasq-dns-f877ddd87-hvdw9\" (UID: \"1a6bea34-a591-4e26-8ed1-cc6d65b3aa18\") " pod="openstack/dnsmasq-dns-f877ddd87-hvdw9" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.301591 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b7689e48-58a0-4195-9639-4891994d4327-fernet-keys\") pod \"keystone-bootstrap-7pxjz\" (UID: \"b7689e48-58a0-4195-9639-4891994d4327\") " pod="openstack/keystone-bootstrap-7pxjz" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.402704 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7689e48-58a0-4195-9639-4891994d4327-combined-ca-bundle\") pod \"keystone-bootstrap-7pxjz\" (UID: \"b7689e48-58a0-4195-9639-4891994d4327\") " pod="openstack/keystone-bootstrap-7pxjz" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.402790 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l27gc\" (UniqueName: \"kubernetes.io/projected/1a6bea34-a591-4e26-8ed1-cc6d65b3aa18-kube-api-access-l27gc\") pod \"dnsmasq-dns-f877ddd87-hvdw9\" (UID: \"1a6bea34-a591-4e26-8ed1-cc6d65b3aa18\") " pod="openstack/dnsmasq-dns-f877ddd87-hvdw9" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.402818 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rj2w7\" (UniqueName: \"kubernetes.io/projected/b7689e48-58a0-4195-9639-4891994d4327-kube-api-access-rj2w7\") pod \"keystone-bootstrap-7pxjz\" (UID: \"b7689e48-58a0-4195-9639-4891994d4327\") " pod="openstack/keystone-bootstrap-7pxjz" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.402844 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1a6bea34-a591-4e26-8ed1-cc6d65b3aa18-ovsdbserver-nb\") pod \"dnsmasq-dns-f877ddd87-hvdw9\" (UID: \"1a6bea34-a591-4e26-8ed1-cc6d65b3aa18\") " pod="openstack/dnsmasq-dns-f877ddd87-hvdw9" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.402875 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1a6bea34-a591-4e26-8ed1-cc6d65b3aa18-config\") pod \"dnsmasq-dns-f877ddd87-hvdw9\" (UID: \"1a6bea34-a591-4e26-8ed1-cc6d65b3aa18\") " pod="openstack/dnsmasq-dns-f877ddd87-hvdw9" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.402901 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7689e48-58a0-4195-9639-4891994d4327-scripts\") pod \"keystone-bootstrap-7pxjz\" (UID: \"b7689e48-58a0-4195-9639-4891994d4327\") " pod="openstack/keystone-bootstrap-7pxjz" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.402926 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b7689e48-58a0-4195-9639-4891994d4327-credential-keys\") pod \"keystone-bootstrap-7pxjz\" (UID: \"b7689e48-58a0-4195-9639-4891994d4327\") " pod="openstack/keystone-bootstrap-7pxjz" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.402957 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7689e48-58a0-4195-9639-4891994d4327-config-data\") pod \"keystone-bootstrap-7pxjz\" (UID: \"b7689e48-58a0-4195-9639-4891994d4327\") " pod="openstack/keystone-bootstrap-7pxjz" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.402989 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1a6bea34-a591-4e26-8ed1-cc6d65b3aa18-dns-svc\") pod \"dnsmasq-dns-f877ddd87-hvdw9\" (UID: \"1a6bea34-a591-4e26-8ed1-cc6d65b3aa18\") " pod="openstack/dnsmasq-dns-f877ddd87-hvdw9" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.403034 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b7689e48-58a0-4195-9639-4891994d4327-fernet-keys\") pod \"keystone-bootstrap-7pxjz\" (UID: \"b7689e48-58a0-4195-9639-4891994d4327\") " pod="openstack/keystone-bootstrap-7pxjz" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.403215 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1a6bea34-a591-4e26-8ed1-cc6d65b3aa18-ovsdbserver-sb\") pod \"dnsmasq-dns-f877ddd87-hvdw9\" (UID: \"1a6bea34-a591-4e26-8ed1-cc6d65b3aa18\") " pod="openstack/dnsmasq-dns-f877ddd87-hvdw9" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.404340 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1a6bea34-a591-4e26-8ed1-cc6d65b3aa18-ovsdbserver-sb\") pod \"dnsmasq-dns-f877ddd87-hvdw9\" (UID: \"1a6bea34-a591-4e26-8ed1-cc6d65b3aa18\") " pod="openstack/dnsmasq-dns-f877ddd87-hvdw9" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.405821 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1a6bea34-a591-4e26-8ed1-cc6d65b3aa18-dns-svc\") pod \"dnsmasq-dns-f877ddd87-hvdw9\" (UID: \"1a6bea34-a591-4e26-8ed1-cc6d65b3aa18\") " pod="openstack/dnsmasq-dns-f877ddd87-hvdw9" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.406294 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1a6bea34-a591-4e26-8ed1-cc6d65b3aa18-ovsdbserver-nb\") pod \"dnsmasq-dns-f877ddd87-hvdw9\" (UID: \"1a6bea34-a591-4e26-8ed1-cc6d65b3aa18\") " 
pod="openstack/dnsmasq-dns-f877ddd87-hvdw9" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.406412 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a6bea34-a591-4e26-8ed1-cc6d65b3aa18-config\") pod \"dnsmasq-dns-f877ddd87-hvdw9\" (UID: \"1a6bea34-a591-4e26-8ed1-cc6d65b3aa18\") " pod="openstack/dnsmasq-dns-f877ddd87-hvdw9" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.412673 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7689e48-58a0-4195-9639-4891994d4327-scripts\") pod \"keystone-bootstrap-7pxjz\" (UID: \"b7689e48-58a0-4195-9639-4891994d4327\") " pod="openstack/keystone-bootstrap-7pxjz" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.412855 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b7689e48-58a0-4195-9639-4891994d4327-credential-keys\") pod \"keystone-bootstrap-7pxjz\" (UID: \"b7689e48-58a0-4195-9639-4891994d4327\") " pod="openstack/keystone-bootstrap-7pxjz" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.432126 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rj2w7\" (UniqueName: \"kubernetes.io/projected/b7689e48-58a0-4195-9639-4891994d4327-kube-api-access-rj2w7\") pod \"keystone-bootstrap-7pxjz\" (UID: \"b7689e48-58a0-4195-9639-4891994d4327\") " pod="openstack/keystone-bootstrap-7pxjz" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.434396 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b7689e48-58a0-4195-9639-4891994d4327-fernet-keys\") pod \"keystone-bootstrap-7pxjz\" (UID: \"b7689e48-58a0-4195-9639-4891994d4327\") " pod="openstack/keystone-bootstrap-7pxjz" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.435224 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7689e48-58a0-4195-9639-4891994d4327-config-data\") pod \"keystone-bootstrap-7pxjz\" (UID: \"b7689e48-58a0-4195-9639-4891994d4327\") " pod="openstack/keystone-bootstrap-7pxjz" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.443155 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l27gc\" (UniqueName: \"kubernetes.io/projected/1a6bea34-a591-4e26-8ed1-cc6d65b3aa18-kube-api-access-l27gc\") pod \"dnsmasq-dns-f877ddd87-hvdw9\" (UID: \"1a6bea34-a591-4e26-8ed1-cc6d65b3aa18\") " pod="openstack/dnsmasq-dns-f877ddd87-hvdw9" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.451696 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7689e48-58a0-4195-9639-4891994d4327-combined-ca-bundle\") pod \"keystone-bootstrap-7pxjz\" (UID: \"b7689e48-58a0-4195-9639-4891994d4327\") " pod="openstack/keystone-bootstrap-7pxjz" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.461429 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="816c2741-d7a5-428e-8539-227169d57ce0" path="/var/lib/kubelet/pods/816c2741-d7a5-428e-8539-227169d57ce0/volumes" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.462650 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.478995 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.479162 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.483371 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.509510 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25rml\" (UniqueName: \"kubernetes.io/projected/d8c352bf-9815-42e1-8944-87a22e24c355-kube-api-access-25rml\") pod \"ceilometer-0\" (UID: \"d8c352bf-9815-42e1-8944-87a22e24c355\") " pod="openstack/ceilometer-0" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.509576 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8c352bf-9815-42e1-8944-87a22e24c355-run-httpd\") pod \"ceilometer-0\" (UID: \"d8c352bf-9815-42e1-8944-87a22e24c355\") " pod="openstack/ceilometer-0" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.509607 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8c352bf-9815-42e1-8944-87a22e24c355-config-data\") pod \"ceilometer-0\" (UID: \"d8c352bf-9815-42e1-8944-87a22e24c355\") " pod="openstack/ceilometer-0" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.509652 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8c352bf-9815-42e1-8944-87a22e24c355-scripts\") pod \"ceilometer-0\" (UID: \"d8c352bf-9815-42e1-8944-87a22e24c355\") " pod="openstack/ceilometer-0" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.509702 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8c352bf-9815-42e1-8944-87a22e24c355-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d8c352bf-9815-42e1-8944-87a22e24c355\") " pod="openstack/ceilometer-0" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.509809 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8c352bf-9815-42e1-8944-87a22e24c355-log-httpd\") pod \"ceilometer-0\" (UID: \"d8c352bf-9815-42e1-8944-87a22e24c355\") " pod="openstack/ceilometer-0" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.509889 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d8c352bf-9815-42e1-8944-87a22e24c355-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d8c352bf-9815-42e1-8944-87a22e24c355\") " pod="openstack/ceilometer-0" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.519226 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.532720 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f877ddd87-hvdw9" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.540785 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-7pxjz" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.591742 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-hvdw9"] Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.600419 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-tzx4g"] Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.601719 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-tzx4g" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.611481 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8c352bf-9815-42e1-8944-87a22e24c355-log-httpd\") pod \"ceilometer-0\" (UID: \"d8c352bf-9815-42e1-8944-87a22e24c355\") " pod="openstack/ceilometer-0" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.611534 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d8c352bf-9815-42e1-8944-87a22e24c355-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d8c352bf-9815-42e1-8944-87a22e24c355\") " pod="openstack/ceilometer-0" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.611576 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25rml\" (UniqueName: \"kubernetes.io/projected/d8c352bf-9815-42e1-8944-87a22e24c355-kube-api-access-25rml\") pod \"ceilometer-0\" (UID: \"d8c352bf-9815-42e1-8944-87a22e24c355\") " pod="openstack/ceilometer-0" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.611598 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8c352bf-9815-42e1-8944-87a22e24c355-run-httpd\") pod \"ceilometer-0\" (UID: \"d8c352bf-9815-42e1-8944-87a22e24c355\") " pod="openstack/ceilometer-0" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.611612 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8c352bf-9815-42e1-8944-87a22e24c355-config-data\") pod \"ceilometer-0\" (UID: \"d8c352bf-9815-42e1-8944-87a22e24c355\") " pod="openstack/ceilometer-0" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.611640 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8c352bf-9815-42e1-8944-87a22e24c355-scripts\") pod \"ceilometer-0\" (UID: \"d8c352bf-9815-42e1-8944-87a22e24c355\") " pod="openstack/ceilometer-0" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.611671 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8c352bf-9815-42e1-8944-87a22e24c355-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d8c352bf-9815-42e1-8944-87a22e24c355\") " pod="openstack/ceilometer-0" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.627288 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-tzx4g"] Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.627721 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-2dm2d" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.627911 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 22 
10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.628557 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8c352bf-9815-42e1-8944-87a22e24c355-log-httpd\") pod \"ceilometer-0\" (UID: \"d8c352bf-9815-42e1-8944-87a22e24c355\") " pod="openstack/ceilometer-0" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.628757 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8c352bf-9815-42e1-8944-87a22e24c355-run-httpd\") pod \"ceilometer-0\" (UID: \"d8c352bf-9815-42e1-8944-87a22e24c355\") " pod="openstack/ceilometer-0" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.631177 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.634810 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8c352bf-9815-42e1-8944-87a22e24c355-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d8c352bf-9815-42e1-8944-87a22e24c355\") " pod="openstack/ceilometer-0" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.643314 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d8c352bf-9815-42e1-8944-87a22e24c355-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d8c352bf-9815-42e1-8944-87a22e24c355\") " pod="openstack/ceilometer-0" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.647893 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8c352bf-9815-42e1-8944-87a22e24c355-scripts\") pod \"ceilometer-0\" (UID: \"d8c352bf-9815-42e1-8944-87a22e24c355\") " pod="openstack/ceilometer-0" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.658887 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8c352bf-9815-42e1-8944-87a22e24c355-config-data\") pod \"ceilometer-0\" (UID: \"d8c352bf-9815-42e1-8944-87a22e24c355\") " pod="openstack/ceilometer-0" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.711967 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25rml\" (UniqueName: \"kubernetes.io/projected/d8c352bf-9815-42e1-8944-87a22e24c355-kube-api-access-25rml\") pod \"ceilometer-0\" (UID: \"d8c352bf-9815-42e1-8944-87a22e24c355\") " pod="openstack/ceilometer-0" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.713267 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dadab934-440c-4b33-8f4a-1790b0040061-scripts\") pod \"placement-db-sync-tzx4g\" (UID: \"dadab934-440c-4b33-8f4a-1790b0040061\") " pod="openstack/placement-db-sync-tzx4g" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.713295 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dadab934-440c-4b33-8f4a-1790b0040061-config-data\") pod \"placement-db-sync-tzx4g\" (UID: \"dadab934-440c-4b33-8f4a-1790b0040061\") " pod="openstack/placement-db-sync-tzx4g" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.713341 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/dadab934-440c-4b33-8f4a-1790b0040061-combined-ca-bundle\") pod \"placement-db-sync-tzx4g\" (UID: \"dadab934-440c-4b33-8f4a-1790b0040061\") " pod="openstack/placement-db-sync-tzx4g" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.713358 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dadab934-440c-4b33-8f4a-1790b0040061-logs\") pod \"placement-db-sync-tzx4g\" (UID: \"dadab934-440c-4b33-8f4a-1790b0040061\") " pod="openstack/placement-db-sync-tzx4g" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.713415 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hwqw\" (UniqueName: \"kubernetes.io/projected/dadab934-440c-4b33-8f4a-1790b0040061-kube-api-access-2hwqw\") pod \"placement-db-sync-tzx4g\" (UID: \"dadab934-440c-4b33-8f4a-1790b0040061\") " pod="openstack/placement-db-sync-tzx4g" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.717101 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-hnd54"] Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.718528 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68dcc9cf6f-hnd54" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.817397 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/93964ee5-e17a-4689-9fb7-a68e5542b91b-dns-svc\") pod \"dnsmasq-dns-68dcc9cf6f-hnd54\" (UID: \"93964ee5-e17a-4689-9fb7-a68e5542b91b\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-hnd54" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.817471 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dadab934-440c-4b33-8f4a-1790b0040061-scripts\") pod \"placement-db-sync-tzx4g\" (UID: \"dadab934-440c-4b33-8f4a-1790b0040061\") " pod="openstack/placement-db-sync-tzx4g" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.817505 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dadab934-440c-4b33-8f4a-1790b0040061-config-data\") pod \"placement-db-sync-tzx4g\" (UID: \"dadab934-440c-4b33-8f4a-1790b0040061\") " pod="openstack/placement-db-sync-tzx4g" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.817529 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/93964ee5-e17a-4689-9fb7-a68e5542b91b-ovsdbserver-nb\") pod \"dnsmasq-dns-68dcc9cf6f-hnd54\" (UID: \"93964ee5-e17a-4689-9fb7-a68e5542b91b\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-hnd54" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.817568 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dadab934-440c-4b33-8f4a-1790b0040061-combined-ca-bundle\") pod \"placement-db-sync-tzx4g\" (UID: \"dadab934-440c-4b33-8f4a-1790b0040061\") " pod="openstack/placement-db-sync-tzx4g" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.817585 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dadab934-440c-4b33-8f4a-1790b0040061-logs\") pod \"placement-db-sync-tzx4g\" (UID: 
\"dadab934-440c-4b33-8f4a-1790b0040061\") " pod="openstack/placement-db-sync-tzx4g" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.817607 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/93964ee5-e17a-4689-9fb7-a68e5542b91b-ovsdbserver-sb\") pod \"dnsmasq-dns-68dcc9cf6f-hnd54\" (UID: \"93964ee5-e17a-4689-9fb7-a68e5542b91b\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-hnd54" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.817626 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9swt\" (UniqueName: \"kubernetes.io/projected/93964ee5-e17a-4689-9fb7-a68e5542b91b-kube-api-access-m9swt\") pod \"dnsmasq-dns-68dcc9cf6f-hnd54\" (UID: \"93964ee5-e17a-4689-9fb7-a68e5542b91b\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-hnd54" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.817672 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hwqw\" (UniqueName: \"kubernetes.io/projected/dadab934-440c-4b33-8f4a-1790b0040061-kube-api-access-2hwqw\") pod \"placement-db-sync-tzx4g\" (UID: \"dadab934-440c-4b33-8f4a-1790b0040061\") " pod="openstack/placement-db-sync-tzx4g" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.817741 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93964ee5-e17a-4689-9fb7-a68e5542b91b-config\") pod \"dnsmasq-dns-68dcc9cf6f-hnd54\" (UID: \"93964ee5-e17a-4689-9fb7-a68e5542b91b\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-hnd54" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.820936 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dadab934-440c-4b33-8f4a-1790b0040061-scripts\") pod \"placement-db-sync-tzx4g\" (UID: \"dadab934-440c-4b33-8f4a-1790b0040061\") " pod="openstack/placement-db-sync-tzx4g" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.824389 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dadab934-440c-4b33-8f4a-1790b0040061-config-data\") pod \"placement-db-sync-tzx4g\" (UID: \"dadab934-440c-4b33-8f4a-1790b0040061\") " pod="openstack/placement-db-sync-tzx4g" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.852833 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dadab934-440c-4b33-8f4a-1790b0040061-logs\") pod \"placement-db-sync-tzx4g\" (UID: \"dadab934-440c-4b33-8f4a-1790b0040061\") " pod="openstack/placement-db-sync-tzx4g" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.853810 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dadab934-440c-4b33-8f4a-1790b0040061-combined-ca-bundle\") pod \"placement-db-sync-tzx4g\" (UID: \"dadab934-440c-4b33-8f4a-1790b0040061\") " pod="openstack/placement-db-sync-tzx4g" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.927322 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-hnd54"] Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.928887 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.930453 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/93964ee5-e17a-4689-9fb7-a68e5542b91b-dns-svc\") pod \"dnsmasq-dns-68dcc9cf6f-hnd54\" (UID: \"93964ee5-e17a-4689-9fb7-a68e5542b91b\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-hnd54" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.930503 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/93964ee5-e17a-4689-9fb7-a68e5542b91b-ovsdbserver-nb\") pod \"dnsmasq-dns-68dcc9cf6f-hnd54\" (UID: \"93964ee5-e17a-4689-9fb7-a68e5542b91b\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-hnd54" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.930547 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/93964ee5-e17a-4689-9fb7-a68e5542b91b-ovsdbserver-sb\") pod \"dnsmasq-dns-68dcc9cf6f-hnd54\" (UID: \"93964ee5-e17a-4689-9fb7-a68e5542b91b\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-hnd54" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.930568 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9swt\" (UniqueName: \"kubernetes.io/projected/93964ee5-e17a-4689-9fb7-a68e5542b91b-kube-api-access-m9swt\") pod \"dnsmasq-dns-68dcc9cf6f-hnd54\" (UID: \"93964ee5-e17a-4689-9fb7-a68e5542b91b\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-hnd54" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.930671 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93964ee5-e17a-4689-9fb7-a68e5542b91b-config\") pod \"dnsmasq-dns-68dcc9cf6f-hnd54\" (UID: \"93964ee5-e17a-4689-9fb7-a68e5542b91b\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-hnd54" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.931648 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93964ee5-e17a-4689-9fb7-a68e5542b91b-config\") pod \"dnsmasq-dns-68dcc9cf6f-hnd54\" (UID: \"93964ee5-e17a-4689-9fb7-a68e5542b91b\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-hnd54" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.935068 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/93964ee5-e17a-4689-9fb7-a68e5542b91b-ovsdbserver-sb\") pod \"dnsmasq-dns-68dcc9cf6f-hnd54\" (UID: \"93964ee5-e17a-4689-9fb7-a68e5542b91b\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-hnd54" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.935673 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/93964ee5-e17a-4689-9fb7-a68e5542b91b-ovsdbserver-nb\") pod \"dnsmasq-dns-68dcc9cf6f-hnd54\" (UID: \"93964ee5-e17a-4689-9fb7-a68e5542b91b\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-hnd54" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.940702 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/93964ee5-e17a-4689-9fb7-a68e5542b91b-dns-svc\") pod \"dnsmasq-dns-68dcc9cf6f-hnd54\" (UID: \"93964ee5-e17a-4689-9fb7-a68e5542b91b\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-hnd54" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.958264 4772 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9swt\" (UniqueName: \"kubernetes.io/projected/93964ee5-e17a-4689-9fb7-a68e5542b91b-kube-api-access-m9swt\") pod \"dnsmasq-dns-68dcc9cf6f-hnd54\" (UID: \"93964ee5-e17a-4689-9fb7-a68e5542b91b\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-hnd54" Nov 22 10:58:55 crc kubenswrapper[4772]: I1122 10:58:55.958721 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hwqw\" (UniqueName: \"kubernetes.io/projected/dadab934-440c-4b33-8f4a-1790b0040061-kube-api-access-2hwqw\") pod \"placement-db-sync-tzx4g\" (UID: \"dadab934-440c-4b33-8f4a-1790b0040061\") " pod="openstack/placement-db-sync-tzx4g" Nov 22 10:58:56 crc kubenswrapper[4772]: I1122 10:58:56.025956 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-tzx4g" Nov 22 10:58:56 crc kubenswrapper[4772]: I1122 10:58:56.057840 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68dcc9cf6f-hnd54" Nov 22 10:58:56 crc kubenswrapper[4772]: I1122 10:58:56.296370 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-hvdw9"] Nov 22 10:58:56 crc kubenswrapper[4772]: I1122 10:58:56.421356 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-7pxjz"] Nov 22 10:58:56 crc kubenswrapper[4772]: W1122 10:58:56.429679 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb7689e48_58a0_4195_9639_4891994d4327.slice/crio-7a520af6fd3f56b9ab19f61191230193813dad5287f99e9e3eea97471b5b3714 WatchSource:0}: Error finding container 7a520af6fd3f56b9ab19f61191230193813dad5287f99e9e3eea97471b5b3714: Status 404 returned error can't find the container with id 7a520af6fd3f56b9ab19f61191230193813dad5287f99e9e3eea97471b5b3714 Nov 22 10:58:56 crc kubenswrapper[4772]: I1122 10:58:56.477960 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-2d34-account-create-mmrc4"] Nov 22 10:58:56 crc kubenswrapper[4772]: I1122 10:58:56.479069 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-2d34-account-create-mmrc4" Nov 22 10:58:56 crc kubenswrapper[4772]: I1122 10:58:56.498414 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Nov 22 10:58:56 crc kubenswrapper[4772]: I1122 10:58:56.512509 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-2d34-account-create-mmrc4"] Nov 22 10:58:56 crc kubenswrapper[4772]: I1122 10:58:56.523636 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 10:58:56 crc kubenswrapper[4772]: I1122 10:58:56.544032 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kb66\" (UniqueName: \"kubernetes.io/projected/d76996e4-c174-46ae-87f6-4e7ee60f7863-kube-api-access-5kb66\") pod \"cinder-2d34-account-create-mmrc4\" (UID: \"d76996e4-c174-46ae-87f6-4e7ee60f7863\") " pod="openstack/cinder-2d34-account-create-mmrc4" Nov 22 10:58:56 crc kubenswrapper[4772]: I1122 10:58:56.590267 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-3436-account-create-n227m"] Nov 22 10:58:56 crc kubenswrapper[4772]: I1122 10:58:56.591651 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-3436-account-create-n227m" Nov 22 10:58:56 crc kubenswrapper[4772]: I1122 10:58:56.593336 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Nov 22 10:58:56 crc kubenswrapper[4772]: I1122 10:58:56.604024 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-3436-account-create-n227m"] Nov 22 10:58:56 crc kubenswrapper[4772]: I1122 10:58:56.647118 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66jtn\" (UniqueName: \"kubernetes.io/projected/b1b00c76-f9b7-470b-ba50-4d88c28f02f1-kube-api-access-66jtn\") pod \"barbican-3436-account-create-n227m\" (UID: \"b1b00c76-f9b7-470b-ba50-4d88c28f02f1\") " pod="openstack/barbican-3436-account-create-n227m" Nov 22 10:58:56 crc kubenswrapper[4772]: I1122 10:58:56.647295 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kb66\" (UniqueName: \"kubernetes.io/projected/d76996e4-c174-46ae-87f6-4e7ee60f7863-kube-api-access-5kb66\") pod \"cinder-2d34-account-create-mmrc4\" (UID: \"d76996e4-c174-46ae-87f6-4e7ee60f7863\") " pod="openstack/cinder-2d34-account-create-mmrc4" Nov 22 10:58:56 crc kubenswrapper[4772]: I1122 10:58:56.671321 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kb66\" (UniqueName: \"kubernetes.io/projected/d76996e4-c174-46ae-87f6-4e7ee60f7863-kube-api-access-5kb66\") pod \"cinder-2d34-account-create-mmrc4\" (UID: \"d76996e4-c174-46ae-87f6-4e7ee60f7863\") " pod="openstack/cinder-2d34-account-create-mmrc4" Nov 22 10:58:56 crc kubenswrapper[4772]: I1122 10:58:56.749255 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66jtn\" (UniqueName: \"kubernetes.io/projected/b1b00c76-f9b7-470b-ba50-4d88c28f02f1-kube-api-access-66jtn\") pod \"barbican-3436-account-create-n227m\" (UID: \"b1b00c76-f9b7-470b-ba50-4d88c28f02f1\") " pod="openstack/barbican-3436-account-create-n227m" Nov 22 10:58:56 crc kubenswrapper[4772]: I1122 10:58:56.758293 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-2d34-account-create-mmrc4" Nov 22 10:58:56 crc kubenswrapper[4772]: I1122 10:58:56.787140 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66jtn\" (UniqueName: \"kubernetes.io/projected/b1b00c76-f9b7-470b-ba50-4d88c28f02f1-kube-api-access-66jtn\") pod \"barbican-3436-account-create-n227m\" (UID: \"b1b00c76-f9b7-470b-ba50-4d88c28f02f1\") " pod="openstack/barbican-3436-account-create-n227m" Nov 22 10:58:56 crc kubenswrapper[4772]: I1122 10:58:56.801871 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-3436-account-create-n227m" Nov 22 10:58:56 crc kubenswrapper[4772]: I1122 10:58:56.805610 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-hnd54"] Nov 22 10:58:56 crc kubenswrapper[4772]: I1122 10:58:56.878166 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-45e9-account-create-ttrmb"] Nov 22 10:58:56 crc kubenswrapper[4772]: I1122 10:58:56.879433 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-45e9-account-create-ttrmb" Nov 22 10:58:56 crc kubenswrapper[4772]: I1122 10:58:56.881550 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Nov 22 10:58:56 crc kubenswrapper[4772]: I1122 10:58:56.895455 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-45e9-account-create-ttrmb"] Nov 22 10:58:56 crc kubenswrapper[4772]: I1122 10:58:56.904398 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-tzx4g"] Nov 22 10:58:56 crc kubenswrapper[4772]: W1122 10:58:56.946827 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddadab934_440c_4b33_8f4a_1790b0040061.slice/crio-2de16663dd9728838cbeaf70333cdc20a3d451befd7675a577df24a774cf024c WatchSource:0}: Error finding container 2de16663dd9728838cbeaf70333cdc20a3d451befd7675a577df24a774cf024c: Status 404 returned error can't find the container with id 2de16663dd9728838cbeaf70333cdc20a3d451befd7675a577df24a774cf024c Nov 22 10:58:58 crc kubenswrapper[4772]: I1122 10:58:56.952893 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b77s\" (UniqueName: \"kubernetes.io/projected/50cba9f3-9cc7-4000-a0f7-76159920d53a-kube-api-access-8b77s\") pod \"neutron-45e9-account-create-ttrmb\" (UID: \"50cba9f3-9cc7-4000-a0f7-76159920d53a\") " pod="openstack/neutron-45e9-account-create-ttrmb" Nov 22 10:58:58 crc kubenswrapper[4772]: I1122 10:58:56.987403 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-tzx4g" event={"ID":"dadab934-440c-4b33-8f4a-1790b0040061","Type":"ContainerStarted","Data":"2de16663dd9728838cbeaf70333cdc20a3d451befd7675a577df24a774cf024c"} Nov 22 10:58:58 crc kubenswrapper[4772]: I1122 10:58:56.995764 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d8c352bf-9815-42e1-8944-87a22e24c355","Type":"ContainerStarted","Data":"0333cf35d04fe6b62f418d030352a396d37f05779e6891e592f68b351fbfec0c"} Nov 22 10:58:58 crc kubenswrapper[4772]: I1122 10:58:56.999677 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f877ddd87-hvdw9" event={"ID":"1a6bea34-a591-4e26-8ed1-cc6d65b3aa18","Type":"ContainerStarted","Data":"7e3d98099d5f87b8a13cc98dea1b2377112f30241e6fba3f069061d225b3484a"} Nov 22 10:58:58 crc kubenswrapper[4772]: I1122 10:58:57.002422 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68dcc9cf6f-hnd54" event={"ID":"93964ee5-e17a-4689-9fb7-a68e5542b91b","Type":"ContainerStarted","Data":"b085709ee7dc756f01b1789a4614ae09bb0fb11d007a4941c8aa270966f0843e"} Nov 22 10:58:58 crc kubenswrapper[4772]: I1122 10:58:57.004311 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7pxjz" event={"ID":"b7689e48-58a0-4195-9639-4891994d4327","Type":"ContainerStarted","Data":"7a520af6fd3f56b9ab19f61191230193813dad5287f99e9e3eea97471b5b3714"} Nov 22 10:58:58 crc kubenswrapper[4772]: I1122 10:58:57.054915 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8b77s\" (UniqueName: \"kubernetes.io/projected/50cba9f3-9cc7-4000-a0f7-76159920d53a-kube-api-access-8b77s\") pod \"neutron-45e9-account-create-ttrmb\" (UID: \"50cba9f3-9cc7-4000-a0f7-76159920d53a\") " pod="openstack/neutron-45e9-account-create-ttrmb" Nov 22 10:58:58 crc kubenswrapper[4772]: I1122 
10:58:57.076597 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8b77s\" (UniqueName: \"kubernetes.io/projected/50cba9f3-9cc7-4000-a0f7-76159920d53a-kube-api-access-8b77s\") pod \"neutron-45e9-account-create-ttrmb\" (UID: \"50cba9f3-9cc7-4000-a0f7-76159920d53a\") " pod="openstack/neutron-45e9-account-create-ttrmb" Nov 22 10:58:58 crc kubenswrapper[4772]: I1122 10:58:57.203695 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-45e9-account-create-ttrmb" Nov 22 10:58:58 crc kubenswrapper[4772]: I1122 10:58:58.013734 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-mr98f" event={"ID":"1c352191-5a61-4dc6-ba16-6c82cb0fdedf","Type":"ContainerStarted","Data":"308573355d3c23e3680c3cf9e647dd107983fe8aae70c703e63d2815a1d712b3"} Nov 22 10:58:58 crc kubenswrapper[4772]: I1122 10:58:58.015441 4772 generic.go:334] "Generic (PLEG): container finished" podID="1a6bea34-a591-4e26-8ed1-cc6d65b3aa18" containerID="31f3066e4c03e4a20845df045ff72c7fb17618fcea911b4c8984a8f5e8f5f800" exitCode=0 Nov 22 10:58:58 crc kubenswrapper[4772]: I1122 10:58:58.015509 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f877ddd87-hvdw9" event={"ID":"1a6bea34-a591-4e26-8ed1-cc6d65b3aa18","Type":"ContainerDied","Data":"31f3066e4c03e4a20845df045ff72c7fb17618fcea911b4c8984a8f5e8f5f800"} Nov 22 10:58:58 crc kubenswrapper[4772]: I1122 10:58:58.020616 4772 generic.go:334] "Generic (PLEG): container finished" podID="93964ee5-e17a-4689-9fb7-a68e5542b91b" containerID="b2dc2b69f3a47028d0c5e56f8fbca0692cb197031cbdc10d58da149ae86b7d1d" exitCode=0 Nov 22 10:58:58 crc kubenswrapper[4772]: I1122 10:58:58.020685 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68dcc9cf6f-hnd54" event={"ID":"93964ee5-e17a-4689-9fb7-a68e5542b91b","Type":"ContainerDied","Data":"b2dc2b69f3a47028d0c5e56f8fbca0692cb197031cbdc10d58da149ae86b7d1d"} Nov 22 10:58:58 crc kubenswrapper[4772]: I1122 10:58:58.040340 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-mr98f" podStartSLOduration=4.226370851 podStartE2EDuration="39.040318007s" podCreationTimestamp="2025-11-22 10:58:19 +0000 UTC" firstStartedPulling="2025-11-22 10:58:20.618413533 +0000 UTC m=+1220.857858027" lastFinishedPulling="2025-11-22 10:58:55.432360689 +0000 UTC m=+1255.671805183" observedRunningTime="2025-11-22 10:58:58.035940066 +0000 UTC m=+1258.275384560" watchObservedRunningTime="2025-11-22 10:58:58.040318007 +0000 UTC m=+1258.279762501" Nov 22 10:58:58 crc kubenswrapper[4772]: I1122 10:58:58.056950 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7pxjz" event={"ID":"b7689e48-58a0-4195-9639-4891994d4327","Type":"ContainerStarted","Data":"eb40739d15b0abd59f1850c3167aabb90110e5977c58733f572575bdc3e9623f"} Nov 22 10:58:58 crc kubenswrapper[4772]: I1122 10:58:58.816665 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-7pxjz" podStartSLOduration=3.81664285 podStartE2EDuration="3.81664285s" podCreationTimestamp="2025-11-22 10:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:58:58.109127791 +0000 UTC m=+1258.348572295" watchObservedRunningTime="2025-11-22 10:58:58.81664285 +0000 UTC m=+1259.056087344" Nov 22 10:58:58 crc kubenswrapper[4772]: I1122 10:58:58.826865 4772 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 10:58:59 crc kubenswrapper[4772]: I1122 10:58:59.023709 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f877ddd87-hvdw9" Nov 22 10:58:59 crc kubenswrapper[4772]: I1122 10:58:59.099597 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f877ddd87-hvdw9" Nov 22 10:58:59 crc kubenswrapper[4772]: I1122 10:58:59.101006 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f877ddd87-hvdw9" event={"ID":"1a6bea34-a591-4e26-8ed1-cc6d65b3aa18","Type":"ContainerDied","Data":"7e3d98099d5f87b8a13cc98dea1b2377112f30241e6fba3f069061d225b3484a"} Nov 22 10:58:59 crc kubenswrapper[4772]: I1122 10:58:59.101092 4772 scope.go:117] "RemoveContainer" containerID="31f3066e4c03e4a20845df045ff72c7fb17618fcea911b4c8984a8f5e8f5f800" Nov 22 10:58:59 crc kubenswrapper[4772]: I1122 10:58:59.113411 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l27gc\" (UniqueName: \"kubernetes.io/projected/1a6bea34-a591-4e26-8ed1-cc6d65b3aa18-kube-api-access-l27gc\") pod \"1a6bea34-a591-4e26-8ed1-cc6d65b3aa18\" (UID: \"1a6bea34-a591-4e26-8ed1-cc6d65b3aa18\") " Nov 22 10:58:59 crc kubenswrapper[4772]: I1122 10:58:59.113467 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1a6bea34-a591-4e26-8ed1-cc6d65b3aa18-ovsdbserver-nb\") pod \"1a6bea34-a591-4e26-8ed1-cc6d65b3aa18\" (UID: \"1a6bea34-a591-4e26-8ed1-cc6d65b3aa18\") " Nov 22 10:58:59 crc kubenswrapper[4772]: I1122 10:58:59.113504 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1a6bea34-a591-4e26-8ed1-cc6d65b3aa18-ovsdbserver-sb\") pod \"1a6bea34-a591-4e26-8ed1-cc6d65b3aa18\" (UID: \"1a6bea34-a591-4e26-8ed1-cc6d65b3aa18\") " Nov 22 10:58:59 crc kubenswrapper[4772]: I1122 10:58:59.113594 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1a6bea34-a591-4e26-8ed1-cc6d65b3aa18-dns-svc\") pod \"1a6bea34-a591-4e26-8ed1-cc6d65b3aa18\" (UID: \"1a6bea34-a591-4e26-8ed1-cc6d65b3aa18\") " Nov 22 10:58:59 crc kubenswrapper[4772]: I1122 10:58:59.113611 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a6bea34-a591-4e26-8ed1-cc6d65b3aa18-config\") pod \"1a6bea34-a591-4e26-8ed1-cc6d65b3aa18\" (UID: \"1a6bea34-a591-4e26-8ed1-cc6d65b3aa18\") " Nov 22 10:58:59 crc kubenswrapper[4772]: I1122 10:58:59.127349 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-45e9-account-create-ttrmb"] Nov 22 10:58:59 crc kubenswrapper[4772]: I1122 10:58:59.127653 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68dcc9cf6f-hnd54" event={"ID":"93964ee5-e17a-4689-9fb7-a68e5542b91b","Type":"ContainerStarted","Data":"5b382f99d6a32e22ab31c0364ec79664de10152f7be9b901a5494a01633fa59c"} Nov 22 10:58:59 crc kubenswrapper[4772]: I1122 10:58:59.127691 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-68dcc9cf6f-hnd54" Nov 22 10:58:59 crc kubenswrapper[4772]: I1122 10:58:59.138456 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-3436-account-create-n227m"] Nov 22 10:58:59 crc 
kubenswrapper[4772]: I1122 10:58:59.140981 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a6bea34-a591-4e26-8ed1-cc6d65b3aa18-kube-api-access-l27gc" (OuterVolumeSpecName: "kube-api-access-l27gc") pod "1a6bea34-a591-4e26-8ed1-cc6d65b3aa18" (UID: "1a6bea34-a591-4e26-8ed1-cc6d65b3aa18"). InnerVolumeSpecName "kube-api-access-l27gc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:58:59 crc kubenswrapper[4772]: I1122 10:58:59.153203 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a6bea34-a591-4e26-8ed1-cc6d65b3aa18-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1a6bea34-a591-4e26-8ed1-cc6d65b3aa18" (UID: "1a6bea34-a591-4e26-8ed1-cc6d65b3aa18"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:58:59 crc kubenswrapper[4772]: I1122 10:58:59.163601 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a6bea34-a591-4e26-8ed1-cc6d65b3aa18-config" (OuterVolumeSpecName: "config") pod "1a6bea34-a591-4e26-8ed1-cc6d65b3aa18" (UID: "1a6bea34-a591-4e26-8ed1-cc6d65b3aa18"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:58:59 crc kubenswrapper[4772]: I1122 10:58:59.177864 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-2d34-account-create-mmrc4"] Nov 22 10:58:59 crc kubenswrapper[4772]: I1122 10:58:59.180612 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-68dcc9cf6f-hnd54" podStartSLOduration=4.180596029 podStartE2EDuration="4.180596029s" podCreationTimestamp="2025-11-22 10:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:58:59.160984199 +0000 UTC m=+1259.400428693" watchObservedRunningTime="2025-11-22 10:58:59.180596029 +0000 UTC m=+1259.420040523" Nov 22 10:58:59 crc kubenswrapper[4772]: I1122 10:58:59.193502 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a6bea34-a591-4e26-8ed1-cc6d65b3aa18-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1a6bea34-a591-4e26-8ed1-cc6d65b3aa18" (UID: "1a6bea34-a591-4e26-8ed1-cc6d65b3aa18"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:58:59 crc kubenswrapper[4772]: I1122 10:58:59.219250 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a6bea34-a591-4e26-8ed1-cc6d65b3aa18-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1a6bea34-a591-4e26-8ed1-cc6d65b3aa18" (UID: "1a6bea34-a591-4e26-8ed1-cc6d65b3aa18"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:58:59 crc kubenswrapper[4772]: I1122 10:58:59.221102 4772 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1a6bea34-a591-4e26-8ed1-cc6d65b3aa18-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 22 10:58:59 crc kubenswrapper[4772]: I1122 10:58:59.221157 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a6bea34-a591-4e26-8ed1-cc6d65b3aa18-config\") on node \"crc\" DevicePath \"\"" Nov 22 10:58:59 crc kubenswrapper[4772]: I1122 10:58:59.221171 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l27gc\" (UniqueName: \"kubernetes.io/projected/1a6bea34-a591-4e26-8ed1-cc6d65b3aa18-kube-api-access-l27gc\") on node \"crc\" DevicePath \"\"" Nov 22 10:58:59 crc kubenswrapper[4772]: I1122 10:58:59.221184 4772 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1a6bea34-a591-4e26-8ed1-cc6d65b3aa18-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 10:58:59 crc kubenswrapper[4772]: W1122 10:58:59.229698 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd76996e4_c174_46ae_87f6_4e7ee60f7863.slice/crio-c47dd955cefca574b1b243da804a8b5fc2280359e15dd34e85bb29fa8288d8ec WatchSource:0}: Error finding container c47dd955cefca574b1b243da804a8b5fc2280359e15dd34e85bb29fa8288d8ec: Status 404 returned error can't find the container with id c47dd955cefca574b1b243da804a8b5fc2280359e15dd34e85bb29fa8288d8ec Nov 22 10:58:59 crc kubenswrapper[4772]: I1122 10:58:59.322712 4772 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1a6bea34-a591-4e26-8ed1-cc6d65b3aa18-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 10:58:59 crc kubenswrapper[4772]: I1122 10:58:59.466621 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-hvdw9"] Nov 22 10:58:59 crc kubenswrapper[4772]: I1122 10:58:59.466771 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-hvdw9"] Nov 22 10:59:00 crc kubenswrapper[4772]: I1122 10:59:00.156768 4772 generic.go:334] "Generic (PLEG): container finished" podID="50cba9f3-9cc7-4000-a0f7-76159920d53a" containerID="c1d39690183ac7da3f6bb73a2b6023178bcc32f7a86f717e2c779a9bc9ab7f54" exitCode=0 Nov 22 10:59:00 crc kubenswrapper[4772]: I1122 10:59:00.156879 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-45e9-account-create-ttrmb" event={"ID":"50cba9f3-9cc7-4000-a0f7-76159920d53a","Type":"ContainerDied","Data":"c1d39690183ac7da3f6bb73a2b6023178bcc32f7a86f717e2c779a9bc9ab7f54"} Nov 22 10:59:00 crc kubenswrapper[4772]: I1122 10:59:00.157170 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-45e9-account-create-ttrmb" event={"ID":"50cba9f3-9cc7-4000-a0f7-76159920d53a","Type":"ContainerStarted","Data":"9dd4db3d13e9ca36cd55bc6dce850469dbda8dcb0118e39bc8d8d4b8db70d895"} Nov 22 10:59:00 crc kubenswrapper[4772]: I1122 10:59:00.163062 4772 generic.go:334] "Generic (PLEG): container finished" podID="b1b00c76-f9b7-470b-ba50-4d88c28f02f1" containerID="3c4482e8b22e1ee45e4019fe7ed707b6cf62746e3b8a3874e155bfa807d1bfbe" exitCode=0 Nov 22 10:59:00 crc kubenswrapper[4772]: I1122 10:59:00.163156 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-3436-account-create-n227m" event={"ID":"b1b00c76-f9b7-470b-ba50-4d88c28f02f1","Type":"ContainerDied","Data":"3c4482e8b22e1ee45e4019fe7ed707b6cf62746e3b8a3874e155bfa807d1bfbe"} Nov 22 10:59:00 crc kubenswrapper[4772]: I1122 10:59:00.163238 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-3436-account-create-n227m" event={"ID":"b1b00c76-f9b7-470b-ba50-4d88c28f02f1","Type":"ContainerStarted","Data":"18af8463f933e59a2eb09e7a46aaf9d0527990e417b1e148490f865eae12aa85"} Nov 22 10:59:00 crc kubenswrapper[4772]: I1122 10:59:00.171247 4772 generic.go:334] "Generic (PLEG): container finished" podID="d76996e4-c174-46ae-87f6-4e7ee60f7863" containerID="ddeb1d6df01dfd0c17fe899a1819de93c2d30aa66d19616fedb322be0b74e60d" exitCode=0 Nov 22 10:59:00 crc kubenswrapper[4772]: I1122 10:59:00.171397 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2d34-account-create-mmrc4" event={"ID":"d76996e4-c174-46ae-87f6-4e7ee60f7863","Type":"ContainerDied","Data":"ddeb1d6df01dfd0c17fe899a1819de93c2d30aa66d19616fedb322be0b74e60d"} Nov 22 10:59:00 crc kubenswrapper[4772]: I1122 10:59:00.171443 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2d34-account-create-mmrc4" event={"ID":"d76996e4-c174-46ae-87f6-4e7ee60f7863","Type":"ContainerStarted","Data":"c47dd955cefca574b1b243da804a8b5fc2280359e15dd34e85bb29fa8288d8ec"} Nov 22 10:59:01 crc kubenswrapper[4772]: I1122 10:59:01.439651 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a6bea34-a591-4e26-8ed1-cc6d65b3aa18" path="/var/lib/kubelet/pods/1a6bea34-a591-4e26-8ed1-cc6d65b3aa18/volumes" Nov 22 10:59:02 crc kubenswrapper[4772]: I1122 10:59:02.007231 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-45e9-account-create-ttrmb" Nov 22 10:59:02 crc kubenswrapper[4772]: I1122 10:59:02.078711 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8b77s\" (UniqueName: \"kubernetes.io/projected/50cba9f3-9cc7-4000-a0f7-76159920d53a-kube-api-access-8b77s\") pod \"50cba9f3-9cc7-4000-a0f7-76159920d53a\" (UID: \"50cba9f3-9cc7-4000-a0f7-76159920d53a\") " Nov 22 10:59:02 crc kubenswrapper[4772]: I1122 10:59:02.085357 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50cba9f3-9cc7-4000-a0f7-76159920d53a-kube-api-access-8b77s" (OuterVolumeSpecName: "kube-api-access-8b77s") pod "50cba9f3-9cc7-4000-a0f7-76159920d53a" (UID: "50cba9f3-9cc7-4000-a0f7-76159920d53a"). InnerVolumeSpecName "kube-api-access-8b77s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:59:02 crc kubenswrapper[4772]: I1122 10:59:02.181184 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8b77s\" (UniqueName: \"kubernetes.io/projected/50cba9f3-9cc7-4000-a0f7-76159920d53a-kube-api-access-8b77s\") on node \"crc\" DevicePath \"\"" Nov 22 10:59:02 crc kubenswrapper[4772]: I1122 10:59:02.187604 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-45e9-account-create-ttrmb" event={"ID":"50cba9f3-9cc7-4000-a0f7-76159920d53a","Type":"ContainerDied","Data":"9dd4db3d13e9ca36cd55bc6dce850469dbda8dcb0118e39bc8d8d4b8db70d895"} Nov 22 10:59:02 crc kubenswrapper[4772]: I1122 10:59:02.187642 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9dd4db3d13e9ca36cd55bc6dce850469dbda8dcb0118e39bc8d8d4b8db70d895" Nov 22 10:59:02 crc kubenswrapper[4772]: I1122 10:59:02.187647 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-45e9-account-create-ttrmb" Nov 22 10:59:03 crc kubenswrapper[4772]: I1122 10:59:03.198701 4772 generic.go:334] "Generic (PLEG): container finished" podID="b7689e48-58a0-4195-9639-4891994d4327" containerID="eb40739d15b0abd59f1850c3167aabb90110e5977c58733f572575bdc3e9623f" exitCode=0 Nov 22 10:59:03 crc kubenswrapper[4772]: I1122 10:59:03.198794 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7pxjz" event={"ID":"b7689e48-58a0-4195-9639-4891994d4327","Type":"ContainerDied","Data":"eb40739d15b0abd59f1850c3167aabb90110e5977c58733f572575bdc3e9623f"} Nov 22 10:59:03 crc kubenswrapper[4772]: I1122 10:59:03.773725 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-3436-account-create-n227m" Nov 22 10:59:03 crc kubenswrapper[4772]: I1122 10:59:03.789403 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-2d34-account-create-mmrc4" Nov 22 10:59:03 crc kubenswrapper[4772]: I1122 10:59:03.946967 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66jtn\" (UniqueName: \"kubernetes.io/projected/b1b00c76-f9b7-470b-ba50-4d88c28f02f1-kube-api-access-66jtn\") pod \"b1b00c76-f9b7-470b-ba50-4d88c28f02f1\" (UID: \"b1b00c76-f9b7-470b-ba50-4d88c28f02f1\") " Nov 22 10:59:03 crc kubenswrapper[4772]: I1122 10:59:03.947242 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kb66\" (UniqueName: \"kubernetes.io/projected/d76996e4-c174-46ae-87f6-4e7ee60f7863-kube-api-access-5kb66\") pod \"d76996e4-c174-46ae-87f6-4e7ee60f7863\" (UID: \"d76996e4-c174-46ae-87f6-4e7ee60f7863\") " Nov 22 10:59:03 crc kubenswrapper[4772]: I1122 10:59:03.952563 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1b00c76-f9b7-470b-ba50-4d88c28f02f1-kube-api-access-66jtn" (OuterVolumeSpecName: "kube-api-access-66jtn") pod "b1b00c76-f9b7-470b-ba50-4d88c28f02f1" (UID: "b1b00c76-f9b7-470b-ba50-4d88c28f02f1"). InnerVolumeSpecName "kube-api-access-66jtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:59:03 crc kubenswrapper[4772]: I1122 10:59:03.953054 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d76996e4-c174-46ae-87f6-4e7ee60f7863-kube-api-access-5kb66" (OuterVolumeSpecName: "kube-api-access-5kb66") pod "d76996e4-c174-46ae-87f6-4e7ee60f7863" (UID: "d76996e4-c174-46ae-87f6-4e7ee60f7863"). InnerVolumeSpecName "kube-api-access-5kb66". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:59:04 crc kubenswrapper[4772]: I1122 10:59:04.049861 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66jtn\" (UniqueName: \"kubernetes.io/projected/b1b00c76-f9b7-470b-ba50-4d88c28f02f1-kube-api-access-66jtn\") on node \"crc\" DevicePath \"\"" Nov 22 10:59:04 crc kubenswrapper[4772]: I1122 10:59:04.050159 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5kb66\" (UniqueName: \"kubernetes.io/projected/d76996e4-c174-46ae-87f6-4e7ee60f7863-kube-api-access-5kb66\") on node \"crc\" DevicePath \"\"" Nov 22 10:59:04 crc kubenswrapper[4772]: I1122 10:59:04.239362 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-3436-account-create-n227m" Nov 22 10:59:04 crc kubenswrapper[4772]: I1122 10:59:04.239410 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-3436-account-create-n227m" event={"ID":"b1b00c76-f9b7-470b-ba50-4d88c28f02f1","Type":"ContainerDied","Data":"18af8463f933e59a2eb09e7a46aaf9d0527990e417b1e148490f865eae12aa85"} Nov 22 10:59:04 crc kubenswrapper[4772]: I1122 10:59:04.239462 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18af8463f933e59a2eb09e7a46aaf9d0527990e417b1e148490f865eae12aa85" Nov 22 10:59:04 crc kubenswrapper[4772]: I1122 10:59:04.246511 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"354e52a7-830a-43a1-ad15-a13fe2a07222","Type":"ContainerStarted","Data":"20bfb81f40cbc7f6e93279c153402c5ff3a9a24099eb68f5d5aa22251bdfd63e"} Nov 22 10:59:04 crc kubenswrapper[4772]: I1122 10:59:04.246556 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"354e52a7-830a-43a1-ad15-a13fe2a07222","Type":"ContainerStarted","Data":"2aa38c5be9a613db8f9237edd2803eb9c3a6730dcaecfe363e7fdf603819c43c"} Nov 22 10:59:04 crc kubenswrapper[4772]: I1122 10:59:04.248430 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2d34-account-create-mmrc4" event={"ID":"d76996e4-c174-46ae-87f6-4e7ee60f7863","Type":"ContainerDied","Data":"c47dd955cefca574b1b243da804a8b5fc2280359e15dd34e85bb29fa8288d8ec"} Nov 22 10:59:04 crc kubenswrapper[4772]: I1122 10:59:04.248464 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c47dd955cefca574b1b243da804a8b5fc2280359e15dd34e85bb29fa8288d8ec" Nov 22 10:59:04 crc kubenswrapper[4772]: I1122 10:59:04.248440 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-2d34-account-create-mmrc4" Nov 22 10:59:04 crc kubenswrapper[4772]: I1122 10:59:04.263662 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-tzx4g" event={"ID":"dadab934-440c-4b33-8f4a-1790b0040061","Type":"ContainerStarted","Data":"07c2563361cad796b9ee4cc769b13a47e6b055bb75f24278b879c0e2e480714e"} Nov 22 10:59:04 crc kubenswrapper[4772]: I1122 10:59:04.266422 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d8c352bf-9815-42e1-8944-87a22e24c355","Type":"ContainerStarted","Data":"48fbc585fc2d9876a20564649d6343249672db11054508fc537f2d970ccc75a5"} Nov 22 10:59:04 crc kubenswrapper[4772]: I1122 10:59:04.285342 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-tzx4g" podStartSLOduration=2.454606006 podStartE2EDuration="9.285325431s" podCreationTimestamp="2025-11-22 10:58:55 +0000 UTC" firstStartedPulling="2025-11-22 10:58:56.948896743 +0000 UTC m=+1257.188341237" lastFinishedPulling="2025-11-22 10:59:03.779616168 +0000 UTC m=+1264.019060662" observedRunningTime="2025-11-22 10:59:04.278673221 +0000 UTC m=+1264.518117715" watchObservedRunningTime="2025-11-22 10:59:04.285325431 +0000 UTC m=+1264.524769915" Nov 22 10:59:04 crc kubenswrapper[4772]: I1122 10:59:04.607993 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-7pxjz" Nov 22 10:59:04 crc kubenswrapper[4772]: I1122 10:59:04.766749 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rj2w7\" (UniqueName: \"kubernetes.io/projected/b7689e48-58a0-4195-9639-4891994d4327-kube-api-access-rj2w7\") pod \"b7689e48-58a0-4195-9639-4891994d4327\" (UID: \"b7689e48-58a0-4195-9639-4891994d4327\") " Nov 22 10:59:04 crc kubenswrapper[4772]: I1122 10:59:04.766890 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7689e48-58a0-4195-9639-4891994d4327-scripts\") pod \"b7689e48-58a0-4195-9639-4891994d4327\" (UID: \"b7689e48-58a0-4195-9639-4891994d4327\") " Nov 22 10:59:04 crc kubenswrapper[4772]: I1122 10:59:04.766940 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7689e48-58a0-4195-9639-4891994d4327-combined-ca-bundle\") pod \"b7689e48-58a0-4195-9639-4891994d4327\" (UID: \"b7689e48-58a0-4195-9639-4891994d4327\") " Nov 22 10:59:04 crc kubenswrapper[4772]: I1122 10:59:04.767116 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b7689e48-58a0-4195-9639-4891994d4327-fernet-keys\") pod \"b7689e48-58a0-4195-9639-4891994d4327\" (UID: \"b7689e48-58a0-4195-9639-4891994d4327\") " Nov 22 10:59:04 crc kubenswrapper[4772]: I1122 10:59:04.767160 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7689e48-58a0-4195-9639-4891994d4327-config-data\") pod \"b7689e48-58a0-4195-9639-4891994d4327\" (UID: \"b7689e48-58a0-4195-9639-4891994d4327\") " Nov 22 10:59:04 crc kubenswrapper[4772]: I1122 10:59:04.767219 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b7689e48-58a0-4195-9639-4891994d4327-credential-keys\") pod \"b7689e48-58a0-4195-9639-4891994d4327\" (UID: 
\"b7689e48-58a0-4195-9639-4891994d4327\") " Nov 22 10:59:04 crc kubenswrapper[4772]: I1122 10:59:04.772180 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7689e48-58a0-4195-9639-4891994d4327-kube-api-access-rj2w7" (OuterVolumeSpecName: "kube-api-access-rj2w7") pod "b7689e48-58a0-4195-9639-4891994d4327" (UID: "b7689e48-58a0-4195-9639-4891994d4327"). InnerVolumeSpecName "kube-api-access-rj2w7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:59:04 crc kubenswrapper[4772]: I1122 10:59:04.774799 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7689e48-58a0-4195-9639-4891994d4327-scripts" (OuterVolumeSpecName: "scripts") pod "b7689e48-58a0-4195-9639-4891994d4327" (UID: "b7689e48-58a0-4195-9639-4891994d4327"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:59:04 crc kubenswrapper[4772]: I1122 10:59:04.777227 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7689e48-58a0-4195-9639-4891994d4327-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "b7689e48-58a0-4195-9639-4891994d4327" (UID: "b7689e48-58a0-4195-9639-4891994d4327"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:59:04 crc kubenswrapper[4772]: I1122 10:59:04.787906 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7689e48-58a0-4195-9639-4891994d4327-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "b7689e48-58a0-4195-9639-4891994d4327" (UID: "b7689e48-58a0-4195-9639-4891994d4327"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:59:04 crc kubenswrapper[4772]: I1122 10:59:04.797135 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7689e48-58a0-4195-9639-4891994d4327-config-data" (OuterVolumeSpecName: "config-data") pod "b7689e48-58a0-4195-9639-4891994d4327" (UID: "b7689e48-58a0-4195-9639-4891994d4327"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:59:04 crc kubenswrapper[4772]: I1122 10:59:04.798339 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7689e48-58a0-4195-9639-4891994d4327-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b7689e48-58a0-4195-9639-4891994d4327" (UID: "b7689e48-58a0-4195-9639-4891994d4327"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:59:04 crc kubenswrapper[4772]: I1122 10:59:04.870211 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rj2w7\" (UniqueName: \"kubernetes.io/projected/b7689e48-58a0-4195-9639-4891994d4327-kube-api-access-rj2w7\") on node \"crc\" DevicePath \"\"" Nov 22 10:59:04 crc kubenswrapper[4772]: I1122 10:59:04.870507 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7689e48-58a0-4195-9639-4891994d4327-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 10:59:04 crc kubenswrapper[4772]: I1122 10:59:04.870520 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7689e48-58a0-4195-9639-4891994d4327-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 10:59:04 crc kubenswrapper[4772]: I1122 10:59:04.870529 4772 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b7689e48-58a0-4195-9639-4891994d4327-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 22 10:59:04 crc kubenswrapper[4772]: I1122 10:59:04.870539 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7689e48-58a0-4195-9639-4891994d4327-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 10:59:04 crc kubenswrapper[4772]: I1122 10:59:04.870547 4772 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b7689e48-58a0-4195-9639-4891994d4327-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 22 10:59:05 crc kubenswrapper[4772]: I1122 10:59:05.300076 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-7pxjz"] Nov 22 10:59:05 crc kubenswrapper[4772]: I1122 10:59:05.300302 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"354e52a7-830a-43a1-ad15-a13fe2a07222","Type":"ContainerStarted","Data":"d6a9ec495836f1834c42245ba492aa4e9ebb76dac50305dd8790c68f739b9277"} Nov 22 10:59:05 crc kubenswrapper[4772]: I1122 10:59:05.300333 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"354e52a7-830a-43a1-ad15-a13fe2a07222","Type":"ContainerStarted","Data":"bbe601a3871553a3d7d70b6b470ceebc522ddd2ed4d9823f1a644a888ecc03ff"} Nov 22 10:59:05 crc kubenswrapper[4772]: I1122 10:59:05.300346 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"354e52a7-830a-43a1-ad15-a13fe2a07222","Type":"ContainerStarted","Data":"5b75dbfffe8e1c9d76456ff93f1c9c0f4bf16b63767e3e50ca83e8220c802021"} Nov 22 10:59:05 crc kubenswrapper[4772]: I1122 10:59:05.303381 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-7pxjz" Nov 22 10:59:05 crc kubenswrapper[4772]: I1122 10:59:05.305200 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7pxjz" event={"ID":"b7689e48-58a0-4195-9639-4891994d4327","Type":"ContainerDied","Data":"7a520af6fd3f56b9ab19f61191230193813dad5287f99e9e3eea97471b5b3714"} Nov 22 10:59:05 crc kubenswrapper[4772]: I1122 10:59:05.305255 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a520af6fd3f56b9ab19f61191230193813dad5287f99e9e3eea97471b5b3714" Nov 22 10:59:05 crc kubenswrapper[4772]: I1122 10:59:05.309246 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-7pxjz"] Nov 22 10:59:05 crc kubenswrapper[4772]: I1122 10:59:05.388541 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-xnbhf"] Nov 22 10:59:05 crc kubenswrapper[4772]: E1122 10:59:05.388930 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1b00c76-f9b7-470b-ba50-4d88c28f02f1" containerName="mariadb-account-create" Nov 22 10:59:05 crc kubenswrapper[4772]: I1122 10:59:05.388944 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1b00c76-f9b7-470b-ba50-4d88c28f02f1" containerName="mariadb-account-create" Nov 22 10:59:05 crc kubenswrapper[4772]: E1122 10:59:05.388957 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d76996e4-c174-46ae-87f6-4e7ee60f7863" containerName="mariadb-account-create" Nov 22 10:59:05 crc kubenswrapper[4772]: I1122 10:59:05.388965 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="d76996e4-c174-46ae-87f6-4e7ee60f7863" containerName="mariadb-account-create" Nov 22 10:59:05 crc kubenswrapper[4772]: E1122 10:59:05.388997 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7689e48-58a0-4195-9639-4891994d4327" containerName="keystone-bootstrap" Nov 22 10:59:05 crc kubenswrapper[4772]: I1122 10:59:05.389006 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7689e48-58a0-4195-9639-4891994d4327" containerName="keystone-bootstrap" Nov 22 10:59:05 crc kubenswrapper[4772]: E1122 10:59:05.389020 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50cba9f3-9cc7-4000-a0f7-76159920d53a" containerName="mariadb-account-create" Nov 22 10:59:05 crc kubenswrapper[4772]: I1122 10:59:05.389029 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="50cba9f3-9cc7-4000-a0f7-76159920d53a" containerName="mariadb-account-create" Nov 22 10:59:05 crc kubenswrapper[4772]: E1122 10:59:05.389062 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a6bea34-a591-4e26-8ed1-cc6d65b3aa18" containerName="init" Nov 22 10:59:05 crc kubenswrapper[4772]: I1122 10:59:05.389070 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a6bea34-a591-4e26-8ed1-cc6d65b3aa18" containerName="init" Nov 22 10:59:05 crc kubenswrapper[4772]: I1122 10:59:05.389243 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="50cba9f3-9cc7-4000-a0f7-76159920d53a" containerName="mariadb-account-create" Nov 22 10:59:05 crc kubenswrapper[4772]: I1122 10:59:05.389259 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="d76996e4-c174-46ae-87f6-4e7ee60f7863" containerName="mariadb-account-create" Nov 22 10:59:05 crc kubenswrapper[4772]: I1122 10:59:05.389273 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1b00c76-f9b7-470b-ba50-4d88c28f02f1" containerName="mariadb-account-create" Nov 
22 10:59:05 crc kubenswrapper[4772]: I1122 10:59:05.389281 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7689e48-58a0-4195-9639-4891994d4327" containerName="keystone-bootstrap" Nov 22 10:59:05 crc kubenswrapper[4772]: I1122 10:59:05.389295 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a6bea34-a591-4e26-8ed1-cc6d65b3aa18" containerName="init" Nov 22 10:59:05 crc kubenswrapper[4772]: I1122 10:59:05.389851 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xnbhf" Nov 22 10:59:05 crc kubenswrapper[4772]: I1122 10:59:05.392877 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 22 10:59:05 crc kubenswrapper[4772]: I1122 10:59:05.392919 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-x8vg6" Nov 22 10:59:05 crc kubenswrapper[4772]: I1122 10:59:05.393346 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 22 10:59:05 crc kubenswrapper[4772]: I1122 10:59:05.393364 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 22 10:59:05 crc kubenswrapper[4772]: I1122 10:59:05.402697 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-xnbhf"] Nov 22 10:59:05 crc kubenswrapper[4772]: I1122 10:59:05.424641 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7689e48-58a0-4195-9639-4891994d4327" path="/var/lib/kubelet/pods/b7689e48-58a0-4195-9639-4891994d4327/volumes" Nov 22 10:59:05 crc kubenswrapper[4772]: I1122 10:59:05.582090 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xkwl\" (UniqueName: \"kubernetes.io/projected/0a0e82a7-4323-4948-9e3c-3dc8a9df0c87-kube-api-access-9xkwl\") pod \"keystone-bootstrap-xnbhf\" (UID: \"0a0e82a7-4323-4948-9e3c-3dc8a9df0c87\") " pod="openstack/keystone-bootstrap-xnbhf" Nov 22 10:59:05 crc kubenswrapper[4772]: I1122 10:59:05.582151 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0a0e82a7-4323-4948-9e3c-3dc8a9df0c87-fernet-keys\") pod \"keystone-bootstrap-xnbhf\" (UID: \"0a0e82a7-4323-4948-9e3c-3dc8a9df0c87\") " pod="openstack/keystone-bootstrap-xnbhf" Nov 22 10:59:05 crc kubenswrapper[4772]: I1122 10:59:05.582188 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a0e82a7-4323-4948-9e3c-3dc8a9df0c87-scripts\") pod \"keystone-bootstrap-xnbhf\" (UID: \"0a0e82a7-4323-4948-9e3c-3dc8a9df0c87\") " pod="openstack/keystone-bootstrap-xnbhf" Nov 22 10:59:05 crc kubenswrapper[4772]: I1122 10:59:05.582533 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a0e82a7-4323-4948-9e3c-3dc8a9df0c87-config-data\") pod \"keystone-bootstrap-xnbhf\" (UID: \"0a0e82a7-4323-4948-9e3c-3dc8a9df0c87\") " pod="openstack/keystone-bootstrap-xnbhf" Nov 22 10:59:05 crc kubenswrapper[4772]: I1122 10:59:05.582614 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a0e82a7-4323-4948-9e3c-3dc8a9df0c87-combined-ca-bundle\") pod \"keystone-bootstrap-xnbhf\" (UID: 
\"0a0e82a7-4323-4948-9e3c-3dc8a9df0c87\") " pod="openstack/keystone-bootstrap-xnbhf" Nov 22 10:59:05 crc kubenswrapper[4772]: I1122 10:59:05.582733 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0a0e82a7-4323-4948-9e3c-3dc8a9df0c87-credential-keys\") pod \"keystone-bootstrap-xnbhf\" (UID: \"0a0e82a7-4323-4948-9e3c-3dc8a9df0c87\") " pod="openstack/keystone-bootstrap-xnbhf" Nov 22 10:59:05 crc kubenswrapper[4772]: I1122 10:59:05.683762 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xkwl\" (UniqueName: \"kubernetes.io/projected/0a0e82a7-4323-4948-9e3c-3dc8a9df0c87-kube-api-access-9xkwl\") pod \"keystone-bootstrap-xnbhf\" (UID: \"0a0e82a7-4323-4948-9e3c-3dc8a9df0c87\") " pod="openstack/keystone-bootstrap-xnbhf" Nov 22 10:59:05 crc kubenswrapper[4772]: I1122 10:59:05.683830 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0a0e82a7-4323-4948-9e3c-3dc8a9df0c87-fernet-keys\") pod \"keystone-bootstrap-xnbhf\" (UID: \"0a0e82a7-4323-4948-9e3c-3dc8a9df0c87\") " pod="openstack/keystone-bootstrap-xnbhf" Nov 22 10:59:05 crc kubenswrapper[4772]: I1122 10:59:05.683860 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a0e82a7-4323-4948-9e3c-3dc8a9df0c87-scripts\") pod \"keystone-bootstrap-xnbhf\" (UID: \"0a0e82a7-4323-4948-9e3c-3dc8a9df0c87\") " pod="openstack/keystone-bootstrap-xnbhf" Nov 22 10:59:05 crc kubenswrapper[4772]: I1122 10:59:05.683912 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a0e82a7-4323-4948-9e3c-3dc8a9df0c87-config-data\") pod \"keystone-bootstrap-xnbhf\" (UID: \"0a0e82a7-4323-4948-9e3c-3dc8a9df0c87\") " pod="openstack/keystone-bootstrap-xnbhf" Nov 22 10:59:05 crc kubenswrapper[4772]: I1122 10:59:05.683944 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a0e82a7-4323-4948-9e3c-3dc8a9df0c87-combined-ca-bundle\") pod \"keystone-bootstrap-xnbhf\" (UID: \"0a0e82a7-4323-4948-9e3c-3dc8a9df0c87\") " pod="openstack/keystone-bootstrap-xnbhf" Nov 22 10:59:05 crc kubenswrapper[4772]: I1122 10:59:05.684002 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0a0e82a7-4323-4948-9e3c-3dc8a9df0c87-credential-keys\") pod \"keystone-bootstrap-xnbhf\" (UID: \"0a0e82a7-4323-4948-9e3c-3dc8a9df0c87\") " pod="openstack/keystone-bootstrap-xnbhf" Nov 22 10:59:05 crc kubenswrapper[4772]: I1122 10:59:05.690176 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a0e82a7-4323-4948-9e3c-3dc8a9df0c87-config-data\") pod \"keystone-bootstrap-xnbhf\" (UID: \"0a0e82a7-4323-4948-9e3c-3dc8a9df0c87\") " pod="openstack/keystone-bootstrap-xnbhf" Nov 22 10:59:05 crc kubenswrapper[4772]: I1122 10:59:05.690208 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a0e82a7-4323-4948-9e3c-3dc8a9df0c87-combined-ca-bundle\") pod \"keystone-bootstrap-xnbhf\" (UID: \"0a0e82a7-4323-4948-9e3c-3dc8a9df0c87\") " pod="openstack/keystone-bootstrap-xnbhf" Nov 22 10:59:05 crc kubenswrapper[4772]: I1122 
10:59:05.690176 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0a0e82a7-4323-4948-9e3c-3dc8a9df0c87-credential-keys\") pod \"keystone-bootstrap-xnbhf\" (UID: \"0a0e82a7-4323-4948-9e3c-3dc8a9df0c87\") " pod="openstack/keystone-bootstrap-xnbhf" Nov 22 10:59:05 crc kubenswrapper[4772]: I1122 10:59:05.690628 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0a0e82a7-4323-4948-9e3c-3dc8a9df0c87-fernet-keys\") pod \"keystone-bootstrap-xnbhf\" (UID: \"0a0e82a7-4323-4948-9e3c-3dc8a9df0c87\") " pod="openstack/keystone-bootstrap-xnbhf" Nov 22 10:59:05 crc kubenswrapper[4772]: I1122 10:59:05.692773 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a0e82a7-4323-4948-9e3c-3dc8a9df0c87-scripts\") pod \"keystone-bootstrap-xnbhf\" (UID: \"0a0e82a7-4323-4948-9e3c-3dc8a9df0c87\") " pod="openstack/keystone-bootstrap-xnbhf" Nov 22 10:59:05 crc kubenswrapper[4772]: I1122 10:59:05.700826 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xkwl\" (UniqueName: \"kubernetes.io/projected/0a0e82a7-4323-4948-9e3c-3dc8a9df0c87-kube-api-access-9xkwl\") pod \"keystone-bootstrap-xnbhf\" (UID: \"0a0e82a7-4323-4948-9e3c-3dc8a9df0c87\") " pod="openstack/keystone-bootstrap-xnbhf" Nov 22 10:59:05 crc kubenswrapper[4772]: I1122 10:59:05.707913 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xnbhf" Nov 22 10:59:06 crc kubenswrapper[4772]: I1122 10:59:06.059758 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-68dcc9cf6f-hnd54" Nov 22 10:59:06 crc kubenswrapper[4772]: I1122 10:59:06.124169 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-clgxb"] Nov 22 10:59:06 crc kubenswrapper[4772]: I1122 10:59:06.125962 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-clgxb" podUID="1909bd69-e033-40b5-90f0-03e4450145fb" containerName="dnsmasq-dns" containerID="cri-o://456e29bca2f06f23d536a9e206d271ef6b8ca54495f688e208283901957f924c" gracePeriod=10 Nov 22 10:59:06 crc kubenswrapper[4772]: I1122 10:59:06.660921 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-6mvdg"] Nov 22 10:59:06 crc kubenswrapper[4772]: I1122 10:59:06.664002 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-6mvdg" Nov 22 10:59:06 crc kubenswrapper[4772]: I1122 10:59:06.668375 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-6rrs9" Nov 22 10:59:06 crc kubenswrapper[4772]: I1122 10:59:06.672011 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 22 10:59:06 crc kubenswrapper[4772]: I1122 10:59:06.672490 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 22 10:59:06 crc kubenswrapper[4772]: I1122 10:59:06.677707 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-6mvdg"] Nov 22 10:59:06 crc kubenswrapper[4772]: I1122 10:59:06.716672 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfb3b51a-da06-4a18-bc47-225aa06fff04-combined-ca-bundle\") pod \"cinder-db-sync-6mvdg\" (UID: \"dfb3b51a-da06-4a18-bc47-225aa06fff04\") " pod="openstack/cinder-db-sync-6mvdg" Nov 22 10:59:06 crc kubenswrapper[4772]: I1122 10:59:06.716771 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfb3b51a-da06-4a18-bc47-225aa06fff04-config-data\") pod \"cinder-db-sync-6mvdg\" (UID: \"dfb3b51a-da06-4a18-bc47-225aa06fff04\") " pod="openstack/cinder-db-sync-6mvdg" Nov 22 10:59:06 crc kubenswrapper[4772]: I1122 10:59:06.716805 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/dfb3b51a-da06-4a18-bc47-225aa06fff04-etc-machine-id\") pod \"cinder-db-sync-6mvdg\" (UID: \"dfb3b51a-da06-4a18-bc47-225aa06fff04\") " pod="openstack/cinder-db-sync-6mvdg" Nov 22 10:59:06 crc kubenswrapper[4772]: I1122 10:59:06.716855 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/dfb3b51a-da06-4a18-bc47-225aa06fff04-db-sync-config-data\") pod \"cinder-db-sync-6mvdg\" (UID: \"dfb3b51a-da06-4a18-bc47-225aa06fff04\") " pod="openstack/cinder-db-sync-6mvdg" Nov 22 10:59:06 crc kubenswrapper[4772]: I1122 10:59:06.716880 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dfb3b51a-da06-4a18-bc47-225aa06fff04-scripts\") pod \"cinder-db-sync-6mvdg\" (UID: \"dfb3b51a-da06-4a18-bc47-225aa06fff04\") " pod="openstack/cinder-db-sync-6mvdg" Nov 22 10:59:06 crc kubenswrapper[4772]: I1122 10:59:06.716903 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rktqf\" (UniqueName: \"kubernetes.io/projected/dfb3b51a-da06-4a18-bc47-225aa06fff04-kube-api-access-rktqf\") pod \"cinder-db-sync-6mvdg\" (UID: \"dfb3b51a-da06-4a18-bc47-225aa06fff04\") " pod="openstack/cinder-db-sync-6mvdg" Nov 22 10:59:06 crc kubenswrapper[4772]: I1122 10:59:06.777102 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-xnbhf"] Nov 22 10:59:06 crc kubenswrapper[4772]: I1122 10:59:06.820131 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfb3b51a-da06-4a18-bc47-225aa06fff04-config-data\") pod \"cinder-db-sync-6mvdg\" (UID: \"dfb3b51a-da06-4a18-bc47-225aa06fff04\") " 
pod="openstack/cinder-db-sync-6mvdg" Nov 22 10:59:06 crc kubenswrapper[4772]: I1122 10:59:06.820179 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/dfb3b51a-da06-4a18-bc47-225aa06fff04-etc-machine-id\") pod \"cinder-db-sync-6mvdg\" (UID: \"dfb3b51a-da06-4a18-bc47-225aa06fff04\") " pod="openstack/cinder-db-sync-6mvdg" Nov 22 10:59:06 crc kubenswrapper[4772]: I1122 10:59:06.820226 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/dfb3b51a-da06-4a18-bc47-225aa06fff04-db-sync-config-data\") pod \"cinder-db-sync-6mvdg\" (UID: \"dfb3b51a-da06-4a18-bc47-225aa06fff04\") " pod="openstack/cinder-db-sync-6mvdg" Nov 22 10:59:06 crc kubenswrapper[4772]: I1122 10:59:06.820243 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dfb3b51a-da06-4a18-bc47-225aa06fff04-scripts\") pod \"cinder-db-sync-6mvdg\" (UID: \"dfb3b51a-da06-4a18-bc47-225aa06fff04\") " pod="openstack/cinder-db-sync-6mvdg" Nov 22 10:59:06 crc kubenswrapper[4772]: I1122 10:59:06.820267 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rktqf\" (UniqueName: \"kubernetes.io/projected/dfb3b51a-da06-4a18-bc47-225aa06fff04-kube-api-access-rktqf\") pod \"cinder-db-sync-6mvdg\" (UID: \"dfb3b51a-da06-4a18-bc47-225aa06fff04\") " pod="openstack/cinder-db-sync-6mvdg" Nov 22 10:59:06 crc kubenswrapper[4772]: I1122 10:59:06.820322 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfb3b51a-da06-4a18-bc47-225aa06fff04-combined-ca-bundle\") pod \"cinder-db-sync-6mvdg\" (UID: \"dfb3b51a-da06-4a18-bc47-225aa06fff04\") " pod="openstack/cinder-db-sync-6mvdg" Nov 22 10:59:06 crc kubenswrapper[4772]: I1122 10:59:06.820374 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/dfb3b51a-da06-4a18-bc47-225aa06fff04-etc-machine-id\") pod \"cinder-db-sync-6mvdg\" (UID: \"dfb3b51a-da06-4a18-bc47-225aa06fff04\") " pod="openstack/cinder-db-sync-6mvdg" Nov 22 10:59:06 crc kubenswrapper[4772]: I1122 10:59:06.825690 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/dfb3b51a-da06-4a18-bc47-225aa06fff04-db-sync-config-data\") pod \"cinder-db-sync-6mvdg\" (UID: \"dfb3b51a-da06-4a18-bc47-225aa06fff04\") " pod="openstack/cinder-db-sync-6mvdg" Nov 22 10:59:06 crc kubenswrapper[4772]: I1122 10:59:06.829921 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfb3b51a-da06-4a18-bc47-225aa06fff04-config-data\") pod \"cinder-db-sync-6mvdg\" (UID: \"dfb3b51a-da06-4a18-bc47-225aa06fff04\") " pod="openstack/cinder-db-sync-6mvdg" Nov 22 10:59:06 crc kubenswrapper[4772]: I1122 10:59:06.832788 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dfb3b51a-da06-4a18-bc47-225aa06fff04-scripts\") pod \"cinder-db-sync-6mvdg\" (UID: \"dfb3b51a-da06-4a18-bc47-225aa06fff04\") " pod="openstack/cinder-db-sync-6mvdg" Nov 22 10:59:06 crc kubenswrapper[4772]: I1122 10:59:06.833426 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/dfb3b51a-da06-4a18-bc47-225aa06fff04-combined-ca-bundle\") pod \"cinder-db-sync-6mvdg\" (UID: \"dfb3b51a-da06-4a18-bc47-225aa06fff04\") " pod="openstack/cinder-db-sync-6mvdg" Nov 22 10:59:06 crc kubenswrapper[4772]: I1122 10:59:06.843978 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rktqf\" (UniqueName: \"kubernetes.io/projected/dfb3b51a-da06-4a18-bc47-225aa06fff04-kube-api-access-rktqf\") pod \"cinder-db-sync-6mvdg\" (UID: \"dfb3b51a-da06-4a18-bc47-225aa06fff04\") " pod="openstack/cinder-db-sync-6mvdg" Nov 22 10:59:06 crc kubenswrapper[4772]: I1122 10:59:06.896612 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-6tjbt"] Nov 22 10:59:06 crc kubenswrapper[4772]: I1122 10:59:06.897769 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-6tjbt" Nov 22 10:59:06 crc kubenswrapper[4772]: I1122 10:59:06.902567 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-pxhcj" Nov 22 10:59:06 crc kubenswrapper[4772]: I1122 10:59:06.902831 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 22 10:59:06 crc kubenswrapper[4772]: I1122 10:59:06.904682 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-6tjbt"] Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.047652 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-6mvdg" Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.053292 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/146226df-eb2f-4fd3-a175-bccca5de564e-db-sync-config-data\") pod \"barbican-db-sync-6tjbt\" (UID: \"146226df-eb2f-4fd3-a175-bccca5de564e\") " pod="openstack/barbican-db-sync-6tjbt" Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.053358 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/146226df-eb2f-4fd3-a175-bccca5de564e-combined-ca-bundle\") pod \"barbican-db-sync-6tjbt\" (UID: \"146226df-eb2f-4fd3-a175-bccca5de564e\") " pod="openstack/barbican-db-sync-6tjbt" Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.053403 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bgld\" (UniqueName: \"kubernetes.io/projected/146226df-eb2f-4fd3-a175-bccca5de564e-kube-api-access-4bgld\") pod \"barbican-db-sync-6tjbt\" (UID: \"146226df-eb2f-4fd3-a175-bccca5de564e\") " pod="openstack/barbican-db-sync-6tjbt" Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.137328 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-9mzxx"] Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.140677 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-9mzxx" Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.143843 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.144092 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-fldv2" Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.145484 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.150605 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-9mzxx"] Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.155739 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/146226df-eb2f-4fd3-a175-bccca5de564e-db-sync-config-data\") pod \"barbican-db-sync-6tjbt\" (UID: \"146226df-eb2f-4fd3-a175-bccca5de564e\") " pod="openstack/barbican-db-sync-6tjbt" Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.155795 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-549qt\" (UniqueName: \"kubernetes.io/projected/fb489d0e-dc04-4a25-8e89-ec9ede81a3cb-kube-api-access-549qt\") pod \"neutron-db-sync-9mzxx\" (UID: \"fb489d0e-dc04-4a25-8e89-ec9ede81a3cb\") " pod="openstack/neutron-db-sync-9mzxx" Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.155827 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/146226df-eb2f-4fd3-a175-bccca5de564e-combined-ca-bundle\") pod \"barbican-db-sync-6tjbt\" (UID: \"146226df-eb2f-4fd3-a175-bccca5de564e\") " pod="openstack/barbican-db-sync-6tjbt" Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.155854 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bgld\" (UniqueName: \"kubernetes.io/projected/146226df-eb2f-4fd3-a175-bccca5de564e-kube-api-access-4bgld\") pod \"barbican-db-sync-6tjbt\" (UID: \"146226df-eb2f-4fd3-a175-bccca5de564e\") " pod="openstack/barbican-db-sync-6tjbt" Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.155880 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb489d0e-dc04-4a25-8e89-ec9ede81a3cb-combined-ca-bundle\") pod \"neutron-db-sync-9mzxx\" (UID: \"fb489d0e-dc04-4a25-8e89-ec9ede81a3cb\") " pod="openstack/neutron-db-sync-9mzxx" Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.155900 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fb489d0e-dc04-4a25-8e89-ec9ede81a3cb-config\") pod \"neutron-db-sync-9mzxx\" (UID: \"fb489d0e-dc04-4a25-8e89-ec9ede81a3cb\") " pod="openstack/neutron-db-sync-9mzxx" Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.161622 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/146226df-eb2f-4fd3-a175-bccca5de564e-combined-ca-bundle\") pod \"barbican-db-sync-6tjbt\" (UID: \"146226df-eb2f-4fd3-a175-bccca5de564e\") " pod="openstack/barbican-db-sync-6tjbt" Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.162856 4772 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/146226df-eb2f-4fd3-a175-bccca5de564e-db-sync-config-data\") pod \"barbican-db-sync-6tjbt\" (UID: \"146226df-eb2f-4fd3-a175-bccca5de564e\") " pod="openstack/barbican-db-sync-6tjbt" Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.181395 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bgld\" (UniqueName: \"kubernetes.io/projected/146226df-eb2f-4fd3-a175-bccca5de564e-kube-api-access-4bgld\") pod \"barbican-db-sync-6tjbt\" (UID: \"146226df-eb2f-4fd3-a175-bccca5de564e\") " pod="openstack/barbican-db-sync-6tjbt" Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.257563 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb489d0e-dc04-4a25-8e89-ec9ede81a3cb-combined-ca-bundle\") pod \"neutron-db-sync-9mzxx\" (UID: \"fb489d0e-dc04-4a25-8e89-ec9ede81a3cb\") " pod="openstack/neutron-db-sync-9mzxx" Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.257609 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fb489d0e-dc04-4a25-8e89-ec9ede81a3cb-config\") pod \"neutron-db-sync-9mzxx\" (UID: \"fb489d0e-dc04-4a25-8e89-ec9ede81a3cb\") " pod="openstack/neutron-db-sync-9mzxx" Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.257739 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-549qt\" (UniqueName: \"kubernetes.io/projected/fb489d0e-dc04-4a25-8e89-ec9ede81a3cb-kube-api-access-549qt\") pod \"neutron-db-sync-9mzxx\" (UID: \"fb489d0e-dc04-4a25-8e89-ec9ede81a3cb\") " pod="openstack/neutron-db-sync-9mzxx" Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.266685 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb489d0e-dc04-4a25-8e89-ec9ede81a3cb-combined-ca-bundle\") pod \"neutron-db-sync-9mzxx\" (UID: \"fb489d0e-dc04-4a25-8e89-ec9ede81a3cb\") " pod="openstack/neutron-db-sync-9mzxx" Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.268643 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/fb489d0e-dc04-4a25-8e89-ec9ede81a3cb-config\") pod \"neutron-db-sync-9mzxx\" (UID: \"fb489d0e-dc04-4a25-8e89-ec9ede81a3cb\") " pod="openstack/neutron-db-sync-9mzxx" Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.275448 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-549qt\" (UniqueName: \"kubernetes.io/projected/fb489d0e-dc04-4a25-8e89-ec9ede81a3cb-kube-api-access-549qt\") pod \"neutron-db-sync-9mzxx\" (UID: \"fb489d0e-dc04-4a25-8e89-ec9ede81a3cb\") " pod="openstack/neutron-db-sync-9mzxx" Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.276238 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-6tjbt" Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.333452 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-clgxb" Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.347679 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"354e52a7-830a-43a1-ad15-a13fe2a07222","Type":"ContainerStarted","Data":"aa61d5f2be67c3162272b709160c16d2b8cb7b6652be46a7ec677336065aa1ac"} Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.362797 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xnbhf" event={"ID":"0a0e82a7-4323-4948-9e3c-3dc8a9df0c87","Type":"ContainerStarted","Data":"dcc33aafdc47ca5ac2ef453217b69b646363b496ad0166daca19a981bf6f1d77"} Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.366724 4772 generic.go:334] "Generic (PLEG): container finished" podID="1909bd69-e033-40b5-90f0-03e4450145fb" containerID="456e29bca2f06f23d536a9e206d271ef6b8ca54495f688e208283901957f924c" exitCode=0 Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.366765 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-clgxb" event={"ID":"1909bd69-e033-40b5-90f0-03e4450145fb","Type":"ContainerDied","Data":"456e29bca2f06f23d536a9e206d271ef6b8ca54495f688e208283901957f924c"} Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.366793 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-clgxb" event={"ID":"1909bd69-e033-40b5-90f0-03e4450145fb","Type":"ContainerDied","Data":"b609d3b257083b7ee74f66c25df81bac338c2653548e1212f20decf48ac2d433"} Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.366809 4772 scope.go:117] "RemoveContainer" containerID="456e29bca2f06f23d536a9e206d271ef6b8ca54495f688e208283901957f924c" Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.366926 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-clgxb" Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.394118 4772 scope.go:117] "RemoveContainer" containerID="c1b7b816b6b055180114c0036a913f70778ce3c3b898e951b91bc80d451cbbbd" Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.434393 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=22.411832514 podStartE2EDuration="1m7.434367674s" podCreationTimestamp="2025-11-22 10:58:00 +0000 UTC" firstStartedPulling="2025-11-22 10:58:18.791233399 +0000 UTC m=+1219.030677893" lastFinishedPulling="2025-11-22 10:59:03.813768559 +0000 UTC m=+1264.053213053" observedRunningTime="2025-11-22 10:59:07.405978331 +0000 UTC m=+1267.645422825" watchObservedRunningTime="2025-11-22 10:59:07.434367674 +0000 UTC m=+1267.673812178" Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.463909 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1909bd69-e033-40b5-90f0-03e4450145fb-ovsdbserver-sb\") pod \"1909bd69-e033-40b5-90f0-03e4450145fb\" (UID: \"1909bd69-e033-40b5-90f0-03e4450145fb\") " Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.464014 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1909bd69-e033-40b5-90f0-03e4450145fb-config\") pod \"1909bd69-e033-40b5-90f0-03e4450145fb\" (UID: \"1909bd69-e033-40b5-90f0-03e4450145fb\") " Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.464080 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1909bd69-e033-40b5-90f0-03e4450145fb-dns-svc\") pod \"1909bd69-e033-40b5-90f0-03e4450145fb\" (UID: \"1909bd69-e033-40b5-90f0-03e4450145fb\") " Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.464205 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1909bd69-e033-40b5-90f0-03e4450145fb-ovsdbserver-nb\") pod \"1909bd69-e033-40b5-90f0-03e4450145fb\" (UID: \"1909bd69-e033-40b5-90f0-03e4450145fb\") " Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.464252 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqltv\" (UniqueName: \"kubernetes.io/projected/1909bd69-e033-40b5-90f0-03e4450145fb-kube-api-access-kqltv\") pod \"1909bd69-e033-40b5-90f0-03e4450145fb\" (UID: \"1909bd69-e033-40b5-90f0-03e4450145fb\") " Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.469436 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-9mzxx" Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.470069 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1909bd69-e033-40b5-90f0-03e4450145fb-kube-api-access-kqltv" (OuterVolumeSpecName: "kube-api-access-kqltv") pod "1909bd69-e033-40b5-90f0-03e4450145fb" (UID: "1909bd69-e033-40b5-90f0-03e4450145fb"). InnerVolumeSpecName "kube-api-access-kqltv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.472316 4772 scope.go:117] "RemoveContainer" containerID="456e29bca2f06f23d536a9e206d271ef6b8ca54495f688e208283901957f924c" Nov 22 10:59:07 crc kubenswrapper[4772]: E1122 10:59:07.474237 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"456e29bca2f06f23d536a9e206d271ef6b8ca54495f688e208283901957f924c\": container with ID starting with 456e29bca2f06f23d536a9e206d271ef6b8ca54495f688e208283901957f924c not found: ID does not exist" containerID="456e29bca2f06f23d536a9e206d271ef6b8ca54495f688e208283901957f924c" Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.474297 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"456e29bca2f06f23d536a9e206d271ef6b8ca54495f688e208283901957f924c"} err="failed to get container status \"456e29bca2f06f23d536a9e206d271ef6b8ca54495f688e208283901957f924c\": rpc error: code = NotFound desc = could not find container \"456e29bca2f06f23d536a9e206d271ef6b8ca54495f688e208283901957f924c\": container with ID starting with 456e29bca2f06f23d536a9e206d271ef6b8ca54495f688e208283901957f924c not found: ID does not exist" Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.474330 4772 scope.go:117] "RemoveContainer" containerID="c1b7b816b6b055180114c0036a913f70778ce3c3b898e951b91bc80d451cbbbd" Nov 22 10:59:07 crc kubenswrapper[4772]: E1122 10:59:07.474626 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1b7b816b6b055180114c0036a913f70778ce3c3b898e951b91bc80d451cbbbd\": container with ID starting with c1b7b816b6b055180114c0036a913f70778ce3c3b898e951b91bc80d451cbbbd not found: ID does not exist" containerID="c1b7b816b6b055180114c0036a913f70778ce3c3b898e951b91bc80d451cbbbd" Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.474656 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1b7b816b6b055180114c0036a913f70778ce3c3b898e951b91bc80d451cbbbd"} err="failed to get container status \"c1b7b816b6b055180114c0036a913f70778ce3c3b898e951b91bc80d451cbbbd\": rpc error: code = NotFound desc = could not find container \"c1b7b816b6b055180114c0036a913f70778ce3c3b898e951b91bc80d451cbbbd\": container with ID starting with c1b7b816b6b055180114c0036a913f70778ce3c3b898e951b91bc80d451cbbbd not found: ID does not exist" Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.546851 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1909bd69-e033-40b5-90f0-03e4450145fb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1909bd69-e033-40b5-90f0-03e4450145fb" (UID: "1909bd69-e033-40b5-90f0-03e4450145fb"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.551416 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-6mvdg"] Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.554856 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1909bd69-e033-40b5-90f0-03e4450145fb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1909bd69-e033-40b5-90f0-03e4450145fb" (UID: "1909bd69-e033-40b5-90f0-03e4450145fb"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.565085 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1909bd69-e033-40b5-90f0-03e4450145fb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1909bd69-e033-40b5-90f0-03e4450145fb" (UID: "1909bd69-e033-40b5-90f0-03e4450145fb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.566449 4772 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1909bd69-e033-40b5-90f0-03e4450145fb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.566477 4772 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1909bd69-e033-40b5-90f0-03e4450145fb-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.566492 4772 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1909bd69-e033-40b5-90f0-03e4450145fb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.566505 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqltv\" (UniqueName: \"kubernetes.io/projected/1909bd69-e033-40b5-90f0-03e4450145fb-kube-api-access-kqltv\") on node \"crc\" DevicePath \"\"" Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.573855 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1909bd69-e033-40b5-90f0-03e4450145fb-config" (OuterVolumeSpecName: "config") pod "1909bd69-e033-40b5-90f0-03e4450145fb" (UID: "1909bd69-e033-40b5-90f0-03e4450145fb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.668624 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1909bd69-e033-40b5-90f0-03e4450145fb-config\") on node \"crc\" DevicePath \"\"" Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.732029 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-w4qcg"] Nov 22 10:59:07 crc kubenswrapper[4772]: E1122 10:59:07.732525 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1909bd69-e033-40b5-90f0-03e4450145fb" containerName="init" Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.732538 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="1909bd69-e033-40b5-90f0-03e4450145fb" containerName="init" Nov 22 10:59:07 crc kubenswrapper[4772]: E1122 10:59:07.732562 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1909bd69-e033-40b5-90f0-03e4450145fb" containerName="dnsmasq-dns" Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.732568 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="1909bd69-e033-40b5-90f0-03e4450145fb" containerName="dnsmasq-dns" Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.732842 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="1909bd69-e033-40b5-90f0-03e4450145fb" containerName="dnsmasq-dns" Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.733947 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76fcf4b695-w4qcg" Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.738375 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.755123 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-w4qcg"] Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.822690 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-clgxb"] Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.829527 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-clgxb"] Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.836003 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-6tjbt"] Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.871818 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/16102b8a-b1aa-43a6-b372-a042584f7279-ovsdbserver-nb\") pod \"dnsmasq-dns-76fcf4b695-w4qcg\" (UID: \"16102b8a-b1aa-43a6-b372-a042584f7279\") " pod="openstack/dnsmasq-dns-76fcf4b695-w4qcg" Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.872171 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/16102b8a-b1aa-43a6-b372-a042584f7279-dns-swift-storage-0\") pod \"dnsmasq-dns-76fcf4b695-w4qcg\" (UID: \"16102b8a-b1aa-43a6-b372-a042584f7279\") " pod="openstack/dnsmasq-dns-76fcf4b695-w4qcg" Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.872360 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8h6cq\" (UniqueName: \"kubernetes.io/projected/16102b8a-b1aa-43a6-b372-a042584f7279-kube-api-access-8h6cq\") pod \"dnsmasq-dns-76fcf4b695-w4qcg\" (UID: \"16102b8a-b1aa-43a6-b372-a042584f7279\") " pod="openstack/dnsmasq-dns-76fcf4b695-w4qcg" Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.872464 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16102b8a-b1aa-43a6-b372-a042584f7279-dns-svc\") pod \"dnsmasq-dns-76fcf4b695-w4qcg\" (UID: \"16102b8a-b1aa-43a6-b372-a042584f7279\") " pod="openstack/dnsmasq-dns-76fcf4b695-w4qcg" Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.872644 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/16102b8a-b1aa-43a6-b372-a042584f7279-ovsdbserver-sb\") pod \"dnsmasq-dns-76fcf4b695-w4qcg\" (UID: \"16102b8a-b1aa-43a6-b372-a042584f7279\") " pod="openstack/dnsmasq-dns-76fcf4b695-w4qcg" Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.872816 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16102b8a-b1aa-43a6-b372-a042584f7279-config\") pod \"dnsmasq-dns-76fcf4b695-w4qcg\" (UID: \"16102b8a-b1aa-43a6-b372-a042584f7279\") " pod="openstack/dnsmasq-dns-76fcf4b695-w4qcg" Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.975168 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/16102b8a-b1aa-43a6-b372-a042584f7279-ovsdbserver-sb\") pod \"dnsmasq-dns-76fcf4b695-w4qcg\" (UID: \"16102b8a-b1aa-43a6-b372-a042584f7279\") " pod="openstack/dnsmasq-dns-76fcf4b695-w4qcg" Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.975318 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16102b8a-b1aa-43a6-b372-a042584f7279-config\") pod \"dnsmasq-dns-76fcf4b695-w4qcg\" (UID: \"16102b8a-b1aa-43a6-b372-a042584f7279\") " pod="openstack/dnsmasq-dns-76fcf4b695-w4qcg" Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.975347 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/16102b8a-b1aa-43a6-b372-a042584f7279-ovsdbserver-nb\") pod \"dnsmasq-dns-76fcf4b695-w4qcg\" (UID: \"16102b8a-b1aa-43a6-b372-a042584f7279\") " pod="openstack/dnsmasq-dns-76fcf4b695-w4qcg" Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.975397 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/16102b8a-b1aa-43a6-b372-a042584f7279-dns-swift-storage-0\") pod \"dnsmasq-dns-76fcf4b695-w4qcg\" (UID: \"16102b8a-b1aa-43a6-b372-a042584f7279\") " pod="openstack/dnsmasq-dns-76fcf4b695-w4qcg" Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.975443 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8h6cq\" (UniqueName: \"kubernetes.io/projected/16102b8a-b1aa-43a6-b372-a042584f7279-kube-api-access-8h6cq\") pod \"dnsmasq-dns-76fcf4b695-w4qcg\" (UID: \"16102b8a-b1aa-43a6-b372-a042584f7279\") " pod="openstack/dnsmasq-dns-76fcf4b695-w4qcg" Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.975488 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16102b8a-b1aa-43a6-b372-a042584f7279-dns-svc\") pod \"dnsmasq-dns-76fcf4b695-w4qcg\" (UID: \"16102b8a-b1aa-43a6-b372-a042584f7279\") " pod="openstack/dnsmasq-dns-76fcf4b695-w4qcg" Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.976716 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16102b8a-b1aa-43a6-b372-a042584f7279-dns-svc\") pod \"dnsmasq-dns-76fcf4b695-w4qcg\" (UID: \"16102b8a-b1aa-43a6-b372-a042584f7279\") " pod="openstack/dnsmasq-dns-76fcf4b695-w4qcg" Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.977031 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/16102b8a-b1aa-43a6-b372-a042584f7279-ovsdbserver-sb\") pod \"dnsmasq-dns-76fcf4b695-w4qcg\" (UID: \"16102b8a-b1aa-43a6-b372-a042584f7279\") " pod="openstack/dnsmasq-dns-76fcf4b695-w4qcg" Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.977672 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16102b8a-b1aa-43a6-b372-a042584f7279-config\") pod \"dnsmasq-dns-76fcf4b695-w4qcg\" (UID: \"16102b8a-b1aa-43a6-b372-a042584f7279\") " pod="openstack/dnsmasq-dns-76fcf4b695-w4qcg" Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.978696 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/16102b8a-b1aa-43a6-b372-a042584f7279-ovsdbserver-nb\") pod \"dnsmasq-dns-76fcf4b695-w4qcg\" 
(UID: \"16102b8a-b1aa-43a6-b372-a042584f7279\") " pod="openstack/dnsmasq-dns-76fcf4b695-w4qcg" Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.978760 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/16102b8a-b1aa-43a6-b372-a042584f7279-dns-swift-storage-0\") pod \"dnsmasq-dns-76fcf4b695-w4qcg\" (UID: \"16102b8a-b1aa-43a6-b372-a042584f7279\") " pod="openstack/dnsmasq-dns-76fcf4b695-w4qcg" Nov 22 10:59:07 crc kubenswrapper[4772]: I1122 10:59:07.995908 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8h6cq\" (UniqueName: \"kubernetes.io/projected/16102b8a-b1aa-43a6-b372-a042584f7279-kube-api-access-8h6cq\") pod \"dnsmasq-dns-76fcf4b695-w4qcg\" (UID: \"16102b8a-b1aa-43a6-b372-a042584f7279\") " pod="openstack/dnsmasq-dns-76fcf4b695-w4qcg" Nov 22 10:59:08 crc kubenswrapper[4772]: I1122 10:59:08.018410 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-9mzxx"] Nov 22 10:59:08 crc kubenswrapper[4772]: I1122 10:59:08.117918 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76fcf4b695-w4qcg" Nov 22 10:59:08 crc kubenswrapper[4772]: I1122 10:59:08.377909 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-6mvdg" event={"ID":"dfb3b51a-da06-4a18-bc47-225aa06fff04","Type":"ContainerStarted","Data":"1e39e5a74a67af0aa3470c7d3736b924d127e6de8b05e3f56afd40c0a34bd6ad"} Nov 22 10:59:08 crc kubenswrapper[4772]: I1122 10:59:08.380232 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xnbhf" event={"ID":"0a0e82a7-4323-4948-9e3c-3dc8a9df0c87","Type":"ContainerStarted","Data":"f465d380c2327b6cd250787f627fb5fa4acf31137a261406640758672be2bd67"} Nov 22 10:59:08 crc kubenswrapper[4772]: I1122 10:59:08.400382 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-xnbhf" podStartSLOduration=3.400362111 podStartE2EDuration="3.400362111s" podCreationTimestamp="2025-11-22 10:59:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:59:08.39599593 +0000 UTC m=+1268.635440424" watchObservedRunningTime="2025-11-22 10:59:08.400362111 +0000 UTC m=+1268.639806615" Nov 22 10:59:09 crc kubenswrapper[4772]: W1122 10:59:09.036895 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb489d0e_dc04_4a25_8e89_ec9ede81a3cb.slice/crio-9ef28084ddd0cc402021285ea95a9abfc3c2e2f5fddc3cf21e5dba947e11dce2 WatchSource:0}: Error finding container 9ef28084ddd0cc402021285ea95a9abfc3c2e2f5fddc3cf21e5dba947e11dce2: Status 404 returned error can't find the container with id 9ef28084ddd0cc402021285ea95a9abfc3c2e2f5fddc3cf21e5dba947e11dce2 Nov 22 10:59:09 crc kubenswrapper[4772]: I1122 10:59:09.393809 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-6tjbt" event={"ID":"146226df-eb2f-4fd3-a175-bccca5de564e","Type":"ContainerStarted","Data":"40ad5979f2bf9848781e4440850db1580934fc17422d9d8969a86d0e8d907af7"} Nov 22 10:59:09 crc kubenswrapper[4772]: I1122 10:59:09.396163 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-9mzxx" 
event={"ID":"fb489d0e-dc04-4a25-8e89-ec9ede81a3cb","Type":"ContainerStarted","Data":"9ef28084ddd0cc402021285ea95a9abfc3c2e2f5fddc3cf21e5dba947e11dce2"} Nov 22 10:59:09 crc kubenswrapper[4772]: I1122 10:59:09.431916 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1909bd69-e033-40b5-90f0-03e4450145fb" path="/var/lib/kubelet/pods/1909bd69-e033-40b5-90f0-03e4450145fb/volumes" Nov 22 10:59:09 crc kubenswrapper[4772]: I1122 10:59:09.547750 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-w4qcg"] Nov 22 10:59:09 crc kubenswrapper[4772]: W1122 10:59:09.554681 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod16102b8a_b1aa_43a6_b372_a042584f7279.slice/crio-e396f396008ce3f5c784bc5b9c823f8d8217dd60a807562eed62b53d9927a629 WatchSource:0}: Error finding container e396f396008ce3f5c784bc5b9c823f8d8217dd60a807562eed62b53d9927a629: Status 404 returned error can't find the container with id e396f396008ce3f5c784bc5b9c823f8d8217dd60a807562eed62b53d9927a629 Nov 22 10:59:10 crc kubenswrapper[4772]: I1122 10:59:10.414657 4772 generic.go:334] "Generic (PLEG): container finished" podID="dadab934-440c-4b33-8f4a-1790b0040061" containerID="07c2563361cad796b9ee4cc769b13a47e6b055bb75f24278b879c0e2e480714e" exitCode=0 Nov 22 10:59:10 crc kubenswrapper[4772]: I1122 10:59:10.414748 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-tzx4g" event={"ID":"dadab934-440c-4b33-8f4a-1790b0040061","Type":"ContainerDied","Data":"07c2563361cad796b9ee4cc769b13a47e6b055bb75f24278b879c0e2e480714e"} Nov 22 10:59:10 crc kubenswrapper[4772]: I1122 10:59:10.417959 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d8c352bf-9815-42e1-8944-87a22e24c355","Type":"ContainerStarted","Data":"53cea1f8353a8be9d5473072e40fcd172bef9fc1aae10e67a4a07e1bb1251d66"} Nov 22 10:59:10 crc kubenswrapper[4772]: I1122 10:59:10.423705 4772 generic.go:334] "Generic (PLEG): container finished" podID="16102b8a-b1aa-43a6-b372-a042584f7279" containerID="eb722a4822d7b2d62af9307137dd9b1084a78f49a081ec79fae0e31daf9e7a93" exitCode=0 Nov 22 10:59:10 crc kubenswrapper[4772]: I1122 10:59:10.423979 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76fcf4b695-w4qcg" event={"ID":"16102b8a-b1aa-43a6-b372-a042584f7279","Type":"ContainerDied","Data":"eb722a4822d7b2d62af9307137dd9b1084a78f49a081ec79fae0e31daf9e7a93"} Nov 22 10:59:10 crc kubenswrapper[4772]: I1122 10:59:10.424096 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76fcf4b695-w4qcg" event={"ID":"16102b8a-b1aa-43a6-b372-a042584f7279","Type":"ContainerStarted","Data":"e396f396008ce3f5c784bc5b9c823f8d8217dd60a807562eed62b53d9927a629"} Nov 22 10:59:10 crc kubenswrapper[4772]: I1122 10:59:10.437780 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-9mzxx" event={"ID":"fb489d0e-dc04-4a25-8e89-ec9ede81a3cb","Type":"ContainerStarted","Data":"22e2f7cca48f17c69e56813ca2b405c534cda48058fd9a577adcf8d1882eb7e1"} Nov 22 10:59:10 crc kubenswrapper[4772]: I1122 10:59:10.481895 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-9mzxx" podStartSLOduration=3.4818715989999998 podStartE2EDuration="3.481871599s" podCreationTimestamp="2025-11-22 10:59:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2025-11-22 10:59:10.473461035 +0000 UTC m=+1270.712905529" watchObservedRunningTime="2025-11-22 10:59:10.481871599 +0000 UTC m=+1270.721316093" Nov 22 10:59:11 crc kubenswrapper[4772]: I1122 10:59:11.461359 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76fcf4b695-w4qcg" event={"ID":"16102b8a-b1aa-43a6-b372-a042584f7279","Type":"ContainerStarted","Data":"419852505e37afcd042d6210f307969de385e47d7f236ec304780e99ef2ce87e"} Nov 22 10:59:11 crc kubenswrapper[4772]: I1122 10:59:11.463260 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-76fcf4b695-w4qcg" Nov 22 10:59:11 crc kubenswrapper[4772]: I1122 10:59:11.486295 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-76fcf4b695-w4qcg" podStartSLOduration=4.486275656 podStartE2EDuration="4.486275656s" podCreationTimestamp="2025-11-22 10:59:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:59:11.483815343 +0000 UTC m=+1271.723259847" watchObservedRunningTime="2025-11-22 10:59:11.486275656 +0000 UTC m=+1271.725720150" Nov 22 10:59:11 crc kubenswrapper[4772]: I1122 10:59:11.818543 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-tzx4g" Nov 22 10:59:11 crc kubenswrapper[4772]: I1122 10:59:11.961933 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dadab934-440c-4b33-8f4a-1790b0040061-combined-ca-bundle\") pod \"dadab934-440c-4b33-8f4a-1790b0040061\" (UID: \"dadab934-440c-4b33-8f4a-1790b0040061\") " Nov 22 10:59:11 crc kubenswrapper[4772]: I1122 10:59:11.962077 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hwqw\" (UniqueName: \"kubernetes.io/projected/dadab934-440c-4b33-8f4a-1790b0040061-kube-api-access-2hwqw\") pod \"dadab934-440c-4b33-8f4a-1790b0040061\" (UID: \"dadab934-440c-4b33-8f4a-1790b0040061\") " Nov 22 10:59:11 crc kubenswrapper[4772]: I1122 10:59:11.962128 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dadab934-440c-4b33-8f4a-1790b0040061-logs\") pod \"dadab934-440c-4b33-8f4a-1790b0040061\" (UID: \"dadab934-440c-4b33-8f4a-1790b0040061\") " Nov 22 10:59:11 crc kubenswrapper[4772]: I1122 10:59:11.962157 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dadab934-440c-4b33-8f4a-1790b0040061-scripts\") pod \"dadab934-440c-4b33-8f4a-1790b0040061\" (UID: \"dadab934-440c-4b33-8f4a-1790b0040061\") " Nov 22 10:59:11 crc kubenswrapper[4772]: I1122 10:59:11.962197 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dadab934-440c-4b33-8f4a-1790b0040061-config-data\") pod \"dadab934-440c-4b33-8f4a-1790b0040061\" (UID: \"dadab934-440c-4b33-8f4a-1790b0040061\") " Nov 22 10:59:11 crc kubenswrapper[4772]: I1122 10:59:11.963423 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dadab934-440c-4b33-8f4a-1790b0040061-logs" (OuterVolumeSpecName: "logs") pod "dadab934-440c-4b33-8f4a-1790b0040061" (UID: "dadab934-440c-4b33-8f4a-1790b0040061"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 10:59:11 crc kubenswrapper[4772]: I1122 10:59:11.967593 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dadab934-440c-4b33-8f4a-1790b0040061-kube-api-access-2hwqw" (OuterVolumeSpecName: "kube-api-access-2hwqw") pod "dadab934-440c-4b33-8f4a-1790b0040061" (UID: "dadab934-440c-4b33-8f4a-1790b0040061"). InnerVolumeSpecName "kube-api-access-2hwqw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:59:11 crc kubenswrapper[4772]: I1122 10:59:11.967692 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dadab934-440c-4b33-8f4a-1790b0040061-scripts" (OuterVolumeSpecName: "scripts") pod "dadab934-440c-4b33-8f4a-1790b0040061" (UID: "dadab934-440c-4b33-8f4a-1790b0040061"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:59:11 crc kubenswrapper[4772]: I1122 10:59:11.985779 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dadab934-440c-4b33-8f4a-1790b0040061-config-data" (OuterVolumeSpecName: "config-data") pod "dadab934-440c-4b33-8f4a-1790b0040061" (UID: "dadab934-440c-4b33-8f4a-1790b0040061"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:59:11 crc kubenswrapper[4772]: I1122 10:59:11.987426 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dadab934-440c-4b33-8f4a-1790b0040061-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dadab934-440c-4b33-8f4a-1790b0040061" (UID: "dadab934-440c-4b33-8f4a-1790b0040061"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:59:12 crc kubenswrapper[4772]: I1122 10:59:12.063616 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dadab934-440c-4b33-8f4a-1790b0040061-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 10:59:12 crc kubenswrapper[4772]: I1122 10:59:12.063646 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dadab934-440c-4b33-8f4a-1790b0040061-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 10:59:12 crc kubenswrapper[4772]: I1122 10:59:12.063661 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hwqw\" (UniqueName: \"kubernetes.io/projected/dadab934-440c-4b33-8f4a-1790b0040061-kube-api-access-2hwqw\") on node \"crc\" DevicePath \"\"" Nov 22 10:59:12 crc kubenswrapper[4772]: I1122 10:59:12.063673 4772 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dadab934-440c-4b33-8f4a-1790b0040061-logs\") on node \"crc\" DevicePath \"\"" Nov 22 10:59:12 crc kubenswrapper[4772]: I1122 10:59:12.063684 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dadab934-440c-4b33-8f4a-1790b0040061-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 10:59:12 crc kubenswrapper[4772]: I1122 10:59:12.473704 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-tzx4g" event={"ID":"dadab934-440c-4b33-8f4a-1790b0040061","Type":"ContainerDied","Data":"2de16663dd9728838cbeaf70333cdc20a3d451befd7675a577df24a774cf024c"} Nov 22 10:59:12 crc kubenswrapper[4772]: I1122 10:59:12.473746 4772 pod_container_deletor.go:80] "Container not found in 
pod's containers" containerID="2de16663dd9728838cbeaf70333cdc20a3d451befd7675a577df24a774cf024c" Nov 22 10:59:12 crc kubenswrapper[4772]: I1122 10:59:12.473713 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-tzx4g" Nov 22 10:59:12 crc kubenswrapper[4772]: I1122 10:59:12.529963 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5cd786c776-rmj8k"] Nov 22 10:59:12 crc kubenswrapper[4772]: E1122 10:59:12.530417 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dadab934-440c-4b33-8f4a-1790b0040061" containerName="placement-db-sync" Nov 22 10:59:12 crc kubenswrapper[4772]: I1122 10:59:12.530432 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="dadab934-440c-4b33-8f4a-1790b0040061" containerName="placement-db-sync" Nov 22 10:59:12 crc kubenswrapper[4772]: I1122 10:59:12.530664 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="dadab934-440c-4b33-8f4a-1790b0040061" containerName="placement-db-sync" Nov 22 10:59:12 crc kubenswrapper[4772]: I1122 10:59:12.531583 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5cd786c776-rmj8k" Nov 22 10:59:12 crc kubenswrapper[4772]: I1122 10:59:12.533268 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 22 10:59:12 crc kubenswrapper[4772]: I1122 10:59:12.534141 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Nov 22 10:59:12 crc kubenswrapper[4772]: I1122 10:59:12.534161 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-2dm2d" Nov 22 10:59:12 crc kubenswrapper[4772]: I1122 10:59:12.534278 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 22 10:59:12 crc kubenswrapper[4772]: I1122 10:59:12.534413 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Nov 22 10:59:12 crc kubenswrapper[4772]: I1122 10:59:12.542091 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5cd786c776-rmj8k"] Nov 22 10:59:12 crc kubenswrapper[4772]: I1122 10:59:12.676552 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7-config-data\") pod \"placement-5cd786c776-rmj8k\" (UID: \"ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7\") " pod="openstack/placement-5cd786c776-rmj8k" Nov 22 10:59:12 crc kubenswrapper[4772]: I1122 10:59:12.676654 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7-public-tls-certs\") pod \"placement-5cd786c776-rmj8k\" (UID: \"ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7\") " pod="openstack/placement-5cd786c776-rmj8k" Nov 22 10:59:12 crc kubenswrapper[4772]: I1122 10:59:12.676702 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7-scripts\") pod \"placement-5cd786c776-rmj8k\" (UID: \"ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7\") " pod="openstack/placement-5cd786c776-rmj8k" Nov 22 10:59:12 crc kubenswrapper[4772]: I1122 10:59:12.676745 4772 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7-internal-tls-certs\") pod \"placement-5cd786c776-rmj8k\" (UID: \"ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7\") " pod="openstack/placement-5cd786c776-rmj8k" Nov 22 10:59:12 crc kubenswrapper[4772]: I1122 10:59:12.676776 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7-combined-ca-bundle\") pod \"placement-5cd786c776-rmj8k\" (UID: \"ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7\") " pod="openstack/placement-5cd786c776-rmj8k" Nov 22 10:59:12 crc kubenswrapper[4772]: I1122 10:59:12.676804 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqdwg\" (UniqueName: \"kubernetes.io/projected/ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7-kube-api-access-bqdwg\") pod \"placement-5cd786c776-rmj8k\" (UID: \"ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7\") " pod="openstack/placement-5cd786c776-rmj8k" Nov 22 10:59:12 crc kubenswrapper[4772]: I1122 10:59:12.676890 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7-logs\") pod \"placement-5cd786c776-rmj8k\" (UID: \"ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7\") " pod="openstack/placement-5cd786c776-rmj8k" Nov 22 10:59:12 crc kubenswrapper[4772]: I1122 10:59:12.778772 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7-combined-ca-bundle\") pod \"placement-5cd786c776-rmj8k\" (UID: \"ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7\") " pod="openstack/placement-5cd786c776-rmj8k" Nov 22 10:59:12 crc kubenswrapper[4772]: I1122 10:59:12.778815 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqdwg\" (UniqueName: \"kubernetes.io/projected/ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7-kube-api-access-bqdwg\") pod \"placement-5cd786c776-rmj8k\" (UID: \"ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7\") " pod="openstack/placement-5cd786c776-rmj8k" Nov 22 10:59:12 crc kubenswrapper[4772]: I1122 10:59:12.778884 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7-logs\") pod \"placement-5cd786c776-rmj8k\" (UID: \"ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7\") " pod="openstack/placement-5cd786c776-rmj8k" Nov 22 10:59:12 crc kubenswrapper[4772]: I1122 10:59:12.778919 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7-config-data\") pod \"placement-5cd786c776-rmj8k\" (UID: \"ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7\") " pod="openstack/placement-5cd786c776-rmj8k" Nov 22 10:59:12 crc kubenswrapper[4772]: I1122 10:59:12.778965 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7-public-tls-certs\") pod \"placement-5cd786c776-rmj8k\" (UID: \"ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7\") " pod="openstack/placement-5cd786c776-rmj8k" Nov 22 10:59:12 crc kubenswrapper[4772]: I1122 10:59:12.778992 4772 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7-scripts\") pod \"placement-5cd786c776-rmj8k\" (UID: \"ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7\") " pod="openstack/placement-5cd786c776-rmj8k" Nov 22 10:59:12 crc kubenswrapper[4772]: I1122 10:59:12.779018 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7-internal-tls-certs\") pod \"placement-5cd786c776-rmj8k\" (UID: \"ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7\") " pod="openstack/placement-5cd786c776-rmj8k" Nov 22 10:59:12 crc kubenswrapper[4772]: I1122 10:59:12.780644 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7-logs\") pod \"placement-5cd786c776-rmj8k\" (UID: \"ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7\") " pod="openstack/placement-5cd786c776-rmj8k" Nov 22 10:59:12 crc kubenswrapper[4772]: I1122 10:59:12.784345 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7-public-tls-certs\") pod \"placement-5cd786c776-rmj8k\" (UID: \"ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7\") " pod="openstack/placement-5cd786c776-rmj8k" Nov 22 10:59:12 crc kubenswrapper[4772]: I1122 10:59:12.785004 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7-combined-ca-bundle\") pod \"placement-5cd786c776-rmj8k\" (UID: \"ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7\") " pod="openstack/placement-5cd786c776-rmj8k" Nov 22 10:59:12 crc kubenswrapper[4772]: I1122 10:59:12.785644 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7-config-data\") pod \"placement-5cd786c776-rmj8k\" (UID: \"ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7\") " pod="openstack/placement-5cd786c776-rmj8k" Nov 22 10:59:12 crc kubenswrapper[4772]: I1122 10:59:12.786172 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7-scripts\") pod \"placement-5cd786c776-rmj8k\" (UID: \"ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7\") " pod="openstack/placement-5cd786c776-rmj8k" Nov 22 10:59:12 crc kubenswrapper[4772]: I1122 10:59:12.794344 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7-internal-tls-certs\") pod \"placement-5cd786c776-rmj8k\" (UID: \"ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7\") " pod="openstack/placement-5cd786c776-rmj8k" Nov 22 10:59:12 crc kubenswrapper[4772]: I1122 10:59:12.806893 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqdwg\" (UniqueName: \"kubernetes.io/projected/ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7-kube-api-access-bqdwg\") pod \"placement-5cd786c776-rmj8k\" (UID: \"ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7\") " pod="openstack/placement-5cd786c776-rmj8k" Nov 22 10:59:12 crc kubenswrapper[4772]: I1122 10:59:12.867527 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5cd786c776-rmj8k" Nov 22 10:59:13 crc kubenswrapper[4772]: I1122 10:59:13.379020 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5cd786c776-rmj8k"] Nov 22 10:59:13 crc kubenswrapper[4772]: W1122 10:59:13.391121 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec4f25b4_f811_4d5a_8d3e_00e8adbd5bc7.slice/crio-c4b8d10e4960a23d8266b692f1975f266a727e1c5d93dba9ff2f57317ee82d59 WatchSource:0}: Error finding container c4b8d10e4960a23d8266b692f1975f266a727e1c5d93dba9ff2f57317ee82d59: Status 404 returned error can't find the container with id c4b8d10e4960a23d8266b692f1975f266a727e1c5d93dba9ff2f57317ee82d59 Nov 22 10:59:13 crc kubenswrapper[4772]: I1122 10:59:13.483314 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5cd786c776-rmj8k" event={"ID":"ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7","Type":"ContainerStarted","Data":"c4b8d10e4960a23d8266b692f1975f266a727e1c5d93dba9ff2f57317ee82d59"} Nov 22 10:59:15 crc kubenswrapper[4772]: I1122 10:59:15.500144 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5cd786c776-rmj8k" event={"ID":"ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7","Type":"ContainerStarted","Data":"b95964ad218161628ba9d6df7f28f6d82327565a7c20500e4082e1b8d7b0c9c3"} Nov 22 10:59:16 crc kubenswrapper[4772]: I1122 10:59:16.513762 4772 generic.go:334] "Generic (PLEG): container finished" podID="0a0e82a7-4323-4948-9e3c-3dc8a9df0c87" containerID="f465d380c2327b6cd250787f627fb5fa4acf31137a261406640758672be2bd67" exitCode=0 Nov 22 10:59:16 crc kubenswrapper[4772]: I1122 10:59:16.513871 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xnbhf" event={"ID":"0a0e82a7-4323-4948-9e3c-3dc8a9df0c87","Type":"ContainerDied","Data":"f465d380c2327b6cd250787f627fb5fa4acf31137a261406640758672be2bd67"} Nov 22 10:59:18 crc kubenswrapper[4772]: I1122 10:59:18.120302 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-76fcf4b695-w4qcg" Nov 22 10:59:18 crc kubenswrapper[4772]: I1122 10:59:18.171924 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-hnd54"] Nov 22 10:59:18 crc kubenswrapper[4772]: I1122 10:59:18.172455 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-68dcc9cf6f-hnd54" podUID="93964ee5-e17a-4689-9fb7-a68e5542b91b" containerName="dnsmasq-dns" containerID="cri-o://5b382f99d6a32e22ab31c0364ec79664de10152f7be9b901a5494a01633fa59c" gracePeriod=10 Nov 22 10:59:19 crc kubenswrapper[4772]: I1122 10:59:19.561006 4772 generic.go:334] "Generic (PLEG): container finished" podID="93964ee5-e17a-4689-9fb7-a68e5542b91b" containerID="5b382f99d6a32e22ab31c0364ec79664de10152f7be9b901a5494a01633fa59c" exitCode=0 Nov 22 10:59:19 crc kubenswrapper[4772]: I1122 10:59:19.561398 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68dcc9cf6f-hnd54" event={"ID":"93964ee5-e17a-4689-9fb7-a68e5542b91b","Type":"ContainerDied","Data":"5b382f99d6a32e22ab31c0364ec79664de10152f7be9b901a5494a01633fa59c"} Nov 22 10:59:21 crc kubenswrapper[4772]: I1122 10:59:21.058388 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-68dcc9cf6f-hnd54" podUID="93964ee5-e17a-4689-9fb7-a68e5542b91b" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.135:5353: connect: connection 
refused" Nov 22 10:59:22 crc kubenswrapper[4772]: I1122 10:59:22.585531 4772 generic.go:334] "Generic (PLEG): container finished" podID="1c352191-5a61-4dc6-ba16-6c82cb0fdedf" containerID="308573355d3c23e3680c3cf9e647dd107983fe8aae70c703e63d2815a1d712b3" exitCode=0 Nov 22 10:59:22 crc kubenswrapper[4772]: I1122 10:59:22.585612 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-mr98f" event={"ID":"1c352191-5a61-4dc6-ba16-6c82cb0fdedf","Type":"ContainerDied","Data":"308573355d3c23e3680c3cf9e647dd107983fe8aae70c703e63d2815a1d712b3"} Nov 22 10:59:26 crc kubenswrapper[4772]: I1122 10:59:26.058800 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-68dcc9cf6f-hnd54" podUID="93964ee5-e17a-4689-9fb7-a68e5542b91b" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.135:5353: connect: connection refused" Nov 22 10:59:29 crc kubenswrapper[4772]: E1122 10:59:29.426423 4772 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Nov 22 10:59:29 crc kubenswrapper[4772]: E1122 10:59:29.427109 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rktqf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fi
le,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-6mvdg_openstack(dfb3b51a-da06-4a18-bc47-225aa06fff04): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 10:59:29 crc kubenswrapper[4772]: E1122 10:59:29.428255 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-6mvdg" podUID="dfb3b51a-da06-4a18-bc47-225aa06fff04" Nov 22 10:59:29 crc kubenswrapper[4772]: E1122 10:59:29.650031 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-6mvdg" podUID="dfb3b51a-da06-4a18-bc47-225aa06fff04" Nov 22 10:59:30 crc kubenswrapper[4772]: E1122 10:59:30.746976 4772 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Nov 22 10:59:30 crc kubenswrapper[4772]: E1122 10:59:30.747626 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4bgld,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-6tjbt_openstack(146226df-eb2f-4fd3-a175-bccca5de564e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 10:59:30 crc kubenswrapper[4772]: E1122 10:59:30.749202 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/barbican-db-sync-6tjbt" podUID="146226df-eb2f-4fd3-a175-bccca5de564e" Nov 22 10:59:30 crc kubenswrapper[4772]: I1122 10:59:30.838507 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xnbhf" Nov 22 10:59:30 crc kubenswrapper[4772]: I1122 10:59:30.844490 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-mr98f" Nov 22 10:59:30 crc kubenswrapper[4772]: I1122 10:59:30.961402 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a0e82a7-4323-4948-9e3c-3dc8a9df0c87-combined-ca-bundle\") pod \"0a0e82a7-4323-4948-9e3c-3dc8a9df0c87\" (UID: \"0a0e82a7-4323-4948-9e3c-3dc8a9df0c87\") " Nov 22 10:59:30 crc kubenswrapper[4772]: I1122 10:59:30.961792 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0a0e82a7-4323-4948-9e3c-3dc8a9df0c87-fernet-keys\") pod \"0a0e82a7-4323-4948-9e3c-3dc8a9df0c87\" (UID: \"0a0e82a7-4323-4948-9e3c-3dc8a9df0c87\") " Nov 22 10:59:30 crc kubenswrapper[4772]: I1122 10:59:30.961821 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a0e82a7-4323-4948-9e3c-3dc8a9df0c87-config-data\") pod \"0a0e82a7-4323-4948-9e3c-3dc8a9df0c87\" (UID: \"0a0e82a7-4323-4948-9e3c-3dc8a9df0c87\") " Nov 22 10:59:30 crc kubenswrapper[4772]: I1122 10:59:30.961861 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c352191-5a61-4dc6-ba16-6c82cb0fdedf-config-data\") pod \"1c352191-5a61-4dc6-ba16-6c82cb0fdedf\" (UID: \"1c352191-5a61-4dc6-ba16-6c82cb0fdedf\") " Nov 22 10:59:30 crc kubenswrapper[4772]: I1122 10:59:30.961926 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1c352191-5a61-4dc6-ba16-6c82cb0fdedf-db-sync-config-data\") pod \"1c352191-5a61-4dc6-ba16-6c82cb0fdedf\" (UID: \"1c352191-5a61-4dc6-ba16-6c82cb0fdedf\") " Nov 22 10:59:30 crc kubenswrapper[4772]: I1122 10:59:30.961957 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5vl2\" (UniqueName: \"kubernetes.io/projected/1c352191-5a61-4dc6-ba16-6c82cb0fdedf-kube-api-access-m5vl2\") pod \"1c352191-5a61-4dc6-ba16-6c82cb0fdedf\" (UID: \"1c352191-5a61-4dc6-ba16-6c82cb0fdedf\") " Nov 22 10:59:30 crc kubenswrapper[4772]: I1122 10:59:30.962013 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0a0e82a7-4323-4948-9e3c-3dc8a9df0c87-credential-keys\") pod \"0a0e82a7-4323-4948-9e3c-3dc8a9df0c87\" (UID: \"0a0e82a7-4323-4948-9e3c-3dc8a9df0c87\") " Nov 22 10:59:30 crc kubenswrapper[4772]: I1122 10:59:30.962097 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xkwl\" (UniqueName: \"kubernetes.io/projected/0a0e82a7-4323-4948-9e3c-3dc8a9df0c87-kube-api-access-9xkwl\") pod \"0a0e82a7-4323-4948-9e3c-3dc8a9df0c87\" (UID: \"0a0e82a7-4323-4948-9e3c-3dc8a9df0c87\") " Nov 22 10:59:30 crc kubenswrapper[4772]: I1122 10:59:30.962142 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1c352191-5a61-4dc6-ba16-6c82cb0fdedf-combined-ca-bundle\") pod \"1c352191-5a61-4dc6-ba16-6c82cb0fdedf\" (UID: \"1c352191-5a61-4dc6-ba16-6c82cb0fdedf\") " Nov 22 10:59:30 crc kubenswrapper[4772]: I1122 10:59:30.962196 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a0e82a7-4323-4948-9e3c-3dc8a9df0c87-scripts\") pod \"0a0e82a7-4323-4948-9e3c-3dc8a9df0c87\" (UID: \"0a0e82a7-4323-4948-9e3c-3dc8a9df0c87\") " Nov 22 10:59:30 crc kubenswrapper[4772]: I1122 10:59:30.966565 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a0e82a7-4323-4948-9e3c-3dc8a9df0c87-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "0a0e82a7-4323-4948-9e3c-3dc8a9df0c87" (UID: "0a0e82a7-4323-4948-9e3c-3dc8a9df0c87"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:59:30 crc kubenswrapper[4772]: I1122 10:59:30.967080 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a0e82a7-4323-4948-9e3c-3dc8a9df0c87-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "0a0e82a7-4323-4948-9e3c-3dc8a9df0c87" (UID: "0a0e82a7-4323-4948-9e3c-3dc8a9df0c87"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:59:30 crc kubenswrapper[4772]: I1122 10:59:30.967402 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a0e82a7-4323-4948-9e3c-3dc8a9df0c87-kube-api-access-9xkwl" (OuterVolumeSpecName: "kube-api-access-9xkwl") pod "0a0e82a7-4323-4948-9e3c-3dc8a9df0c87" (UID: "0a0e82a7-4323-4948-9e3c-3dc8a9df0c87"). InnerVolumeSpecName "kube-api-access-9xkwl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:59:30 crc kubenswrapper[4772]: I1122 10:59:30.967916 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c352191-5a61-4dc6-ba16-6c82cb0fdedf-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "1c352191-5a61-4dc6-ba16-6c82cb0fdedf" (UID: "1c352191-5a61-4dc6-ba16-6c82cb0fdedf"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:59:30 crc kubenswrapper[4772]: I1122 10:59:30.968230 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c352191-5a61-4dc6-ba16-6c82cb0fdedf-kube-api-access-m5vl2" (OuterVolumeSpecName: "kube-api-access-m5vl2") pod "1c352191-5a61-4dc6-ba16-6c82cb0fdedf" (UID: "1c352191-5a61-4dc6-ba16-6c82cb0fdedf"). InnerVolumeSpecName "kube-api-access-m5vl2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:59:30 crc kubenswrapper[4772]: I1122 10:59:30.969369 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a0e82a7-4323-4948-9e3c-3dc8a9df0c87-scripts" (OuterVolumeSpecName: "scripts") pod "0a0e82a7-4323-4948-9e3c-3dc8a9df0c87" (UID: "0a0e82a7-4323-4948-9e3c-3dc8a9df0c87"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:59:30 crc kubenswrapper[4772]: I1122 10:59:30.993389 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c352191-5a61-4dc6-ba16-6c82cb0fdedf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1c352191-5a61-4dc6-ba16-6c82cb0fdedf" (UID: "1c352191-5a61-4dc6-ba16-6c82cb0fdedf"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:59:30 crc kubenswrapper[4772]: I1122 10:59:30.995670 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a0e82a7-4323-4948-9e3c-3dc8a9df0c87-config-data" (OuterVolumeSpecName: "config-data") pod "0a0e82a7-4323-4948-9e3c-3dc8a9df0c87" (UID: "0a0e82a7-4323-4948-9e3c-3dc8a9df0c87"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:59:31 crc kubenswrapper[4772]: I1122 10:59:30.999903 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a0e82a7-4323-4948-9e3c-3dc8a9df0c87-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0a0e82a7-4323-4948-9e3c-3dc8a9df0c87" (UID: "0a0e82a7-4323-4948-9e3c-3dc8a9df0c87"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:59:31 crc kubenswrapper[4772]: I1122 10:59:31.035881 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c352191-5a61-4dc6-ba16-6c82cb0fdedf-config-data" (OuterVolumeSpecName: "config-data") pod "1c352191-5a61-4dc6-ba16-6c82cb0fdedf" (UID: "1c352191-5a61-4dc6-ba16-6c82cb0fdedf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:59:31 crc kubenswrapper[4772]: I1122 10:59:31.064102 4772 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1c352191-5a61-4dc6-ba16-6c82cb0fdedf-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 10:59:31 crc kubenswrapper[4772]: I1122 10:59:31.064137 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5vl2\" (UniqueName: \"kubernetes.io/projected/1c352191-5a61-4dc6-ba16-6c82cb0fdedf-kube-api-access-m5vl2\") on node \"crc\" DevicePath \"\"" Nov 22 10:59:31 crc kubenswrapper[4772]: I1122 10:59:31.064149 4772 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0a0e82a7-4323-4948-9e3c-3dc8a9df0c87-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 22 10:59:31 crc kubenswrapper[4772]: I1122 10:59:31.064158 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xkwl\" (UniqueName: \"kubernetes.io/projected/0a0e82a7-4323-4948-9e3c-3dc8a9df0c87-kube-api-access-9xkwl\") on node \"crc\" DevicePath \"\"" Nov 22 10:59:31 crc kubenswrapper[4772]: I1122 10:59:31.064167 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c352191-5a61-4dc6-ba16-6c82cb0fdedf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 10:59:31 crc kubenswrapper[4772]: I1122 10:59:31.064177 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a0e82a7-4323-4948-9e3c-3dc8a9df0c87-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 10:59:31 crc kubenswrapper[4772]: I1122 10:59:31.064185 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a0e82a7-4323-4948-9e3c-3dc8a9df0c87-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 10:59:31 crc kubenswrapper[4772]: I1122 10:59:31.064192 4772 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0a0e82a7-4323-4948-9e3c-3dc8a9df0c87-fernet-keys\") on node \"crc\" DevicePath 
\"\"" Nov 22 10:59:31 crc kubenswrapper[4772]: I1122 10:59:31.064200 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a0e82a7-4323-4948-9e3c-3dc8a9df0c87-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 10:59:31 crc kubenswrapper[4772]: I1122 10:59:31.064208 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c352191-5a61-4dc6-ba16-6c82cb0fdedf-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 10:59:31 crc kubenswrapper[4772]: I1122 10:59:31.669529 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xnbhf" event={"ID":"0a0e82a7-4323-4948-9e3c-3dc8a9df0c87","Type":"ContainerDied","Data":"dcc33aafdc47ca5ac2ef453217b69b646363b496ad0166daca19a981bf6f1d77"} Nov 22 10:59:31 crc kubenswrapper[4772]: I1122 10:59:31.669569 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcc33aafdc47ca5ac2ef453217b69b646363b496ad0166daca19a981bf6f1d77" Nov 22 10:59:31 crc kubenswrapper[4772]: I1122 10:59:31.669837 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xnbhf" Nov 22 10:59:31 crc kubenswrapper[4772]: I1122 10:59:31.673113 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-mr98f" Nov 22 10:59:31 crc kubenswrapper[4772]: I1122 10:59:31.673202 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-mr98f" event={"ID":"1c352191-5a61-4dc6-ba16-6c82cb0fdedf","Type":"ContainerDied","Data":"44177f0432bc00beb55f9bb4ca979c22c959548480cf05acd0c31ca53f378d66"} Nov 22 10:59:31 crc kubenswrapper[4772]: I1122 10:59:31.673232 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44177f0432bc00beb55f9bb4ca979c22c959548480cf05acd0c31ca53f378d66" Nov 22 10:59:31 crc kubenswrapper[4772]: E1122 10:59:31.674824 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-6tjbt" podUID="146226df-eb2f-4fd3-a175-bccca5de564e" Nov 22 10:59:31 crc kubenswrapper[4772]: I1122 10:59:31.946850 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cfdd58ff7-mgd8m"] Nov 22 10:59:31 crc kubenswrapper[4772]: E1122 10:59:31.947287 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c352191-5a61-4dc6-ba16-6c82cb0fdedf" containerName="glance-db-sync" Nov 22 10:59:31 crc kubenswrapper[4772]: I1122 10:59:31.947302 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c352191-5a61-4dc6-ba16-6c82cb0fdedf" containerName="glance-db-sync" Nov 22 10:59:31 crc kubenswrapper[4772]: E1122 10:59:31.947319 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a0e82a7-4323-4948-9e3c-3dc8a9df0c87" containerName="keystone-bootstrap" Nov 22 10:59:31 crc kubenswrapper[4772]: I1122 10:59:31.947326 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a0e82a7-4323-4948-9e3c-3dc8a9df0c87" containerName="keystone-bootstrap" Nov 22 10:59:31 crc kubenswrapper[4772]: I1122 10:59:31.948177 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a0e82a7-4323-4948-9e3c-3dc8a9df0c87" containerName="keystone-bootstrap" Nov 22 10:59:31 crc kubenswrapper[4772]: I1122 
10:59:31.948207 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c352191-5a61-4dc6-ba16-6c82cb0fdedf" containerName="glance-db-sync" Nov 22 10:59:31 crc kubenswrapper[4772]: I1122 10:59:31.948924 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cfdd58ff7-mgd8m" Nov 22 10:59:31 crc kubenswrapper[4772]: I1122 10:59:31.953255 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 22 10:59:31 crc kubenswrapper[4772]: I1122 10:59:31.953366 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 22 10:59:31 crc kubenswrapper[4772]: I1122 10:59:31.953255 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Nov 22 10:59:31 crc kubenswrapper[4772]: I1122 10:59:31.953763 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Nov 22 10:59:31 crc kubenswrapper[4772]: I1122 10:59:31.954365 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-x8vg6" Nov 22 10:59:31 crc kubenswrapper[4772]: I1122 10:59:31.954349 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 22 10:59:31 crc kubenswrapper[4772]: I1122 10:59:31.987741 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/020f49e7-c73f-460c-a068-75051e73cf90-scripts\") pod \"keystone-cfdd58ff7-mgd8m\" (UID: \"020f49e7-c73f-460c-a068-75051e73cf90\") " pod="openstack/keystone-cfdd58ff7-mgd8m" Nov 22 10:59:31 crc kubenswrapper[4772]: I1122 10:59:31.991413 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldtrw\" (UniqueName: \"kubernetes.io/projected/020f49e7-c73f-460c-a068-75051e73cf90-kube-api-access-ldtrw\") pod \"keystone-cfdd58ff7-mgd8m\" (UID: \"020f49e7-c73f-460c-a068-75051e73cf90\") " pod="openstack/keystone-cfdd58ff7-mgd8m" Nov 22 10:59:31 crc kubenswrapper[4772]: I1122 10:59:31.991765 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cfdd58ff7-mgd8m"] Nov 22 10:59:31 crc kubenswrapper[4772]: I1122 10:59:31.996490 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/020f49e7-c73f-460c-a068-75051e73cf90-fernet-keys\") pod \"keystone-cfdd58ff7-mgd8m\" (UID: \"020f49e7-c73f-460c-a068-75051e73cf90\") " pod="openstack/keystone-cfdd58ff7-mgd8m" Nov 22 10:59:31 crc kubenswrapper[4772]: I1122 10:59:31.998501 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/020f49e7-c73f-460c-a068-75051e73cf90-credential-keys\") pod \"keystone-cfdd58ff7-mgd8m\" (UID: \"020f49e7-c73f-460c-a068-75051e73cf90\") " pod="openstack/keystone-cfdd58ff7-mgd8m" Nov 22 10:59:31 crc kubenswrapper[4772]: I1122 10:59:31.998683 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/020f49e7-c73f-460c-a068-75051e73cf90-public-tls-certs\") pod \"keystone-cfdd58ff7-mgd8m\" (UID: \"020f49e7-c73f-460c-a068-75051e73cf90\") " pod="openstack/keystone-cfdd58ff7-mgd8m" Nov 22 10:59:31 crc kubenswrapper[4772]: I1122 10:59:31.998901 4772 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/020f49e7-c73f-460c-a068-75051e73cf90-combined-ca-bundle\") pod \"keystone-cfdd58ff7-mgd8m\" (UID: \"020f49e7-c73f-460c-a068-75051e73cf90\") " pod="openstack/keystone-cfdd58ff7-mgd8m" Nov 22 10:59:31 crc kubenswrapper[4772]: I1122 10:59:31.999236 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/020f49e7-c73f-460c-a068-75051e73cf90-config-data\") pod \"keystone-cfdd58ff7-mgd8m\" (UID: \"020f49e7-c73f-460c-a068-75051e73cf90\") " pod="openstack/keystone-cfdd58ff7-mgd8m" Nov 22 10:59:31 crc kubenswrapper[4772]: I1122 10:59:31.999469 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/020f49e7-c73f-460c-a068-75051e73cf90-internal-tls-certs\") pod \"keystone-cfdd58ff7-mgd8m\" (UID: \"020f49e7-c73f-460c-a068-75051e73cf90\") " pod="openstack/keystone-cfdd58ff7-mgd8m" Nov 22 10:59:32 crc kubenswrapper[4772]: I1122 10:59:32.101130 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/020f49e7-c73f-460c-a068-75051e73cf90-fernet-keys\") pod \"keystone-cfdd58ff7-mgd8m\" (UID: \"020f49e7-c73f-460c-a068-75051e73cf90\") " pod="openstack/keystone-cfdd58ff7-mgd8m" Nov 22 10:59:32 crc kubenswrapper[4772]: I1122 10:59:32.101231 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/020f49e7-c73f-460c-a068-75051e73cf90-credential-keys\") pod \"keystone-cfdd58ff7-mgd8m\" (UID: \"020f49e7-c73f-460c-a068-75051e73cf90\") " pod="openstack/keystone-cfdd58ff7-mgd8m" Nov 22 10:59:32 crc kubenswrapper[4772]: I1122 10:59:32.101260 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/020f49e7-c73f-460c-a068-75051e73cf90-public-tls-certs\") pod \"keystone-cfdd58ff7-mgd8m\" (UID: \"020f49e7-c73f-460c-a068-75051e73cf90\") " pod="openstack/keystone-cfdd58ff7-mgd8m" Nov 22 10:59:32 crc kubenswrapper[4772]: I1122 10:59:32.101281 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/020f49e7-c73f-460c-a068-75051e73cf90-combined-ca-bundle\") pod \"keystone-cfdd58ff7-mgd8m\" (UID: \"020f49e7-c73f-460c-a068-75051e73cf90\") " pod="openstack/keystone-cfdd58ff7-mgd8m" Nov 22 10:59:32 crc kubenswrapper[4772]: I1122 10:59:32.101323 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/020f49e7-c73f-460c-a068-75051e73cf90-config-data\") pod \"keystone-cfdd58ff7-mgd8m\" (UID: \"020f49e7-c73f-460c-a068-75051e73cf90\") " pod="openstack/keystone-cfdd58ff7-mgd8m" Nov 22 10:59:32 crc kubenswrapper[4772]: I1122 10:59:32.101354 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/020f49e7-c73f-460c-a068-75051e73cf90-internal-tls-certs\") pod \"keystone-cfdd58ff7-mgd8m\" (UID: \"020f49e7-c73f-460c-a068-75051e73cf90\") " pod="openstack/keystone-cfdd58ff7-mgd8m" Nov 22 10:59:32 crc kubenswrapper[4772]: I1122 10:59:32.101397 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/020f49e7-c73f-460c-a068-75051e73cf90-scripts\") pod \"keystone-cfdd58ff7-mgd8m\" (UID: \"020f49e7-c73f-460c-a068-75051e73cf90\") " pod="openstack/keystone-cfdd58ff7-mgd8m" Nov 22 10:59:32 crc kubenswrapper[4772]: I1122 10:59:32.101435 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldtrw\" (UniqueName: \"kubernetes.io/projected/020f49e7-c73f-460c-a068-75051e73cf90-kube-api-access-ldtrw\") pod \"keystone-cfdd58ff7-mgd8m\" (UID: \"020f49e7-c73f-460c-a068-75051e73cf90\") " pod="openstack/keystone-cfdd58ff7-mgd8m" Nov 22 10:59:32 crc kubenswrapper[4772]: I1122 10:59:32.109674 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/020f49e7-c73f-460c-a068-75051e73cf90-combined-ca-bundle\") pod \"keystone-cfdd58ff7-mgd8m\" (UID: \"020f49e7-c73f-460c-a068-75051e73cf90\") " pod="openstack/keystone-cfdd58ff7-mgd8m" Nov 22 10:59:32 crc kubenswrapper[4772]: I1122 10:59:32.115745 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/020f49e7-c73f-460c-a068-75051e73cf90-fernet-keys\") pod \"keystone-cfdd58ff7-mgd8m\" (UID: \"020f49e7-c73f-460c-a068-75051e73cf90\") " pod="openstack/keystone-cfdd58ff7-mgd8m" Nov 22 10:59:32 crc kubenswrapper[4772]: I1122 10:59:32.125942 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/020f49e7-c73f-460c-a068-75051e73cf90-config-data\") pod \"keystone-cfdd58ff7-mgd8m\" (UID: \"020f49e7-c73f-460c-a068-75051e73cf90\") " pod="openstack/keystone-cfdd58ff7-mgd8m" Nov 22 10:59:32 crc kubenswrapper[4772]: I1122 10:59:32.126326 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/020f49e7-c73f-460c-a068-75051e73cf90-internal-tls-certs\") pod \"keystone-cfdd58ff7-mgd8m\" (UID: \"020f49e7-c73f-460c-a068-75051e73cf90\") " pod="openstack/keystone-cfdd58ff7-mgd8m" Nov 22 10:59:32 crc kubenswrapper[4772]: I1122 10:59:32.128000 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/020f49e7-c73f-460c-a068-75051e73cf90-credential-keys\") pod \"keystone-cfdd58ff7-mgd8m\" (UID: \"020f49e7-c73f-460c-a068-75051e73cf90\") " pod="openstack/keystone-cfdd58ff7-mgd8m" Nov 22 10:59:32 crc kubenswrapper[4772]: I1122 10:59:32.130456 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldtrw\" (UniqueName: \"kubernetes.io/projected/020f49e7-c73f-460c-a068-75051e73cf90-kube-api-access-ldtrw\") pod \"keystone-cfdd58ff7-mgd8m\" (UID: \"020f49e7-c73f-460c-a068-75051e73cf90\") " pod="openstack/keystone-cfdd58ff7-mgd8m" Nov 22 10:59:32 crc kubenswrapper[4772]: I1122 10:59:32.148113 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/020f49e7-c73f-460c-a068-75051e73cf90-public-tls-certs\") pod \"keystone-cfdd58ff7-mgd8m\" (UID: \"020f49e7-c73f-460c-a068-75051e73cf90\") " pod="openstack/keystone-cfdd58ff7-mgd8m" Nov 22 10:59:32 crc kubenswrapper[4772]: I1122 10:59:32.152785 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/020f49e7-c73f-460c-a068-75051e73cf90-scripts\") pod \"keystone-cfdd58ff7-mgd8m\" (UID: \"020f49e7-c73f-460c-a068-75051e73cf90\") " 
pod="openstack/keystone-cfdd58ff7-mgd8m" Nov 22 10:59:32 crc kubenswrapper[4772]: I1122 10:59:32.286119 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cfdd58ff7-mgd8m" Nov 22 10:59:32 crc kubenswrapper[4772]: I1122 10:59:32.288340 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-p75jw"] Nov 22 10:59:32 crc kubenswrapper[4772]: I1122 10:59:32.290026 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-p75jw" Nov 22 10:59:32 crc kubenswrapper[4772]: I1122 10:59:32.317925 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-p75jw"] Nov 22 10:59:32 crc kubenswrapper[4772]: I1122 10:59:32.421715 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7afc372e-c3cc-4fea-b62d-e5bfc5750fa8-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-p75jw\" (UID: \"7afc372e-c3cc-4fea-b62d-e5bfc5750fa8\") " pod="openstack/dnsmasq-dns-8b5c85b87-p75jw" Nov 22 10:59:32 crc kubenswrapper[4772]: I1122 10:59:32.422727 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7afc372e-c3cc-4fea-b62d-e5bfc5750fa8-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-p75jw\" (UID: \"7afc372e-c3cc-4fea-b62d-e5bfc5750fa8\") " pod="openstack/dnsmasq-dns-8b5c85b87-p75jw" Nov 22 10:59:32 crc kubenswrapper[4772]: I1122 10:59:32.423441 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7afc372e-c3cc-4fea-b62d-e5bfc5750fa8-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-p75jw\" (UID: \"7afc372e-c3cc-4fea-b62d-e5bfc5750fa8\") " pod="openstack/dnsmasq-dns-8b5c85b87-p75jw" Nov 22 10:59:32 crc kubenswrapper[4772]: I1122 10:59:32.423496 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7afc372e-c3cc-4fea-b62d-e5bfc5750fa8-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-p75jw\" (UID: \"7afc372e-c3cc-4fea-b62d-e5bfc5750fa8\") " pod="openstack/dnsmasq-dns-8b5c85b87-p75jw" Nov 22 10:59:32 crc kubenswrapper[4772]: I1122 10:59:32.423521 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afc372e-c3cc-4fea-b62d-e5bfc5750fa8-config\") pod \"dnsmasq-dns-8b5c85b87-p75jw\" (UID: \"7afc372e-c3cc-4fea-b62d-e5bfc5750fa8\") " pod="openstack/dnsmasq-dns-8b5c85b87-p75jw" Nov 22 10:59:32 crc kubenswrapper[4772]: I1122 10:59:32.423579 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mqgc\" (UniqueName: \"kubernetes.io/projected/7afc372e-c3cc-4fea-b62d-e5bfc5750fa8-kube-api-access-5mqgc\") pod \"dnsmasq-dns-8b5c85b87-p75jw\" (UID: \"7afc372e-c3cc-4fea-b62d-e5bfc5750fa8\") " pod="openstack/dnsmasq-dns-8b5c85b87-p75jw" Nov 22 10:59:32 crc kubenswrapper[4772]: I1122 10:59:32.525503 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7afc372e-c3cc-4fea-b62d-e5bfc5750fa8-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-p75jw\" (UID: \"7afc372e-c3cc-4fea-b62d-e5bfc5750fa8\") " pod="openstack/dnsmasq-dns-8b5c85b87-p75jw" Nov 22 
10:59:32 crc kubenswrapper[4772]: I1122 10:59:32.525567 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7afc372e-c3cc-4fea-b62d-e5bfc5750fa8-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-p75jw\" (UID: \"7afc372e-c3cc-4fea-b62d-e5bfc5750fa8\") " pod="openstack/dnsmasq-dns-8b5c85b87-p75jw" Nov 22 10:59:32 crc kubenswrapper[4772]: I1122 10:59:32.525633 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7afc372e-c3cc-4fea-b62d-e5bfc5750fa8-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-p75jw\" (UID: \"7afc372e-c3cc-4fea-b62d-e5bfc5750fa8\") " pod="openstack/dnsmasq-dns-8b5c85b87-p75jw" Nov 22 10:59:32 crc kubenswrapper[4772]: I1122 10:59:32.525674 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7afc372e-c3cc-4fea-b62d-e5bfc5750fa8-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-p75jw\" (UID: \"7afc372e-c3cc-4fea-b62d-e5bfc5750fa8\") " pod="openstack/dnsmasq-dns-8b5c85b87-p75jw" Nov 22 10:59:32 crc kubenswrapper[4772]: I1122 10:59:32.525694 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afc372e-c3cc-4fea-b62d-e5bfc5750fa8-config\") pod \"dnsmasq-dns-8b5c85b87-p75jw\" (UID: \"7afc372e-c3cc-4fea-b62d-e5bfc5750fa8\") " pod="openstack/dnsmasq-dns-8b5c85b87-p75jw" Nov 22 10:59:32 crc kubenswrapper[4772]: I1122 10:59:32.525729 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mqgc\" (UniqueName: \"kubernetes.io/projected/7afc372e-c3cc-4fea-b62d-e5bfc5750fa8-kube-api-access-5mqgc\") pod \"dnsmasq-dns-8b5c85b87-p75jw\" (UID: \"7afc372e-c3cc-4fea-b62d-e5bfc5750fa8\") " pod="openstack/dnsmasq-dns-8b5c85b87-p75jw" Nov 22 10:59:32 crc kubenswrapper[4772]: I1122 10:59:32.527002 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7afc372e-c3cc-4fea-b62d-e5bfc5750fa8-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-p75jw\" (UID: \"7afc372e-c3cc-4fea-b62d-e5bfc5750fa8\") " pod="openstack/dnsmasq-dns-8b5c85b87-p75jw" Nov 22 10:59:32 crc kubenswrapper[4772]: I1122 10:59:32.527964 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7afc372e-c3cc-4fea-b62d-e5bfc5750fa8-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-p75jw\" (UID: \"7afc372e-c3cc-4fea-b62d-e5bfc5750fa8\") " pod="openstack/dnsmasq-dns-8b5c85b87-p75jw" Nov 22 10:59:32 crc kubenswrapper[4772]: I1122 10:59:32.528722 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7afc372e-c3cc-4fea-b62d-e5bfc5750fa8-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-p75jw\" (UID: \"7afc372e-c3cc-4fea-b62d-e5bfc5750fa8\") " pod="openstack/dnsmasq-dns-8b5c85b87-p75jw" Nov 22 10:59:32 crc kubenswrapper[4772]: I1122 10:59:32.529424 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afc372e-c3cc-4fea-b62d-e5bfc5750fa8-config\") pod \"dnsmasq-dns-8b5c85b87-p75jw\" (UID: \"7afc372e-c3cc-4fea-b62d-e5bfc5750fa8\") " pod="openstack/dnsmasq-dns-8b5c85b87-p75jw" Nov 22 10:59:32 crc kubenswrapper[4772]: I1122 10:59:32.529584 4772 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7afc372e-c3cc-4fea-b62d-e5bfc5750fa8-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-p75jw\" (UID: \"7afc372e-c3cc-4fea-b62d-e5bfc5750fa8\") " pod="openstack/dnsmasq-dns-8b5c85b87-p75jw" Nov 22 10:59:32 crc kubenswrapper[4772]: I1122 10:59:32.555122 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mqgc\" (UniqueName: \"kubernetes.io/projected/7afc372e-c3cc-4fea-b62d-e5bfc5750fa8-kube-api-access-5mqgc\") pod \"dnsmasq-dns-8b5c85b87-p75jw\" (UID: \"7afc372e-c3cc-4fea-b62d-e5bfc5750fa8\") " pod="openstack/dnsmasq-dns-8b5c85b87-p75jw" Nov 22 10:59:32 crc kubenswrapper[4772]: I1122 10:59:32.620683 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-p75jw" Nov 22 10:59:33 crc kubenswrapper[4772]: I1122 10:59:33.374078 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 10:59:33 crc kubenswrapper[4772]: I1122 10:59:33.375511 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 22 10:59:33 crc kubenswrapper[4772]: I1122 10:59:33.383210 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 22 10:59:33 crc kubenswrapper[4772]: I1122 10:59:33.383516 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 22 10:59:33 crc kubenswrapper[4772]: I1122 10:59:33.383531 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-9hdhd" Nov 22 10:59:33 crc kubenswrapper[4772]: I1122 10:59:33.393673 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 10:59:33 crc kubenswrapper[4772]: I1122 10:59:33.443754 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cf1dedf-ff9a-4899-a17d-b939e47db57a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1cf1dedf-ff9a-4899-a17d-b939e47db57a\") " pod="openstack/glance-default-external-api-0" Nov 22 10:59:33 crc kubenswrapper[4772]: I1122 10:59:33.443842 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1cf1dedf-ff9a-4899-a17d-b939e47db57a-logs\") pod \"glance-default-external-api-0\" (UID: \"1cf1dedf-ff9a-4899-a17d-b939e47db57a\") " pod="openstack/glance-default-external-api-0" Nov 22 10:59:33 crc kubenswrapper[4772]: I1122 10:59:33.445011 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fvj7\" (UniqueName: \"kubernetes.io/projected/1cf1dedf-ff9a-4899-a17d-b939e47db57a-kube-api-access-7fvj7\") pod \"glance-default-external-api-0\" (UID: \"1cf1dedf-ff9a-4899-a17d-b939e47db57a\") " pod="openstack/glance-default-external-api-0" Nov 22 10:59:33 crc kubenswrapper[4772]: I1122 10:59:33.445123 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"1cf1dedf-ff9a-4899-a17d-b939e47db57a\") " pod="openstack/glance-default-external-api-0" Nov 22 10:59:33 crc kubenswrapper[4772]: I1122 10:59:33.445155 4772 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1cf1dedf-ff9a-4899-a17d-b939e47db57a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1cf1dedf-ff9a-4899-a17d-b939e47db57a\") " pod="openstack/glance-default-external-api-0" Nov 22 10:59:33 crc kubenswrapper[4772]: I1122 10:59:33.445245 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cf1dedf-ff9a-4899-a17d-b939e47db57a-config-data\") pod \"glance-default-external-api-0\" (UID: \"1cf1dedf-ff9a-4899-a17d-b939e47db57a\") " pod="openstack/glance-default-external-api-0" Nov 22 10:59:33 crc kubenswrapper[4772]: I1122 10:59:33.446830 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1cf1dedf-ff9a-4899-a17d-b939e47db57a-scripts\") pod \"glance-default-external-api-0\" (UID: \"1cf1dedf-ff9a-4899-a17d-b939e47db57a\") " pod="openstack/glance-default-external-api-0" Nov 22 10:59:33 crc kubenswrapper[4772]: I1122 10:59:33.548377 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cf1dedf-ff9a-4899-a17d-b939e47db57a-config-data\") pod \"glance-default-external-api-0\" (UID: \"1cf1dedf-ff9a-4899-a17d-b939e47db57a\") " pod="openstack/glance-default-external-api-0" Nov 22 10:59:33 crc kubenswrapper[4772]: I1122 10:59:33.548474 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1cf1dedf-ff9a-4899-a17d-b939e47db57a-scripts\") pod \"glance-default-external-api-0\" (UID: \"1cf1dedf-ff9a-4899-a17d-b939e47db57a\") " pod="openstack/glance-default-external-api-0" Nov 22 10:59:33 crc kubenswrapper[4772]: I1122 10:59:33.548525 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cf1dedf-ff9a-4899-a17d-b939e47db57a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1cf1dedf-ff9a-4899-a17d-b939e47db57a\") " pod="openstack/glance-default-external-api-0" Nov 22 10:59:33 crc kubenswrapper[4772]: I1122 10:59:33.548547 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1cf1dedf-ff9a-4899-a17d-b939e47db57a-logs\") pod \"glance-default-external-api-0\" (UID: \"1cf1dedf-ff9a-4899-a17d-b939e47db57a\") " pod="openstack/glance-default-external-api-0" Nov 22 10:59:33 crc kubenswrapper[4772]: I1122 10:59:33.548594 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fvj7\" (UniqueName: \"kubernetes.io/projected/1cf1dedf-ff9a-4899-a17d-b939e47db57a-kube-api-access-7fvj7\") pod \"glance-default-external-api-0\" (UID: \"1cf1dedf-ff9a-4899-a17d-b939e47db57a\") " pod="openstack/glance-default-external-api-0" Nov 22 10:59:33 crc kubenswrapper[4772]: I1122 10:59:33.548619 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"1cf1dedf-ff9a-4899-a17d-b939e47db57a\") " pod="openstack/glance-default-external-api-0" Nov 22 10:59:33 crc kubenswrapper[4772]: I1122 10:59:33.548637 4772 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1cf1dedf-ff9a-4899-a17d-b939e47db57a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1cf1dedf-ff9a-4899-a17d-b939e47db57a\") " pod="openstack/glance-default-external-api-0" Nov 22 10:59:33 crc kubenswrapper[4772]: I1122 10:59:33.549184 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1cf1dedf-ff9a-4899-a17d-b939e47db57a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1cf1dedf-ff9a-4899-a17d-b939e47db57a\") " pod="openstack/glance-default-external-api-0" Nov 22 10:59:33 crc kubenswrapper[4772]: I1122 10:59:33.549223 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1cf1dedf-ff9a-4899-a17d-b939e47db57a-logs\") pod \"glance-default-external-api-0\" (UID: \"1cf1dedf-ff9a-4899-a17d-b939e47db57a\") " pod="openstack/glance-default-external-api-0" Nov 22 10:59:33 crc kubenswrapper[4772]: I1122 10:59:33.549253 4772 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"1cf1dedf-ff9a-4899-a17d-b939e47db57a\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Nov 22 10:59:33 crc kubenswrapper[4772]: I1122 10:59:33.553015 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cf1dedf-ff9a-4899-a17d-b939e47db57a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1cf1dedf-ff9a-4899-a17d-b939e47db57a\") " pod="openstack/glance-default-external-api-0" Nov 22 10:59:33 crc kubenswrapper[4772]: I1122 10:59:33.562652 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1cf1dedf-ff9a-4899-a17d-b939e47db57a-scripts\") pod \"glance-default-external-api-0\" (UID: \"1cf1dedf-ff9a-4899-a17d-b939e47db57a\") " pod="openstack/glance-default-external-api-0" Nov 22 10:59:33 crc kubenswrapper[4772]: I1122 10:59:33.563294 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cf1dedf-ff9a-4899-a17d-b939e47db57a-config-data\") pod \"glance-default-external-api-0\" (UID: \"1cf1dedf-ff9a-4899-a17d-b939e47db57a\") " pod="openstack/glance-default-external-api-0" Nov 22 10:59:33 crc kubenswrapper[4772]: I1122 10:59:33.569260 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fvj7\" (UniqueName: \"kubernetes.io/projected/1cf1dedf-ff9a-4899-a17d-b939e47db57a-kube-api-access-7fvj7\") pod \"glance-default-external-api-0\" (UID: \"1cf1dedf-ff9a-4899-a17d-b939e47db57a\") " pod="openstack/glance-default-external-api-0" Nov 22 10:59:33 crc kubenswrapper[4772]: I1122 10:59:33.587270 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"1cf1dedf-ff9a-4899-a17d-b939e47db57a\") " pod="openstack/glance-default-external-api-0" Nov 22 10:59:33 crc kubenswrapper[4772]: I1122 10:59:33.706602 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 22 10:59:33 crc kubenswrapper[4772]: I1122 10:59:33.706826 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 10:59:33 crc kubenswrapper[4772]: I1122 10:59:33.709737 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 10:59:33 crc kubenswrapper[4772]: I1122 10:59:33.712853 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 22 10:59:33 crc kubenswrapper[4772]: I1122 10:59:33.723726 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 10:59:33 crc kubenswrapper[4772]: I1122 10:59:33.852692 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f225972-7657-41e4-a018-cb464b1164e9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1f225972-7657-41e4-a018-cb464b1164e9\") " pod="openstack/glance-default-internal-api-0" Nov 22 10:59:33 crc kubenswrapper[4772]: I1122 10:59:33.852772 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f225972-7657-41e4-a018-cb464b1164e9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1f225972-7657-41e4-a018-cb464b1164e9\") " pod="openstack/glance-default-internal-api-0" Nov 22 10:59:33 crc kubenswrapper[4772]: I1122 10:59:33.852978 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1f225972-7657-41e4-a018-cb464b1164e9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1f225972-7657-41e4-a018-cb464b1164e9\") " pod="openstack/glance-default-internal-api-0" Nov 22 10:59:33 crc kubenswrapper[4772]: I1122 10:59:33.853058 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f225972-7657-41e4-a018-cb464b1164e9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1f225972-7657-41e4-a018-cb464b1164e9\") " pod="openstack/glance-default-internal-api-0" Nov 22 10:59:33 crc kubenswrapper[4772]: I1122 10:59:33.853131 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f225972-7657-41e4-a018-cb464b1164e9-logs\") pod \"glance-default-internal-api-0\" (UID: \"1f225972-7657-41e4-a018-cb464b1164e9\") " pod="openstack/glance-default-internal-api-0" Nov 22 10:59:33 crc kubenswrapper[4772]: I1122 10:59:33.853207 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6xr5\" (UniqueName: \"kubernetes.io/projected/1f225972-7657-41e4-a018-cb464b1164e9-kube-api-access-b6xr5\") pod \"glance-default-internal-api-0\" (UID: \"1f225972-7657-41e4-a018-cb464b1164e9\") " pod="openstack/glance-default-internal-api-0" Nov 22 10:59:33 crc kubenswrapper[4772]: I1122 10:59:33.853333 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"1f225972-7657-41e4-a018-cb464b1164e9\") " 
pod="openstack/glance-default-internal-api-0" Nov 22 10:59:33 crc kubenswrapper[4772]: I1122 10:59:33.955198 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6xr5\" (UniqueName: \"kubernetes.io/projected/1f225972-7657-41e4-a018-cb464b1164e9-kube-api-access-b6xr5\") pod \"glance-default-internal-api-0\" (UID: \"1f225972-7657-41e4-a018-cb464b1164e9\") " pod="openstack/glance-default-internal-api-0" Nov 22 10:59:33 crc kubenswrapper[4772]: I1122 10:59:33.955284 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"1f225972-7657-41e4-a018-cb464b1164e9\") " pod="openstack/glance-default-internal-api-0" Nov 22 10:59:33 crc kubenswrapper[4772]: I1122 10:59:33.955312 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f225972-7657-41e4-a018-cb464b1164e9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1f225972-7657-41e4-a018-cb464b1164e9\") " pod="openstack/glance-default-internal-api-0" Nov 22 10:59:33 crc kubenswrapper[4772]: I1122 10:59:33.955367 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f225972-7657-41e4-a018-cb464b1164e9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1f225972-7657-41e4-a018-cb464b1164e9\") " pod="openstack/glance-default-internal-api-0" Nov 22 10:59:33 crc kubenswrapper[4772]: I1122 10:59:33.956128 4772 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"1f225972-7657-41e4-a018-cb464b1164e9\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-internal-api-0" Nov 22 10:59:33 crc kubenswrapper[4772]: I1122 10:59:33.956152 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1f225972-7657-41e4-a018-cb464b1164e9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1f225972-7657-41e4-a018-cb464b1164e9\") " pod="openstack/glance-default-internal-api-0" Nov 22 10:59:33 crc kubenswrapper[4772]: I1122 10:59:33.956192 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f225972-7657-41e4-a018-cb464b1164e9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1f225972-7657-41e4-a018-cb464b1164e9\") " pod="openstack/glance-default-internal-api-0" Nov 22 10:59:33 crc kubenswrapper[4772]: I1122 10:59:33.956225 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f225972-7657-41e4-a018-cb464b1164e9-logs\") pod \"glance-default-internal-api-0\" (UID: \"1f225972-7657-41e4-a018-cb464b1164e9\") " pod="openstack/glance-default-internal-api-0" Nov 22 10:59:33 crc kubenswrapper[4772]: I1122 10:59:33.956742 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f225972-7657-41e4-a018-cb464b1164e9-logs\") pod \"glance-default-internal-api-0\" (UID: \"1f225972-7657-41e4-a018-cb464b1164e9\") " pod="openstack/glance-default-internal-api-0" Nov 22 10:59:33 crc kubenswrapper[4772]: I1122 10:59:33.956755 
4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1f225972-7657-41e4-a018-cb464b1164e9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1f225972-7657-41e4-a018-cb464b1164e9\") " pod="openstack/glance-default-internal-api-0" Nov 22 10:59:33 crc kubenswrapper[4772]: I1122 10:59:33.960173 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f225972-7657-41e4-a018-cb464b1164e9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1f225972-7657-41e4-a018-cb464b1164e9\") " pod="openstack/glance-default-internal-api-0" Nov 22 10:59:33 crc kubenswrapper[4772]: I1122 10:59:33.960033 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f225972-7657-41e4-a018-cb464b1164e9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1f225972-7657-41e4-a018-cb464b1164e9\") " pod="openstack/glance-default-internal-api-0" Nov 22 10:59:33 crc kubenswrapper[4772]: I1122 10:59:33.960637 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f225972-7657-41e4-a018-cb464b1164e9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1f225972-7657-41e4-a018-cb464b1164e9\") " pod="openstack/glance-default-internal-api-0" Nov 22 10:59:33 crc kubenswrapper[4772]: I1122 10:59:33.978918 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6xr5\" (UniqueName: \"kubernetes.io/projected/1f225972-7657-41e4-a018-cb464b1164e9-kube-api-access-b6xr5\") pod \"glance-default-internal-api-0\" (UID: \"1f225972-7657-41e4-a018-cb464b1164e9\") " pod="openstack/glance-default-internal-api-0" Nov 22 10:59:33 crc kubenswrapper[4772]: I1122 10:59:33.996632 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"1f225972-7657-41e4-a018-cb464b1164e9\") " pod="openstack/glance-default-internal-api-0" Nov 22 10:59:34 crc kubenswrapper[4772]: I1122 10:59:34.043590 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 10:59:34 crc kubenswrapper[4772]: E1122 10:59:34.197945 4772 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/sg-core:latest" Nov 22 10:59:34 crc kubenswrapper[4772]: E1122 10:59:34.198167 4772 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:sg-core,Image:quay.io/openstack-k8s-operators/sg-core:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:sg-core-conf-yaml,ReadOnly:false,MountPath:/etc/sg-core.conf.yaml,SubPath:sg-core.conf.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-25rml,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(d8c352bf-9815-42e1-8944-87a22e24c355): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 10:59:34 crc kubenswrapper[4772]: I1122 10:59:34.220107 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68dcc9cf6f-hnd54" Nov 22 10:59:34 crc kubenswrapper[4772]: I1122 10:59:34.364751 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m9swt\" (UniqueName: \"kubernetes.io/projected/93964ee5-e17a-4689-9fb7-a68e5542b91b-kube-api-access-m9swt\") pod \"93964ee5-e17a-4689-9fb7-a68e5542b91b\" (UID: \"93964ee5-e17a-4689-9fb7-a68e5542b91b\") " Nov 22 10:59:34 crc kubenswrapper[4772]: I1122 10:59:34.365206 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/93964ee5-e17a-4689-9fb7-a68e5542b91b-ovsdbserver-nb\") pod \"93964ee5-e17a-4689-9fb7-a68e5542b91b\" (UID: \"93964ee5-e17a-4689-9fb7-a68e5542b91b\") " Nov 22 10:59:34 crc kubenswrapper[4772]: I1122 10:59:34.366140 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93964ee5-e17a-4689-9fb7-a68e5542b91b-config\") pod \"93964ee5-e17a-4689-9fb7-a68e5542b91b\" (UID: \"93964ee5-e17a-4689-9fb7-a68e5542b91b\") " Nov 22 10:59:34 crc kubenswrapper[4772]: I1122 10:59:34.366183 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/93964ee5-e17a-4689-9fb7-a68e5542b91b-dns-svc\") pod \"93964ee5-e17a-4689-9fb7-a68e5542b91b\" (UID: \"93964ee5-e17a-4689-9fb7-a68e5542b91b\") " Nov 22 10:59:34 crc kubenswrapper[4772]: I1122 10:59:34.366288 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/93964ee5-e17a-4689-9fb7-a68e5542b91b-ovsdbserver-sb\") pod \"93964ee5-e17a-4689-9fb7-a68e5542b91b\" (UID: \"93964ee5-e17a-4689-9fb7-a68e5542b91b\") " Nov 22 10:59:34 crc kubenswrapper[4772]: I1122 10:59:34.379362 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93964ee5-e17a-4689-9fb7-a68e5542b91b-kube-api-access-m9swt" (OuterVolumeSpecName: "kube-api-access-m9swt") pod "93964ee5-e17a-4689-9fb7-a68e5542b91b" (UID: "93964ee5-e17a-4689-9fb7-a68e5542b91b"). InnerVolumeSpecName "kube-api-access-m9swt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:59:34 crc kubenswrapper[4772]: I1122 10:59:34.467987 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m9swt\" (UniqueName: \"kubernetes.io/projected/93964ee5-e17a-4689-9fb7-a68e5542b91b-kube-api-access-m9swt\") on node \"crc\" DevicePath \"\"" Nov 22 10:59:34 crc kubenswrapper[4772]: I1122 10:59:34.539441 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93964ee5-e17a-4689-9fb7-a68e5542b91b-config" (OuterVolumeSpecName: "config") pod "93964ee5-e17a-4689-9fb7-a68e5542b91b" (UID: "93964ee5-e17a-4689-9fb7-a68e5542b91b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:59:34 crc kubenswrapper[4772]: I1122 10:59:34.552673 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93964ee5-e17a-4689-9fb7-a68e5542b91b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "93964ee5-e17a-4689-9fb7-a68e5542b91b" (UID: "93964ee5-e17a-4689-9fb7-a68e5542b91b"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:59:34 crc kubenswrapper[4772]: I1122 10:59:34.571910 4772 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/93964ee5-e17a-4689-9fb7-a68e5542b91b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 10:59:34 crc kubenswrapper[4772]: I1122 10:59:34.572193 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93964ee5-e17a-4689-9fb7-a68e5542b91b-config\") on node \"crc\" DevicePath \"\"" Nov 22 10:59:34 crc kubenswrapper[4772]: I1122 10:59:34.576497 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93964ee5-e17a-4689-9fb7-a68e5542b91b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "93964ee5-e17a-4689-9fb7-a68e5542b91b" (UID: "93964ee5-e17a-4689-9fb7-a68e5542b91b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:59:34 crc kubenswrapper[4772]: I1122 10:59:34.581028 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 10:59:34 crc kubenswrapper[4772]: I1122 10:59:34.606137 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93964ee5-e17a-4689-9fb7-a68e5542b91b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "93964ee5-e17a-4689-9fb7-a68e5542b91b" (UID: "93964ee5-e17a-4689-9fb7-a68e5542b91b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:59:34 crc kubenswrapper[4772]: I1122 10:59:34.648635 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 10:59:34 crc kubenswrapper[4772]: I1122 10:59:34.674126 4772 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/93964ee5-e17a-4689-9fb7-a68e5542b91b-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 10:59:34 crc kubenswrapper[4772]: I1122 10:59:34.674469 4772 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/93964ee5-e17a-4689-9fb7-a68e5542b91b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 22 10:59:34 crc kubenswrapper[4772]: I1122 10:59:34.706454 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5cd786c776-rmj8k" event={"ID":"ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7","Type":"ContainerStarted","Data":"c3ff3bf5f8075eb82c99bb11cb292366102815d9d517ae626765713d727efaff"} Nov 22 10:59:34 crc kubenswrapper[4772]: I1122 10:59:34.706939 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5cd786c776-rmj8k" Nov 22 10:59:34 crc kubenswrapper[4772]: I1122 10:59:34.710937 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68dcc9cf6f-hnd54" event={"ID":"93964ee5-e17a-4689-9fb7-a68e5542b91b","Type":"ContainerDied","Data":"b085709ee7dc756f01b1789a4614ae09bb0fb11d007a4941c8aa270966f0843e"} Nov 22 10:59:34 crc kubenswrapper[4772]: I1122 10:59:34.710991 4772 scope.go:117] "RemoveContainer" containerID="5b382f99d6a32e22ab31c0364ec79664de10152f7be9b901a5494a01633fa59c" Nov 22 10:59:34 crc kubenswrapper[4772]: I1122 10:59:34.711080 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68dcc9cf6f-hnd54" Nov 22 10:59:34 crc kubenswrapper[4772]: I1122 10:59:34.710938 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/placement-5cd786c776-rmj8k" podUID="ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7" containerName="placement-log" probeResult="failure" output="Get \"https://10.217.0.144:8778/\": dial tcp 10.217.0.144:8778: connect: connection refused" Nov 22 10:59:34 crc kubenswrapper[4772]: I1122 10:59:34.742753 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-5cd786c776-rmj8k" podStartSLOduration=22.742731993 podStartE2EDuration="22.742731993s" podCreationTimestamp="2025-11-22 10:59:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:59:34.736651642 +0000 UTC m=+1294.976096146" watchObservedRunningTime="2025-11-22 10:59:34.742731993 +0000 UTC m=+1294.982176487" Nov 22 10:59:34 crc kubenswrapper[4772]: I1122 10:59:34.750400 4772 scope.go:117] "RemoveContainer" containerID="b2dc2b69f3a47028d0c5e56f8fbca0692cb197031cbdc10d58da149ae86b7d1d" Nov 22 10:59:34 crc kubenswrapper[4772]: I1122 10:59:34.771761 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-hnd54"] Nov 22 10:59:34 crc kubenswrapper[4772]: I1122 10:59:34.777686 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-hnd54"] Nov 22 10:59:34 crc kubenswrapper[4772]: I1122 10:59:34.826208 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cfdd58ff7-mgd8m"] Nov 22 10:59:34 crc kubenswrapper[4772]: I1122 10:59:34.953806 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 10:59:34 crc kubenswrapper[4772]: W1122 10:59:34.960159 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1cf1dedf_ff9a_4899_a17d_b939e47db57a.slice/crio-d568200d45a748aab242d9ce68b097a28afc7b3751a2f204225cc29a68ce76ca WatchSource:0}: Error finding container d568200d45a748aab242d9ce68b097a28afc7b3751a2f204225cc29a68ce76ca: Status 404 returned error can't find the container with id d568200d45a748aab242d9ce68b097a28afc7b3751a2f204225cc29a68ce76ca Nov 22 10:59:35 crc kubenswrapper[4772]: I1122 10:59:35.028431 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-p75jw"] Nov 22 10:59:35 crc kubenswrapper[4772]: I1122 10:59:35.041239 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 10:59:35 crc kubenswrapper[4772]: W1122 10:59:35.047797 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7afc372e_c3cc_4fea_b62d_e5bfc5750fa8.slice/crio-94e32536d1077af0cb3eebc7c239c91382a135737d51cc2b47d448c05feb7305 WatchSource:0}: Error finding container 94e32536d1077af0cb3eebc7c239c91382a135737d51cc2b47d448c05feb7305: Status 404 returned error can't find the container with id 94e32536d1077af0cb3eebc7c239c91382a135737d51cc2b47d448c05feb7305 Nov 22 10:59:35 crc kubenswrapper[4772]: W1122 10:59:35.053951 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f225972_7657_41e4_a018_cb464b1164e9.slice/crio-f8b84a34b31a46dc0f8208f4b9b5be29e5b5fd5966a457167630c77bd34f8421 
WatchSource:0}: Error finding container f8b84a34b31a46dc0f8208f4b9b5be29e5b5fd5966a457167630c77bd34f8421: Status 404 returned error can't find the container with id f8b84a34b31a46dc0f8208f4b9b5be29e5b5fd5966a457167630c77bd34f8421 Nov 22 10:59:35 crc kubenswrapper[4772]: I1122 10:59:35.438842 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93964ee5-e17a-4689-9fb7-a68e5542b91b" path="/var/lib/kubelet/pods/93964ee5-e17a-4689-9fb7-a68e5542b91b/volumes" Nov 22 10:59:35 crc kubenswrapper[4772]: I1122 10:59:35.734998 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cfdd58ff7-mgd8m" event={"ID":"020f49e7-c73f-460c-a068-75051e73cf90","Type":"ContainerStarted","Data":"8e6b764b39cbb94f171e4a3905646fc7d88ee4093584bde74bbc1f1676a19df8"} Nov 22 10:59:35 crc kubenswrapper[4772]: I1122 10:59:35.735602 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cfdd58ff7-mgd8m" event={"ID":"020f49e7-c73f-460c-a068-75051e73cf90","Type":"ContainerStarted","Data":"786aa308dc19bae115f095f03da57b3c3fcf2a13b679e54a1690daaa522f200c"} Nov 22 10:59:35 crc kubenswrapper[4772]: I1122 10:59:35.738681 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-cfdd58ff7-mgd8m" Nov 22 10:59:35 crc kubenswrapper[4772]: I1122 10:59:35.762760 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cfdd58ff7-mgd8m" podStartSLOduration=4.762737106 podStartE2EDuration="4.762737106s" podCreationTimestamp="2025-11-22 10:59:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:59:35.758806189 +0000 UTC m=+1295.998250713" watchObservedRunningTime="2025-11-22 10:59:35.762737106 +0000 UTC m=+1296.002181610" Nov 22 10:59:35 crc kubenswrapper[4772]: I1122 10:59:35.781002 4772 generic.go:334] "Generic (PLEG): container finished" podID="7afc372e-c3cc-4fea-b62d-e5bfc5750fa8" containerID="330a1b23775f525ecd702820d14ffbe3083f6bcfc64e60f6c172c8c756bbc84e" exitCode=0 Nov 22 10:59:35 crc kubenswrapper[4772]: I1122 10:59:35.781158 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-p75jw" event={"ID":"7afc372e-c3cc-4fea-b62d-e5bfc5750fa8","Type":"ContainerDied","Data":"330a1b23775f525ecd702820d14ffbe3083f6bcfc64e60f6c172c8c756bbc84e"} Nov 22 10:59:35 crc kubenswrapper[4772]: I1122 10:59:35.781193 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-p75jw" event={"ID":"7afc372e-c3cc-4fea-b62d-e5bfc5750fa8","Type":"ContainerStarted","Data":"94e32536d1077af0cb3eebc7c239c91382a135737d51cc2b47d448c05feb7305"} Nov 22 10:59:35 crc kubenswrapper[4772]: I1122 10:59:35.795487 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1cf1dedf-ff9a-4899-a17d-b939e47db57a","Type":"ContainerStarted","Data":"c8144595610c6e7b3c2c6a4b2e48d73387a3e13625bae7d825a0bf6009ab836a"} Nov 22 10:59:35 crc kubenswrapper[4772]: I1122 10:59:35.795538 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1cf1dedf-ff9a-4899-a17d-b939e47db57a","Type":"ContainerStarted","Data":"d568200d45a748aab242d9ce68b097a28afc7b3751a2f204225cc29a68ce76ca"} Nov 22 10:59:35 crc kubenswrapper[4772]: I1122 10:59:35.797566 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"1f225972-7657-41e4-a018-cb464b1164e9","Type":"ContainerStarted","Data":"f8b84a34b31a46dc0f8208f4b9b5be29e5b5fd5966a457167630c77bd34f8421"} Nov 22 10:59:35 crc kubenswrapper[4772]: I1122 10:59:35.797842 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5cd786c776-rmj8k" Nov 22 10:59:36 crc kubenswrapper[4772]: I1122 10:59:36.058334 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-68dcc9cf6f-hnd54" podUID="93964ee5-e17a-4689-9fb7-a68e5542b91b" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.135:5353: i/o timeout" Nov 22 10:59:36 crc kubenswrapper[4772]: I1122 10:59:36.812980 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-p75jw" event={"ID":"7afc372e-c3cc-4fea-b62d-e5bfc5750fa8","Type":"ContainerStarted","Data":"bf718a91e8851c0d5cd6e234662b181d080a6179f4c7f66af1736fa4b160ea6b"} Nov 22 10:59:36 crc kubenswrapper[4772]: I1122 10:59:36.814677 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8b5c85b87-p75jw" Nov 22 10:59:36 crc kubenswrapper[4772]: I1122 10:59:36.816658 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1cf1dedf-ff9a-4899-a17d-b939e47db57a","Type":"ContainerStarted","Data":"2293fb7ba191dd84b1db4053c7d405b66fa73eb7588fe6be142eab65cacce383"} Nov 22 10:59:36 crc kubenswrapper[4772]: I1122 10:59:36.816761 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="1cf1dedf-ff9a-4899-a17d-b939e47db57a" containerName="glance-log" containerID="cri-o://c8144595610c6e7b3c2c6a4b2e48d73387a3e13625bae7d825a0bf6009ab836a" gracePeriod=30 Nov 22 10:59:36 crc kubenswrapper[4772]: I1122 10:59:36.816969 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="1cf1dedf-ff9a-4899-a17d-b939e47db57a" containerName="glance-httpd" containerID="cri-o://2293fb7ba191dd84b1db4053c7d405b66fa73eb7588fe6be142eab65cacce383" gracePeriod=30 Nov 22 10:59:36 crc kubenswrapper[4772]: I1122 10:59:36.821591 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1f225972-7657-41e4-a018-cb464b1164e9","Type":"ContainerStarted","Data":"e4ca72a284735f20e63c4ba7e6d7fe626b29ac206adfeb65a675df331a59cb14"} Nov 22 10:59:36 crc kubenswrapper[4772]: I1122 10:59:36.839589 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8b5c85b87-p75jw" podStartSLOduration=4.839569431 podStartE2EDuration="4.839569431s" podCreationTimestamp="2025-11-22 10:59:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:59:36.836799022 +0000 UTC m=+1297.076243526" watchObservedRunningTime="2025-11-22 10:59:36.839569431 +0000 UTC m=+1297.079013925" Nov 22 10:59:36 crc kubenswrapper[4772]: I1122 10:59:36.858651 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.858624844 podStartE2EDuration="4.858624844s" podCreationTimestamp="2025-11-22 10:59:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:59:36.855514297 +0000 UTC m=+1297.094958791" 
watchObservedRunningTime="2025-11-22 10:59:36.858624844 +0000 UTC m=+1297.098069338" Nov 22 10:59:37 crc kubenswrapper[4772]: I1122 10:59:37.847672 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1f225972-7657-41e4-a018-cb464b1164e9","Type":"ContainerStarted","Data":"c8156d4d90894d7e03b812adef952ac712e8c034874975758156306fbf6d2972"} Nov 22 10:59:37 crc kubenswrapper[4772]: I1122 10:59:37.848500 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="1f225972-7657-41e4-a018-cb464b1164e9" containerName="glance-httpd" containerID="cri-o://c8156d4d90894d7e03b812adef952ac712e8c034874975758156306fbf6d2972" gracePeriod=30 Nov 22 10:59:37 crc kubenswrapper[4772]: I1122 10:59:37.850814 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="1f225972-7657-41e4-a018-cb464b1164e9" containerName="glance-log" containerID="cri-o://e4ca72a284735f20e63c4ba7e6d7fe626b29ac206adfeb65a675df331a59cb14" gracePeriod=30 Nov 22 10:59:37 crc kubenswrapper[4772]: I1122 10:59:37.854256 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1cf1dedf-ff9a-4899-a17d-b939e47db57a","Type":"ContainerDied","Data":"2293fb7ba191dd84b1db4053c7d405b66fa73eb7588fe6be142eab65cacce383"} Nov 22 10:59:37 crc kubenswrapper[4772]: I1122 10:59:37.856114 4772 generic.go:334] "Generic (PLEG): container finished" podID="1cf1dedf-ff9a-4899-a17d-b939e47db57a" containerID="2293fb7ba191dd84b1db4053c7d405b66fa73eb7588fe6be142eab65cacce383" exitCode=143 Nov 22 10:59:37 crc kubenswrapper[4772]: I1122 10:59:37.856174 4772 generic.go:334] "Generic (PLEG): container finished" podID="1cf1dedf-ff9a-4899-a17d-b939e47db57a" containerID="c8144595610c6e7b3c2c6a4b2e48d73387a3e13625bae7d825a0bf6009ab836a" exitCode=143 Nov 22 10:59:37 crc kubenswrapper[4772]: I1122 10:59:37.856458 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1cf1dedf-ff9a-4899-a17d-b939e47db57a","Type":"ContainerDied","Data":"c8144595610c6e7b3c2c6a4b2e48d73387a3e13625bae7d825a0bf6009ab836a"} Nov 22 10:59:37 crc kubenswrapper[4772]: I1122 10:59:37.866842 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.866825406 podStartE2EDuration="5.866825406s" podCreationTimestamp="2025-11-22 10:59:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:59:37.865461372 +0000 UTC m=+1298.104905866" watchObservedRunningTime="2025-11-22 10:59:37.866825406 +0000 UTC m=+1298.106269900" Nov 22 10:59:38 crc kubenswrapper[4772]: I1122 10:59:38.288239 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5cd786c776-rmj8k" Nov 22 10:59:38 crc kubenswrapper[4772]: I1122 10:59:38.870540 4772 generic.go:334] "Generic (PLEG): container finished" podID="1f225972-7657-41e4-a018-cb464b1164e9" containerID="c8156d4d90894d7e03b812adef952ac712e8c034874975758156306fbf6d2972" exitCode=0 Nov 22 10:59:38 crc kubenswrapper[4772]: I1122 10:59:38.870825 4772 generic.go:334] "Generic (PLEG): container finished" podID="1f225972-7657-41e4-a018-cb464b1164e9" containerID="e4ca72a284735f20e63c4ba7e6d7fe626b29ac206adfeb65a675df331a59cb14" exitCode=143 Nov 22 10:59:38 crc 
kubenswrapper[4772]: I1122 10:59:38.870686 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1f225972-7657-41e4-a018-cb464b1164e9","Type":"ContainerDied","Data":"c8156d4d90894d7e03b812adef952ac712e8c034874975758156306fbf6d2972"} Nov 22 10:59:38 crc kubenswrapper[4772]: I1122 10:59:38.871143 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1f225972-7657-41e4-a018-cb464b1164e9","Type":"ContainerDied","Data":"e4ca72a284735f20e63c4ba7e6d7fe626b29ac206adfeb65a675df331a59cb14"} Nov 22 10:59:42 crc kubenswrapper[4772]: I1122 10:59:42.622456 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8b5c85b87-p75jw" Nov 22 10:59:42 crc kubenswrapper[4772]: I1122 10:59:42.682913 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-w4qcg"] Nov 22 10:59:42 crc kubenswrapper[4772]: I1122 10:59:42.689305 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-76fcf4b695-w4qcg" podUID="16102b8a-b1aa-43a6-b372-a042584f7279" containerName="dnsmasq-dns" containerID="cri-o://419852505e37afcd042d6210f307969de385e47d7f236ec304780e99ef2ce87e" gracePeriod=10 Nov 22 10:59:42 crc kubenswrapper[4772]: I1122 10:59:42.941576 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5cd786c776-rmj8k" Nov 22 10:59:43 crc kubenswrapper[4772]: I1122 10:59:43.119963 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-76fcf4b695-w4qcg" podUID="16102b8a-b1aa-43a6-b372-a042584f7279" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.143:5353: connect: connection refused" Nov 22 10:59:43 crc kubenswrapper[4772]: I1122 10:59:43.912670 4772 generic.go:334] "Generic (PLEG): container finished" podID="16102b8a-b1aa-43a6-b372-a042584f7279" containerID="419852505e37afcd042d6210f307969de385e47d7f236ec304780e99ef2ce87e" exitCode=0 Nov 22 10:59:43 crc kubenswrapper[4772]: I1122 10:59:43.912896 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76fcf4b695-w4qcg" event={"ID":"16102b8a-b1aa-43a6-b372-a042584f7279","Type":"ContainerDied","Data":"419852505e37afcd042d6210f307969de385e47d7f236ec304780e99ef2ce87e"} Nov 22 10:59:43 crc kubenswrapper[4772]: I1122 10:59:43.915204 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1f225972-7657-41e4-a018-cb464b1164e9","Type":"ContainerDied","Data":"f8b84a34b31a46dc0f8208f4b9b5be29e5b5fd5966a457167630c77bd34f8421"} Nov 22 10:59:43 crc kubenswrapper[4772]: I1122 10:59:43.915237 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8b84a34b31a46dc0f8208f4b9b5be29e5b5fd5966a457167630c77bd34f8421" Nov 22 10:59:43 crc kubenswrapper[4772]: I1122 10:59:43.919300 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 10:59:44 crc kubenswrapper[4772]: I1122 10:59:44.066352 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"1f225972-7657-41e4-a018-cb464b1164e9\" (UID: \"1f225972-7657-41e4-a018-cb464b1164e9\") " Nov 22 10:59:44 crc kubenswrapper[4772]: I1122 10:59:44.067346 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6xr5\" (UniqueName: \"kubernetes.io/projected/1f225972-7657-41e4-a018-cb464b1164e9-kube-api-access-b6xr5\") pod \"1f225972-7657-41e4-a018-cb464b1164e9\" (UID: \"1f225972-7657-41e4-a018-cb464b1164e9\") " Nov 22 10:59:44 crc kubenswrapper[4772]: I1122 10:59:44.067439 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1f225972-7657-41e4-a018-cb464b1164e9-httpd-run\") pod \"1f225972-7657-41e4-a018-cb464b1164e9\" (UID: \"1f225972-7657-41e4-a018-cb464b1164e9\") " Nov 22 10:59:44 crc kubenswrapper[4772]: I1122 10:59:44.067482 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f225972-7657-41e4-a018-cb464b1164e9-combined-ca-bundle\") pod \"1f225972-7657-41e4-a018-cb464b1164e9\" (UID: \"1f225972-7657-41e4-a018-cb464b1164e9\") " Nov 22 10:59:44 crc kubenswrapper[4772]: I1122 10:59:44.067517 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f225972-7657-41e4-a018-cb464b1164e9-config-data\") pod \"1f225972-7657-41e4-a018-cb464b1164e9\" (UID: \"1f225972-7657-41e4-a018-cb464b1164e9\") " Nov 22 10:59:44 crc kubenswrapper[4772]: I1122 10:59:44.067851 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f225972-7657-41e4-a018-cb464b1164e9-logs\") pod \"1f225972-7657-41e4-a018-cb464b1164e9\" (UID: \"1f225972-7657-41e4-a018-cb464b1164e9\") " Nov 22 10:59:44 crc kubenswrapper[4772]: I1122 10:59:44.067890 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f225972-7657-41e4-a018-cb464b1164e9-scripts\") pod \"1f225972-7657-41e4-a018-cb464b1164e9\" (UID: \"1f225972-7657-41e4-a018-cb464b1164e9\") " Nov 22 10:59:44 crc kubenswrapper[4772]: I1122 10:59:44.067844 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f225972-7657-41e4-a018-cb464b1164e9-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "1f225972-7657-41e4-a018-cb464b1164e9" (UID: "1f225972-7657-41e4-a018-cb464b1164e9"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 10:59:44 crc kubenswrapper[4772]: I1122 10:59:44.068166 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f225972-7657-41e4-a018-cb464b1164e9-logs" (OuterVolumeSpecName: "logs") pod "1f225972-7657-41e4-a018-cb464b1164e9" (UID: "1f225972-7657-41e4-a018-cb464b1164e9"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 10:59:44 crc kubenswrapper[4772]: I1122 10:59:44.068462 4772 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1f225972-7657-41e4-a018-cb464b1164e9-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 22 10:59:44 crc kubenswrapper[4772]: I1122 10:59:44.068489 4772 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f225972-7657-41e4-a018-cb464b1164e9-logs\") on node \"crc\" DevicePath \"\"" Nov 22 10:59:44 crc kubenswrapper[4772]: I1122 10:59:44.072506 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f225972-7657-41e4-a018-cb464b1164e9-scripts" (OuterVolumeSpecName: "scripts") pod "1f225972-7657-41e4-a018-cb464b1164e9" (UID: "1f225972-7657-41e4-a018-cb464b1164e9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:59:44 crc kubenswrapper[4772]: I1122 10:59:44.073631 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f225972-7657-41e4-a018-cb464b1164e9-kube-api-access-b6xr5" (OuterVolumeSpecName: "kube-api-access-b6xr5") pod "1f225972-7657-41e4-a018-cb464b1164e9" (UID: "1f225972-7657-41e4-a018-cb464b1164e9"). InnerVolumeSpecName "kube-api-access-b6xr5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:59:44 crc kubenswrapper[4772]: I1122 10:59:44.073638 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "1f225972-7657-41e4-a018-cb464b1164e9" (UID: "1f225972-7657-41e4-a018-cb464b1164e9"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 22 10:59:44 crc kubenswrapper[4772]: I1122 10:59:44.110852 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f225972-7657-41e4-a018-cb464b1164e9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f225972-7657-41e4-a018-cb464b1164e9" (UID: "1f225972-7657-41e4-a018-cb464b1164e9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:59:44 crc kubenswrapper[4772]: I1122 10:59:44.132368 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f225972-7657-41e4-a018-cb464b1164e9-config-data" (OuterVolumeSpecName: "config-data") pod "1f225972-7657-41e4-a018-cb464b1164e9" (UID: "1f225972-7657-41e4-a018-cb464b1164e9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:59:44 crc kubenswrapper[4772]: I1122 10:59:44.169793 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f225972-7657-41e4-a018-cb464b1164e9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 10:59:44 crc kubenswrapper[4772]: I1122 10:59:44.169828 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f225972-7657-41e4-a018-cb464b1164e9-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 10:59:44 crc kubenswrapper[4772]: I1122 10:59:44.169846 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f225972-7657-41e4-a018-cb464b1164e9-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 10:59:44 crc kubenswrapper[4772]: I1122 10:59:44.169878 4772 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Nov 22 10:59:44 crc kubenswrapper[4772]: I1122 10:59:44.169889 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6xr5\" (UniqueName: \"kubernetes.io/projected/1f225972-7657-41e4-a018-cb464b1164e9-kube-api-access-b6xr5\") on node \"crc\" DevicePath \"\"" Nov 22 10:59:44 crc kubenswrapper[4772]: I1122 10:59:44.196536 4772 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Nov 22 10:59:44 crc kubenswrapper[4772]: I1122 10:59:44.272079 4772 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Nov 22 10:59:44 crc kubenswrapper[4772]: I1122 10:59:44.724755 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 22 10:59:44 crc kubenswrapper[4772]: I1122 10:59:44.889851 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"1cf1dedf-ff9a-4899-a17d-b939e47db57a\" (UID: \"1cf1dedf-ff9a-4899-a17d-b939e47db57a\") " Nov 22 10:59:44 crc kubenswrapper[4772]: I1122 10:59:44.889991 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7fvj7\" (UniqueName: \"kubernetes.io/projected/1cf1dedf-ff9a-4899-a17d-b939e47db57a-kube-api-access-7fvj7\") pod \"1cf1dedf-ff9a-4899-a17d-b939e47db57a\" (UID: \"1cf1dedf-ff9a-4899-a17d-b939e47db57a\") " Nov 22 10:59:44 crc kubenswrapper[4772]: I1122 10:59:44.890056 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cf1dedf-ff9a-4899-a17d-b939e47db57a-combined-ca-bundle\") pod \"1cf1dedf-ff9a-4899-a17d-b939e47db57a\" (UID: \"1cf1dedf-ff9a-4899-a17d-b939e47db57a\") " Nov 22 10:59:44 crc kubenswrapper[4772]: I1122 10:59:44.890088 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cf1dedf-ff9a-4899-a17d-b939e47db57a-config-data\") pod \"1cf1dedf-ff9a-4899-a17d-b939e47db57a\" (UID: \"1cf1dedf-ff9a-4899-a17d-b939e47db57a\") " Nov 22 10:59:44 crc kubenswrapper[4772]: I1122 10:59:44.890165 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1cf1dedf-ff9a-4899-a17d-b939e47db57a-logs\") pod \"1cf1dedf-ff9a-4899-a17d-b939e47db57a\" (UID: \"1cf1dedf-ff9a-4899-a17d-b939e47db57a\") " Nov 22 10:59:44 crc kubenswrapper[4772]: I1122 10:59:44.890224 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1cf1dedf-ff9a-4899-a17d-b939e47db57a-httpd-run\") pod \"1cf1dedf-ff9a-4899-a17d-b939e47db57a\" (UID: \"1cf1dedf-ff9a-4899-a17d-b939e47db57a\") " Nov 22 10:59:44 crc kubenswrapper[4772]: I1122 10:59:44.890264 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1cf1dedf-ff9a-4899-a17d-b939e47db57a-scripts\") pod \"1cf1dedf-ff9a-4899-a17d-b939e47db57a\" (UID: \"1cf1dedf-ff9a-4899-a17d-b939e47db57a\") " Nov 22 10:59:44 crc kubenswrapper[4772]: I1122 10:59:44.890702 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1cf1dedf-ff9a-4899-a17d-b939e47db57a-logs" (OuterVolumeSpecName: "logs") pod "1cf1dedf-ff9a-4899-a17d-b939e47db57a" (UID: "1cf1dedf-ff9a-4899-a17d-b939e47db57a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 10:59:44 crc kubenswrapper[4772]: I1122 10:59:44.891538 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1cf1dedf-ff9a-4899-a17d-b939e47db57a-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "1cf1dedf-ff9a-4899-a17d-b939e47db57a" (UID: "1cf1dedf-ff9a-4899-a17d-b939e47db57a"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 10:59:44 crc kubenswrapper[4772]: I1122 10:59:44.894867 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cf1dedf-ff9a-4899-a17d-b939e47db57a-scripts" (OuterVolumeSpecName: "scripts") pod "1cf1dedf-ff9a-4899-a17d-b939e47db57a" (UID: "1cf1dedf-ff9a-4899-a17d-b939e47db57a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:59:44 crc kubenswrapper[4772]: I1122 10:59:44.896265 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "1cf1dedf-ff9a-4899-a17d-b939e47db57a" (UID: "1cf1dedf-ff9a-4899-a17d-b939e47db57a"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 22 10:59:44 crc kubenswrapper[4772]: I1122 10:59:44.911010 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cf1dedf-ff9a-4899-a17d-b939e47db57a-kube-api-access-7fvj7" (OuterVolumeSpecName: "kube-api-access-7fvj7") pod "1cf1dedf-ff9a-4899-a17d-b939e47db57a" (UID: "1cf1dedf-ff9a-4899-a17d-b939e47db57a"). InnerVolumeSpecName "kube-api-access-7fvj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:59:44 crc kubenswrapper[4772]: I1122 10:59:44.932661 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 10:59:44 crc kubenswrapper[4772]: I1122 10:59:44.932666 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 22 10:59:44 crc kubenswrapper[4772]: I1122 10:59:44.934454 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1cf1dedf-ff9a-4899-a17d-b939e47db57a","Type":"ContainerDied","Data":"d568200d45a748aab242d9ce68b097a28afc7b3751a2f204225cc29a68ce76ca"} Nov 22 10:59:44 crc kubenswrapper[4772]: I1122 10:59:44.934596 4772 scope.go:117] "RemoveContainer" containerID="2293fb7ba191dd84b1db4053c7d405b66fa73eb7588fe6be142eab65cacce383" Nov 22 10:59:44 crc kubenswrapper[4772]: I1122 10:59:44.941394 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cf1dedf-ff9a-4899-a17d-b939e47db57a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1cf1dedf-ff9a-4899-a17d-b939e47db57a" (UID: "1cf1dedf-ff9a-4899-a17d-b939e47db57a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:59:44 crc kubenswrapper[4772]: I1122 10:59:44.979418 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cf1dedf-ff9a-4899-a17d-b939e47db57a-config-data" (OuterVolumeSpecName: "config-data") pod "1cf1dedf-ff9a-4899-a17d-b939e47db57a" (UID: "1cf1dedf-ff9a-4899-a17d-b939e47db57a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:59:44 crc kubenswrapper[4772]: I1122 10:59:44.992672 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7fvj7\" (UniqueName: \"kubernetes.io/projected/1cf1dedf-ff9a-4899-a17d-b939e47db57a-kube-api-access-7fvj7\") on node \"crc\" DevicePath \"\"" Nov 22 10:59:44 crc kubenswrapper[4772]: I1122 10:59:44.992706 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cf1dedf-ff9a-4899-a17d-b939e47db57a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 10:59:44 crc kubenswrapper[4772]: I1122 10:59:44.992716 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cf1dedf-ff9a-4899-a17d-b939e47db57a-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 10:59:44 crc kubenswrapper[4772]: I1122 10:59:44.992726 4772 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1cf1dedf-ff9a-4899-a17d-b939e47db57a-logs\") on node \"crc\" DevicePath \"\"" Nov 22 10:59:44 crc kubenswrapper[4772]: I1122 10:59:44.992734 4772 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1cf1dedf-ff9a-4899-a17d-b939e47db57a-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 22 10:59:44 crc kubenswrapper[4772]: I1122 10:59:44.992741 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1cf1dedf-ff9a-4899-a17d-b939e47db57a-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 10:59:44 crc kubenswrapper[4772]: I1122 10:59:44.992776 4772 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.018320 4772 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.080290 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76fcf4b695-w4qcg" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.085216 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.096320 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.097001 4772 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.101544 4772 scope.go:117] "RemoveContainer" containerID="c8144595610c6e7b3c2c6a4b2e48d73387a3e13625bae7d825a0bf6009ab836a" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.124354 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 10:59:45 crc kubenswrapper[4772]: E1122 10:59:45.125031 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cf1dedf-ff9a-4899-a17d-b939e47db57a" containerName="glance-log" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.125073 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cf1dedf-ff9a-4899-a17d-b939e47db57a" containerName="glance-log" Nov 22 10:59:45 crc kubenswrapper[4772]: E1122 10:59:45.125101 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cf1dedf-ff9a-4899-a17d-b939e47db57a" containerName="glance-httpd" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.125111 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cf1dedf-ff9a-4899-a17d-b939e47db57a" containerName="glance-httpd" Nov 22 10:59:45 crc kubenswrapper[4772]: E1122 10:59:45.125124 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16102b8a-b1aa-43a6-b372-a042584f7279" containerName="init" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.125133 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="16102b8a-b1aa-43a6-b372-a042584f7279" containerName="init" Nov 22 10:59:45 crc kubenswrapper[4772]: E1122 10:59:45.125147 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f225972-7657-41e4-a018-cb464b1164e9" containerName="glance-log" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.125157 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f225972-7657-41e4-a018-cb464b1164e9" containerName="glance-log" Nov 22 10:59:45 crc kubenswrapper[4772]: E1122 10:59:45.125185 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16102b8a-b1aa-43a6-b372-a042584f7279" containerName="dnsmasq-dns" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.125194 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="16102b8a-b1aa-43a6-b372-a042584f7279" containerName="dnsmasq-dns" Nov 22 10:59:45 crc kubenswrapper[4772]: E1122 10:59:45.125223 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93964ee5-e17a-4689-9fb7-a68e5542b91b" containerName="dnsmasq-dns" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.125232 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="93964ee5-e17a-4689-9fb7-a68e5542b91b" containerName="dnsmasq-dns" Nov 22 10:59:45 crc kubenswrapper[4772]: E1122 10:59:45.125255 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f225972-7657-41e4-a018-cb464b1164e9" containerName="glance-httpd" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 
10:59:45.125265 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f225972-7657-41e4-a018-cb464b1164e9" containerName="glance-httpd" Nov 22 10:59:45 crc kubenswrapper[4772]: E1122 10:59:45.125284 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93964ee5-e17a-4689-9fb7-a68e5542b91b" containerName="init" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.125293 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="93964ee5-e17a-4689-9fb7-a68e5542b91b" containerName="init" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.125530 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cf1dedf-ff9a-4899-a17d-b939e47db57a" containerName="glance-log" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.125555 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cf1dedf-ff9a-4899-a17d-b939e47db57a" containerName="glance-httpd" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.125573 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f225972-7657-41e4-a018-cb464b1164e9" containerName="glance-log" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.125585 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f225972-7657-41e4-a018-cb464b1164e9" containerName="glance-httpd" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.125605 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="93964ee5-e17a-4689-9fb7-a68e5542b91b" containerName="dnsmasq-dns" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.125620 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="16102b8a-b1aa-43a6-b372-a042584f7279" containerName="dnsmasq-dns" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.129464 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.138024 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.138839 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.159894 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.198591 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16102b8a-b1aa-43a6-b372-a042584f7279-dns-svc\") pod \"16102b8a-b1aa-43a6-b372-a042584f7279\" (UID: \"16102b8a-b1aa-43a6-b372-a042584f7279\") " Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.198768 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/16102b8a-b1aa-43a6-b372-a042584f7279-dns-swift-storage-0\") pod \"16102b8a-b1aa-43a6-b372-a042584f7279\" (UID: \"16102b8a-b1aa-43a6-b372-a042584f7279\") " Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.198832 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/16102b8a-b1aa-43a6-b372-a042584f7279-ovsdbserver-nb\") pod \"16102b8a-b1aa-43a6-b372-a042584f7279\" (UID: \"16102b8a-b1aa-43a6-b372-a042584f7279\") " Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.198859 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/16102b8a-b1aa-43a6-b372-a042584f7279-ovsdbserver-sb\") pod \"16102b8a-b1aa-43a6-b372-a042584f7279\" (UID: \"16102b8a-b1aa-43a6-b372-a042584f7279\") " Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.198878 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16102b8a-b1aa-43a6-b372-a042584f7279-config\") pod \"16102b8a-b1aa-43a6-b372-a042584f7279\" (UID: \"16102b8a-b1aa-43a6-b372-a042584f7279\") " Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.198970 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8h6cq\" (UniqueName: \"kubernetes.io/projected/16102b8a-b1aa-43a6-b372-a042584f7279-kube-api-access-8h6cq\") pod \"16102b8a-b1aa-43a6-b372-a042584f7279\" (UID: \"16102b8a-b1aa-43a6-b372-a042584f7279\") " Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.203808 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16102b8a-b1aa-43a6-b372-a042584f7279-kube-api-access-8h6cq" (OuterVolumeSpecName: "kube-api-access-8h6cq") pod "16102b8a-b1aa-43a6-b372-a042584f7279" (UID: "16102b8a-b1aa-43a6-b372-a042584f7279"). InnerVolumeSpecName "kube-api-access-8h6cq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.249255 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16102b8a-b1aa-43a6-b372-a042584f7279-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "16102b8a-b1aa-43a6-b372-a042584f7279" (UID: "16102b8a-b1aa-43a6-b372-a042584f7279"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.249945 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16102b8a-b1aa-43a6-b372-a042584f7279-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "16102b8a-b1aa-43a6-b372-a042584f7279" (UID: "16102b8a-b1aa-43a6-b372-a042584f7279"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.255397 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16102b8a-b1aa-43a6-b372-a042584f7279-config" (OuterVolumeSpecName: "config") pod "16102b8a-b1aa-43a6-b372-a042584f7279" (UID: "16102b8a-b1aa-43a6-b372-a042584f7279"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.265381 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16102b8a-b1aa-43a6-b372-a042584f7279-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "16102b8a-b1aa-43a6-b372-a042584f7279" (UID: "16102b8a-b1aa-43a6-b372-a042584f7279"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.266992 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16102b8a-b1aa-43a6-b372-a042584f7279-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "16102b8a-b1aa-43a6-b372-a042584f7279" (UID: "16102b8a-b1aa-43a6-b372-a042584f7279"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.286350 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.294437 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.301343 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06f66afb-564e-442a-b833-2d6db747986f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"06f66afb-564e-442a-b833-2d6db747986f\") " pod="openstack/glance-default-internal-api-0" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.301422 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06f66afb-564e-442a-b833-2d6db747986f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"06f66afb-564e-442a-b833-2d6db747986f\") " pod="openstack/glance-default-internal-api-0" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.301466 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06f66afb-564e-442a-b833-2d6db747986f-logs\") pod \"glance-default-internal-api-0\" (UID: \"06f66afb-564e-442a-b833-2d6db747986f\") " pod="openstack/glance-default-internal-api-0" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.301491 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"06f66afb-564e-442a-b833-2d6db747986f\") " pod="openstack/glance-default-internal-api-0" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.301517 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9tx5\" (UniqueName: \"kubernetes.io/projected/06f66afb-564e-442a-b833-2d6db747986f-kube-api-access-l9tx5\") pod \"glance-default-internal-api-0\" (UID: \"06f66afb-564e-442a-b833-2d6db747986f\") " pod="openstack/glance-default-internal-api-0" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.301568 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/06f66afb-564e-442a-b833-2d6db747986f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"06f66afb-564e-442a-b833-2d6db747986f\") " pod="openstack/glance-default-internal-api-0" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.301588 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/06f66afb-564e-442a-b833-2d6db747986f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"06f66afb-564e-442a-b833-2d6db747986f\") " pod="openstack/glance-default-internal-api-0" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.301642 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06f66afb-564e-442a-b833-2d6db747986f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"06f66afb-564e-442a-b833-2d6db747986f\") " pod="openstack/glance-default-internal-api-0" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.301759 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8h6cq\" (UniqueName: \"kubernetes.io/projected/16102b8a-b1aa-43a6-b372-a042584f7279-kube-api-access-8h6cq\") on node \"crc\" DevicePath \"\"" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.301773 4772 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16102b8a-b1aa-43a6-b372-a042584f7279-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.301785 4772 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/16102b8a-b1aa-43a6-b372-a042584f7279-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.301795 4772 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/16102b8a-b1aa-43a6-b372-a042584f7279-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.301805 4772 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/16102b8a-b1aa-43a6-b372-a042584f7279-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.301816 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16102b8a-b1aa-43a6-b372-a042584f7279-config\") on node \"crc\" DevicePath \"\"" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.310196 4772 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/glance-default-external-api-0"] Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.312148 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.318551 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.323702 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.325581 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.403407 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06f66afb-564e-442a-b833-2d6db747986f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"06f66afb-564e-442a-b833-2d6db747986f\") " pod="openstack/glance-default-internal-api-0" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.403479 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06f66afb-564e-442a-b833-2d6db747986f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"06f66afb-564e-442a-b833-2d6db747986f\") " pod="openstack/glance-default-internal-api-0" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.403514 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/26cdd8cb-1271-4a41-83a7-782efd9a9aa7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"26cdd8cb-1271-4a41-83a7-782efd9a9aa7\") " pod="openstack/glance-default-external-api-0" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.403537 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06f66afb-564e-442a-b833-2d6db747986f-logs\") pod \"glance-default-internal-api-0\" (UID: \"06f66afb-564e-442a-b833-2d6db747986f\") " pod="openstack/glance-default-internal-api-0" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.403555 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"06f66afb-564e-442a-b833-2d6db747986f\") " pod="openstack/glance-default-internal-api-0" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.403574 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26cdd8cb-1271-4a41-83a7-782efd9a9aa7-config-data\") pod \"glance-default-external-api-0\" (UID: \"26cdd8cb-1271-4a41-83a7-782efd9a9aa7\") " pod="openstack/glance-default-external-api-0" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.403589 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/26cdd8cb-1271-4a41-83a7-782efd9a9aa7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"26cdd8cb-1271-4a41-83a7-782efd9a9aa7\") " pod="openstack/glance-default-external-api-0" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.403615 4772 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-l9tx5\" (UniqueName: \"kubernetes.io/projected/06f66afb-564e-442a-b833-2d6db747986f-kube-api-access-l9tx5\") pod \"glance-default-internal-api-0\" (UID: \"06f66afb-564e-442a-b833-2d6db747986f\") " pod="openstack/glance-default-internal-api-0" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.403649 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/06f66afb-564e-442a-b833-2d6db747986f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"06f66afb-564e-442a-b833-2d6db747986f\") " pod="openstack/glance-default-internal-api-0" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.403663 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26cdd8cb-1271-4a41-83a7-782efd9a9aa7-logs\") pod \"glance-default-external-api-0\" (UID: \"26cdd8cb-1271-4a41-83a7-782efd9a9aa7\") " pod="openstack/glance-default-external-api-0" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.403679 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/06f66afb-564e-442a-b833-2d6db747986f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"06f66afb-564e-442a-b833-2d6db747986f\") " pod="openstack/glance-default-internal-api-0" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.403697 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"26cdd8cb-1271-4a41-83a7-782efd9a9aa7\") " pod="openstack/glance-default-external-api-0" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.403732 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06f66afb-564e-442a-b833-2d6db747986f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"06f66afb-564e-442a-b833-2d6db747986f\") " pod="openstack/glance-default-internal-api-0" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.403805 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szcz2\" (UniqueName: \"kubernetes.io/projected/26cdd8cb-1271-4a41-83a7-782efd9a9aa7-kube-api-access-szcz2\") pod \"glance-default-external-api-0\" (UID: \"26cdd8cb-1271-4a41-83a7-782efd9a9aa7\") " pod="openstack/glance-default-external-api-0" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.403822 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26cdd8cb-1271-4a41-83a7-782efd9a9aa7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"26cdd8cb-1271-4a41-83a7-782efd9a9aa7\") " pod="openstack/glance-default-external-api-0" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.403846 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26cdd8cb-1271-4a41-83a7-782efd9a9aa7-scripts\") pod \"glance-default-external-api-0\" (UID: \"26cdd8cb-1271-4a41-83a7-782efd9a9aa7\") " pod="openstack/glance-default-external-api-0" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.405011 
4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/06f66afb-564e-442a-b833-2d6db747986f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"06f66afb-564e-442a-b833-2d6db747986f\") " pod="openstack/glance-default-internal-api-0" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.405079 4772 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"06f66afb-564e-442a-b833-2d6db747986f\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-internal-api-0" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.405095 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06f66afb-564e-442a-b833-2d6db747986f-logs\") pod \"glance-default-internal-api-0\" (UID: \"06f66afb-564e-442a-b833-2d6db747986f\") " pod="openstack/glance-default-internal-api-0" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.408725 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/06f66afb-564e-442a-b833-2d6db747986f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"06f66afb-564e-442a-b833-2d6db747986f\") " pod="openstack/glance-default-internal-api-0" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.409237 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06f66afb-564e-442a-b833-2d6db747986f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"06f66afb-564e-442a-b833-2d6db747986f\") " pod="openstack/glance-default-internal-api-0" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.409781 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06f66afb-564e-442a-b833-2d6db747986f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"06f66afb-564e-442a-b833-2d6db747986f\") " pod="openstack/glance-default-internal-api-0" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.410250 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06f66afb-564e-442a-b833-2d6db747986f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"06f66afb-564e-442a-b833-2d6db747986f\") " pod="openstack/glance-default-internal-api-0" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.423034 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9tx5\" (UniqueName: \"kubernetes.io/projected/06f66afb-564e-442a-b833-2d6db747986f-kube-api-access-l9tx5\") pod \"glance-default-internal-api-0\" (UID: \"06f66afb-564e-442a-b833-2d6db747986f\") " pod="openstack/glance-default-internal-api-0" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.423237 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1cf1dedf-ff9a-4899-a17d-b939e47db57a" path="/var/lib/kubelet/pods/1cf1dedf-ff9a-4899-a17d-b939e47db57a/volumes" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.423871 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f225972-7657-41e4-a018-cb464b1164e9" path="/var/lib/kubelet/pods/1f225972-7657-41e4-a018-cb464b1164e9/volumes" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.431132 
4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"06f66afb-564e-442a-b833-2d6db747986f\") " pod="openstack/glance-default-internal-api-0" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.455320 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.505076 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26cdd8cb-1271-4a41-83a7-782efd9a9aa7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"26cdd8cb-1271-4a41-83a7-782efd9a9aa7\") " pod="openstack/glance-default-external-api-0" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.505131 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26cdd8cb-1271-4a41-83a7-782efd9a9aa7-scripts\") pod \"glance-default-external-api-0\" (UID: \"26cdd8cb-1271-4a41-83a7-782efd9a9aa7\") " pod="openstack/glance-default-external-api-0" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.505192 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/26cdd8cb-1271-4a41-83a7-782efd9a9aa7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"26cdd8cb-1271-4a41-83a7-782efd9a9aa7\") " pod="openstack/glance-default-external-api-0" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.505229 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26cdd8cb-1271-4a41-83a7-782efd9a9aa7-config-data\") pod \"glance-default-external-api-0\" (UID: \"26cdd8cb-1271-4a41-83a7-782efd9a9aa7\") " pod="openstack/glance-default-external-api-0" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.505247 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/26cdd8cb-1271-4a41-83a7-782efd9a9aa7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"26cdd8cb-1271-4a41-83a7-782efd9a9aa7\") " pod="openstack/glance-default-external-api-0" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.505287 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26cdd8cb-1271-4a41-83a7-782efd9a9aa7-logs\") pod \"glance-default-external-api-0\" (UID: \"26cdd8cb-1271-4a41-83a7-782efd9a9aa7\") " pod="openstack/glance-default-external-api-0" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.505308 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"26cdd8cb-1271-4a41-83a7-782efd9a9aa7\") " pod="openstack/glance-default-external-api-0" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.505380 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szcz2\" (UniqueName: \"kubernetes.io/projected/26cdd8cb-1271-4a41-83a7-782efd9a9aa7-kube-api-access-szcz2\") pod \"glance-default-external-api-0\" (UID: \"26cdd8cb-1271-4a41-83a7-782efd9a9aa7\") " pod="openstack/glance-default-external-api-0" Nov 
22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.505784 4772 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"26cdd8cb-1271-4a41-83a7-782efd9a9aa7\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.506060 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/26cdd8cb-1271-4a41-83a7-782efd9a9aa7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"26cdd8cb-1271-4a41-83a7-782efd9a9aa7\") " pod="openstack/glance-default-external-api-0" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.506240 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26cdd8cb-1271-4a41-83a7-782efd9a9aa7-logs\") pod \"glance-default-external-api-0\" (UID: \"26cdd8cb-1271-4a41-83a7-782efd9a9aa7\") " pod="openstack/glance-default-external-api-0" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.509917 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26cdd8cb-1271-4a41-83a7-782efd9a9aa7-scripts\") pod \"glance-default-external-api-0\" (UID: \"26cdd8cb-1271-4a41-83a7-782efd9a9aa7\") " pod="openstack/glance-default-external-api-0" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.510836 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26cdd8cb-1271-4a41-83a7-782efd9a9aa7-config-data\") pod \"glance-default-external-api-0\" (UID: \"26cdd8cb-1271-4a41-83a7-782efd9a9aa7\") " pod="openstack/glance-default-external-api-0" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.518901 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/26cdd8cb-1271-4a41-83a7-782efd9a9aa7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"26cdd8cb-1271-4a41-83a7-782efd9a9aa7\") " pod="openstack/glance-default-external-api-0" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.520780 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26cdd8cb-1271-4a41-83a7-782efd9a9aa7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"26cdd8cb-1271-4a41-83a7-782efd9a9aa7\") " pod="openstack/glance-default-external-api-0" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.523257 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szcz2\" (UniqueName: \"kubernetes.io/projected/26cdd8cb-1271-4a41-83a7-782efd9a9aa7-kube-api-access-szcz2\") pod \"glance-default-external-api-0\" (UID: \"26cdd8cb-1271-4a41-83a7-782efd9a9aa7\") " pod="openstack/glance-default-external-api-0" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.548921 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"26cdd8cb-1271-4a41-83a7-782efd9a9aa7\") " pod="openstack/glance-default-external-api-0" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.630616 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.943142 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76fcf4b695-w4qcg" event={"ID":"16102b8a-b1aa-43a6-b372-a042584f7279","Type":"ContainerDied","Data":"e396f396008ce3f5c784bc5b9c823f8d8217dd60a807562eed62b53d9927a629"} Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.943511 4772 scope.go:117] "RemoveContainer" containerID="419852505e37afcd042d6210f307969de385e47d7f236ec304780e99ef2ce87e" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.943188 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76fcf4b695-w4qcg" Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.970554 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-w4qcg"] Nov 22 10:59:45 crc kubenswrapper[4772]: I1122 10:59:45.983221 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-w4qcg"] Nov 22 10:59:46 crc kubenswrapper[4772]: I1122 10:59:46.030193 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 10:59:46 crc kubenswrapper[4772]: W1122 10:59:46.556610 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06f66afb_564e_442a_b833_2d6db747986f.slice/crio-aad143134eb589a1a9361b04f1808f205b940d12f5bdf8bad9a1c5eca70ab188 WatchSource:0}: Error finding container aad143134eb589a1a9361b04f1808f205b940d12f5bdf8bad9a1c5eca70ab188: Status 404 returned error can't find the container with id aad143134eb589a1a9361b04f1808f205b940d12f5bdf8bad9a1c5eca70ab188 Nov 22 10:59:46 crc kubenswrapper[4772]: I1122 10:59:46.727201 4772 scope.go:117] "RemoveContainer" containerID="eb722a4822d7b2d62af9307137dd9b1084a78f49a081ec79fae0e31daf9e7a93" Nov 22 10:59:46 crc kubenswrapper[4772]: I1122 10:59:46.957620 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"06f66afb-564e-442a-b833-2d6db747986f","Type":"ContainerStarted","Data":"aad143134eb589a1a9361b04f1808f205b940d12f5bdf8bad9a1c5eca70ab188"} Nov 22 10:59:47 crc kubenswrapper[4772]: E1122 10:59:47.086860 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"sg-core\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="d8c352bf-9815-42e1-8944-87a22e24c355" Nov 22 10:59:47 crc kubenswrapper[4772]: I1122 10:59:47.283589 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 10:59:47 crc kubenswrapper[4772]: I1122 10:59:47.426906 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16102b8a-b1aa-43a6-b372-a042584f7279" path="/var/lib/kubelet/pods/16102b8a-b1aa-43a6-b372-a042584f7279/volumes" Nov 22 10:59:47 crc kubenswrapper[4772]: I1122 10:59:47.971246 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-6mvdg" event={"ID":"dfb3b51a-da06-4a18-bc47-225aa06fff04","Type":"ContainerStarted","Data":"5f3fb2bca327167ec8cd3bb459744d887ba97a57e65c5d2bd3152cd9834ea040"} Nov 22 10:59:47 crc kubenswrapper[4772]: I1122 10:59:47.973634 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"26cdd8cb-1271-4a41-83a7-782efd9a9aa7","Type":"ContainerStarted","Data":"f538511ac555dba1a8bdfb53b2373b5e77e1dd3f3ae4735c98d0a2e7f1e6e46c"} Nov 22 10:59:47 crc kubenswrapper[4772]: I1122 10:59:47.973698 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"26cdd8cb-1271-4a41-83a7-782efd9a9aa7","Type":"ContainerStarted","Data":"ae3e325659dd192f8b1972ef3f3799b3f59c2dff4de08cdec9cac5accfc7d87c"} Nov 22 10:59:47 crc kubenswrapper[4772]: I1122 10:59:47.983899 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"06f66afb-564e-442a-b833-2d6db747986f","Type":"ContainerStarted","Data":"14231a0f6ec640aa6c40dfaa0db95ae4ea80eaaa3fc4ca5b123ac6999477d1f3"} Nov 22 10:59:47 crc kubenswrapper[4772]: I1122 10:59:47.996830 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-6mvdg" podStartSLOduration=4.279598717 podStartE2EDuration="41.996810283s" podCreationTimestamp="2025-11-22 10:59:06 +0000 UTC" firstStartedPulling="2025-11-22 10:59:09.026443753 +0000 UTC m=+1269.265888247" lastFinishedPulling="2025-11-22 10:59:46.743655309 +0000 UTC m=+1306.983099813" observedRunningTime="2025-11-22 10:59:47.987383248 +0000 UTC m=+1308.226827742" watchObservedRunningTime="2025-11-22 10:59:47.996810283 +0000 UTC m=+1308.236254777" Nov 22 10:59:47 crc kubenswrapper[4772]: I1122 10:59:47.999333 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d8c352bf-9815-42e1-8944-87a22e24c355","Type":"ContainerStarted","Data":"97746e5a86f803d1ce83211967e48a9ccbbeb6f2641c2358951974c39b37fb7c"} Nov 22 10:59:47 crc kubenswrapper[4772]: I1122 10:59:47.999501 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d8c352bf-9815-42e1-8944-87a22e24c355" containerName="ceilometer-central-agent" containerID="cri-o://48fbc585fc2d9876a20564649d6343249672db11054508fc537f2d970ccc75a5" gracePeriod=30 Nov 22 10:59:47 crc kubenswrapper[4772]: I1122 10:59:47.999867 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 22 10:59:48 crc kubenswrapper[4772]: I1122 10:59:48.000192 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d8c352bf-9815-42e1-8944-87a22e24c355" containerName="proxy-httpd" containerID="cri-o://97746e5a86f803d1ce83211967e48a9ccbbeb6f2641c2358951974c39b37fb7c" gracePeriod=30 Nov 22 10:59:48 crc kubenswrapper[4772]: I1122 10:59:48.000245 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d8c352bf-9815-42e1-8944-87a22e24c355" containerName="ceilometer-notification-agent" containerID="cri-o://53cea1f8353a8be9d5473072e40fcd172bef9fc1aae10e67a4a07e1bb1251d66" gracePeriod=30 Nov 22 10:59:49 crc kubenswrapper[4772]: I1122 10:59:49.014944 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"06f66afb-564e-442a-b833-2d6db747986f","Type":"ContainerStarted","Data":"e510026a5b066cf92da22755c75a394db34caee61cb17079473b6dcce63ca369"} Nov 22 10:59:49 crc kubenswrapper[4772]: I1122 10:59:49.021318 4772 generic.go:334] "Generic (PLEG): container finished" podID="d8c352bf-9815-42e1-8944-87a22e24c355" containerID="97746e5a86f803d1ce83211967e48a9ccbbeb6f2641c2358951974c39b37fb7c" exitCode=0 Nov 22 10:59:49 crc kubenswrapper[4772]: I1122 
10:59:49.021354 4772 generic.go:334] "Generic (PLEG): container finished" podID="d8c352bf-9815-42e1-8944-87a22e24c355" containerID="48fbc585fc2d9876a20564649d6343249672db11054508fc537f2d970ccc75a5" exitCode=0 Nov 22 10:59:49 crc kubenswrapper[4772]: I1122 10:59:49.021405 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d8c352bf-9815-42e1-8944-87a22e24c355","Type":"ContainerDied","Data":"97746e5a86f803d1ce83211967e48a9ccbbeb6f2641c2358951974c39b37fb7c"} Nov 22 10:59:49 crc kubenswrapper[4772]: I1122 10:59:49.021435 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d8c352bf-9815-42e1-8944-87a22e24c355","Type":"ContainerDied","Data":"48fbc585fc2d9876a20564649d6343249672db11054508fc537f2d970ccc75a5"} Nov 22 10:59:49 crc kubenswrapper[4772]: I1122 10:59:49.027720 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-6tjbt" event={"ID":"146226df-eb2f-4fd3-a175-bccca5de564e","Type":"ContainerStarted","Data":"cfbc7ad9e142e1e14a3a3e0b9cba18bd1b43f25e9b244879f3cd28edcebc8de2"} Nov 22 10:59:49 crc kubenswrapper[4772]: I1122 10:59:49.037279 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"26cdd8cb-1271-4a41-83a7-782efd9a9aa7","Type":"ContainerStarted","Data":"2e86c9ac6c57c96e25c3703e0436d75b21786c9a0735485a32d771f2e43a38d3"} Nov 22 10:59:49 crc kubenswrapper[4772]: I1122 10:59:49.057669 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.057649761 podStartE2EDuration="4.057649761s" podCreationTimestamp="2025-11-22 10:59:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:59:49.045032127 +0000 UTC m=+1309.284476611" watchObservedRunningTime="2025-11-22 10:59:49.057649761 +0000 UTC m=+1309.297094245" Nov 22 10:59:49 crc kubenswrapper[4772]: I1122 10:59:49.089384 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.089364538 podStartE2EDuration="4.089364538s" podCreationTimestamp="2025-11-22 10:59:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:59:49.084670852 +0000 UTC m=+1309.324115356" watchObservedRunningTime="2025-11-22 10:59:49.089364538 +0000 UTC m=+1309.328809032" Nov 22 10:59:49 crc kubenswrapper[4772]: I1122 10:59:49.108159 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-6tjbt" podStartSLOduration=4.060418043 podStartE2EDuration="43.108144835s" podCreationTimestamp="2025-11-22 10:59:06 +0000 UTC" firstStartedPulling="2025-11-22 10:59:09.055415942 +0000 UTC m=+1269.294860436" lastFinishedPulling="2025-11-22 10:59:48.103142734 +0000 UTC m=+1308.342587228" observedRunningTime="2025-11-22 10:59:49.103172841 +0000 UTC m=+1309.342617335" watchObservedRunningTime="2025-11-22 10:59:49.108144835 +0000 UTC m=+1309.347589329" Nov 22 10:59:52 crc kubenswrapper[4772]: I1122 10:59:52.744459 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 10:59:52 crc kubenswrapper[4772]: I1122 10:59:52.945345 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8c352bf-9815-42e1-8944-87a22e24c355-config-data\") pod \"d8c352bf-9815-42e1-8944-87a22e24c355\" (UID: \"d8c352bf-9815-42e1-8944-87a22e24c355\") " Nov 22 10:59:52 crc kubenswrapper[4772]: I1122 10:59:52.945418 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25rml\" (UniqueName: \"kubernetes.io/projected/d8c352bf-9815-42e1-8944-87a22e24c355-kube-api-access-25rml\") pod \"d8c352bf-9815-42e1-8944-87a22e24c355\" (UID: \"d8c352bf-9815-42e1-8944-87a22e24c355\") " Nov 22 10:59:52 crc kubenswrapper[4772]: I1122 10:59:52.945488 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8c352bf-9815-42e1-8944-87a22e24c355-combined-ca-bundle\") pod \"d8c352bf-9815-42e1-8944-87a22e24c355\" (UID: \"d8c352bf-9815-42e1-8944-87a22e24c355\") " Nov 22 10:59:52 crc kubenswrapper[4772]: I1122 10:59:52.945542 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8c352bf-9815-42e1-8944-87a22e24c355-run-httpd\") pod \"d8c352bf-9815-42e1-8944-87a22e24c355\" (UID: \"d8c352bf-9815-42e1-8944-87a22e24c355\") " Nov 22 10:59:52 crc kubenswrapper[4772]: I1122 10:59:52.945567 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8c352bf-9815-42e1-8944-87a22e24c355-log-httpd\") pod \"d8c352bf-9815-42e1-8944-87a22e24c355\" (UID: \"d8c352bf-9815-42e1-8944-87a22e24c355\") " Nov 22 10:59:52 crc kubenswrapper[4772]: I1122 10:59:52.945668 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d8c352bf-9815-42e1-8944-87a22e24c355-sg-core-conf-yaml\") pod \"d8c352bf-9815-42e1-8944-87a22e24c355\" (UID: \"d8c352bf-9815-42e1-8944-87a22e24c355\") " Nov 22 10:59:52 crc kubenswrapper[4772]: I1122 10:59:52.945731 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8c352bf-9815-42e1-8944-87a22e24c355-scripts\") pod \"d8c352bf-9815-42e1-8944-87a22e24c355\" (UID: \"d8c352bf-9815-42e1-8944-87a22e24c355\") " Nov 22 10:59:52 crc kubenswrapper[4772]: I1122 10:59:52.946211 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8c352bf-9815-42e1-8944-87a22e24c355-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d8c352bf-9815-42e1-8944-87a22e24c355" (UID: "d8c352bf-9815-42e1-8944-87a22e24c355"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 10:59:52 crc kubenswrapper[4772]: I1122 10:59:52.946499 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8c352bf-9815-42e1-8944-87a22e24c355-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d8c352bf-9815-42e1-8944-87a22e24c355" (UID: "d8c352bf-9815-42e1-8944-87a22e24c355"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 10:59:52 crc kubenswrapper[4772]: I1122 10:59:52.951621 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8c352bf-9815-42e1-8944-87a22e24c355-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d8c352bf-9815-42e1-8944-87a22e24c355" (UID: "d8c352bf-9815-42e1-8944-87a22e24c355"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:59:52 crc kubenswrapper[4772]: I1122 10:59:52.952149 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8c352bf-9815-42e1-8944-87a22e24c355-kube-api-access-25rml" (OuterVolumeSpecName: "kube-api-access-25rml") pod "d8c352bf-9815-42e1-8944-87a22e24c355" (UID: "d8c352bf-9815-42e1-8944-87a22e24c355"). InnerVolumeSpecName "kube-api-access-25rml". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:59:52 crc kubenswrapper[4772]: I1122 10:59:52.953322 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8c352bf-9815-42e1-8944-87a22e24c355-scripts" (OuterVolumeSpecName: "scripts") pod "d8c352bf-9815-42e1-8944-87a22e24c355" (UID: "d8c352bf-9815-42e1-8944-87a22e24c355"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:59:53 crc kubenswrapper[4772]: I1122 10:59:53.005640 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8c352bf-9815-42e1-8944-87a22e24c355-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d8c352bf-9815-42e1-8944-87a22e24c355" (UID: "d8c352bf-9815-42e1-8944-87a22e24c355"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:59:53 crc kubenswrapper[4772]: I1122 10:59:53.028760 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8c352bf-9815-42e1-8944-87a22e24c355-config-data" (OuterVolumeSpecName: "config-data") pod "d8c352bf-9815-42e1-8944-87a22e24c355" (UID: "d8c352bf-9815-42e1-8944-87a22e24c355"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:59:53 crc kubenswrapper[4772]: I1122 10:59:53.048188 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8c352bf-9815-42e1-8944-87a22e24c355-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 10:59:53 crc kubenswrapper[4772]: I1122 10:59:53.048223 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25rml\" (UniqueName: \"kubernetes.io/projected/d8c352bf-9815-42e1-8944-87a22e24c355-kube-api-access-25rml\") on node \"crc\" DevicePath \"\"" Nov 22 10:59:53 crc kubenswrapper[4772]: I1122 10:59:53.048238 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8c352bf-9815-42e1-8944-87a22e24c355-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 10:59:53 crc kubenswrapper[4772]: I1122 10:59:53.048249 4772 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8c352bf-9815-42e1-8944-87a22e24c355-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 10:59:53 crc kubenswrapper[4772]: I1122 10:59:53.048261 4772 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8c352bf-9815-42e1-8944-87a22e24c355-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 10:59:53 crc kubenswrapper[4772]: I1122 10:59:53.048270 4772 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d8c352bf-9815-42e1-8944-87a22e24c355-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 22 10:59:53 crc kubenswrapper[4772]: I1122 10:59:53.048281 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8c352bf-9815-42e1-8944-87a22e24c355-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 10:59:53 crc kubenswrapper[4772]: I1122 10:59:53.074899 4772 generic.go:334] "Generic (PLEG): container finished" podID="d8c352bf-9815-42e1-8944-87a22e24c355" containerID="53cea1f8353a8be9d5473072e40fcd172bef9fc1aae10e67a4a07e1bb1251d66" exitCode=0 Nov 22 10:59:53 crc kubenswrapper[4772]: I1122 10:59:53.074972 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 10:59:53 crc kubenswrapper[4772]: I1122 10:59:53.074962 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d8c352bf-9815-42e1-8944-87a22e24c355","Type":"ContainerDied","Data":"53cea1f8353a8be9d5473072e40fcd172bef9fc1aae10e67a4a07e1bb1251d66"} Nov 22 10:59:53 crc kubenswrapper[4772]: I1122 10:59:53.075102 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d8c352bf-9815-42e1-8944-87a22e24c355","Type":"ContainerDied","Data":"0333cf35d04fe6b62f418d030352a396d37f05779e6891e592f68b351fbfec0c"} Nov 22 10:59:53 crc kubenswrapper[4772]: I1122 10:59:53.075125 4772 scope.go:117] "RemoveContainer" containerID="97746e5a86f803d1ce83211967e48a9ccbbeb6f2641c2358951974c39b37fb7c" Nov 22 10:59:53 crc kubenswrapper[4772]: I1122 10:59:53.113522 4772 scope.go:117] "RemoveContainer" containerID="53cea1f8353a8be9d5473072e40fcd172bef9fc1aae10e67a4a07e1bb1251d66" Nov 22 10:59:53 crc kubenswrapper[4772]: I1122 10:59:53.141560 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 10:59:53 crc kubenswrapper[4772]: I1122 10:59:53.151263 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 22 10:59:53 crc kubenswrapper[4772]: I1122 10:59:53.162998 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 22 10:59:53 crc kubenswrapper[4772]: E1122 10:59:53.163438 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8c352bf-9815-42e1-8944-87a22e24c355" containerName="ceilometer-notification-agent" Nov 22 10:59:53 crc kubenswrapper[4772]: I1122 10:59:53.163454 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8c352bf-9815-42e1-8944-87a22e24c355" containerName="ceilometer-notification-agent" Nov 22 10:59:53 crc kubenswrapper[4772]: E1122 10:59:53.163470 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8c352bf-9815-42e1-8944-87a22e24c355" containerName="proxy-httpd" Nov 22 10:59:53 crc kubenswrapper[4772]: I1122 10:59:53.163476 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8c352bf-9815-42e1-8944-87a22e24c355" containerName="proxy-httpd" Nov 22 10:59:53 crc kubenswrapper[4772]: E1122 10:59:53.163488 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8c352bf-9815-42e1-8944-87a22e24c355" containerName="ceilometer-central-agent" Nov 22 10:59:53 crc kubenswrapper[4772]: I1122 10:59:53.163494 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8c352bf-9815-42e1-8944-87a22e24c355" containerName="ceilometer-central-agent" Nov 22 10:59:53 crc kubenswrapper[4772]: I1122 10:59:53.163727 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8c352bf-9815-42e1-8944-87a22e24c355" containerName="ceilometer-central-agent" Nov 22 10:59:53 crc kubenswrapper[4772]: I1122 10:59:53.163752 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8c352bf-9815-42e1-8944-87a22e24c355" containerName="ceilometer-notification-agent" Nov 22 10:59:53 crc kubenswrapper[4772]: I1122 10:59:53.163769 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8c352bf-9815-42e1-8944-87a22e24c355" containerName="proxy-httpd" Nov 22 10:59:53 crc kubenswrapper[4772]: I1122 10:59:53.165729 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 10:59:53 crc kubenswrapper[4772]: I1122 10:59:53.167247 4772 scope.go:117] "RemoveContainer" containerID="48fbc585fc2d9876a20564649d6343249672db11054508fc537f2d970ccc75a5" Nov 22 10:59:53 crc kubenswrapper[4772]: I1122 10:59:53.169631 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 22 10:59:53 crc kubenswrapper[4772]: I1122 10:59:53.169822 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 22 10:59:53 crc kubenswrapper[4772]: I1122 10:59:53.176023 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 10:59:53 crc kubenswrapper[4772]: I1122 10:59:53.200398 4772 scope.go:117] "RemoveContainer" containerID="97746e5a86f803d1ce83211967e48a9ccbbeb6f2641c2358951974c39b37fb7c" Nov 22 10:59:53 crc kubenswrapper[4772]: E1122 10:59:53.200916 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97746e5a86f803d1ce83211967e48a9ccbbeb6f2641c2358951974c39b37fb7c\": container with ID starting with 97746e5a86f803d1ce83211967e48a9ccbbeb6f2641c2358951974c39b37fb7c not found: ID does not exist" containerID="97746e5a86f803d1ce83211967e48a9ccbbeb6f2641c2358951974c39b37fb7c" Nov 22 10:59:53 crc kubenswrapper[4772]: I1122 10:59:53.200959 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97746e5a86f803d1ce83211967e48a9ccbbeb6f2641c2358951974c39b37fb7c"} err="failed to get container status \"97746e5a86f803d1ce83211967e48a9ccbbeb6f2641c2358951974c39b37fb7c\": rpc error: code = NotFound desc = could not find container \"97746e5a86f803d1ce83211967e48a9ccbbeb6f2641c2358951974c39b37fb7c\": container with ID starting with 97746e5a86f803d1ce83211967e48a9ccbbeb6f2641c2358951974c39b37fb7c not found: ID does not exist" Nov 22 10:59:53 crc kubenswrapper[4772]: I1122 10:59:53.200989 4772 scope.go:117] "RemoveContainer" containerID="53cea1f8353a8be9d5473072e40fcd172bef9fc1aae10e67a4a07e1bb1251d66" Nov 22 10:59:53 crc kubenswrapper[4772]: E1122 10:59:53.201321 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53cea1f8353a8be9d5473072e40fcd172bef9fc1aae10e67a4a07e1bb1251d66\": container with ID starting with 53cea1f8353a8be9d5473072e40fcd172bef9fc1aae10e67a4a07e1bb1251d66 not found: ID does not exist" containerID="53cea1f8353a8be9d5473072e40fcd172bef9fc1aae10e67a4a07e1bb1251d66" Nov 22 10:59:53 crc kubenswrapper[4772]: I1122 10:59:53.201363 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53cea1f8353a8be9d5473072e40fcd172bef9fc1aae10e67a4a07e1bb1251d66"} err="failed to get container status \"53cea1f8353a8be9d5473072e40fcd172bef9fc1aae10e67a4a07e1bb1251d66\": rpc error: code = NotFound desc = could not find container \"53cea1f8353a8be9d5473072e40fcd172bef9fc1aae10e67a4a07e1bb1251d66\": container with ID starting with 53cea1f8353a8be9d5473072e40fcd172bef9fc1aae10e67a4a07e1bb1251d66 not found: ID does not exist" Nov 22 10:59:53 crc kubenswrapper[4772]: I1122 10:59:53.201393 4772 scope.go:117] "RemoveContainer" containerID="48fbc585fc2d9876a20564649d6343249672db11054508fc537f2d970ccc75a5" Nov 22 10:59:53 crc kubenswrapper[4772]: E1122 10:59:53.201712 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"48fbc585fc2d9876a20564649d6343249672db11054508fc537f2d970ccc75a5\": container with ID starting with 48fbc585fc2d9876a20564649d6343249672db11054508fc537f2d970ccc75a5 not found: ID does not exist" containerID="48fbc585fc2d9876a20564649d6343249672db11054508fc537f2d970ccc75a5" Nov 22 10:59:53 crc kubenswrapper[4772]: I1122 10:59:53.201740 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48fbc585fc2d9876a20564649d6343249672db11054508fc537f2d970ccc75a5"} err="failed to get container status \"48fbc585fc2d9876a20564649d6343249672db11054508fc537f2d970ccc75a5\": rpc error: code = NotFound desc = could not find container \"48fbc585fc2d9876a20564649d6343249672db11054508fc537f2d970ccc75a5\": container with ID starting with 48fbc585fc2d9876a20564649d6343249672db11054508fc537f2d970ccc75a5 not found: ID does not exist" Nov 22 10:59:53 crc kubenswrapper[4772]: I1122 10:59:53.362533 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b8a1443-6167-4b78-8711-9e0a566004c7-scripts\") pod \"ceilometer-0\" (UID: \"6b8a1443-6167-4b78-8711-9e0a566004c7\") " pod="openstack/ceilometer-0" Nov 22 10:59:53 crc kubenswrapper[4772]: I1122 10:59:53.362576 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b8a1443-6167-4b78-8711-9e0a566004c7-config-data\") pod \"ceilometer-0\" (UID: \"6b8a1443-6167-4b78-8711-9e0a566004c7\") " pod="openstack/ceilometer-0" Nov 22 10:59:53 crc kubenswrapper[4772]: I1122 10:59:53.362651 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b8a1443-6167-4b78-8711-9e0a566004c7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6b8a1443-6167-4b78-8711-9e0a566004c7\") " pod="openstack/ceilometer-0" Nov 22 10:59:53 crc kubenswrapper[4772]: I1122 10:59:53.362921 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6b8a1443-6167-4b78-8711-9e0a566004c7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6b8a1443-6167-4b78-8711-9e0a566004c7\") " pod="openstack/ceilometer-0" Nov 22 10:59:53 crc kubenswrapper[4772]: I1122 10:59:53.362966 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b8a1443-6167-4b78-8711-9e0a566004c7-log-httpd\") pod \"ceilometer-0\" (UID: \"6b8a1443-6167-4b78-8711-9e0a566004c7\") " pod="openstack/ceilometer-0" Nov 22 10:59:53 crc kubenswrapper[4772]: I1122 10:59:53.363089 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbhm6\" (UniqueName: \"kubernetes.io/projected/6b8a1443-6167-4b78-8711-9e0a566004c7-kube-api-access-hbhm6\") pod \"ceilometer-0\" (UID: \"6b8a1443-6167-4b78-8711-9e0a566004c7\") " pod="openstack/ceilometer-0" Nov 22 10:59:53 crc kubenswrapper[4772]: I1122 10:59:53.363162 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b8a1443-6167-4b78-8711-9e0a566004c7-run-httpd\") pod \"ceilometer-0\" (UID: \"6b8a1443-6167-4b78-8711-9e0a566004c7\") " pod="openstack/ceilometer-0" Nov 22 10:59:53 crc kubenswrapper[4772]: I1122 10:59:53.423704 4772 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8c352bf-9815-42e1-8944-87a22e24c355" path="/var/lib/kubelet/pods/d8c352bf-9815-42e1-8944-87a22e24c355/volumes" Nov 22 10:59:53 crc kubenswrapper[4772]: I1122 10:59:53.464542 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbhm6\" (UniqueName: \"kubernetes.io/projected/6b8a1443-6167-4b78-8711-9e0a566004c7-kube-api-access-hbhm6\") pod \"ceilometer-0\" (UID: \"6b8a1443-6167-4b78-8711-9e0a566004c7\") " pod="openstack/ceilometer-0" Nov 22 10:59:53 crc kubenswrapper[4772]: I1122 10:59:53.464653 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b8a1443-6167-4b78-8711-9e0a566004c7-run-httpd\") pod \"ceilometer-0\" (UID: \"6b8a1443-6167-4b78-8711-9e0a566004c7\") " pod="openstack/ceilometer-0" Nov 22 10:59:53 crc kubenswrapper[4772]: I1122 10:59:53.464725 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b8a1443-6167-4b78-8711-9e0a566004c7-scripts\") pod \"ceilometer-0\" (UID: \"6b8a1443-6167-4b78-8711-9e0a566004c7\") " pod="openstack/ceilometer-0" Nov 22 10:59:53 crc kubenswrapper[4772]: I1122 10:59:53.464745 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b8a1443-6167-4b78-8711-9e0a566004c7-config-data\") pod \"ceilometer-0\" (UID: \"6b8a1443-6167-4b78-8711-9e0a566004c7\") " pod="openstack/ceilometer-0" Nov 22 10:59:53 crc kubenswrapper[4772]: I1122 10:59:53.464833 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b8a1443-6167-4b78-8711-9e0a566004c7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6b8a1443-6167-4b78-8711-9e0a566004c7\") " pod="openstack/ceilometer-0" Nov 22 10:59:53 crc kubenswrapper[4772]: I1122 10:59:53.464869 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6b8a1443-6167-4b78-8711-9e0a566004c7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6b8a1443-6167-4b78-8711-9e0a566004c7\") " pod="openstack/ceilometer-0" Nov 22 10:59:53 crc kubenswrapper[4772]: I1122 10:59:53.464904 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b8a1443-6167-4b78-8711-9e0a566004c7-log-httpd\") pod \"ceilometer-0\" (UID: \"6b8a1443-6167-4b78-8711-9e0a566004c7\") " pod="openstack/ceilometer-0" Nov 22 10:59:53 crc kubenswrapper[4772]: I1122 10:59:53.465373 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b8a1443-6167-4b78-8711-9e0a566004c7-run-httpd\") pod \"ceilometer-0\" (UID: \"6b8a1443-6167-4b78-8711-9e0a566004c7\") " pod="openstack/ceilometer-0" Nov 22 10:59:53 crc kubenswrapper[4772]: I1122 10:59:53.465501 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b8a1443-6167-4b78-8711-9e0a566004c7-log-httpd\") pod \"ceilometer-0\" (UID: \"6b8a1443-6167-4b78-8711-9e0a566004c7\") " pod="openstack/ceilometer-0" Nov 22 10:59:53 crc kubenswrapper[4772]: I1122 10:59:53.469867 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6b8a1443-6167-4b78-8711-9e0a566004c7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6b8a1443-6167-4b78-8711-9e0a566004c7\") " pod="openstack/ceilometer-0" Nov 22 10:59:53 crc kubenswrapper[4772]: I1122 10:59:53.470303 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b8a1443-6167-4b78-8711-9e0a566004c7-scripts\") pod \"ceilometer-0\" (UID: \"6b8a1443-6167-4b78-8711-9e0a566004c7\") " pod="openstack/ceilometer-0" Nov 22 10:59:53 crc kubenswrapper[4772]: I1122 10:59:53.470760 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6b8a1443-6167-4b78-8711-9e0a566004c7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6b8a1443-6167-4b78-8711-9e0a566004c7\") " pod="openstack/ceilometer-0" Nov 22 10:59:53 crc kubenswrapper[4772]: I1122 10:59:53.473865 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b8a1443-6167-4b78-8711-9e0a566004c7-config-data\") pod \"ceilometer-0\" (UID: \"6b8a1443-6167-4b78-8711-9e0a566004c7\") " pod="openstack/ceilometer-0" Nov 22 10:59:53 crc kubenswrapper[4772]: I1122 10:59:53.485309 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbhm6\" (UniqueName: \"kubernetes.io/projected/6b8a1443-6167-4b78-8711-9e0a566004c7-kube-api-access-hbhm6\") pod \"ceilometer-0\" (UID: \"6b8a1443-6167-4b78-8711-9e0a566004c7\") " pod="openstack/ceilometer-0" Nov 22 10:59:53 crc kubenswrapper[4772]: I1122 10:59:53.783697 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 10:59:54 crc kubenswrapper[4772]: I1122 10:59:54.208355 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 10:59:54 crc kubenswrapper[4772]: W1122 10:59:54.215863 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b8a1443_6167_4b78_8711_9e0a566004c7.slice/crio-1781b0598eab480bd2372bbd48dfff86da315214aefb2bfd4e6cd8428d7c15bc WatchSource:0}: Error finding container 1781b0598eab480bd2372bbd48dfff86da315214aefb2bfd4e6cd8428d7c15bc: Status 404 returned error can't find the container with id 1781b0598eab480bd2372bbd48dfff86da315214aefb2bfd4e6cd8428d7c15bc Nov 22 10:59:55 crc kubenswrapper[4772]: I1122 10:59:55.094522 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b8a1443-6167-4b78-8711-9e0a566004c7","Type":"ContainerStarted","Data":"84353ae768bfbd6529c9bfae429c833e439f2cc6df58b069a29438776ce61917"} Nov 22 10:59:55 crc kubenswrapper[4772]: I1122 10:59:55.095152 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b8a1443-6167-4b78-8711-9e0a566004c7","Type":"ContainerStarted","Data":"1781b0598eab480bd2372bbd48dfff86da315214aefb2bfd4e6cd8428d7c15bc"} Nov 22 10:59:55 crc kubenswrapper[4772]: I1122 10:59:55.456013 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 22 10:59:55 crc kubenswrapper[4772]: I1122 10:59:55.456169 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 22 10:59:55 crc kubenswrapper[4772]: I1122 10:59:55.487644 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/glance-default-internal-api-0" Nov 22 10:59:55 crc kubenswrapper[4772]: I1122 10:59:55.513832 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 22 10:59:55 crc kubenswrapper[4772]: I1122 10:59:55.632217 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 22 10:59:55 crc kubenswrapper[4772]: I1122 10:59:55.632646 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 22 10:59:55 crc kubenswrapper[4772]: I1122 10:59:55.663370 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 22 10:59:55 crc kubenswrapper[4772]: I1122 10:59:55.675647 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 22 10:59:56 crc kubenswrapper[4772]: I1122 10:59:56.107081 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b8a1443-6167-4b78-8711-9e0a566004c7","Type":"ContainerStarted","Data":"64607a8e83f579d0f89ad96f482a1a7b7ce50a6b4017503ecb43a06b45d494b3"} Nov 22 10:59:56 crc kubenswrapper[4772]: I1122 10:59:56.109261 4772 generic.go:334] "Generic (PLEG): container finished" podID="146226df-eb2f-4fd3-a175-bccca5de564e" containerID="cfbc7ad9e142e1e14a3a3e0b9cba18bd1b43f25e9b244879f3cd28edcebc8de2" exitCode=0 Nov 22 10:59:56 crc kubenswrapper[4772]: I1122 10:59:56.109327 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-6tjbt" event={"ID":"146226df-eb2f-4fd3-a175-bccca5de564e","Type":"ContainerDied","Data":"cfbc7ad9e142e1e14a3a3e0b9cba18bd1b43f25e9b244879f3cd28edcebc8de2"} Nov 22 10:59:56 crc kubenswrapper[4772]: I1122 10:59:56.110009 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 22 10:59:56 crc kubenswrapper[4772]: I1122 10:59:56.110221 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 22 10:59:56 crc kubenswrapper[4772]: I1122 10:59:56.110311 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 22 10:59:56 crc kubenswrapper[4772]: I1122 10:59:56.110389 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 22 10:59:57 crc kubenswrapper[4772]: I1122 10:59:57.118658 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b8a1443-6167-4b78-8711-9e0a566004c7","Type":"ContainerStarted","Data":"581a123fdb71470fd3ebfa1e4ef00ccba778c4cfde04db7c91e1d8c565678271"} Nov 22 10:59:57 crc kubenswrapper[4772]: I1122 10:59:57.120479 4772 generic.go:334] "Generic (PLEG): container finished" podID="dfb3b51a-da06-4a18-bc47-225aa06fff04" containerID="5f3fb2bca327167ec8cd3bb459744d887ba97a57e65c5d2bd3152cd9834ea040" exitCode=0 Nov 22 10:59:57 crc kubenswrapper[4772]: I1122 10:59:57.120632 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-6mvdg" event={"ID":"dfb3b51a-da06-4a18-bc47-225aa06fff04","Type":"ContainerDied","Data":"5f3fb2bca327167ec8cd3bb459744d887ba97a57e65c5d2bd3152cd9834ea040"} Nov 22 10:59:57 crc kubenswrapper[4772]: I1122 10:59:57.537436 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-6tjbt" Nov 22 10:59:57 crc kubenswrapper[4772]: I1122 10:59:57.653445 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/146226df-eb2f-4fd3-a175-bccca5de564e-db-sync-config-data\") pod \"146226df-eb2f-4fd3-a175-bccca5de564e\" (UID: \"146226df-eb2f-4fd3-a175-bccca5de564e\") " Nov 22 10:59:57 crc kubenswrapper[4772]: I1122 10:59:57.653494 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/146226df-eb2f-4fd3-a175-bccca5de564e-combined-ca-bundle\") pod \"146226df-eb2f-4fd3-a175-bccca5de564e\" (UID: \"146226df-eb2f-4fd3-a175-bccca5de564e\") " Nov 22 10:59:57 crc kubenswrapper[4772]: I1122 10:59:57.653623 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4bgld\" (UniqueName: \"kubernetes.io/projected/146226df-eb2f-4fd3-a175-bccca5de564e-kube-api-access-4bgld\") pod \"146226df-eb2f-4fd3-a175-bccca5de564e\" (UID: \"146226df-eb2f-4fd3-a175-bccca5de564e\") " Nov 22 10:59:57 crc kubenswrapper[4772]: I1122 10:59:57.659472 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/146226df-eb2f-4fd3-a175-bccca5de564e-kube-api-access-4bgld" (OuterVolumeSpecName: "kube-api-access-4bgld") pod "146226df-eb2f-4fd3-a175-bccca5de564e" (UID: "146226df-eb2f-4fd3-a175-bccca5de564e"). InnerVolumeSpecName "kube-api-access-4bgld". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:59:57 crc kubenswrapper[4772]: I1122 10:59:57.659597 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/146226df-eb2f-4fd3-a175-bccca5de564e-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "146226df-eb2f-4fd3-a175-bccca5de564e" (UID: "146226df-eb2f-4fd3-a175-bccca5de564e"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:59:57 crc kubenswrapper[4772]: I1122 10:59:57.680733 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/146226df-eb2f-4fd3-a175-bccca5de564e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "146226df-eb2f-4fd3-a175-bccca5de564e" (UID: "146226df-eb2f-4fd3-a175-bccca5de564e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:59:57 crc kubenswrapper[4772]: I1122 10:59:57.755533 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4bgld\" (UniqueName: \"kubernetes.io/projected/146226df-eb2f-4fd3-a175-bccca5de564e-kube-api-access-4bgld\") on node \"crc\" DevicePath \"\"" Nov 22 10:59:57 crc kubenswrapper[4772]: I1122 10:59:57.755565 4772 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/146226df-eb2f-4fd3-a175-bccca5de564e-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 10:59:57 crc kubenswrapper[4772]: I1122 10:59:57.755576 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/146226df-eb2f-4fd3-a175-bccca5de564e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.131477 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-6tjbt" event={"ID":"146226df-eb2f-4fd3-a175-bccca5de564e","Type":"ContainerDied","Data":"40ad5979f2bf9848781e4440850db1580934fc17422d9d8969a86d0e8d907af7"} Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.131823 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40ad5979f2bf9848781e4440850db1580934fc17422d9d8969a86d0e8d907af7" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.131512 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-6tjbt" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.133900 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b8a1443-6167-4b78-8711-9e0a566004c7","Type":"ContainerStarted","Data":"1d6ce511858533d850829cf51dfa197fbd6da2c32ff765fe832bc228aa3b01cc"} Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.134099 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.136321 4772 generic.go:334] "Generic (PLEG): container finished" podID="fb489d0e-dc04-4a25-8e89-ec9ede81a3cb" containerID="22e2f7cca48f17c69e56813ca2b405c534cda48058fd9a577adcf8d1882eb7e1" exitCode=0 Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.136415 4772 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.136422 4772 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.136423 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-9mzxx" event={"ID":"fb489d0e-dc04-4a25-8e89-ec9ede81a3cb","Type":"ContainerDied","Data":"22e2f7cca48f17c69e56813ca2b405c534cda48058fd9a577adcf8d1882eb7e1"} Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.196211 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.255983196 podStartE2EDuration="5.196178941s" podCreationTimestamp="2025-11-22 10:59:53 +0000 UTC" firstStartedPulling="2025-11-22 10:59:54.218497209 +0000 UTC m=+1314.457941703" lastFinishedPulling="2025-11-22 10:59:57.158692954 +0000 UTC m=+1317.398137448" observedRunningTime="2025-11-22 10:59:58.183966737 +0000 UTC m=+1318.423411241" watchObservedRunningTime="2025-11-22 10:59:58.196178941 +0000 UTC m=+1318.435623455" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 
10:59:58.298814 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.308109 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.320654 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.320749 4772 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.466515 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-5f797948bc-dk5pr"] Nov 22 10:59:58 crc kubenswrapper[4772]: E1122 10:59:58.467239 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="146226df-eb2f-4fd3-a175-bccca5de564e" containerName="barbican-db-sync" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.467251 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="146226df-eb2f-4fd3-a175-bccca5de564e" containerName="barbican-db-sync" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.467440 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="146226df-eb2f-4fd3-a175-bccca5de564e" containerName="barbican-db-sync" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.468325 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5f797948bc-dk5pr" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.471230 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.471425 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-pxhcj" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.471833 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.487799 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-79459755b6-xsvzh"] Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.489404 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-79459755b6-xsvzh" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.491458 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.502894 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5f797948bc-dk5pr"] Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.521449 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-79459755b6-xsvzh"] Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.563320 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-59d5ff467f-6lqp8"] Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.564721 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59d5ff467f-6lqp8" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.574273 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c994b4f-e182-481a-a3ba-17dc9656c70c-combined-ca-bundle\") pod \"barbican-keystone-listener-79459755b6-xsvzh\" (UID: \"1c994b4f-e182-481a-a3ba-17dc9656c70c\") " pod="openstack/barbican-keystone-listener-79459755b6-xsvzh" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.574536 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/027dc32b-06dd-45bf-9aad-8e0c92b44a2b-logs\") pod \"barbican-worker-5f797948bc-dk5pr\" (UID: \"027dc32b-06dd-45bf-9aad-8e0c92b44a2b\") " pod="openstack/barbican-worker-5f797948bc-dk5pr" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.574656 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c994b4f-e182-481a-a3ba-17dc9656c70c-logs\") pod \"barbican-keystone-listener-79459755b6-xsvzh\" (UID: \"1c994b4f-e182-481a-a3ba-17dc9656c70c\") " pod="openstack/barbican-keystone-listener-79459755b6-xsvzh" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.574815 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2zlx\" (UniqueName: \"kubernetes.io/projected/1c994b4f-e182-481a-a3ba-17dc9656c70c-kube-api-access-q2zlx\") pod \"barbican-keystone-listener-79459755b6-xsvzh\" (UID: \"1c994b4f-e182-481a-a3ba-17dc9656c70c\") " pod="openstack/barbican-keystone-listener-79459755b6-xsvzh" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.574961 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf9sn\" (UniqueName: \"kubernetes.io/projected/027dc32b-06dd-45bf-9aad-8e0c92b44a2b-kube-api-access-wf9sn\") pod \"barbican-worker-5f797948bc-dk5pr\" (UID: \"027dc32b-06dd-45bf-9aad-8e0c92b44a2b\") " pod="openstack/barbican-worker-5f797948bc-dk5pr" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.575088 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/027dc32b-06dd-45bf-9aad-8e0c92b44a2b-combined-ca-bundle\") pod \"barbican-worker-5f797948bc-dk5pr\" (UID: \"027dc32b-06dd-45bf-9aad-8e0c92b44a2b\") " pod="openstack/barbican-worker-5f797948bc-dk5pr" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.575218 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c994b4f-e182-481a-a3ba-17dc9656c70c-config-data\") pod \"barbican-keystone-listener-79459755b6-xsvzh\" (UID: \"1c994b4f-e182-481a-a3ba-17dc9656c70c\") " pod="openstack/barbican-keystone-listener-79459755b6-xsvzh" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.575326 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/027dc32b-06dd-45bf-9aad-8e0c92b44a2b-config-data-custom\") pod \"barbican-worker-5f797948bc-dk5pr\" (UID: \"027dc32b-06dd-45bf-9aad-8e0c92b44a2b\") " pod="openstack/barbican-worker-5f797948bc-dk5pr" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.575418 4772 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1c994b4f-e182-481a-a3ba-17dc9656c70c-config-data-custom\") pod \"barbican-keystone-listener-79459755b6-xsvzh\" (UID: \"1c994b4f-e182-481a-a3ba-17dc9656c70c\") " pod="openstack/barbican-keystone-listener-79459755b6-xsvzh" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.575557 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/027dc32b-06dd-45bf-9aad-8e0c92b44a2b-config-data\") pod \"barbican-worker-5f797948bc-dk5pr\" (UID: \"027dc32b-06dd-45bf-9aad-8e0c92b44a2b\") " pod="openstack/barbican-worker-5f797948bc-dk5pr" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.612990 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7dccb6fcbd-htjnx"] Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.614456 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7dccb6fcbd-htjnx" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.616780 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.629950 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59d5ff467f-6lqp8"] Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.669658 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7dccb6fcbd-htjnx"] Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.677578 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1485651d-d1ff-4ef4-88fe-0ab6dd041df4-config-data\") pod \"barbican-api-7dccb6fcbd-htjnx\" (UID: \"1485651d-d1ff-4ef4-88fe-0ab6dd041df4\") " pod="openstack/barbican-api-7dccb6fcbd-htjnx" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.677633 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1485651d-d1ff-4ef4-88fe-0ab6dd041df4-combined-ca-bundle\") pod \"barbican-api-7dccb6fcbd-htjnx\" (UID: \"1485651d-d1ff-4ef4-88fe-0ab6dd041df4\") " pod="openstack/barbican-api-7dccb6fcbd-htjnx" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.677664 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c994b4f-e182-481a-a3ba-17dc9656c70c-combined-ca-bundle\") pod \"barbican-keystone-listener-79459755b6-xsvzh\" (UID: \"1c994b4f-e182-481a-a3ba-17dc9656c70c\") " pod="openstack/barbican-keystone-listener-79459755b6-xsvzh" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.677683 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/027dc32b-06dd-45bf-9aad-8e0c92b44a2b-logs\") pod \"barbican-worker-5f797948bc-dk5pr\" (UID: \"027dc32b-06dd-45bf-9aad-8e0c92b44a2b\") " pod="openstack/barbican-worker-5f797948bc-dk5pr" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.677718 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c994b4f-e182-481a-a3ba-17dc9656c70c-logs\") pod \"barbican-keystone-listener-79459755b6-xsvzh\" (UID: 
\"1c994b4f-e182-481a-a3ba-17dc9656c70c\") " pod="openstack/barbican-keystone-listener-79459755b6-xsvzh" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.677738 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2zlx\" (UniqueName: \"kubernetes.io/projected/1c994b4f-e182-481a-a3ba-17dc9656c70c-kube-api-access-q2zlx\") pod \"barbican-keystone-listener-79459755b6-xsvzh\" (UID: \"1c994b4f-e182-481a-a3ba-17dc9656c70c\") " pod="openstack/barbican-keystone-listener-79459755b6-xsvzh" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.677767 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wf9sn\" (UniqueName: \"kubernetes.io/projected/027dc32b-06dd-45bf-9aad-8e0c92b44a2b-kube-api-access-wf9sn\") pod \"barbican-worker-5f797948bc-dk5pr\" (UID: \"027dc32b-06dd-45bf-9aad-8e0c92b44a2b\") " pod="openstack/barbican-worker-5f797948bc-dk5pr" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.677786 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/027dc32b-06dd-45bf-9aad-8e0c92b44a2b-combined-ca-bundle\") pod \"barbican-worker-5f797948bc-dk5pr\" (UID: \"027dc32b-06dd-45bf-9aad-8e0c92b44a2b\") " pod="openstack/barbican-worker-5f797948bc-dk5pr" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.677811 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qnjt\" (UniqueName: \"kubernetes.io/projected/1485651d-d1ff-4ef4-88fe-0ab6dd041df4-kube-api-access-9qnjt\") pod \"barbican-api-7dccb6fcbd-htjnx\" (UID: \"1485651d-d1ff-4ef4-88fe-0ab6dd041df4\") " pod="openstack/barbican-api-7dccb6fcbd-htjnx" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.677829 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/76fa6572-dd30-485f-8d6c-e2c2d96e8bb7-ovsdbserver-sb\") pod \"dnsmasq-dns-59d5ff467f-6lqp8\" (UID: \"76fa6572-dd30-485f-8d6c-e2c2d96e8bb7\") " pod="openstack/dnsmasq-dns-59d5ff467f-6lqp8" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.677961 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76fa6572-dd30-485f-8d6c-e2c2d96e8bb7-dns-svc\") pod \"dnsmasq-dns-59d5ff467f-6lqp8\" (UID: \"76fa6572-dd30-485f-8d6c-e2c2d96e8bb7\") " pod="openstack/dnsmasq-dns-59d5ff467f-6lqp8" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.678040 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sw67l\" (UniqueName: \"kubernetes.io/projected/76fa6572-dd30-485f-8d6c-e2c2d96e8bb7-kube-api-access-sw67l\") pod \"dnsmasq-dns-59d5ff467f-6lqp8\" (UID: \"76fa6572-dd30-485f-8d6c-e2c2d96e8bb7\") " pod="openstack/dnsmasq-dns-59d5ff467f-6lqp8" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.678111 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/76fa6572-dd30-485f-8d6c-e2c2d96e8bb7-ovsdbserver-nb\") pod \"dnsmasq-dns-59d5ff467f-6lqp8\" (UID: \"76fa6572-dd30-485f-8d6c-e2c2d96e8bb7\") " pod="openstack/dnsmasq-dns-59d5ff467f-6lqp8" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.678138 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/1c994b4f-e182-481a-a3ba-17dc9656c70c-config-data\") pod \"barbican-keystone-listener-79459755b6-xsvzh\" (UID: \"1c994b4f-e182-481a-a3ba-17dc9656c70c\") " pod="openstack/barbican-keystone-listener-79459755b6-xsvzh" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.678202 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/027dc32b-06dd-45bf-9aad-8e0c92b44a2b-config-data-custom\") pod \"barbican-worker-5f797948bc-dk5pr\" (UID: \"027dc32b-06dd-45bf-9aad-8e0c92b44a2b\") " pod="openstack/barbican-worker-5f797948bc-dk5pr" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.678228 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1c994b4f-e182-481a-a3ba-17dc9656c70c-config-data-custom\") pod \"barbican-keystone-listener-79459755b6-xsvzh\" (UID: \"1c994b4f-e182-481a-a3ba-17dc9656c70c\") " pod="openstack/barbican-keystone-listener-79459755b6-xsvzh" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.678289 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1485651d-d1ff-4ef4-88fe-0ab6dd041df4-config-data-custom\") pod \"barbican-api-7dccb6fcbd-htjnx\" (UID: \"1485651d-d1ff-4ef4-88fe-0ab6dd041df4\") " pod="openstack/barbican-api-7dccb6fcbd-htjnx" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.678342 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76fa6572-dd30-485f-8d6c-e2c2d96e8bb7-config\") pod \"dnsmasq-dns-59d5ff467f-6lqp8\" (UID: \"76fa6572-dd30-485f-8d6c-e2c2d96e8bb7\") " pod="openstack/dnsmasq-dns-59d5ff467f-6lqp8" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.678371 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/027dc32b-06dd-45bf-9aad-8e0c92b44a2b-config-data\") pod \"barbican-worker-5f797948bc-dk5pr\" (UID: \"027dc32b-06dd-45bf-9aad-8e0c92b44a2b\") " pod="openstack/barbican-worker-5f797948bc-dk5pr" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.678424 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/76fa6572-dd30-485f-8d6c-e2c2d96e8bb7-dns-swift-storage-0\") pod \"dnsmasq-dns-59d5ff467f-6lqp8\" (UID: \"76fa6572-dd30-485f-8d6c-e2c2d96e8bb7\") " pod="openstack/dnsmasq-dns-59d5ff467f-6lqp8" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.678457 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1485651d-d1ff-4ef4-88fe-0ab6dd041df4-logs\") pod \"barbican-api-7dccb6fcbd-htjnx\" (UID: \"1485651d-d1ff-4ef4-88fe-0ab6dd041df4\") " pod="openstack/barbican-api-7dccb6fcbd-htjnx" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.678839 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c994b4f-e182-481a-a3ba-17dc9656c70c-logs\") pod \"barbican-keystone-listener-79459755b6-xsvzh\" (UID: \"1c994b4f-e182-481a-a3ba-17dc9656c70c\") " pod="openstack/barbican-keystone-listener-79459755b6-xsvzh" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.680036 4772 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/027dc32b-06dd-45bf-9aad-8e0c92b44a2b-logs\") pod \"barbican-worker-5f797948bc-dk5pr\" (UID: \"027dc32b-06dd-45bf-9aad-8e0c92b44a2b\") " pod="openstack/barbican-worker-5f797948bc-dk5pr" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.691999 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c994b4f-e182-481a-a3ba-17dc9656c70c-config-data\") pod \"barbican-keystone-listener-79459755b6-xsvzh\" (UID: \"1c994b4f-e182-481a-a3ba-17dc9656c70c\") " pod="openstack/barbican-keystone-listener-79459755b6-xsvzh" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.692953 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c994b4f-e182-481a-a3ba-17dc9656c70c-combined-ca-bundle\") pod \"barbican-keystone-listener-79459755b6-xsvzh\" (UID: \"1c994b4f-e182-481a-a3ba-17dc9656c70c\") " pod="openstack/barbican-keystone-listener-79459755b6-xsvzh" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.695197 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/027dc32b-06dd-45bf-9aad-8e0c92b44a2b-config-data\") pod \"barbican-worker-5f797948bc-dk5pr\" (UID: \"027dc32b-06dd-45bf-9aad-8e0c92b44a2b\") " pod="openstack/barbican-worker-5f797948bc-dk5pr" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.697338 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/027dc32b-06dd-45bf-9aad-8e0c92b44a2b-combined-ca-bundle\") pod \"barbican-worker-5f797948bc-dk5pr\" (UID: \"027dc32b-06dd-45bf-9aad-8e0c92b44a2b\") " pod="openstack/barbican-worker-5f797948bc-dk5pr" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.698493 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/027dc32b-06dd-45bf-9aad-8e0c92b44a2b-config-data-custom\") pod \"barbican-worker-5f797948bc-dk5pr\" (UID: \"027dc32b-06dd-45bf-9aad-8e0c92b44a2b\") " pod="openstack/barbican-worker-5f797948bc-dk5pr" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.701136 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1c994b4f-e182-481a-a3ba-17dc9656c70c-config-data-custom\") pod \"barbican-keystone-listener-79459755b6-xsvzh\" (UID: \"1c994b4f-e182-481a-a3ba-17dc9656c70c\") " pod="openstack/barbican-keystone-listener-79459755b6-xsvzh" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.710664 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wf9sn\" (UniqueName: \"kubernetes.io/projected/027dc32b-06dd-45bf-9aad-8e0c92b44a2b-kube-api-access-wf9sn\") pod \"barbican-worker-5f797948bc-dk5pr\" (UID: \"027dc32b-06dd-45bf-9aad-8e0c92b44a2b\") " pod="openstack/barbican-worker-5f797948bc-dk5pr" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.713531 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2zlx\" (UniqueName: \"kubernetes.io/projected/1c994b4f-e182-481a-a3ba-17dc9656c70c-kube-api-access-q2zlx\") pod \"barbican-keystone-listener-79459755b6-xsvzh\" (UID: \"1c994b4f-e182-481a-a3ba-17dc9656c70c\") " pod="openstack/barbican-keystone-listener-79459755b6-xsvzh" Nov 22 10:59:58 crc 
kubenswrapper[4772]: I1122 10:59:58.779523 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sw67l\" (UniqueName: \"kubernetes.io/projected/76fa6572-dd30-485f-8d6c-e2c2d96e8bb7-kube-api-access-sw67l\") pod \"dnsmasq-dns-59d5ff467f-6lqp8\" (UID: \"76fa6572-dd30-485f-8d6c-e2c2d96e8bb7\") " pod="openstack/dnsmasq-dns-59d5ff467f-6lqp8" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.779570 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/76fa6572-dd30-485f-8d6c-e2c2d96e8bb7-ovsdbserver-nb\") pod \"dnsmasq-dns-59d5ff467f-6lqp8\" (UID: \"76fa6572-dd30-485f-8d6c-e2c2d96e8bb7\") " pod="openstack/dnsmasq-dns-59d5ff467f-6lqp8" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.779617 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1485651d-d1ff-4ef4-88fe-0ab6dd041df4-config-data-custom\") pod \"barbican-api-7dccb6fcbd-htjnx\" (UID: \"1485651d-d1ff-4ef4-88fe-0ab6dd041df4\") " pod="openstack/barbican-api-7dccb6fcbd-htjnx" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.779648 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76fa6572-dd30-485f-8d6c-e2c2d96e8bb7-config\") pod \"dnsmasq-dns-59d5ff467f-6lqp8\" (UID: \"76fa6572-dd30-485f-8d6c-e2c2d96e8bb7\") " pod="openstack/dnsmasq-dns-59d5ff467f-6lqp8" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.779694 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/76fa6572-dd30-485f-8d6c-e2c2d96e8bb7-dns-swift-storage-0\") pod \"dnsmasq-dns-59d5ff467f-6lqp8\" (UID: \"76fa6572-dd30-485f-8d6c-e2c2d96e8bb7\") " pod="openstack/dnsmasq-dns-59d5ff467f-6lqp8" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.779729 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1485651d-d1ff-4ef4-88fe-0ab6dd041df4-logs\") pod \"barbican-api-7dccb6fcbd-htjnx\" (UID: \"1485651d-d1ff-4ef4-88fe-0ab6dd041df4\") " pod="openstack/barbican-api-7dccb6fcbd-htjnx" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.779761 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1485651d-d1ff-4ef4-88fe-0ab6dd041df4-config-data\") pod \"barbican-api-7dccb6fcbd-htjnx\" (UID: \"1485651d-d1ff-4ef4-88fe-0ab6dd041df4\") " pod="openstack/barbican-api-7dccb6fcbd-htjnx" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.779792 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1485651d-d1ff-4ef4-88fe-0ab6dd041df4-combined-ca-bundle\") pod \"barbican-api-7dccb6fcbd-htjnx\" (UID: \"1485651d-d1ff-4ef4-88fe-0ab6dd041df4\") " pod="openstack/barbican-api-7dccb6fcbd-htjnx" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.779843 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qnjt\" (UniqueName: \"kubernetes.io/projected/1485651d-d1ff-4ef4-88fe-0ab6dd041df4-kube-api-access-9qnjt\") pod \"barbican-api-7dccb6fcbd-htjnx\" (UID: \"1485651d-d1ff-4ef4-88fe-0ab6dd041df4\") " pod="openstack/barbican-api-7dccb6fcbd-htjnx" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 
10:59:58.779863 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/76fa6572-dd30-485f-8d6c-e2c2d96e8bb7-ovsdbserver-sb\") pod \"dnsmasq-dns-59d5ff467f-6lqp8\" (UID: \"76fa6572-dd30-485f-8d6c-e2c2d96e8bb7\") " pod="openstack/dnsmasq-dns-59d5ff467f-6lqp8" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.779885 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76fa6572-dd30-485f-8d6c-e2c2d96e8bb7-dns-svc\") pod \"dnsmasq-dns-59d5ff467f-6lqp8\" (UID: \"76fa6572-dd30-485f-8d6c-e2c2d96e8bb7\") " pod="openstack/dnsmasq-dns-59d5ff467f-6lqp8" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.780712 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76fa6572-dd30-485f-8d6c-e2c2d96e8bb7-dns-svc\") pod \"dnsmasq-dns-59d5ff467f-6lqp8\" (UID: \"76fa6572-dd30-485f-8d6c-e2c2d96e8bb7\") " pod="openstack/dnsmasq-dns-59d5ff467f-6lqp8" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.781318 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1485651d-d1ff-4ef4-88fe-0ab6dd041df4-logs\") pod \"barbican-api-7dccb6fcbd-htjnx\" (UID: \"1485651d-d1ff-4ef4-88fe-0ab6dd041df4\") " pod="openstack/barbican-api-7dccb6fcbd-htjnx" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.781967 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76fa6572-dd30-485f-8d6c-e2c2d96e8bb7-config\") pod \"dnsmasq-dns-59d5ff467f-6lqp8\" (UID: \"76fa6572-dd30-485f-8d6c-e2c2d96e8bb7\") " pod="openstack/dnsmasq-dns-59d5ff467f-6lqp8" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.782941 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/76fa6572-dd30-485f-8d6c-e2c2d96e8bb7-ovsdbserver-sb\") pod \"dnsmasq-dns-59d5ff467f-6lqp8\" (UID: \"76fa6572-dd30-485f-8d6c-e2c2d96e8bb7\") " pod="openstack/dnsmasq-dns-59d5ff467f-6lqp8" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.783159 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/76fa6572-dd30-485f-8d6c-e2c2d96e8bb7-dns-swift-storage-0\") pod \"dnsmasq-dns-59d5ff467f-6lqp8\" (UID: \"76fa6572-dd30-485f-8d6c-e2c2d96e8bb7\") " pod="openstack/dnsmasq-dns-59d5ff467f-6lqp8" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.783493 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/76fa6572-dd30-485f-8d6c-e2c2d96e8bb7-ovsdbserver-nb\") pod \"dnsmasq-dns-59d5ff467f-6lqp8\" (UID: \"76fa6572-dd30-485f-8d6c-e2c2d96e8bb7\") " pod="openstack/dnsmasq-dns-59d5ff467f-6lqp8" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.788618 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1485651d-d1ff-4ef4-88fe-0ab6dd041df4-config-data\") pod \"barbican-api-7dccb6fcbd-htjnx\" (UID: \"1485651d-d1ff-4ef4-88fe-0ab6dd041df4\") " pod="openstack/barbican-api-7dccb6fcbd-htjnx" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.790665 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/1485651d-d1ff-4ef4-88fe-0ab6dd041df4-config-data-custom\") pod \"barbican-api-7dccb6fcbd-htjnx\" (UID: \"1485651d-d1ff-4ef4-88fe-0ab6dd041df4\") " pod="openstack/barbican-api-7dccb6fcbd-htjnx" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.794617 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.795629 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1485651d-d1ff-4ef4-88fe-0ab6dd041df4-combined-ca-bundle\") pod \"barbican-api-7dccb6fcbd-htjnx\" (UID: \"1485651d-d1ff-4ef4-88fe-0ab6dd041df4\") " pod="openstack/barbican-api-7dccb6fcbd-htjnx" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.806316 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-6mvdg" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.814999 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sw67l\" (UniqueName: \"kubernetes.io/projected/76fa6572-dd30-485f-8d6c-e2c2d96e8bb7-kube-api-access-sw67l\") pod \"dnsmasq-dns-59d5ff467f-6lqp8\" (UID: \"76fa6572-dd30-485f-8d6c-e2c2d96e8bb7\") " pod="openstack/dnsmasq-dns-59d5ff467f-6lqp8" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.815747 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qnjt\" (UniqueName: \"kubernetes.io/projected/1485651d-d1ff-4ef4-88fe-0ab6dd041df4-kube-api-access-9qnjt\") pod \"barbican-api-7dccb6fcbd-htjnx\" (UID: \"1485651d-d1ff-4ef4-88fe-0ab6dd041df4\") " pod="openstack/barbican-api-7dccb6fcbd-htjnx" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.821063 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5f797948bc-dk5pr" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.843548 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-79459755b6-xsvzh" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.885973 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/dfb3b51a-da06-4a18-bc47-225aa06fff04-db-sync-config-data\") pod \"dfb3b51a-da06-4a18-bc47-225aa06fff04\" (UID: \"dfb3b51a-da06-4a18-bc47-225aa06fff04\") " Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.886521 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dfb3b51a-da06-4a18-bc47-225aa06fff04-scripts\") pod \"dfb3b51a-da06-4a18-bc47-225aa06fff04\" (UID: \"dfb3b51a-da06-4a18-bc47-225aa06fff04\") " Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.887101 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfb3b51a-da06-4a18-bc47-225aa06fff04-combined-ca-bundle\") pod \"dfb3b51a-da06-4a18-bc47-225aa06fff04\" (UID: \"dfb3b51a-da06-4a18-bc47-225aa06fff04\") " Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.887154 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfb3b51a-da06-4a18-bc47-225aa06fff04-config-data\") pod \"dfb3b51a-da06-4a18-bc47-225aa06fff04\" (UID: \"dfb3b51a-da06-4a18-bc47-225aa06fff04\") " Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.887351 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rktqf\" (UniqueName: \"kubernetes.io/projected/dfb3b51a-da06-4a18-bc47-225aa06fff04-kube-api-access-rktqf\") pod \"dfb3b51a-da06-4a18-bc47-225aa06fff04\" (UID: \"dfb3b51a-da06-4a18-bc47-225aa06fff04\") " Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.887379 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/dfb3b51a-da06-4a18-bc47-225aa06fff04-etc-machine-id\") pod \"dfb3b51a-da06-4a18-bc47-225aa06fff04\" (UID: \"dfb3b51a-da06-4a18-bc47-225aa06fff04\") " Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.889830 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dfb3b51a-da06-4a18-bc47-225aa06fff04-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "dfb3b51a-da06-4a18-bc47-225aa06fff04" (UID: "dfb3b51a-da06-4a18-bc47-225aa06fff04"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.891610 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfb3b51a-da06-4a18-bc47-225aa06fff04-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "dfb3b51a-da06-4a18-bc47-225aa06fff04" (UID: "dfb3b51a-da06-4a18-bc47-225aa06fff04"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.896545 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfb3b51a-da06-4a18-bc47-225aa06fff04-kube-api-access-rktqf" (OuterVolumeSpecName: "kube-api-access-rktqf") pod "dfb3b51a-da06-4a18-bc47-225aa06fff04" (UID: "dfb3b51a-da06-4a18-bc47-225aa06fff04"). InnerVolumeSpecName "kube-api-access-rktqf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.904202 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfb3b51a-da06-4a18-bc47-225aa06fff04-scripts" (OuterVolumeSpecName: "scripts") pod "dfb3b51a-da06-4a18-bc47-225aa06fff04" (UID: "dfb3b51a-da06-4a18-bc47-225aa06fff04"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.904665 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59d5ff467f-6lqp8" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.947627 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7dccb6fcbd-htjnx" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.978199 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfb3b51a-da06-4a18-bc47-225aa06fff04-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dfb3b51a-da06-4a18-bc47-225aa06fff04" (UID: "dfb3b51a-da06-4a18-bc47-225aa06fff04"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.992020 4772 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/dfb3b51a-da06-4a18-bc47-225aa06fff04-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.992062 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dfb3b51a-da06-4a18-bc47-225aa06fff04-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.992072 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfb3b51a-da06-4a18-bc47-225aa06fff04-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.992082 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rktqf\" (UniqueName: \"kubernetes.io/projected/dfb3b51a-da06-4a18-bc47-225aa06fff04-kube-api-access-rktqf\") on node \"crc\" DevicePath \"\"" Nov 22 10:59:58 crc kubenswrapper[4772]: I1122 10:59:58.992092 4772 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/dfb3b51a-da06-4a18-bc47-225aa06fff04-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.017827 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfb3b51a-da06-4a18-bc47-225aa06fff04-config-data" (OuterVolumeSpecName: "config-data") pod "dfb3b51a-da06-4a18-bc47-225aa06fff04" (UID: "dfb3b51a-da06-4a18-bc47-225aa06fff04"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.094062 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfb3b51a-da06-4a18-bc47-225aa06fff04-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.155204 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-6mvdg" Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.155877 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-6mvdg" event={"ID":"dfb3b51a-da06-4a18-bc47-225aa06fff04","Type":"ContainerDied","Data":"1e39e5a74a67af0aa3470c7d3736b924d127e6de8b05e3f56afd40c0a34bd6ad"} Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.155928 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e39e5a74a67af0aa3470c7d3736b924d127e6de8b05e3f56afd40c0a34bd6ad" Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.498664 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 22 10:59:59 crc kubenswrapper[4772]: E1122 10:59:59.500794 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfb3b51a-da06-4a18-bc47-225aa06fff04" containerName="cinder-db-sync" Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.500809 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfb3b51a-da06-4a18-bc47-225aa06fff04" containerName="cinder-db-sync" Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.500958 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfb3b51a-da06-4a18-bc47-225aa06fff04" containerName="cinder-db-sync" Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.501846 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.501938 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.508673 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.509580 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-6rrs9" Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.509694 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.509838 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.515530 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59d5ff467f-6lqp8"] Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.559317 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-69c986f6d7-smrdw"] Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.561510 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-69c986f6d7-smrdw" Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.614805 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69c986f6d7-smrdw"] Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.650434 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0fabef51-0232-441a-ab7f-487c2ef79d04-dns-svc\") pod \"dnsmasq-dns-69c986f6d7-smrdw\" (UID: \"0fabef51-0232-441a-ab7f-487c2ef79d04\") " pod="openstack/dnsmasq-dns-69c986f6d7-smrdw" Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.654266 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9577ec77-2954-4ff8-8de2-d965cce60a04-scripts\") pod \"cinder-scheduler-0\" (UID: \"9577ec77-2954-4ff8-8de2-d965cce60a04\") " pod="openstack/cinder-scheduler-0" Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.654417 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9577ec77-2954-4ff8-8de2-d965cce60a04-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"9577ec77-2954-4ff8-8de2-d965cce60a04\") " pod="openstack/cinder-scheduler-0" Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.654464 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n62b4\" (UniqueName: \"kubernetes.io/projected/0fabef51-0232-441a-ab7f-487c2ef79d04-kube-api-access-n62b4\") pod \"dnsmasq-dns-69c986f6d7-smrdw\" (UID: \"0fabef51-0232-441a-ab7f-487c2ef79d04\") " pod="openstack/dnsmasq-dns-69c986f6d7-smrdw" Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.654536 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9577ec77-2954-4ff8-8de2-d965cce60a04-config-data\") pod \"cinder-scheduler-0\" (UID: \"9577ec77-2954-4ff8-8de2-d965cce60a04\") " pod="openstack/cinder-scheduler-0" Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.654570 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9577ec77-2954-4ff8-8de2-d965cce60a04-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"9577ec77-2954-4ff8-8de2-d965cce60a04\") " pod="openstack/cinder-scheduler-0" Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.654644 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0fabef51-0232-441a-ab7f-487c2ef79d04-ovsdbserver-sb\") pod \"dnsmasq-dns-69c986f6d7-smrdw\" (UID: \"0fabef51-0232-441a-ab7f-487c2ef79d04\") " pod="openstack/dnsmasq-dns-69c986f6d7-smrdw" Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.654704 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8t4sw\" (UniqueName: \"kubernetes.io/projected/9577ec77-2954-4ff8-8de2-d965cce60a04-kube-api-access-8t4sw\") pod \"cinder-scheduler-0\" (UID: \"9577ec77-2954-4ff8-8de2-d965cce60a04\") " pod="openstack/cinder-scheduler-0" Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.654764 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/0fabef51-0232-441a-ab7f-487c2ef79d04-config\") pod \"dnsmasq-dns-69c986f6d7-smrdw\" (UID: \"0fabef51-0232-441a-ab7f-487c2ef79d04\") " pod="openstack/dnsmasq-dns-69c986f6d7-smrdw" Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.654808 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0fabef51-0232-441a-ab7f-487c2ef79d04-ovsdbserver-nb\") pod \"dnsmasq-dns-69c986f6d7-smrdw\" (UID: \"0fabef51-0232-441a-ab7f-487c2ef79d04\") " pod="openstack/dnsmasq-dns-69c986f6d7-smrdw" Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.654886 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9577ec77-2954-4ff8-8de2-d965cce60a04-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"9577ec77-2954-4ff8-8de2-d965cce60a04\") " pod="openstack/cinder-scheduler-0" Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.654931 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0fabef51-0232-441a-ab7f-487c2ef79d04-dns-swift-storage-0\") pod \"dnsmasq-dns-69c986f6d7-smrdw\" (UID: \"0fabef51-0232-441a-ab7f-487c2ef79d04\") " pod="openstack/dnsmasq-dns-69c986f6d7-smrdw" Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.718334 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.721358 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.727489 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.733009 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.757601 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9577ec77-2954-4ff8-8de2-d965cce60a04-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"9577ec77-2954-4ff8-8de2-d965cce60a04\") " pod="openstack/cinder-scheduler-0" Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.757638 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n62b4\" (UniqueName: \"kubernetes.io/projected/0fabef51-0232-441a-ab7f-487c2ef79d04-kube-api-access-n62b4\") pod \"dnsmasq-dns-69c986f6d7-smrdw\" (UID: \"0fabef51-0232-441a-ab7f-487c2ef79d04\") " pod="openstack/dnsmasq-dns-69c986f6d7-smrdw" Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.757671 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9577ec77-2954-4ff8-8de2-d965cce60a04-config-data\") pod \"cinder-scheduler-0\" (UID: \"9577ec77-2954-4ff8-8de2-d965cce60a04\") " pod="openstack/cinder-scheduler-0" Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.757691 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9577ec77-2954-4ff8-8de2-d965cce60a04-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"9577ec77-2954-4ff8-8de2-d965cce60a04\") " 
pod="openstack/cinder-scheduler-0" Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.757707 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0fabef51-0232-441a-ab7f-487c2ef79d04-ovsdbserver-sb\") pod \"dnsmasq-dns-69c986f6d7-smrdw\" (UID: \"0fabef51-0232-441a-ab7f-487c2ef79d04\") " pod="openstack/dnsmasq-dns-69c986f6d7-smrdw" Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.757728 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8t4sw\" (UniqueName: \"kubernetes.io/projected/9577ec77-2954-4ff8-8de2-d965cce60a04-kube-api-access-8t4sw\") pod \"cinder-scheduler-0\" (UID: \"9577ec77-2954-4ff8-8de2-d965cce60a04\") " pod="openstack/cinder-scheduler-0" Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.757754 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fabef51-0232-441a-ab7f-487c2ef79d04-config\") pod \"dnsmasq-dns-69c986f6d7-smrdw\" (UID: \"0fabef51-0232-441a-ab7f-487c2ef79d04\") " pod="openstack/dnsmasq-dns-69c986f6d7-smrdw" Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.757776 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0fabef51-0232-441a-ab7f-487c2ef79d04-ovsdbserver-nb\") pod \"dnsmasq-dns-69c986f6d7-smrdw\" (UID: \"0fabef51-0232-441a-ab7f-487c2ef79d04\") " pod="openstack/dnsmasq-dns-69c986f6d7-smrdw" Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.757809 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9577ec77-2954-4ff8-8de2-d965cce60a04-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"9577ec77-2954-4ff8-8de2-d965cce60a04\") " pod="openstack/cinder-scheduler-0" Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.757832 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0fabef51-0232-441a-ab7f-487c2ef79d04-dns-swift-storage-0\") pod \"dnsmasq-dns-69c986f6d7-smrdw\" (UID: \"0fabef51-0232-441a-ab7f-487c2ef79d04\") " pod="openstack/dnsmasq-dns-69c986f6d7-smrdw" Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.757874 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0fabef51-0232-441a-ab7f-487c2ef79d04-dns-svc\") pod \"dnsmasq-dns-69c986f6d7-smrdw\" (UID: \"0fabef51-0232-441a-ab7f-487c2ef79d04\") " pod="openstack/dnsmasq-dns-69c986f6d7-smrdw" Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.757903 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9577ec77-2954-4ff8-8de2-d965cce60a04-scripts\") pod \"cinder-scheduler-0\" (UID: \"9577ec77-2954-4ff8-8de2-d965cce60a04\") " pod="openstack/cinder-scheduler-0" Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.759032 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9577ec77-2954-4ff8-8de2-d965cce60a04-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"9577ec77-2954-4ff8-8de2-d965cce60a04\") " pod="openstack/cinder-scheduler-0" Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.764135 4772 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fabef51-0232-441a-ab7f-487c2ef79d04-config\") pod \"dnsmasq-dns-69c986f6d7-smrdw\" (UID: \"0fabef51-0232-441a-ab7f-487c2ef79d04\") " pod="openstack/dnsmasq-dns-69c986f6d7-smrdw" Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.764640 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0fabef51-0232-441a-ab7f-487c2ef79d04-dns-swift-storage-0\") pod \"dnsmasq-dns-69c986f6d7-smrdw\" (UID: \"0fabef51-0232-441a-ab7f-487c2ef79d04\") " pod="openstack/dnsmasq-dns-69c986f6d7-smrdw" Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.765211 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0fabef51-0232-441a-ab7f-487c2ef79d04-dns-svc\") pod \"dnsmasq-dns-69c986f6d7-smrdw\" (UID: \"0fabef51-0232-441a-ab7f-487c2ef79d04\") " pod="openstack/dnsmasq-dns-69c986f6d7-smrdw" Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.765745 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0fabef51-0232-441a-ab7f-487c2ef79d04-ovsdbserver-nb\") pod \"dnsmasq-dns-69c986f6d7-smrdw\" (UID: \"0fabef51-0232-441a-ab7f-487c2ef79d04\") " pod="openstack/dnsmasq-dns-69c986f6d7-smrdw" Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.766330 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0fabef51-0232-441a-ab7f-487c2ef79d04-ovsdbserver-sb\") pod \"dnsmasq-dns-69c986f6d7-smrdw\" (UID: \"0fabef51-0232-441a-ab7f-487c2ef79d04\") " pod="openstack/dnsmasq-dns-69c986f6d7-smrdw" Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.785502 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9577ec77-2954-4ff8-8de2-d965cce60a04-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"9577ec77-2954-4ff8-8de2-d965cce60a04\") " pod="openstack/cinder-scheduler-0" Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.798990 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9577ec77-2954-4ff8-8de2-d965cce60a04-scripts\") pod \"cinder-scheduler-0\" (UID: \"9577ec77-2954-4ff8-8de2-d965cce60a04\") " pod="openstack/cinder-scheduler-0" Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.799022 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n62b4\" (UniqueName: \"kubernetes.io/projected/0fabef51-0232-441a-ab7f-487c2ef79d04-kube-api-access-n62b4\") pod \"dnsmasq-dns-69c986f6d7-smrdw\" (UID: \"0fabef51-0232-441a-ab7f-487c2ef79d04\") " pod="openstack/dnsmasq-dns-69c986f6d7-smrdw" Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.799818 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9577ec77-2954-4ff8-8de2-d965cce60a04-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"9577ec77-2954-4ff8-8de2-d965cce60a04\") " pod="openstack/cinder-scheduler-0" Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.800988 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8t4sw\" (UniqueName: \"kubernetes.io/projected/9577ec77-2954-4ff8-8de2-d965cce60a04-kube-api-access-8t4sw\") pod \"cinder-scheduler-0\" (UID: 
\"9577ec77-2954-4ff8-8de2-d965cce60a04\") " pod="openstack/cinder-scheduler-0" Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.801913 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9577ec77-2954-4ff8-8de2-d965cce60a04-config-data\") pod \"cinder-scheduler-0\" (UID: \"9577ec77-2954-4ff8-8de2-d965cce60a04\") " pod="openstack/cinder-scheduler-0" Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.849741 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.862692 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fxzq\" (UniqueName: \"kubernetes.io/projected/5ce6a4db-47b5-4582-9b60-27d1a1485ef1-kube-api-access-7fxzq\") pod \"cinder-api-0\" (UID: \"5ce6a4db-47b5-4582-9b60-27d1a1485ef1\") " pod="openstack/cinder-api-0" Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.862727 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ce6a4db-47b5-4582-9b60-27d1a1485ef1-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"5ce6a4db-47b5-4582-9b60-27d1a1485ef1\") " pod="openstack/cinder-api-0" Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.862783 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5ce6a4db-47b5-4582-9b60-27d1a1485ef1-config-data-custom\") pod \"cinder-api-0\" (UID: \"5ce6a4db-47b5-4582-9b60-27d1a1485ef1\") " pod="openstack/cinder-api-0" Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.862835 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ce6a4db-47b5-4582-9b60-27d1a1485ef1-config-data\") pod \"cinder-api-0\" (UID: \"5ce6a4db-47b5-4582-9b60-27d1a1485ef1\") " pod="openstack/cinder-api-0" Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.862868 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5ce6a4db-47b5-4582-9b60-27d1a1485ef1-etc-machine-id\") pod \"cinder-api-0\" (UID: \"5ce6a4db-47b5-4582-9b60-27d1a1485ef1\") " pod="openstack/cinder-api-0" Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.862893 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ce6a4db-47b5-4582-9b60-27d1a1485ef1-scripts\") pod \"cinder-api-0\" (UID: \"5ce6a4db-47b5-4582-9b60-27d1a1485ef1\") " pod="openstack/cinder-api-0" Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.862909 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ce6a4db-47b5-4582-9b60-27d1a1485ef1-logs\") pod \"cinder-api-0\" (UID: \"5ce6a4db-47b5-4582-9b60-27d1a1485ef1\") " pod="openstack/cinder-api-0" Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.899129 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-79459755b6-xsvzh"] Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.919880 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-69c986f6d7-smrdw" Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.931235 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59d5ff467f-6lqp8"] Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.964169 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ce6a4db-47b5-4582-9b60-27d1a1485ef1-scripts\") pod \"cinder-api-0\" (UID: \"5ce6a4db-47b5-4582-9b60-27d1a1485ef1\") " pod="openstack/cinder-api-0" Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.964218 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ce6a4db-47b5-4582-9b60-27d1a1485ef1-logs\") pod \"cinder-api-0\" (UID: \"5ce6a4db-47b5-4582-9b60-27d1a1485ef1\") " pod="openstack/cinder-api-0" Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.964277 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fxzq\" (UniqueName: \"kubernetes.io/projected/5ce6a4db-47b5-4582-9b60-27d1a1485ef1-kube-api-access-7fxzq\") pod \"cinder-api-0\" (UID: \"5ce6a4db-47b5-4582-9b60-27d1a1485ef1\") " pod="openstack/cinder-api-0" Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.964295 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ce6a4db-47b5-4582-9b60-27d1a1485ef1-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"5ce6a4db-47b5-4582-9b60-27d1a1485ef1\") " pod="openstack/cinder-api-0" Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.964352 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5ce6a4db-47b5-4582-9b60-27d1a1485ef1-config-data-custom\") pod \"cinder-api-0\" (UID: \"5ce6a4db-47b5-4582-9b60-27d1a1485ef1\") " pod="openstack/cinder-api-0" Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.964416 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ce6a4db-47b5-4582-9b60-27d1a1485ef1-config-data\") pod \"cinder-api-0\" (UID: \"5ce6a4db-47b5-4582-9b60-27d1a1485ef1\") " pod="openstack/cinder-api-0" Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.964455 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5ce6a4db-47b5-4582-9b60-27d1a1485ef1-etc-machine-id\") pod \"cinder-api-0\" (UID: \"5ce6a4db-47b5-4582-9b60-27d1a1485ef1\") " pod="openstack/cinder-api-0" Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.964565 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5ce6a4db-47b5-4582-9b60-27d1a1485ef1-etc-machine-id\") pod \"cinder-api-0\" (UID: \"5ce6a4db-47b5-4582-9b60-27d1a1485ef1\") " pod="openstack/cinder-api-0" Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.967774 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ce6a4db-47b5-4582-9b60-27d1a1485ef1-scripts\") pod \"cinder-api-0\" (UID: \"5ce6a4db-47b5-4582-9b60-27d1a1485ef1\") " pod="openstack/cinder-api-0" Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.968038 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/5ce6a4db-47b5-4582-9b60-27d1a1485ef1-logs\") pod \"cinder-api-0\" (UID: \"5ce6a4db-47b5-4582-9b60-27d1a1485ef1\") " pod="openstack/cinder-api-0" Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.974027 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ce6a4db-47b5-4582-9b60-27d1a1485ef1-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"5ce6a4db-47b5-4582-9b60-27d1a1485ef1\") " pod="openstack/cinder-api-0" Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.975309 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5ce6a4db-47b5-4582-9b60-27d1a1485ef1-config-data-custom\") pod \"cinder-api-0\" (UID: \"5ce6a4db-47b5-4582-9b60-27d1a1485ef1\") " pod="openstack/cinder-api-0" Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.984828 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ce6a4db-47b5-4582-9b60-27d1a1485ef1-config-data\") pod \"cinder-api-0\" (UID: \"5ce6a4db-47b5-4582-9b60-27d1a1485ef1\") " pod="openstack/cinder-api-0" Nov 22 10:59:59 crc kubenswrapper[4772]: I1122 10:59:59.997197 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fxzq\" (UniqueName: \"kubernetes.io/projected/5ce6a4db-47b5-4582-9b60-27d1a1485ef1-kube-api-access-7fxzq\") pod \"cinder-api-0\" (UID: \"5ce6a4db-47b5-4582-9b60-27d1a1485ef1\") " pod="openstack/cinder-api-0" Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.029817 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-9mzxx" Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.064485 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.098390 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5f797948bc-dk5pr"] Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.118783 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7dccb6fcbd-htjnx"] Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.148162 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396820-g7jnf"] Nov 22 11:00:00 crc kubenswrapper[4772]: E1122 11:00:00.148523 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb489d0e-dc04-4a25-8e89-ec9ede81a3cb" containerName="neutron-db-sync" Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.148535 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb489d0e-dc04-4a25-8e89-ec9ede81a3cb" containerName="neutron-db-sync" Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.148695 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb489d0e-dc04-4a25-8e89-ec9ede81a3cb" containerName="neutron-db-sync" Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.150764 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396820-g7jnf" Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.157667 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.157818 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.168221 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fb489d0e-dc04-4a25-8e89-ec9ede81a3cb-config\") pod \"fb489d0e-dc04-4a25-8e89-ec9ede81a3cb\" (UID: \"fb489d0e-dc04-4a25-8e89-ec9ede81a3cb\") " Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.168340 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-549qt\" (UniqueName: \"kubernetes.io/projected/fb489d0e-dc04-4a25-8e89-ec9ede81a3cb-kube-api-access-549qt\") pod \"fb489d0e-dc04-4a25-8e89-ec9ede81a3cb\" (UID: \"fb489d0e-dc04-4a25-8e89-ec9ede81a3cb\") " Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.168523 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb489d0e-dc04-4a25-8e89-ec9ede81a3cb-combined-ca-bundle\") pod \"fb489d0e-dc04-4a25-8e89-ec9ede81a3cb\" (UID: \"fb489d0e-dc04-4a25-8e89-ec9ede81a3cb\") " Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.184574 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb489d0e-dc04-4a25-8e89-ec9ede81a3cb-kube-api-access-549qt" (OuterVolumeSpecName: "kube-api-access-549qt") pod "fb489d0e-dc04-4a25-8e89-ec9ede81a3cb" (UID: "fb489d0e-dc04-4a25-8e89-ec9ede81a3cb"). InnerVolumeSpecName "kube-api-access-549qt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.185333 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5f797948bc-dk5pr" event={"ID":"027dc32b-06dd-45bf-9aad-8e0c92b44a2b","Type":"ContainerStarted","Data":"32687ed9dbee22e504bb16cb3da9dcafc367491193ac75c34bed15a7cae78a1e"} Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.195778 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-549qt\" (UniqueName: \"kubernetes.io/projected/fb489d0e-dc04-4a25-8e89-ec9ede81a3cb-kube-api-access-549qt\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.198473 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396820-g7jnf"] Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.256223 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb489d0e-dc04-4a25-8e89-ec9ede81a3cb-config" (OuterVolumeSpecName: "config") pod "fb489d0e-dc04-4a25-8e89-ec9ede81a3cb" (UID: "fb489d0e-dc04-4a25-8e89-ec9ede81a3cb"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.256719 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-79459755b6-xsvzh" event={"ID":"1c994b4f-e182-481a-a3ba-17dc9656c70c","Type":"ContainerStarted","Data":"daeca1907ee532d9e66ed188e4355f4840d194cba884c816d09e13c98ee51293"} Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.272959 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59d5ff467f-6lqp8" event={"ID":"76fa6572-dd30-485f-8d6c-e2c2d96e8bb7","Type":"ContainerStarted","Data":"134fdd6cb563fdcfd962a02d3dfff9a73069925c0a1557848ad85d494b5e000f"} Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.282040 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb489d0e-dc04-4a25-8e89-ec9ede81a3cb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fb489d0e-dc04-4a25-8e89-ec9ede81a3cb" (UID: "fb489d0e-dc04-4a25-8e89-ec9ede81a3cb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.284269 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-9mzxx" Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.287246 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-9mzxx" event={"ID":"fb489d0e-dc04-4a25-8e89-ec9ede81a3cb","Type":"ContainerDied","Data":"9ef28084ddd0cc402021285ea95a9abfc3c2e2f5fddc3cf21e5dba947e11dce2"} Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.287308 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ef28084ddd0cc402021285ea95a9abfc3c2e2f5fddc3cf21e5dba947e11dce2" Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.298069 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtzkj\" (UniqueName: \"kubernetes.io/projected/75820045-4178-4355-926b-6a7f0effd0fc-kube-api-access-gtzkj\") pod \"collect-profiles-29396820-g7jnf\" (UID: \"75820045-4178-4355-926b-6a7f0effd0fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396820-g7jnf" Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.298136 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/75820045-4178-4355-926b-6a7f0effd0fc-secret-volume\") pod \"collect-profiles-29396820-g7jnf\" (UID: \"75820045-4178-4355-926b-6a7f0effd0fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396820-g7jnf" Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.298161 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75820045-4178-4355-926b-6a7f0effd0fc-config-volume\") pod \"collect-profiles-29396820-g7jnf\" (UID: \"75820045-4178-4355-926b-6a7f0effd0fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396820-g7jnf" Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.298216 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb489d0e-dc04-4a25-8e89-ec9ede81a3cb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.298236 4772 reconciler_common.go:293] 
"Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/fb489d0e-dc04-4a25-8e89-ec9ede81a3cb-config\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.402222 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/75820045-4178-4355-926b-6a7f0effd0fc-secret-volume\") pod \"collect-profiles-29396820-g7jnf\" (UID: \"75820045-4178-4355-926b-6a7f0effd0fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396820-g7jnf" Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.402608 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75820045-4178-4355-926b-6a7f0effd0fc-config-volume\") pod \"collect-profiles-29396820-g7jnf\" (UID: \"75820045-4178-4355-926b-6a7f0effd0fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396820-g7jnf" Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.408438 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtzkj\" (UniqueName: \"kubernetes.io/projected/75820045-4178-4355-926b-6a7f0effd0fc-kube-api-access-gtzkj\") pod \"collect-profiles-29396820-g7jnf\" (UID: \"75820045-4178-4355-926b-6a7f0effd0fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396820-g7jnf" Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.421841 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75820045-4178-4355-926b-6a7f0effd0fc-config-volume\") pod \"collect-profiles-29396820-g7jnf\" (UID: \"75820045-4178-4355-926b-6a7f0effd0fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396820-g7jnf" Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.447851 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/75820045-4178-4355-926b-6a7f0effd0fc-secret-volume\") pod \"collect-profiles-29396820-g7jnf\" (UID: \"75820045-4178-4355-926b-6a7f0effd0fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396820-g7jnf" Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.453430 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtzkj\" (UniqueName: \"kubernetes.io/projected/75820045-4178-4355-926b-6a7f0effd0fc-kube-api-access-gtzkj\") pod \"collect-profiles-29396820-g7jnf\" (UID: \"75820045-4178-4355-926b-6a7f0effd0fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396820-g7jnf" Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.490658 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-69c986f6d7-smrdw"] Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.573197 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-klkn5"] Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.662703 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-klkn5"] Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.663868 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-klkn5" Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.681944 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396820-g7jnf" Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.705081 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-65d7b679bd-q7h6t"] Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.706881 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-65d7b679bd-q7h6t" Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.709969 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.711411 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.711689 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.711869 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-fldv2" Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.746291 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-65d7b679bd-q7h6t"] Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.816729 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-69c986f6d7-smrdw"] Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.823235 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/545348ad-f752-4463-96a2-353ba4ac1b57-dns-svc\") pod \"dnsmasq-dns-5784cf869f-klkn5\" (UID: \"545348ad-f752-4463-96a2-353ba4ac1b57\") " pod="openstack/dnsmasq-dns-5784cf869f-klkn5" Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.823296 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmfwb\" (UniqueName: \"kubernetes.io/projected/545348ad-f752-4463-96a2-353ba4ac1b57-kube-api-access-lmfwb\") pod \"dnsmasq-dns-5784cf869f-klkn5\" (UID: \"545348ad-f752-4463-96a2-353ba4ac1b57\") " pod="openstack/dnsmasq-dns-5784cf869f-klkn5" Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.823416 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5qk4\" (UniqueName: \"kubernetes.io/projected/d3ece6c8-49c6-468f-b579-6bd9b2bea8bd-kube-api-access-s5qk4\") pod \"neutron-65d7b679bd-q7h6t\" (UID: \"d3ece6c8-49c6-468f-b579-6bd9b2bea8bd\") " pod="openstack/neutron-65d7b679bd-q7h6t" Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.823468 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/545348ad-f752-4463-96a2-353ba4ac1b57-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-klkn5\" (UID: \"545348ad-f752-4463-96a2-353ba4ac1b57\") " pod="openstack/dnsmasq-dns-5784cf869f-klkn5" Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.823508 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/545348ad-f752-4463-96a2-353ba4ac1b57-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-klkn5\" (UID: \"545348ad-f752-4463-96a2-353ba4ac1b57\") " pod="openstack/dnsmasq-dns-5784cf869f-klkn5" Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.823548 4772 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/545348ad-f752-4463-96a2-353ba4ac1b57-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-klkn5\" (UID: \"545348ad-f752-4463-96a2-353ba4ac1b57\") " pod="openstack/dnsmasq-dns-5784cf869f-klkn5" Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.823573 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/545348ad-f752-4463-96a2-353ba4ac1b57-config\") pod \"dnsmasq-dns-5784cf869f-klkn5\" (UID: \"545348ad-f752-4463-96a2-353ba4ac1b57\") " pod="openstack/dnsmasq-dns-5784cf869f-klkn5" Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.823597 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d3ece6c8-49c6-468f-b579-6bd9b2bea8bd-httpd-config\") pod \"neutron-65d7b679bd-q7h6t\" (UID: \"d3ece6c8-49c6-468f-b579-6bd9b2bea8bd\") " pod="openstack/neutron-65d7b679bd-q7h6t" Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.823636 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3ece6c8-49c6-468f-b579-6bd9b2bea8bd-combined-ca-bundle\") pod \"neutron-65d7b679bd-q7h6t\" (UID: \"d3ece6c8-49c6-468f-b579-6bd9b2bea8bd\") " pod="openstack/neutron-65d7b679bd-q7h6t" Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.823664 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3ece6c8-49c6-468f-b579-6bd9b2bea8bd-ovndb-tls-certs\") pod \"neutron-65d7b679bd-q7h6t\" (UID: \"d3ece6c8-49c6-468f-b579-6bd9b2bea8bd\") " pod="openstack/neutron-65d7b679bd-q7h6t" Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.823712 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d3ece6c8-49c6-468f-b579-6bd9b2bea8bd-config\") pod \"neutron-65d7b679bd-q7h6t\" (UID: \"d3ece6c8-49c6-468f-b579-6bd9b2bea8bd\") " pod="openstack/neutron-65d7b679bd-q7h6t" Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.842626 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 22 11:00:00 crc kubenswrapper[4772]: W1122 11:00:00.847957 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0fabef51_0232_441a_ab7f_487c2ef79d04.slice/crio-0e84cbea362a2508102edf728baa61a550dec04be952d6d5e93593820ec5c4ac WatchSource:0}: Error finding container 0e84cbea362a2508102edf728baa61a550dec04be952d6d5e93593820ec5c4ac: Status 404 returned error can't find the container with id 0e84cbea362a2508102edf728baa61a550dec04be952d6d5e93593820ec5c4ac Nov 22 11:00:00 crc kubenswrapper[4772]: W1122 11:00:00.856245 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9577ec77_2954_4ff8_8de2_d965cce60a04.slice/crio-73c391969f98a95e8c3fc079f910500fd6491674eb5890b8af2fa78449fd9b2d WatchSource:0}: Error finding container 73c391969f98a95e8c3fc079f910500fd6491674eb5890b8af2fa78449fd9b2d: Status 404 returned error can't find the container with id 73c391969f98a95e8c3fc079f910500fd6491674eb5890b8af2fa78449fd9b2d Nov 22 
11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.925016 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/545348ad-f752-4463-96a2-353ba4ac1b57-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-klkn5\" (UID: \"545348ad-f752-4463-96a2-353ba4ac1b57\") " pod="openstack/dnsmasq-dns-5784cf869f-klkn5" Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.925149 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/545348ad-f752-4463-96a2-353ba4ac1b57-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-klkn5\" (UID: \"545348ad-f752-4463-96a2-353ba4ac1b57\") " pod="openstack/dnsmasq-dns-5784cf869f-klkn5" Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.925209 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/545348ad-f752-4463-96a2-353ba4ac1b57-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-klkn5\" (UID: \"545348ad-f752-4463-96a2-353ba4ac1b57\") " pod="openstack/dnsmasq-dns-5784cf869f-klkn5" Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.925237 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/545348ad-f752-4463-96a2-353ba4ac1b57-config\") pod \"dnsmasq-dns-5784cf869f-klkn5\" (UID: \"545348ad-f752-4463-96a2-353ba4ac1b57\") " pod="openstack/dnsmasq-dns-5784cf869f-klkn5" Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.925285 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d3ece6c8-49c6-468f-b579-6bd9b2bea8bd-httpd-config\") pod \"neutron-65d7b679bd-q7h6t\" (UID: \"d3ece6c8-49c6-468f-b579-6bd9b2bea8bd\") " pod="openstack/neutron-65d7b679bd-q7h6t" Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.925326 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3ece6c8-49c6-468f-b579-6bd9b2bea8bd-combined-ca-bundle\") pod \"neutron-65d7b679bd-q7h6t\" (UID: \"d3ece6c8-49c6-468f-b579-6bd9b2bea8bd\") " pod="openstack/neutron-65d7b679bd-q7h6t" Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.925354 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3ece6c8-49c6-468f-b579-6bd9b2bea8bd-ovndb-tls-certs\") pod \"neutron-65d7b679bd-q7h6t\" (UID: \"d3ece6c8-49c6-468f-b579-6bd9b2bea8bd\") " pod="openstack/neutron-65d7b679bd-q7h6t" Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.925402 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d3ece6c8-49c6-468f-b579-6bd9b2bea8bd-config\") pod \"neutron-65d7b679bd-q7h6t\" (UID: \"d3ece6c8-49c6-468f-b579-6bd9b2bea8bd\") " pod="openstack/neutron-65d7b679bd-q7h6t" Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.925433 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/545348ad-f752-4463-96a2-353ba4ac1b57-dns-svc\") pod \"dnsmasq-dns-5784cf869f-klkn5\" (UID: \"545348ad-f752-4463-96a2-353ba4ac1b57\") " pod="openstack/dnsmasq-dns-5784cf869f-klkn5" Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.925468 4772 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-lmfwb\" (UniqueName: \"kubernetes.io/projected/545348ad-f752-4463-96a2-353ba4ac1b57-kube-api-access-lmfwb\") pod \"dnsmasq-dns-5784cf869f-klkn5\" (UID: \"545348ad-f752-4463-96a2-353ba4ac1b57\") " pod="openstack/dnsmasq-dns-5784cf869f-klkn5" Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.925593 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5qk4\" (UniqueName: \"kubernetes.io/projected/d3ece6c8-49c6-468f-b579-6bd9b2bea8bd-kube-api-access-s5qk4\") pod \"neutron-65d7b679bd-q7h6t\" (UID: \"d3ece6c8-49c6-468f-b579-6bd9b2bea8bd\") " pod="openstack/neutron-65d7b679bd-q7h6t" Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.925923 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/545348ad-f752-4463-96a2-353ba4ac1b57-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-klkn5\" (UID: \"545348ad-f752-4463-96a2-353ba4ac1b57\") " pod="openstack/dnsmasq-dns-5784cf869f-klkn5" Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.927398 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/545348ad-f752-4463-96a2-353ba4ac1b57-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-klkn5\" (UID: \"545348ad-f752-4463-96a2-353ba4ac1b57\") " pod="openstack/dnsmasq-dns-5784cf869f-klkn5" Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.927908 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/545348ad-f752-4463-96a2-353ba4ac1b57-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-klkn5\" (UID: \"545348ad-f752-4463-96a2-353ba4ac1b57\") " pod="openstack/dnsmasq-dns-5784cf869f-klkn5" Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.928708 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/545348ad-f752-4463-96a2-353ba4ac1b57-config\") pod \"dnsmasq-dns-5784cf869f-klkn5\" (UID: \"545348ad-f752-4463-96a2-353ba4ac1b57\") " pod="openstack/dnsmasq-dns-5784cf869f-klkn5" Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.930772 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/545348ad-f752-4463-96a2-353ba4ac1b57-dns-svc\") pod \"dnsmasq-dns-5784cf869f-klkn5\" (UID: \"545348ad-f752-4463-96a2-353ba4ac1b57\") " pod="openstack/dnsmasq-dns-5784cf869f-klkn5" Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.936158 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d3ece6c8-49c6-468f-b579-6bd9b2bea8bd-httpd-config\") pod \"neutron-65d7b679bd-q7h6t\" (UID: \"d3ece6c8-49c6-468f-b579-6bd9b2bea8bd\") " pod="openstack/neutron-65d7b679bd-q7h6t" Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.936382 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3ece6c8-49c6-468f-b579-6bd9b2bea8bd-combined-ca-bundle\") pod \"neutron-65d7b679bd-q7h6t\" (UID: \"d3ece6c8-49c6-468f-b579-6bd9b2bea8bd\") " pod="openstack/neutron-65d7b679bd-q7h6t" Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.938548 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3ece6c8-49c6-468f-b579-6bd9b2bea8bd-ovndb-tls-certs\") pod 
\"neutron-65d7b679bd-q7h6t\" (UID: \"d3ece6c8-49c6-468f-b579-6bd9b2bea8bd\") " pod="openstack/neutron-65d7b679bd-q7h6t" Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.947551 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d3ece6c8-49c6-468f-b579-6bd9b2bea8bd-config\") pod \"neutron-65d7b679bd-q7h6t\" (UID: \"d3ece6c8-49c6-468f-b579-6bd9b2bea8bd\") " pod="openstack/neutron-65d7b679bd-q7h6t" Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.954898 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5qk4\" (UniqueName: \"kubernetes.io/projected/d3ece6c8-49c6-468f-b579-6bd9b2bea8bd-kube-api-access-s5qk4\") pod \"neutron-65d7b679bd-q7h6t\" (UID: \"d3ece6c8-49c6-468f-b579-6bd9b2bea8bd\") " pod="openstack/neutron-65d7b679bd-q7h6t" Nov 22 11:00:00 crc kubenswrapper[4772]: I1122 11:00:00.960319 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmfwb\" (UniqueName: \"kubernetes.io/projected/545348ad-f752-4463-96a2-353ba4ac1b57-kube-api-access-lmfwb\") pod \"dnsmasq-dns-5784cf869f-klkn5\" (UID: \"545348ad-f752-4463-96a2-353ba4ac1b57\") " pod="openstack/dnsmasq-dns-5784cf869f-klkn5" Nov 22 11:00:01 crc kubenswrapper[4772]: I1122 11:00:01.005884 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-klkn5" Nov 22 11:00:01 crc kubenswrapper[4772]: I1122 11:00:01.017667 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 22 11:00:01 crc kubenswrapper[4772]: I1122 11:00:01.053240 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-65d7b679bd-q7h6t" Nov 22 11:00:01 crc kubenswrapper[4772]: I1122 11:00:01.328321 4772 generic.go:334] "Generic (PLEG): container finished" podID="76fa6572-dd30-485f-8d6c-e2c2d96e8bb7" containerID="05b35a0a006322f98459d21fb518c2d22eb72472d4c83223f862906986adcc94" exitCode=0 Nov 22 11:00:01 crc kubenswrapper[4772]: I1122 11:00:01.328430 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59d5ff467f-6lqp8" event={"ID":"76fa6572-dd30-485f-8d6c-e2c2d96e8bb7","Type":"ContainerDied","Data":"05b35a0a006322f98459d21fb518c2d22eb72472d4c83223f862906986adcc94"} Nov 22 11:00:01 crc kubenswrapper[4772]: I1122 11:00:01.331361 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396820-g7jnf"] Nov 22 11:00:01 crc kubenswrapper[4772]: I1122 11:00:01.333717 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7dccb6fcbd-htjnx" event={"ID":"1485651d-d1ff-4ef4-88fe-0ab6dd041df4","Type":"ContainerStarted","Data":"37a9d8a310b935b1d510763c6e8587e3aeae6450c618faa11c5eb2043b5a0031"} Nov 22 11:00:01 crc kubenswrapper[4772]: I1122 11:00:01.333861 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7dccb6fcbd-htjnx" event={"ID":"1485651d-d1ff-4ef4-88fe-0ab6dd041df4","Type":"ContainerStarted","Data":"5fc52c3e974bad0e0ae96bb7df63a71487c1b5453308b07ef82a29d625efa8d5"} Nov 22 11:00:01 crc kubenswrapper[4772]: I1122 11:00:01.333936 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7dccb6fcbd-htjnx" event={"ID":"1485651d-d1ff-4ef4-88fe-0ab6dd041df4","Type":"ContainerStarted","Data":"72b6bb1796f3b280093f3b9133455e17c3e742ef94cbedc7c831b881f29373bc"} Nov 22 11:00:01 crc kubenswrapper[4772]: I1122 11:00:01.334269 4772 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7dccb6fcbd-htjnx" Nov 22 11:00:01 crc kubenswrapper[4772]: I1122 11:00:01.335115 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7dccb6fcbd-htjnx" Nov 22 11:00:01 crc kubenswrapper[4772]: I1122 11:00:01.336777 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9577ec77-2954-4ff8-8de2-d965cce60a04","Type":"ContainerStarted","Data":"73c391969f98a95e8c3fc079f910500fd6491674eb5890b8af2fa78449fd9b2d"} Nov 22 11:00:01 crc kubenswrapper[4772]: I1122 11:00:01.340986 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5ce6a4db-47b5-4582-9b60-27d1a1485ef1","Type":"ContainerStarted","Data":"236c2c30a9b7b8fc6bfe4de9a0b8d30fe7541fcb821f08d6228f325d6be6ac1b"} Nov 22 11:00:01 crc kubenswrapper[4772]: I1122 11:00:01.363144 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69c986f6d7-smrdw" event={"ID":"0fabef51-0232-441a-ab7f-487c2ef79d04","Type":"ContainerStarted","Data":"0e84cbea362a2508102edf728baa61a550dec04be952d6d5e93593820ec5c4ac"} Nov 22 11:00:01 crc kubenswrapper[4772]: I1122 11:00:01.413304 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7dccb6fcbd-htjnx" podStartSLOduration=3.413286374 podStartE2EDuration="3.413286374s" podCreationTimestamp="2025-11-22 10:59:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 11:00:01.396193129 +0000 UTC m=+1321.635637633" watchObservedRunningTime="2025-11-22 11:00:01.413286374 +0000 UTC m=+1321.652730858" Nov 22 11:00:01 crc kubenswrapper[4772]: I1122 11:00:01.635255 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-klkn5"] Nov 22 11:00:02 crc kubenswrapper[4772]: I1122 11:00:02.118919 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-65d7b679bd-q7h6t"] Nov 22 11:00:02 crc kubenswrapper[4772]: I1122 11:00:02.127817 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59d5ff467f-6lqp8" Nov 22 11:00:02 crc kubenswrapper[4772]: W1122 11:00:02.137641 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3ece6c8_49c6_468f_b579_6bd9b2bea8bd.slice/crio-3f3da895623f9b3cef43c42cb364109c6134a2579869fc1cf0d63581b7931cf2 WatchSource:0}: Error finding container 3f3da895623f9b3cef43c42cb364109c6134a2579869fc1cf0d63581b7931cf2: Status 404 returned error can't find the container with id 3f3da895623f9b3cef43c42cb364109c6134a2579869fc1cf0d63581b7931cf2 Nov 22 11:00:02 crc kubenswrapper[4772]: I1122 11:00:02.285704 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/76fa6572-dd30-485f-8d6c-e2c2d96e8bb7-ovsdbserver-sb\") pod \"76fa6572-dd30-485f-8d6c-e2c2d96e8bb7\" (UID: \"76fa6572-dd30-485f-8d6c-e2c2d96e8bb7\") " Nov 22 11:00:02 crc kubenswrapper[4772]: I1122 11:00:02.286596 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76fa6572-dd30-485f-8d6c-e2c2d96e8bb7-dns-svc\") pod \"76fa6572-dd30-485f-8d6c-e2c2d96e8bb7\" (UID: \"76fa6572-dd30-485f-8d6c-e2c2d96e8bb7\") " Nov 22 11:00:02 crc kubenswrapper[4772]: I1122 11:00:02.286726 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sw67l\" (UniqueName: \"kubernetes.io/projected/76fa6572-dd30-485f-8d6c-e2c2d96e8bb7-kube-api-access-sw67l\") pod \"76fa6572-dd30-485f-8d6c-e2c2d96e8bb7\" (UID: \"76fa6572-dd30-485f-8d6c-e2c2d96e8bb7\") " Nov 22 11:00:02 crc kubenswrapper[4772]: I1122 11:00:02.286820 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76fa6572-dd30-485f-8d6c-e2c2d96e8bb7-config\") pod \"76fa6572-dd30-485f-8d6c-e2c2d96e8bb7\" (UID: \"76fa6572-dd30-485f-8d6c-e2c2d96e8bb7\") " Nov 22 11:00:02 crc kubenswrapper[4772]: I1122 11:00:02.286939 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/76fa6572-dd30-485f-8d6c-e2c2d96e8bb7-dns-swift-storage-0\") pod \"76fa6572-dd30-485f-8d6c-e2c2d96e8bb7\" (UID: \"76fa6572-dd30-485f-8d6c-e2c2d96e8bb7\") " Nov 22 11:00:02 crc kubenswrapper[4772]: I1122 11:00:02.287087 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/76fa6572-dd30-485f-8d6c-e2c2d96e8bb7-ovsdbserver-nb\") pod \"76fa6572-dd30-485f-8d6c-e2c2d96e8bb7\" (UID: \"76fa6572-dd30-485f-8d6c-e2c2d96e8bb7\") " Nov 22 11:00:02 crc kubenswrapper[4772]: I1122 11:00:02.352491 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76fa6572-dd30-485f-8d6c-e2c2d96e8bb7-kube-api-access-sw67l" (OuterVolumeSpecName: "kube-api-access-sw67l") pod "76fa6572-dd30-485f-8d6c-e2c2d96e8bb7" (UID: "76fa6572-dd30-485f-8d6c-e2c2d96e8bb7"). InnerVolumeSpecName "kube-api-access-sw67l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:00:02 crc kubenswrapper[4772]: I1122 11:00:02.403342 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sw67l\" (UniqueName: \"kubernetes.io/projected/76fa6572-dd30-485f-8d6c-e2c2d96e8bb7-kube-api-access-sw67l\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:02 crc kubenswrapper[4772]: I1122 11:00:02.417311 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-65d7b679bd-q7h6t" event={"ID":"d3ece6c8-49c6-468f-b579-6bd9b2bea8bd","Type":"ContainerStarted","Data":"3f3da895623f9b3cef43c42cb364109c6134a2579869fc1cf0d63581b7931cf2"} Nov 22 11:00:02 crc kubenswrapper[4772]: I1122 11:00:02.418662 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76fa6572-dd30-485f-8d6c-e2c2d96e8bb7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "76fa6572-dd30-485f-8d6c-e2c2d96e8bb7" (UID: "76fa6572-dd30-485f-8d6c-e2c2d96e8bb7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 11:00:02 crc kubenswrapper[4772]: I1122 11:00:02.454542 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59d5ff467f-6lqp8" Nov 22 11:00:02 crc kubenswrapper[4772]: I1122 11:00:02.454848 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59d5ff467f-6lqp8" event={"ID":"76fa6572-dd30-485f-8d6c-e2c2d96e8bb7","Type":"ContainerDied","Data":"134fdd6cb563fdcfd962a02d3dfff9a73069925c0a1557848ad85d494b5e000f"} Nov 22 11:00:02 crc kubenswrapper[4772]: I1122 11:00:02.454995 4772 scope.go:117] "RemoveContainer" containerID="05b35a0a006322f98459d21fb518c2d22eb72472d4c83223f862906986adcc94" Nov 22 11:00:02 crc kubenswrapper[4772]: I1122 11:00:02.476694 4772 generic.go:334] "Generic (PLEG): container finished" podID="0fabef51-0232-441a-ab7f-487c2ef79d04" containerID="ef3c10e1127dfea68f724f7ab16d6c9b2def5093498b81e03427bcb2bcdc1905" exitCode=0 Nov 22 11:00:02 crc kubenswrapper[4772]: I1122 11:00:02.476772 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69c986f6d7-smrdw" event={"ID":"0fabef51-0232-441a-ab7f-487c2ef79d04","Type":"ContainerDied","Data":"ef3c10e1127dfea68f724f7ab16d6c9b2def5093498b81e03427bcb2bcdc1905"} Nov 22 11:00:02 crc kubenswrapper[4772]: I1122 11:00:02.516734 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76fa6572-dd30-485f-8d6c-e2c2d96e8bb7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "76fa6572-dd30-485f-8d6c-e2c2d96e8bb7" (UID: "76fa6572-dd30-485f-8d6c-e2c2d96e8bb7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 11:00:02 crc kubenswrapper[4772]: I1122 11:00:02.516833 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/76fa6572-dd30-485f-8d6c-e2c2d96e8bb7-ovsdbserver-nb\") pod \"76fa6572-dd30-485f-8d6c-e2c2d96e8bb7\" (UID: \"76fa6572-dd30-485f-8d6c-e2c2d96e8bb7\") " Nov 22 11:00:02 crc kubenswrapper[4772]: I1122 11:00:02.517147 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76fa6572-dd30-485f-8d6c-e2c2d96e8bb7-config" (OuterVolumeSpecName: "config") pod "76fa6572-dd30-485f-8d6c-e2c2d96e8bb7" (UID: "76fa6572-dd30-485f-8d6c-e2c2d96e8bb7"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 11:00:02 crc kubenswrapper[4772]: I1122 11:00:02.517488 4772 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76fa6572-dd30-485f-8d6c-e2c2d96e8bb7-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:02 crc kubenswrapper[4772]: I1122 11:00:02.517499 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76fa6572-dd30-485f-8d6c-e2c2d96e8bb7-config\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:02 crc kubenswrapper[4772]: W1122 11:00:02.517572 4772 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/76fa6572-dd30-485f-8d6c-e2c2d96e8bb7/volumes/kubernetes.io~configmap/ovsdbserver-nb Nov 22 11:00:02 crc kubenswrapper[4772]: I1122 11:00:02.517583 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76fa6572-dd30-485f-8d6c-e2c2d96e8bb7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "76fa6572-dd30-485f-8d6c-e2c2d96e8bb7" (UID: "76fa6572-dd30-485f-8d6c-e2c2d96e8bb7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 11:00:02 crc kubenswrapper[4772]: I1122 11:00:02.519786 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76fa6572-dd30-485f-8d6c-e2c2d96e8bb7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "76fa6572-dd30-485f-8d6c-e2c2d96e8bb7" (UID: "76fa6572-dd30-485f-8d6c-e2c2d96e8bb7"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 11:00:02 crc kubenswrapper[4772]: I1122 11:00:02.532307 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-klkn5" event={"ID":"545348ad-f752-4463-96a2-353ba4ac1b57","Type":"ContainerStarted","Data":"0781cee4db5da21096ab2056d1f300ac0635d04141648c3059e28efc3d82f3cf"} Nov 22 11:00:02 crc kubenswrapper[4772]: I1122 11:00:02.538697 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76fa6572-dd30-485f-8d6c-e2c2d96e8bb7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "76fa6572-dd30-485f-8d6c-e2c2d96e8bb7" (UID: "76fa6572-dd30-485f-8d6c-e2c2d96e8bb7"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 11:00:02 crc kubenswrapper[4772]: I1122 11:00:02.558293 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396820-g7jnf" event={"ID":"75820045-4178-4355-926b-6a7f0effd0fc","Type":"ContainerStarted","Data":"176b6886e9065cbcfbf1f7bb309324d66d3dd67e9eea3046a4ce98351804a87c"} Nov 22 11:00:02 crc kubenswrapper[4772]: I1122 11:00:02.558337 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396820-g7jnf" event={"ID":"75820045-4178-4355-926b-6a7f0effd0fc","Type":"ContainerStarted","Data":"5497eb7d992020e3cd82e3dc8b6d13201255aae14cf2c6757c9e13d4b7b763e0"} Nov 22 11:00:02 crc kubenswrapper[4772]: I1122 11:00:02.622139 4772 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/76fa6572-dd30-485f-8d6c-e2c2d96e8bb7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:02 crc kubenswrapper[4772]: I1122 11:00:02.622590 4772 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/76fa6572-dd30-485f-8d6c-e2c2d96e8bb7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:02 crc kubenswrapper[4772]: I1122 11:00:02.622973 4772 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/76fa6572-dd30-485f-8d6c-e2c2d96e8bb7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:02 crc kubenswrapper[4772]: I1122 11:00:02.936130 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59d5ff467f-6lqp8"] Nov 22 11:00:02 crc kubenswrapper[4772]: I1122 11:00:02.956214 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-59d5ff467f-6lqp8"] Nov 22 11:00:03 crc kubenswrapper[4772]: I1122 11:00:03.008315 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-69c986f6d7-smrdw" Nov 22 11:00:03 crc kubenswrapper[4772]: I1122 11:00:03.137858 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n62b4\" (UniqueName: \"kubernetes.io/projected/0fabef51-0232-441a-ab7f-487c2ef79d04-kube-api-access-n62b4\") pod \"0fabef51-0232-441a-ab7f-487c2ef79d04\" (UID: \"0fabef51-0232-441a-ab7f-487c2ef79d04\") " Nov 22 11:00:03 crc kubenswrapper[4772]: I1122 11:00:03.138005 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0fabef51-0232-441a-ab7f-487c2ef79d04-dns-svc\") pod \"0fabef51-0232-441a-ab7f-487c2ef79d04\" (UID: \"0fabef51-0232-441a-ab7f-487c2ef79d04\") " Nov 22 11:00:03 crc kubenswrapper[4772]: I1122 11:00:03.138131 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0fabef51-0232-441a-ab7f-487c2ef79d04-ovsdbserver-sb\") pod \"0fabef51-0232-441a-ab7f-487c2ef79d04\" (UID: \"0fabef51-0232-441a-ab7f-487c2ef79d04\") " Nov 22 11:00:03 crc kubenswrapper[4772]: I1122 11:00:03.138168 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0fabef51-0232-441a-ab7f-487c2ef79d04-ovsdbserver-nb\") pod \"0fabef51-0232-441a-ab7f-487c2ef79d04\" (UID: \"0fabef51-0232-441a-ab7f-487c2ef79d04\") " Nov 22 11:00:03 crc kubenswrapper[4772]: I1122 11:00:03.138212 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0fabef51-0232-441a-ab7f-487c2ef79d04-dns-swift-storage-0\") pod \"0fabef51-0232-441a-ab7f-487c2ef79d04\" (UID: \"0fabef51-0232-441a-ab7f-487c2ef79d04\") " Nov 22 11:00:03 crc kubenswrapper[4772]: I1122 11:00:03.138255 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fabef51-0232-441a-ab7f-487c2ef79d04-config\") pod \"0fabef51-0232-441a-ab7f-487c2ef79d04\" (UID: \"0fabef51-0232-441a-ab7f-487c2ef79d04\") " Nov 22 11:00:03 crc kubenswrapper[4772]: I1122 11:00:03.143426 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fabef51-0232-441a-ab7f-487c2ef79d04-kube-api-access-n62b4" (OuterVolumeSpecName: "kube-api-access-n62b4") pod "0fabef51-0232-441a-ab7f-487c2ef79d04" (UID: "0fabef51-0232-441a-ab7f-487c2ef79d04"). InnerVolumeSpecName "kube-api-access-n62b4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:00:03 crc kubenswrapper[4772]: I1122 11:00:03.176876 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fabef51-0232-441a-ab7f-487c2ef79d04-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0fabef51-0232-441a-ab7f-487c2ef79d04" (UID: "0fabef51-0232-441a-ab7f-487c2ef79d04"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 11:00:03 crc kubenswrapper[4772]: I1122 11:00:03.195611 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fabef51-0232-441a-ab7f-487c2ef79d04-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0fabef51-0232-441a-ab7f-487c2ef79d04" (UID: "0fabef51-0232-441a-ab7f-487c2ef79d04"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 11:00:03 crc kubenswrapper[4772]: I1122 11:00:03.203327 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fabef51-0232-441a-ab7f-487c2ef79d04-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0fabef51-0232-441a-ab7f-487c2ef79d04" (UID: "0fabef51-0232-441a-ab7f-487c2ef79d04"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 11:00:03 crc kubenswrapper[4772]: I1122 11:00:03.204813 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fabef51-0232-441a-ab7f-487c2ef79d04-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0fabef51-0232-441a-ab7f-487c2ef79d04" (UID: "0fabef51-0232-441a-ab7f-487c2ef79d04"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 11:00:03 crc kubenswrapper[4772]: I1122 11:00:03.225860 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fabef51-0232-441a-ab7f-487c2ef79d04-config" (OuterVolumeSpecName: "config") pod "0fabef51-0232-441a-ab7f-487c2ef79d04" (UID: "0fabef51-0232-441a-ab7f-487c2ef79d04"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 11:00:03 crc kubenswrapper[4772]: I1122 11:00:03.247323 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n62b4\" (UniqueName: \"kubernetes.io/projected/0fabef51-0232-441a-ab7f-487c2ef79d04-kube-api-access-n62b4\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:03 crc kubenswrapper[4772]: I1122 11:00:03.247361 4772 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0fabef51-0232-441a-ab7f-487c2ef79d04-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:03 crc kubenswrapper[4772]: I1122 11:00:03.247374 4772 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0fabef51-0232-441a-ab7f-487c2ef79d04-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:03 crc kubenswrapper[4772]: I1122 11:00:03.247385 4772 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0fabef51-0232-441a-ab7f-487c2ef79d04-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:03 crc kubenswrapper[4772]: I1122 11:00:03.247397 4772 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0fabef51-0232-441a-ab7f-487c2ef79d04-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:03 crc kubenswrapper[4772]: I1122 11:00:03.247409 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fabef51-0232-441a-ab7f-487c2ef79d04-config\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:03 crc kubenswrapper[4772]: I1122 11:00:03.434167 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76fa6572-dd30-485f-8d6c-e2c2d96e8bb7" path="/var/lib/kubelet/pods/76fa6572-dd30-485f-8d6c-e2c2d96e8bb7/volumes" Nov 22 11:00:03 crc kubenswrapper[4772]: I1122 11:00:03.588138 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9577ec77-2954-4ff8-8de2-d965cce60a04","Type":"ContainerStarted","Data":"e389ebe4a07c1f1b25e9d6a0324b338156a0b6b9e44488c6f4da277efc2302ab"} Nov 22 11:00:03 crc kubenswrapper[4772]: 
I1122 11:00:03.593079 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 22 11:00:03 crc kubenswrapper[4772]: I1122 11:00:03.609317 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5ce6a4db-47b5-4582-9b60-27d1a1485ef1","Type":"ContainerStarted","Data":"2b76e19e2f3b99c9895b830a92b319207955f6657e63efc6d0a8f8178cb7f96c"} Nov 22 11:00:03 crc kubenswrapper[4772]: I1122 11:00:03.618872 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69c986f6d7-smrdw" event={"ID":"0fabef51-0232-441a-ab7f-487c2ef79d04","Type":"ContainerDied","Data":"0e84cbea362a2508102edf728baa61a550dec04be952d6d5e93593820ec5c4ac"} Nov 22 11:00:03 crc kubenswrapper[4772]: I1122 11:00:03.618926 4772 scope.go:117] "RemoveContainer" containerID="ef3c10e1127dfea68f724f7ab16d6c9b2def5093498b81e03427bcb2bcdc1905" Nov 22 11:00:03 crc kubenswrapper[4772]: I1122 11:00:03.619028 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69c986f6d7-smrdw" Nov 22 11:00:03 crc kubenswrapper[4772]: I1122 11:00:03.634036 4772 generic.go:334] "Generic (PLEG): container finished" podID="545348ad-f752-4463-96a2-353ba4ac1b57" containerID="83ff37bc991a4bb6d08b0c04e5b797996448a71a94f218c0477da6634e261586" exitCode=0 Nov 22 11:00:03 crc kubenswrapper[4772]: I1122 11:00:03.634165 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-klkn5" event={"ID":"545348ad-f752-4463-96a2-353ba4ac1b57","Type":"ContainerDied","Data":"83ff37bc991a4bb6d08b0c04e5b797996448a71a94f218c0477da6634e261586"} Nov 22 11:00:03 crc kubenswrapper[4772]: I1122 11:00:03.641327 4772 generic.go:334] "Generic (PLEG): container finished" podID="75820045-4178-4355-926b-6a7f0effd0fc" containerID="176b6886e9065cbcfbf1f7bb309324d66d3dd67e9eea3046a4ce98351804a87c" exitCode=0 Nov 22 11:00:03 crc kubenswrapper[4772]: I1122 11:00:03.641532 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396820-g7jnf" event={"ID":"75820045-4178-4355-926b-6a7f0effd0fc","Type":"ContainerDied","Data":"176b6886e9065cbcfbf1f7bb309324d66d3dd67e9eea3046a4ce98351804a87c"} Nov 22 11:00:03 crc kubenswrapper[4772]: I1122 11:00:03.701638 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-65d7b679bd-q7h6t" event={"ID":"d3ece6c8-49c6-468f-b579-6bd9b2bea8bd","Type":"ContainerStarted","Data":"6b97a32753a5e9c5508fb3ec3c55c7b86b34c500f781ecd1d052a023e0fad340"} Nov 22 11:00:03 crc kubenswrapper[4772]: I1122 11:00:03.701683 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-65d7b679bd-q7h6t" event={"ID":"d3ece6c8-49c6-468f-b579-6bd9b2bea8bd","Type":"ContainerStarted","Data":"150d3713d1208fe373a8e9e7ada8273a9fe8b7d1043c77880f0343a304b3c857"} Nov 22 11:00:03 crc kubenswrapper[4772]: I1122 11:00:03.702703 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-65d7b679bd-q7h6t" Nov 22 11:00:03 crc kubenswrapper[4772]: I1122 11:00:03.737346 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-69c986f6d7-smrdw"] Nov 22 11:00:03 crc kubenswrapper[4772]: I1122 11:00:03.772104 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-69c986f6d7-smrdw"] Nov 22 11:00:03 crc kubenswrapper[4772]: I1122 11:00:03.786227 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-65d7b679bd-q7h6t" 
podStartSLOduration=3.786208909 podStartE2EDuration="3.786208909s" podCreationTimestamp="2025-11-22 11:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 11:00:03.748749959 +0000 UTC m=+1323.988194453" watchObservedRunningTime="2025-11-22 11:00:03.786208909 +0000 UTC m=+1324.025653403" Nov 22 11:00:04 crc kubenswrapper[4772]: I1122 11:00:04.729236 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5ce6a4db-47b5-4582-9b60-27d1a1485ef1","Type":"ContainerStarted","Data":"889d53724ff7f55e8e5f3e618539c0dfa8f6a94e280ee49127fdd351fa9a4511"} Nov 22 11:00:04 crc kubenswrapper[4772]: I1122 11:00:04.729734 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="5ce6a4db-47b5-4582-9b60-27d1a1485ef1" containerName="cinder-api-log" containerID="cri-o://2b76e19e2f3b99c9895b830a92b319207955f6657e63efc6d0a8f8178cb7f96c" gracePeriod=30 Nov 22 11:00:04 crc kubenswrapper[4772]: I1122 11:00:04.730099 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 22 11:00:04 crc kubenswrapper[4772]: I1122 11:00:04.730319 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="5ce6a4db-47b5-4582-9b60-27d1a1485ef1" containerName="cinder-api" containerID="cri-o://889d53724ff7f55e8e5f3e618539c0dfa8f6a94e280ee49127fdd351fa9a4511" gracePeriod=30 Nov 22 11:00:04 crc kubenswrapper[4772]: I1122 11:00:04.766263 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.76624568 podStartE2EDuration="5.76624568s" podCreationTimestamp="2025-11-22 10:59:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 11:00:04.762112678 +0000 UTC m=+1325.001557172" watchObservedRunningTime="2025-11-22 11:00:04.76624568 +0000 UTC m=+1325.005690174" Nov 22 11:00:04 crc kubenswrapper[4772]: I1122 11:00:04.954702 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-cfdd58ff7-mgd8m" Nov 22 11:00:05 crc kubenswrapper[4772]: I1122 11:00:05.113138 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396820-g7jnf" Nov 22 11:00:05 crc kubenswrapper[4772]: I1122 11:00:05.201661 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75820045-4178-4355-926b-6a7f0effd0fc-config-volume\") pod \"75820045-4178-4355-926b-6a7f0effd0fc\" (UID: \"75820045-4178-4355-926b-6a7f0effd0fc\") " Nov 22 11:00:05 crc kubenswrapper[4772]: I1122 11:00:05.201910 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtzkj\" (UniqueName: \"kubernetes.io/projected/75820045-4178-4355-926b-6a7f0effd0fc-kube-api-access-gtzkj\") pod \"75820045-4178-4355-926b-6a7f0effd0fc\" (UID: \"75820045-4178-4355-926b-6a7f0effd0fc\") " Nov 22 11:00:05 crc kubenswrapper[4772]: I1122 11:00:05.201944 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/75820045-4178-4355-926b-6a7f0effd0fc-secret-volume\") pod \"75820045-4178-4355-926b-6a7f0effd0fc\" (UID: \"75820045-4178-4355-926b-6a7f0effd0fc\") " Nov 22 11:00:05 crc kubenswrapper[4772]: I1122 11:00:05.202927 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75820045-4178-4355-926b-6a7f0effd0fc-config-volume" (OuterVolumeSpecName: "config-volume") pod "75820045-4178-4355-926b-6a7f0effd0fc" (UID: "75820045-4178-4355-926b-6a7f0effd0fc"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 11:00:05 crc kubenswrapper[4772]: I1122 11:00:05.211647 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75820045-4178-4355-926b-6a7f0effd0fc-kube-api-access-gtzkj" (OuterVolumeSpecName: "kube-api-access-gtzkj") pod "75820045-4178-4355-926b-6a7f0effd0fc" (UID: "75820045-4178-4355-926b-6a7f0effd0fc"). InnerVolumeSpecName "kube-api-access-gtzkj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:00:05 crc kubenswrapper[4772]: I1122 11:00:05.212939 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75820045-4178-4355-926b-6a7f0effd0fc-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "75820045-4178-4355-926b-6a7f0effd0fc" (UID: "75820045-4178-4355-926b-6a7f0effd0fc"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:00:05 crc kubenswrapper[4772]: I1122 11:00:05.304598 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtzkj\" (UniqueName: \"kubernetes.io/projected/75820045-4178-4355-926b-6a7f0effd0fc-kube-api-access-gtzkj\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:05 crc kubenswrapper[4772]: I1122 11:00:05.304642 4772 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/75820045-4178-4355-926b-6a7f0effd0fc-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:05 crc kubenswrapper[4772]: I1122 11:00:05.304656 4772 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75820045-4178-4355-926b-6a7f0effd0fc-config-volume\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:05 crc kubenswrapper[4772]: I1122 11:00:05.374850 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 22 11:00:05 crc kubenswrapper[4772]: E1122 11:00:05.375606 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75820045-4178-4355-926b-6a7f0effd0fc" containerName="collect-profiles" Nov 22 11:00:05 crc kubenswrapper[4772]: I1122 11:00:05.375625 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="75820045-4178-4355-926b-6a7f0effd0fc" containerName="collect-profiles" Nov 22 11:00:05 crc kubenswrapper[4772]: E1122 11:00:05.375665 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fabef51-0232-441a-ab7f-487c2ef79d04" containerName="init" Nov 22 11:00:05 crc kubenswrapper[4772]: I1122 11:00:05.375673 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fabef51-0232-441a-ab7f-487c2ef79d04" containerName="init" Nov 22 11:00:05 crc kubenswrapper[4772]: E1122 11:00:05.375687 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76fa6572-dd30-485f-8d6c-e2c2d96e8bb7" containerName="init" Nov 22 11:00:05 crc kubenswrapper[4772]: I1122 11:00:05.375694 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="76fa6572-dd30-485f-8d6c-e2c2d96e8bb7" containerName="init" Nov 22 11:00:05 crc kubenswrapper[4772]: I1122 11:00:05.375940 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="76fa6572-dd30-485f-8d6c-e2c2d96e8bb7" containerName="init" Nov 22 11:00:05 crc kubenswrapper[4772]: I1122 11:00:05.375960 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fabef51-0232-441a-ab7f-487c2ef79d04" containerName="init" Nov 22 11:00:05 crc kubenswrapper[4772]: I1122 11:00:05.375973 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="75820045-4178-4355-926b-6a7f0effd0fc" containerName="collect-profiles" Nov 22 11:00:05 crc kubenswrapper[4772]: I1122 11:00:05.376736 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 22 11:00:05 crc kubenswrapper[4772]: I1122 11:00:05.380530 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Nov 22 11:00:05 crc kubenswrapper[4772]: I1122 11:00:05.380739 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-mbgss" Nov 22 11:00:05 crc kubenswrapper[4772]: I1122 11:00:05.380913 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Nov 22 11:00:05 crc kubenswrapper[4772]: I1122 11:00:05.396376 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 22 11:00:05 crc kubenswrapper[4772]: I1122 11:00:05.462570 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fabef51-0232-441a-ab7f-487c2ef79d04" path="/var/lib/kubelet/pods/0fabef51-0232-441a-ab7f-487c2ef79d04/volumes" Nov 22 11:00:05 crc kubenswrapper[4772]: I1122 11:00:05.512179 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5lvr\" (UniqueName: \"kubernetes.io/projected/a4e681ba-088a-41b1-9b89-8bac928038e5-kube-api-access-b5lvr\") pod \"openstackclient\" (UID: \"a4e681ba-088a-41b1-9b89-8bac928038e5\") " pod="openstack/openstackclient" Nov 22 11:00:05 crc kubenswrapper[4772]: I1122 11:00:05.512248 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4e681ba-088a-41b1-9b89-8bac928038e5-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a4e681ba-088a-41b1-9b89-8bac928038e5\") " pod="openstack/openstackclient" Nov 22 11:00:05 crc kubenswrapper[4772]: I1122 11:00:05.512279 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a4e681ba-088a-41b1-9b89-8bac928038e5-openstack-config-secret\") pod \"openstackclient\" (UID: \"a4e681ba-088a-41b1-9b89-8bac928038e5\") " pod="openstack/openstackclient" Nov 22 11:00:05 crc kubenswrapper[4772]: I1122 11:00:05.512448 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a4e681ba-088a-41b1-9b89-8bac928038e5-openstack-config\") pod \"openstackclient\" (UID: \"a4e681ba-088a-41b1-9b89-8bac928038e5\") " pod="openstack/openstackclient" Nov 22 11:00:05 crc kubenswrapper[4772]: I1122 11:00:05.614450 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5lvr\" (UniqueName: \"kubernetes.io/projected/a4e681ba-088a-41b1-9b89-8bac928038e5-kube-api-access-b5lvr\") pod \"openstackclient\" (UID: \"a4e681ba-088a-41b1-9b89-8bac928038e5\") " pod="openstack/openstackclient" Nov 22 11:00:05 crc kubenswrapper[4772]: I1122 11:00:05.614511 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4e681ba-088a-41b1-9b89-8bac928038e5-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a4e681ba-088a-41b1-9b89-8bac928038e5\") " pod="openstack/openstackclient" Nov 22 11:00:05 crc kubenswrapper[4772]: I1122 11:00:05.614531 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: 
\"kubernetes.io/secret/a4e681ba-088a-41b1-9b89-8bac928038e5-openstack-config-secret\") pod \"openstackclient\" (UID: \"a4e681ba-088a-41b1-9b89-8bac928038e5\") " pod="openstack/openstackclient" Nov 22 11:00:05 crc kubenswrapper[4772]: I1122 11:00:05.614622 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a4e681ba-088a-41b1-9b89-8bac928038e5-openstack-config\") pod \"openstackclient\" (UID: \"a4e681ba-088a-41b1-9b89-8bac928038e5\") " pod="openstack/openstackclient" Nov 22 11:00:05 crc kubenswrapper[4772]: I1122 11:00:05.615608 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a4e681ba-088a-41b1-9b89-8bac928038e5-openstack-config\") pod \"openstackclient\" (UID: \"a4e681ba-088a-41b1-9b89-8bac928038e5\") " pod="openstack/openstackclient" Nov 22 11:00:05 crc kubenswrapper[4772]: I1122 11:00:05.619437 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4e681ba-088a-41b1-9b89-8bac928038e5-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a4e681ba-088a-41b1-9b89-8bac928038e5\") " pod="openstack/openstackclient" Nov 22 11:00:05 crc kubenswrapper[4772]: I1122 11:00:05.638280 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5lvr\" (UniqueName: \"kubernetes.io/projected/a4e681ba-088a-41b1-9b89-8bac928038e5-kube-api-access-b5lvr\") pod \"openstackclient\" (UID: \"a4e681ba-088a-41b1-9b89-8bac928038e5\") " pod="openstack/openstackclient" Nov 22 11:00:05 crc kubenswrapper[4772]: I1122 11:00:05.638827 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a4e681ba-088a-41b1-9b89-8bac928038e5-openstack-config-secret\") pod \"openstackclient\" (UID: \"a4e681ba-088a-41b1-9b89-8bac928038e5\") " pod="openstack/openstackclient" Nov 22 11:00:05 crc kubenswrapper[4772]: I1122 11:00:05.743120 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 22 11:00:05 crc kubenswrapper[4772]: I1122 11:00:05.762328 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 22 11:00:05 crc kubenswrapper[4772]: I1122 11:00:05.775787 4772 generic.go:334] "Generic (PLEG): container finished" podID="5ce6a4db-47b5-4582-9b60-27d1a1485ef1" containerID="889d53724ff7f55e8e5f3e618539c0dfa8f6a94e280ee49127fdd351fa9a4511" exitCode=0 Nov 22 11:00:05 crc kubenswrapper[4772]: I1122 11:00:05.775821 4772 generic.go:334] "Generic (PLEG): container finished" podID="5ce6a4db-47b5-4582-9b60-27d1a1485ef1" containerID="2b76e19e2f3b99c9895b830a92b319207955f6657e63efc6d0a8f8178cb7f96c" exitCode=143 Nov 22 11:00:05 crc kubenswrapper[4772]: I1122 11:00:05.775894 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 22 11:00:05 crc kubenswrapper[4772]: I1122 11:00:05.775907 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5ce6a4db-47b5-4582-9b60-27d1a1485ef1","Type":"ContainerDied","Data":"889d53724ff7f55e8e5f3e618539c0dfa8f6a94e280ee49127fdd351fa9a4511"} Nov 22 11:00:05 crc kubenswrapper[4772]: I1122 11:00:05.775976 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5ce6a4db-47b5-4582-9b60-27d1a1485ef1","Type":"ContainerDied","Data":"2b76e19e2f3b99c9895b830a92b319207955f6657e63efc6d0a8f8178cb7f96c"} Nov 22 11:00:05 crc kubenswrapper[4772]: I1122 11:00:05.775989 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5ce6a4db-47b5-4582-9b60-27d1a1485ef1","Type":"ContainerDied","Data":"236c2c30a9b7b8fc6bfe4de9a0b8d30fe7541fcb821f08d6228f325d6be6ac1b"} Nov 22 11:00:05 crc kubenswrapper[4772]: I1122 11:00:05.776002 4772 scope.go:117] "RemoveContainer" containerID="889d53724ff7f55e8e5f3e618539c0dfa8f6a94e280ee49127fdd351fa9a4511" Nov 22 11:00:05 crc kubenswrapper[4772]: I1122 11:00:05.802997 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-klkn5" event={"ID":"545348ad-f752-4463-96a2-353ba4ac1b57","Type":"ContainerStarted","Data":"537f96f394f4070c28465870acd25ae88cbf9da3ae1de9169af161878a5d031f"} Nov 22 11:00:05 crc kubenswrapper[4772]: I1122 11:00:05.803650 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5784cf869f-klkn5" Nov 22 11:00:05 crc kubenswrapper[4772]: I1122 11:00:05.821803 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396820-g7jnf" Nov 22 11:00:05 crc kubenswrapper[4772]: I1122 11:00:05.822467 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396820-g7jnf" event={"ID":"75820045-4178-4355-926b-6a7f0effd0fc","Type":"ContainerDied","Data":"5497eb7d992020e3cd82e3dc8b6d13201255aae14cf2c6757c9e13d4b7b763e0"} Nov 22 11:00:05 crc kubenswrapper[4772]: I1122 11:00:05.822572 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5497eb7d992020e3cd82e3dc8b6d13201255aae14cf2c6757c9e13d4b7b763e0" Nov 22 11:00:05 crc kubenswrapper[4772]: I1122 11:00:05.823579 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5784cf869f-klkn5" podStartSLOduration=5.82356115 podStartE2EDuration="5.82356115s" podCreationTimestamp="2025-11-22 11:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 11:00:05.819517149 +0000 UTC m=+1326.058961673" watchObservedRunningTime="2025-11-22 11:00:05.82356115 +0000 UTC m=+1326.063005644" Nov 22 11:00:05 crc kubenswrapper[4772]: I1122 11:00:05.838453 4772 scope.go:117] "RemoveContainer" containerID="2b76e19e2f3b99c9895b830a92b319207955f6657e63efc6d0a8f8178cb7f96c" Nov 22 11:00:05 crc kubenswrapper[4772]: I1122 11:00:05.926040 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7fxzq\" (UniqueName: \"kubernetes.io/projected/5ce6a4db-47b5-4582-9b60-27d1a1485ef1-kube-api-access-7fxzq\") pod \"5ce6a4db-47b5-4582-9b60-27d1a1485ef1\" (UID: \"5ce6a4db-47b5-4582-9b60-27d1a1485ef1\") " Nov 22 11:00:05 crc kubenswrapper[4772]: I1122 
11:00:05.926197 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ce6a4db-47b5-4582-9b60-27d1a1485ef1-logs\") pod \"5ce6a4db-47b5-4582-9b60-27d1a1485ef1\" (UID: \"5ce6a4db-47b5-4582-9b60-27d1a1485ef1\") " Nov 22 11:00:05 crc kubenswrapper[4772]: I1122 11:00:05.926306 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ce6a4db-47b5-4582-9b60-27d1a1485ef1-config-data\") pod \"5ce6a4db-47b5-4582-9b60-27d1a1485ef1\" (UID: \"5ce6a4db-47b5-4582-9b60-27d1a1485ef1\") " Nov 22 11:00:05 crc kubenswrapper[4772]: I1122 11:00:05.926374 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5ce6a4db-47b5-4582-9b60-27d1a1485ef1-etc-machine-id\") pod \"5ce6a4db-47b5-4582-9b60-27d1a1485ef1\" (UID: \"5ce6a4db-47b5-4582-9b60-27d1a1485ef1\") " Nov 22 11:00:05 crc kubenswrapper[4772]: I1122 11:00:05.926464 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ce6a4db-47b5-4582-9b60-27d1a1485ef1-combined-ca-bundle\") pod \"5ce6a4db-47b5-4582-9b60-27d1a1485ef1\" (UID: \"5ce6a4db-47b5-4582-9b60-27d1a1485ef1\") " Nov 22 11:00:05 crc kubenswrapper[4772]: I1122 11:00:05.926650 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5ce6a4db-47b5-4582-9b60-27d1a1485ef1-config-data-custom\") pod \"5ce6a4db-47b5-4582-9b60-27d1a1485ef1\" (UID: \"5ce6a4db-47b5-4582-9b60-27d1a1485ef1\") " Nov 22 11:00:05 crc kubenswrapper[4772]: I1122 11:00:05.926722 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ce6a4db-47b5-4582-9b60-27d1a1485ef1-scripts\") pod \"5ce6a4db-47b5-4582-9b60-27d1a1485ef1\" (UID: \"5ce6a4db-47b5-4582-9b60-27d1a1485ef1\") " Nov 22 11:00:05 crc kubenswrapper[4772]: I1122 11:00:05.928541 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ce6a4db-47b5-4582-9b60-27d1a1485ef1-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "5ce6a4db-47b5-4582-9b60-27d1a1485ef1" (UID: "5ce6a4db-47b5-4582-9b60-27d1a1485ef1"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 11:00:05 crc kubenswrapper[4772]: I1122 11:00:05.934688 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ce6a4db-47b5-4582-9b60-27d1a1485ef1-logs" (OuterVolumeSpecName: "logs") pod "5ce6a4db-47b5-4582-9b60-27d1a1485ef1" (UID: "5ce6a4db-47b5-4582-9b60-27d1a1485ef1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:00:05 crc kubenswrapper[4772]: I1122 11:00:05.945941 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ce6a4db-47b5-4582-9b60-27d1a1485ef1-scripts" (OuterVolumeSpecName: "scripts") pod "5ce6a4db-47b5-4582-9b60-27d1a1485ef1" (UID: "5ce6a4db-47b5-4582-9b60-27d1a1485ef1"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:00:05 crc kubenswrapper[4772]: I1122 11:00:05.959432 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ce6a4db-47b5-4582-9b60-27d1a1485ef1-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5ce6a4db-47b5-4582-9b60-27d1a1485ef1" (UID: "5ce6a4db-47b5-4582-9b60-27d1a1485ef1"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:00:05 crc kubenswrapper[4772]: I1122 11:00:05.963389 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ce6a4db-47b5-4582-9b60-27d1a1485ef1-kube-api-access-7fxzq" (OuterVolumeSpecName: "kube-api-access-7fxzq") pod "5ce6a4db-47b5-4582-9b60-27d1a1485ef1" (UID: "5ce6a4db-47b5-4582-9b60-27d1a1485ef1"). InnerVolumeSpecName "kube-api-access-7fxzq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:00:05 crc kubenswrapper[4772]: I1122 11:00:05.996340 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ce6a4db-47b5-4582-9b60-27d1a1485ef1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5ce6a4db-47b5-4582-9b60-27d1a1485ef1" (UID: "5ce6a4db-47b5-4582-9b60-27d1a1485ef1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:00:05 crc kubenswrapper[4772]: I1122 11:00:05.998095 4772 scope.go:117] "RemoveContainer" containerID="889d53724ff7f55e8e5f3e618539c0dfa8f6a94e280ee49127fdd351fa9a4511" Nov 22 11:00:05 crc kubenswrapper[4772]: E1122 11:00:05.999448 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"889d53724ff7f55e8e5f3e618539c0dfa8f6a94e280ee49127fdd351fa9a4511\": container with ID starting with 889d53724ff7f55e8e5f3e618539c0dfa8f6a94e280ee49127fdd351fa9a4511 not found: ID does not exist" containerID="889d53724ff7f55e8e5f3e618539c0dfa8f6a94e280ee49127fdd351fa9a4511" Nov 22 11:00:05 crc kubenswrapper[4772]: I1122 11:00:05.999541 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"889d53724ff7f55e8e5f3e618539c0dfa8f6a94e280ee49127fdd351fa9a4511"} err="failed to get container status \"889d53724ff7f55e8e5f3e618539c0dfa8f6a94e280ee49127fdd351fa9a4511\": rpc error: code = NotFound desc = could not find container \"889d53724ff7f55e8e5f3e618539c0dfa8f6a94e280ee49127fdd351fa9a4511\": container with ID starting with 889d53724ff7f55e8e5f3e618539c0dfa8f6a94e280ee49127fdd351fa9a4511 not found: ID does not exist" Nov 22 11:00:05 crc kubenswrapper[4772]: I1122 11:00:05.999654 4772 scope.go:117] "RemoveContainer" containerID="2b76e19e2f3b99c9895b830a92b319207955f6657e63efc6d0a8f8178cb7f96c" Nov 22 11:00:06 crc kubenswrapper[4772]: E1122 11:00:05.999993 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b76e19e2f3b99c9895b830a92b319207955f6657e63efc6d0a8f8178cb7f96c\": container with ID starting with 2b76e19e2f3b99c9895b830a92b319207955f6657e63efc6d0a8f8178cb7f96c not found: ID does not exist" containerID="2b76e19e2f3b99c9895b830a92b319207955f6657e63efc6d0a8f8178cb7f96c" Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.000104 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b76e19e2f3b99c9895b830a92b319207955f6657e63efc6d0a8f8178cb7f96c"} err="failed to get container 
status \"2b76e19e2f3b99c9895b830a92b319207955f6657e63efc6d0a8f8178cb7f96c\": rpc error: code = NotFound desc = could not find container \"2b76e19e2f3b99c9895b830a92b319207955f6657e63efc6d0a8f8178cb7f96c\": container with ID starting with 2b76e19e2f3b99c9895b830a92b319207955f6657e63efc6d0a8f8178cb7f96c not found: ID does not exist" Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.000177 4772 scope.go:117] "RemoveContainer" containerID="889d53724ff7f55e8e5f3e618539c0dfa8f6a94e280ee49127fdd351fa9a4511" Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.000399 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"889d53724ff7f55e8e5f3e618539c0dfa8f6a94e280ee49127fdd351fa9a4511"} err="failed to get container status \"889d53724ff7f55e8e5f3e618539c0dfa8f6a94e280ee49127fdd351fa9a4511\": rpc error: code = NotFound desc = could not find container \"889d53724ff7f55e8e5f3e618539c0dfa8f6a94e280ee49127fdd351fa9a4511\": container with ID starting with 889d53724ff7f55e8e5f3e618539c0dfa8f6a94e280ee49127fdd351fa9a4511 not found: ID does not exist" Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.000471 4772 scope.go:117] "RemoveContainer" containerID="2b76e19e2f3b99c9895b830a92b319207955f6657e63efc6d0a8f8178cb7f96c" Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.000691 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b76e19e2f3b99c9895b830a92b319207955f6657e63efc6d0a8f8178cb7f96c"} err="failed to get container status \"2b76e19e2f3b99c9895b830a92b319207955f6657e63efc6d0a8f8178cb7f96c\": rpc error: code = NotFound desc = could not find container \"2b76e19e2f3b99c9895b830a92b319207955f6657e63efc6d0a8f8178cb7f96c\": container with ID starting with 2b76e19e2f3b99c9895b830a92b319207955f6657e63efc6d0a8f8178cb7f96c not found: ID does not exist" Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.010451 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ce6a4db-47b5-4582-9b60-27d1a1485ef1-config-data" (OuterVolumeSpecName: "config-data") pod "5ce6a4db-47b5-4582-9b60-27d1a1485ef1" (UID: "5ce6a4db-47b5-4582-9b60-27d1a1485ef1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.032700 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ce6a4db-47b5-4582-9b60-27d1a1485ef1-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.032735 4772 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5ce6a4db-47b5-4582-9b60-27d1a1485ef1-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.032748 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ce6a4db-47b5-4582-9b60-27d1a1485ef1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.032760 4772 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5ce6a4db-47b5-4582-9b60-27d1a1485ef1-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.032768 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ce6a4db-47b5-4582-9b60-27d1a1485ef1-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.032776 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7fxzq\" (UniqueName: \"kubernetes.io/projected/5ce6a4db-47b5-4582-9b60-27d1a1485ef1-kube-api-access-7fxzq\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.032786 4772 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ce6a4db-47b5-4582-9b60-27d1a1485ef1-logs\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.148653 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.183068 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.197637 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 22 11:00:06 crc kubenswrapper[4772]: E1122 11:00:06.198110 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ce6a4db-47b5-4582-9b60-27d1a1485ef1" containerName="cinder-api-log" Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.198127 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ce6a4db-47b5-4582-9b60-27d1a1485ef1" containerName="cinder-api-log" Nov 22 11:00:06 crc kubenswrapper[4772]: E1122 11:00:06.198138 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ce6a4db-47b5-4582-9b60-27d1a1485ef1" containerName="cinder-api" Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.198144 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ce6a4db-47b5-4582-9b60-27d1a1485ef1" containerName="cinder-api" Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.198348 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ce6a4db-47b5-4582-9b60-27d1a1485ef1" containerName="cinder-api-log" Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.198372 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ce6a4db-47b5-4582-9b60-27d1a1485ef1" containerName="cinder-api" Nov 22 11:00:06 crc 
kubenswrapper[4772]: I1122 11:00:06.199363 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.201686 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.201881 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.202071 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.208798 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.338194 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/17f0d5ca-99e5-47c6-9fdf-1932956cff3e-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"17f0d5ca-99e5-47c6-9fdf-1932956cff3e\") " pod="openstack/cinder-api-0" Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.338312 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/17f0d5ca-99e5-47c6-9fdf-1932956cff3e-public-tls-certs\") pod \"cinder-api-0\" (UID: \"17f0d5ca-99e5-47c6-9fdf-1932956cff3e\") " pod="openstack/cinder-api-0" Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.338341 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zdwh\" (UniqueName: \"kubernetes.io/projected/17f0d5ca-99e5-47c6-9fdf-1932956cff3e-kube-api-access-6zdwh\") pod \"cinder-api-0\" (UID: \"17f0d5ca-99e5-47c6-9fdf-1932956cff3e\") " pod="openstack/cinder-api-0" Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.338385 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17f0d5ca-99e5-47c6-9fdf-1932956cff3e-scripts\") pod \"cinder-api-0\" (UID: \"17f0d5ca-99e5-47c6-9fdf-1932956cff3e\") " pod="openstack/cinder-api-0" Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.338410 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17f0d5ca-99e5-47c6-9fdf-1932956cff3e-logs\") pod \"cinder-api-0\" (UID: \"17f0d5ca-99e5-47c6-9fdf-1932956cff3e\") " pod="openstack/cinder-api-0" Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.338432 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17f0d5ca-99e5-47c6-9fdf-1932956cff3e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"17f0d5ca-99e5-47c6-9fdf-1932956cff3e\") " pod="openstack/cinder-api-0" Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.338459 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17f0d5ca-99e5-47c6-9fdf-1932956cff3e-config-data\") pod \"cinder-api-0\" (UID: \"17f0d5ca-99e5-47c6-9fdf-1932956cff3e\") " pod="openstack/cinder-api-0" Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.338479 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/17f0d5ca-99e5-47c6-9fdf-1932956cff3e-config-data-custom\") pod \"cinder-api-0\" (UID: \"17f0d5ca-99e5-47c6-9fdf-1932956cff3e\") " pod="openstack/cinder-api-0" Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.338550 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/17f0d5ca-99e5-47c6-9fdf-1932956cff3e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"17f0d5ca-99e5-47c6-9fdf-1932956cff3e\") " pod="openstack/cinder-api-0" Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.349660 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.440587 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/17f0d5ca-99e5-47c6-9fdf-1932956cff3e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"17f0d5ca-99e5-47c6-9fdf-1932956cff3e\") " pod="openstack/cinder-api-0" Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.440979 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/17f0d5ca-99e5-47c6-9fdf-1932956cff3e-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"17f0d5ca-99e5-47c6-9fdf-1932956cff3e\") " pod="openstack/cinder-api-0" Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.441147 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/17f0d5ca-99e5-47c6-9fdf-1932956cff3e-public-tls-certs\") pod \"cinder-api-0\" (UID: \"17f0d5ca-99e5-47c6-9fdf-1932956cff3e\") " pod="openstack/cinder-api-0" Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.441175 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zdwh\" (UniqueName: \"kubernetes.io/projected/17f0d5ca-99e5-47c6-9fdf-1932956cff3e-kube-api-access-6zdwh\") pod \"cinder-api-0\" (UID: \"17f0d5ca-99e5-47c6-9fdf-1932956cff3e\") " pod="openstack/cinder-api-0" Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.441232 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17f0d5ca-99e5-47c6-9fdf-1932956cff3e-scripts\") pod \"cinder-api-0\" (UID: \"17f0d5ca-99e5-47c6-9fdf-1932956cff3e\") " pod="openstack/cinder-api-0" Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.441310 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17f0d5ca-99e5-47c6-9fdf-1932956cff3e-logs\") pod \"cinder-api-0\" (UID: \"17f0d5ca-99e5-47c6-9fdf-1932956cff3e\") " pod="openstack/cinder-api-0" Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.441339 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17f0d5ca-99e5-47c6-9fdf-1932956cff3e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"17f0d5ca-99e5-47c6-9fdf-1932956cff3e\") " pod="openstack/cinder-api-0" Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.441372 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17f0d5ca-99e5-47c6-9fdf-1932956cff3e-config-data\") pod \"cinder-api-0\" (UID: 
\"17f0d5ca-99e5-47c6-9fdf-1932956cff3e\") " pod="openstack/cinder-api-0" Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.441394 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/17f0d5ca-99e5-47c6-9fdf-1932956cff3e-config-data-custom\") pod \"cinder-api-0\" (UID: \"17f0d5ca-99e5-47c6-9fdf-1932956cff3e\") " pod="openstack/cinder-api-0" Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.442085 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/17f0d5ca-99e5-47c6-9fdf-1932956cff3e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"17f0d5ca-99e5-47c6-9fdf-1932956cff3e\") " pod="openstack/cinder-api-0" Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.444474 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17f0d5ca-99e5-47c6-9fdf-1932956cff3e-logs\") pod \"cinder-api-0\" (UID: \"17f0d5ca-99e5-47c6-9fdf-1932956cff3e\") " pod="openstack/cinder-api-0" Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.450345 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17f0d5ca-99e5-47c6-9fdf-1932956cff3e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"17f0d5ca-99e5-47c6-9fdf-1932956cff3e\") " pod="openstack/cinder-api-0" Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.450435 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/17f0d5ca-99e5-47c6-9fdf-1932956cff3e-config-data-custom\") pod \"cinder-api-0\" (UID: \"17f0d5ca-99e5-47c6-9fdf-1932956cff3e\") " pod="openstack/cinder-api-0" Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.450583 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/17f0d5ca-99e5-47c6-9fdf-1932956cff3e-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"17f0d5ca-99e5-47c6-9fdf-1932956cff3e\") " pod="openstack/cinder-api-0" Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.451432 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17f0d5ca-99e5-47c6-9fdf-1932956cff3e-scripts\") pod \"cinder-api-0\" (UID: \"17f0d5ca-99e5-47c6-9fdf-1932956cff3e\") " pod="openstack/cinder-api-0" Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.459871 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17f0d5ca-99e5-47c6-9fdf-1932956cff3e-config-data\") pod \"cinder-api-0\" (UID: \"17f0d5ca-99e5-47c6-9fdf-1932956cff3e\") " pod="openstack/cinder-api-0" Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.466080 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/17f0d5ca-99e5-47c6-9fdf-1932956cff3e-public-tls-certs\") pod \"cinder-api-0\" (UID: \"17f0d5ca-99e5-47c6-9fdf-1932956cff3e\") " pod="openstack/cinder-api-0" Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.470206 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zdwh\" (UniqueName: \"kubernetes.io/projected/17f0d5ca-99e5-47c6-9fdf-1932956cff3e-kube-api-access-6zdwh\") pod \"cinder-api-0\" (UID: \"17f0d5ca-99e5-47c6-9fdf-1932956cff3e\") " 
pod="openstack/cinder-api-0" Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.526676 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.669092 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-8c79f8b65-qn7q9"] Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.670958 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8c79f8b65-qn7q9" Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.675393 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.679677 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.689563 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8c79f8b65-qn7q9"] Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.746909 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/865ca651-4e53-4ac9-946d-31c1e485d91d-config\") pod \"neutron-8c79f8b65-qn7q9\" (UID: \"865ca651-4e53-4ac9-946d-31c1e485d91d\") " pod="openstack/neutron-8c79f8b65-qn7q9" Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.747292 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72s96\" (UniqueName: \"kubernetes.io/projected/865ca651-4e53-4ac9-946d-31c1e485d91d-kube-api-access-72s96\") pod \"neutron-8c79f8b65-qn7q9\" (UID: \"865ca651-4e53-4ac9-946d-31c1e485d91d\") " pod="openstack/neutron-8c79f8b65-qn7q9" Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.747317 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/865ca651-4e53-4ac9-946d-31c1e485d91d-httpd-config\") pod \"neutron-8c79f8b65-qn7q9\" (UID: \"865ca651-4e53-4ac9-946d-31c1e485d91d\") " pod="openstack/neutron-8c79f8b65-qn7q9" Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.747343 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/865ca651-4e53-4ac9-946d-31c1e485d91d-combined-ca-bundle\") pod \"neutron-8c79f8b65-qn7q9\" (UID: \"865ca651-4e53-4ac9-946d-31c1e485d91d\") " pod="openstack/neutron-8c79f8b65-qn7q9" Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.747372 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/865ca651-4e53-4ac9-946d-31c1e485d91d-internal-tls-certs\") pod \"neutron-8c79f8b65-qn7q9\" (UID: \"865ca651-4e53-4ac9-946d-31c1e485d91d\") " pod="openstack/neutron-8c79f8b65-qn7q9" Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.747401 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/865ca651-4e53-4ac9-946d-31c1e485d91d-public-tls-certs\") pod \"neutron-8c79f8b65-qn7q9\" (UID: \"865ca651-4e53-4ac9-946d-31c1e485d91d\") " pod="openstack/neutron-8c79f8b65-qn7q9" Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.747442 4772 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/865ca651-4e53-4ac9-946d-31c1e485d91d-ovndb-tls-certs\") pod \"neutron-8c79f8b65-qn7q9\" (UID: \"865ca651-4e53-4ac9-946d-31c1e485d91d\") " pod="openstack/neutron-8c79f8b65-qn7q9" Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.850908 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72s96\" (UniqueName: \"kubernetes.io/projected/865ca651-4e53-4ac9-946d-31c1e485d91d-kube-api-access-72s96\") pod \"neutron-8c79f8b65-qn7q9\" (UID: \"865ca651-4e53-4ac9-946d-31c1e485d91d\") " pod="openstack/neutron-8c79f8b65-qn7q9" Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.850987 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/865ca651-4e53-4ac9-946d-31c1e485d91d-httpd-config\") pod \"neutron-8c79f8b65-qn7q9\" (UID: \"865ca651-4e53-4ac9-946d-31c1e485d91d\") " pod="openstack/neutron-8c79f8b65-qn7q9" Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.851077 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/865ca651-4e53-4ac9-946d-31c1e485d91d-combined-ca-bundle\") pod \"neutron-8c79f8b65-qn7q9\" (UID: \"865ca651-4e53-4ac9-946d-31c1e485d91d\") " pod="openstack/neutron-8c79f8b65-qn7q9" Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.851141 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/865ca651-4e53-4ac9-946d-31c1e485d91d-internal-tls-certs\") pod \"neutron-8c79f8b65-qn7q9\" (UID: \"865ca651-4e53-4ac9-946d-31c1e485d91d\") " pod="openstack/neutron-8c79f8b65-qn7q9" Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.851192 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/865ca651-4e53-4ac9-946d-31c1e485d91d-public-tls-certs\") pod \"neutron-8c79f8b65-qn7q9\" (UID: \"865ca651-4e53-4ac9-946d-31c1e485d91d\") " pod="openstack/neutron-8c79f8b65-qn7q9" Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.851261 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/865ca651-4e53-4ac9-946d-31c1e485d91d-ovndb-tls-certs\") pod \"neutron-8c79f8b65-qn7q9\" (UID: \"865ca651-4e53-4ac9-946d-31c1e485d91d\") " pod="openstack/neutron-8c79f8b65-qn7q9" Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.851371 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/865ca651-4e53-4ac9-946d-31c1e485d91d-config\") pod \"neutron-8c79f8b65-qn7q9\" (UID: \"865ca651-4e53-4ac9-946d-31c1e485d91d\") " pod="openstack/neutron-8c79f8b65-qn7q9" Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.868921 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/865ca651-4e53-4ac9-946d-31c1e485d91d-ovndb-tls-certs\") pod \"neutron-8c79f8b65-qn7q9\" (UID: \"865ca651-4e53-4ac9-946d-31c1e485d91d\") " pod="openstack/neutron-8c79f8b65-qn7q9" Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.869503 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/865ca651-4e53-4ac9-946d-31c1e485d91d-public-tls-certs\") pod \"neutron-8c79f8b65-qn7q9\" (UID: \"865ca651-4e53-4ac9-946d-31c1e485d91d\") " pod="openstack/neutron-8c79f8b65-qn7q9" Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.875561 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/865ca651-4e53-4ac9-946d-31c1e485d91d-config\") pod \"neutron-8c79f8b65-qn7q9\" (UID: \"865ca651-4e53-4ac9-946d-31c1e485d91d\") " pod="openstack/neutron-8c79f8b65-qn7q9" Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.877663 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/865ca651-4e53-4ac9-946d-31c1e485d91d-internal-tls-certs\") pod \"neutron-8c79f8b65-qn7q9\" (UID: \"865ca651-4e53-4ac9-946d-31c1e485d91d\") " pod="openstack/neutron-8c79f8b65-qn7q9" Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.882684 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/865ca651-4e53-4ac9-946d-31c1e485d91d-httpd-config\") pod \"neutron-8c79f8b65-qn7q9\" (UID: \"865ca651-4e53-4ac9-946d-31c1e485d91d\") " pod="openstack/neutron-8c79f8b65-qn7q9" Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.887480 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9577ec77-2954-4ff8-8de2-d965cce60a04","Type":"ContainerStarted","Data":"edf287f192fac6a322cb61e002992b92b76c1263b781b148976931d93cd1b8b0"} Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.887548 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/865ca651-4e53-4ac9-946d-31c1e485d91d-combined-ca-bundle\") pod \"neutron-8c79f8b65-qn7q9\" (UID: \"865ca651-4e53-4ac9-946d-31c1e485d91d\") " pod="openstack/neutron-8c79f8b65-qn7q9" Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.890094 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72s96\" (UniqueName: \"kubernetes.io/projected/865ca651-4e53-4ac9-946d-31c1e485d91d-kube-api-access-72s96\") pod \"neutron-8c79f8b65-qn7q9\" (UID: \"865ca651-4e53-4ac9-946d-31c1e485d91d\") " pod="openstack/neutron-8c79f8b65-qn7q9" Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.921003 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=6.766101653 podStartE2EDuration="7.920984176s" podCreationTimestamp="2025-11-22 10:59:59 +0000 UTC" firstStartedPulling="2025-11-22 11:00:00.863436217 +0000 UTC m=+1321.102880711" lastFinishedPulling="2025-11-22 11:00:02.01831874 +0000 UTC m=+1322.257763234" observedRunningTime="2025-11-22 11:00:06.914420533 +0000 UTC m=+1327.153865027" watchObservedRunningTime="2025-11-22 11:00:06.920984176 +0000 UTC m=+1327.160428670" Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.931790 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"a4e681ba-088a-41b1-9b89-8bac928038e5","Type":"ContainerStarted","Data":"64bec24f572b153df8531886ba9a03f64e4b68cce9b1ba8c4457ff097024b967"} Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.934581 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5f797948bc-dk5pr" 
event={"ID":"027dc32b-06dd-45bf-9aad-8e0c92b44a2b","Type":"ContainerStarted","Data":"e40f637f7b43ff915a3b153426def590c2d29d02ddeac886a688e0d9bf7a29a8"} Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.934684 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5f797948bc-dk5pr" event={"ID":"027dc32b-06dd-45bf-9aad-8e0c92b44a2b","Type":"ContainerStarted","Data":"deb156a613d4b361e96cb60957d663ef36ea4eb59d8168309e1f3c8cbbf8914f"} Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.942773 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-79459755b6-xsvzh" event={"ID":"1c994b4f-e182-481a-a3ba-17dc9656c70c","Type":"ContainerStarted","Data":"844e023e7c3fccc854525c2a694623fa1a3482bbdd36a977a83ba8eb6cf3ab4b"} Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.942804 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-79459755b6-xsvzh" event={"ID":"1c994b4f-e182-481a-a3ba-17dc9656c70c","Type":"ContainerStarted","Data":"791a1dda016da276f7e60912d835e500d93685bd5cc31d54d2b396d39fcc8af1"} Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.973327 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-5f797948bc-dk5pr" podStartSLOduration=3.893297246 podStartE2EDuration="8.973305516s" podCreationTimestamp="2025-11-22 10:59:58 +0000 UTC" firstStartedPulling="2025-11-22 11:00:00.151317701 +0000 UTC m=+1320.390762195" lastFinishedPulling="2025-11-22 11:00:05.231325971 +0000 UTC m=+1325.470770465" observedRunningTime="2025-11-22 11:00:06.958299913 +0000 UTC m=+1327.197744417" watchObservedRunningTime="2025-11-22 11:00:06.973305516 +0000 UTC m=+1327.212750010" Nov 22 11:00:06 crc kubenswrapper[4772]: I1122 11:00:06.990426 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-8c79f8b65-qn7q9" Nov 22 11:00:07 crc kubenswrapper[4772]: I1122 11:00:07.167582 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-79459755b6-xsvzh" podStartSLOduration=3.97197306 podStartE2EDuration="9.167557981s" podCreationTimestamp="2025-11-22 10:59:58 +0000 UTC" firstStartedPulling="2025-11-22 11:00:00.010294388 +0000 UTC m=+1320.249738882" lastFinishedPulling="2025-11-22 11:00:05.205879309 +0000 UTC m=+1325.445323803" observedRunningTime="2025-11-22 11:00:06.987786916 +0000 UTC m=+1327.227231410" watchObservedRunningTime="2025-11-22 11:00:07.167557981 +0000 UTC m=+1327.407002475" Nov 22 11:00:07 crc kubenswrapper[4772]: I1122 11:00:07.173978 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 22 11:00:07 crc kubenswrapper[4772]: I1122 11:00:07.428036 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ce6a4db-47b5-4582-9b60-27d1a1485ef1" path="/var/lib/kubelet/pods/5ce6a4db-47b5-4582-9b60-27d1a1485ef1/volumes" Nov 22 11:00:07 crc kubenswrapper[4772]: I1122 11:00:07.617034 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8c79f8b65-qn7q9"] Nov 22 11:00:07 crc kubenswrapper[4772]: W1122 11:00:07.636167 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod865ca651_4e53_4ac9_946d_31c1e485d91d.slice/crio-f2fea7b5487f2dd96a6855359cfed99ba37dc33f03f66fdb843a16e9d7c69fcc WatchSource:0}: Error finding container f2fea7b5487f2dd96a6855359cfed99ba37dc33f03f66fdb843a16e9d7c69fcc: Status 404 returned error can't find the container with id f2fea7b5487f2dd96a6855359cfed99ba37dc33f03f66fdb843a16e9d7c69fcc Nov 22 11:00:07 crc kubenswrapper[4772]: I1122 11:00:07.962622 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8c79f8b65-qn7q9" event={"ID":"865ca651-4e53-4ac9-946d-31c1e485d91d","Type":"ContainerStarted","Data":"f2fea7b5487f2dd96a6855359cfed99ba37dc33f03f66fdb843a16e9d7c69fcc"} Nov 22 11:00:07 crc kubenswrapper[4772]: I1122 11:00:07.966409 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"17f0d5ca-99e5-47c6-9fdf-1932956cff3e","Type":"ContainerStarted","Data":"3d270d31d14fce4c62bab4b1f7f271368d062a7d95474934f968e68a8ba5539f"} Nov 22 11:00:08 crc kubenswrapper[4772]: I1122 11:00:08.977617 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"17f0d5ca-99e5-47c6-9fdf-1932956cff3e","Type":"ContainerStarted","Data":"41caed95f9f668a055e34288ae91bcce5a6f3ea58f05250f45efb84f1f1c0fbf"} Nov 22 11:00:08 crc kubenswrapper[4772]: I1122 11:00:08.980491 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8c79f8b65-qn7q9" event={"ID":"865ca651-4e53-4ac9-946d-31c1e485d91d","Type":"ContainerStarted","Data":"89b92e0a1e681be8f4f78a508d0ebcba29af7864b3c2db95e3d23d573dc85c86"} Nov 22 11:00:08 crc kubenswrapper[4772]: I1122 11:00:08.980530 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8c79f8b65-qn7q9" event={"ID":"865ca651-4e53-4ac9-946d-31c1e485d91d","Type":"ContainerStarted","Data":"6bdbd4c4929eabf6a133a2e818bd65ac8febe68d8843b6b4e67d0a024f4e743f"} Nov 22 11:00:08 crc kubenswrapper[4772]: I1122 11:00:08.985075 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-8c79f8b65-qn7q9" Nov 22 11:00:09 crc kubenswrapper[4772]: I1122 
11:00:09.014931 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-8c79f8b65-qn7q9" podStartSLOduration=3.014895122 podStartE2EDuration="3.014895122s" podCreationTimestamp="2025-11-22 11:00:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 11:00:09.008646167 +0000 UTC m=+1329.248090661" watchObservedRunningTime="2025-11-22 11:00:09.014895122 +0000 UTC m=+1329.254339616" Nov 22 11:00:09 crc kubenswrapper[4772]: I1122 11:00:09.756690 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6876658948-bzr5z"] Nov 22 11:00:09 crc kubenswrapper[4772]: I1122 11:00:09.758434 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6876658948-bzr5z" Nov 22 11:00:09 crc kubenswrapper[4772]: I1122 11:00:09.761526 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Nov 22 11:00:09 crc kubenswrapper[4772]: I1122 11:00:09.762541 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Nov 22 11:00:09 crc kubenswrapper[4772]: I1122 11:00:09.771727 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6876658948-bzr5z"] Nov 22 11:00:09 crc kubenswrapper[4772]: I1122 11:00:09.851767 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 22 11:00:09 crc kubenswrapper[4772]: I1122 11:00:09.921373 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/86139aa9-cd30-4d97-833e-a26562aebf92-config-data-custom\") pod \"barbican-api-6876658948-bzr5z\" (UID: \"86139aa9-cd30-4d97-833e-a26562aebf92\") " pod="openstack/barbican-api-6876658948-bzr5z" Nov 22 11:00:09 crc kubenswrapper[4772]: I1122 11:00:09.921580 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt5xr\" (UniqueName: \"kubernetes.io/projected/86139aa9-cd30-4d97-833e-a26562aebf92-kube-api-access-zt5xr\") pod \"barbican-api-6876658948-bzr5z\" (UID: \"86139aa9-cd30-4d97-833e-a26562aebf92\") " pod="openstack/barbican-api-6876658948-bzr5z" Nov 22 11:00:09 crc kubenswrapper[4772]: I1122 11:00:09.921764 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/86139aa9-cd30-4d97-833e-a26562aebf92-internal-tls-certs\") pod \"barbican-api-6876658948-bzr5z\" (UID: \"86139aa9-cd30-4d97-833e-a26562aebf92\") " pod="openstack/barbican-api-6876658948-bzr5z" Nov 22 11:00:09 crc kubenswrapper[4772]: I1122 11:00:09.921858 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/86139aa9-cd30-4d97-833e-a26562aebf92-public-tls-certs\") pod \"barbican-api-6876658948-bzr5z\" (UID: \"86139aa9-cd30-4d97-833e-a26562aebf92\") " pod="openstack/barbican-api-6876658948-bzr5z" Nov 22 11:00:09 crc kubenswrapper[4772]: I1122 11:00:09.921948 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86139aa9-cd30-4d97-833e-a26562aebf92-combined-ca-bundle\") pod \"barbican-api-6876658948-bzr5z\" (UID: 
\"86139aa9-cd30-4d97-833e-a26562aebf92\") " pod="openstack/barbican-api-6876658948-bzr5z" Nov 22 11:00:09 crc kubenswrapper[4772]: I1122 11:00:09.922008 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86139aa9-cd30-4d97-833e-a26562aebf92-logs\") pod \"barbican-api-6876658948-bzr5z\" (UID: \"86139aa9-cd30-4d97-833e-a26562aebf92\") " pod="openstack/barbican-api-6876658948-bzr5z" Nov 22 11:00:09 crc kubenswrapper[4772]: I1122 11:00:09.922090 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86139aa9-cd30-4d97-833e-a26562aebf92-config-data\") pod \"barbican-api-6876658948-bzr5z\" (UID: \"86139aa9-cd30-4d97-833e-a26562aebf92\") " pod="openstack/barbican-api-6876658948-bzr5z" Nov 22 11:00:09 crc kubenswrapper[4772]: I1122 11:00:09.995810 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"17f0d5ca-99e5-47c6-9fdf-1932956cff3e","Type":"ContainerStarted","Data":"a687188010e7d1b6b6e71ce02eb4abc2bad75aaad585c817273d9a77d8fbf014"} Nov 22 11:00:09 crc kubenswrapper[4772]: I1122 11:00:09.995871 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 22 11:00:10 crc kubenswrapper[4772]: I1122 11:00:10.019128 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.019110284 podStartE2EDuration="4.019110284s" podCreationTimestamp="2025-11-22 11:00:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 11:00:10.015013562 +0000 UTC m=+1330.254458076" watchObservedRunningTime="2025-11-22 11:00:10.019110284 +0000 UTC m=+1330.258554778" Nov 22 11:00:10 crc kubenswrapper[4772]: I1122 11:00:10.025258 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/86139aa9-cd30-4d97-833e-a26562aebf92-internal-tls-certs\") pod \"barbican-api-6876658948-bzr5z\" (UID: \"86139aa9-cd30-4d97-833e-a26562aebf92\") " pod="openstack/barbican-api-6876658948-bzr5z" Nov 22 11:00:10 crc kubenswrapper[4772]: I1122 11:00:10.025312 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/86139aa9-cd30-4d97-833e-a26562aebf92-public-tls-certs\") pod \"barbican-api-6876658948-bzr5z\" (UID: \"86139aa9-cd30-4d97-833e-a26562aebf92\") " pod="openstack/barbican-api-6876658948-bzr5z" Nov 22 11:00:10 crc kubenswrapper[4772]: I1122 11:00:10.025347 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86139aa9-cd30-4d97-833e-a26562aebf92-combined-ca-bundle\") pod \"barbican-api-6876658948-bzr5z\" (UID: \"86139aa9-cd30-4d97-833e-a26562aebf92\") " pod="openstack/barbican-api-6876658948-bzr5z" Nov 22 11:00:10 crc kubenswrapper[4772]: I1122 11:00:10.025375 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86139aa9-cd30-4d97-833e-a26562aebf92-logs\") pod \"barbican-api-6876658948-bzr5z\" (UID: \"86139aa9-cd30-4d97-833e-a26562aebf92\") " pod="openstack/barbican-api-6876658948-bzr5z" Nov 22 11:00:10 crc kubenswrapper[4772]: I1122 11:00:10.025395 4772 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86139aa9-cd30-4d97-833e-a26562aebf92-config-data\") pod \"barbican-api-6876658948-bzr5z\" (UID: \"86139aa9-cd30-4d97-833e-a26562aebf92\") " pod="openstack/barbican-api-6876658948-bzr5z" Nov 22 11:00:10 crc kubenswrapper[4772]: I1122 11:00:10.025448 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/86139aa9-cd30-4d97-833e-a26562aebf92-config-data-custom\") pod \"barbican-api-6876658948-bzr5z\" (UID: \"86139aa9-cd30-4d97-833e-a26562aebf92\") " pod="openstack/barbican-api-6876658948-bzr5z" Nov 22 11:00:10 crc kubenswrapper[4772]: I1122 11:00:10.025489 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zt5xr\" (UniqueName: \"kubernetes.io/projected/86139aa9-cd30-4d97-833e-a26562aebf92-kube-api-access-zt5xr\") pod \"barbican-api-6876658948-bzr5z\" (UID: \"86139aa9-cd30-4d97-833e-a26562aebf92\") " pod="openstack/barbican-api-6876658948-bzr5z" Nov 22 11:00:10 crc kubenswrapper[4772]: I1122 11:00:10.027453 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86139aa9-cd30-4d97-833e-a26562aebf92-logs\") pod \"barbican-api-6876658948-bzr5z\" (UID: \"86139aa9-cd30-4d97-833e-a26562aebf92\") " pod="openstack/barbican-api-6876658948-bzr5z" Nov 22 11:00:10 crc kubenswrapper[4772]: I1122 11:00:10.033943 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/86139aa9-cd30-4d97-833e-a26562aebf92-public-tls-certs\") pod \"barbican-api-6876658948-bzr5z\" (UID: \"86139aa9-cd30-4d97-833e-a26562aebf92\") " pod="openstack/barbican-api-6876658948-bzr5z" Nov 22 11:00:10 crc kubenswrapper[4772]: I1122 11:00:10.035653 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86139aa9-cd30-4d97-833e-a26562aebf92-config-data\") pod \"barbican-api-6876658948-bzr5z\" (UID: \"86139aa9-cd30-4d97-833e-a26562aebf92\") " pod="openstack/barbican-api-6876658948-bzr5z" Nov 22 11:00:10 crc kubenswrapper[4772]: I1122 11:00:10.038152 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/86139aa9-cd30-4d97-833e-a26562aebf92-internal-tls-certs\") pod \"barbican-api-6876658948-bzr5z\" (UID: \"86139aa9-cd30-4d97-833e-a26562aebf92\") " pod="openstack/barbican-api-6876658948-bzr5z" Nov 22 11:00:10 crc kubenswrapper[4772]: I1122 11:00:10.041666 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/86139aa9-cd30-4d97-833e-a26562aebf92-config-data-custom\") pod \"barbican-api-6876658948-bzr5z\" (UID: \"86139aa9-cd30-4d97-833e-a26562aebf92\") " pod="openstack/barbican-api-6876658948-bzr5z" Nov 22 11:00:10 crc kubenswrapper[4772]: I1122 11:00:10.045551 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zt5xr\" (UniqueName: \"kubernetes.io/projected/86139aa9-cd30-4d97-833e-a26562aebf92-kube-api-access-zt5xr\") pod \"barbican-api-6876658948-bzr5z\" (UID: \"86139aa9-cd30-4d97-833e-a26562aebf92\") " pod="openstack/barbican-api-6876658948-bzr5z" Nov 22 11:00:10 crc kubenswrapper[4772]: I1122 11:00:10.059031 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/86139aa9-cd30-4d97-833e-a26562aebf92-combined-ca-bundle\") pod \"barbican-api-6876658948-bzr5z\" (UID: \"86139aa9-cd30-4d97-833e-a26562aebf92\") " pod="openstack/barbican-api-6876658948-bzr5z" Nov 22 11:00:10 crc kubenswrapper[4772]: I1122 11:00:10.103734 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6876658948-bzr5z" Nov 22 11:00:10 crc kubenswrapper[4772]: I1122 11:00:10.120517 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 22 11:00:10 crc kubenswrapper[4772]: I1122 11:00:10.243707 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 22 11:00:10 crc kubenswrapper[4772]: I1122 11:00:10.783413 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6876658948-bzr5z"] Nov 22 11:00:11 crc kubenswrapper[4772]: I1122 11:00:11.007214 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5784cf869f-klkn5" Nov 22 11:00:11 crc kubenswrapper[4772]: I1122 11:00:11.021099 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6876658948-bzr5z" event={"ID":"86139aa9-cd30-4d97-833e-a26562aebf92","Type":"ContainerStarted","Data":"b5adee60fbbe04c2b0f6e677dd915852ef48874363c5ddda10829f388b38decb"} Nov 22 11:00:11 crc kubenswrapper[4772]: I1122 11:00:11.021149 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6876658948-bzr5z" event={"ID":"86139aa9-cd30-4d97-833e-a26562aebf92","Type":"ContainerStarted","Data":"3c987b57d62e3696c05299c2a6bb0caee472f688983f0e5a21b4159de13a7513"} Nov 22 11:00:11 crc kubenswrapper[4772]: I1122 11:00:11.022070 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="9577ec77-2954-4ff8-8de2-d965cce60a04" containerName="cinder-scheduler" containerID="cri-o://e389ebe4a07c1f1b25e9d6a0324b338156a0b6b9e44488c6f4da277efc2302ab" gracePeriod=30 Nov 22 11:00:11 crc kubenswrapper[4772]: I1122 11:00:11.022192 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="9577ec77-2954-4ff8-8de2-d965cce60a04" containerName="probe" containerID="cri-o://edf287f192fac6a322cb61e002992b92b76c1263b781b148976931d93cd1b8b0" gracePeriod=30 Nov 22 11:00:11 crc kubenswrapper[4772]: I1122 11:00:11.110280 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-p75jw"] Nov 22 11:00:11 crc kubenswrapper[4772]: I1122 11:00:11.111040 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8b5c85b87-p75jw" podUID="7afc372e-c3cc-4fea-b62d-e5bfc5750fa8" containerName="dnsmasq-dns" containerID="cri-o://bf718a91e8851c0d5cd6e234662b181d080a6179f4c7f66af1736fa4b160ea6b" gracePeriod=10 Nov 22 11:00:11 crc kubenswrapper[4772]: I1122 11:00:11.650477 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-p75jw" Nov 22 11:00:11 crc kubenswrapper[4772]: I1122 11:00:11.763645 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7afc372e-c3cc-4fea-b62d-e5bfc5750fa8-dns-swift-storage-0\") pod \"7afc372e-c3cc-4fea-b62d-e5bfc5750fa8\" (UID: \"7afc372e-c3cc-4fea-b62d-e5bfc5750fa8\") " Nov 22 11:00:11 crc kubenswrapper[4772]: I1122 11:00:11.763731 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7afc372e-c3cc-4fea-b62d-e5bfc5750fa8-dns-svc\") pod \"7afc372e-c3cc-4fea-b62d-e5bfc5750fa8\" (UID: \"7afc372e-c3cc-4fea-b62d-e5bfc5750fa8\") " Nov 22 11:00:11 crc kubenswrapper[4772]: I1122 11:00:11.763793 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7afc372e-c3cc-4fea-b62d-e5bfc5750fa8-ovsdbserver-sb\") pod \"7afc372e-c3cc-4fea-b62d-e5bfc5750fa8\" (UID: \"7afc372e-c3cc-4fea-b62d-e5bfc5750fa8\") " Nov 22 11:00:11 crc kubenswrapper[4772]: I1122 11:00:11.763812 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mqgc\" (UniqueName: \"kubernetes.io/projected/7afc372e-c3cc-4fea-b62d-e5bfc5750fa8-kube-api-access-5mqgc\") pod \"7afc372e-c3cc-4fea-b62d-e5bfc5750fa8\" (UID: \"7afc372e-c3cc-4fea-b62d-e5bfc5750fa8\") " Nov 22 11:00:11 crc kubenswrapper[4772]: I1122 11:00:11.763840 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7afc372e-c3cc-4fea-b62d-e5bfc5750fa8-ovsdbserver-nb\") pod \"7afc372e-c3cc-4fea-b62d-e5bfc5750fa8\" (UID: \"7afc372e-c3cc-4fea-b62d-e5bfc5750fa8\") " Nov 22 11:00:11 crc kubenswrapper[4772]: I1122 11:00:11.763980 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afc372e-c3cc-4fea-b62d-e5bfc5750fa8-config\") pod \"7afc372e-c3cc-4fea-b62d-e5bfc5750fa8\" (UID: \"7afc372e-c3cc-4fea-b62d-e5bfc5750fa8\") " Nov 22 11:00:11 crc kubenswrapper[4772]: I1122 11:00:11.776182 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7afc372e-c3cc-4fea-b62d-e5bfc5750fa8-kube-api-access-5mqgc" (OuterVolumeSpecName: "kube-api-access-5mqgc") pod "7afc372e-c3cc-4fea-b62d-e5bfc5750fa8" (UID: "7afc372e-c3cc-4fea-b62d-e5bfc5750fa8"). InnerVolumeSpecName "kube-api-access-5mqgc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:00:11 crc kubenswrapper[4772]: I1122 11:00:11.838131 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7afc372e-c3cc-4fea-b62d-e5bfc5750fa8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7afc372e-c3cc-4fea-b62d-e5bfc5750fa8" (UID: "7afc372e-c3cc-4fea-b62d-e5bfc5750fa8"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 11:00:11 crc kubenswrapper[4772]: I1122 11:00:11.870443 4772 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7afc372e-c3cc-4fea-b62d-e5bfc5750fa8-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:11 crc kubenswrapper[4772]: I1122 11:00:11.870476 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mqgc\" (UniqueName: \"kubernetes.io/projected/7afc372e-c3cc-4fea-b62d-e5bfc5750fa8-kube-api-access-5mqgc\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:11 crc kubenswrapper[4772]: I1122 11:00:11.872641 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7afc372e-c3cc-4fea-b62d-e5bfc5750fa8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7afc372e-c3cc-4fea-b62d-e5bfc5750fa8" (UID: "7afc372e-c3cc-4fea-b62d-e5bfc5750fa8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 11:00:11 crc kubenswrapper[4772]: I1122 11:00:11.893951 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7afc372e-c3cc-4fea-b62d-e5bfc5750fa8-config" (OuterVolumeSpecName: "config") pod "7afc372e-c3cc-4fea-b62d-e5bfc5750fa8" (UID: "7afc372e-c3cc-4fea-b62d-e5bfc5750fa8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 11:00:11 crc kubenswrapper[4772]: I1122 11:00:11.895273 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7afc372e-c3cc-4fea-b62d-e5bfc5750fa8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7afc372e-c3cc-4fea-b62d-e5bfc5750fa8" (UID: "7afc372e-c3cc-4fea-b62d-e5bfc5750fa8"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 11:00:11 crc kubenswrapper[4772]: I1122 11:00:11.920797 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7afc372e-c3cc-4fea-b62d-e5bfc5750fa8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7afc372e-c3cc-4fea-b62d-e5bfc5750fa8" (UID: "7afc372e-c3cc-4fea-b62d-e5bfc5750fa8"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 11:00:11 crc kubenswrapper[4772]: I1122 11:00:11.973275 4772 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7afc372e-c3cc-4fea-b62d-e5bfc5750fa8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:11 crc kubenswrapper[4772]: I1122 11:00:11.973307 4772 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7afc372e-c3cc-4fea-b62d-e5bfc5750fa8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:11 crc kubenswrapper[4772]: I1122 11:00:11.973317 4772 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7afc372e-c3cc-4fea-b62d-e5bfc5750fa8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:11 crc kubenswrapper[4772]: I1122 11:00:11.973326 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afc372e-c3cc-4fea-b62d-e5bfc5750fa8-config\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:12 crc kubenswrapper[4772]: I1122 11:00:12.032955 4772 generic.go:334] "Generic (PLEG): container finished" podID="9577ec77-2954-4ff8-8de2-d965cce60a04" containerID="edf287f192fac6a322cb61e002992b92b76c1263b781b148976931d93cd1b8b0" exitCode=0 Nov 22 11:00:12 crc kubenswrapper[4772]: I1122 11:00:12.033060 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9577ec77-2954-4ff8-8de2-d965cce60a04","Type":"ContainerDied","Data":"edf287f192fac6a322cb61e002992b92b76c1263b781b148976931d93cd1b8b0"} Nov 22 11:00:12 crc kubenswrapper[4772]: I1122 11:00:12.038919 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6876658948-bzr5z" event={"ID":"86139aa9-cd30-4d97-833e-a26562aebf92","Type":"ContainerStarted","Data":"a06360ee3022a654f156c3386f22cd5fd488251afc8543f8c37cbc65fc693984"} Nov 22 11:00:12 crc kubenswrapper[4772]: I1122 11:00:12.038952 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6876658948-bzr5z" Nov 22 11:00:12 crc kubenswrapper[4772]: I1122 11:00:12.038976 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6876658948-bzr5z" Nov 22 11:00:12 crc kubenswrapper[4772]: I1122 11:00:12.041600 4772 generic.go:334] "Generic (PLEG): container finished" podID="7afc372e-c3cc-4fea-b62d-e5bfc5750fa8" containerID="bf718a91e8851c0d5cd6e234662b181d080a6179f4c7f66af1736fa4b160ea6b" exitCode=0 Nov 22 11:00:12 crc kubenswrapper[4772]: I1122 11:00:12.042220 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-p75jw" Nov 22 11:00:12 crc kubenswrapper[4772]: I1122 11:00:12.042349 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-p75jw" event={"ID":"7afc372e-c3cc-4fea-b62d-e5bfc5750fa8","Type":"ContainerDied","Data":"bf718a91e8851c0d5cd6e234662b181d080a6179f4c7f66af1736fa4b160ea6b"} Nov 22 11:00:12 crc kubenswrapper[4772]: I1122 11:00:12.042389 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-p75jw" event={"ID":"7afc372e-c3cc-4fea-b62d-e5bfc5750fa8","Type":"ContainerDied","Data":"94e32536d1077af0cb3eebc7c239c91382a135737d51cc2b47d448c05feb7305"} Nov 22 11:00:12 crc kubenswrapper[4772]: I1122 11:00:12.042411 4772 scope.go:117] "RemoveContainer" containerID="bf718a91e8851c0d5cd6e234662b181d080a6179f4c7f66af1736fa4b160ea6b" Nov 22 11:00:12 crc kubenswrapper[4772]: I1122 11:00:12.085640 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6876658948-bzr5z" podStartSLOduration=3.085618569 podStartE2EDuration="3.085618569s" podCreationTimestamp="2025-11-22 11:00:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 11:00:12.064240568 +0000 UTC m=+1332.303685082" watchObservedRunningTime="2025-11-22 11:00:12.085618569 +0000 UTC m=+1332.325063073" Nov 22 11:00:12 crc kubenswrapper[4772]: I1122 11:00:12.113271 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7dccb6fcbd-htjnx" Nov 22 11:00:12 crc kubenswrapper[4772]: I1122 11:00:12.159324 4772 scope.go:117] "RemoveContainer" containerID="330a1b23775f525ecd702820d14ffbe3083f6bcfc64e60f6c172c8c756bbc84e" Nov 22 11:00:12 crc kubenswrapper[4772]: I1122 11:00:12.167201 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-p75jw"] Nov 22 11:00:12 crc kubenswrapper[4772]: I1122 11:00:12.178295 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-p75jw"] Nov 22 11:00:12 crc kubenswrapper[4772]: I1122 11:00:12.207632 4772 scope.go:117] "RemoveContainer" containerID="bf718a91e8851c0d5cd6e234662b181d080a6179f4c7f66af1736fa4b160ea6b" Nov 22 11:00:12 crc kubenswrapper[4772]: E1122 11:00:12.208718 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf718a91e8851c0d5cd6e234662b181d080a6179f4c7f66af1736fa4b160ea6b\": container with ID starting with bf718a91e8851c0d5cd6e234662b181d080a6179f4c7f66af1736fa4b160ea6b not found: ID does not exist" containerID="bf718a91e8851c0d5cd6e234662b181d080a6179f4c7f66af1736fa4b160ea6b" Nov 22 11:00:12 crc kubenswrapper[4772]: I1122 11:00:12.208766 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf718a91e8851c0d5cd6e234662b181d080a6179f4c7f66af1736fa4b160ea6b"} err="failed to get container status \"bf718a91e8851c0d5cd6e234662b181d080a6179f4c7f66af1736fa4b160ea6b\": rpc error: code = NotFound desc = could not find container \"bf718a91e8851c0d5cd6e234662b181d080a6179f4c7f66af1736fa4b160ea6b\": container with ID starting with bf718a91e8851c0d5cd6e234662b181d080a6179f4c7f66af1736fa4b160ea6b not found: ID does not exist" Nov 22 11:00:12 crc kubenswrapper[4772]: I1122 11:00:12.208794 4772 scope.go:117] "RemoveContainer" containerID="330a1b23775f525ecd702820d14ffbe3083f6bcfc64e60f6c172c8c756bbc84e" Nov 22 
11:00:12 crc kubenswrapper[4772]: E1122 11:00:12.211726 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"330a1b23775f525ecd702820d14ffbe3083f6bcfc64e60f6c172c8c756bbc84e\": container with ID starting with 330a1b23775f525ecd702820d14ffbe3083f6bcfc64e60f6c172c8c756bbc84e not found: ID does not exist" containerID="330a1b23775f525ecd702820d14ffbe3083f6bcfc64e60f6c172c8c756bbc84e" Nov 22 11:00:12 crc kubenswrapper[4772]: I1122 11:00:12.211767 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"330a1b23775f525ecd702820d14ffbe3083f6bcfc64e60f6c172c8c756bbc84e"} err="failed to get container status \"330a1b23775f525ecd702820d14ffbe3083f6bcfc64e60f6c172c8c756bbc84e\": rpc error: code = NotFound desc = could not find container \"330a1b23775f525ecd702820d14ffbe3083f6bcfc64e60f6c172c8c756bbc84e\": container with ID starting with 330a1b23775f525ecd702820d14ffbe3083f6bcfc64e60f6c172c8c756bbc84e not found: ID does not exist" Nov 22 11:00:12 crc kubenswrapper[4772]: I1122 11:00:12.426037 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7dccb6fcbd-htjnx" Nov 22 11:00:13 crc kubenswrapper[4772]: I1122 11:00:13.256169 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-57b6cb6667-w95sj"] Nov 22 11:00:13 crc kubenswrapper[4772]: E1122 11:00:13.256534 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7afc372e-c3cc-4fea-b62d-e5bfc5750fa8" containerName="dnsmasq-dns" Nov 22 11:00:13 crc kubenswrapper[4772]: I1122 11:00:13.256547 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="7afc372e-c3cc-4fea-b62d-e5bfc5750fa8" containerName="dnsmasq-dns" Nov 22 11:00:13 crc kubenswrapper[4772]: E1122 11:00:13.256566 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7afc372e-c3cc-4fea-b62d-e5bfc5750fa8" containerName="init" Nov 22 11:00:13 crc kubenswrapper[4772]: I1122 11:00:13.256572 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="7afc372e-c3cc-4fea-b62d-e5bfc5750fa8" containerName="init" Nov 22 11:00:13 crc kubenswrapper[4772]: I1122 11:00:13.256773 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="7afc372e-c3cc-4fea-b62d-e5bfc5750fa8" containerName="dnsmasq-dns" Nov 22 11:00:13 crc kubenswrapper[4772]: I1122 11:00:13.257673 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-57b6cb6667-w95sj" Nov 22 11:00:13 crc kubenswrapper[4772]: I1122 11:00:13.263017 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Nov 22 11:00:13 crc kubenswrapper[4772]: I1122 11:00:13.263440 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Nov 22 11:00:13 crc kubenswrapper[4772]: I1122 11:00:13.263592 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 22 11:00:13 crc kubenswrapper[4772]: I1122 11:00:13.287636 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-57b6cb6667-w95sj"] Nov 22 11:00:13 crc kubenswrapper[4772]: I1122 11:00:13.303516 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/13c1f859-42ed-484f-88cb-5349a7b64dda-run-httpd\") pod \"swift-proxy-57b6cb6667-w95sj\" (UID: \"13c1f859-42ed-484f-88cb-5349a7b64dda\") " pod="openstack/swift-proxy-57b6cb6667-w95sj" Nov 22 11:00:13 crc kubenswrapper[4772]: I1122 11:00:13.303595 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxlr4\" (UniqueName: \"kubernetes.io/projected/13c1f859-42ed-484f-88cb-5349a7b64dda-kube-api-access-qxlr4\") pod \"swift-proxy-57b6cb6667-w95sj\" (UID: \"13c1f859-42ed-484f-88cb-5349a7b64dda\") " pod="openstack/swift-proxy-57b6cb6667-w95sj" Nov 22 11:00:13 crc kubenswrapper[4772]: I1122 11:00:13.303632 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13c1f859-42ed-484f-88cb-5349a7b64dda-config-data\") pod \"swift-proxy-57b6cb6667-w95sj\" (UID: \"13c1f859-42ed-484f-88cb-5349a7b64dda\") " pod="openstack/swift-proxy-57b6cb6667-w95sj" Nov 22 11:00:13 crc kubenswrapper[4772]: I1122 11:00:13.303668 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/13c1f859-42ed-484f-88cb-5349a7b64dda-log-httpd\") pod \"swift-proxy-57b6cb6667-w95sj\" (UID: \"13c1f859-42ed-484f-88cb-5349a7b64dda\") " pod="openstack/swift-proxy-57b6cb6667-w95sj" Nov 22 11:00:13 crc kubenswrapper[4772]: I1122 11:00:13.303704 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/13c1f859-42ed-484f-88cb-5349a7b64dda-public-tls-certs\") pod \"swift-proxy-57b6cb6667-w95sj\" (UID: \"13c1f859-42ed-484f-88cb-5349a7b64dda\") " pod="openstack/swift-proxy-57b6cb6667-w95sj" Nov 22 11:00:13 crc kubenswrapper[4772]: I1122 11:00:13.303732 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/13c1f859-42ed-484f-88cb-5349a7b64dda-internal-tls-certs\") pod \"swift-proxy-57b6cb6667-w95sj\" (UID: \"13c1f859-42ed-484f-88cb-5349a7b64dda\") " pod="openstack/swift-proxy-57b6cb6667-w95sj" Nov 22 11:00:13 crc kubenswrapper[4772]: I1122 11:00:13.303777 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13c1f859-42ed-484f-88cb-5349a7b64dda-combined-ca-bundle\") pod \"swift-proxy-57b6cb6667-w95sj\" (UID: \"13c1f859-42ed-484f-88cb-5349a7b64dda\") " 
pod="openstack/swift-proxy-57b6cb6667-w95sj" Nov 22 11:00:13 crc kubenswrapper[4772]: I1122 11:00:13.303796 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/13c1f859-42ed-484f-88cb-5349a7b64dda-etc-swift\") pod \"swift-proxy-57b6cb6667-w95sj\" (UID: \"13c1f859-42ed-484f-88cb-5349a7b64dda\") " pod="openstack/swift-proxy-57b6cb6667-w95sj" Nov 22 11:00:13 crc kubenswrapper[4772]: I1122 11:00:13.405489 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13c1f859-42ed-484f-88cb-5349a7b64dda-combined-ca-bundle\") pod \"swift-proxy-57b6cb6667-w95sj\" (UID: \"13c1f859-42ed-484f-88cb-5349a7b64dda\") " pod="openstack/swift-proxy-57b6cb6667-w95sj" Nov 22 11:00:13 crc kubenswrapper[4772]: I1122 11:00:13.405544 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/13c1f859-42ed-484f-88cb-5349a7b64dda-etc-swift\") pod \"swift-proxy-57b6cb6667-w95sj\" (UID: \"13c1f859-42ed-484f-88cb-5349a7b64dda\") " pod="openstack/swift-proxy-57b6cb6667-w95sj" Nov 22 11:00:13 crc kubenswrapper[4772]: I1122 11:00:13.405650 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/13c1f859-42ed-484f-88cb-5349a7b64dda-run-httpd\") pod \"swift-proxy-57b6cb6667-w95sj\" (UID: \"13c1f859-42ed-484f-88cb-5349a7b64dda\") " pod="openstack/swift-proxy-57b6cb6667-w95sj" Nov 22 11:00:13 crc kubenswrapper[4772]: I1122 11:00:13.405727 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxlr4\" (UniqueName: \"kubernetes.io/projected/13c1f859-42ed-484f-88cb-5349a7b64dda-kube-api-access-qxlr4\") pod \"swift-proxy-57b6cb6667-w95sj\" (UID: \"13c1f859-42ed-484f-88cb-5349a7b64dda\") " pod="openstack/swift-proxy-57b6cb6667-w95sj" Nov 22 11:00:13 crc kubenswrapper[4772]: I1122 11:00:13.405760 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13c1f859-42ed-484f-88cb-5349a7b64dda-config-data\") pod \"swift-proxy-57b6cb6667-w95sj\" (UID: \"13c1f859-42ed-484f-88cb-5349a7b64dda\") " pod="openstack/swift-proxy-57b6cb6667-w95sj" Nov 22 11:00:13 crc kubenswrapper[4772]: I1122 11:00:13.405796 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/13c1f859-42ed-484f-88cb-5349a7b64dda-log-httpd\") pod \"swift-proxy-57b6cb6667-w95sj\" (UID: \"13c1f859-42ed-484f-88cb-5349a7b64dda\") " pod="openstack/swift-proxy-57b6cb6667-w95sj" Nov 22 11:00:13 crc kubenswrapper[4772]: I1122 11:00:13.405831 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/13c1f859-42ed-484f-88cb-5349a7b64dda-public-tls-certs\") pod \"swift-proxy-57b6cb6667-w95sj\" (UID: \"13c1f859-42ed-484f-88cb-5349a7b64dda\") " pod="openstack/swift-proxy-57b6cb6667-w95sj" Nov 22 11:00:13 crc kubenswrapper[4772]: I1122 11:00:13.405871 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/13c1f859-42ed-484f-88cb-5349a7b64dda-internal-tls-certs\") pod \"swift-proxy-57b6cb6667-w95sj\" (UID: \"13c1f859-42ed-484f-88cb-5349a7b64dda\") " pod="openstack/swift-proxy-57b6cb6667-w95sj" Nov 22 
11:00:13 crc kubenswrapper[4772]: I1122 11:00:13.407615 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/13c1f859-42ed-484f-88cb-5349a7b64dda-run-httpd\") pod \"swift-proxy-57b6cb6667-w95sj\" (UID: \"13c1f859-42ed-484f-88cb-5349a7b64dda\") " pod="openstack/swift-proxy-57b6cb6667-w95sj" Nov 22 11:00:13 crc kubenswrapper[4772]: I1122 11:00:13.407639 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/13c1f859-42ed-484f-88cb-5349a7b64dda-log-httpd\") pod \"swift-proxy-57b6cb6667-w95sj\" (UID: \"13c1f859-42ed-484f-88cb-5349a7b64dda\") " pod="openstack/swift-proxy-57b6cb6667-w95sj" Nov 22 11:00:13 crc kubenswrapper[4772]: I1122 11:00:13.412940 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/13c1f859-42ed-484f-88cb-5349a7b64dda-etc-swift\") pod \"swift-proxy-57b6cb6667-w95sj\" (UID: \"13c1f859-42ed-484f-88cb-5349a7b64dda\") " pod="openstack/swift-proxy-57b6cb6667-w95sj" Nov 22 11:00:13 crc kubenswrapper[4772]: I1122 11:00:13.417489 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13c1f859-42ed-484f-88cb-5349a7b64dda-combined-ca-bundle\") pod \"swift-proxy-57b6cb6667-w95sj\" (UID: \"13c1f859-42ed-484f-88cb-5349a7b64dda\") " pod="openstack/swift-proxy-57b6cb6667-w95sj" Nov 22 11:00:13 crc kubenswrapper[4772]: I1122 11:00:13.420360 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/13c1f859-42ed-484f-88cb-5349a7b64dda-internal-tls-certs\") pod \"swift-proxy-57b6cb6667-w95sj\" (UID: \"13c1f859-42ed-484f-88cb-5349a7b64dda\") " pod="openstack/swift-proxy-57b6cb6667-w95sj" Nov 22 11:00:13 crc kubenswrapper[4772]: I1122 11:00:13.425148 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/13c1f859-42ed-484f-88cb-5349a7b64dda-public-tls-certs\") pod \"swift-proxy-57b6cb6667-w95sj\" (UID: \"13c1f859-42ed-484f-88cb-5349a7b64dda\") " pod="openstack/swift-proxy-57b6cb6667-w95sj" Nov 22 11:00:13 crc kubenswrapper[4772]: I1122 11:00:13.432970 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13c1f859-42ed-484f-88cb-5349a7b64dda-config-data\") pod \"swift-proxy-57b6cb6667-w95sj\" (UID: \"13c1f859-42ed-484f-88cb-5349a7b64dda\") " pod="openstack/swift-proxy-57b6cb6667-w95sj" Nov 22 11:00:13 crc kubenswrapper[4772]: I1122 11:00:13.437432 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxlr4\" (UniqueName: \"kubernetes.io/projected/13c1f859-42ed-484f-88cb-5349a7b64dda-kube-api-access-qxlr4\") pod \"swift-proxy-57b6cb6667-w95sj\" (UID: \"13c1f859-42ed-484f-88cb-5349a7b64dda\") " pod="openstack/swift-proxy-57b6cb6667-w95sj" Nov 22 11:00:13 crc kubenswrapper[4772]: I1122 11:00:13.439215 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7afc372e-c3cc-4fea-b62d-e5bfc5750fa8" path="/var/lib/kubelet/pods/7afc372e-c3cc-4fea-b62d-e5bfc5750fa8/volumes" Nov 22 11:00:13 crc kubenswrapper[4772]: I1122 11:00:13.576312 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-57b6cb6667-w95sj" Nov 22 11:00:14 crc kubenswrapper[4772]: W1122 11:00:14.177908 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13c1f859_42ed_484f_88cb_5349a7b64dda.slice/crio-5bd36ebd6bca70b4df2f24c903f872e6d9abb67e84bec4659a1b85154225ff22 WatchSource:0}: Error finding container 5bd36ebd6bca70b4df2f24c903f872e6d9abb67e84bec4659a1b85154225ff22: Status 404 returned error can't find the container with id 5bd36ebd6bca70b4df2f24c903f872e6d9abb67e84bec4659a1b85154225ff22 Nov 22 11:00:14 crc kubenswrapper[4772]: I1122 11:00:14.178337 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-57b6cb6667-w95sj"] Nov 22 11:00:15 crc kubenswrapper[4772]: I1122 11:00:15.077801 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-57b6cb6667-w95sj" event={"ID":"13c1f859-42ed-484f-88cb-5349a7b64dda","Type":"ContainerStarted","Data":"dfd79733bb340ac1878b2e319236d06e3fb8878f7900376f4a7fb9aa84b8711a"} Nov 22 11:00:15 crc kubenswrapper[4772]: I1122 11:00:15.078290 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-57b6cb6667-w95sj" Nov 22 11:00:15 crc kubenswrapper[4772]: I1122 11:00:15.078303 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-57b6cb6667-w95sj" event={"ID":"13c1f859-42ed-484f-88cb-5349a7b64dda","Type":"ContainerStarted","Data":"f2df47654803d93eba038dcb4866e8ad0d2e7d308fb39560cb0091e112aadb72"} Nov 22 11:00:15 crc kubenswrapper[4772]: I1122 11:00:15.078313 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-57b6cb6667-w95sj" event={"ID":"13c1f859-42ed-484f-88cb-5349a7b64dda","Type":"ContainerStarted","Data":"5bd36ebd6bca70b4df2f24c903f872e6d9abb67e84bec4659a1b85154225ff22"} Nov 22 11:00:15 crc kubenswrapper[4772]: I1122 11:00:15.078327 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-57b6cb6667-w95sj" Nov 22 11:00:15 crc kubenswrapper[4772]: I1122 11:00:15.102218 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-57b6cb6667-w95sj" podStartSLOduration=2.102191721 podStartE2EDuration="2.102191721s" podCreationTimestamp="2025-11-22 11:00:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 11:00:15.097107605 +0000 UTC m=+1335.336552099" watchObservedRunningTime="2025-11-22 11:00:15.102191721 +0000 UTC m=+1335.341636215" Nov 22 11:00:16 crc kubenswrapper[4772]: I1122 11:00:16.119804 4772 generic.go:334] "Generic (PLEG): container finished" podID="9577ec77-2954-4ff8-8de2-d965cce60a04" containerID="e389ebe4a07c1f1b25e9d6a0324b338156a0b6b9e44488c6f4da277efc2302ab" exitCode=0 Nov 22 11:00:16 crc kubenswrapper[4772]: I1122 11:00:16.123197 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9577ec77-2954-4ff8-8de2-d965cce60a04","Type":"ContainerDied","Data":"e389ebe4a07c1f1b25e9d6a0324b338156a0b6b9e44488c6f4da277efc2302ab"} Nov 22 11:00:16 crc kubenswrapper[4772]: I1122 11:00:16.329512 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 11:00:16 crc kubenswrapper[4772]: I1122 11:00:16.329839 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="6b8a1443-6167-4b78-8711-9e0a566004c7" containerName="ceilometer-central-agent" containerID="cri-o://84353ae768bfbd6529c9bfae429c833e439f2cc6df58b069a29438776ce61917" gracePeriod=30 Nov 22 11:00:16 crc kubenswrapper[4772]: I1122 11:00:16.329926 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6b8a1443-6167-4b78-8711-9e0a566004c7" containerName="sg-core" containerID="cri-o://581a123fdb71470fd3ebfa1e4ef00ccba778c4cfde04db7c91e1d8c565678271" gracePeriod=30 Nov 22 11:00:16 crc kubenswrapper[4772]: I1122 11:00:16.329949 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6b8a1443-6167-4b78-8711-9e0a566004c7" containerName="ceilometer-notification-agent" containerID="cri-o://64607a8e83f579d0f89ad96f482a1a7b7ce50a6b4017503ecb43a06b45d494b3" gracePeriod=30 Nov 22 11:00:16 crc kubenswrapper[4772]: I1122 11:00:16.330031 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6b8a1443-6167-4b78-8711-9e0a566004c7" containerName="proxy-httpd" containerID="cri-o://1d6ce511858533d850829cf51dfa197fbd6da2c32ff765fe832bc228aa3b01cc" gracePeriod=30 Nov 22 11:00:16 crc kubenswrapper[4772]: I1122 11:00:16.345793 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="6b8a1443-6167-4b78-8711-9e0a566004c7" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.151:3000/\": EOF" Nov 22 11:00:16 crc kubenswrapper[4772]: I1122 11:00:16.946775 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6876658948-bzr5z" Nov 22 11:00:17 crc kubenswrapper[4772]: I1122 11:00:17.139616 4772 generic.go:334] "Generic (PLEG): container finished" podID="6b8a1443-6167-4b78-8711-9e0a566004c7" containerID="1d6ce511858533d850829cf51dfa197fbd6da2c32ff765fe832bc228aa3b01cc" exitCode=0 Nov 22 11:00:17 crc kubenswrapper[4772]: I1122 11:00:17.139647 4772 generic.go:334] "Generic (PLEG): container finished" podID="6b8a1443-6167-4b78-8711-9e0a566004c7" containerID="581a123fdb71470fd3ebfa1e4ef00ccba778c4cfde04db7c91e1d8c565678271" exitCode=2 Nov 22 11:00:17 crc kubenswrapper[4772]: I1122 11:00:17.139654 4772 generic.go:334] "Generic (PLEG): container finished" podID="6b8a1443-6167-4b78-8711-9e0a566004c7" containerID="84353ae768bfbd6529c9bfae429c833e439f2cc6df58b069a29438776ce61917" exitCode=0 Nov 22 11:00:17 crc kubenswrapper[4772]: I1122 11:00:17.139674 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b8a1443-6167-4b78-8711-9e0a566004c7","Type":"ContainerDied","Data":"1d6ce511858533d850829cf51dfa197fbd6da2c32ff765fe832bc228aa3b01cc"} Nov 22 11:00:17 crc kubenswrapper[4772]: I1122 11:00:17.139700 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b8a1443-6167-4b78-8711-9e0a566004c7","Type":"ContainerDied","Data":"581a123fdb71470fd3ebfa1e4ef00ccba778c4cfde04db7c91e1d8c565678271"} Nov 22 11:00:17 crc kubenswrapper[4772]: I1122 11:00:17.139710 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b8a1443-6167-4b78-8711-9e0a566004c7","Type":"ContainerDied","Data":"84353ae768bfbd6529c9bfae429c833e439f2cc6df58b069a29438776ce61917"} Nov 22 11:00:18 crc kubenswrapper[4772]: I1122 11:00:18.311950 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-jr9dl"] Nov 22 11:00:18 crc 
kubenswrapper[4772]: I1122 11:00:18.315150 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-jr9dl" Nov 22 11:00:18 crc kubenswrapper[4772]: I1122 11:00:18.347952 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-jr9dl"] Nov 22 11:00:18 crc kubenswrapper[4772]: I1122 11:00:18.421411 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttqlj\" (UniqueName: \"kubernetes.io/projected/736d298e-3dc7-460e-a12e-bb29c4364e85-kube-api-access-ttqlj\") pod \"nova-api-db-create-jr9dl\" (UID: \"736d298e-3dc7-460e-a12e-bb29c4364e85\") " pod="openstack/nova-api-db-create-jr9dl" Nov 22 11:00:18 crc kubenswrapper[4772]: I1122 11:00:18.526287 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttqlj\" (UniqueName: \"kubernetes.io/projected/736d298e-3dc7-460e-a12e-bb29c4364e85-kube-api-access-ttqlj\") pod \"nova-api-db-create-jr9dl\" (UID: \"736d298e-3dc7-460e-a12e-bb29c4364e85\") " pod="openstack/nova-api-db-create-jr9dl" Nov 22 11:00:18 crc kubenswrapper[4772]: I1122 11:00:18.562673 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttqlj\" (UniqueName: \"kubernetes.io/projected/736d298e-3dc7-460e-a12e-bb29c4364e85-kube-api-access-ttqlj\") pod \"nova-api-db-create-jr9dl\" (UID: \"736d298e-3dc7-460e-a12e-bb29c4364e85\") " pod="openstack/nova-api-db-create-jr9dl" Nov 22 11:00:18 crc kubenswrapper[4772]: I1122 11:00:18.630237 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-c2l2l"] Nov 22 11:00:18 crc kubenswrapper[4772]: I1122 11:00:18.631490 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-c2l2l" Nov 22 11:00:18 crc kubenswrapper[4772]: I1122 11:00:18.645541 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-c2l2l"] Nov 22 11:00:18 crc kubenswrapper[4772]: I1122 11:00:18.645669 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-jr9dl" Nov 22 11:00:18 crc kubenswrapper[4772]: I1122 11:00:18.727531 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-qhm8v"] Nov 22 11:00:18 crc kubenswrapper[4772]: I1122 11:00:18.729302 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-qhm8v" Nov 22 11:00:18 crc kubenswrapper[4772]: I1122 11:00:18.734159 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zlpd\" (UniqueName: \"kubernetes.io/projected/7e3d510f-2f1d-4d21-ae11-55ba98067c9e-kube-api-access-4zlpd\") pod \"nova-cell0-db-create-c2l2l\" (UID: \"7e3d510f-2f1d-4d21-ae11-55ba98067c9e\") " pod="openstack/nova-cell0-db-create-c2l2l" Nov 22 11:00:18 crc kubenswrapper[4772]: I1122 11:00:18.736124 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-qhm8v"] Nov 22 11:00:18 crc kubenswrapper[4772]: I1122 11:00:18.835827 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zlpd\" (UniqueName: \"kubernetes.io/projected/7e3d510f-2f1d-4d21-ae11-55ba98067c9e-kube-api-access-4zlpd\") pod \"nova-cell0-db-create-c2l2l\" (UID: \"7e3d510f-2f1d-4d21-ae11-55ba98067c9e\") " pod="openstack/nova-cell0-db-create-c2l2l" Nov 22 11:00:18 crc kubenswrapper[4772]: I1122 11:00:18.835934 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2669p\" (UniqueName: \"kubernetes.io/projected/8aae7bbd-5de9-46d8-83d6-80f97bed0bf4-kube-api-access-2669p\") pod \"nova-cell1-db-create-qhm8v\" (UID: \"8aae7bbd-5de9-46d8-83d6-80f97bed0bf4\") " pod="openstack/nova-cell1-db-create-qhm8v" Nov 22 11:00:18 crc kubenswrapper[4772]: I1122 11:00:18.854019 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zlpd\" (UniqueName: \"kubernetes.io/projected/7e3d510f-2f1d-4d21-ae11-55ba98067c9e-kube-api-access-4zlpd\") pod \"nova-cell0-db-create-c2l2l\" (UID: \"7e3d510f-2f1d-4d21-ae11-55ba98067c9e\") " pod="openstack/nova-cell0-db-create-c2l2l" Nov 22 11:00:18 crc kubenswrapper[4772]: I1122 11:00:18.937525 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2669p\" (UniqueName: \"kubernetes.io/projected/8aae7bbd-5de9-46d8-83d6-80f97bed0bf4-kube-api-access-2669p\") pod \"nova-cell1-db-create-qhm8v\" (UID: \"8aae7bbd-5de9-46d8-83d6-80f97bed0bf4\") " pod="openstack/nova-cell1-db-create-qhm8v" Nov 22 11:00:18 crc kubenswrapper[4772]: I1122 11:00:18.949088 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Nov 22 11:00:18 crc kubenswrapper[4772]: I1122 11:00:18.961966 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-c2l2l" Nov 22 11:00:18 crc kubenswrapper[4772]: I1122 11:00:18.965736 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2669p\" (UniqueName: \"kubernetes.io/projected/8aae7bbd-5de9-46d8-83d6-80f97bed0bf4-kube-api-access-2669p\") pod \"nova-cell1-db-create-qhm8v\" (UID: \"8aae7bbd-5de9-46d8-83d6-80f97bed0bf4\") " pod="openstack/nova-cell1-db-create-qhm8v" Nov 22 11:00:19 crc kubenswrapper[4772]: I1122 11:00:19.046522 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-qhm8v" Nov 22 11:00:19 crc kubenswrapper[4772]: I1122 11:00:19.116548 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6876658948-bzr5z" Nov 22 11:00:19 crc kubenswrapper[4772]: I1122 11:00:19.194170 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7dccb6fcbd-htjnx"] Nov 22 11:00:19 crc kubenswrapper[4772]: I1122 11:00:19.194400 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7dccb6fcbd-htjnx" podUID="1485651d-d1ff-4ef4-88fe-0ab6dd041df4" containerName="barbican-api-log" containerID="cri-o://5fc52c3e974bad0e0ae96bb7df63a71487c1b5453308b07ef82a29d625efa8d5" gracePeriod=30 Nov 22 11:00:19 crc kubenswrapper[4772]: I1122 11:00:19.194540 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7dccb6fcbd-htjnx" podUID="1485651d-d1ff-4ef4-88fe-0ab6dd041df4" containerName="barbican-api" containerID="cri-o://37a9d8a310b935b1d510763c6e8587e3aeae6450c618faa11c5eb2043b5a0031" gracePeriod=30 Nov 22 11:00:20 crc kubenswrapper[4772]: I1122 11:00:20.175649 4772 generic.go:334] "Generic (PLEG): container finished" podID="1485651d-d1ff-4ef4-88fe-0ab6dd041df4" containerID="5fc52c3e974bad0e0ae96bb7df63a71487c1b5453308b07ef82a29d625efa8d5" exitCode=143 Nov 22 11:00:20 crc kubenswrapper[4772]: I1122 11:00:20.175915 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7dccb6fcbd-htjnx" event={"ID":"1485651d-d1ff-4ef4-88fe-0ab6dd041df4","Type":"ContainerDied","Data":"5fc52c3e974bad0e0ae96bb7df63a71487c1b5453308b07ef82a29d625efa8d5"} Nov 22 11:00:21 crc kubenswrapper[4772]: I1122 11:00:21.192092 4772 generic.go:334] "Generic (PLEG): container finished" podID="6b8a1443-6167-4b78-8711-9e0a566004c7" containerID="64607a8e83f579d0f89ad96f482a1a7b7ce50a6b4017503ecb43a06b45d494b3" exitCode=0 Nov 22 11:00:21 crc kubenswrapper[4772]: I1122 11:00:21.192110 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b8a1443-6167-4b78-8711-9e0a566004c7","Type":"ContainerDied","Data":"64607a8e83f579d0f89ad96f482a1a7b7ce50a6b4017503ecb43a06b45d494b3"} Nov 22 11:00:23 crc kubenswrapper[4772]: I1122 11:00:23.229855 4772 generic.go:334] "Generic (PLEG): container finished" podID="1485651d-d1ff-4ef4-88fe-0ab6dd041df4" containerID="37a9d8a310b935b1d510763c6e8587e3aeae6450c618faa11c5eb2043b5a0031" exitCode=0 Nov 22 11:00:23 crc kubenswrapper[4772]: I1122 11:00:23.230112 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7dccb6fcbd-htjnx" event={"ID":"1485651d-d1ff-4ef4-88fe-0ab6dd041df4","Type":"ContainerDied","Data":"37a9d8a310b935b1d510763c6e8587e3aeae6450c618faa11c5eb2043b5a0031"} Nov 22 11:00:23 crc kubenswrapper[4772]: I1122 11:00:23.426366 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 22 11:00:23 crc kubenswrapper[4772]: I1122 11:00:23.547890 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9577ec77-2954-4ff8-8de2-d965cce60a04-config-data\") pod \"9577ec77-2954-4ff8-8de2-d965cce60a04\" (UID: \"9577ec77-2954-4ff8-8de2-d965cce60a04\") " Nov 22 11:00:23 crc kubenswrapper[4772]: I1122 11:00:23.548176 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9577ec77-2954-4ff8-8de2-d965cce60a04-scripts\") pod \"9577ec77-2954-4ff8-8de2-d965cce60a04\" (UID: \"9577ec77-2954-4ff8-8de2-d965cce60a04\") " Nov 22 11:00:23 crc kubenswrapper[4772]: I1122 11:00:23.548211 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9577ec77-2954-4ff8-8de2-d965cce60a04-etc-machine-id\") pod \"9577ec77-2954-4ff8-8de2-d965cce60a04\" (UID: \"9577ec77-2954-4ff8-8de2-d965cce60a04\") " Nov 22 11:00:23 crc kubenswrapper[4772]: I1122 11:00:23.548345 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9577ec77-2954-4ff8-8de2-d965cce60a04-combined-ca-bundle\") pod \"9577ec77-2954-4ff8-8de2-d965cce60a04\" (UID: \"9577ec77-2954-4ff8-8de2-d965cce60a04\") " Nov 22 11:00:23 crc kubenswrapper[4772]: I1122 11:00:23.548416 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9577ec77-2954-4ff8-8de2-d965cce60a04-config-data-custom\") pod \"9577ec77-2954-4ff8-8de2-d965cce60a04\" (UID: \"9577ec77-2954-4ff8-8de2-d965cce60a04\") " Nov 22 11:00:23 crc kubenswrapper[4772]: I1122 11:00:23.548458 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8t4sw\" (UniqueName: \"kubernetes.io/projected/9577ec77-2954-4ff8-8de2-d965cce60a04-kube-api-access-8t4sw\") pod \"9577ec77-2954-4ff8-8de2-d965cce60a04\" (UID: \"9577ec77-2954-4ff8-8de2-d965cce60a04\") " Nov 22 11:00:23 crc kubenswrapper[4772]: I1122 11:00:23.549583 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9577ec77-2954-4ff8-8de2-d965cce60a04-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "9577ec77-2954-4ff8-8de2-d965cce60a04" (UID: "9577ec77-2954-4ff8-8de2-d965cce60a04"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 11:00:23 crc kubenswrapper[4772]: I1122 11:00:23.555134 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 11:00:23 crc kubenswrapper[4772]: I1122 11:00:23.559389 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9577ec77-2954-4ff8-8de2-d965cce60a04-kube-api-access-8t4sw" (OuterVolumeSpecName: "kube-api-access-8t4sw") pod "9577ec77-2954-4ff8-8de2-d965cce60a04" (UID: "9577ec77-2954-4ff8-8de2-d965cce60a04"). InnerVolumeSpecName "kube-api-access-8t4sw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:00:23 crc kubenswrapper[4772]: I1122 11:00:23.561165 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9577ec77-2954-4ff8-8de2-d965cce60a04-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "9577ec77-2954-4ff8-8de2-d965cce60a04" (UID: "9577ec77-2954-4ff8-8de2-d965cce60a04"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:00:23 crc kubenswrapper[4772]: I1122 11:00:23.570390 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9577ec77-2954-4ff8-8de2-d965cce60a04-scripts" (OuterVolumeSpecName: "scripts") pod "9577ec77-2954-4ff8-8de2-d965cce60a04" (UID: "9577ec77-2954-4ff8-8de2-d965cce60a04"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:00:23 crc kubenswrapper[4772]: I1122 11:00:23.572760 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7dccb6fcbd-htjnx" Nov 22 11:00:23 crc kubenswrapper[4772]: I1122 11:00:23.594881 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-57b6cb6667-w95sj" Nov 22 11:00:23 crc kubenswrapper[4772]: I1122 11:00:23.597002 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-57b6cb6667-w95sj" Nov 22 11:00:23 crc kubenswrapper[4772]: I1122 11:00:23.652061 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qnjt\" (UniqueName: \"kubernetes.io/projected/1485651d-d1ff-4ef4-88fe-0ab6dd041df4-kube-api-access-9qnjt\") pod \"1485651d-d1ff-4ef4-88fe-0ab6dd041df4\" (UID: \"1485651d-d1ff-4ef4-88fe-0ab6dd041df4\") " Nov 22 11:00:23 crc kubenswrapper[4772]: I1122 11:00:23.652112 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b8a1443-6167-4b78-8711-9e0a566004c7-config-data\") pod \"6b8a1443-6167-4b78-8711-9e0a566004c7\" (UID: \"6b8a1443-6167-4b78-8711-9e0a566004c7\") " Nov 22 11:00:23 crc kubenswrapper[4772]: I1122 11:00:23.652183 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1485651d-d1ff-4ef4-88fe-0ab6dd041df4-combined-ca-bundle\") pod \"1485651d-d1ff-4ef4-88fe-0ab6dd041df4\" (UID: \"1485651d-d1ff-4ef4-88fe-0ab6dd041df4\") " Nov 22 11:00:23 crc kubenswrapper[4772]: I1122 11:00:23.652212 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b8a1443-6167-4b78-8711-9e0a566004c7-log-httpd\") pod \"6b8a1443-6167-4b78-8711-9e0a566004c7\" (UID: \"6b8a1443-6167-4b78-8711-9e0a566004c7\") " Nov 22 11:00:23 crc kubenswrapper[4772]: I1122 11:00:23.652353 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b8a1443-6167-4b78-8711-9e0a566004c7-combined-ca-bundle\") pod \"6b8a1443-6167-4b78-8711-9e0a566004c7\" (UID: \"6b8a1443-6167-4b78-8711-9e0a566004c7\") " Nov 22 11:00:23 crc kubenswrapper[4772]: I1122 11:00:23.652388 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6b8a1443-6167-4b78-8711-9e0a566004c7-sg-core-conf-yaml\") pod 
\"6b8a1443-6167-4b78-8711-9e0a566004c7\" (UID: \"6b8a1443-6167-4b78-8711-9e0a566004c7\") " Nov 22 11:00:23 crc kubenswrapper[4772]: I1122 11:00:23.652428 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbhm6\" (UniqueName: \"kubernetes.io/projected/6b8a1443-6167-4b78-8711-9e0a566004c7-kube-api-access-hbhm6\") pod \"6b8a1443-6167-4b78-8711-9e0a566004c7\" (UID: \"6b8a1443-6167-4b78-8711-9e0a566004c7\") " Nov 22 11:00:23 crc kubenswrapper[4772]: I1122 11:00:23.652444 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b8a1443-6167-4b78-8711-9e0a566004c7-scripts\") pod \"6b8a1443-6167-4b78-8711-9e0a566004c7\" (UID: \"6b8a1443-6167-4b78-8711-9e0a566004c7\") " Nov 22 11:00:23 crc kubenswrapper[4772]: I1122 11:00:23.652482 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b8a1443-6167-4b78-8711-9e0a566004c7-run-httpd\") pod \"6b8a1443-6167-4b78-8711-9e0a566004c7\" (UID: \"6b8a1443-6167-4b78-8711-9e0a566004c7\") " Nov 22 11:00:23 crc kubenswrapper[4772]: I1122 11:00:23.652698 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1485651d-d1ff-4ef4-88fe-0ab6dd041df4-logs\") pod \"1485651d-d1ff-4ef4-88fe-0ab6dd041df4\" (UID: \"1485651d-d1ff-4ef4-88fe-0ab6dd041df4\") " Nov 22 11:00:23 crc kubenswrapper[4772]: I1122 11:00:23.652743 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1485651d-d1ff-4ef4-88fe-0ab6dd041df4-config-data\") pod \"1485651d-d1ff-4ef4-88fe-0ab6dd041df4\" (UID: \"1485651d-d1ff-4ef4-88fe-0ab6dd041df4\") " Nov 22 11:00:23 crc kubenswrapper[4772]: I1122 11:00:23.652762 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1485651d-d1ff-4ef4-88fe-0ab6dd041df4-config-data-custom\") pod \"1485651d-d1ff-4ef4-88fe-0ab6dd041df4\" (UID: \"1485651d-d1ff-4ef4-88fe-0ab6dd041df4\") " Nov 22 11:00:23 crc kubenswrapper[4772]: I1122 11:00:23.653149 4772 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9577ec77-2954-4ff8-8de2-d965cce60a04-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:23 crc kubenswrapper[4772]: I1122 11:00:23.653163 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8t4sw\" (UniqueName: \"kubernetes.io/projected/9577ec77-2954-4ff8-8de2-d965cce60a04-kube-api-access-8t4sw\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:23 crc kubenswrapper[4772]: I1122 11:00:23.653174 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9577ec77-2954-4ff8-8de2-d965cce60a04-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:23 crc kubenswrapper[4772]: I1122 11:00:23.653183 4772 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9577ec77-2954-4ff8-8de2-d965cce60a04-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:23 crc kubenswrapper[4772]: I1122 11:00:23.653363 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b8a1443-6167-4b78-8711-9e0a566004c7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6b8a1443-6167-4b78-8711-9e0a566004c7" (UID: 
"6b8a1443-6167-4b78-8711-9e0a566004c7"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:00:23 crc kubenswrapper[4772]: I1122 11:00:23.653540 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b8a1443-6167-4b78-8711-9e0a566004c7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6b8a1443-6167-4b78-8711-9e0a566004c7" (UID: "6b8a1443-6167-4b78-8711-9e0a566004c7"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:00:23 crc kubenswrapper[4772]: I1122 11:00:23.654025 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1485651d-d1ff-4ef4-88fe-0ab6dd041df4-logs" (OuterVolumeSpecName: "logs") pod "1485651d-d1ff-4ef4-88fe-0ab6dd041df4" (UID: "1485651d-d1ff-4ef4-88fe-0ab6dd041df4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:00:23 crc kubenswrapper[4772]: I1122 11:00:23.659942 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1485651d-d1ff-4ef4-88fe-0ab6dd041df4-kube-api-access-9qnjt" (OuterVolumeSpecName: "kube-api-access-9qnjt") pod "1485651d-d1ff-4ef4-88fe-0ab6dd041df4" (UID: "1485651d-d1ff-4ef4-88fe-0ab6dd041df4"). InnerVolumeSpecName "kube-api-access-9qnjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:00:23 crc kubenswrapper[4772]: I1122 11:00:23.663301 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b8a1443-6167-4b78-8711-9e0a566004c7-kube-api-access-hbhm6" (OuterVolumeSpecName: "kube-api-access-hbhm6") pod "6b8a1443-6167-4b78-8711-9e0a566004c7" (UID: "6b8a1443-6167-4b78-8711-9e0a566004c7"). InnerVolumeSpecName "kube-api-access-hbhm6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:00:23 crc kubenswrapper[4772]: I1122 11:00:23.663642 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b8a1443-6167-4b78-8711-9e0a566004c7-scripts" (OuterVolumeSpecName: "scripts") pod "6b8a1443-6167-4b78-8711-9e0a566004c7" (UID: "6b8a1443-6167-4b78-8711-9e0a566004c7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:00:23 crc kubenswrapper[4772]: I1122 11:00:23.670838 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9577ec77-2954-4ff8-8de2-d965cce60a04-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9577ec77-2954-4ff8-8de2-d965cce60a04" (UID: "9577ec77-2954-4ff8-8de2-d965cce60a04"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:00:23 crc kubenswrapper[4772]: I1122 11:00:23.671239 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1485651d-d1ff-4ef4-88fe-0ab6dd041df4-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1485651d-d1ff-4ef4-88fe-0ab6dd041df4" (UID: "1485651d-d1ff-4ef4-88fe-0ab6dd041df4"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:00:23 crc kubenswrapper[4772]: I1122 11:00:23.689590 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-jr9dl"] Nov 22 11:00:23 crc kubenswrapper[4772]: W1122 11:00:23.719936 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod736d298e_3dc7_460e_a12e_bb29c4364e85.slice/crio-58c2486511aa4ca2f3f86bb6abfe73f572735dcf89a36f70420c05743ea1ee65 WatchSource:0}: Error finding container 58c2486511aa4ca2f3f86bb6abfe73f572735dcf89a36f70420c05743ea1ee65: Status 404 returned error can't find the container with id 58c2486511aa4ca2f3f86bb6abfe73f572735dcf89a36f70420c05743ea1ee65 Nov 22 11:00:23 crc kubenswrapper[4772]: I1122 11:00:23.734430 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b8a1443-6167-4b78-8711-9e0a566004c7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6b8a1443-6167-4b78-8711-9e0a566004c7" (UID: "6b8a1443-6167-4b78-8711-9e0a566004c7"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:00:23 crc kubenswrapper[4772]: I1122 11:00:23.748323 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1485651d-d1ff-4ef4-88fe-0ab6dd041df4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1485651d-d1ff-4ef4-88fe-0ab6dd041df4" (UID: "1485651d-d1ff-4ef4-88fe-0ab6dd041df4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:00:23 crc kubenswrapper[4772]: I1122 11:00:23.752671 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9577ec77-2954-4ff8-8de2-d965cce60a04-config-data" (OuterVolumeSpecName: "config-data") pod "9577ec77-2954-4ff8-8de2-d965cce60a04" (UID: "9577ec77-2954-4ff8-8de2-d965cce60a04"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:00:23 crc kubenswrapper[4772]: I1122 11:00:23.755196 4772 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b8a1443-6167-4b78-8711-9e0a566004c7-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:23 crc kubenswrapper[4772]: I1122 11:00:23.755228 4772 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1485651d-d1ff-4ef4-88fe-0ab6dd041df4-logs\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:23 crc kubenswrapper[4772]: I1122 11:00:23.755239 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9577ec77-2954-4ff8-8de2-d965cce60a04-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:23 crc kubenswrapper[4772]: I1122 11:00:23.755250 4772 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1485651d-d1ff-4ef4-88fe-0ab6dd041df4-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:23 crc kubenswrapper[4772]: I1122 11:00:23.755259 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qnjt\" (UniqueName: \"kubernetes.io/projected/1485651d-d1ff-4ef4-88fe-0ab6dd041df4-kube-api-access-9qnjt\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:23 crc kubenswrapper[4772]: I1122 11:00:23.755268 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1485651d-d1ff-4ef4-88fe-0ab6dd041df4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:23 crc kubenswrapper[4772]: I1122 11:00:23.755276 4772 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b8a1443-6167-4b78-8711-9e0a566004c7-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:23 crc kubenswrapper[4772]: I1122 11:00:23.755284 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9577ec77-2954-4ff8-8de2-d965cce60a04-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:23 crc kubenswrapper[4772]: I1122 11:00:23.755291 4772 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6b8a1443-6167-4b78-8711-9e0a566004c7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:23 crc kubenswrapper[4772]: I1122 11:00:23.755299 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b8a1443-6167-4b78-8711-9e0a566004c7-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:23 crc kubenswrapper[4772]: I1122 11:00:23.755307 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbhm6\" (UniqueName: \"kubernetes.io/projected/6b8a1443-6167-4b78-8711-9e0a566004c7-kube-api-access-hbhm6\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:23 crc kubenswrapper[4772]: I1122 11:00:23.758474 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1485651d-d1ff-4ef4-88fe-0ab6dd041df4-config-data" (OuterVolumeSpecName: "config-data") pod "1485651d-d1ff-4ef4-88fe-0ab6dd041df4" (UID: "1485651d-d1ff-4ef4-88fe-0ab6dd041df4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:00:23 crc kubenswrapper[4772]: I1122 11:00:23.782770 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b8a1443-6167-4b78-8711-9e0a566004c7-config-data" (OuterVolumeSpecName: "config-data") pod "6b8a1443-6167-4b78-8711-9e0a566004c7" (UID: "6b8a1443-6167-4b78-8711-9e0a566004c7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:00:23 crc kubenswrapper[4772]: I1122 11:00:23.795984 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b8a1443-6167-4b78-8711-9e0a566004c7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6b8a1443-6167-4b78-8711-9e0a566004c7" (UID: "6b8a1443-6167-4b78-8711-9e0a566004c7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:00:23 crc kubenswrapper[4772]: I1122 11:00:23.827634 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-qhm8v"] Nov 22 11:00:23 crc kubenswrapper[4772]: I1122 11:00:23.836403 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-c2l2l"] Nov 22 11:00:23 crc kubenswrapper[4772]: I1122 11:00:23.857460 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b8a1443-6167-4b78-8711-9e0a566004c7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:23 crc kubenswrapper[4772]: I1122 11:00:23.857500 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1485651d-d1ff-4ef4-88fe-0ab6dd041df4-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:23 crc kubenswrapper[4772]: I1122 11:00:23.857513 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b8a1443-6167-4b78-8711-9e0a566004c7-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.243648 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"a4e681ba-088a-41b1-9b89-8bac928038e5","Type":"ContainerStarted","Data":"94c9532e47a3e8f2deba93d357f982767f3bc9fd612be2d3ed8cd1f182488992"} Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.247368 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-jr9dl" event={"ID":"736d298e-3dc7-460e-a12e-bb29c4364e85","Type":"ContainerStarted","Data":"4471a034f975c2eb8a95db8d1c456f43dd6aee03a3d67c268c8b29fe4e53ac0a"} Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.247409 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-jr9dl" event={"ID":"736d298e-3dc7-460e-a12e-bb29c4364e85","Type":"ContainerStarted","Data":"58c2486511aa4ca2f3f86bb6abfe73f572735dcf89a36f70420c05743ea1ee65"} Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.249248 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-qhm8v" event={"ID":"8aae7bbd-5de9-46d8-83d6-80f97bed0bf4","Type":"ContainerStarted","Data":"9210427ecc8309d2dac4e2a1e4343641effc83bde83c8deb3bc9b80d64ac72cf"} Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.249295 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-qhm8v" 
event={"ID":"8aae7bbd-5de9-46d8-83d6-80f97bed0bf4","Type":"ContainerStarted","Data":"f19da6969460936b775dd3e0146678cd6c8ad3d14467bcb7a66031ece47f2796"} Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.252858 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b8a1443-6167-4b78-8711-9e0a566004c7","Type":"ContainerDied","Data":"1781b0598eab480bd2372bbd48dfff86da315214aefb2bfd4e6cd8428d7c15bc"} Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.252907 4772 scope.go:117] "RemoveContainer" containerID="1d6ce511858533d850829cf51dfa197fbd6da2c32ff765fe832bc228aa3b01cc" Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.252914 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.267645 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-c2l2l" event={"ID":"7e3d510f-2f1d-4d21-ae11-55ba98067c9e","Type":"ContainerStarted","Data":"9711b2f630abd82d5d414ec59de5c0a41437bd4931a17976d02655285045660b"} Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.267708 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-c2l2l" event={"ID":"7e3d510f-2f1d-4d21-ae11-55ba98067c9e","Type":"ContainerStarted","Data":"1e652ef89a4db75805ae3f92d54db540e9174b6a7b09825b66d00b7503c6428b"} Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.267946 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.462946727 podStartE2EDuration="19.267673132s" podCreationTimestamp="2025-11-22 11:00:05 +0000 UTC" firstStartedPulling="2025-11-22 11:00:06.351646086 +0000 UTC m=+1326.591090580" lastFinishedPulling="2025-11-22 11:00:23.156372491 +0000 UTC m=+1343.395816985" observedRunningTime="2025-11-22 11:00:24.258512935 +0000 UTC m=+1344.497957429" watchObservedRunningTime="2025-11-22 11:00:24.267673132 +0000 UTC m=+1344.507117626" Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.270277 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7dccb6fcbd-htjnx" event={"ID":"1485651d-d1ff-4ef4-88fe-0ab6dd041df4","Type":"ContainerDied","Data":"72b6bb1796f3b280093f3b9133455e17c3e742ef94cbedc7c831b881f29373bc"} Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.270393 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7dccb6fcbd-htjnx" Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.272326 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9577ec77-2954-4ff8-8de2-d965cce60a04","Type":"ContainerDied","Data":"73c391969f98a95e8c3fc079f910500fd6491674eb5890b8af2fa78449fd9b2d"} Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.272376 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.308663 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-qhm8v" podStartSLOduration=6.30864126 podStartE2EDuration="6.30864126s" podCreationTimestamp="2025-11-22 11:00:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 11:00:24.29254853 +0000 UTC m=+1344.531993024" watchObservedRunningTime="2025-11-22 11:00:24.30864126 +0000 UTC m=+1344.548085754" Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.319679 4772 scope.go:117] "RemoveContainer" containerID="581a123fdb71470fd3ebfa1e4ef00ccba778c4cfde04db7c91e1d8c565678271" Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.322683 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-c2l2l" podStartSLOduration=6.322660438 podStartE2EDuration="6.322660438s" podCreationTimestamp="2025-11-22 11:00:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 11:00:24.320107904 +0000 UTC m=+1344.559552398" watchObservedRunningTime="2025-11-22 11:00:24.322660438 +0000 UTC m=+1344.562104942" Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.346350 4772 scope.go:117] "RemoveContainer" containerID="64607a8e83f579d0f89ad96f482a1a7b7ce50a6b4017503ecb43a06b45d494b3" Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.349323 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.363124 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.373115 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7dccb6fcbd-htjnx"] Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.381985 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-7dccb6fcbd-htjnx"] Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.383063 4772 scope.go:117] "RemoveContainer" containerID="84353ae768bfbd6529c9bfae429c833e439f2cc6df58b069a29438776ce61917" Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.398175 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 22 11:00:24 crc kubenswrapper[4772]: E1122 11:00:24.398757 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1485651d-d1ff-4ef4-88fe-0ab6dd041df4" containerName="barbican-api" Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.398783 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="1485651d-d1ff-4ef4-88fe-0ab6dd041df4" containerName="barbican-api" Nov 22 11:00:24 crc kubenswrapper[4772]: E1122 11:00:24.398799 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b8a1443-6167-4b78-8711-9e0a566004c7" containerName="sg-core" Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.398808 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b8a1443-6167-4b78-8711-9e0a566004c7" containerName="sg-core" Nov 22 11:00:24 crc kubenswrapper[4772]: E1122 11:00:24.398831 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9577ec77-2954-4ff8-8de2-d965cce60a04" containerName="probe" Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.398839 4772 
state_mem.go:107] "Deleted CPUSet assignment" podUID="9577ec77-2954-4ff8-8de2-d965cce60a04" containerName="probe" Nov 22 11:00:24 crc kubenswrapper[4772]: E1122 11:00:24.398866 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b8a1443-6167-4b78-8711-9e0a566004c7" containerName="ceilometer-notification-agent" Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.398874 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b8a1443-6167-4b78-8711-9e0a566004c7" containerName="ceilometer-notification-agent" Nov 22 11:00:24 crc kubenswrapper[4772]: E1122 11:00:24.398891 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b8a1443-6167-4b78-8711-9e0a566004c7" containerName="ceilometer-central-agent" Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.398899 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b8a1443-6167-4b78-8711-9e0a566004c7" containerName="ceilometer-central-agent" Nov 22 11:00:24 crc kubenswrapper[4772]: E1122 11:00:24.398913 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b8a1443-6167-4b78-8711-9e0a566004c7" containerName="proxy-httpd" Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.398921 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b8a1443-6167-4b78-8711-9e0a566004c7" containerName="proxy-httpd" Nov 22 11:00:24 crc kubenswrapper[4772]: E1122 11:00:24.398932 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1485651d-d1ff-4ef4-88fe-0ab6dd041df4" containerName="barbican-api-log" Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.398939 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="1485651d-d1ff-4ef4-88fe-0ab6dd041df4" containerName="barbican-api-log" Nov 22 11:00:24 crc kubenswrapper[4772]: E1122 11:00:24.398953 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9577ec77-2954-4ff8-8de2-d965cce60a04" containerName="cinder-scheduler" Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.398961 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="9577ec77-2954-4ff8-8de2-d965cce60a04" containerName="cinder-scheduler" Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.399198 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="1485651d-d1ff-4ef4-88fe-0ab6dd041df4" containerName="barbican-api-log" Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.399217 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="9577ec77-2954-4ff8-8de2-d965cce60a04" containerName="probe" Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.399228 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b8a1443-6167-4b78-8711-9e0a566004c7" containerName="proxy-httpd" Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.399249 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="1485651d-d1ff-4ef4-88fe-0ab6dd041df4" containerName="barbican-api" Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.399260 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="9577ec77-2954-4ff8-8de2-d965cce60a04" containerName="cinder-scheduler" Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.399274 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b8a1443-6167-4b78-8711-9e0a566004c7" containerName="sg-core" Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.399288 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b8a1443-6167-4b78-8711-9e0a566004c7" containerName="ceilometer-notification-agent" 
Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.399297 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b8a1443-6167-4b78-8711-9e0a566004c7" containerName="ceilometer-central-agent" Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.400564 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.408558 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.432117 4772 scope.go:117] "RemoveContainer" containerID="37a9d8a310b935b1d510763c6e8587e3aeae6450c618faa11c5eb2043b5a0031" Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.437025 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.471128 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45d574ce-36bc-461c-a85a-738b71392ed6-scripts\") pod \"cinder-scheduler-0\" (UID: \"45d574ce-36bc-461c-a85a-738b71392ed6\") " pod="openstack/cinder-scheduler-0" Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.471177 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45d574ce-36bc-461c-a85a-738b71392ed6-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"45d574ce-36bc-461c-a85a-738b71392ed6\") " pod="openstack/cinder-scheduler-0" Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.471197 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n45wg\" (UniqueName: \"kubernetes.io/projected/45d574ce-36bc-461c-a85a-738b71392ed6-kube-api-access-n45wg\") pod \"cinder-scheduler-0\" (UID: \"45d574ce-36bc-461c-a85a-738b71392ed6\") " pod="openstack/cinder-scheduler-0" Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.471271 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/45d574ce-36bc-461c-a85a-738b71392ed6-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"45d574ce-36bc-461c-a85a-738b71392ed6\") " pod="openstack/cinder-scheduler-0" Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.471292 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/45d574ce-36bc-461c-a85a-738b71392ed6-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"45d574ce-36bc-461c-a85a-738b71392ed6\") " pod="openstack/cinder-scheduler-0" Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.471348 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45d574ce-36bc-461c-a85a-738b71392ed6-config-data\") pod \"cinder-scheduler-0\" (UID: \"45d574ce-36bc-461c-a85a-738b71392ed6\") " pod="openstack/cinder-scheduler-0" Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.477378 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.487698 4772 scope.go:117] "RemoveContainer" containerID="5fc52c3e974bad0e0ae96bb7df63a71487c1b5453308b07ef82a29d625efa8d5" Nov 22 11:00:24 
crc kubenswrapper[4772]: I1122 11:00:24.505448 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.517372 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.519703 4772 scope.go:117] "RemoveContainer" containerID="edf287f192fac6a322cb61e002992b92b76c1263b781b148976931d93cd1b8b0" Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.520237 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.522651 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.524244 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.557227 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.573281 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45d574ce-36bc-461c-a85a-738b71392ed6-config-data\") pod \"cinder-scheduler-0\" (UID: \"45d574ce-36bc-461c-a85a-738b71392ed6\") " pod="openstack/cinder-scheduler-0" Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.573400 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45d574ce-36bc-461c-a85a-738b71392ed6-scripts\") pod \"cinder-scheduler-0\" (UID: \"45d574ce-36bc-461c-a85a-738b71392ed6\") " pod="openstack/cinder-scheduler-0" Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.573442 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45d574ce-36bc-461c-a85a-738b71392ed6-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"45d574ce-36bc-461c-a85a-738b71392ed6\") " pod="openstack/cinder-scheduler-0" Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.573477 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n45wg\" (UniqueName: \"kubernetes.io/projected/45d574ce-36bc-461c-a85a-738b71392ed6-kube-api-access-n45wg\") pod \"cinder-scheduler-0\" (UID: \"45d574ce-36bc-461c-a85a-738b71392ed6\") " pod="openstack/cinder-scheduler-0" Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.573584 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/45d574ce-36bc-461c-a85a-738b71392ed6-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"45d574ce-36bc-461c-a85a-738b71392ed6\") " pod="openstack/cinder-scheduler-0" Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.573640 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/45d574ce-36bc-461c-a85a-738b71392ed6-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"45d574ce-36bc-461c-a85a-738b71392ed6\") " pod="openstack/cinder-scheduler-0" Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.574334 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/45d574ce-36bc-461c-a85a-738b71392ed6-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"45d574ce-36bc-461c-a85a-738b71392ed6\") " pod="openstack/cinder-scheduler-0" Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.580427 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/45d574ce-36bc-461c-a85a-738b71392ed6-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"45d574ce-36bc-461c-a85a-738b71392ed6\") " pod="openstack/cinder-scheduler-0" Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.580449 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45d574ce-36bc-461c-a85a-738b71392ed6-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"45d574ce-36bc-461c-a85a-738b71392ed6\") " pod="openstack/cinder-scheduler-0" Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.585005 4772 scope.go:117] "RemoveContainer" containerID="e389ebe4a07c1f1b25e9d6a0324b338156a0b6b9e44488c6f4da277efc2302ab" Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.593648 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45d574ce-36bc-461c-a85a-738b71392ed6-scripts\") pod \"cinder-scheduler-0\" (UID: \"45d574ce-36bc-461c-a85a-738b71392ed6\") " pod="openstack/cinder-scheduler-0" Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.595941 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n45wg\" (UniqueName: \"kubernetes.io/projected/45d574ce-36bc-461c-a85a-738b71392ed6-kube-api-access-n45wg\") pod \"cinder-scheduler-0\" (UID: \"45d574ce-36bc-461c-a85a-738b71392ed6\") " pod="openstack/cinder-scheduler-0" Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.598907 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45d574ce-36bc-461c-a85a-738b71392ed6-config-data\") pod \"cinder-scheduler-0\" (UID: \"45d574ce-36bc-461c-a85a-738b71392ed6\") " pod="openstack/cinder-scheduler-0" Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.675168 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrvmn\" (UniqueName: \"kubernetes.io/projected/8c8a7b5b-7f02-4a34-b793-59b2be1043b7-kube-api-access-mrvmn\") pod \"ceilometer-0\" (UID: \"8c8a7b5b-7f02-4a34-b793-59b2be1043b7\") " pod="openstack/ceilometer-0" Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.675246 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c8a7b5b-7f02-4a34-b793-59b2be1043b7-config-data\") pod \"ceilometer-0\" (UID: \"8c8a7b5b-7f02-4a34-b793-59b2be1043b7\") " pod="openstack/ceilometer-0" Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.675277 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c8a7b5b-7f02-4a34-b793-59b2be1043b7-log-httpd\") pod \"ceilometer-0\" (UID: \"8c8a7b5b-7f02-4a34-b793-59b2be1043b7\") " pod="openstack/ceilometer-0" Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.675320 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8c8a7b5b-7f02-4a34-b793-59b2be1043b7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8c8a7b5b-7f02-4a34-b793-59b2be1043b7\") " pod="openstack/ceilometer-0" Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.675392 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8c8a7b5b-7f02-4a34-b793-59b2be1043b7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8c8a7b5b-7f02-4a34-b793-59b2be1043b7\") " pod="openstack/ceilometer-0" Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.675451 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c8a7b5b-7f02-4a34-b793-59b2be1043b7-run-httpd\") pod \"ceilometer-0\" (UID: \"8c8a7b5b-7f02-4a34-b793-59b2be1043b7\") " pod="openstack/ceilometer-0" Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.675472 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c8a7b5b-7f02-4a34-b793-59b2be1043b7-scripts\") pod \"ceilometer-0\" (UID: \"8c8a7b5b-7f02-4a34-b793-59b2be1043b7\") " pod="openstack/ceilometer-0" Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.748699 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.777291 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c8a7b5b-7f02-4a34-b793-59b2be1043b7-config-data\") pod \"ceilometer-0\" (UID: \"8c8a7b5b-7f02-4a34-b793-59b2be1043b7\") " pod="openstack/ceilometer-0" Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.777361 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c8a7b5b-7f02-4a34-b793-59b2be1043b7-log-httpd\") pod \"ceilometer-0\" (UID: \"8c8a7b5b-7f02-4a34-b793-59b2be1043b7\") " pod="openstack/ceilometer-0" Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.777403 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c8a7b5b-7f02-4a34-b793-59b2be1043b7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8c8a7b5b-7f02-4a34-b793-59b2be1043b7\") " pod="openstack/ceilometer-0" Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.777457 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8c8a7b5b-7f02-4a34-b793-59b2be1043b7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8c8a7b5b-7f02-4a34-b793-59b2be1043b7\") " pod="openstack/ceilometer-0" Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.777511 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c8a7b5b-7f02-4a34-b793-59b2be1043b7-run-httpd\") pod \"ceilometer-0\" (UID: \"8c8a7b5b-7f02-4a34-b793-59b2be1043b7\") " pod="openstack/ceilometer-0" Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.777536 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c8a7b5b-7f02-4a34-b793-59b2be1043b7-scripts\") pod \"ceilometer-0\" (UID: \"8c8a7b5b-7f02-4a34-b793-59b2be1043b7\") " 
pod="openstack/ceilometer-0" Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.777600 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrvmn\" (UniqueName: \"kubernetes.io/projected/8c8a7b5b-7f02-4a34-b793-59b2be1043b7-kube-api-access-mrvmn\") pod \"ceilometer-0\" (UID: \"8c8a7b5b-7f02-4a34-b793-59b2be1043b7\") " pod="openstack/ceilometer-0" Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.778897 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c8a7b5b-7f02-4a34-b793-59b2be1043b7-log-httpd\") pod \"ceilometer-0\" (UID: \"8c8a7b5b-7f02-4a34-b793-59b2be1043b7\") " pod="openstack/ceilometer-0" Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.779003 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c8a7b5b-7f02-4a34-b793-59b2be1043b7-run-httpd\") pod \"ceilometer-0\" (UID: \"8c8a7b5b-7f02-4a34-b793-59b2be1043b7\") " pod="openstack/ceilometer-0" Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.782349 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8c8a7b5b-7f02-4a34-b793-59b2be1043b7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8c8a7b5b-7f02-4a34-b793-59b2be1043b7\") " pod="openstack/ceilometer-0" Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.782607 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c8a7b5b-7f02-4a34-b793-59b2be1043b7-config-data\") pod \"ceilometer-0\" (UID: \"8c8a7b5b-7f02-4a34-b793-59b2be1043b7\") " pod="openstack/ceilometer-0" Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.783182 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c8a7b5b-7f02-4a34-b793-59b2be1043b7-scripts\") pod \"ceilometer-0\" (UID: \"8c8a7b5b-7f02-4a34-b793-59b2be1043b7\") " pod="openstack/ceilometer-0" Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.788756 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c8a7b5b-7f02-4a34-b793-59b2be1043b7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8c8a7b5b-7f02-4a34-b793-59b2be1043b7\") " pod="openstack/ceilometer-0" Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.795644 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrvmn\" (UniqueName: \"kubernetes.io/projected/8c8a7b5b-7f02-4a34-b793-59b2be1043b7-kube-api-access-mrvmn\") pod \"ceilometer-0\" (UID: \"8c8a7b5b-7f02-4a34-b793-59b2be1043b7\") " pod="openstack/ceilometer-0" Nov 22 11:00:24 crc kubenswrapper[4772]: I1122 11:00:24.986752 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 11:00:25 crc kubenswrapper[4772]: I1122 11:00:25.187581 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 22 11:00:25 crc kubenswrapper[4772]: I1122 11:00:25.285862 4772 generic.go:334] "Generic (PLEG): container finished" podID="7e3d510f-2f1d-4d21-ae11-55ba98067c9e" containerID="9711b2f630abd82d5d414ec59de5c0a41437bd4931a17976d02655285045660b" exitCode=0 Nov 22 11:00:25 crc kubenswrapper[4772]: I1122 11:00:25.285950 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-c2l2l" event={"ID":"7e3d510f-2f1d-4d21-ae11-55ba98067c9e","Type":"ContainerDied","Data":"9711b2f630abd82d5d414ec59de5c0a41437bd4931a17976d02655285045660b"} Nov 22 11:00:25 crc kubenswrapper[4772]: I1122 11:00:25.293494 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"45d574ce-36bc-461c-a85a-738b71392ed6","Type":"ContainerStarted","Data":"a7bd35683aa493abba935fb75f72561e11aecf2eb922bb1c3c4116705ab119ec"} Nov 22 11:00:25 crc kubenswrapper[4772]: I1122 11:00:25.305695 4772 generic.go:334] "Generic (PLEG): container finished" podID="736d298e-3dc7-460e-a12e-bb29c4364e85" containerID="4471a034f975c2eb8a95db8d1c456f43dd6aee03a3d67c268c8b29fe4e53ac0a" exitCode=0 Nov 22 11:00:25 crc kubenswrapper[4772]: I1122 11:00:25.305766 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-jr9dl" event={"ID":"736d298e-3dc7-460e-a12e-bb29c4364e85","Type":"ContainerDied","Data":"4471a034f975c2eb8a95db8d1c456f43dd6aee03a3d67c268c8b29fe4e53ac0a"} Nov 22 11:00:25 crc kubenswrapper[4772]: I1122 11:00:25.311663 4772 generic.go:334] "Generic (PLEG): container finished" podID="8aae7bbd-5de9-46d8-83d6-80f97bed0bf4" containerID="9210427ecc8309d2dac4e2a1e4343641effc83bde83c8deb3bc9b80d64ac72cf" exitCode=0 Nov 22 11:00:25 crc kubenswrapper[4772]: I1122 11:00:25.311766 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-qhm8v" event={"ID":"8aae7bbd-5de9-46d8-83d6-80f97bed0bf4","Type":"ContainerDied","Data":"9210427ecc8309d2dac4e2a1e4343641effc83bde83c8deb3bc9b80d64ac72cf"} Nov 22 11:00:25 crc kubenswrapper[4772]: I1122 11:00:25.431121 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1485651d-d1ff-4ef4-88fe-0ab6dd041df4" path="/var/lib/kubelet/pods/1485651d-d1ff-4ef4-88fe-0ab6dd041df4/volumes" Nov 22 11:00:25 crc kubenswrapper[4772]: I1122 11:00:25.431969 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b8a1443-6167-4b78-8711-9e0a566004c7" path="/var/lib/kubelet/pods/6b8a1443-6167-4b78-8711-9e0a566004c7/volumes" Nov 22 11:00:25 crc kubenswrapper[4772]: I1122 11:00:25.433017 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9577ec77-2954-4ff8-8de2-d965cce60a04" path="/var/lib/kubelet/pods/9577ec77-2954-4ff8-8de2-d965cce60a04/volumes" Nov 22 11:00:25 crc kubenswrapper[4772]: I1122 11:00:25.492395 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 11:00:25 crc kubenswrapper[4772]: W1122 11:00:25.500877 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c8a7b5b_7f02_4a34_b793_59b2be1043b7.slice/crio-0b37312c0a63cd1339e45d342b132cd2074450c783dd71f0e02e6b7bd6bc4756 WatchSource:0}: Error finding container 0b37312c0a63cd1339e45d342b132cd2074450c783dd71f0e02e6b7bd6bc4756: Status 404 returned error 
can't find the container with id 0b37312c0a63cd1339e45d342b132cd2074450c783dd71f0e02e6b7bd6bc4756 Nov 22 11:00:25 crc kubenswrapper[4772]: I1122 11:00:25.651461 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-jr9dl" Nov 22 11:00:25 crc kubenswrapper[4772]: I1122 11:00:25.692276 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttqlj\" (UniqueName: \"kubernetes.io/projected/736d298e-3dc7-460e-a12e-bb29c4364e85-kube-api-access-ttqlj\") pod \"736d298e-3dc7-460e-a12e-bb29c4364e85\" (UID: \"736d298e-3dc7-460e-a12e-bb29c4364e85\") " Nov 22 11:00:25 crc kubenswrapper[4772]: I1122 11:00:25.697397 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/736d298e-3dc7-460e-a12e-bb29c4364e85-kube-api-access-ttqlj" (OuterVolumeSpecName: "kube-api-access-ttqlj") pod "736d298e-3dc7-460e-a12e-bb29c4364e85" (UID: "736d298e-3dc7-460e-a12e-bb29c4364e85"). InnerVolumeSpecName "kube-api-access-ttqlj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:00:25 crc kubenswrapper[4772]: I1122 11:00:25.795362 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttqlj\" (UniqueName: \"kubernetes.io/projected/736d298e-3dc7-460e-a12e-bb29c4364e85-kube-api-access-ttqlj\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:26 crc kubenswrapper[4772]: I1122 11:00:26.358795 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-jr9dl" event={"ID":"736d298e-3dc7-460e-a12e-bb29c4364e85","Type":"ContainerDied","Data":"58c2486511aa4ca2f3f86bb6abfe73f572735dcf89a36f70420c05743ea1ee65"} Nov 22 11:00:26 crc kubenswrapper[4772]: I1122 11:00:26.359164 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58c2486511aa4ca2f3f86bb6abfe73f572735dcf89a36f70420c05743ea1ee65" Nov 22 11:00:26 crc kubenswrapper[4772]: I1122 11:00:26.359225 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-jr9dl" Nov 22 11:00:26 crc kubenswrapper[4772]: I1122 11:00:26.367072 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8c8a7b5b-7f02-4a34-b793-59b2be1043b7","Type":"ContainerStarted","Data":"0b37312c0a63cd1339e45d342b132cd2074450c783dd71f0e02e6b7bd6bc4756"} Nov 22 11:00:26 crc kubenswrapper[4772]: I1122 11:00:26.371199 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"45d574ce-36bc-461c-a85a-738b71392ed6","Type":"ContainerStarted","Data":"89f3af719d3b34aa755006c4c157b86e9e231adc44922aa49a262366c3fbab3e"} Nov 22 11:00:26 crc kubenswrapper[4772]: I1122 11:00:26.731802 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-qhm8v" Nov 22 11:00:26 crc kubenswrapper[4772]: I1122 11:00:26.768258 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-c2l2l" Nov 22 11:00:26 crc kubenswrapper[4772]: I1122 11:00:26.818674 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2669p\" (UniqueName: \"kubernetes.io/projected/8aae7bbd-5de9-46d8-83d6-80f97bed0bf4-kube-api-access-2669p\") pod \"8aae7bbd-5de9-46d8-83d6-80f97bed0bf4\" (UID: \"8aae7bbd-5de9-46d8-83d6-80f97bed0bf4\") " Nov 22 11:00:26 crc kubenswrapper[4772]: I1122 11:00:26.818845 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zlpd\" (UniqueName: \"kubernetes.io/projected/7e3d510f-2f1d-4d21-ae11-55ba98067c9e-kube-api-access-4zlpd\") pod \"7e3d510f-2f1d-4d21-ae11-55ba98067c9e\" (UID: \"7e3d510f-2f1d-4d21-ae11-55ba98067c9e\") " Nov 22 11:00:26 crc kubenswrapper[4772]: I1122 11:00:26.825038 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8aae7bbd-5de9-46d8-83d6-80f97bed0bf4-kube-api-access-2669p" (OuterVolumeSpecName: "kube-api-access-2669p") pod "8aae7bbd-5de9-46d8-83d6-80f97bed0bf4" (UID: "8aae7bbd-5de9-46d8-83d6-80f97bed0bf4"). InnerVolumeSpecName "kube-api-access-2669p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:00:26 crc kubenswrapper[4772]: I1122 11:00:26.825321 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e3d510f-2f1d-4d21-ae11-55ba98067c9e-kube-api-access-4zlpd" (OuterVolumeSpecName: "kube-api-access-4zlpd") pod "7e3d510f-2f1d-4d21-ae11-55ba98067c9e" (UID: "7e3d510f-2f1d-4d21-ae11-55ba98067c9e"). InnerVolumeSpecName "kube-api-access-4zlpd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:00:26 crc kubenswrapper[4772]: I1122 11:00:26.921069 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2669p\" (UniqueName: \"kubernetes.io/projected/8aae7bbd-5de9-46d8-83d6-80f97bed0bf4-kube-api-access-2669p\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:26 crc kubenswrapper[4772]: I1122 11:00:26.921112 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zlpd\" (UniqueName: \"kubernetes.io/projected/7e3d510f-2f1d-4d21-ae11-55ba98067c9e-kube-api-access-4zlpd\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:27 crc kubenswrapper[4772]: I1122 11:00:27.386963 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-qhm8v" event={"ID":"8aae7bbd-5de9-46d8-83d6-80f97bed0bf4","Type":"ContainerDied","Data":"f19da6969460936b775dd3e0146678cd6c8ad3d14467bcb7a66031ece47f2796"} Nov 22 11:00:27 crc kubenswrapper[4772]: I1122 11:00:27.387330 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f19da6969460936b775dd3e0146678cd6c8ad3d14467bcb7a66031ece47f2796" Nov 22 11:00:27 crc kubenswrapper[4772]: I1122 11:00:27.387487 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-qhm8v" Nov 22 11:00:27 crc kubenswrapper[4772]: I1122 11:00:27.388804 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8c8a7b5b-7f02-4a34-b793-59b2be1043b7","Type":"ContainerStarted","Data":"d98d5458f2fe6f0a6caa97b416b145940d5fd4058276c78a86592ea185e7f792"} Nov 22 11:00:27 crc kubenswrapper[4772]: I1122 11:00:27.388833 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8c8a7b5b-7f02-4a34-b793-59b2be1043b7","Type":"ContainerStarted","Data":"bc54cf70ca4edc6e26f79e9c7eceee8ff2a178ea56a5ebe22aee034048d03633"} Nov 22 11:00:27 crc kubenswrapper[4772]: I1122 11:00:27.391654 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-c2l2l" Nov 22 11:00:27 crc kubenswrapper[4772]: I1122 11:00:27.391702 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-c2l2l" event={"ID":"7e3d510f-2f1d-4d21-ae11-55ba98067c9e","Type":"ContainerDied","Data":"1e652ef89a4db75805ae3f92d54db540e9174b6a7b09825b66d00b7503c6428b"} Nov 22 11:00:27 crc kubenswrapper[4772]: I1122 11:00:27.391753 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e652ef89a4db75805ae3f92d54db540e9174b6a7b09825b66d00b7503c6428b" Nov 22 11:00:27 crc kubenswrapper[4772]: I1122 11:00:27.405987 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"45d574ce-36bc-461c-a85a-738b71392ed6","Type":"ContainerStarted","Data":"508aa44be1af6ce7429f5cfe8151bfa33738cc58c627ec663c79b706c344ddb4"} Nov 22 11:00:27 crc kubenswrapper[4772]: I1122 11:00:27.433087 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.43307128 podStartE2EDuration="3.43307128s" podCreationTimestamp="2025-11-22 11:00:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 11:00:27.43022215 +0000 UTC m=+1347.669666644" watchObservedRunningTime="2025-11-22 11:00:27.43307128 +0000 UTC m=+1347.672515774" Nov 22 11:00:28 crc kubenswrapper[4772]: I1122 11:00:28.418519 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8c8a7b5b-7f02-4a34-b793-59b2be1043b7","Type":"ContainerStarted","Data":"16e1264a9437586478f7a84a106b506c1b49de06591cebebc061cc569683c35c"} Nov 22 11:00:28 crc kubenswrapper[4772]: I1122 11:00:28.499024 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0c20-account-create-g4m74"] Nov 22 11:00:28 crc kubenswrapper[4772]: E1122 11:00:28.499566 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e3d510f-2f1d-4d21-ae11-55ba98067c9e" containerName="mariadb-database-create" Nov 22 11:00:28 crc kubenswrapper[4772]: I1122 11:00:28.499590 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e3d510f-2f1d-4d21-ae11-55ba98067c9e" containerName="mariadb-database-create" Nov 22 11:00:28 crc kubenswrapper[4772]: E1122 11:00:28.499631 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8aae7bbd-5de9-46d8-83d6-80f97bed0bf4" containerName="mariadb-database-create" Nov 22 11:00:28 crc kubenswrapper[4772]: I1122 11:00:28.499640 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="8aae7bbd-5de9-46d8-83d6-80f97bed0bf4" containerName="mariadb-database-create" Nov 22 11:00:28 crc 
kubenswrapper[4772]: E1122 11:00:28.499661 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="736d298e-3dc7-460e-a12e-bb29c4364e85" containerName="mariadb-database-create" Nov 22 11:00:28 crc kubenswrapper[4772]: I1122 11:00:28.499669 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="736d298e-3dc7-460e-a12e-bb29c4364e85" containerName="mariadb-database-create" Nov 22 11:00:28 crc kubenswrapper[4772]: I1122 11:00:28.499906 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="736d298e-3dc7-460e-a12e-bb29c4364e85" containerName="mariadb-database-create" Nov 22 11:00:28 crc kubenswrapper[4772]: I1122 11:00:28.499932 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e3d510f-2f1d-4d21-ae11-55ba98067c9e" containerName="mariadb-database-create" Nov 22 11:00:28 crc kubenswrapper[4772]: I1122 11:00:28.499953 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="8aae7bbd-5de9-46d8-83d6-80f97bed0bf4" containerName="mariadb-database-create" Nov 22 11:00:28 crc kubenswrapper[4772]: I1122 11:00:28.500701 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0c20-account-create-g4m74" Nov 22 11:00:28 crc kubenswrapper[4772]: I1122 11:00:28.507896 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0c20-account-create-g4m74"] Nov 22 11:00:28 crc kubenswrapper[4772]: I1122 11:00:28.519610 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Nov 22 11:00:28 crc kubenswrapper[4772]: I1122 11:00:28.562932 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghpn9\" (UniqueName: \"kubernetes.io/projected/107a304c-250a-4df8-8a5e-5ffc7449cdc6-kube-api-access-ghpn9\") pod \"nova-api-0c20-account-create-g4m74\" (UID: \"107a304c-250a-4df8-8a5e-5ffc7449cdc6\") " pod="openstack/nova-api-0c20-account-create-g4m74" Nov 22 11:00:28 crc kubenswrapper[4772]: I1122 11:00:28.665464 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghpn9\" (UniqueName: \"kubernetes.io/projected/107a304c-250a-4df8-8a5e-5ffc7449cdc6-kube-api-access-ghpn9\") pod \"nova-api-0c20-account-create-g4m74\" (UID: \"107a304c-250a-4df8-8a5e-5ffc7449cdc6\") " pod="openstack/nova-api-0c20-account-create-g4m74" Nov 22 11:00:28 crc kubenswrapper[4772]: I1122 11:00:28.689811 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghpn9\" (UniqueName: \"kubernetes.io/projected/107a304c-250a-4df8-8a5e-5ffc7449cdc6-kube-api-access-ghpn9\") pod \"nova-api-0c20-account-create-g4m74\" (UID: \"107a304c-250a-4df8-8a5e-5ffc7449cdc6\") " pod="openstack/nova-api-0c20-account-create-g4m74" Nov 22 11:00:28 crc kubenswrapper[4772]: I1122 11:00:28.831685 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0c20-account-create-g4m74" Nov 22 11:00:29 crc kubenswrapper[4772]: I1122 11:00:29.307851 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0c20-account-create-g4m74"] Nov 22 11:00:29 crc kubenswrapper[4772]: I1122 11:00:29.446339 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0c20-account-create-g4m74" event={"ID":"107a304c-250a-4df8-8a5e-5ffc7449cdc6","Type":"ContainerStarted","Data":"f08b0e01e02b8469fe0a222a84a8153bdb48e34690b115d11375395dd1a93ca2"} Nov 22 11:00:29 crc kubenswrapper[4772]: I1122 11:00:29.749505 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 22 11:00:30 crc kubenswrapper[4772]: I1122 11:00:30.467128 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8c8a7b5b-7f02-4a34-b793-59b2be1043b7","Type":"ContainerStarted","Data":"4eada0deed9b4f34a5699c417197e4a8e939f5d97752e80de83318ecb83440f2"} Nov 22 11:00:30 crc kubenswrapper[4772]: I1122 11:00:30.468312 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 22 11:00:30 crc kubenswrapper[4772]: I1122 11:00:30.469256 4772 generic.go:334] "Generic (PLEG): container finished" podID="107a304c-250a-4df8-8a5e-5ffc7449cdc6" containerID="5d736ec35ae6a0500e6d88c0ff90627bd4515397189cf9c8910f73ee956ed76f" exitCode=0 Nov 22 11:00:30 crc kubenswrapper[4772]: I1122 11:00:30.469292 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0c20-account-create-g4m74" event={"ID":"107a304c-250a-4df8-8a5e-5ffc7449cdc6","Type":"ContainerDied","Data":"5d736ec35ae6a0500e6d88c0ff90627bd4515397189cf9c8910f73ee956ed76f"} Nov 22 11:00:30 crc kubenswrapper[4772]: I1122 11:00:30.526546 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.739648038 podStartE2EDuration="6.526505261s" podCreationTimestamp="2025-11-22 11:00:24 +0000 UTC" firstStartedPulling="2025-11-22 11:00:25.506542982 +0000 UTC m=+1345.745987476" lastFinishedPulling="2025-11-22 11:00:29.293400195 +0000 UTC m=+1349.532844699" observedRunningTime="2025-11-22 11:00:30.50593438 +0000 UTC m=+1350.745378894" watchObservedRunningTime="2025-11-22 11:00:30.526505261 +0000 UTC m=+1350.765949755" Nov 22 11:00:31 crc kubenswrapper[4772]: I1122 11:00:31.064082 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-65d7b679bd-q7h6t" Nov 22 11:00:31 crc kubenswrapper[4772]: I1122 11:00:31.430248 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 11:00:31 crc kubenswrapper[4772]: I1122 11:00:31.532977 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 11:00:31 crc kubenswrapper[4772]: I1122 11:00:31.533060 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 11:00:31 crc kubenswrapper[4772]: I1122 11:00:31.817589 4772 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openstack/nova-api-0c20-account-create-g4m74" Nov 22 11:00:31 crc kubenswrapper[4772]: I1122 11:00:31.935615 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghpn9\" (UniqueName: \"kubernetes.io/projected/107a304c-250a-4df8-8a5e-5ffc7449cdc6-kube-api-access-ghpn9\") pod \"107a304c-250a-4df8-8a5e-5ffc7449cdc6\" (UID: \"107a304c-250a-4df8-8a5e-5ffc7449cdc6\") " Nov 22 11:00:31 crc kubenswrapper[4772]: I1122 11:00:31.942305 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/107a304c-250a-4df8-8a5e-5ffc7449cdc6-kube-api-access-ghpn9" (OuterVolumeSpecName: "kube-api-access-ghpn9") pod "107a304c-250a-4df8-8a5e-5ffc7449cdc6" (UID: "107a304c-250a-4df8-8a5e-5ffc7449cdc6"). InnerVolumeSpecName "kube-api-access-ghpn9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:00:32 crc kubenswrapper[4772]: I1122 11:00:32.038178 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ghpn9\" (UniqueName: \"kubernetes.io/projected/107a304c-250a-4df8-8a5e-5ffc7449cdc6-kube-api-access-ghpn9\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:32 crc kubenswrapper[4772]: I1122 11:00:32.488613 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0c20-account-create-g4m74" event={"ID":"107a304c-250a-4df8-8a5e-5ffc7449cdc6","Type":"ContainerDied","Data":"f08b0e01e02b8469fe0a222a84a8153bdb48e34690b115d11375395dd1a93ca2"} Nov 22 11:00:32 crc kubenswrapper[4772]: I1122 11:00:32.488877 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8c8a7b5b-7f02-4a34-b793-59b2be1043b7" containerName="proxy-httpd" containerID="cri-o://4eada0deed9b4f34a5699c417197e4a8e939f5d97752e80de83318ecb83440f2" gracePeriod=30 Nov 22 11:00:32 crc kubenswrapper[4772]: I1122 11:00:32.488800 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8c8a7b5b-7f02-4a34-b793-59b2be1043b7" containerName="ceilometer-central-agent" containerID="cri-o://bc54cf70ca4edc6e26f79e9c7eceee8ff2a178ea56a5ebe22aee034048d03633" gracePeriod=30 Nov 22 11:00:32 crc kubenswrapper[4772]: I1122 11:00:32.488842 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8c8a7b5b-7f02-4a34-b793-59b2be1043b7" containerName="ceilometer-notification-agent" containerID="cri-o://d98d5458f2fe6f0a6caa97b416b145940d5fd4058276c78a86592ea185e7f792" gracePeriod=30 Nov 22 11:00:32 crc kubenswrapper[4772]: I1122 11:00:32.488845 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8c8a7b5b-7f02-4a34-b793-59b2be1043b7" containerName="sg-core" containerID="cri-o://16e1264a9437586478f7a84a106b506c1b49de06591cebebc061cc569683c35c" gracePeriod=30 Nov 22 11:00:32 crc kubenswrapper[4772]: I1122 11:00:32.488886 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f08b0e01e02b8469fe0a222a84a8153bdb48e34690b115d11375395dd1a93ca2" Nov 22 11:00:32 crc kubenswrapper[4772]: I1122 11:00:32.488635 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0c20-account-create-g4m74" Nov 22 11:00:33 crc kubenswrapper[4772]: I1122 11:00:33.506713 4772 generic.go:334] "Generic (PLEG): container finished" podID="8c8a7b5b-7f02-4a34-b793-59b2be1043b7" containerID="4eada0deed9b4f34a5699c417197e4a8e939f5d97752e80de83318ecb83440f2" exitCode=0 Nov 22 11:00:33 crc kubenswrapper[4772]: I1122 11:00:33.507146 4772 generic.go:334] "Generic (PLEG): container finished" podID="8c8a7b5b-7f02-4a34-b793-59b2be1043b7" containerID="16e1264a9437586478f7a84a106b506c1b49de06591cebebc061cc569683c35c" exitCode=2 Nov 22 11:00:33 crc kubenswrapper[4772]: I1122 11:00:33.507162 4772 generic.go:334] "Generic (PLEG): container finished" podID="8c8a7b5b-7f02-4a34-b793-59b2be1043b7" containerID="d98d5458f2fe6f0a6caa97b416b145940d5fd4058276c78a86592ea185e7f792" exitCode=0 Nov 22 11:00:33 crc kubenswrapper[4772]: I1122 11:00:33.507249 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8c8a7b5b-7f02-4a34-b793-59b2be1043b7","Type":"ContainerDied","Data":"4eada0deed9b4f34a5699c417197e4a8e939f5d97752e80de83318ecb83440f2"} Nov 22 11:00:33 crc kubenswrapper[4772]: I1122 11:00:33.507287 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8c8a7b5b-7f02-4a34-b793-59b2be1043b7","Type":"ContainerDied","Data":"16e1264a9437586478f7a84a106b506c1b49de06591cebebc061cc569683c35c"} Nov 22 11:00:33 crc kubenswrapper[4772]: I1122 11:00:33.507342 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8c8a7b5b-7f02-4a34-b793-59b2be1043b7","Type":"ContainerDied","Data":"d98d5458f2fe6f0a6caa97b416b145940d5fd4058276c78a86592ea185e7f792"} Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.182373 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.284904 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrvmn\" (UniqueName: \"kubernetes.io/projected/8c8a7b5b-7f02-4a34-b793-59b2be1043b7-kube-api-access-mrvmn\") pod \"8c8a7b5b-7f02-4a34-b793-59b2be1043b7\" (UID: \"8c8a7b5b-7f02-4a34-b793-59b2be1043b7\") " Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.284954 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c8a7b5b-7f02-4a34-b793-59b2be1043b7-run-httpd\") pod \"8c8a7b5b-7f02-4a34-b793-59b2be1043b7\" (UID: \"8c8a7b5b-7f02-4a34-b793-59b2be1043b7\") " Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.284983 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c8a7b5b-7f02-4a34-b793-59b2be1043b7-log-httpd\") pod \"8c8a7b5b-7f02-4a34-b793-59b2be1043b7\" (UID: \"8c8a7b5b-7f02-4a34-b793-59b2be1043b7\") " Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.285062 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c8a7b5b-7f02-4a34-b793-59b2be1043b7-scripts\") pod \"8c8a7b5b-7f02-4a34-b793-59b2be1043b7\" (UID: \"8c8a7b5b-7f02-4a34-b793-59b2be1043b7\") " Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.285126 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8c8a7b5b-7f02-4a34-b793-59b2be1043b7-sg-core-conf-yaml\") pod \"8c8a7b5b-7f02-4a34-b793-59b2be1043b7\" (UID: \"8c8a7b5b-7f02-4a34-b793-59b2be1043b7\") " Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.285183 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c8a7b5b-7f02-4a34-b793-59b2be1043b7-combined-ca-bundle\") pod \"8c8a7b5b-7f02-4a34-b793-59b2be1043b7\" (UID: \"8c8a7b5b-7f02-4a34-b793-59b2be1043b7\") " Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.285274 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c8a7b5b-7f02-4a34-b793-59b2be1043b7-config-data\") pod \"8c8a7b5b-7f02-4a34-b793-59b2be1043b7\" (UID: \"8c8a7b5b-7f02-4a34-b793-59b2be1043b7\") " Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.286106 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c8a7b5b-7f02-4a34-b793-59b2be1043b7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8c8a7b5b-7f02-4a34-b793-59b2be1043b7" (UID: "8c8a7b5b-7f02-4a34-b793-59b2be1043b7"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.286780 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c8a7b5b-7f02-4a34-b793-59b2be1043b7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8c8a7b5b-7f02-4a34-b793-59b2be1043b7" (UID: "8c8a7b5b-7f02-4a34-b793-59b2be1043b7"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.292194 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c8a7b5b-7f02-4a34-b793-59b2be1043b7-scripts" (OuterVolumeSpecName: "scripts") pod "8c8a7b5b-7f02-4a34-b793-59b2be1043b7" (UID: "8c8a7b5b-7f02-4a34-b793-59b2be1043b7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.292243 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c8a7b5b-7f02-4a34-b793-59b2be1043b7-kube-api-access-mrvmn" (OuterVolumeSpecName: "kube-api-access-mrvmn") pod "8c8a7b5b-7f02-4a34-b793-59b2be1043b7" (UID: "8c8a7b5b-7f02-4a34-b793-59b2be1043b7"). InnerVolumeSpecName "kube-api-access-mrvmn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.316191 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c8a7b5b-7f02-4a34-b793-59b2be1043b7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8c8a7b5b-7f02-4a34-b793-59b2be1043b7" (UID: "8c8a7b5b-7f02-4a34-b793-59b2be1043b7"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.378235 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c8a7b5b-7f02-4a34-b793-59b2be1043b7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8c8a7b5b-7f02-4a34-b793-59b2be1043b7" (UID: "8c8a7b5b-7f02-4a34-b793-59b2be1043b7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.388022 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrvmn\" (UniqueName: \"kubernetes.io/projected/8c8a7b5b-7f02-4a34-b793-59b2be1043b7-kube-api-access-mrvmn\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.388083 4772 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c8a7b5b-7f02-4a34-b793-59b2be1043b7-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.388096 4772 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c8a7b5b-7f02-4a34-b793-59b2be1043b7-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.388111 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c8a7b5b-7f02-4a34-b793-59b2be1043b7-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.388125 4772 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8c8a7b5b-7f02-4a34-b793-59b2be1043b7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.388136 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c8a7b5b-7f02-4a34-b793-59b2be1043b7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.415492 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/8c8a7b5b-7f02-4a34-b793-59b2be1043b7-config-data" (OuterVolumeSpecName: "config-data") pod "8c8a7b5b-7f02-4a34-b793-59b2be1043b7" (UID: "8c8a7b5b-7f02-4a34-b793-59b2be1043b7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.490038 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c8a7b5b-7f02-4a34-b793-59b2be1043b7-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.520203 4772 generic.go:334] "Generic (PLEG): container finished" podID="8c8a7b5b-7f02-4a34-b793-59b2be1043b7" containerID="bc54cf70ca4edc6e26f79e9c7eceee8ff2a178ea56a5ebe22aee034048d03633" exitCode=0 Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.520248 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8c8a7b5b-7f02-4a34-b793-59b2be1043b7","Type":"ContainerDied","Data":"bc54cf70ca4edc6e26f79e9c7eceee8ff2a178ea56a5ebe22aee034048d03633"} Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.520279 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8c8a7b5b-7f02-4a34-b793-59b2be1043b7","Type":"ContainerDied","Data":"0b37312c0a63cd1339e45d342b132cd2074450c783dd71f0e02e6b7bd6bc4756"} Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.520297 4772 scope.go:117] "RemoveContainer" containerID="4eada0deed9b4f34a5699c417197e4a8e939f5d97752e80de83318ecb83440f2" Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.520428 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.550716 4772 scope.go:117] "RemoveContainer" containerID="16e1264a9437586478f7a84a106b506c1b49de06591cebebc061cc569683c35c" Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.554755 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.564252 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.584709 4772 scope.go:117] "RemoveContainer" containerID="d98d5458f2fe6f0a6caa97b416b145940d5fd4058276c78a86592ea185e7f792" Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.596931 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 22 11:00:34 crc kubenswrapper[4772]: E1122 11:00:34.597796 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="107a304c-250a-4df8-8a5e-5ffc7449cdc6" containerName="mariadb-account-create" Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.597814 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="107a304c-250a-4df8-8a5e-5ffc7449cdc6" containerName="mariadb-account-create" Nov 22 11:00:34 crc kubenswrapper[4772]: E1122 11:00:34.597830 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c8a7b5b-7f02-4a34-b793-59b2be1043b7" containerName="proxy-httpd" Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.597837 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c8a7b5b-7f02-4a34-b793-59b2be1043b7" containerName="proxy-httpd" Nov 22 11:00:34 crc kubenswrapper[4772]: E1122 11:00:34.597863 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c8a7b5b-7f02-4a34-b793-59b2be1043b7" 
containerName="ceilometer-notification-agent" Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.597873 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c8a7b5b-7f02-4a34-b793-59b2be1043b7" containerName="ceilometer-notification-agent" Nov 22 11:00:34 crc kubenswrapper[4772]: E1122 11:00:34.597889 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c8a7b5b-7f02-4a34-b793-59b2be1043b7" containerName="ceilometer-central-agent" Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.597895 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c8a7b5b-7f02-4a34-b793-59b2be1043b7" containerName="ceilometer-central-agent" Nov 22 11:00:34 crc kubenswrapper[4772]: E1122 11:00:34.597917 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c8a7b5b-7f02-4a34-b793-59b2be1043b7" containerName="sg-core" Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.597923 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c8a7b5b-7f02-4a34-b793-59b2be1043b7" containerName="sg-core" Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.607003 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c8a7b5b-7f02-4a34-b793-59b2be1043b7" containerName="ceilometer-central-agent" Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.607078 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c8a7b5b-7f02-4a34-b793-59b2be1043b7" containerName="sg-core" Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.607099 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c8a7b5b-7f02-4a34-b793-59b2be1043b7" containerName="ceilometer-notification-agent" Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.607109 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c8a7b5b-7f02-4a34-b793-59b2be1043b7" containerName="proxy-httpd" Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.607141 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="107a304c-250a-4df8-8a5e-5ffc7449cdc6" containerName="mariadb-account-create" Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.610973 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.630686 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.630827 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.644223 4772 scope.go:117] "RemoveContainer" containerID="bc54cf70ca4edc6e26f79e9c7eceee8ff2a178ea56a5ebe22aee034048d03633" Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.662058 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.691402 4772 scope.go:117] "RemoveContainer" containerID="4eada0deed9b4f34a5699c417197e4a8e939f5d97752e80de83318ecb83440f2" Nov 22 11:00:34 crc kubenswrapper[4772]: E1122 11:00:34.698668 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4eada0deed9b4f34a5699c417197e4a8e939f5d97752e80de83318ecb83440f2\": container with ID starting with 4eada0deed9b4f34a5699c417197e4a8e939f5d97752e80de83318ecb83440f2 not found: ID does not exist" containerID="4eada0deed9b4f34a5699c417197e4a8e939f5d97752e80de83318ecb83440f2" Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.698966 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4eada0deed9b4f34a5699c417197e4a8e939f5d97752e80de83318ecb83440f2"} err="failed to get container status \"4eada0deed9b4f34a5699c417197e4a8e939f5d97752e80de83318ecb83440f2\": rpc error: code = NotFound desc = could not find container \"4eada0deed9b4f34a5699c417197e4a8e939f5d97752e80de83318ecb83440f2\": container with ID starting with 4eada0deed9b4f34a5699c417197e4a8e939f5d97752e80de83318ecb83440f2 not found: ID does not exist" Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.699123 4772 scope.go:117] "RemoveContainer" containerID="16e1264a9437586478f7a84a106b506c1b49de06591cebebc061cc569683c35c" Nov 22 11:00:34 crc kubenswrapper[4772]: E1122 11:00:34.699849 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16e1264a9437586478f7a84a106b506c1b49de06591cebebc061cc569683c35c\": container with ID starting with 16e1264a9437586478f7a84a106b506c1b49de06591cebebc061cc569683c35c not found: ID does not exist" containerID="16e1264a9437586478f7a84a106b506c1b49de06591cebebc061cc569683c35c" Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.699901 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16e1264a9437586478f7a84a106b506c1b49de06591cebebc061cc569683c35c"} err="failed to get container status \"16e1264a9437586478f7a84a106b506c1b49de06591cebebc061cc569683c35c\": rpc error: code = NotFound desc = could not find container \"16e1264a9437586478f7a84a106b506c1b49de06591cebebc061cc569683c35c\": container with ID starting with 16e1264a9437586478f7a84a106b506c1b49de06591cebebc061cc569683c35c not found: ID does not exist" Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.701679 4772 scope.go:117] "RemoveContainer" containerID="d98d5458f2fe6f0a6caa97b416b145940d5fd4058276c78a86592ea185e7f792" Nov 22 11:00:34 crc kubenswrapper[4772]: E1122 11:00:34.702821 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"d98d5458f2fe6f0a6caa97b416b145940d5fd4058276c78a86592ea185e7f792\": container with ID starting with d98d5458f2fe6f0a6caa97b416b145940d5fd4058276c78a86592ea185e7f792 not found: ID does not exist" containerID="d98d5458f2fe6f0a6caa97b416b145940d5fd4058276c78a86592ea185e7f792" Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.702933 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d98d5458f2fe6f0a6caa97b416b145940d5fd4058276c78a86592ea185e7f792"} err="failed to get container status \"d98d5458f2fe6f0a6caa97b416b145940d5fd4058276c78a86592ea185e7f792\": rpc error: code = NotFound desc = could not find container \"d98d5458f2fe6f0a6caa97b416b145940d5fd4058276c78a86592ea185e7f792\": container with ID starting with d98d5458f2fe6f0a6caa97b416b145940d5fd4058276c78a86592ea185e7f792 not found: ID does not exist" Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.703025 4772 scope.go:117] "RemoveContainer" containerID="bc54cf70ca4edc6e26f79e9c7eceee8ff2a178ea56a5ebe22aee034048d03633" Nov 22 11:00:34 crc kubenswrapper[4772]: E1122 11:00:34.705633 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc54cf70ca4edc6e26f79e9c7eceee8ff2a178ea56a5ebe22aee034048d03633\": container with ID starting with bc54cf70ca4edc6e26f79e9c7eceee8ff2a178ea56a5ebe22aee034048d03633 not found: ID does not exist" containerID="bc54cf70ca4edc6e26f79e9c7eceee8ff2a178ea56a5ebe22aee034048d03633" Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.705803 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc54cf70ca4edc6e26f79e9c7eceee8ff2a178ea56a5ebe22aee034048d03633"} err="failed to get container status \"bc54cf70ca4edc6e26f79e9c7eceee8ff2a178ea56a5ebe22aee034048d03633\": rpc error: code = NotFound desc = could not find container \"bc54cf70ca4edc6e26f79e9c7eceee8ff2a178ea56a5ebe22aee034048d03633\": container with ID starting with bc54cf70ca4edc6e26f79e9c7eceee8ff2a178ea56a5ebe22aee034048d03633 not found: ID does not exist" Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.745399 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/17871b70-2c40-4198-bff5-51dd45433e3c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"17871b70-2c40-4198-bff5-51dd45433e3c\") " pod="openstack/ceilometer-0" Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.745712 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17871b70-2c40-4198-bff5-51dd45433e3c-scripts\") pod \"ceilometer-0\" (UID: \"17871b70-2c40-4198-bff5-51dd45433e3c\") " pod="openstack/ceilometer-0" Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.745996 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bzgr\" (UniqueName: \"kubernetes.io/projected/17871b70-2c40-4198-bff5-51dd45433e3c-kube-api-access-4bzgr\") pod \"ceilometer-0\" (UID: \"17871b70-2c40-4198-bff5-51dd45433e3c\") " pod="openstack/ceilometer-0" Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.747095 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17871b70-2c40-4198-bff5-51dd45433e3c-config-data\") pod \"ceilometer-0\" (UID: 
\"17871b70-2c40-4198-bff5-51dd45433e3c\") " pod="openstack/ceilometer-0" Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.747410 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/17871b70-2c40-4198-bff5-51dd45433e3c-run-httpd\") pod \"ceilometer-0\" (UID: \"17871b70-2c40-4198-bff5-51dd45433e3c\") " pod="openstack/ceilometer-0" Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.747617 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/17871b70-2c40-4198-bff5-51dd45433e3c-log-httpd\") pod \"ceilometer-0\" (UID: \"17871b70-2c40-4198-bff5-51dd45433e3c\") " pod="openstack/ceilometer-0" Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.747858 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17871b70-2c40-4198-bff5-51dd45433e3c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"17871b70-2c40-4198-bff5-51dd45433e3c\") " pod="openstack/ceilometer-0" Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.849339 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bzgr\" (UniqueName: \"kubernetes.io/projected/17871b70-2c40-4198-bff5-51dd45433e3c-kube-api-access-4bzgr\") pod \"ceilometer-0\" (UID: \"17871b70-2c40-4198-bff5-51dd45433e3c\") " pod="openstack/ceilometer-0" Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.850110 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17871b70-2c40-4198-bff5-51dd45433e3c-config-data\") pod \"ceilometer-0\" (UID: \"17871b70-2c40-4198-bff5-51dd45433e3c\") " pod="openstack/ceilometer-0" Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.850899 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/17871b70-2c40-4198-bff5-51dd45433e3c-run-httpd\") pod \"ceilometer-0\" (UID: \"17871b70-2c40-4198-bff5-51dd45433e3c\") " pod="openstack/ceilometer-0" Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.851003 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/17871b70-2c40-4198-bff5-51dd45433e3c-log-httpd\") pod \"ceilometer-0\" (UID: \"17871b70-2c40-4198-bff5-51dd45433e3c\") " pod="openstack/ceilometer-0" Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.851144 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17871b70-2c40-4198-bff5-51dd45433e3c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"17871b70-2c40-4198-bff5-51dd45433e3c\") " pod="openstack/ceilometer-0" Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.851281 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/17871b70-2c40-4198-bff5-51dd45433e3c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"17871b70-2c40-4198-bff5-51dd45433e3c\") " pod="openstack/ceilometer-0" Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.851411 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/17871b70-2c40-4198-bff5-51dd45433e3c-scripts\") pod \"ceilometer-0\" (UID: \"17871b70-2c40-4198-bff5-51dd45433e3c\") " pod="openstack/ceilometer-0" Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.851448 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/17871b70-2c40-4198-bff5-51dd45433e3c-run-httpd\") pod \"ceilometer-0\" (UID: \"17871b70-2c40-4198-bff5-51dd45433e3c\") " pod="openstack/ceilometer-0" Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.852064 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/17871b70-2c40-4198-bff5-51dd45433e3c-log-httpd\") pod \"ceilometer-0\" (UID: \"17871b70-2c40-4198-bff5-51dd45433e3c\") " pod="openstack/ceilometer-0" Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.856681 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/17871b70-2c40-4198-bff5-51dd45433e3c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"17871b70-2c40-4198-bff5-51dd45433e3c\") " pod="openstack/ceilometer-0" Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.857344 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17871b70-2c40-4198-bff5-51dd45433e3c-scripts\") pod \"ceilometer-0\" (UID: \"17871b70-2c40-4198-bff5-51dd45433e3c\") " pod="openstack/ceilometer-0" Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.858033 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17871b70-2c40-4198-bff5-51dd45433e3c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"17871b70-2c40-4198-bff5-51dd45433e3c\") " pod="openstack/ceilometer-0" Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.859217 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17871b70-2c40-4198-bff5-51dd45433e3c-config-data\") pod \"ceilometer-0\" (UID: \"17871b70-2c40-4198-bff5-51dd45433e3c\") " pod="openstack/ceilometer-0" Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.871243 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bzgr\" (UniqueName: \"kubernetes.io/projected/17871b70-2c40-4198-bff5-51dd45433e3c-kube-api-access-4bzgr\") pod \"ceilometer-0\" (UID: \"17871b70-2c40-4198-bff5-51dd45433e3c\") " pod="openstack/ceilometer-0" Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.956372 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 11:00:34 crc kubenswrapper[4772]: I1122 11:00:34.977212 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 22 11:00:35 crc kubenswrapper[4772]: I1122 11:00:35.441573 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c8a7b5b-7f02-4a34-b793-59b2be1043b7" path="/var/lib/kubelet/pods/8c8a7b5b-7f02-4a34-b793-59b2be1043b7/volumes" Nov 22 11:00:35 crc kubenswrapper[4772]: I1122 11:00:35.488036 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 11:00:35 crc kubenswrapper[4772]: I1122 11:00:35.531686 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"17871b70-2c40-4198-bff5-51dd45433e3c","Type":"ContainerStarted","Data":"adec81dd7ec7ff4abdaf41327c70205aae7c88237d0cb10f5ecbc7142085bce2"} Nov 22 11:00:35 crc kubenswrapper[4772]: I1122 11:00:35.621983 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 11:00:36 crc kubenswrapper[4772]: I1122 11:00:36.546537 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"17871b70-2c40-4198-bff5-51dd45433e3c","Type":"ContainerStarted","Data":"2e6d98f54ea53eb0836efdb73e52335ebf168d0412666be2217aba72597c3efd"} Nov 22 11:00:37 crc kubenswrapper[4772]: I1122 11:00:37.014065 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-8c79f8b65-qn7q9" Nov 22 11:00:37 crc kubenswrapper[4772]: I1122 11:00:37.085447 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-65d7b679bd-q7h6t"] Nov 22 11:00:37 crc kubenswrapper[4772]: I1122 11:00:37.085692 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-65d7b679bd-q7h6t" podUID="d3ece6c8-49c6-468f-b579-6bd9b2bea8bd" containerName="neutron-api" containerID="cri-o://150d3713d1208fe373a8e9e7ada8273a9fe8b7d1043c77880f0343a304b3c857" gracePeriod=30 Nov 22 11:00:37 crc kubenswrapper[4772]: I1122 11:00:37.086115 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-65d7b679bd-q7h6t" podUID="d3ece6c8-49c6-468f-b579-6bd9b2bea8bd" containerName="neutron-httpd" containerID="cri-o://6b97a32753a5e9c5508fb3ec3c55c7b86b34c500f781ecd1d052a023e0fad340" gracePeriod=30 Nov 22 11:00:37 crc kubenswrapper[4772]: I1122 11:00:37.562457 4772 generic.go:334] "Generic (PLEG): container finished" podID="d3ece6c8-49c6-468f-b579-6bd9b2bea8bd" containerID="6b97a32753a5e9c5508fb3ec3c55c7b86b34c500f781ecd1d052a023e0fad340" exitCode=0 Nov 22 11:00:37 crc kubenswrapper[4772]: I1122 11:00:37.562747 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-65d7b679bd-q7h6t" event={"ID":"d3ece6c8-49c6-468f-b579-6bd9b2bea8bd","Type":"ContainerDied","Data":"6b97a32753a5e9c5508fb3ec3c55c7b86b34c500f781ecd1d052a023e0fad340"} Nov 22 11:00:37 crc kubenswrapper[4772]: I1122 11:00:37.574450 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"17871b70-2c40-4198-bff5-51dd45433e3c","Type":"ContainerStarted","Data":"4f51ea11a8c42ea46e7b65c22841d1447126c890aebfa848ed0545093df86951"} Nov 22 11:00:38 crc kubenswrapper[4772]: I1122 11:00:38.585759 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"17871b70-2c40-4198-bff5-51dd45433e3c","Type":"ContainerStarted","Data":"8b6f1d83ac62e4f0e8085738e26e775886bc1ecb90c7d40de152f737325aa4b2"} Nov 22 11:00:38 crc kubenswrapper[4772]: I1122 11:00:38.691493 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-5903-account-create-hhqx8"] Nov 22 11:00:38 crc kubenswrapper[4772]: I1122 11:00:38.693257 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5903-account-create-hhqx8" Nov 22 11:00:38 crc kubenswrapper[4772]: I1122 11:00:38.695850 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Nov 22 11:00:38 crc kubenswrapper[4772]: I1122 11:00:38.705375 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-5903-account-create-hhqx8"] Nov 22 11:00:38 crc kubenswrapper[4772]: I1122 11:00:38.834461 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-js8jd\" (UniqueName: \"kubernetes.io/projected/849e3f53-4dc6-4e20-aa04-0e2a0ae14427-kube-api-access-js8jd\") pod \"nova-cell0-5903-account-create-hhqx8\" (UID: \"849e3f53-4dc6-4e20-aa04-0e2a0ae14427\") " pod="openstack/nova-cell0-5903-account-create-hhqx8" Nov 22 11:00:38 crc kubenswrapper[4772]: I1122 11:00:38.891509 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-01a0-account-create-4cpjc"] Nov 22 11:00:38 crc kubenswrapper[4772]: I1122 11:00:38.892697 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-01a0-account-create-4cpjc" Nov 22 11:00:38 crc kubenswrapper[4772]: I1122 11:00:38.895140 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Nov 22 11:00:38 crc kubenswrapper[4772]: I1122 11:00:38.902264 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-01a0-account-create-4cpjc"] Nov 22 11:00:38 crc kubenswrapper[4772]: I1122 11:00:38.936000 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-js8jd\" (UniqueName: \"kubernetes.io/projected/849e3f53-4dc6-4e20-aa04-0e2a0ae14427-kube-api-access-js8jd\") pod \"nova-cell0-5903-account-create-hhqx8\" (UID: \"849e3f53-4dc6-4e20-aa04-0e2a0ae14427\") " pod="openstack/nova-cell0-5903-account-create-hhqx8" Nov 22 11:00:38 crc kubenswrapper[4772]: I1122 11:00:38.965769 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-js8jd\" (UniqueName: \"kubernetes.io/projected/849e3f53-4dc6-4e20-aa04-0e2a0ae14427-kube-api-access-js8jd\") pod \"nova-cell0-5903-account-create-hhqx8\" (UID: \"849e3f53-4dc6-4e20-aa04-0e2a0ae14427\") " pod="openstack/nova-cell0-5903-account-create-hhqx8" Nov 22 11:00:39 crc kubenswrapper[4772]: I1122 11:00:39.038090 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96mfd\" (UniqueName: \"kubernetes.io/projected/aaa05e69-3162-4e89-925c-dd99d6a35bba-kube-api-access-96mfd\") pod \"nova-cell1-01a0-account-create-4cpjc\" (UID: \"aaa05e69-3162-4e89-925c-dd99d6a35bba\") " pod="openstack/nova-cell1-01a0-account-create-4cpjc" Nov 22 11:00:39 crc kubenswrapper[4772]: I1122 11:00:39.117385 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-5903-account-create-hhqx8" Nov 22 11:00:39 crc kubenswrapper[4772]: I1122 11:00:39.139899 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96mfd\" (UniqueName: \"kubernetes.io/projected/aaa05e69-3162-4e89-925c-dd99d6a35bba-kube-api-access-96mfd\") pod \"nova-cell1-01a0-account-create-4cpjc\" (UID: \"aaa05e69-3162-4e89-925c-dd99d6a35bba\") " pod="openstack/nova-cell1-01a0-account-create-4cpjc" Nov 22 11:00:39 crc kubenswrapper[4772]: I1122 11:00:39.175121 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96mfd\" (UniqueName: \"kubernetes.io/projected/aaa05e69-3162-4e89-925c-dd99d6a35bba-kube-api-access-96mfd\") pod \"nova-cell1-01a0-account-create-4cpjc\" (UID: \"aaa05e69-3162-4e89-925c-dd99d6a35bba\") " pod="openstack/nova-cell1-01a0-account-create-4cpjc" Nov 22 11:00:39 crc kubenswrapper[4772]: I1122 11:00:39.212207 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-01a0-account-create-4cpjc" Nov 22 11:00:39 crc kubenswrapper[4772]: I1122 11:00:39.599639 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"17871b70-2c40-4198-bff5-51dd45433e3c","Type":"ContainerStarted","Data":"5ae69e6a6e26a877065a2314d04edb367d22e92aca224c1733e39c88f3067744"} Nov 22 11:00:39 crc kubenswrapper[4772]: I1122 11:00:39.600150 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="17871b70-2c40-4198-bff5-51dd45433e3c" containerName="ceilometer-central-agent" containerID="cri-o://2e6d98f54ea53eb0836efdb73e52335ebf168d0412666be2217aba72597c3efd" gracePeriod=30 Nov 22 11:00:39 crc kubenswrapper[4772]: I1122 11:00:39.600288 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 22 11:00:39 crc kubenswrapper[4772]: I1122 11:00:39.600674 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="17871b70-2c40-4198-bff5-51dd45433e3c" containerName="proxy-httpd" containerID="cri-o://5ae69e6a6e26a877065a2314d04edb367d22e92aca224c1733e39c88f3067744" gracePeriod=30 Nov 22 11:00:39 crc kubenswrapper[4772]: I1122 11:00:39.600740 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="17871b70-2c40-4198-bff5-51dd45433e3c" containerName="sg-core" containerID="cri-o://8b6f1d83ac62e4f0e8085738e26e775886bc1ecb90c7d40de152f737325aa4b2" gracePeriod=30 Nov 22 11:00:39 crc kubenswrapper[4772]: I1122 11:00:39.600892 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="17871b70-2c40-4198-bff5-51dd45433e3c" containerName="ceilometer-notification-agent" containerID="cri-o://4f51ea11a8c42ea46e7b65c22841d1447126c890aebfa848ed0545093df86951" gracePeriod=30 Nov 22 11:00:39 crc kubenswrapper[4772]: I1122 11:00:39.634076 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.50551914 podStartE2EDuration="5.634039712s" podCreationTimestamp="2025-11-22 11:00:34 +0000 UTC" firstStartedPulling="2025-11-22 11:00:35.481713392 +0000 UTC m=+1355.721157876" lastFinishedPulling="2025-11-22 11:00:38.610233954 +0000 UTC m=+1358.849678448" observedRunningTime="2025-11-22 11:00:39.632607267 +0000 UTC m=+1359.872051761" watchObservedRunningTime="2025-11-22 11:00:39.634039712 +0000 UTC 
m=+1359.873484206" Nov 22 11:00:39 crc kubenswrapper[4772]: I1122 11:00:39.762751 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-5903-account-create-hhqx8"] Nov 22 11:00:39 crc kubenswrapper[4772]: I1122 11:00:39.847799 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-01a0-account-create-4cpjc"] Nov 22 11:00:40 crc kubenswrapper[4772]: I1122 11:00:40.628267 4772 generic.go:334] "Generic (PLEG): container finished" podID="849e3f53-4dc6-4e20-aa04-0e2a0ae14427" containerID="c79cb770e1e50bba1ffd90d856560b25af0b738c945cfa34cbf164d07bd32f5a" exitCode=0 Nov 22 11:00:40 crc kubenswrapper[4772]: I1122 11:00:40.628620 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5903-account-create-hhqx8" event={"ID":"849e3f53-4dc6-4e20-aa04-0e2a0ae14427","Type":"ContainerDied","Data":"c79cb770e1e50bba1ffd90d856560b25af0b738c945cfa34cbf164d07bd32f5a"} Nov 22 11:00:40 crc kubenswrapper[4772]: I1122 11:00:40.628649 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5903-account-create-hhqx8" event={"ID":"849e3f53-4dc6-4e20-aa04-0e2a0ae14427","Type":"ContainerStarted","Data":"16fc1e86fb0689ce67e500a13aaa892891d3d5ac4ab143d5d286bfceaea623ab"} Nov 22 11:00:40 crc kubenswrapper[4772]: I1122 11:00:40.631198 4772 generic.go:334] "Generic (PLEG): container finished" podID="aaa05e69-3162-4e89-925c-dd99d6a35bba" containerID="a05e82ad97693943b66d110b04e16e64eff2aeb42ec422477aa310a4d5d06e23" exitCode=0 Nov 22 11:00:40 crc kubenswrapper[4772]: I1122 11:00:40.631261 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-01a0-account-create-4cpjc" event={"ID":"aaa05e69-3162-4e89-925c-dd99d6a35bba","Type":"ContainerDied","Data":"a05e82ad97693943b66d110b04e16e64eff2aeb42ec422477aa310a4d5d06e23"} Nov 22 11:00:40 crc kubenswrapper[4772]: I1122 11:00:40.631540 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-01a0-account-create-4cpjc" event={"ID":"aaa05e69-3162-4e89-925c-dd99d6a35bba","Type":"ContainerStarted","Data":"9e2b16b8cd8e7c1552c42b890e6dbef7bd469d80a81406d3ce2b92faf511dbbd"} Nov 22 11:00:40 crc kubenswrapper[4772]: I1122 11:00:40.633971 4772 generic.go:334] "Generic (PLEG): container finished" podID="17871b70-2c40-4198-bff5-51dd45433e3c" containerID="5ae69e6a6e26a877065a2314d04edb367d22e92aca224c1733e39c88f3067744" exitCode=0 Nov 22 11:00:40 crc kubenswrapper[4772]: I1122 11:00:40.634001 4772 generic.go:334] "Generic (PLEG): container finished" podID="17871b70-2c40-4198-bff5-51dd45433e3c" containerID="8b6f1d83ac62e4f0e8085738e26e775886bc1ecb90c7d40de152f737325aa4b2" exitCode=2 Nov 22 11:00:40 crc kubenswrapper[4772]: I1122 11:00:40.634011 4772 generic.go:334] "Generic (PLEG): container finished" podID="17871b70-2c40-4198-bff5-51dd45433e3c" containerID="4f51ea11a8c42ea46e7b65c22841d1447126c890aebfa848ed0545093df86951" exitCode=0 Nov 22 11:00:40 crc kubenswrapper[4772]: I1122 11:00:40.634031 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"17871b70-2c40-4198-bff5-51dd45433e3c","Type":"ContainerDied","Data":"5ae69e6a6e26a877065a2314d04edb367d22e92aca224c1733e39c88f3067744"} Nov 22 11:00:40 crc kubenswrapper[4772]: I1122 11:00:40.634050 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"17871b70-2c40-4198-bff5-51dd45433e3c","Type":"ContainerDied","Data":"8b6f1d83ac62e4f0e8085738e26e775886bc1ecb90c7d40de152f737325aa4b2"} Nov 22 11:00:40 crc 
kubenswrapper[4772]: I1122 11:00:40.634065 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"17871b70-2c40-4198-bff5-51dd45433e3c","Type":"ContainerDied","Data":"4f51ea11a8c42ea46e7b65c22841d1447126c890aebfa848ed0545093df86951"} Nov 22 11:00:42 crc kubenswrapper[4772]: I1122 11:00:42.115950 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5903-account-create-hhqx8" Nov 22 11:00:42 crc kubenswrapper[4772]: I1122 11:00:42.119947 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-01a0-account-create-4cpjc" Nov 22 11:00:42 crc kubenswrapper[4772]: I1122 11:00:42.253382 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-65d7b679bd-q7h6t" Nov 22 11:00:42 crc kubenswrapper[4772]: I1122 11:00:42.258742 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96mfd\" (UniqueName: \"kubernetes.io/projected/aaa05e69-3162-4e89-925c-dd99d6a35bba-kube-api-access-96mfd\") pod \"aaa05e69-3162-4e89-925c-dd99d6a35bba\" (UID: \"aaa05e69-3162-4e89-925c-dd99d6a35bba\") " Nov 22 11:00:42 crc kubenswrapper[4772]: I1122 11:00:42.261770 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-js8jd\" (UniqueName: \"kubernetes.io/projected/849e3f53-4dc6-4e20-aa04-0e2a0ae14427-kube-api-access-js8jd\") pod \"849e3f53-4dc6-4e20-aa04-0e2a0ae14427\" (UID: \"849e3f53-4dc6-4e20-aa04-0e2a0ae14427\") " Nov 22 11:00:42 crc kubenswrapper[4772]: I1122 11:00:42.265574 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aaa05e69-3162-4e89-925c-dd99d6a35bba-kube-api-access-96mfd" (OuterVolumeSpecName: "kube-api-access-96mfd") pod "aaa05e69-3162-4e89-925c-dd99d6a35bba" (UID: "aaa05e69-3162-4e89-925c-dd99d6a35bba"). InnerVolumeSpecName "kube-api-access-96mfd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:00:42 crc kubenswrapper[4772]: I1122 11:00:42.267113 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/849e3f53-4dc6-4e20-aa04-0e2a0ae14427-kube-api-access-js8jd" (OuterVolumeSpecName: "kube-api-access-js8jd") pod "849e3f53-4dc6-4e20-aa04-0e2a0ae14427" (UID: "849e3f53-4dc6-4e20-aa04-0e2a0ae14427"). InnerVolumeSpecName "kube-api-access-js8jd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:00:42 crc kubenswrapper[4772]: I1122 11:00:42.365902 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3ece6c8-49c6-468f-b579-6bd9b2bea8bd-combined-ca-bundle\") pod \"d3ece6c8-49c6-468f-b579-6bd9b2bea8bd\" (UID: \"d3ece6c8-49c6-468f-b579-6bd9b2bea8bd\") " Nov 22 11:00:42 crc kubenswrapper[4772]: I1122 11:00:42.365978 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d3ece6c8-49c6-468f-b579-6bd9b2bea8bd-config\") pod \"d3ece6c8-49c6-468f-b579-6bd9b2bea8bd\" (UID: \"d3ece6c8-49c6-468f-b579-6bd9b2bea8bd\") " Nov 22 11:00:42 crc kubenswrapper[4772]: I1122 11:00:42.366094 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3ece6c8-49c6-468f-b579-6bd9b2bea8bd-ovndb-tls-certs\") pod \"d3ece6c8-49c6-468f-b579-6bd9b2bea8bd\" (UID: \"d3ece6c8-49c6-468f-b579-6bd9b2bea8bd\") " Nov 22 11:00:42 crc kubenswrapper[4772]: I1122 11:00:42.366137 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d3ece6c8-49c6-468f-b579-6bd9b2bea8bd-httpd-config\") pod \"d3ece6c8-49c6-468f-b579-6bd9b2bea8bd\" (UID: \"d3ece6c8-49c6-468f-b579-6bd9b2bea8bd\") " Nov 22 11:00:42 crc kubenswrapper[4772]: I1122 11:00:42.366186 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5qk4\" (UniqueName: \"kubernetes.io/projected/d3ece6c8-49c6-468f-b579-6bd9b2bea8bd-kube-api-access-s5qk4\") pod \"d3ece6c8-49c6-468f-b579-6bd9b2bea8bd\" (UID: \"d3ece6c8-49c6-468f-b579-6bd9b2bea8bd\") " Nov 22 11:00:42 crc kubenswrapper[4772]: I1122 11:00:42.366545 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-js8jd\" (UniqueName: \"kubernetes.io/projected/849e3f53-4dc6-4e20-aa04-0e2a0ae14427-kube-api-access-js8jd\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:42 crc kubenswrapper[4772]: I1122 11:00:42.366561 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96mfd\" (UniqueName: \"kubernetes.io/projected/aaa05e69-3162-4e89-925c-dd99d6a35bba-kube-api-access-96mfd\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:42 crc kubenswrapper[4772]: I1122 11:00:42.370321 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3ece6c8-49c6-468f-b579-6bd9b2bea8bd-kube-api-access-s5qk4" (OuterVolumeSpecName: "kube-api-access-s5qk4") pod "d3ece6c8-49c6-468f-b579-6bd9b2bea8bd" (UID: "d3ece6c8-49c6-468f-b579-6bd9b2bea8bd"). InnerVolumeSpecName "kube-api-access-s5qk4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:00:42 crc kubenswrapper[4772]: I1122 11:00:42.371830 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3ece6c8-49c6-468f-b579-6bd9b2bea8bd-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "d3ece6c8-49c6-468f-b579-6bd9b2bea8bd" (UID: "d3ece6c8-49c6-468f-b579-6bd9b2bea8bd"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:00:42 crc kubenswrapper[4772]: I1122 11:00:42.418296 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3ece6c8-49c6-468f-b579-6bd9b2bea8bd-config" (OuterVolumeSpecName: "config") pod "d3ece6c8-49c6-468f-b579-6bd9b2bea8bd" (UID: "d3ece6c8-49c6-468f-b579-6bd9b2bea8bd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:00:42 crc kubenswrapper[4772]: I1122 11:00:42.423620 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3ece6c8-49c6-468f-b579-6bd9b2bea8bd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d3ece6c8-49c6-468f-b579-6bd9b2bea8bd" (UID: "d3ece6c8-49c6-468f-b579-6bd9b2bea8bd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:00:42 crc kubenswrapper[4772]: I1122 11:00:42.449199 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3ece6c8-49c6-468f-b579-6bd9b2bea8bd-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "d3ece6c8-49c6-468f-b579-6bd9b2bea8bd" (UID: "d3ece6c8-49c6-468f-b579-6bd9b2bea8bd"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:00:42 crc kubenswrapper[4772]: I1122 11:00:42.468440 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3ece6c8-49c6-468f-b579-6bd9b2bea8bd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:42 crc kubenswrapper[4772]: I1122 11:00:42.468475 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/d3ece6c8-49c6-468f-b579-6bd9b2bea8bd-config\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:42 crc kubenswrapper[4772]: I1122 11:00:42.468485 4772 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3ece6c8-49c6-468f-b579-6bd9b2bea8bd-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:42 crc kubenswrapper[4772]: I1122 11:00:42.468493 4772 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d3ece6c8-49c6-468f-b579-6bd9b2bea8bd-httpd-config\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:42 crc kubenswrapper[4772]: I1122 11:00:42.468501 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5qk4\" (UniqueName: \"kubernetes.io/projected/d3ece6c8-49c6-468f-b579-6bd9b2bea8bd-kube-api-access-s5qk4\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:42 crc kubenswrapper[4772]: I1122 11:00:42.652431 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-01a0-account-create-4cpjc" event={"ID":"aaa05e69-3162-4e89-925c-dd99d6a35bba","Type":"ContainerDied","Data":"9e2b16b8cd8e7c1552c42b890e6dbef7bd469d80a81406d3ce2b92faf511dbbd"} Nov 22 11:00:42 crc kubenswrapper[4772]: I1122 11:00:42.652476 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e2b16b8cd8e7c1552c42b890e6dbef7bd469d80a81406d3ce2b92faf511dbbd" Nov 22 11:00:42 crc kubenswrapper[4772]: I1122 11:00:42.652534 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-01a0-account-create-4cpjc" Nov 22 11:00:42 crc kubenswrapper[4772]: I1122 11:00:42.656179 4772 generic.go:334] "Generic (PLEG): container finished" podID="d3ece6c8-49c6-468f-b579-6bd9b2bea8bd" containerID="150d3713d1208fe373a8e9e7ada8273a9fe8b7d1043c77880f0343a304b3c857" exitCode=0 Nov 22 11:00:42 crc kubenswrapper[4772]: I1122 11:00:42.656242 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-65d7b679bd-q7h6t" event={"ID":"d3ece6c8-49c6-468f-b579-6bd9b2bea8bd","Type":"ContainerDied","Data":"150d3713d1208fe373a8e9e7ada8273a9fe8b7d1043c77880f0343a304b3c857"} Nov 22 11:00:42 crc kubenswrapper[4772]: I1122 11:00:42.656272 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-65d7b679bd-q7h6t" event={"ID":"d3ece6c8-49c6-468f-b579-6bd9b2bea8bd","Type":"ContainerDied","Data":"3f3da895623f9b3cef43c42cb364109c6134a2579869fc1cf0d63581b7931cf2"} Nov 22 11:00:42 crc kubenswrapper[4772]: I1122 11:00:42.656288 4772 scope.go:117] "RemoveContainer" containerID="6b97a32753a5e9c5508fb3ec3c55c7b86b34c500f781ecd1d052a023e0fad340" Nov 22 11:00:42 crc kubenswrapper[4772]: I1122 11:00:42.656405 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-65d7b679bd-q7h6t" Nov 22 11:00:42 crc kubenswrapper[4772]: I1122 11:00:42.662033 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5903-account-create-hhqx8" event={"ID":"849e3f53-4dc6-4e20-aa04-0e2a0ae14427","Type":"ContainerDied","Data":"16fc1e86fb0689ce67e500a13aaa892891d3d5ac4ab143d5d286bfceaea623ab"} Nov 22 11:00:42 crc kubenswrapper[4772]: I1122 11:00:42.662103 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16fc1e86fb0689ce67e500a13aaa892891d3d5ac4ab143d5d286bfceaea623ab" Nov 22 11:00:42 crc kubenswrapper[4772]: I1122 11:00:42.662189 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-5903-account-create-hhqx8" Nov 22 11:00:42 crc kubenswrapper[4772]: I1122 11:00:42.698099 4772 scope.go:117] "RemoveContainer" containerID="150d3713d1208fe373a8e9e7ada8273a9fe8b7d1043c77880f0343a304b3c857" Nov 22 11:00:42 crc kubenswrapper[4772]: I1122 11:00:42.705396 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-65d7b679bd-q7h6t"] Nov 22 11:00:42 crc kubenswrapper[4772]: I1122 11:00:42.716460 4772 scope.go:117] "RemoveContainer" containerID="6b97a32753a5e9c5508fb3ec3c55c7b86b34c500f781ecd1d052a023e0fad340" Nov 22 11:00:42 crc kubenswrapper[4772]: E1122 11:00:42.717700 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b97a32753a5e9c5508fb3ec3c55c7b86b34c500f781ecd1d052a023e0fad340\": container with ID starting with 6b97a32753a5e9c5508fb3ec3c55c7b86b34c500f781ecd1d052a023e0fad340 not found: ID does not exist" containerID="6b97a32753a5e9c5508fb3ec3c55c7b86b34c500f781ecd1d052a023e0fad340" Nov 22 11:00:42 crc kubenswrapper[4772]: I1122 11:00:42.717731 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b97a32753a5e9c5508fb3ec3c55c7b86b34c500f781ecd1d052a023e0fad340"} err="failed to get container status \"6b97a32753a5e9c5508fb3ec3c55c7b86b34c500f781ecd1d052a023e0fad340\": rpc error: code = NotFound desc = could not find container \"6b97a32753a5e9c5508fb3ec3c55c7b86b34c500f781ecd1d052a023e0fad340\": container with ID starting with 6b97a32753a5e9c5508fb3ec3c55c7b86b34c500f781ecd1d052a023e0fad340 not found: ID does not exist" Nov 22 11:00:42 crc kubenswrapper[4772]: I1122 11:00:42.717751 4772 scope.go:117] "RemoveContainer" containerID="150d3713d1208fe373a8e9e7ada8273a9fe8b7d1043c77880f0343a304b3c857" Nov 22 11:00:42 crc kubenswrapper[4772]: E1122 11:00:42.718157 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"150d3713d1208fe373a8e9e7ada8273a9fe8b7d1043c77880f0343a304b3c857\": container with ID starting with 150d3713d1208fe373a8e9e7ada8273a9fe8b7d1043c77880f0343a304b3c857 not found: ID does not exist" containerID="150d3713d1208fe373a8e9e7ada8273a9fe8b7d1043c77880f0343a304b3c857" Nov 22 11:00:42 crc kubenswrapper[4772]: I1122 11:00:42.718180 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"150d3713d1208fe373a8e9e7ada8273a9fe8b7d1043c77880f0343a304b3c857"} err="failed to get container status \"150d3713d1208fe373a8e9e7ada8273a9fe8b7d1043c77880f0343a304b3c857\": rpc error: code = NotFound desc = could not find container \"150d3713d1208fe373a8e9e7ada8273a9fe8b7d1043c77880f0343a304b3c857\": container with ID starting with 150d3713d1208fe373a8e9e7ada8273a9fe8b7d1043c77880f0343a304b3c857 not found: ID does not exist" Nov 22 11:00:42 crc kubenswrapper[4772]: I1122 11:00:42.720872 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-65d7b679bd-q7h6t"] Nov 22 11:00:43 crc kubenswrapper[4772]: I1122 11:00:43.423877 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3ece6c8-49c6-468f-b579-6bd9b2bea8bd" path="/var/lib/kubelet/pods/d3ece6c8-49c6-468f-b579-6bd9b2bea8bd/volumes" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.015635 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-7q2nv"] Nov 22 11:00:44 crc kubenswrapper[4772]: E1122 11:00:44.016118 4772 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaa05e69-3162-4e89-925c-dd99d6a35bba" containerName="mariadb-account-create" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.016143 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaa05e69-3162-4e89-925c-dd99d6a35bba" containerName="mariadb-account-create" Nov 22 11:00:44 crc kubenswrapper[4772]: E1122 11:00:44.016161 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3ece6c8-49c6-468f-b579-6bd9b2bea8bd" containerName="neutron-httpd" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.016170 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3ece6c8-49c6-468f-b579-6bd9b2bea8bd" containerName="neutron-httpd" Nov 22 11:00:44 crc kubenswrapper[4772]: E1122 11:00:44.016183 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="849e3f53-4dc6-4e20-aa04-0e2a0ae14427" containerName="mariadb-account-create" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.016194 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="849e3f53-4dc6-4e20-aa04-0e2a0ae14427" containerName="mariadb-account-create" Nov 22 11:00:44 crc kubenswrapper[4772]: E1122 11:00:44.016230 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3ece6c8-49c6-468f-b579-6bd9b2bea8bd" containerName="neutron-api" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.016237 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3ece6c8-49c6-468f-b579-6bd9b2bea8bd" containerName="neutron-api" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.016473 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="849e3f53-4dc6-4e20-aa04-0e2a0ae14427" containerName="mariadb-account-create" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.016494 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3ece6c8-49c6-468f-b579-6bd9b2bea8bd" containerName="neutron-api" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.016506 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="aaa05e69-3162-4e89-925c-dd99d6a35bba" containerName="mariadb-account-create" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.016524 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3ece6c8-49c6-468f-b579-6bd9b2bea8bd" containerName="neutron-httpd" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.017272 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-7q2nv" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.019292 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.020623 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.020770 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-fnwl6" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.031856 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-7q2nv"] Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.198318 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th45m\" (UniqueName: \"kubernetes.io/projected/0fb39191-1ec6-4ea4-84d4-8c4dc36f1031-kube-api-access-th45m\") pod \"nova-cell0-conductor-db-sync-7q2nv\" (UID: \"0fb39191-1ec6-4ea4-84d4-8c4dc36f1031\") " pod="openstack/nova-cell0-conductor-db-sync-7q2nv" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.198408 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fb39191-1ec6-4ea4-84d4-8c4dc36f1031-config-data\") pod \"nova-cell0-conductor-db-sync-7q2nv\" (UID: \"0fb39191-1ec6-4ea4-84d4-8c4dc36f1031\") " pod="openstack/nova-cell0-conductor-db-sync-7q2nv" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.198448 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fb39191-1ec6-4ea4-84d4-8c4dc36f1031-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-7q2nv\" (UID: \"0fb39191-1ec6-4ea4-84d4-8c4dc36f1031\") " pod="openstack/nova-cell0-conductor-db-sync-7q2nv" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.198511 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0fb39191-1ec6-4ea4-84d4-8c4dc36f1031-scripts\") pod \"nova-cell0-conductor-db-sync-7q2nv\" (UID: \"0fb39191-1ec6-4ea4-84d4-8c4dc36f1031\") " pod="openstack/nova-cell0-conductor-db-sync-7q2nv" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.312135 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-th45m\" (UniqueName: \"kubernetes.io/projected/0fb39191-1ec6-4ea4-84d4-8c4dc36f1031-kube-api-access-th45m\") pod \"nova-cell0-conductor-db-sync-7q2nv\" (UID: \"0fb39191-1ec6-4ea4-84d4-8c4dc36f1031\") " pod="openstack/nova-cell0-conductor-db-sync-7q2nv" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.312258 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fb39191-1ec6-4ea4-84d4-8c4dc36f1031-config-data\") pod \"nova-cell0-conductor-db-sync-7q2nv\" (UID: \"0fb39191-1ec6-4ea4-84d4-8c4dc36f1031\") " pod="openstack/nova-cell0-conductor-db-sync-7q2nv" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.312308 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fb39191-1ec6-4ea4-84d4-8c4dc36f1031-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-7q2nv\" 
(UID: \"0fb39191-1ec6-4ea4-84d4-8c4dc36f1031\") " pod="openstack/nova-cell0-conductor-db-sync-7q2nv" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.312405 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0fb39191-1ec6-4ea4-84d4-8c4dc36f1031-scripts\") pod \"nova-cell0-conductor-db-sync-7q2nv\" (UID: \"0fb39191-1ec6-4ea4-84d4-8c4dc36f1031\") " pod="openstack/nova-cell0-conductor-db-sync-7q2nv" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.320148 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0fb39191-1ec6-4ea4-84d4-8c4dc36f1031-scripts\") pod \"nova-cell0-conductor-db-sync-7q2nv\" (UID: \"0fb39191-1ec6-4ea4-84d4-8c4dc36f1031\") " pod="openstack/nova-cell0-conductor-db-sync-7q2nv" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.327668 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fb39191-1ec6-4ea4-84d4-8c4dc36f1031-config-data\") pod \"nova-cell0-conductor-db-sync-7q2nv\" (UID: \"0fb39191-1ec6-4ea4-84d4-8c4dc36f1031\") " pod="openstack/nova-cell0-conductor-db-sync-7q2nv" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.328227 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fb39191-1ec6-4ea4-84d4-8c4dc36f1031-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-7q2nv\" (UID: \"0fb39191-1ec6-4ea4-84d4-8c4dc36f1031\") " pod="openstack/nova-cell0-conductor-db-sync-7q2nv" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.351592 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-th45m\" (UniqueName: \"kubernetes.io/projected/0fb39191-1ec6-4ea4-84d4-8c4dc36f1031-kube-api-access-th45m\") pod \"nova-cell0-conductor-db-sync-7q2nv\" (UID: \"0fb39191-1ec6-4ea4-84d4-8c4dc36f1031\") " pod="openstack/nova-cell0-conductor-db-sync-7q2nv" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.352592 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-7q2nv" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.495681 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.513992 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4bzgr\" (UniqueName: \"kubernetes.io/projected/17871b70-2c40-4198-bff5-51dd45433e3c-kube-api-access-4bzgr\") pod \"17871b70-2c40-4198-bff5-51dd45433e3c\" (UID: \"17871b70-2c40-4198-bff5-51dd45433e3c\") " Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.514093 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/17871b70-2c40-4198-bff5-51dd45433e3c-sg-core-conf-yaml\") pod \"17871b70-2c40-4198-bff5-51dd45433e3c\" (UID: \"17871b70-2c40-4198-bff5-51dd45433e3c\") " Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.514185 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17871b70-2c40-4198-bff5-51dd45433e3c-scripts\") pod \"17871b70-2c40-4198-bff5-51dd45433e3c\" (UID: \"17871b70-2c40-4198-bff5-51dd45433e3c\") " Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.514228 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/17871b70-2c40-4198-bff5-51dd45433e3c-log-httpd\") pod \"17871b70-2c40-4198-bff5-51dd45433e3c\" (UID: \"17871b70-2c40-4198-bff5-51dd45433e3c\") " Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.514953 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17871b70-2c40-4198-bff5-51dd45433e3c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "17871b70-2c40-4198-bff5-51dd45433e3c" (UID: "17871b70-2c40-4198-bff5-51dd45433e3c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.515024 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17871b70-2c40-4198-bff5-51dd45433e3c-config-data\") pod \"17871b70-2c40-4198-bff5-51dd45433e3c\" (UID: \"17871b70-2c40-4198-bff5-51dd45433e3c\") " Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.515127 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/17871b70-2c40-4198-bff5-51dd45433e3c-run-httpd\") pod \"17871b70-2c40-4198-bff5-51dd45433e3c\" (UID: \"17871b70-2c40-4198-bff5-51dd45433e3c\") " Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.515169 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17871b70-2c40-4198-bff5-51dd45433e3c-combined-ca-bundle\") pod \"17871b70-2c40-4198-bff5-51dd45433e3c\" (UID: \"17871b70-2c40-4198-bff5-51dd45433e3c\") " Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.515350 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17871b70-2c40-4198-bff5-51dd45433e3c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "17871b70-2c40-4198-bff5-51dd45433e3c" (UID: "17871b70-2c40-4198-bff5-51dd45433e3c"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.516925 4772 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/17871b70-2c40-4198-bff5-51dd45433e3c-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.516952 4772 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/17871b70-2c40-4198-bff5-51dd45433e3c-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.520209 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17871b70-2c40-4198-bff5-51dd45433e3c-scripts" (OuterVolumeSpecName: "scripts") pod "17871b70-2c40-4198-bff5-51dd45433e3c" (UID: "17871b70-2c40-4198-bff5-51dd45433e3c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.520329 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17871b70-2c40-4198-bff5-51dd45433e3c-kube-api-access-4bzgr" (OuterVolumeSpecName: "kube-api-access-4bzgr") pod "17871b70-2c40-4198-bff5-51dd45433e3c" (UID: "17871b70-2c40-4198-bff5-51dd45433e3c"). InnerVolumeSpecName "kube-api-access-4bzgr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.586033 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17871b70-2c40-4198-bff5-51dd45433e3c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "17871b70-2c40-4198-bff5-51dd45433e3c" (UID: "17871b70-2c40-4198-bff5-51dd45433e3c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.627295 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4bzgr\" (UniqueName: \"kubernetes.io/projected/17871b70-2c40-4198-bff5-51dd45433e3c-kube-api-access-4bzgr\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.628400 4772 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/17871b70-2c40-4198-bff5-51dd45433e3c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.628507 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17871b70-2c40-4198-bff5-51dd45433e3c-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.642391 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17871b70-2c40-4198-bff5-51dd45433e3c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "17871b70-2c40-4198-bff5-51dd45433e3c" (UID: "17871b70-2c40-4198-bff5-51dd45433e3c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.656310 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17871b70-2c40-4198-bff5-51dd45433e3c-config-data" (OuterVolumeSpecName: "config-data") pod "17871b70-2c40-4198-bff5-51dd45433e3c" (UID: "17871b70-2c40-4198-bff5-51dd45433e3c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.708889 4772 generic.go:334] "Generic (PLEG): container finished" podID="17871b70-2c40-4198-bff5-51dd45433e3c" containerID="2e6d98f54ea53eb0836efdb73e52335ebf168d0412666be2217aba72597c3efd" exitCode=0 Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.708933 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"17871b70-2c40-4198-bff5-51dd45433e3c","Type":"ContainerDied","Data":"2e6d98f54ea53eb0836efdb73e52335ebf168d0412666be2217aba72597c3efd"} Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.708959 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"17871b70-2c40-4198-bff5-51dd45433e3c","Type":"ContainerDied","Data":"adec81dd7ec7ff4abdaf41327c70205aae7c88237d0cb10f5ecbc7142085bce2"} Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.708975 4772 scope.go:117] "RemoveContainer" containerID="5ae69e6a6e26a877065a2314d04edb367d22e92aca224c1733e39c88f3067744" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.709114 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.730817 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17871b70-2c40-4198-bff5-51dd45433e3c-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.731127 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17871b70-2c40-4198-bff5-51dd45433e3c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.758115 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.771479 4772 scope.go:117] "RemoveContainer" containerID="8b6f1d83ac62e4f0e8085738e26e775886bc1ecb90c7d40de152f737325aa4b2" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.771615 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.781140 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 22 11:00:44 crc kubenswrapper[4772]: E1122 11:00:44.781660 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17871b70-2c40-4198-bff5-51dd45433e3c" containerName="ceilometer-central-agent" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.781680 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="17871b70-2c40-4198-bff5-51dd45433e3c" containerName="ceilometer-central-agent" Nov 22 11:00:44 crc kubenswrapper[4772]: E1122 11:00:44.781697 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17871b70-2c40-4198-bff5-51dd45433e3c" containerName="sg-core" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.781705 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="17871b70-2c40-4198-bff5-51dd45433e3c" containerName="sg-core" Nov 22 11:00:44 crc kubenswrapper[4772]: E1122 11:00:44.781723 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17871b70-2c40-4198-bff5-51dd45433e3c" containerName="proxy-httpd" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.781731 4772 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="17871b70-2c40-4198-bff5-51dd45433e3c" containerName="proxy-httpd" Nov 22 11:00:44 crc kubenswrapper[4772]: E1122 11:00:44.781745 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17871b70-2c40-4198-bff5-51dd45433e3c" containerName="ceilometer-notification-agent" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.781753 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="17871b70-2c40-4198-bff5-51dd45433e3c" containerName="ceilometer-notification-agent" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.782035 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="17871b70-2c40-4198-bff5-51dd45433e3c" containerName="ceilometer-notification-agent" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.782074 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="17871b70-2c40-4198-bff5-51dd45433e3c" containerName="sg-core" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.782096 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="17871b70-2c40-4198-bff5-51dd45433e3c" containerName="proxy-httpd" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.782115 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="17871b70-2c40-4198-bff5-51dd45433e3c" containerName="ceilometer-central-agent" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.784258 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.791657 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.791867 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.794336 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.822092 4772 scope.go:117] "RemoveContainer" containerID="4f51ea11a8c42ea46e7b65c22841d1447126c890aebfa848ed0545093df86951" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.832416 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c974841f-21ea-433b-aa7d-5dae406fbb6f-scripts\") pod \"ceilometer-0\" (UID: \"c974841f-21ea-433b-aa7d-5dae406fbb6f\") " pod="openstack/ceilometer-0" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.832482 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c974841f-21ea-433b-aa7d-5dae406fbb6f-log-httpd\") pod \"ceilometer-0\" (UID: \"c974841f-21ea-433b-aa7d-5dae406fbb6f\") " pod="openstack/ceilometer-0" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.832657 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c974841f-21ea-433b-aa7d-5dae406fbb6f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c974841f-21ea-433b-aa7d-5dae406fbb6f\") " pod="openstack/ceilometer-0" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.832724 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prk4n\" (UniqueName: \"kubernetes.io/projected/c974841f-21ea-433b-aa7d-5dae406fbb6f-kube-api-access-prk4n\") pod 
\"ceilometer-0\" (UID: \"c974841f-21ea-433b-aa7d-5dae406fbb6f\") " pod="openstack/ceilometer-0" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.832997 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c974841f-21ea-433b-aa7d-5dae406fbb6f-config-data\") pod \"ceilometer-0\" (UID: \"c974841f-21ea-433b-aa7d-5dae406fbb6f\") " pod="openstack/ceilometer-0" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.833192 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c974841f-21ea-433b-aa7d-5dae406fbb6f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c974841f-21ea-433b-aa7d-5dae406fbb6f\") " pod="openstack/ceilometer-0" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.833308 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c974841f-21ea-433b-aa7d-5dae406fbb6f-run-httpd\") pod \"ceilometer-0\" (UID: \"c974841f-21ea-433b-aa7d-5dae406fbb6f\") " pod="openstack/ceilometer-0" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.846912 4772 scope.go:117] "RemoveContainer" containerID="2e6d98f54ea53eb0836efdb73e52335ebf168d0412666be2217aba72597c3efd" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.868328 4772 scope.go:117] "RemoveContainer" containerID="5ae69e6a6e26a877065a2314d04edb367d22e92aca224c1733e39c88f3067744" Nov 22 11:00:44 crc kubenswrapper[4772]: E1122 11:00:44.868899 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ae69e6a6e26a877065a2314d04edb367d22e92aca224c1733e39c88f3067744\": container with ID starting with 5ae69e6a6e26a877065a2314d04edb367d22e92aca224c1733e39c88f3067744 not found: ID does not exist" containerID="5ae69e6a6e26a877065a2314d04edb367d22e92aca224c1733e39c88f3067744" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.868959 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ae69e6a6e26a877065a2314d04edb367d22e92aca224c1733e39c88f3067744"} err="failed to get container status \"5ae69e6a6e26a877065a2314d04edb367d22e92aca224c1733e39c88f3067744\": rpc error: code = NotFound desc = could not find container \"5ae69e6a6e26a877065a2314d04edb367d22e92aca224c1733e39c88f3067744\": container with ID starting with 5ae69e6a6e26a877065a2314d04edb367d22e92aca224c1733e39c88f3067744 not found: ID does not exist" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.868995 4772 scope.go:117] "RemoveContainer" containerID="8b6f1d83ac62e4f0e8085738e26e775886bc1ecb90c7d40de152f737325aa4b2" Nov 22 11:00:44 crc kubenswrapper[4772]: E1122 11:00:44.870226 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b6f1d83ac62e4f0e8085738e26e775886bc1ecb90c7d40de152f737325aa4b2\": container with ID starting with 8b6f1d83ac62e4f0e8085738e26e775886bc1ecb90c7d40de152f737325aa4b2 not found: ID does not exist" containerID="8b6f1d83ac62e4f0e8085738e26e775886bc1ecb90c7d40de152f737325aa4b2" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.870272 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b6f1d83ac62e4f0e8085738e26e775886bc1ecb90c7d40de152f737325aa4b2"} err="failed to get container status 
\"8b6f1d83ac62e4f0e8085738e26e775886bc1ecb90c7d40de152f737325aa4b2\": rpc error: code = NotFound desc = could not find container \"8b6f1d83ac62e4f0e8085738e26e775886bc1ecb90c7d40de152f737325aa4b2\": container with ID starting with 8b6f1d83ac62e4f0e8085738e26e775886bc1ecb90c7d40de152f737325aa4b2 not found: ID does not exist" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.870294 4772 scope.go:117] "RemoveContainer" containerID="4f51ea11a8c42ea46e7b65c22841d1447126c890aebfa848ed0545093df86951" Nov 22 11:00:44 crc kubenswrapper[4772]: E1122 11:00:44.870568 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f51ea11a8c42ea46e7b65c22841d1447126c890aebfa848ed0545093df86951\": container with ID starting with 4f51ea11a8c42ea46e7b65c22841d1447126c890aebfa848ed0545093df86951 not found: ID does not exist" containerID="4f51ea11a8c42ea46e7b65c22841d1447126c890aebfa848ed0545093df86951" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.870597 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f51ea11a8c42ea46e7b65c22841d1447126c890aebfa848ed0545093df86951"} err="failed to get container status \"4f51ea11a8c42ea46e7b65c22841d1447126c890aebfa848ed0545093df86951\": rpc error: code = NotFound desc = could not find container \"4f51ea11a8c42ea46e7b65c22841d1447126c890aebfa848ed0545093df86951\": container with ID starting with 4f51ea11a8c42ea46e7b65c22841d1447126c890aebfa848ed0545093df86951 not found: ID does not exist" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.870612 4772 scope.go:117] "RemoveContainer" containerID="2e6d98f54ea53eb0836efdb73e52335ebf168d0412666be2217aba72597c3efd" Nov 22 11:00:44 crc kubenswrapper[4772]: E1122 11:00:44.870933 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e6d98f54ea53eb0836efdb73e52335ebf168d0412666be2217aba72597c3efd\": container with ID starting with 2e6d98f54ea53eb0836efdb73e52335ebf168d0412666be2217aba72597c3efd not found: ID does not exist" containerID="2e6d98f54ea53eb0836efdb73e52335ebf168d0412666be2217aba72597c3efd" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.870989 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e6d98f54ea53eb0836efdb73e52335ebf168d0412666be2217aba72597c3efd"} err="failed to get container status \"2e6d98f54ea53eb0836efdb73e52335ebf168d0412666be2217aba72597c3efd\": rpc error: code = NotFound desc = could not find container \"2e6d98f54ea53eb0836efdb73e52335ebf168d0412666be2217aba72597c3efd\": container with ID starting with 2e6d98f54ea53eb0836efdb73e52335ebf168d0412666be2217aba72597c3efd not found: ID does not exist" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.902711 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-7q2nv"] Nov 22 11:00:44 crc kubenswrapper[4772]: W1122 11:00:44.903064 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0fb39191_1ec6_4ea4_84d4_8c4dc36f1031.slice/crio-4f5a2bf36b26af1000f7d069151545130e46e9f7ab1f0766bf798a888b08980b WatchSource:0}: Error finding container 4f5a2bf36b26af1000f7d069151545130e46e9f7ab1f0766bf798a888b08980b: Status 404 returned error can't find the container with id 4f5a2bf36b26af1000f7d069151545130e46e9f7ab1f0766bf798a888b08980b Nov 22 11:00:44 crc kubenswrapper[4772]: 
I1122 11:00:44.935569 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c974841f-21ea-433b-aa7d-5dae406fbb6f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c974841f-21ea-433b-aa7d-5dae406fbb6f\") " pod="openstack/ceilometer-0" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.935626 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prk4n\" (UniqueName: \"kubernetes.io/projected/c974841f-21ea-433b-aa7d-5dae406fbb6f-kube-api-access-prk4n\") pod \"ceilometer-0\" (UID: \"c974841f-21ea-433b-aa7d-5dae406fbb6f\") " pod="openstack/ceilometer-0" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.935709 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c974841f-21ea-433b-aa7d-5dae406fbb6f-config-data\") pod \"ceilometer-0\" (UID: \"c974841f-21ea-433b-aa7d-5dae406fbb6f\") " pod="openstack/ceilometer-0" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.935768 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c974841f-21ea-433b-aa7d-5dae406fbb6f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c974841f-21ea-433b-aa7d-5dae406fbb6f\") " pod="openstack/ceilometer-0" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.935814 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c974841f-21ea-433b-aa7d-5dae406fbb6f-run-httpd\") pod \"ceilometer-0\" (UID: \"c974841f-21ea-433b-aa7d-5dae406fbb6f\") " pod="openstack/ceilometer-0" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.935880 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c974841f-21ea-433b-aa7d-5dae406fbb6f-scripts\") pod \"ceilometer-0\" (UID: \"c974841f-21ea-433b-aa7d-5dae406fbb6f\") " pod="openstack/ceilometer-0" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.935911 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c974841f-21ea-433b-aa7d-5dae406fbb6f-log-httpd\") pod \"ceilometer-0\" (UID: \"c974841f-21ea-433b-aa7d-5dae406fbb6f\") " pod="openstack/ceilometer-0" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.936457 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c974841f-21ea-433b-aa7d-5dae406fbb6f-log-httpd\") pod \"ceilometer-0\" (UID: \"c974841f-21ea-433b-aa7d-5dae406fbb6f\") " pod="openstack/ceilometer-0" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.936506 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c974841f-21ea-433b-aa7d-5dae406fbb6f-run-httpd\") pod \"ceilometer-0\" (UID: \"c974841f-21ea-433b-aa7d-5dae406fbb6f\") " pod="openstack/ceilometer-0" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.938883 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c974841f-21ea-433b-aa7d-5dae406fbb6f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c974841f-21ea-433b-aa7d-5dae406fbb6f\") " pod="openstack/ceilometer-0" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.939460 4772 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c974841f-21ea-433b-aa7d-5dae406fbb6f-config-data\") pod \"ceilometer-0\" (UID: \"c974841f-21ea-433b-aa7d-5dae406fbb6f\") " pod="openstack/ceilometer-0" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.940279 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c974841f-21ea-433b-aa7d-5dae406fbb6f-scripts\") pod \"ceilometer-0\" (UID: \"c974841f-21ea-433b-aa7d-5dae406fbb6f\") " pod="openstack/ceilometer-0" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.940390 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c974841f-21ea-433b-aa7d-5dae406fbb6f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c974841f-21ea-433b-aa7d-5dae406fbb6f\") " pod="openstack/ceilometer-0" Nov 22 11:00:44 crc kubenswrapper[4772]: I1122 11:00:44.955829 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prk4n\" (UniqueName: \"kubernetes.io/projected/c974841f-21ea-433b-aa7d-5dae406fbb6f-kube-api-access-prk4n\") pod \"ceilometer-0\" (UID: \"c974841f-21ea-433b-aa7d-5dae406fbb6f\") " pod="openstack/ceilometer-0" Nov 22 11:00:45 crc kubenswrapper[4772]: I1122 11:00:45.113514 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 11:00:45 crc kubenswrapper[4772]: I1122 11:00:45.422794 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17871b70-2c40-4198-bff5-51dd45433e3c" path="/var/lib/kubelet/pods/17871b70-2c40-4198-bff5-51dd45433e3c/volumes" Nov 22 11:00:45 crc kubenswrapper[4772]: I1122 11:00:45.575111 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 11:00:45 crc kubenswrapper[4772]: I1122 11:00:45.720848 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-7q2nv" event={"ID":"0fb39191-1ec6-4ea4-84d4-8c4dc36f1031","Type":"ContainerStarted","Data":"4f5a2bf36b26af1000f7d069151545130e46e9f7ab1f0766bf798a888b08980b"} Nov 22 11:00:45 crc kubenswrapper[4772]: I1122 11:00:45.724327 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c974841f-21ea-433b-aa7d-5dae406fbb6f","Type":"ContainerStarted","Data":"0f92118a4971f069d31cd06077d7758406c75a5b5e2f542afca974b5cce89b74"} Nov 22 11:00:46 crc kubenswrapper[4772]: I1122 11:00:46.756726 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c974841f-21ea-433b-aa7d-5dae406fbb6f","Type":"ContainerStarted","Data":"aca868597287b5defbb98a014476516d0b11b0989e12201ca0aec281733714ab"} Nov 22 11:00:47 crc kubenswrapper[4772]: I1122 11:00:47.768647 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c974841f-21ea-433b-aa7d-5dae406fbb6f","Type":"ContainerStarted","Data":"8ab3409b30662f2cb79565e51e60bdd982264c960e28df6ccdc6f64d4e1529f7"} Nov 22 11:00:47 crc kubenswrapper[4772]: I1122 11:00:47.769302 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c974841f-21ea-433b-aa7d-5dae406fbb6f","Type":"ContainerStarted","Data":"2a437fea0b3bd3ea5bec7379fb5a0e31d1f9ea53426806a9d738c261cbbeaf1b"} Nov 22 11:00:52 crc kubenswrapper[4772]: I1122 11:00:52.821262 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"c974841f-21ea-433b-aa7d-5dae406fbb6f","Type":"ContainerStarted","Data":"8fde901fd914927a5aec2d5c53060b84e4a71bc4242a3411694161206f249f08"} Nov 22 11:00:52 crc kubenswrapper[4772]: I1122 11:00:52.822989 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 22 11:00:52 crc kubenswrapper[4772]: I1122 11:00:52.824344 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-7q2nv" event={"ID":"0fb39191-1ec6-4ea4-84d4-8c4dc36f1031","Type":"ContainerStarted","Data":"4f80aca9ab925ac5f7c357f391fb695a8032c9580d4de9e838c68a35fdefcdc3"} Nov 22 11:00:52 crc kubenswrapper[4772]: I1122 11:00:52.849870 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.445289021 podStartE2EDuration="8.849847149s" podCreationTimestamp="2025-11-22 11:00:44 +0000 UTC" firstStartedPulling="2025-11-22 11:00:45.592974862 +0000 UTC m=+1365.832419356" lastFinishedPulling="2025-11-22 11:00:51.99753299 +0000 UTC m=+1372.236977484" observedRunningTime="2025-11-22 11:00:52.842837735 +0000 UTC m=+1373.082282229" watchObservedRunningTime="2025-11-22 11:00:52.849847149 +0000 UTC m=+1373.089291643" Nov 22 11:00:52 crc kubenswrapper[4772]: I1122 11:00:52.868131 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-7q2nv" podStartSLOduration=2.780018138 podStartE2EDuration="9.868113883s" podCreationTimestamp="2025-11-22 11:00:43 +0000 UTC" firstStartedPulling="2025-11-22 11:00:44.906586224 +0000 UTC m=+1365.146030718" lastFinishedPulling="2025-11-22 11:00:51.994681969 +0000 UTC m=+1372.234126463" observedRunningTime="2025-11-22 11:00:52.859745205 +0000 UTC m=+1373.099189699" watchObservedRunningTime="2025-11-22 11:00:52.868113883 +0000 UTC m=+1373.107558377" Nov 22 11:00:53 crc kubenswrapper[4772]: I1122 11:00:53.896503 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 11:00:53 crc kubenswrapper[4772]: I1122 11:00:53.897177 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="26cdd8cb-1271-4a41-83a7-782efd9a9aa7" containerName="glance-log" containerID="cri-o://f538511ac555dba1a8bdfb53b2373b5e77e1dd3f3ae4735c98d0a2e7f1e6e46c" gracePeriod=30 Nov 22 11:00:53 crc kubenswrapper[4772]: I1122 11:00:53.897249 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="26cdd8cb-1271-4a41-83a7-782efd9a9aa7" containerName="glance-httpd" containerID="cri-o://2e86c9ac6c57c96e25c3703e0436d75b21786c9a0735485a32d771f2e43a38d3" gracePeriod=30 Nov 22 11:00:54 crc kubenswrapper[4772]: I1122 11:00:54.843132 4772 generic.go:334] "Generic (PLEG): container finished" podID="26cdd8cb-1271-4a41-83a7-782efd9a9aa7" containerID="f538511ac555dba1a8bdfb53b2373b5e77e1dd3f3ae4735c98d0a2e7f1e6e46c" exitCode=143 Nov 22 11:00:54 crc kubenswrapper[4772]: I1122 11:00:54.843219 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"26cdd8cb-1271-4a41-83a7-782efd9a9aa7","Type":"ContainerDied","Data":"f538511ac555dba1a8bdfb53b2373b5e77e1dd3f3ae4735c98d0a2e7f1e6e46c"} Nov 22 11:00:55 crc kubenswrapper[4772]: I1122 11:00:55.330486 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 11:00:55 crc kubenswrapper[4772]: I1122 
11:00:55.330748 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="06f66afb-564e-442a-b833-2d6db747986f" containerName="glance-log" containerID="cri-o://14231a0f6ec640aa6c40dfaa0db95ae4ea80eaaa3fc4ca5b123ac6999477d1f3" gracePeriod=30 Nov 22 11:00:55 crc kubenswrapper[4772]: I1122 11:00:55.330815 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="06f66afb-564e-442a-b833-2d6db747986f" containerName="glance-httpd" containerID="cri-o://e510026a5b066cf92da22755c75a394db34caee61cb17079473b6dcce63ca369" gracePeriod=30 Nov 22 11:00:55 crc kubenswrapper[4772]: I1122 11:00:55.856565 4772 generic.go:334] "Generic (PLEG): container finished" podID="06f66afb-564e-442a-b833-2d6db747986f" containerID="14231a0f6ec640aa6c40dfaa0db95ae4ea80eaaa3fc4ca5b123ac6999477d1f3" exitCode=143 Nov 22 11:00:55 crc kubenswrapper[4772]: I1122 11:00:55.856615 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"06f66afb-564e-442a-b833-2d6db747986f","Type":"ContainerDied","Data":"14231a0f6ec640aa6c40dfaa0db95ae4ea80eaaa3fc4ca5b123ac6999477d1f3"} Nov 22 11:00:56 crc kubenswrapper[4772]: I1122 11:00:56.209393 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 11:00:56 crc kubenswrapper[4772]: I1122 11:00:56.209852 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c974841f-21ea-433b-aa7d-5dae406fbb6f" containerName="ceilometer-central-agent" containerID="cri-o://aca868597287b5defbb98a014476516d0b11b0989e12201ca0aec281733714ab" gracePeriod=30 Nov 22 11:00:56 crc kubenswrapper[4772]: I1122 11:00:56.209890 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c974841f-21ea-433b-aa7d-5dae406fbb6f" containerName="proxy-httpd" containerID="cri-o://8fde901fd914927a5aec2d5c53060b84e4a71bc4242a3411694161206f249f08" gracePeriod=30 Nov 22 11:00:56 crc kubenswrapper[4772]: I1122 11:00:56.209923 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c974841f-21ea-433b-aa7d-5dae406fbb6f" containerName="sg-core" containerID="cri-o://8ab3409b30662f2cb79565e51e60bdd982264c960e28df6ccdc6f64d4e1529f7" gracePeriod=30 Nov 22 11:00:56 crc kubenswrapper[4772]: I1122 11:00:56.209916 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c974841f-21ea-433b-aa7d-5dae406fbb6f" containerName="ceilometer-notification-agent" containerID="cri-o://2a437fea0b3bd3ea5bec7379fb5a0e31d1f9ea53426806a9d738c261cbbeaf1b" gracePeriod=30 Nov 22 11:00:56 crc kubenswrapper[4772]: I1122 11:00:56.869651 4772 generic.go:334] "Generic (PLEG): container finished" podID="c974841f-21ea-433b-aa7d-5dae406fbb6f" containerID="8fde901fd914927a5aec2d5c53060b84e4a71bc4242a3411694161206f249f08" exitCode=0 Nov 22 11:00:56 crc kubenswrapper[4772]: I1122 11:00:56.869992 4772 generic.go:334] "Generic (PLEG): container finished" podID="c974841f-21ea-433b-aa7d-5dae406fbb6f" containerID="8ab3409b30662f2cb79565e51e60bdd982264c960e28df6ccdc6f64d4e1529f7" exitCode=2 Nov 22 11:00:56 crc kubenswrapper[4772]: I1122 11:00:56.870007 4772 generic.go:334] "Generic (PLEG): container finished" podID="c974841f-21ea-433b-aa7d-5dae406fbb6f" containerID="aca868597287b5defbb98a014476516d0b11b0989e12201ca0aec281733714ab" 
exitCode=0 Nov 22 11:00:56 crc kubenswrapper[4772]: I1122 11:00:56.869728 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c974841f-21ea-433b-aa7d-5dae406fbb6f","Type":"ContainerDied","Data":"8fde901fd914927a5aec2d5c53060b84e4a71bc4242a3411694161206f249f08"} Nov 22 11:00:56 crc kubenswrapper[4772]: I1122 11:00:56.870063 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c974841f-21ea-433b-aa7d-5dae406fbb6f","Type":"ContainerDied","Data":"8ab3409b30662f2cb79565e51e60bdd982264c960e28df6ccdc6f64d4e1529f7"} Nov 22 11:00:56 crc kubenswrapper[4772]: I1122 11:00:56.870084 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c974841f-21ea-433b-aa7d-5dae406fbb6f","Type":"ContainerDied","Data":"aca868597287b5defbb98a014476516d0b11b0989e12201ca0aec281733714ab"} Nov 22 11:00:57 crc kubenswrapper[4772]: I1122 11:00:57.418413 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 11:00:57 crc kubenswrapper[4772]: I1122 11:00:57.524932 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 22 11:00:57 crc kubenswrapper[4772]: I1122 11:00:57.576657 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c974841f-21ea-433b-aa7d-5dae406fbb6f-scripts\") pod \"c974841f-21ea-433b-aa7d-5dae406fbb6f\" (UID: \"c974841f-21ea-433b-aa7d-5dae406fbb6f\") " Nov 22 11:00:57 crc kubenswrapper[4772]: I1122 11:00:57.576746 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prk4n\" (UniqueName: \"kubernetes.io/projected/c974841f-21ea-433b-aa7d-5dae406fbb6f-kube-api-access-prk4n\") pod \"c974841f-21ea-433b-aa7d-5dae406fbb6f\" (UID: \"c974841f-21ea-433b-aa7d-5dae406fbb6f\") " Nov 22 11:00:57 crc kubenswrapper[4772]: I1122 11:00:57.576777 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c974841f-21ea-433b-aa7d-5dae406fbb6f-run-httpd\") pod \"c974841f-21ea-433b-aa7d-5dae406fbb6f\" (UID: \"c974841f-21ea-433b-aa7d-5dae406fbb6f\") " Nov 22 11:00:57 crc kubenswrapper[4772]: I1122 11:00:57.576803 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c974841f-21ea-433b-aa7d-5dae406fbb6f-config-data\") pod \"c974841f-21ea-433b-aa7d-5dae406fbb6f\" (UID: \"c974841f-21ea-433b-aa7d-5dae406fbb6f\") " Nov 22 11:00:57 crc kubenswrapper[4772]: I1122 11:00:57.576894 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c974841f-21ea-433b-aa7d-5dae406fbb6f-sg-core-conf-yaml\") pod \"c974841f-21ea-433b-aa7d-5dae406fbb6f\" (UID: \"c974841f-21ea-433b-aa7d-5dae406fbb6f\") " Nov 22 11:00:57 crc kubenswrapper[4772]: I1122 11:00:57.576940 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c974841f-21ea-433b-aa7d-5dae406fbb6f-combined-ca-bundle\") pod \"c974841f-21ea-433b-aa7d-5dae406fbb6f\" (UID: \"c974841f-21ea-433b-aa7d-5dae406fbb6f\") " Nov 22 11:00:57 crc kubenswrapper[4772]: I1122 11:00:57.576992 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/c974841f-21ea-433b-aa7d-5dae406fbb6f-log-httpd\") pod \"c974841f-21ea-433b-aa7d-5dae406fbb6f\" (UID: \"c974841f-21ea-433b-aa7d-5dae406fbb6f\") " Nov 22 11:00:57 crc kubenswrapper[4772]: I1122 11:00:57.577592 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c974841f-21ea-433b-aa7d-5dae406fbb6f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c974841f-21ea-433b-aa7d-5dae406fbb6f" (UID: "c974841f-21ea-433b-aa7d-5dae406fbb6f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:00:57 crc kubenswrapper[4772]: I1122 11:00:57.578380 4772 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c974841f-21ea-433b-aa7d-5dae406fbb6f-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:57 crc kubenswrapper[4772]: I1122 11:00:57.578749 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c974841f-21ea-433b-aa7d-5dae406fbb6f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c974841f-21ea-433b-aa7d-5dae406fbb6f" (UID: "c974841f-21ea-433b-aa7d-5dae406fbb6f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:00:57 crc kubenswrapper[4772]: I1122 11:00:57.584531 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c974841f-21ea-433b-aa7d-5dae406fbb6f-kube-api-access-prk4n" (OuterVolumeSpecName: "kube-api-access-prk4n") pod "c974841f-21ea-433b-aa7d-5dae406fbb6f" (UID: "c974841f-21ea-433b-aa7d-5dae406fbb6f"). InnerVolumeSpecName "kube-api-access-prk4n". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:00:57 crc kubenswrapper[4772]: I1122 11:00:57.584684 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c974841f-21ea-433b-aa7d-5dae406fbb6f-scripts" (OuterVolumeSpecName: "scripts") pod "c974841f-21ea-433b-aa7d-5dae406fbb6f" (UID: "c974841f-21ea-433b-aa7d-5dae406fbb6f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:00:57 crc kubenswrapper[4772]: I1122 11:00:57.618675 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c974841f-21ea-433b-aa7d-5dae406fbb6f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c974841f-21ea-433b-aa7d-5dae406fbb6f" (UID: "c974841f-21ea-433b-aa7d-5dae406fbb6f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:00:57 crc kubenswrapper[4772]: I1122 11:00:57.671483 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c974841f-21ea-433b-aa7d-5dae406fbb6f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c974841f-21ea-433b-aa7d-5dae406fbb6f" (UID: "c974841f-21ea-433b-aa7d-5dae406fbb6f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:00:57 crc kubenswrapper[4772]: I1122 11:00:57.679642 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szcz2\" (UniqueName: \"kubernetes.io/projected/26cdd8cb-1271-4a41-83a7-782efd9a9aa7-kube-api-access-szcz2\") pod \"26cdd8cb-1271-4a41-83a7-782efd9a9aa7\" (UID: \"26cdd8cb-1271-4a41-83a7-782efd9a9aa7\") " Nov 22 11:00:57 crc kubenswrapper[4772]: I1122 11:00:57.679758 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26cdd8cb-1271-4a41-83a7-782efd9a9aa7-config-data\") pod \"26cdd8cb-1271-4a41-83a7-782efd9a9aa7\" (UID: \"26cdd8cb-1271-4a41-83a7-782efd9a9aa7\") " Nov 22 11:00:57 crc kubenswrapper[4772]: I1122 11:00:57.679778 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"26cdd8cb-1271-4a41-83a7-782efd9a9aa7\" (UID: \"26cdd8cb-1271-4a41-83a7-782efd9a9aa7\") " Nov 22 11:00:57 crc kubenswrapper[4772]: I1122 11:00:57.679818 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26cdd8cb-1271-4a41-83a7-782efd9a9aa7-logs\") pod \"26cdd8cb-1271-4a41-83a7-782efd9a9aa7\" (UID: \"26cdd8cb-1271-4a41-83a7-782efd9a9aa7\") " Nov 22 11:00:57 crc kubenswrapper[4772]: I1122 11:00:57.679875 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/26cdd8cb-1271-4a41-83a7-782efd9a9aa7-httpd-run\") pod \"26cdd8cb-1271-4a41-83a7-782efd9a9aa7\" (UID: \"26cdd8cb-1271-4a41-83a7-782efd9a9aa7\") " Nov 22 11:00:57 crc kubenswrapper[4772]: I1122 11:00:57.679976 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26cdd8cb-1271-4a41-83a7-782efd9a9aa7-combined-ca-bundle\") pod \"26cdd8cb-1271-4a41-83a7-782efd9a9aa7\" (UID: \"26cdd8cb-1271-4a41-83a7-782efd9a9aa7\") " Nov 22 11:00:57 crc kubenswrapper[4772]: I1122 11:00:57.680025 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/26cdd8cb-1271-4a41-83a7-782efd9a9aa7-public-tls-certs\") pod \"26cdd8cb-1271-4a41-83a7-782efd9a9aa7\" (UID: \"26cdd8cb-1271-4a41-83a7-782efd9a9aa7\") " Nov 22 11:00:57 crc kubenswrapper[4772]: I1122 11:00:57.680070 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26cdd8cb-1271-4a41-83a7-782efd9a9aa7-scripts\") pod \"26cdd8cb-1271-4a41-83a7-782efd9a9aa7\" (UID: \"26cdd8cb-1271-4a41-83a7-782efd9a9aa7\") " Nov 22 11:00:57 crc kubenswrapper[4772]: I1122 11:00:57.680353 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26cdd8cb-1271-4a41-83a7-782efd9a9aa7-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "26cdd8cb-1271-4a41-83a7-782efd9a9aa7" (UID: "26cdd8cb-1271-4a41-83a7-782efd9a9aa7"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:00:57 crc kubenswrapper[4772]: I1122 11:00:57.680793 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c974841f-21ea-433b-aa7d-5dae406fbb6f-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:57 crc kubenswrapper[4772]: I1122 11:00:57.680814 4772 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/26cdd8cb-1271-4a41-83a7-782efd9a9aa7-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:57 crc kubenswrapper[4772]: I1122 11:00:57.680827 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prk4n\" (UniqueName: \"kubernetes.io/projected/c974841f-21ea-433b-aa7d-5dae406fbb6f-kube-api-access-prk4n\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:57 crc kubenswrapper[4772]: I1122 11:00:57.680842 4772 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c974841f-21ea-433b-aa7d-5dae406fbb6f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:57 crc kubenswrapper[4772]: I1122 11:00:57.680855 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c974841f-21ea-433b-aa7d-5dae406fbb6f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:57 crc kubenswrapper[4772]: I1122 11:00:57.680867 4772 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c974841f-21ea-433b-aa7d-5dae406fbb6f-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:57 crc kubenswrapper[4772]: I1122 11:00:57.681003 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26cdd8cb-1271-4a41-83a7-782efd9a9aa7-logs" (OuterVolumeSpecName: "logs") pod "26cdd8cb-1271-4a41-83a7-782efd9a9aa7" (UID: "26cdd8cb-1271-4a41-83a7-782efd9a9aa7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:00:57 crc kubenswrapper[4772]: I1122 11:00:57.684820 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26cdd8cb-1271-4a41-83a7-782efd9a9aa7-scripts" (OuterVolumeSpecName: "scripts") pod "26cdd8cb-1271-4a41-83a7-782efd9a9aa7" (UID: "26cdd8cb-1271-4a41-83a7-782efd9a9aa7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:00:57 crc kubenswrapper[4772]: I1122 11:00:57.685143 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "26cdd8cb-1271-4a41-83a7-782efd9a9aa7" (UID: "26cdd8cb-1271-4a41-83a7-782efd9a9aa7"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 22 11:00:57 crc kubenswrapper[4772]: I1122 11:00:57.685323 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26cdd8cb-1271-4a41-83a7-782efd9a9aa7-kube-api-access-szcz2" (OuterVolumeSpecName: "kube-api-access-szcz2") pod "26cdd8cb-1271-4a41-83a7-782efd9a9aa7" (UID: "26cdd8cb-1271-4a41-83a7-782efd9a9aa7"). InnerVolumeSpecName "kube-api-access-szcz2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:00:57 crc kubenswrapper[4772]: I1122 11:00:57.716766 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26cdd8cb-1271-4a41-83a7-782efd9a9aa7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "26cdd8cb-1271-4a41-83a7-782efd9a9aa7" (UID: "26cdd8cb-1271-4a41-83a7-782efd9a9aa7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:00:57 crc kubenswrapper[4772]: I1122 11:00:57.747981 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c974841f-21ea-433b-aa7d-5dae406fbb6f-config-data" (OuterVolumeSpecName: "config-data") pod "c974841f-21ea-433b-aa7d-5dae406fbb6f" (UID: "c974841f-21ea-433b-aa7d-5dae406fbb6f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:00:57 crc kubenswrapper[4772]: I1122 11:00:57.748732 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26cdd8cb-1271-4a41-83a7-782efd9a9aa7-config-data" (OuterVolumeSpecName: "config-data") pod "26cdd8cb-1271-4a41-83a7-782efd9a9aa7" (UID: "26cdd8cb-1271-4a41-83a7-782efd9a9aa7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:00:57 crc kubenswrapper[4772]: I1122 11:00:57.750190 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26cdd8cb-1271-4a41-83a7-782efd9a9aa7-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "26cdd8cb-1271-4a41-83a7-782efd9a9aa7" (UID: "26cdd8cb-1271-4a41-83a7-782efd9a9aa7"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:00:57 crc kubenswrapper[4772]: I1122 11:00:57.782710 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szcz2\" (UniqueName: \"kubernetes.io/projected/26cdd8cb-1271-4a41-83a7-782efd9a9aa7-kube-api-access-szcz2\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:57 crc kubenswrapper[4772]: I1122 11:00:57.783157 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26cdd8cb-1271-4a41-83a7-782efd9a9aa7-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:57 crc kubenswrapper[4772]: I1122 11:00:57.783261 4772 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Nov 22 11:00:57 crc kubenswrapper[4772]: I1122 11:00:57.783324 4772 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26cdd8cb-1271-4a41-83a7-782efd9a9aa7-logs\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:57 crc kubenswrapper[4772]: I1122 11:00:57.783381 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c974841f-21ea-433b-aa7d-5dae406fbb6f-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:57 crc kubenswrapper[4772]: I1122 11:00:57.783434 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26cdd8cb-1271-4a41-83a7-782efd9a9aa7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:57 crc kubenswrapper[4772]: I1122 11:00:57.783496 4772 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/26cdd8cb-1271-4a41-83a7-782efd9a9aa7-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:57 crc kubenswrapper[4772]: I1122 11:00:57.783556 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26cdd8cb-1271-4a41-83a7-782efd9a9aa7-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:57 crc kubenswrapper[4772]: I1122 11:00:57.805950 4772 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Nov 22 11:00:57 crc kubenswrapper[4772]: I1122 11:00:57.880010 4772 generic.go:334] "Generic (PLEG): container finished" podID="26cdd8cb-1271-4a41-83a7-782efd9a9aa7" containerID="2e86c9ac6c57c96e25c3703e0436d75b21786c9a0735485a32d771f2e43a38d3" exitCode=0 Nov 22 11:00:57 crc kubenswrapper[4772]: I1122 11:00:57.880122 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 22 11:00:57 crc kubenswrapper[4772]: I1122 11:00:57.880336 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"26cdd8cb-1271-4a41-83a7-782efd9a9aa7","Type":"ContainerDied","Data":"2e86c9ac6c57c96e25c3703e0436d75b21786c9a0735485a32d771f2e43a38d3"} Nov 22 11:00:57 crc kubenswrapper[4772]: I1122 11:00:57.880656 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"26cdd8cb-1271-4a41-83a7-782efd9a9aa7","Type":"ContainerDied","Data":"ae3e325659dd192f8b1972ef3f3799b3f59c2dff4de08cdec9cac5accfc7d87c"} Nov 22 11:00:57 crc kubenswrapper[4772]: I1122 11:00:57.880679 4772 scope.go:117] "RemoveContainer" containerID="2e86c9ac6c57c96e25c3703e0436d75b21786c9a0735485a32d771f2e43a38d3" Nov 22 11:00:57 crc kubenswrapper[4772]: I1122 11:00:57.884978 4772 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:57 crc kubenswrapper[4772]: I1122 11:00:57.886757 4772 generic.go:334] "Generic (PLEG): container finished" podID="c974841f-21ea-433b-aa7d-5dae406fbb6f" containerID="2a437fea0b3bd3ea5bec7379fb5a0e31d1f9ea53426806a9d738c261cbbeaf1b" exitCode=0 Nov 22 11:00:57 crc kubenswrapper[4772]: I1122 11:00:57.886801 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 11:00:57 crc kubenswrapper[4772]: I1122 11:00:57.886835 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c974841f-21ea-433b-aa7d-5dae406fbb6f","Type":"ContainerDied","Data":"2a437fea0b3bd3ea5bec7379fb5a0e31d1f9ea53426806a9d738c261cbbeaf1b"} Nov 22 11:00:57 crc kubenswrapper[4772]: I1122 11:00:57.887098 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c974841f-21ea-433b-aa7d-5dae406fbb6f","Type":"ContainerDied","Data":"0f92118a4971f069d31cd06077d7758406c75a5b5e2f542afca974b5cce89b74"} Nov 22 11:00:57 crc kubenswrapper[4772]: I1122 11:00:57.914432 4772 scope.go:117] "RemoveContainer" containerID="f538511ac555dba1a8bdfb53b2373b5e77e1dd3f3ae4735c98d0a2e7f1e6e46c" Nov 22 11:00:57 crc kubenswrapper[4772]: I1122 11:00:57.961870 4772 scope.go:117] "RemoveContainer" containerID="2e86c9ac6c57c96e25c3703e0436d75b21786c9a0735485a32d771f2e43a38d3" Nov 22 11:00:57 crc kubenswrapper[4772]: I1122 11:00:57.970110 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 11:00:57 crc kubenswrapper[4772]: E1122 11:00:57.975202 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e86c9ac6c57c96e25c3703e0436d75b21786c9a0735485a32d771f2e43a38d3\": container with ID starting with 2e86c9ac6c57c96e25c3703e0436d75b21786c9a0735485a32d771f2e43a38d3 not found: ID does not exist" containerID="2e86c9ac6c57c96e25c3703e0436d75b21786c9a0735485a32d771f2e43a38d3" Nov 22 11:00:57 crc kubenswrapper[4772]: I1122 11:00:57.977478 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e86c9ac6c57c96e25c3703e0436d75b21786c9a0735485a32d771f2e43a38d3"} err="failed to get container status \"2e86c9ac6c57c96e25c3703e0436d75b21786c9a0735485a32d771f2e43a38d3\": rpc error: code = NotFound desc = could not find container \"2e86c9ac6c57c96e25c3703e0436d75b21786c9a0735485a32d771f2e43a38d3\": container with ID starting with 2e86c9ac6c57c96e25c3703e0436d75b21786c9a0735485a32d771f2e43a38d3 not found: ID does not exist" Nov 22 11:00:57 crc kubenswrapper[4772]: I1122 11:00:57.977515 4772 scope.go:117] "RemoveContainer" containerID="f538511ac555dba1a8bdfb53b2373b5e77e1dd3f3ae4735c98d0a2e7f1e6e46c" Nov 22 11:00:57 crc kubenswrapper[4772]: E1122 11:00:57.978218 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f538511ac555dba1a8bdfb53b2373b5e77e1dd3f3ae4735c98d0a2e7f1e6e46c\": container with ID starting with f538511ac555dba1a8bdfb53b2373b5e77e1dd3f3ae4735c98d0a2e7f1e6e46c not found: ID does not exist" containerID="f538511ac555dba1a8bdfb53b2373b5e77e1dd3f3ae4735c98d0a2e7f1e6e46c" Nov 22 11:00:57 crc kubenswrapper[4772]: I1122 11:00:57.978323 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f538511ac555dba1a8bdfb53b2373b5e77e1dd3f3ae4735c98d0a2e7f1e6e46c"} err="failed to get container status \"f538511ac555dba1a8bdfb53b2373b5e77e1dd3f3ae4735c98d0a2e7f1e6e46c\": rpc error: code = NotFound desc = could not find container \"f538511ac555dba1a8bdfb53b2373b5e77e1dd3f3ae4735c98d0a2e7f1e6e46c\": container with ID starting with f538511ac555dba1a8bdfb53b2373b5e77e1dd3f3ae4735c98d0a2e7f1e6e46c not found: ID does not exist" Nov 22 11:00:57 crc kubenswrapper[4772]: I1122 11:00:57.978360 4772 scope.go:117] 
"RemoveContainer" containerID="8fde901fd914927a5aec2d5c53060b84e4a71bc4242a3411694161206f249f08" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.009113 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.019165 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 11:00:58 crc kubenswrapper[4772]: E1122 11:00:58.019693 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26cdd8cb-1271-4a41-83a7-782efd9a9aa7" containerName="glance-httpd" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.019717 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="26cdd8cb-1271-4a41-83a7-782efd9a9aa7" containerName="glance-httpd" Nov 22 11:00:58 crc kubenswrapper[4772]: E1122 11:00:58.019735 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c974841f-21ea-433b-aa7d-5dae406fbb6f" containerName="ceilometer-central-agent" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.019745 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="c974841f-21ea-433b-aa7d-5dae406fbb6f" containerName="ceilometer-central-agent" Nov 22 11:00:58 crc kubenswrapper[4772]: E1122 11:00:58.019775 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c974841f-21ea-433b-aa7d-5dae406fbb6f" containerName="proxy-httpd" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.019785 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="c974841f-21ea-433b-aa7d-5dae406fbb6f" containerName="proxy-httpd" Nov 22 11:00:58 crc kubenswrapper[4772]: E1122 11:00:58.019798 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c974841f-21ea-433b-aa7d-5dae406fbb6f" containerName="sg-core" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.019806 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="c974841f-21ea-433b-aa7d-5dae406fbb6f" containerName="sg-core" Nov 22 11:00:58 crc kubenswrapper[4772]: E1122 11:00:58.019830 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c974841f-21ea-433b-aa7d-5dae406fbb6f" containerName="ceilometer-notification-agent" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.019840 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="c974841f-21ea-433b-aa7d-5dae406fbb6f" containerName="ceilometer-notification-agent" Nov 22 11:00:58 crc kubenswrapper[4772]: E1122 11:00:58.019857 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26cdd8cb-1271-4a41-83a7-782efd9a9aa7" containerName="glance-log" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.019868 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="26cdd8cb-1271-4a41-83a7-782efd9a9aa7" containerName="glance-log" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.021334 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="c974841f-21ea-433b-aa7d-5dae406fbb6f" containerName="sg-core" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.021363 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="c974841f-21ea-433b-aa7d-5dae406fbb6f" containerName="ceilometer-central-agent" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.021379 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="c974841f-21ea-433b-aa7d-5dae406fbb6f" containerName="proxy-httpd" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.021392 4772 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="c974841f-21ea-433b-aa7d-5dae406fbb6f" containerName="ceilometer-notification-agent" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.021409 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="26cdd8cb-1271-4a41-83a7-782efd9a9aa7" containerName="glance-log" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.021422 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="26cdd8cb-1271-4a41-83a7-782efd9a9aa7" containerName="glance-httpd" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.022925 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.027237 4772 scope.go:117] "RemoveContainer" containerID="8ab3409b30662f2cb79565e51e60bdd982264c960e28df6ccdc6f64d4e1529f7" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.029408 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.032807 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.037093 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.049531 4772 scope.go:117] "RemoveContainer" containerID="2a437fea0b3bd3ea5bec7379fb5a0e31d1f9ea53426806a9d738c261cbbeaf1b" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.069182 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.082118 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.092273 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.095616 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.101492 4772 scope.go:117] "RemoveContainer" containerID="aca868597287b5defbb98a014476516d0b11b0989e12201ca0aec281733714ab" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.102012 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.102111 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.102306 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.170337 4772 scope.go:117] "RemoveContainer" containerID="8fde901fd914927a5aec2d5c53060b84e4a71bc4242a3411694161206f249f08" Nov 22 11:00:58 crc kubenswrapper[4772]: E1122 11:00:58.170988 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fde901fd914927a5aec2d5c53060b84e4a71bc4242a3411694161206f249f08\": container with ID starting with 8fde901fd914927a5aec2d5c53060b84e4a71bc4242a3411694161206f249f08 not found: ID does not exist" containerID="8fde901fd914927a5aec2d5c53060b84e4a71bc4242a3411694161206f249f08" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.171059 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fde901fd914927a5aec2d5c53060b84e4a71bc4242a3411694161206f249f08"} err="failed to get container status \"8fde901fd914927a5aec2d5c53060b84e4a71bc4242a3411694161206f249f08\": rpc error: code = NotFound desc = could not find container \"8fde901fd914927a5aec2d5c53060b84e4a71bc4242a3411694161206f249f08\": container with ID starting with 8fde901fd914927a5aec2d5c53060b84e4a71bc4242a3411694161206f249f08 not found: ID does not exist" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.171093 4772 scope.go:117] "RemoveContainer" containerID="8ab3409b30662f2cb79565e51e60bdd982264c960e28df6ccdc6f64d4e1529f7" Nov 22 11:00:58 crc kubenswrapper[4772]: E1122 11:00:58.171446 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ab3409b30662f2cb79565e51e60bdd982264c960e28df6ccdc6f64d4e1529f7\": container with ID starting with 8ab3409b30662f2cb79565e51e60bdd982264c960e28df6ccdc6f64d4e1529f7 not found: ID does not exist" containerID="8ab3409b30662f2cb79565e51e60bdd982264c960e28df6ccdc6f64d4e1529f7" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.171502 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ab3409b30662f2cb79565e51e60bdd982264c960e28df6ccdc6f64d4e1529f7"} err="failed to get container status \"8ab3409b30662f2cb79565e51e60bdd982264c960e28df6ccdc6f64d4e1529f7\": rpc error: code = NotFound desc = could not find container \"8ab3409b30662f2cb79565e51e60bdd982264c960e28df6ccdc6f64d4e1529f7\": container with ID starting with 8ab3409b30662f2cb79565e51e60bdd982264c960e28df6ccdc6f64d4e1529f7 not found: ID does not exist" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.171539 4772 scope.go:117] "RemoveContainer" containerID="2a437fea0b3bd3ea5bec7379fb5a0e31d1f9ea53426806a9d738c261cbbeaf1b" Nov 22 11:00:58 crc kubenswrapper[4772]: E1122 11:00:58.171846 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"2a437fea0b3bd3ea5bec7379fb5a0e31d1f9ea53426806a9d738c261cbbeaf1b\": container with ID starting with 2a437fea0b3bd3ea5bec7379fb5a0e31d1f9ea53426806a9d738c261cbbeaf1b not found: ID does not exist" containerID="2a437fea0b3bd3ea5bec7379fb5a0e31d1f9ea53426806a9d738c261cbbeaf1b" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.171882 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a437fea0b3bd3ea5bec7379fb5a0e31d1f9ea53426806a9d738c261cbbeaf1b"} err="failed to get container status \"2a437fea0b3bd3ea5bec7379fb5a0e31d1f9ea53426806a9d738c261cbbeaf1b\": rpc error: code = NotFound desc = could not find container \"2a437fea0b3bd3ea5bec7379fb5a0e31d1f9ea53426806a9d738c261cbbeaf1b\": container with ID starting with 2a437fea0b3bd3ea5bec7379fb5a0e31d1f9ea53426806a9d738c261cbbeaf1b not found: ID does not exist" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.171904 4772 scope.go:117] "RemoveContainer" containerID="aca868597287b5defbb98a014476516d0b11b0989e12201ca0aec281733714ab" Nov 22 11:00:58 crc kubenswrapper[4772]: E1122 11:00:58.172221 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aca868597287b5defbb98a014476516d0b11b0989e12201ca0aec281733714ab\": container with ID starting with aca868597287b5defbb98a014476516d0b11b0989e12201ca0aec281733714ab not found: ID does not exist" containerID="aca868597287b5defbb98a014476516d0b11b0989e12201ca0aec281733714ab" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.172264 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aca868597287b5defbb98a014476516d0b11b0989e12201ca0aec281733714ab"} err="failed to get container status \"aca868597287b5defbb98a014476516d0b11b0989e12201ca0aec281733714ab\": rpc error: code = NotFound desc = could not find container \"aca868597287b5defbb98a014476516d0b11b0989e12201ca0aec281733714ab\": container with ID starting with aca868597287b5defbb98a014476516d0b11b0989e12201ca0aec281733714ab not found: ID does not exist" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.195425 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/492839a5-207f-4770-9335-1117c1c33fe7-run-httpd\") pod \"ceilometer-0\" (UID: \"492839a5-207f-4770-9335-1117c1c33fe7\") " pod="openstack/ceilometer-0" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.195540 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdt7g\" (UniqueName: \"kubernetes.io/projected/14ed2945-ef18-49de-9c18-679e011d3df5-kube-api-access-cdt7g\") pod \"glance-default-external-api-0\" (UID: \"14ed2945-ef18-49de-9c18-679e011d3df5\") " pod="openstack/glance-default-external-api-0" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.195695 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/492839a5-207f-4770-9335-1117c1c33fe7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"492839a5-207f-4770-9335-1117c1c33fe7\") " pod="openstack/ceilometer-0" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.195753 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/492839a5-207f-4770-9335-1117c1c33fe7-scripts\") pod 
\"ceilometer-0\" (UID: \"492839a5-207f-4770-9335-1117c1c33fe7\") " pod="openstack/ceilometer-0" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.195788 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"14ed2945-ef18-49de-9c18-679e011d3df5\") " pod="openstack/glance-default-external-api-0" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.195814 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/492839a5-207f-4770-9335-1117c1c33fe7-log-httpd\") pod \"ceilometer-0\" (UID: \"492839a5-207f-4770-9335-1117c1c33fe7\") " pod="openstack/ceilometer-0" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.195833 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/14ed2945-ef18-49de-9c18-679e011d3df5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"14ed2945-ef18-49de-9c18-679e011d3df5\") " pod="openstack/glance-default-external-api-0" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.195944 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14ed2945-ef18-49de-9c18-679e011d3df5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"14ed2945-ef18-49de-9c18-679e011d3df5\") " pod="openstack/glance-default-external-api-0" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.195975 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/492839a5-207f-4770-9335-1117c1c33fe7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"492839a5-207f-4770-9335-1117c1c33fe7\") " pod="openstack/ceilometer-0" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.196149 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/492839a5-207f-4770-9335-1117c1c33fe7-config-data\") pod \"ceilometer-0\" (UID: \"492839a5-207f-4770-9335-1117c1c33fe7\") " pod="openstack/ceilometer-0" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.196182 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qqps\" (UniqueName: \"kubernetes.io/projected/492839a5-207f-4770-9335-1117c1c33fe7-kube-api-access-8qqps\") pod \"ceilometer-0\" (UID: \"492839a5-207f-4770-9335-1117c1c33fe7\") " pod="openstack/ceilometer-0" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.196199 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/14ed2945-ef18-49de-9c18-679e011d3df5-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"14ed2945-ef18-49de-9c18-679e011d3df5\") " pod="openstack/glance-default-external-api-0" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.196218 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14ed2945-ef18-49de-9c18-679e011d3df5-scripts\") pod \"glance-default-external-api-0\" (UID: \"14ed2945-ef18-49de-9c18-679e011d3df5\") 
" pod="openstack/glance-default-external-api-0" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.196263 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14ed2945-ef18-49de-9c18-679e011d3df5-logs\") pod \"glance-default-external-api-0\" (UID: \"14ed2945-ef18-49de-9c18-679e011d3df5\") " pod="openstack/glance-default-external-api-0" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.196308 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14ed2945-ef18-49de-9c18-679e011d3df5-config-data\") pod \"glance-default-external-api-0\" (UID: \"14ed2945-ef18-49de-9c18-679e011d3df5\") " pod="openstack/glance-default-external-api-0" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.297774 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/492839a5-207f-4770-9335-1117c1c33fe7-config-data\") pod \"ceilometer-0\" (UID: \"492839a5-207f-4770-9335-1117c1c33fe7\") " pod="openstack/ceilometer-0" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.297830 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qqps\" (UniqueName: \"kubernetes.io/projected/492839a5-207f-4770-9335-1117c1c33fe7-kube-api-access-8qqps\") pod \"ceilometer-0\" (UID: \"492839a5-207f-4770-9335-1117c1c33fe7\") " pod="openstack/ceilometer-0" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.297848 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/14ed2945-ef18-49de-9c18-679e011d3df5-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"14ed2945-ef18-49de-9c18-679e011d3df5\") " pod="openstack/glance-default-external-api-0" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.297867 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14ed2945-ef18-49de-9c18-679e011d3df5-scripts\") pod \"glance-default-external-api-0\" (UID: \"14ed2945-ef18-49de-9c18-679e011d3df5\") " pod="openstack/glance-default-external-api-0" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.297893 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14ed2945-ef18-49de-9c18-679e011d3df5-logs\") pod \"glance-default-external-api-0\" (UID: \"14ed2945-ef18-49de-9c18-679e011d3df5\") " pod="openstack/glance-default-external-api-0" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.297916 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14ed2945-ef18-49de-9c18-679e011d3df5-config-data\") pod \"glance-default-external-api-0\" (UID: \"14ed2945-ef18-49de-9c18-679e011d3df5\") " pod="openstack/glance-default-external-api-0" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.297963 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/492839a5-207f-4770-9335-1117c1c33fe7-run-httpd\") pod \"ceilometer-0\" (UID: \"492839a5-207f-4770-9335-1117c1c33fe7\") " pod="openstack/ceilometer-0" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.297987 4772 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-cdt7g\" (UniqueName: \"kubernetes.io/projected/14ed2945-ef18-49de-9c18-679e011d3df5-kube-api-access-cdt7g\") pod \"glance-default-external-api-0\" (UID: \"14ed2945-ef18-49de-9c18-679e011d3df5\") " pod="openstack/glance-default-external-api-0" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.298015 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/492839a5-207f-4770-9335-1117c1c33fe7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"492839a5-207f-4770-9335-1117c1c33fe7\") " pod="openstack/ceilometer-0" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.298032 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/492839a5-207f-4770-9335-1117c1c33fe7-scripts\") pod \"ceilometer-0\" (UID: \"492839a5-207f-4770-9335-1117c1c33fe7\") " pod="openstack/ceilometer-0" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.298069 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"14ed2945-ef18-49de-9c18-679e011d3df5\") " pod="openstack/glance-default-external-api-0" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.298088 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/492839a5-207f-4770-9335-1117c1c33fe7-log-httpd\") pod \"ceilometer-0\" (UID: \"492839a5-207f-4770-9335-1117c1c33fe7\") " pod="openstack/ceilometer-0" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.298114 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/14ed2945-ef18-49de-9c18-679e011d3df5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"14ed2945-ef18-49de-9c18-679e011d3df5\") " pod="openstack/glance-default-external-api-0" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.298177 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14ed2945-ef18-49de-9c18-679e011d3df5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"14ed2945-ef18-49de-9c18-679e011d3df5\") " pod="openstack/glance-default-external-api-0" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.298198 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/492839a5-207f-4770-9335-1117c1c33fe7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"492839a5-207f-4770-9335-1117c1c33fe7\") " pod="openstack/ceilometer-0" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.299026 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14ed2945-ef18-49de-9c18-679e011d3df5-logs\") pod \"glance-default-external-api-0\" (UID: \"14ed2945-ef18-49de-9c18-679e011d3df5\") " pod="openstack/glance-default-external-api-0" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.299032 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/14ed2945-ef18-49de-9c18-679e011d3df5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"14ed2945-ef18-49de-9c18-679e011d3df5\") " 
pod="openstack/glance-default-external-api-0" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.299068 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/492839a5-207f-4770-9335-1117c1c33fe7-log-httpd\") pod \"ceilometer-0\" (UID: \"492839a5-207f-4770-9335-1117c1c33fe7\") " pod="openstack/ceilometer-0" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.299408 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/492839a5-207f-4770-9335-1117c1c33fe7-run-httpd\") pod \"ceilometer-0\" (UID: \"492839a5-207f-4770-9335-1117c1c33fe7\") " pod="openstack/ceilometer-0" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.299454 4772 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"14ed2945-ef18-49de-9c18-679e011d3df5\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.302405 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/492839a5-207f-4770-9335-1117c1c33fe7-scripts\") pod \"ceilometer-0\" (UID: \"492839a5-207f-4770-9335-1117c1c33fe7\") " pod="openstack/ceilometer-0" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.302834 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14ed2945-ef18-49de-9c18-679e011d3df5-scripts\") pod \"glance-default-external-api-0\" (UID: \"14ed2945-ef18-49de-9c18-679e011d3df5\") " pod="openstack/glance-default-external-api-0" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.302868 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14ed2945-ef18-49de-9c18-679e011d3df5-config-data\") pod \"glance-default-external-api-0\" (UID: \"14ed2945-ef18-49de-9c18-679e011d3df5\") " pod="openstack/glance-default-external-api-0" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.303349 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14ed2945-ef18-49de-9c18-679e011d3df5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"14ed2945-ef18-49de-9c18-679e011d3df5\") " pod="openstack/glance-default-external-api-0" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.303582 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/14ed2945-ef18-49de-9c18-679e011d3df5-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"14ed2945-ef18-49de-9c18-679e011d3df5\") " pod="openstack/glance-default-external-api-0" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.303730 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/492839a5-207f-4770-9335-1117c1c33fe7-config-data\") pod \"ceilometer-0\" (UID: \"492839a5-207f-4770-9335-1117c1c33fe7\") " pod="openstack/ceilometer-0" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.305217 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/492839a5-207f-4770-9335-1117c1c33fe7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"492839a5-207f-4770-9335-1117c1c33fe7\") " pod="openstack/ceilometer-0" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.318825 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/492839a5-207f-4770-9335-1117c1c33fe7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"492839a5-207f-4770-9335-1117c1c33fe7\") " pod="openstack/ceilometer-0" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.319941 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdt7g\" (UniqueName: \"kubernetes.io/projected/14ed2945-ef18-49de-9c18-679e011d3df5-kube-api-access-cdt7g\") pod \"glance-default-external-api-0\" (UID: \"14ed2945-ef18-49de-9c18-679e011d3df5\") " pod="openstack/glance-default-external-api-0" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.327543 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qqps\" (UniqueName: \"kubernetes.io/projected/492839a5-207f-4770-9335-1117c1c33fe7-kube-api-access-8qqps\") pod \"ceilometer-0\" (UID: \"492839a5-207f-4770-9335-1117c1c33fe7\") " pod="openstack/ceilometer-0" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.347519 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"14ed2945-ef18-49de-9c18-679e011d3df5\") " pod="openstack/glance-default-external-api-0" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.355025 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.429334 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.904182 4772 generic.go:334] "Generic (PLEG): container finished" podID="06f66afb-564e-442a-b833-2d6db747986f" containerID="e510026a5b066cf92da22755c75a394db34caee61cb17079473b6dcce63ca369" exitCode=0 Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.904291 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"06f66afb-564e-442a-b833-2d6db747986f","Type":"ContainerDied","Data":"e510026a5b066cf92da22755c75a394db34caee61cb17079473b6dcce63ca369"} Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.926609 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 11:00:58 crc kubenswrapper[4772]: I1122 11:00:58.987156 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 11:00:59 crc kubenswrapper[4772]: I1122 11:00:59.441958 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26cdd8cb-1271-4a41-83a7-782efd9a9aa7" path="/var/lib/kubelet/pods/26cdd8cb-1271-4a41-83a7-782efd9a9aa7/volumes" Nov 22 11:00:59 crc kubenswrapper[4772]: I1122 11:00:59.443194 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c974841f-21ea-433b-aa7d-5dae406fbb6f" path="/var/lib/kubelet/pods/c974841f-21ea-433b-aa7d-5dae406fbb6f/volumes" Nov 22 11:00:59 crc kubenswrapper[4772]: I1122 11:00:59.748358 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 11:00:59 crc kubenswrapper[4772]: I1122 11:00:59.828322 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06f66afb-564e-442a-b833-2d6db747986f-scripts\") pod \"06f66afb-564e-442a-b833-2d6db747986f\" (UID: \"06f66afb-564e-442a-b833-2d6db747986f\") " Nov 22 11:00:59 crc kubenswrapper[4772]: I1122 11:00:59.828375 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06f66afb-564e-442a-b833-2d6db747986f-logs\") pod \"06f66afb-564e-442a-b833-2d6db747986f\" (UID: \"06f66afb-564e-442a-b833-2d6db747986f\") " Nov 22 11:00:59 crc kubenswrapper[4772]: I1122 11:00:59.828515 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06f66afb-564e-442a-b833-2d6db747986f-config-data\") pod \"06f66afb-564e-442a-b833-2d6db747986f\" (UID: \"06f66afb-564e-442a-b833-2d6db747986f\") " Nov 22 11:00:59 crc kubenswrapper[4772]: I1122 11:00:59.828539 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"06f66afb-564e-442a-b833-2d6db747986f\" (UID: \"06f66afb-564e-442a-b833-2d6db747986f\") " Nov 22 11:00:59 crc kubenswrapper[4772]: I1122 11:00:59.828567 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06f66afb-564e-442a-b833-2d6db747986f-combined-ca-bundle\") pod \"06f66afb-564e-442a-b833-2d6db747986f\" (UID: \"06f66afb-564e-442a-b833-2d6db747986f\") " Nov 22 11:00:59 crc kubenswrapper[4772]: I1122 11:00:59.828616 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/06f66afb-564e-442a-b833-2d6db747986f-internal-tls-certs\") pod \"06f66afb-564e-442a-b833-2d6db747986f\" (UID: \"06f66afb-564e-442a-b833-2d6db747986f\") " Nov 22 11:00:59 crc kubenswrapper[4772]: I1122 11:00:59.828652 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/06f66afb-564e-442a-b833-2d6db747986f-httpd-run\") pod \"06f66afb-564e-442a-b833-2d6db747986f\" (UID: \"06f66afb-564e-442a-b833-2d6db747986f\") " Nov 22 11:00:59 crc kubenswrapper[4772]: I1122 11:00:59.828708 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9tx5\" (UniqueName: \"kubernetes.io/projected/06f66afb-564e-442a-b833-2d6db747986f-kube-api-access-l9tx5\") pod \"06f66afb-564e-442a-b833-2d6db747986f\" (UID: \"06f66afb-564e-442a-b833-2d6db747986f\") " Nov 22 11:00:59 crc kubenswrapper[4772]: I1122 11:00:59.834350 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06f66afb-564e-442a-b833-2d6db747986f-logs" (OuterVolumeSpecName: "logs") pod "06f66afb-564e-442a-b833-2d6db747986f" (UID: "06f66afb-564e-442a-b833-2d6db747986f"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:00:59 crc kubenswrapper[4772]: I1122 11:00:59.835195 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "06f66afb-564e-442a-b833-2d6db747986f" (UID: "06f66afb-564e-442a-b833-2d6db747986f"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 22 11:00:59 crc kubenswrapper[4772]: I1122 11:00:59.838340 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06f66afb-564e-442a-b833-2d6db747986f-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "06f66afb-564e-442a-b833-2d6db747986f" (UID: "06f66afb-564e-442a-b833-2d6db747986f"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:00:59 crc kubenswrapper[4772]: I1122 11:00:59.837683 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06f66afb-564e-442a-b833-2d6db747986f-kube-api-access-l9tx5" (OuterVolumeSpecName: "kube-api-access-l9tx5") pod "06f66afb-564e-442a-b833-2d6db747986f" (UID: "06f66afb-564e-442a-b833-2d6db747986f"). InnerVolumeSpecName "kube-api-access-l9tx5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:00:59 crc kubenswrapper[4772]: I1122 11:00:59.841961 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06f66afb-564e-442a-b833-2d6db747986f-scripts" (OuterVolumeSpecName: "scripts") pod "06f66afb-564e-442a-b833-2d6db747986f" (UID: "06f66afb-564e-442a-b833-2d6db747986f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:00:59 crc kubenswrapper[4772]: I1122 11:00:59.881562 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06f66afb-564e-442a-b833-2d6db747986f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "06f66afb-564e-442a-b833-2d6db747986f" (UID: "06f66afb-564e-442a-b833-2d6db747986f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:00:59 crc kubenswrapper[4772]: I1122 11:00:59.923300 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06f66afb-564e-442a-b833-2d6db747986f-config-data" (OuterVolumeSpecName: "config-data") pod "06f66afb-564e-442a-b833-2d6db747986f" (UID: "06f66afb-564e-442a-b833-2d6db747986f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:00:59 crc kubenswrapper[4772]: I1122 11:00:59.926641 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06f66afb-564e-442a-b833-2d6db747986f-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "06f66afb-564e-442a-b833-2d6db747986f" (UID: "06f66afb-564e-442a-b833-2d6db747986f"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:00:59 crc kubenswrapper[4772]: I1122 11:00:59.930382 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"06f66afb-564e-442a-b833-2d6db747986f","Type":"ContainerDied","Data":"aad143134eb589a1a9361b04f1808f205b940d12f5bdf8bad9a1c5eca70ab188"} Nov 22 11:00:59 crc kubenswrapper[4772]: I1122 11:00:59.930442 4772 scope.go:117] "RemoveContainer" containerID="e510026a5b066cf92da22755c75a394db34caee61cb17079473b6dcce63ca369" Nov 22 11:00:59 crc kubenswrapper[4772]: I1122 11:00:59.930576 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 11:00:59 crc kubenswrapper[4772]: I1122 11:00:59.931834 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06f66afb-564e-442a-b833-2d6db747986f-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:59 crc kubenswrapper[4772]: I1122 11:00:59.931866 4772 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06f66afb-564e-442a-b833-2d6db747986f-logs\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:59 crc kubenswrapper[4772]: I1122 11:00:59.931877 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06f66afb-564e-442a-b833-2d6db747986f-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:59 crc kubenswrapper[4772]: I1122 11:00:59.931916 4772 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Nov 22 11:00:59 crc kubenswrapper[4772]: I1122 11:00:59.932004 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06f66afb-564e-442a-b833-2d6db747986f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:59 crc kubenswrapper[4772]: I1122 11:00:59.932037 4772 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/06f66afb-564e-442a-b833-2d6db747986f-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:59 crc kubenswrapper[4772]: I1122 11:00:59.932062 4772 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/06f66afb-564e-442a-b833-2d6db747986f-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:59 crc kubenswrapper[4772]: I1122 11:00:59.932119 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9tx5\" (UniqueName: \"kubernetes.io/projected/06f66afb-564e-442a-b833-2d6db747986f-kube-api-access-l9tx5\") on node \"crc\" DevicePath \"\"" Nov 22 11:00:59 crc kubenswrapper[4772]: I1122 11:00:59.935875 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"492839a5-207f-4770-9335-1117c1c33fe7","Type":"ContainerStarted","Data":"1d4a98ffaa9b021106018898120ea752451677dcb45f0c6f8e2be16d8f1b2b6f"} Nov 22 11:00:59 crc kubenswrapper[4772]: I1122 11:00:59.940379 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"14ed2945-ef18-49de-9c18-679e011d3df5","Type":"ContainerStarted","Data":"88ecd0459f0ac9488f0cd3eb8c402462803c773cf6ef7940b9aa2db2abf09dea"} Nov 22 11:00:59 crc kubenswrapper[4772]: I1122 11:00:59.940527 4772 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"14ed2945-ef18-49de-9c18-679e011d3df5","Type":"ContainerStarted","Data":"1c3f5bced2f15e83de4da4eba1c5544977260d6d5e2b4f4ca22c417c52946598"} Nov 22 11:00:59 crc kubenswrapper[4772]: I1122 11:00:59.971541 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 11:00:59 crc kubenswrapper[4772]: I1122 11:00:59.989680 4772 scope.go:117] "RemoveContainer" containerID="14231a0f6ec640aa6c40dfaa0db95ae4ea80eaaa3fc4ca5b123ac6999477d1f3" Nov 22 11:00:59 crc kubenswrapper[4772]: I1122 11:00:59.996668 4772 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Nov 22 11:01:00 crc kubenswrapper[4772]: I1122 11:01:00.008796 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 11:01:00 crc kubenswrapper[4772]: I1122 11:01:00.036889 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 11:01:00 crc kubenswrapper[4772]: E1122 11:01:00.038113 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06f66afb-564e-442a-b833-2d6db747986f" containerName="glance-httpd" Nov 22 11:01:00 crc kubenswrapper[4772]: I1122 11:01:00.038129 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="06f66afb-564e-442a-b833-2d6db747986f" containerName="glance-httpd" Nov 22 11:01:00 crc kubenswrapper[4772]: E1122 11:01:00.038146 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06f66afb-564e-442a-b833-2d6db747986f" containerName="glance-log" Nov 22 11:01:00 crc kubenswrapper[4772]: I1122 11:01:00.038152 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="06f66afb-564e-442a-b833-2d6db747986f" containerName="glance-log" Nov 22 11:01:00 crc kubenswrapper[4772]: I1122 11:01:00.038678 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="06f66afb-564e-442a-b833-2d6db747986f" containerName="glance-httpd" Nov 22 11:01:00 crc kubenswrapper[4772]: I1122 11:01:00.038706 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="06f66afb-564e-442a-b833-2d6db747986f" containerName="glance-log" Nov 22 11:01:00 crc kubenswrapper[4772]: I1122 11:01:00.041278 4772 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Nov 22 11:01:00 crc kubenswrapper[4772]: I1122 11:01:00.060480 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 11:01:00 crc kubenswrapper[4772]: I1122 11:01:00.064313 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 22 11:01:00 crc kubenswrapper[4772]: I1122 11:01:00.064498 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 22 11:01:00 crc kubenswrapper[4772]: I1122 11:01:00.064855 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 11:01:00 crc kubenswrapper[4772]: I1122 11:01:00.142870 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50\") " pod="openstack/glance-default-internal-api-0" Nov 22 11:01:00 crc kubenswrapper[4772]: I1122 11:01:00.142936 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50\") " pod="openstack/glance-default-internal-api-0" Nov 22 11:01:00 crc kubenswrapper[4772]: I1122 11:01:00.142976 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50-config-data\") pod \"glance-default-internal-api-0\" (UID: \"93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50\") " pod="openstack/glance-default-internal-api-0" Nov 22 11:01:00 crc kubenswrapper[4772]: I1122 11:01:00.143003 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50\") " pod="openstack/glance-default-internal-api-0" Nov 22 11:01:00 crc kubenswrapper[4772]: I1122 11:01:00.143081 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50-logs\") pod \"glance-default-internal-api-0\" (UID: \"93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50\") " pod="openstack/glance-default-internal-api-0" Nov 22 11:01:00 crc kubenswrapper[4772]: I1122 11:01:00.143118 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50-scripts\") pod \"glance-default-internal-api-0\" (UID: \"93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50\") " pod="openstack/glance-default-internal-api-0" Nov 22 11:01:00 crc kubenswrapper[4772]: I1122 11:01:00.143198 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50\") " pod="openstack/glance-default-internal-api-0" Nov 22 11:01:00 crc kubenswrapper[4772]: I1122 11:01:00.143249 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-prsb8\" (UniqueName: \"kubernetes.io/projected/93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50-kube-api-access-prsb8\") pod \"glance-default-internal-api-0\" (UID: \"93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50\") " pod="openstack/glance-default-internal-api-0" Nov 22 11:01:00 crc kubenswrapper[4772]: I1122 11:01:00.145981 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29396821-s9nxs"] Nov 22 11:01:00 crc kubenswrapper[4772]: I1122 11:01:00.150382 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29396821-s9nxs" Nov 22 11:01:00 crc kubenswrapper[4772]: I1122 11:01:00.161118 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29396821-s9nxs"] Nov 22 11:01:00 crc kubenswrapper[4772]: I1122 11:01:00.246508 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lhzw\" (UniqueName: \"kubernetes.io/projected/b998bcc0-7358-4f93-9584-b1b99829108f-kube-api-access-7lhzw\") pod \"keystone-cron-29396821-s9nxs\" (UID: \"b998bcc0-7358-4f93-9584-b1b99829108f\") " pod="openstack/keystone-cron-29396821-s9nxs" Nov 22 11:01:00 crc kubenswrapper[4772]: I1122 11:01:00.246624 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b998bcc0-7358-4f93-9584-b1b99829108f-combined-ca-bundle\") pod \"keystone-cron-29396821-s9nxs\" (UID: \"b998bcc0-7358-4f93-9584-b1b99829108f\") " pod="openstack/keystone-cron-29396821-s9nxs" Nov 22 11:01:00 crc kubenswrapper[4772]: I1122 11:01:00.246701 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50\") " pod="openstack/glance-default-internal-api-0" Nov 22 11:01:00 crc kubenswrapper[4772]: I1122 11:01:00.247514 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prsb8\" (UniqueName: \"kubernetes.io/projected/93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50-kube-api-access-prsb8\") pod \"glance-default-internal-api-0\" (UID: \"93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50\") " pod="openstack/glance-default-internal-api-0" Nov 22 11:01:00 crc kubenswrapper[4772]: I1122 11:01:00.247557 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50\") " pod="openstack/glance-default-internal-api-0" Nov 22 11:01:00 crc kubenswrapper[4772]: I1122 11:01:00.247602 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b998bcc0-7358-4f93-9584-b1b99829108f-fernet-keys\") pod \"keystone-cron-29396821-s9nxs\" (UID: \"b998bcc0-7358-4f93-9584-b1b99829108f\") " pod="openstack/keystone-cron-29396821-s9nxs" Nov 22 11:01:00 crc kubenswrapper[4772]: I1122 11:01:00.247633 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50\") " 
pod="openstack/glance-default-internal-api-0" Nov 22 11:01:00 crc kubenswrapper[4772]: I1122 11:01:00.247699 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50-config-data\") pod \"glance-default-internal-api-0\" (UID: \"93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50\") " pod="openstack/glance-default-internal-api-0" Nov 22 11:01:00 crc kubenswrapper[4772]: I1122 11:01:00.247726 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50\") " pod="openstack/glance-default-internal-api-0" Nov 22 11:01:00 crc kubenswrapper[4772]: I1122 11:01:00.247805 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b998bcc0-7358-4f93-9584-b1b99829108f-config-data\") pod \"keystone-cron-29396821-s9nxs\" (UID: \"b998bcc0-7358-4f93-9584-b1b99829108f\") " pod="openstack/keystone-cron-29396821-s9nxs" Nov 22 11:01:00 crc kubenswrapper[4772]: I1122 11:01:00.247847 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50-logs\") pod \"glance-default-internal-api-0\" (UID: \"93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50\") " pod="openstack/glance-default-internal-api-0" Nov 22 11:01:00 crc kubenswrapper[4772]: I1122 11:01:00.247943 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50-scripts\") pod \"glance-default-internal-api-0\" (UID: \"93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50\") " pod="openstack/glance-default-internal-api-0" Nov 22 11:01:00 crc kubenswrapper[4772]: I1122 11:01:00.248018 4772 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-internal-api-0" Nov 22 11:01:00 crc kubenswrapper[4772]: I1122 11:01:00.248506 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50-logs\") pod \"glance-default-internal-api-0\" (UID: \"93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50\") " pod="openstack/glance-default-internal-api-0" Nov 22 11:01:00 crc kubenswrapper[4772]: I1122 11:01:00.249061 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50\") " pod="openstack/glance-default-internal-api-0" Nov 22 11:01:00 crc kubenswrapper[4772]: I1122 11:01:00.251888 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50-scripts\") pod \"glance-default-internal-api-0\" (UID: \"93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50\") " pod="openstack/glance-default-internal-api-0" Nov 22 11:01:00 crc kubenswrapper[4772]: I1122 11:01:00.252447 4772 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50\") " pod="openstack/glance-default-internal-api-0" Nov 22 11:01:00 crc kubenswrapper[4772]: I1122 11:01:00.254690 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50\") " pod="openstack/glance-default-internal-api-0" Nov 22 11:01:00 crc kubenswrapper[4772]: I1122 11:01:00.269436 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50-config-data\") pod \"glance-default-internal-api-0\" (UID: \"93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50\") " pod="openstack/glance-default-internal-api-0" Nov 22 11:01:00 crc kubenswrapper[4772]: I1122 11:01:00.269456 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prsb8\" (UniqueName: \"kubernetes.io/projected/93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50-kube-api-access-prsb8\") pod \"glance-default-internal-api-0\" (UID: \"93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50\") " pod="openstack/glance-default-internal-api-0" Nov 22 11:01:00 crc kubenswrapper[4772]: I1122 11:01:00.284975 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50\") " pod="openstack/glance-default-internal-api-0" Nov 22 11:01:00 crc kubenswrapper[4772]: I1122 11:01:00.349872 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b998bcc0-7358-4f93-9584-b1b99829108f-combined-ca-bundle\") pod \"keystone-cron-29396821-s9nxs\" (UID: \"b998bcc0-7358-4f93-9584-b1b99829108f\") " pod="openstack/keystone-cron-29396821-s9nxs" Nov 22 11:01:00 crc kubenswrapper[4772]: I1122 11:01:00.350025 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b998bcc0-7358-4f93-9584-b1b99829108f-fernet-keys\") pod \"keystone-cron-29396821-s9nxs\" (UID: \"b998bcc0-7358-4f93-9584-b1b99829108f\") " pod="openstack/keystone-cron-29396821-s9nxs" Nov 22 11:01:00 crc kubenswrapper[4772]: I1122 11:01:00.350163 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b998bcc0-7358-4f93-9584-b1b99829108f-config-data\") pod \"keystone-cron-29396821-s9nxs\" (UID: \"b998bcc0-7358-4f93-9584-b1b99829108f\") " pod="openstack/keystone-cron-29396821-s9nxs" Nov 22 11:01:00 crc kubenswrapper[4772]: I1122 11:01:00.350263 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lhzw\" (UniqueName: \"kubernetes.io/projected/b998bcc0-7358-4f93-9584-b1b99829108f-kube-api-access-7lhzw\") pod \"keystone-cron-29396821-s9nxs\" (UID: \"b998bcc0-7358-4f93-9584-b1b99829108f\") " pod="openstack/keystone-cron-29396821-s9nxs" Nov 22 11:01:00 crc kubenswrapper[4772]: I1122 11:01:00.355486 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" 
(UniqueName: \"kubernetes.io/secret/b998bcc0-7358-4f93-9584-b1b99829108f-fernet-keys\") pod \"keystone-cron-29396821-s9nxs\" (UID: \"b998bcc0-7358-4f93-9584-b1b99829108f\") " pod="openstack/keystone-cron-29396821-s9nxs" Nov 22 11:01:00 crc kubenswrapper[4772]: I1122 11:01:00.356401 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b998bcc0-7358-4f93-9584-b1b99829108f-combined-ca-bundle\") pod \"keystone-cron-29396821-s9nxs\" (UID: \"b998bcc0-7358-4f93-9584-b1b99829108f\") " pod="openstack/keystone-cron-29396821-s9nxs" Nov 22 11:01:00 crc kubenswrapper[4772]: I1122 11:01:00.362661 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b998bcc0-7358-4f93-9584-b1b99829108f-config-data\") pod \"keystone-cron-29396821-s9nxs\" (UID: \"b998bcc0-7358-4f93-9584-b1b99829108f\") " pod="openstack/keystone-cron-29396821-s9nxs" Nov 22 11:01:00 crc kubenswrapper[4772]: I1122 11:01:00.369458 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lhzw\" (UniqueName: \"kubernetes.io/projected/b998bcc0-7358-4f93-9584-b1b99829108f-kube-api-access-7lhzw\") pod \"keystone-cron-29396821-s9nxs\" (UID: \"b998bcc0-7358-4f93-9584-b1b99829108f\") " pod="openstack/keystone-cron-29396821-s9nxs" Nov 22 11:01:00 crc kubenswrapper[4772]: I1122 11:01:00.404835 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 11:01:00 crc kubenswrapper[4772]: I1122 11:01:00.486903 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29396821-s9nxs" Nov 22 11:01:00 crc kubenswrapper[4772]: I1122 11:01:00.951769 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"14ed2945-ef18-49de-9c18-679e011d3df5","Type":"ContainerStarted","Data":"dd542af28bce5c278e708a047b0757d9812c5e11e9dc0dff83889ad014c4b497"} Nov 22 11:01:00 crc kubenswrapper[4772]: I1122 11:01:00.976897 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.97687548 podStartE2EDuration="3.97687548s" podCreationTimestamp="2025-11-22 11:00:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 11:01:00.97406115 +0000 UTC m=+1381.213505664" watchObservedRunningTime="2025-11-22 11:01:00.97687548 +0000 UTC m=+1381.216319974" Nov 22 11:01:01 crc kubenswrapper[4772]: I1122 11:01:01.426813 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06f66afb-564e-442a-b833-2d6db747986f" path="/var/lib/kubelet/pods/06f66afb-564e-442a-b833-2d6db747986f/volumes" Nov 22 11:01:01 crc kubenswrapper[4772]: I1122 11:01:01.532853 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 11:01:01 crc kubenswrapper[4772]: I1122 11:01:01.532918 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 11:01:01 crc kubenswrapper[4772]: I1122 11:01:01.556356 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 11:01:01 crc kubenswrapper[4772]: I1122 11:01:01.598932 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29396821-s9nxs"] Nov 22 11:01:01 crc kubenswrapper[4772]: W1122 11:01:01.612214 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb998bcc0_7358_4f93_9584_b1b99829108f.slice/crio-253466ed61152c5c4abc39a3f6f0e4baa0bb1b982231a728fc6bf0c303aabc57 WatchSource:0}: Error finding container 253466ed61152c5c4abc39a3f6f0e4baa0bb1b982231a728fc6bf0c303aabc57: Status 404 returned error can't find the container with id 253466ed61152c5c4abc39a3f6f0e4baa0bb1b982231a728fc6bf0c303aabc57 Nov 22 11:01:01 crc kubenswrapper[4772]: I1122 11:01:01.960742 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50","Type":"ContainerStarted","Data":"0908d1f79b72482026c58a702cd21bf200bf6a3eb6efd4c6e210385b1b85d1fc"} Nov 22 11:01:01 crc kubenswrapper[4772]: I1122 11:01:01.962981 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29396821-s9nxs" event={"ID":"b998bcc0-7358-4f93-9584-b1b99829108f","Type":"ContainerStarted","Data":"253466ed61152c5c4abc39a3f6f0e4baa0bb1b982231a728fc6bf0c303aabc57"} Nov 22 11:01:01 crc kubenswrapper[4772]: I1122 11:01:01.964489 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"492839a5-207f-4770-9335-1117c1c33fe7","Type":"ContainerStarted","Data":"46fe9527cb5db65e1060d147cf125fce69eaf4aa4e546d3e5b290376b707615f"} Nov 22 11:01:07 crc kubenswrapper[4772]: I1122 11:01:07.011692 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29396821-s9nxs" event={"ID":"b998bcc0-7358-4f93-9584-b1b99829108f","Type":"ContainerStarted","Data":"500a054f9f59bed80e21a830df2e4586802b8b2159197cb99902d460e8a2cdb9"} Nov 22 11:01:07 crc kubenswrapper[4772]: I1122 11:01:07.013566 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50","Type":"ContainerStarted","Data":"d2ff64d96dfba7abbcacbdddbc68b2ab55e205bcc9422fdf3d5388dd6cf5273f"} Nov 22 11:01:08 crc kubenswrapper[4772]: I1122 11:01:08.023659 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50","Type":"ContainerStarted","Data":"59510dd7b9579831eb08695d39ab17e9efd8ec1346988dbb2e6af8437f7ad097"} Nov 22 11:01:08 crc kubenswrapper[4772]: I1122 11:01:08.028550 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"492839a5-207f-4770-9335-1117c1c33fe7","Type":"ContainerStarted","Data":"883ea789b9af8213e4ecd7cd6aeeefb1bfa50fbe0fa63ec35b57a545c6f246a4"} Nov 22 11:01:08 crc kubenswrapper[4772]: I1122 11:01:08.066062 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=9.066026863 podStartE2EDuration="9.066026863s" podCreationTimestamp="2025-11-22 11:00:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-22 11:01:08.052720022 +0000 UTC m=+1388.292164526" watchObservedRunningTime="2025-11-22 11:01:08.066026863 +0000 UTC m=+1388.305471357" Nov 22 11:01:08 crc kubenswrapper[4772]: I1122 11:01:08.097563 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29396821-s9nxs" podStartSLOduration=8.097541475 podStartE2EDuration="8.097541475s" podCreationTimestamp="2025-11-22 11:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 11:01:08.094339286 +0000 UTC m=+1388.333783790" watchObservedRunningTime="2025-11-22 11:01:08.097541475 +0000 UTC m=+1388.336985959" Nov 22 11:01:08 crc kubenswrapper[4772]: I1122 11:01:08.355465 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 22 11:01:08 crc kubenswrapper[4772]: I1122 11:01:08.355513 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 22 11:01:08 crc kubenswrapper[4772]: I1122 11:01:08.391960 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 22 11:01:08 crc kubenswrapper[4772]: I1122 11:01:08.406796 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 22 11:01:09 crc kubenswrapper[4772]: I1122 11:01:09.043263 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"492839a5-207f-4770-9335-1117c1c33fe7","Type":"ContainerStarted","Data":"988a07756a369a57d58f99da792b3741eaaa1df45c237b422e107b7ef7dc2b41"} Nov 22 11:01:09 crc kubenswrapper[4772]: I1122 11:01:09.045024 4772 generic.go:334] "Generic (PLEG): container finished" podID="b998bcc0-7358-4f93-9584-b1b99829108f" containerID="500a054f9f59bed80e21a830df2e4586802b8b2159197cb99902d460e8a2cdb9" exitCode=0 Nov 22 11:01:09 crc kubenswrapper[4772]: I1122 11:01:09.045315 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29396821-s9nxs" event={"ID":"b998bcc0-7358-4f93-9584-b1b99829108f","Type":"ContainerDied","Data":"500a054f9f59bed80e21a830df2e4586802b8b2159197cb99902d460e8a2cdb9"} Nov 22 11:01:09 crc kubenswrapper[4772]: I1122 11:01:09.045522 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 22 11:01:09 crc kubenswrapper[4772]: I1122 11:01:09.045951 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 22 11:01:10 crc kubenswrapper[4772]: I1122 11:01:10.064702 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"492839a5-207f-4770-9335-1117c1c33fe7","Type":"ContainerStarted","Data":"766f6ebeab638b9c3a70b74ad8f6b9b78186ca6e10add33ac371e60ae666632a"} Nov 22 11:01:10 crc kubenswrapper[4772]: I1122 11:01:10.065284 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 22 11:01:10 crc kubenswrapper[4772]: I1122 11:01:10.091327 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.428974222 podStartE2EDuration="13.091313994s" podCreationTimestamp="2025-11-22 11:00:57 +0000 UTC" firstStartedPulling="2025-11-22 11:00:58.994736888 +0000 UTC m=+1379.234181382" lastFinishedPulling="2025-11-22 
11:01:09.65707666 +0000 UTC m=+1389.896521154" observedRunningTime="2025-11-22 11:01:10.088609786 +0000 UTC m=+1390.328054280" watchObservedRunningTime="2025-11-22 11:01:10.091313994 +0000 UTC m=+1390.330758488" Nov 22 11:01:10 crc kubenswrapper[4772]: I1122 11:01:10.405552 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 22 11:01:10 crc kubenswrapper[4772]: I1122 11:01:10.405597 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 22 11:01:10 crc kubenswrapper[4772]: I1122 11:01:10.440756 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 22 11:01:10 crc kubenswrapper[4772]: I1122 11:01:10.451378 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 22 11:01:10 crc kubenswrapper[4772]: I1122 11:01:10.453504 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29396821-s9nxs" Nov 22 11:01:10 crc kubenswrapper[4772]: I1122 11:01:10.554195 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b998bcc0-7358-4f93-9584-b1b99829108f-config-data\") pod \"b998bcc0-7358-4f93-9584-b1b99829108f\" (UID: \"b998bcc0-7358-4f93-9584-b1b99829108f\") " Nov 22 11:01:10 crc kubenswrapper[4772]: I1122 11:01:10.554274 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b998bcc0-7358-4f93-9584-b1b99829108f-fernet-keys\") pod \"b998bcc0-7358-4f93-9584-b1b99829108f\" (UID: \"b998bcc0-7358-4f93-9584-b1b99829108f\") " Nov 22 11:01:10 crc kubenswrapper[4772]: I1122 11:01:10.554298 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b998bcc0-7358-4f93-9584-b1b99829108f-combined-ca-bundle\") pod \"b998bcc0-7358-4f93-9584-b1b99829108f\" (UID: \"b998bcc0-7358-4f93-9584-b1b99829108f\") " Nov 22 11:01:10 crc kubenswrapper[4772]: I1122 11:01:10.554376 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lhzw\" (UniqueName: \"kubernetes.io/projected/b998bcc0-7358-4f93-9584-b1b99829108f-kube-api-access-7lhzw\") pod \"b998bcc0-7358-4f93-9584-b1b99829108f\" (UID: \"b998bcc0-7358-4f93-9584-b1b99829108f\") " Nov 22 11:01:10 crc kubenswrapper[4772]: I1122 11:01:10.560019 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b998bcc0-7358-4f93-9584-b1b99829108f-kube-api-access-7lhzw" (OuterVolumeSpecName: "kube-api-access-7lhzw") pod "b998bcc0-7358-4f93-9584-b1b99829108f" (UID: "b998bcc0-7358-4f93-9584-b1b99829108f"). InnerVolumeSpecName "kube-api-access-7lhzw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:01:10 crc kubenswrapper[4772]: I1122 11:01:10.560438 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b998bcc0-7358-4f93-9584-b1b99829108f-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "b998bcc0-7358-4f93-9584-b1b99829108f" (UID: "b998bcc0-7358-4f93-9584-b1b99829108f"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:01:10 crc kubenswrapper[4772]: I1122 11:01:10.583007 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b998bcc0-7358-4f93-9584-b1b99829108f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b998bcc0-7358-4f93-9584-b1b99829108f" (UID: "b998bcc0-7358-4f93-9584-b1b99829108f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:01:10 crc kubenswrapper[4772]: I1122 11:01:10.601092 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b998bcc0-7358-4f93-9584-b1b99829108f-config-data" (OuterVolumeSpecName: "config-data") pod "b998bcc0-7358-4f93-9584-b1b99829108f" (UID: "b998bcc0-7358-4f93-9584-b1b99829108f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:01:10 crc kubenswrapper[4772]: I1122 11:01:10.656538 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7lhzw\" (UniqueName: \"kubernetes.io/projected/b998bcc0-7358-4f93-9584-b1b99829108f-kube-api-access-7lhzw\") on node \"crc\" DevicePath \"\"" Nov 22 11:01:10 crc kubenswrapper[4772]: I1122 11:01:10.656566 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b998bcc0-7358-4f93-9584-b1b99829108f-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 11:01:10 crc kubenswrapper[4772]: I1122 11:01:10.656577 4772 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b998bcc0-7358-4f93-9584-b1b99829108f-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 22 11:01:10 crc kubenswrapper[4772]: I1122 11:01:10.656588 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b998bcc0-7358-4f93-9584-b1b99829108f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 11:01:11 crc kubenswrapper[4772]: I1122 11:01:11.043531 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 22 11:01:11 crc kubenswrapper[4772]: I1122 11:01:11.077656 4772 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 22 11:01:11 crc kubenswrapper[4772]: I1122 11:01:11.078512 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29396821-s9nxs" Nov 22 11:01:11 crc kubenswrapper[4772]: I1122 11:01:11.085308 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29396821-s9nxs" event={"ID":"b998bcc0-7358-4f93-9584-b1b99829108f","Type":"ContainerDied","Data":"253466ed61152c5c4abc39a3f6f0e4baa0bb1b982231a728fc6bf0c303aabc57"} Nov 22 11:01:11 crc kubenswrapper[4772]: I1122 11:01:11.085373 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="253466ed61152c5c4abc39a3f6f0e4baa0bb1b982231a728fc6bf0c303aabc57" Nov 22 11:01:11 crc kubenswrapper[4772]: I1122 11:01:11.085894 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 22 11:01:11 crc kubenswrapper[4772]: I1122 11:01:11.085956 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 22 11:01:11 crc kubenswrapper[4772]: I1122 11:01:11.225623 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 22 11:01:13 crc kubenswrapper[4772]: I1122 11:01:13.014217 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 22 11:01:13 crc kubenswrapper[4772]: I1122 11:01:13.991719 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 22 11:01:14 crc kubenswrapper[4772]: I1122 11:01:14.109099 4772 generic.go:334] "Generic (PLEG): container finished" podID="0fb39191-1ec6-4ea4-84d4-8c4dc36f1031" containerID="4f80aca9ab925ac5f7c357f391fb695a8032c9580d4de9e838c68a35fdefcdc3" exitCode=0 Nov 22 11:01:14 crc kubenswrapper[4772]: I1122 11:01:14.109169 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-7q2nv" event={"ID":"0fb39191-1ec6-4ea4-84d4-8c4dc36f1031","Type":"ContainerDied","Data":"4f80aca9ab925ac5f7c357f391fb695a8032c9580d4de9e838c68a35fdefcdc3"} Nov 22 11:01:15 crc kubenswrapper[4772]: I1122 11:01:15.518691 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-7q2nv" Nov 22 11:01:15 crc kubenswrapper[4772]: I1122 11:01:15.564915 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fb39191-1ec6-4ea4-84d4-8c4dc36f1031-combined-ca-bundle\") pod \"0fb39191-1ec6-4ea4-84d4-8c4dc36f1031\" (UID: \"0fb39191-1ec6-4ea4-84d4-8c4dc36f1031\") " Nov 22 11:01:15 crc kubenswrapper[4772]: I1122 11:01:15.566592 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fb39191-1ec6-4ea4-84d4-8c4dc36f1031-config-data\") pod \"0fb39191-1ec6-4ea4-84d4-8c4dc36f1031\" (UID: \"0fb39191-1ec6-4ea4-84d4-8c4dc36f1031\") " Nov 22 11:01:15 crc kubenswrapper[4772]: I1122 11:01:15.567074 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0fb39191-1ec6-4ea4-84d4-8c4dc36f1031-scripts\") pod \"0fb39191-1ec6-4ea4-84d4-8c4dc36f1031\" (UID: \"0fb39191-1ec6-4ea4-84d4-8c4dc36f1031\") " Nov 22 11:01:15 crc kubenswrapper[4772]: I1122 11:01:15.567121 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-th45m\" (UniqueName: \"kubernetes.io/projected/0fb39191-1ec6-4ea4-84d4-8c4dc36f1031-kube-api-access-th45m\") pod \"0fb39191-1ec6-4ea4-84d4-8c4dc36f1031\" (UID: \"0fb39191-1ec6-4ea4-84d4-8c4dc36f1031\") " Nov 22 11:01:15 crc kubenswrapper[4772]: I1122 11:01:15.573206 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fb39191-1ec6-4ea4-84d4-8c4dc36f1031-scripts" (OuterVolumeSpecName: "scripts") pod "0fb39191-1ec6-4ea4-84d4-8c4dc36f1031" (UID: "0fb39191-1ec6-4ea4-84d4-8c4dc36f1031"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:01:15 crc kubenswrapper[4772]: I1122 11:01:15.574298 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fb39191-1ec6-4ea4-84d4-8c4dc36f1031-kube-api-access-th45m" (OuterVolumeSpecName: "kube-api-access-th45m") pod "0fb39191-1ec6-4ea4-84d4-8c4dc36f1031" (UID: "0fb39191-1ec6-4ea4-84d4-8c4dc36f1031"). InnerVolumeSpecName "kube-api-access-th45m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:01:15 crc kubenswrapper[4772]: I1122 11:01:15.592740 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fb39191-1ec6-4ea4-84d4-8c4dc36f1031-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0fb39191-1ec6-4ea4-84d4-8c4dc36f1031" (UID: "0fb39191-1ec6-4ea4-84d4-8c4dc36f1031"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:01:15 crc kubenswrapper[4772]: I1122 11:01:15.592970 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fb39191-1ec6-4ea4-84d4-8c4dc36f1031-config-data" (OuterVolumeSpecName: "config-data") pod "0fb39191-1ec6-4ea4-84d4-8c4dc36f1031" (UID: "0fb39191-1ec6-4ea4-84d4-8c4dc36f1031"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:01:15 crc kubenswrapper[4772]: I1122 11:01:15.669220 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0fb39191-1ec6-4ea4-84d4-8c4dc36f1031-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 11:01:15 crc kubenswrapper[4772]: I1122 11:01:15.669248 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-th45m\" (UniqueName: \"kubernetes.io/projected/0fb39191-1ec6-4ea4-84d4-8c4dc36f1031-kube-api-access-th45m\") on node \"crc\" DevicePath \"\"" Nov 22 11:01:15 crc kubenswrapper[4772]: I1122 11:01:15.669258 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fb39191-1ec6-4ea4-84d4-8c4dc36f1031-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 11:01:15 crc kubenswrapper[4772]: I1122 11:01:15.669267 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fb39191-1ec6-4ea4-84d4-8c4dc36f1031-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 11:01:16 crc kubenswrapper[4772]: I1122 11:01:16.128489 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-7q2nv" event={"ID":"0fb39191-1ec6-4ea4-84d4-8c4dc36f1031","Type":"ContainerDied","Data":"4f5a2bf36b26af1000f7d069151545130e46e9f7ab1f0766bf798a888b08980b"} Nov 22 11:01:16 crc kubenswrapper[4772]: I1122 11:01:16.128537 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f5a2bf36b26af1000f7d069151545130e46e9f7ab1f0766bf798a888b08980b" Nov 22 11:01:16 crc kubenswrapper[4772]: I1122 11:01:16.128855 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-7q2nv" Nov 22 11:01:16 crc kubenswrapper[4772]: I1122 11:01:16.235752 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 22 11:01:16 crc kubenswrapper[4772]: E1122 11:01:16.236244 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fb39191-1ec6-4ea4-84d4-8c4dc36f1031" containerName="nova-cell0-conductor-db-sync" Nov 22 11:01:16 crc kubenswrapper[4772]: I1122 11:01:16.236263 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fb39191-1ec6-4ea4-84d4-8c4dc36f1031" containerName="nova-cell0-conductor-db-sync" Nov 22 11:01:16 crc kubenswrapper[4772]: E1122 11:01:16.236283 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b998bcc0-7358-4f93-9584-b1b99829108f" containerName="keystone-cron" Nov 22 11:01:16 crc kubenswrapper[4772]: I1122 11:01:16.236290 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="b998bcc0-7358-4f93-9584-b1b99829108f" containerName="keystone-cron" Nov 22 11:01:16 crc kubenswrapper[4772]: I1122 11:01:16.236531 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="b998bcc0-7358-4f93-9584-b1b99829108f" containerName="keystone-cron" Nov 22 11:01:16 crc kubenswrapper[4772]: I1122 11:01:16.236563 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fb39191-1ec6-4ea4-84d4-8c4dc36f1031" containerName="nova-cell0-conductor-db-sync" Nov 22 11:01:16 crc kubenswrapper[4772]: I1122 11:01:16.237383 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 22 11:01:16 crc kubenswrapper[4772]: I1122 11:01:16.239901 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 22 11:01:16 crc kubenswrapper[4772]: I1122 11:01:16.240112 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-fnwl6" Nov 22 11:01:16 crc kubenswrapper[4772]: I1122 11:01:16.256471 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 22 11:01:16 crc kubenswrapper[4772]: I1122 11:01:16.385192 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33b26633-94ac-4439-b1ab-ab225d2e562b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"33b26633-94ac-4439-b1ab-ab225d2e562b\") " pod="openstack/nova-cell0-conductor-0" Nov 22 11:01:16 crc kubenswrapper[4772]: I1122 11:01:16.385252 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33b26633-94ac-4439-b1ab-ab225d2e562b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"33b26633-94ac-4439-b1ab-ab225d2e562b\") " pod="openstack/nova-cell0-conductor-0" Nov 22 11:01:16 crc kubenswrapper[4772]: I1122 11:01:16.385316 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9nvs\" (UniqueName: \"kubernetes.io/projected/33b26633-94ac-4439-b1ab-ab225d2e562b-kube-api-access-g9nvs\") pod \"nova-cell0-conductor-0\" (UID: \"33b26633-94ac-4439-b1ab-ab225d2e562b\") " pod="openstack/nova-cell0-conductor-0" Nov 22 11:01:16 crc kubenswrapper[4772]: I1122 11:01:16.489196 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9nvs\" (UniqueName: \"kubernetes.io/projected/33b26633-94ac-4439-b1ab-ab225d2e562b-kube-api-access-g9nvs\") pod \"nova-cell0-conductor-0\" (UID: \"33b26633-94ac-4439-b1ab-ab225d2e562b\") " pod="openstack/nova-cell0-conductor-0" Nov 22 11:01:16 crc kubenswrapper[4772]: I1122 11:01:16.489421 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33b26633-94ac-4439-b1ab-ab225d2e562b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"33b26633-94ac-4439-b1ab-ab225d2e562b\") " pod="openstack/nova-cell0-conductor-0" Nov 22 11:01:16 crc kubenswrapper[4772]: I1122 11:01:16.489506 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33b26633-94ac-4439-b1ab-ab225d2e562b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"33b26633-94ac-4439-b1ab-ab225d2e562b\") " pod="openstack/nova-cell0-conductor-0" Nov 22 11:01:16 crc kubenswrapper[4772]: I1122 11:01:16.496595 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33b26633-94ac-4439-b1ab-ab225d2e562b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"33b26633-94ac-4439-b1ab-ab225d2e562b\") " pod="openstack/nova-cell0-conductor-0" Nov 22 11:01:16 crc kubenswrapper[4772]: I1122 11:01:16.497849 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33b26633-94ac-4439-b1ab-ab225d2e562b-config-data\") pod \"nova-cell0-conductor-0\" 
(UID: \"33b26633-94ac-4439-b1ab-ab225d2e562b\") " pod="openstack/nova-cell0-conductor-0" Nov 22 11:01:16 crc kubenswrapper[4772]: I1122 11:01:16.509032 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9nvs\" (UniqueName: \"kubernetes.io/projected/33b26633-94ac-4439-b1ab-ab225d2e562b-kube-api-access-g9nvs\") pod \"nova-cell0-conductor-0\" (UID: \"33b26633-94ac-4439-b1ab-ab225d2e562b\") " pod="openstack/nova-cell0-conductor-0" Nov 22 11:01:16 crc kubenswrapper[4772]: I1122 11:01:16.565796 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 22 11:01:17 crc kubenswrapper[4772]: I1122 11:01:17.002759 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 22 11:01:17 crc kubenswrapper[4772]: I1122 11:01:17.137104 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"33b26633-94ac-4439-b1ab-ab225d2e562b","Type":"ContainerStarted","Data":"cdefe172ea5ef24ff2edbeb32a6b1a6f62488c1e44102ccfed8a9bd20edbd16e"} Nov 22 11:01:18 crc kubenswrapper[4772]: I1122 11:01:18.148126 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"33b26633-94ac-4439-b1ab-ab225d2e562b","Type":"ContainerStarted","Data":"08c0c6e64c972cfd07e310e4abfe3d9a7361c0e9c7848ec91a7d29025e8bfaf9"} Nov 22 11:01:18 crc kubenswrapper[4772]: I1122 11:01:18.149940 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Nov 22 11:01:18 crc kubenswrapper[4772]: I1122 11:01:18.171560 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.1715372 podStartE2EDuration="2.1715372s" podCreationTimestamp="2025-11-22 11:01:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 11:01:18.168866063 +0000 UTC m=+1398.408310547" watchObservedRunningTime="2025-11-22 11:01:18.1715372 +0000 UTC m=+1398.410981694" Nov 22 11:01:26 crc kubenswrapper[4772]: I1122 11:01:26.591423 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.067995 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-mktsv"] Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.069680 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-mktsv" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.072011 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.073435 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.083105 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-mktsv"] Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.182967 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc50fc65-ff5f-43d6-945b-d52d8535ccde-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-mktsv\" (UID: \"bc50fc65-ff5f-43d6-945b-d52d8535ccde\") " pod="openstack/nova-cell0-cell-mapping-mktsv" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.183159 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8grh\" (UniqueName: \"kubernetes.io/projected/bc50fc65-ff5f-43d6-945b-d52d8535ccde-kube-api-access-n8grh\") pod \"nova-cell0-cell-mapping-mktsv\" (UID: \"bc50fc65-ff5f-43d6-945b-d52d8535ccde\") " pod="openstack/nova-cell0-cell-mapping-mktsv" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.183207 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc50fc65-ff5f-43d6-945b-d52d8535ccde-scripts\") pod \"nova-cell0-cell-mapping-mktsv\" (UID: \"bc50fc65-ff5f-43d6-945b-d52d8535ccde\") " pod="openstack/nova-cell0-cell-mapping-mktsv" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.183419 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc50fc65-ff5f-43d6-945b-d52d8535ccde-config-data\") pod \"nova-cell0-cell-mapping-mktsv\" (UID: \"bc50fc65-ff5f-43d6-945b-d52d8535ccde\") " pod="openstack/nova-cell0-cell-mapping-mktsv" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.215726 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.217317 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.221239 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.233706 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.285265 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc50fc65-ff5f-43d6-945b-d52d8535ccde-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-mktsv\" (UID: \"bc50fc65-ff5f-43d6-945b-d52d8535ccde\") " pod="openstack/nova-cell0-cell-mapping-mktsv" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.285333 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fe6c0ca-665a-41a1-a3ea-d47bb0d8c729-config-data\") pod \"nova-scheduler-0\" (UID: \"2fe6c0ca-665a-41a1-a3ea-d47bb0d8c729\") " pod="openstack/nova-scheduler-0" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.285379 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8grh\" (UniqueName: \"kubernetes.io/projected/bc50fc65-ff5f-43d6-945b-d52d8535ccde-kube-api-access-n8grh\") pod \"nova-cell0-cell-mapping-mktsv\" (UID: \"bc50fc65-ff5f-43d6-945b-d52d8535ccde\") " pod="openstack/nova-cell0-cell-mapping-mktsv" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.285401 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc50fc65-ff5f-43d6-945b-d52d8535ccde-scripts\") pod \"nova-cell0-cell-mapping-mktsv\" (UID: \"bc50fc65-ff5f-43d6-945b-d52d8535ccde\") " pod="openstack/nova-cell0-cell-mapping-mktsv" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.285456 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fe6c0ca-665a-41a1-a3ea-d47bb0d8c729-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2fe6c0ca-665a-41a1-a3ea-d47bb0d8c729\") " pod="openstack/nova-scheduler-0" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.285498 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fx46f\" (UniqueName: \"kubernetes.io/projected/2fe6c0ca-665a-41a1-a3ea-d47bb0d8c729-kube-api-access-fx46f\") pod \"nova-scheduler-0\" (UID: \"2fe6c0ca-665a-41a1-a3ea-d47bb0d8c729\") " pod="openstack/nova-scheduler-0" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.285528 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc50fc65-ff5f-43d6-945b-d52d8535ccde-config-data\") pod \"nova-cell0-cell-mapping-mktsv\" (UID: \"bc50fc65-ff5f-43d6-945b-d52d8535ccde\") " pod="openstack/nova-cell0-cell-mapping-mktsv" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.290494 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.292366 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.292464 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc50fc65-ff5f-43d6-945b-d52d8535ccde-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-mktsv\" (UID: \"bc50fc65-ff5f-43d6-945b-d52d8535ccde\") " pod="openstack/nova-cell0-cell-mapping-mktsv" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.300124 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.300540 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc50fc65-ff5f-43d6-945b-d52d8535ccde-scripts\") pod \"nova-cell0-cell-mapping-mktsv\" (UID: \"bc50fc65-ff5f-43d6-945b-d52d8535ccde\") " pod="openstack/nova-cell0-cell-mapping-mktsv" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.301972 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc50fc65-ff5f-43d6-945b-d52d8535ccde-config-data\") pod \"nova-cell0-cell-mapping-mktsv\" (UID: \"bc50fc65-ff5f-43d6-945b-d52d8535ccde\") " pod="openstack/nova-cell0-cell-mapping-mktsv" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.317771 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.368542 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8grh\" (UniqueName: \"kubernetes.io/projected/bc50fc65-ff5f-43d6-945b-d52d8535ccde-kube-api-access-n8grh\") pod \"nova-cell0-cell-mapping-mktsv\" (UID: \"bc50fc65-ff5f-43d6-945b-d52d8535ccde\") " pod="openstack/nova-cell0-cell-mapping-mktsv" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.388082 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fe6c0ca-665a-41a1-a3ea-d47bb0d8c729-config-data\") pod \"nova-scheduler-0\" (UID: \"2fe6c0ca-665a-41a1-a3ea-d47bb0d8c729\") " pod="openstack/nova-scheduler-0" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.388419 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6q5g4\" (UniqueName: \"kubernetes.io/projected/a35672ad-5999-4891-972a-1461f9928b94-kube-api-access-6q5g4\") pod \"nova-api-0\" (UID: \"a35672ad-5999-4891-972a-1461f9928b94\") " pod="openstack/nova-api-0" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.388459 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a35672ad-5999-4891-972a-1461f9928b94-config-data\") pod \"nova-api-0\" (UID: \"a35672ad-5999-4891-972a-1461f9928b94\") " pod="openstack/nova-api-0" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.388479 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fe6c0ca-665a-41a1-a3ea-d47bb0d8c729-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2fe6c0ca-665a-41a1-a3ea-d47bb0d8c729\") " pod="openstack/nova-scheduler-0" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.388510 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/a35672ad-5999-4891-972a-1461f9928b94-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a35672ad-5999-4891-972a-1461f9928b94\") " pod="openstack/nova-api-0" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.388528 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fx46f\" (UniqueName: \"kubernetes.io/projected/2fe6c0ca-665a-41a1-a3ea-d47bb0d8c729-kube-api-access-fx46f\") pod \"nova-scheduler-0\" (UID: \"2fe6c0ca-665a-41a1-a3ea-d47bb0d8c729\") " pod="openstack/nova-scheduler-0" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.388556 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a35672ad-5999-4891-972a-1461f9928b94-logs\") pod \"nova-api-0\" (UID: \"a35672ad-5999-4891-972a-1461f9928b94\") " pod="openstack/nova-api-0" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.394449 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-mktsv" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.396953 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.397652 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fe6c0ca-665a-41a1-a3ea-d47bb0d8c729-config-data\") pod \"nova-scheduler-0\" (UID: \"2fe6c0ca-665a-41a1-a3ea-d47bb0d8c729\") " pod="openstack/nova-scheduler-0" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.414514 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.419381 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.434813 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fe6c0ca-665a-41a1-a3ea-d47bb0d8c729-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2fe6c0ca-665a-41a1-a3ea-d47bb0d8c729\") " pod="openstack/nova-scheduler-0" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.446999 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fx46f\" (UniqueName: \"kubernetes.io/projected/2fe6c0ca-665a-41a1-a3ea-d47bb0d8c729-kube-api-access-fx46f\") pod \"nova-scheduler-0\" (UID: \"2fe6c0ca-665a-41a1-a3ea-d47bb0d8c729\") " pod="openstack/nova-scheduler-0" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.483551 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.493175 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzhdm\" (UniqueName: \"kubernetes.io/projected/2e3a581f-ea17-4285-809c-e2a662710ed7-kube-api-access-xzhdm\") pod \"nova-metadata-0\" (UID: \"2e3a581f-ea17-4285-809c-e2a662710ed7\") " pod="openstack/nova-metadata-0" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.493292 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e3a581f-ea17-4285-809c-e2a662710ed7-config-data\") pod \"nova-metadata-0\" (UID: 
\"2e3a581f-ea17-4285-809c-e2a662710ed7\") " pod="openstack/nova-metadata-0" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.493353 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6q5g4\" (UniqueName: \"kubernetes.io/projected/a35672ad-5999-4891-972a-1461f9928b94-kube-api-access-6q5g4\") pod \"nova-api-0\" (UID: \"a35672ad-5999-4891-972a-1461f9928b94\") " pod="openstack/nova-api-0" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.493426 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a35672ad-5999-4891-972a-1461f9928b94-config-data\") pod \"nova-api-0\" (UID: \"a35672ad-5999-4891-972a-1461f9928b94\") " pod="openstack/nova-api-0" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.493497 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a35672ad-5999-4891-972a-1461f9928b94-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a35672ad-5999-4891-972a-1461f9928b94\") " pod="openstack/nova-api-0" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.493552 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a35672ad-5999-4891-972a-1461f9928b94-logs\") pod \"nova-api-0\" (UID: \"a35672ad-5999-4891-972a-1461f9928b94\") " pod="openstack/nova-api-0" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.493577 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e3a581f-ea17-4285-809c-e2a662710ed7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2e3a581f-ea17-4285-809c-e2a662710ed7\") " pod="openstack/nova-metadata-0" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.493607 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e3a581f-ea17-4285-809c-e2a662710ed7-logs\") pod \"nova-metadata-0\" (UID: \"2e3a581f-ea17-4285-809c-e2a662710ed7\") " pod="openstack/nova-metadata-0" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.499371 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a35672ad-5999-4891-972a-1461f9928b94-logs\") pod \"nova-api-0\" (UID: \"a35672ad-5999-4891-972a-1461f9928b94\") " pod="openstack/nova-api-0" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.530789 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a35672ad-5999-4891-972a-1461f9928b94-config-data\") pod \"nova-api-0\" (UID: \"a35672ad-5999-4891-972a-1461f9928b94\") " pod="openstack/nova-api-0" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.538218 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.541803 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a35672ad-5999-4891-972a-1461f9928b94-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a35672ad-5999-4891-972a-1461f9928b94\") " pod="openstack/nova-api-0" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.562476 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6q5g4\" (UniqueName: \"kubernetes.io/projected/a35672ad-5999-4891-972a-1461f9928b94-kube-api-access-6q5g4\") pod \"nova-api-0\" (UID: \"a35672ad-5999-4891-972a-1461f9928b94\") " pod="openstack/nova-api-0" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.596684 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e3a581f-ea17-4285-809c-e2a662710ed7-logs\") pod \"nova-metadata-0\" (UID: \"2e3a581f-ea17-4285-809c-e2a662710ed7\") " pod="openstack/nova-metadata-0" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.596811 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzhdm\" (UniqueName: \"kubernetes.io/projected/2e3a581f-ea17-4285-809c-e2a662710ed7-kube-api-access-xzhdm\") pod \"nova-metadata-0\" (UID: \"2e3a581f-ea17-4285-809c-e2a662710ed7\") " pod="openstack/nova-metadata-0" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.596939 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e3a581f-ea17-4285-809c-e2a662710ed7-config-data\") pod \"nova-metadata-0\" (UID: \"2e3a581f-ea17-4285-809c-e2a662710ed7\") " pod="openstack/nova-metadata-0" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.597277 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e3a581f-ea17-4285-809c-e2a662710ed7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2e3a581f-ea17-4285-809c-e2a662710ed7\") " pod="openstack/nova-metadata-0" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.598528 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e3a581f-ea17-4285-809c-e2a662710ed7-logs\") pod \"nova-metadata-0\" (UID: \"2e3a581f-ea17-4285-809c-e2a662710ed7\") " pod="openstack/nova-metadata-0" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.599271 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.603812 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e3a581f-ea17-4285-809c-e2a662710ed7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2e3a581f-ea17-4285-809c-e2a662710ed7\") " pod="openstack/nova-metadata-0" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.604606 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e3a581f-ea17-4285-809c-e2a662710ed7-config-data\") pod \"nova-metadata-0\" (UID: \"2e3a581f-ea17-4285-809c-e2a662710ed7\") " pod="openstack/nova-metadata-0" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.623754 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzhdm\" (UniqueName: \"kubernetes.io/projected/2e3a581f-ea17-4285-809c-e2a662710ed7-kube-api-access-xzhdm\") pod \"nova-metadata-0\" (UID: \"2e3a581f-ea17-4285-809c-e2a662710ed7\") " pod="openstack/nova-metadata-0" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.674116 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.675740 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.680565 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.687707 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-z97q7"] Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.690183 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-z97q7" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.699250 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd173c58-6b77-43c7-b3de-6b358dab5df2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"bd173c58-6b77-43c7-b3de-6b358dab5df2\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.699360 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd173c58-6b77-43c7-b3de-6b358dab5df2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"bd173c58-6b77-43c7-b3de-6b358dab5df2\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.699423 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ng8m\" (UniqueName: \"kubernetes.io/projected/bd173c58-6b77-43c7-b3de-6b358dab5df2-kube-api-access-7ng8m\") pod \"nova-cell1-novncproxy-0\" (UID: \"bd173c58-6b77-43c7-b3de-6b358dab5df2\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.719227 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.730396 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-z97q7"] Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.804190 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e0a890a-43a4-4264-8347-9c7421a368d5-config\") pod \"dnsmasq-dns-845d6d6f59-z97q7\" (UID: \"7e0a890a-43a4-4264-8347-9c7421a368d5\") " pod="openstack/dnsmasq-dns-845d6d6f59-z97q7" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.804246 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ng8m\" (UniqueName: \"kubernetes.io/projected/bd173c58-6b77-43c7-b3de-6b358dab5df2-kube-api-access-7ng8m\") pod \"nova-cell1-novncproxy-0\" (UID: \"bd173c58-6b77-43c7-b3de-6b358dab5df2\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.804337 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7e0a890a-43a4-4264-8347-9c7421a368d5-dns-swift-storage-0\") pod \"dnsmasq-dns-845d6d6f59-z97q7\" (UID: \"7e0a890a-43a4-4264-8347-9c7421a368d5\") " pod="openstack/dnsmasq-dns-845d6d6f59-z97q7" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.810355 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7e0a890a-43a4-4264-8347-9c7421a368d5-ovsdbserver-nb\") pod \"dnsmasq-dns-845d6d6f59-z97q7\" (UID: \"7e0a890a-43a4-4264-8347-9c7421a368d5\") " pod="openstack/dnsmasq-dns-845d6d6f59-z97q7" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.810495 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dz6xl\" (UniqueName: \"kubernetes.io/projected/7e0a890a-43a4-4264-8347-9c7421a368d5-kube-api-access-dz6xl\") pod \"dnsmasq-dns-845d6d6f59-z97q7\" (UID: 
\"7e0a890a-43a4-4264-8347-9c7421a368d5\") " pod="openstack/dnsmasq-dns-845d6d6f59-z97q7" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.810599 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd173c58-6b77-43c7-b3de-6b358dab5df2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"bd173c58-6b77-43c7-b3de-6b358dab5df2\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.815643 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7e0a890a-43a4-4264-8347-9c7421a368d5-ovsdbserver-sb\") pod \"dnsmasq-dns-845d6d6f59-z97q7\" (UID: \"7e0a890a-43a4-4264-8347-9c7421a368d5\") " pod="openstack/dnsmasq-dns-845d6d6f59-z97q7" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.815773 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7e0a890a-43a4-4264-8347-9c7421a368d5-dns-svc\") pod \"dnsmasq-dns-845d6d6f59-z97q7\" (UID: \"7e0a890a-43a4-4264-8347-9c7421a368d5\") " pod="openstack/dnsmasq-dns-845d6d6f59-z97q7" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.815809 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd173c58-6b77-43c7-b3de-6b358dab5df2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"bd173c58-6b77-43c7-b3de-6b358dab5df2\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.819923 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd173c58-6b77-43c7-b3de-6b358dab5df2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"bd173c58-6b77-43c7-b3de-6b358dab5df2\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.821189 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd173c58-6b77-43c7-b3de-6b358dab5df2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"bd173c58-6b77-43c7-b3de-6b358dab5df2\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.829632 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ng8m\" (UniqueName: \"kubernetes.io/projected/bd173c58-6b77-43c7-b3de-6b358dab5df2-kube-api-access-7ng8m\") pod \"nova-cell1-novncproxy-0\" (UID: \"bd173c58-6b77-43c7-b3de-6b358dab5df2\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.899442 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.917754 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7e0a890a-43a4-4264-8347-9c7421a368d5-dns-swift-storage-0\") pod \"dnsmasq-dns-845d6d6f59-z97q7\" (UID: \"7e0a890a-43a4-4264-8347-9c7421a368d5\") " pod="openstack/dnsmasq-dns-845d6d6f59-z97q7" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.917805 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7e0a890a-43a4-4264-8347-9c7421a368d5-ovsdbserver-nb\") pod \"dnsmasq-dns-845d6d6f59-z97q7\" (UID: \"7e0a890a-43a4-4264-8347-9c7421a368d5\") " pod="openstack/dnsmasq-dns-845d6d6f59-z97q7" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.917829 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dz6xl\" (UniqueName: \"kubernetes.io/projected/7e0a890a-43a4-4264-8347-9c7421a368d5-kube-api-access-dz6xl\") pod \"dnsmasq-dns-845d6d6f59-z97q7\" (UID: \"7e0a890a-43a4-4264-8347-9c7421a368d5\") " pod="openstack/dnsmasq-dns-845d6d6f59-z97q7" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.917898 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7e0a890a-43a4-4264-8347-9c7421a368d5-ovsdbserver-sb\") pod \"dnsmasq-dns-845d6d6f59-z97q7\" (UID: \"7e0a890a-43a4-4264-8347-9c7421a368d5\") " pod="openstack/dnsmasq-dns-845d6d6f59-z97q7" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.917967 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7e0a890a-43a4-4264-8347-9c7421a368d5-dns-svc\") pod \"dnsmasq-dns-845d6d6f59-z97q7\" (UID: \"7e0a890a-43a4-4264-8347-9c7421a368d5\") " pod="openstack/dnsmasq-dns-845d6d6f59-z97q7" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.918040 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e0a890a-43a4-4264-8347-9c7421a368d5-config\") pod \"dnsmasq-dns-845d6d6f59-z97q7\" (UID: \"7e0a890a-43a4-4264-8347-9c7421a368d5\") " pod="openstack/dnsmasq-dns-845d6d6f59-z97q7" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.918749 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7e0a890a-43a4-4264-8347-9c7421a368d5-ovsdbserver-nb\") pod \"dnsmasq-dns-845d6d6f59-z97q7\" (UID: \"7e0a890a-43a4-4264-8347-9c7421a368d5\") " pod="openstack/dnsmasq-dns-845d6d6f59-z97q7" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.918891 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7e0a890a-43a4-4264-8347-9c7421a368d5-ovsdbserver-sb\") pod \"dnsmasq-dns-845d6d6f59-z97q7\" (UID: \"7e0a890a-43a4-4264-8347-9c7421a368d5\") " pod="openstack/dnsmasq-dns-845d6d6f59-z97q7" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.918894 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7e0a890a-43a4-4264-8347-9c7421a368d5-dns-swift-storage-0\") pod \"dnsmasq-dns-845d6d6f59-z97q7\" (UID: \"7e0a890a-43a4-4264-8347-9c7421a368d5\") " pod="openstack/dnsmasq-dns-845d6d6f59-z97q7" Nov 22 11:01:27 
crc kubenswrapper[4772]: I1122 11:01:27.919000 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7e0a890a-43a4-4264-8347-9c7421a368d5-dns-svc\") pod \"dnsmasq-dns-845d6d6f59-z97q7\" (UID: \"7e0a890a-43a4-4264-8347-9c7421a368d5\") " pod="openstack/dnsmasq-dns-845d6d6f59-z97q7" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.919161 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e0a890a-43a4-4264-8347-9c7421a368d5-config\") pod \"dnsmasq-dns-845d6d6f59-z97q7\" (UID: \"7e0a890a-43a4-4264-8347-9c7421a368d5\") " pod="openstack/dnsmasq-dns-845d6d6f59-z97q7" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.933883 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dz6xl\" (UniqueName: \"kubernetes.io/projected/7e0a890a-43a4-4264-8347-9c7421a368d5-kube-api-access-dz6xl\") pod \"dnsmasq-dns-845d6d6f59-z97q7\" (UID: \"7e0a890a-43a4-4264-8347-9c7421a368d5\") " pod="openstack/dnsmasq-dns-845d6d6f59-z97q7" Nov 22 11:01:27 crc kubenswrapper[4772]: I1122 11:01:27.999157 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 22 11:01:28 crc kubenswrapper[4772]: I1122 11:01:28.021519 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-z97q7" Nov 22 11:01:28 crc kubenswrapper[4772]: I1122 11:01:28.124619 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-mktsv"] Nov 22 11:01:28 crc kubenswrapper[4772]: W1122 11:01:28.151265 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc50fc65_ff5f_43d6_945b_d52d8535ccde.slice/crio-f6323674c0bc33c3f665bb6d419038e96eb64e58d1fa3bb2375973d8dd4adbe4 WatchSource:0}: Error finding container f6323674c0bc33c3f665bb6d419038e96eb64e58d1fa3bb2375973d8dd4adbe4: Status 404 returned error can't find the container with id f6323674c0bc33c3f665bb6d419038e96eb64e58d1fa3bb2375973d8dd4adbe4 Nov 22 11:01:28 crc kubenswrapper[4772]: I1122 11:01:28.232400 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-669bd"] Nov 22 11:01:28 crc kubenswrapper[4772]: I1122 11:01:28.234060 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-669bd" Nov 22 11:01:28 crc kubenswrapper[4772]: I1122 11:01:28.236916 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 22 11:01:28 crc kubenswrapper[4772]: I1122 11:01:28.237597 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Nov 22 11:01:28 crc kubenswrapper[4772]: I1122 11:01:28.247629 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-669bd"] Nov 22 11:01:28 crc kubenswrapper[4772]: W1122 11:01:28.257307 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2fe6c0ca_665a_41a1_a3ea_d47bb0d8c729.slice/crio-385ca193addceaccc344a7db82c8304bb8a406af210de1e4b137a967d048e48e WatchSource:0}: Error finding container 385ca193addceaccc344a7db82c8304bb8a406af210de1e4b137a967d048e48e: Status 404 returned error can't find the container with id 385ca193addceaccc344a7db82c8304bb8a406af210de1e4b137a967d048e48e Nov 22 11:01:28 crc kubenswrapper[4772]: I1122 11:01:28.257757 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 22 11:01:28 crc kubenswrapper[4772]: W1122 11:01:28.268483 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda35672ad_5999_4891_972a_1461f9928b94.slice/crio-668b5fdf88e8782a5bfb9ec67cc7cfaf2550de069464b67c7d970733153434c9 WatchSource:0}: Error finding container 668b5fdf88e8782a5bfb9ec67cc7cfaf2550de069464b67c7d970733153434c9: Status 404 returned error can't find the container with id 668b5fdf88e8782a5bfb9ec67cc7cfaf2550de069464b67c7d970733153434c9 Nov 22 11:01:28 crc kubenswrapper[4772]: I1122 11:01:28.269344 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-mktsv" event={"ID":"bc50fc65-ff5f-43d6-945b-d52d8535ccde","Type":"ContainerStarted","Data":"f6323674c0bc33c3f665bb6d419038e96eb64e58d1fa3bb2375973d8dd4adbe4"} Nov 22 11:01:28 crc kubenswrapper[4772]: I1122 11:01:28.282358 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 11:01:28 crc kubenswrapper[4772]: I1122 11:01:28.333936 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f52d3ae-6917-4b9a-9f1e-533a35e47aaf-config-data\") pod \"nova-cell1-conductor-db-sync-669bd\" (UID: \"3f52d3ae-6917-4b9a-9f1e-533a35e47aaf\") " pod="openstack/nova-cell1-conductor-db-sync-669bd" Nov 22 11:01:28 crc kubenswrapper[4772]: I1122 11:01:28.335403 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f52d3ae-6917-4b9a-9f1e-533a35e47aaf-scripts\") pod \"nova-cell1-conductor-db-sync-669bd\" (UID: \"3f52d3ae-6917-4b9a-9f1e-533a35e47aaf\") " pod="openstack/nova-cell1-conductor-db-sync-669bd" Nov 22 11:01:28 crc kubenswrapper[4772]: I1122 11:01:28.335467 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f52d3ae-6917-4b9a-9f1e-533a35e47aaf-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-669bd\" (UID: \"3f52d3ae-6917-4b9a-9f1e-533a35e47aaf\") " pod="openstack/nova-cell1-conductor-db-sync-669bd" Nov 22 11:01:28 crc 
kubenswrapper[4772]: I1122 11:01:28.335535 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbflh\" (UniqueName: \"kubernetes.io/projected/3f52d3ae-6917-4b9a-9f1e-533a35e47aaf-kube-api-access-rbflh\") pod \"nova-cell1-conductor-db-sync-669bd\" (UID: \"3f52d3ae-6917-4b9a-9f1e-533a35e47aaf\") " pod="openstack/nova-cell1-conductor-db-sync-669bd" Nov 22 11:01:28 crc kubenswrapper[4772]: I1122 11:01:28.406158 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 11:01:28 crc kubenswrapper[4772]: I1122 11:01:28.437369 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f52d3ae-6917-4b9a-9f1e-533a35e47aaf-config-data\") pod \"nova-cell1-conductor-db-sync-669bd\" (UID: \"3f52d3ae-6917-4b9a-9f1e-533a35e47aaf\") " pod="openstack/nova-cell1-conductor-db-sync-669bd" Nov 22 11:01:28 crc kubenswrapper[4772]: I1122 11:01:28.437454 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f52d3ae-6917-4b9a-9f1e-533a35e47aaf-scripts\") pod \"nova-cell1-conductor-db-sync-669bd\" (UID: \"3f52d3ae-6917-4b9a-9f1e-533a35e47aaf\") " pod="openstack/nova-cell1-conductor-db-sync-669bd" Nov 22 11:01:28 crc kubenswrapper[4772]: I1122 11:01:28.437502 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f52d3ae-6917-4b9a-9f1e-533a35e47aaf-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-669bd\" (UID: \"3f52d3ae-6917-4b9a-9f1e-533a35e47aaf\") " pod="openstack/nova-cell1-conductor-db-sync-669bd" Nov 22 11:01:28 crc kubenswrapper[4772]: I1122 11:01:28.437574 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbflh\" (UniqueName: \"kubernetes.io/projected/3f52d3ae-6917-4b9a-9f1e-533a35e47aaf-kube-api-access-rbflh\") pod \"nova-cell1-conductor-db-sync-669bd\" (UID: \"3f52d3ae-6917-4b9a-9f1e-533a35e47aaf\") " pod="openstack/nova-cell1-conductor-db-sync-669bd" Nov 22 11:01:28 crc kubenswrapper[4772]: I1122 11:01:28.443418 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f52d3ae-6917-4b9a-9f1e-533a35e47aaf-scripts\") pod \"nova-cell1-conductor-db-sync-669bd\" (UID: \"3f52d3ae-6917-4b9a-9f1e-533a35e47aaf\") " pod="openstack/nova-cell1-conductor-db-sync-669bd" Nov 22 11:01:28 crc kubenswrapper[4772]: I1122 11:01:28.450736 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f52d3ae-6917-4b9a-9f1e-533a35e47aaf-config-data\") pod \"nova-cell1-conductor-db-sync-669bd\" (UID: \"3f52d3ae-6917-4b9a-9f1e-533a35e47aaf\") " pod="openstack/nova-cell1-conductor-db-sync-669bd" Nov 22 11:01:28 crc kubenswrapper[4772]: I1122 11:01:28.460749 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f52d3ae-6917-4b9a-9f1e-533a35e47aaf-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-669bd\" (UID: \"3f52d3ae-6917-4b9a-9f1e-533a35e47aaf\") " pod="openstack/nova-cell1-conductor-db-sync-669bd" Nov 22 11:01:28 crc kubenswrapper[4772]: I1122 11:01:28.467894 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbflh\" (UniqueName: 
\"kubernetes.io/projected/3f52d3ae-6917-4b9a-9f1e-533a35e47aaf-kube-api-access-rbflh\") pod \"nova-cell1-conductor-db-sync-669bd\" (UID: \"3f52d3ae-6917-4b9a-9f1e-533a35e47aaf\") " pod="openstack/nova-cell1-conductor-db-sync-669bd" Nov 22 11:01:28 crc kubenswrapper[4772]: I1122 11:01:28.477965 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 22 11:01:28 crc kubenswrapper[4772]: I1122 11:01:28.618836 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-669bd" Nov 22 11:01:28 crc kubenswrapper[4772]: I1122 11:01:28.703658 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-z97q7"] Nov 22 11:01:28 crc kubenswrapper[4772]: I1122 11:01:28.846296 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 11:01:28 crc kubenswrapper[4772]: W1122 11:01:28.861492 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd173c58_6b77_43c7_b3de_6b358dab5df2.slice/crio-ef2f6c56d074b107ca2af9d5f11fec0b2cbae7aeffd4d28b499c90faa46f299b WatchSource:0}: Error finding container ef2f6c56d074b107ca2af9d5f11fec0b2cbae7aeffd4d28b499c90faa46f299b: Status 404 returned error can't find the container with id ef2f6c56d074b107ca2af9d5f11fec0b2cbae7aeffd4d28b499c90faa46f299b Nov 22 11:01:29 crc kubenswrapper[4772]: I1122 11:01:29.294848 4772 generic.go:334] "Generic (PLEG): container finished" podID="7e0a890a-43a4-4264-8347-9c7421a368d5" containerID="52511b45b7647d1f39a9a699ced02c2266949083ee910ea2fa97306ad5f5af18" exitCode=0 Nov 22 11:01:29 crc kubenswrapper[4772]: I1122 11:01:29.295095 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-z97q7" event={"ID":"7e0a890a-43a4-4264-8347-9c7421a368d5","Type":"ContainerDied","Data":"52511b45b7647d1f39a9a699ced02c2266949083ee910ea2fa97306ad5f5af18"} Nov 22 11:01:29 crc kubenswrapper[4772]: I1122 11:01:29.295901 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-z97q7" event={"ID":"7e0a890a-43a4-4264-8347-9c7421a368d5","Type":"ContainerStarted","Data":"b52f8d8106fb0a21625d7b71580afc4a90c5fafb7217256aa108ac5b3df0dcbc"} Nov 22 11:01:29 crc kubenswrapper[4772]: I1122 11:01:29.306515 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-mktsv" event={"ID":"bc50fc65-ff5f-43d6-945b-d52d8535ccde","Type":"ContainerStarted","Data":"06728879ed3c7fff6c20631b75c9e7056b952121959c9a5bff71c4d710f58f9f"} Nov 22 11:01:29 crc kubenswrapper[4772]: I1122 11:01:29.309269 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2e3a581f-ea17-4285-809c-e2a662710ed7","Type":"ContainerStarted","Data":"02857ad0b05a54e809f0a625bdc251849b4847cca3fe2641b8ec528ad94cdf3d"} Nov 22 11:01:29 crc kubenswrapper[4772]: I1122 11:01:29.311468 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"bd173c58-6b77-43c7-b3de-6b358dab5df2","Type":"ContainerStarted","Data":"ef2f6c56d074b107ca2af9d5f11fec0b2cbae7aeffd4d28b499c90faa46f299b"} Nov 22 11:01:29 crc kubenswrapper[4772]: I1122 11:01:29.319445 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"a35672ad-5999-4891-972a-1461f9928b94","Type":"ContainerStarted","Data":"668b5fdf88e8782a5bfb9ec67cc7cfaf2550de069464b67c7d970733153434c9"} Nov 22 11:01:29 crc kubenswrapper[4772]: I1122 11:01:29.323542 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2fe6c0ca-665a-41a1-a3ea-d47bb0d8c729","Type":"ContainerStarted","Data":"385ca193addceaccc344a7db82c8304bb8a406af210de1e4b137a967d048e48e"} Nov 22 11:01:29 crc kubenswrapper[4772]: I1122 11:01:29.344103 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-mktsv" podStartSLOduration=2.344085259 podStartE2EDuration="2.344085259s" podCreationTimestamp="2025-11-22 11:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 11:01:29.33566266 +0000 UTC m=+1409.575107154" watchObservedRunningTime="2025-11-22 11:01:29.344085259 +0000 UTC m=+1409.583529753" Nov 22 11:01:29 crc kubenswrapper[4772]: I1122 11:01:29.488954 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-669bd"] Nov 22 11:01:29 crc kubenswrapper[4772]: W1122 11:01:29.520723 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3f52d3ae_6917_4b9a_9f1e_533a35e47aaf.slice/crio-82e836b3a6638e27de9173243af781ec57b2cc4710a2ce17ad59b2121134529b WatchSource:0}: Error finding container 82e836b3a6638e27de9173243af781ec57b2cc4710a2ce17ad59b2121134529b: Status 404 returned error can't find the container with id 82e836b3a6638e27de9173243af781ec57b2cc4710a2ce17ad59b2121134529b Nov 22 11:01:30 crc kubenswrapper[4772]: I1122 11:01:30.340714 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-669bd" event={"ID":"3f52d3ae-6917-4b9a-9f1e-533a35e47aaf","Type":"ContainerStarted","Data":"1deddd5e209d6dd7109a121b569853e5e7e08e0d376b5e08b2b32dcb070598a2"} Nov 22 11:01:30 crc kubenswrapper[4772]: I1122 11:01:30.341246 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-669bd" event={"ID":"3f52d3ae-6917-4b9a-9f1e-533a35e47aaf","Type":"ContainerStarted","Data":"82e836b3a6638e27de9173243af781ec57b2cc4710a2ce17ad59b2121134529b"} Nov 22 11:01:30 crc kubenswrapper[4772]: I1122 11:01:30.345083 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-z97q7" event={"ID":"7e0a890a-43a4-4264-8347-9c7421a368d5","Type":"ContainerStarted","Data":"259f7deb3da83686c595241a814a4f052df1c38fd218c7700632102292c54993"} Nov 22 11:01:30 crc kubenswrapper[4772]: I1122 11:01:30.345421 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-845d6d6f59-z97q7" Nov 22 11:01:30 crc kubenswrapper[4772]: I1122 11:01:30.368880 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-669bd" podStartSLOduration=2.368863451 podStartE2EDuration="2.368863451s" podCreationTimestamp="2025-11-22 11:01:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 11:01:30.360237137 +0000 UTC m=+1410.599681631" watchObservedRunningTime="2025-11-22 11:01:30.368863451 +0000 UTC m=+1410.608307945" Nov 22 11:01:30 crc kubenswrapper[4772]: I1122 11:01:30.387190 4772 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/dnsmasq-dns-845d6d6f59-z97q7" podStartSLOduration=3.387172606 podStartE2EDuration="3.387172606s" podCreationTimestamp="2025-11-22 11:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 11:01:30.386802387 +0000 UTC m=+1410.626246881" watchObservedRunningTime="2025-11-22 11:01:30.387172606 +0000 UTC m=+1410.626617100" Nov 22 11:01:30 crc kubenswrapper[4772]: I1122 11:01:30.995131 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 11:01:31 crc kubenswrapper[4772]: I1122 11:01:31.004067 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 11:01:31 crc kubenswrapper[4772]: I1122 11:01:31.532812 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 11:01:31 crc kubenswrapper[4772]: I1122 11:01:31.533305 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 11:01:31 crc kubenswrapper[4772]: I1122 11:01:31.533356 4772 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" Nov 22 11:01:31 crc kubenswrapper[4772]: I1122 11:01:31.534166 4772 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"95c963c954cabecad461116172cc9ea88ff81fed386024819164050fa8d713ce"} pod="openshift-machine-config-operator/machine-config-daemon-wwshd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 11:01:31 crc kubenswrapper[4772]: I1122 11:01:31.534223 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" containerID="cri-o://95c963c954cabecad461116172cc9ea88ff81fed386024819164050fa8d713ce" gracePeriod=600 Nov 22 11:01:32 crc kubenswrapper[4772]: I1122 11:01:32.367031 4772 generic.go:334] "Generic (PLEG): container finished" podID="2386c238-461f-4956-940f-ac3c26eb052e" containerID="95c963c954cabecad461116172cc9ea88ff81fed386024819164050fa8d713ce" exitCode=0 Nov 22 11:01:32 crc kubenswrapper[4772]: I1122 11:01:32.367084 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" event={"ID":"2386c238-461f-4956-940f-ac3c26eb052e","Type":"ContainerDied","Data":"95c963c954cabecad461116172cc9ea88ff81fed386024819164050fa8d713ce"} Nov 22 11:01:32 crc kubenswrapper[4772]: I1122 11:01:32.367119 4772 scope.go:117] "RemoveContainer" containerID="a69ae46ae795f0c272467d20a88d4d3efbdd2e5ec86370c20bc8c57f8ee1677e" Nov 22 11:01:33 crc kubenswrapper[4772]: I1122 11:01:33.876969 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 22 11:01:33 crc kubenswrapper[4772]: I1122 11:01:33.877795 4772 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="f92e46ad-ac39-4ede-9cd6-6c2fcdf71efd" containerName="kube-state-metrics" containerID="cri-o://dabf2af73310ade5ca6d91856e667382ab1c35581ac6b0df456c91a5469c6edd" gracePeriod=30 Nov 22 11:01:34 crc kubenswrapper[4772]: I1122 11:01:34.425662 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" event={"ID":"2386c238-461f-4956-940f-ac3c26eb052e","Type":"ContainerStarted","Data":"3976a411c521a8e3e125420aaff2223fb8a2c4167e8b50aefbc7378d6da33709"} Nov 22 11:01:34 crc kubenswrapper[4772]: I1122 11:01:34.438283 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2e3a581f-ea17-4285-809c-e2a662710ed7","Type":"ContainerStarted","Data":"623342afc166e1dec739098b6e356909b986be86898509dba8e02538790da9f9"} Nov 22 11:01:34 crc kubenswrapper[4772]: I1122 11:01:34.438332 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2e3a581f-ea17-4285-809c-e2a662710ed7","Type":"ContainerStarted","Data":"7ea89aa5bd7af8922700d4c912080a19e9ca6c45f2f0e644575dd4395bce3c18"} Nov 22 11:01:34 crc kubenswrapper[4772]: I1122 11:01:34.438470 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2e3a581f-ea17-4285-809c-e2a662710ed7" containerName="nova-metadata-log" containerID="cri-o://7ea89aa5bd7af8922700d4c912080a19e9ca6c45f2f0e644575dd4395bce3c18" gracePeriod=30 Nov 22 11:01:34 crc kubenswrapper[4772]: I1122 11:01:34.438749 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2e3a581f-ea17-4285-809c-e2a662710ed7" containerName="nova-metadata-metadata" containerID="cri-o://623342afc166e1dec739098b6e356909b986be86898509dba8e02538790da9f9" gracePeriod=30 Nov 22 11:01:34 crc kubenswrapper[4772]: I1122 11:01:34.447200 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"bd173c58-6b77-43c7-b3de-6b358dab5df2","Type":"ContainerStarted","Data":"24ca65f5e6edfcc4edd9eea79766d66d6b57fa5f7c391ba129feeee0f2b360a4"} Nov 22 11:01:34 crc kubenswrapper[4772]: I1122 11:01:34.447352 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="bd173c58-6b77-43c7-b3de-6b358dab5df2" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://24ca65f5e6edfcc4edd9eea79766d66d6b57fa5f7c391ba129feeee0f2b360a4" gracePeriod=30 Nov 22 11:01:34 crc kubenswrapper[4772]: I1122 11:01:34.495757 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a35672ad-5999-4891-972a-1461f9928b94","Type":"ContainerStarted","Data":"cbe62db989db8186a1e649c0bf95120e0bcff5070bb7d166e6acee3911dfa12b"} Nov 22 11:01:34 crc kubenswrapper[4772]: I1122 11:01:34.495802 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a35672ad-5999-4891-972a-1461f9928b94","Type":"ContainerStarted","Data":"e2673e0bf6e91a0c663a0dbef2d88ab87004dd8445653d8acfc47393cb3d0f5f"} Nov 22 11:01:34 crc kubenswrapper[4772]: I1122 11:01:34.497756 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.626565314 podStartE2EDuration="7.497745669s" podCreationTimestamp="2025-11-22 11:01:27 +0000 UTC" firstStartedPulling="2025-11-22 
11:01:28.417470324 +0000 UTC m=+1408.656914818" lastFinishedPulling="2025-11-22 11:01:33.288650679 +0000 UTC m=+1413.528095173" observedRunningTime="2025-11-22 11:01:34.479395273 +0000 UTC m=+1414.718839777" watchObservedRunningTime="2025-11-22 11:01:34.497745669 +0000 UTC m=+1414.737190163" Nov 22 11:01:34 crc kubenswrapper[4772]: I1122 11:01:34.502264 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2fe6c0ca-665a-41a1-a3ea-d47bb0d8c729","Type":"ContainerStarted","Data":"daf101e9baca55c1359edac43394a17dbef438e47cb34e279d70d09bcfcd7020"} Nov 22 11:01:34 crc kubenswrapper[4772]: I1122 11:01:34.507618 4772 generic.go:334] "Generic (PLEG): container finished" podID="f92e46ad-ac39-4ede-9cd6-6c2fcdf71efd" containerID="dabf2af73310ade5ca6d91856e667382ab1c35581ac6b0df456c91a5469c6edd" exitCode=2 Nov 22 11:01:34 crc kubenswrapper[4772]: I1122 11:01:34.507698 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f92e46ad-ac39-4ede-9cd6-6c2fcdf71efd","Type":"ContainerDied","Data":"dabf2af73310ade5ca6d91856e667382ab1c35581ac6b0df456c91a5469c6edd"} Nov 22 11:01:34 crc kubenswrapper[4772]: I1122 11:01:34.512821 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.096846725 podStartE2EDuration="7.512809493s" podCreationTimestamp="2025-11-22 11:01:27 +0000 UTC" firstStartedPulling="2025-11-22 11:01:28.883204002 +0000 UTC m=+1409.122648496" lastFinishedPulling="2025-11-22 11:01:33.29916677 +0000 UTC m=+1413.538611264" observedRunningTime="2025-11-22 11:01:34.494919269 +0000 UTC m=+1414.734363753" watchObservedRunningTime="2025-11-22 11:01:34.512809493 +0000 UTC m=+1414.752253987" Nov 22 11:01:34 crc kubenswrapper[4772]: I1122 11:01:34.533489 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.517988548 podStartE2EDuration="7.533473066s" podCreationTimestamp="2025-11-22 11:01:27 +0000 UTC" firstStartedPulling="2025-11-22 11:01:28.273905869 +0000 UTC m=+1408.513350373" lastFinishedPulling="2025-11-22 11:01:33.289390397 +0000 UTC m=+1413.528834891" observedRunningTime="2025-11-22 11:01:34.519766666 +0000 UTC m=+1414.759211160" watchObservedRunningTime="2025-11-22 11:01:34.533473066 +0000 UTC m=+1414.772917560" Nov 22 11:01:34 crc kubenswrapper[4772]: I1122 11:01:34.541590 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 22 11:01:34 crc kubenswrapper[4772]: I1122 11:01:34.546499 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.539301278 podStartE2EDuration="7.54648113s" podCreationTimestamp="2025-11-22 11:01:27 +0000 UTC" firstStartedPulling="2025-11-22 11:01:28.268281919 +0000 UTC m=+1408.507726413" lastFinishedPulling="2025-11-22 11:01:33.275461781 +0000 UTC m=+1413.514906265" observedRunningTime="2025-11-22 11:01:34.537417334 +0000 UTC m=+1414.776861828" watchObservedRunningTime="2025-11-22 11:01:34.54648113 +0000 UTC m=+1414.785925624" Nov 22 11:01:34 crc kubenswrapper[4772]: I1122 11:01:34.581649 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnrjf\" (UniqueName: \"kubernetes.io/projected/f92e46ad-ac39-4ede-9cd6-6c2fcdf71efd-kube-api-access-pnrjf\") pod \"f92e46ad-ac39-4ede-9cd6-6c2fcdf71efd\" (UID: \"f92e46ad-ac39-4ede-9cd6-6c2fcdf71efd\") " Nov 22 11:01:34 crc kubenswrapper[4772]: I1122 11:01:34.613218 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f92e46ad-ac39-4ede-9cd6-6c2fcdf71efd-kube-api-access-pnrjf" (OuterVolumeSpecName: "kube-api-access-pnrjf") pod "f92e46ad-ac39-4ede-9cd6-6c2fcdf71efd" (UID: "f92e46ad-ac39-4ede-9cd6-6c2fcdf71efd"). InnerVolumeSpecName "kube-api-access-pnrjf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:01:34 crc kubenswrapper[4772]: I1122 11:01:34.683711 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnrjf\" (UniqueName: \"kubernetes.io/projected/f92e46ad-ac39-4ede-9cd6-6c2fcdf71efd-kube-api-access-pnrjf\") on node \"crc\" DevicePath \"\"" Nov 22 11:01:35 crc kubenswrapper[4772]: I1122 11:01:35.529266 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f92e46ad-ac39-4ede-9cd6-6c2fcdf71efd","Type":"ContainerDied","Data":"e7c7365e9d44ae7aa920b9cd504a77ed9634760a8b4498e92a5b939ff58df2c5"} Nov 22 11:01:35 crc kubenswrapper[4772]: I1122 11:01:35.529603 4772 scope.go:117] "RemoveContainer" containerID="dabf2af73310ade5ca6d91856e667382ab1c35581ac6b0df456c91a5469c6edd" Nov 22 11:01:35 crc kubenswrapper[4772]: I1122 11:01:35.529748 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 22 11:01:35 crc kubenswrapper[4772]: I1122 11:01:35.537464 4772 generic.go:334] "Generic (PLEG): container finished" podID="2e3a581f-ea17-4285-809c-e2a662710ed7" containerID="623342afc166e1dec739098b6e356909b986be86898509dba8e02538790da9f9" exitCode=0 Nov 22 11:01:35 crc kubenswrapper[4772]: I1122 11:01:35.537494 4772 generic.go:334] "Generic (PLEG): container finished" podID="2e3a581f-ea17-4285-809c-e2a662710ed7" containerID="7ea89aa5bd7af8922700d4c912080a19e9ca6c45f2f0e644575dd4395bce3c18" exitCode=143 Nov 22 11:01:35 crc kubenswrapper[4772]: I1122 11:01:35.538129 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2e3a581f-ea17-4285-809c-e2a662710ed7","Type":"ContainerDied","Data":"623342afc166e1dec739098b6e356909b986be86898509dba8e02538790da9f9"} Nov 22 11:01:35 crc kubenswrapper[4772]: I1122 11:01:35.538196 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2e3a581f-ea17-4285-809c-e2a662710ed7","Type":"ContainerDied","Data":"7ea89aa5bd7af8922700d4c912080a19e9ca6c45f2f0e644575dd4395bce3c18"} Nov 22 11:01:35 crc kubenswrapper[4772]: I1122 11:01:35.592283 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 22 11:01:35 crc kubenswrapper[4772]: I1122 11:01:35.640181 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 22 11:01:35 crc kubenswrapper[4772]: I1122 11:01:35.658249 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 22 11:01:35 crc kubenswrapper[4772]: E1122 11:01:35.660349 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f92e46ad-ac39-4ede-9cd6-6c2fcdf71efd" containerName="kube-state-metrics" Nov 22 11:01:35 crc kubenswrapper[4772]: I1122 11:01:35.660590 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="f92e46ad-ac39-4ede-9cd6-6c2fcdf71efd" containerName="kube-state-metrics" Nov 22 11:01:35 crc kubenswrapper[4772]: I1122 11:01:35.661683 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="f92e46ad-ac39-4ede-9cd6-6c2fcdf71efd" containerName="kube-state-metrics" Nov 22 11:01:35 crc kubenswrapper[4772]: I1122 11:01:35.664992 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 22 11:01:35 crc kubenswrapper[4772]: I1122 11:01:35.669481 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Nov 22 11:01:35 crc kubenswrapper[4772]: I1122 11:01:35.669825 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Nov 22 11:01:35 crc kubenswrapper[4772]: I1122 11:01:35.675443 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 22 11:01:35 crc kubenswrapper[4772]: I1122 11:01:35.709211 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddb4a824-3a8a-4287-b206-94832099e15b-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"ddb4a824-3a8a-4287-b206-94832099e15b\") " pod="openstack/kube-state-metrics-0" Nov 22 11:01:35 crc kubenswrapper[4772]: I1122 11:01:35.709267 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/ddb4a824-3a8a-4287-b206-94832099e15b-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"ddb4a824-3a8a-4287-b206-94832099e15b\") " pod="openstack/kube-state-metrics-0" Nov 22 11:01:35 crc kubenswrapper[4772]: I1122 11:01:35.709315 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87khb\" (UniqueName: \"kubernetes.io/projected/ddb4a824-3a8a-4287-b206-94832099e15b-kube-api-access-87khb\") pod \"kube-state-metrics-0\" (UID: \"ddb4a824-3a8a-4287-b206-94832099e15b\") " pod="openstack/kube-state-metrics-0" Nov 22 11:01:35 crc kubenswrapper[4772]: I1122 11:01:35.709379 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/ddb4a824-3a8a-4287-b206-94832099e15b-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"ddb4a824-3a8a-4287-b206-94832099e15b\") " pod="openstack/kube-state-metrics-0" Nov 22 11:01:35 crc kubenswrapper[4772]: I1122 11:01:35.810705 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddb4a824-3a8a-4287-b206-94832099e15b-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"ddb4a824-3a8a-4287-b206-94832099e15b\") " pod="openstack/kube-state-metrics-0" Nov 22 11:01:35 crc kubenswrapper[4772]: I1122 11:01:35.810751 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/ddb4a824-3a8a-4287-b206-94832099e15b-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"ddb4a824-3a8a-4287-b206-94832099e15b\") " pod="openstack/kube-state-metrics-0" Nov 22 11:01:35 crc kubenswrapper[4772]: I1122 11:01:35.810796 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87khb\" (UniqueName: \"kubernetes.io/projected/ddb4a824-3a8a-4287-b206-94832099e15b-kube-api-access-87khb\") pod \"kube-state-metrics-0\" (UID: \"ddb4a824-3a8a-4287-b206-94832099e15b\") " pod="openstack/kube-state-metrics-0" Nov 22 11:01:35 crc kubenswrapper[4772]: I1122 11:01:35.810837 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/ddb4a824-3a8a-4287-b206-94832099e15b-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"ddb4a824-3a8a-4287-b206-94832099e15b\") " pod="openstack/kube-state-metrics-0" Nov 22 11:01:35 crc kubenswrapper[4772]: I1122 11:01:35.817450 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddb4a824-3a8a-4287-b206-94832099e15b-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"ddb4a824-3a8a-4287-b206-94832099e15b\") " pod="openstack/kube-state-metrics-0" Nov 22 11:01:35 crc kubenswrapper[4772]: I1122 11:01:35.819620 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/ddb4a824-3a8a-4287-b206-94832099e15b-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"ddb4a824-3a8a-4287-b206-94832099e15b\") " pod="openstack/kube-state-metrics-0" Nov 22 11:01:35 crc kubenswrapper[4772]: I1122 11:01:35.820387 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/ddb4a824-3a8a-4287-b206-94832099e15b-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"ddb4a824-3a8a-4287-b206-94832099e15b\") " pod="openstack/kube-state-metrics-0" Nov 22 11:01:35 crc kubenswrapper[4772]: I1122 11:01:35.830972 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87khb\" (UniqueName: \"kubernetes.io/projected/ddb4a824-3a8a-4287-b206-94832099e15b-kube-api-access-87khb\") pod \"kube-state-metrics-0\" (UID: \"ddb4a824-3a8a-4287-b206-94832099e15b\") " pod="openstack/kube-state-metrics-0" Nov 22 11:01:35 crc kubenswrapper[4772]: I1122 11:01:35.925094 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 11:01:35 crc kubenswrapper[4772]: I1122 11:01:35.999550 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 22 11:01:36 crc kubenswrapper[4772]: I1122 11:01:36.014068 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e3a581f-ea17-4285-809c-e2a662710ed7-config-data\") pod \"2e3a581f-ea17-4285-809c-e2a662710ed7\" (UID: \"2e3a581f-ea17-4285-809c-e2a662710ed7\") " Nov 22 11:01:36 crc kubenswrapper[4772]: I1122 11:01:36.014213 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzhdm\" (UniqueName: \"kubernetes.io/projected/2e3a581f-ea17-4285-809c-e2a662710ed7-kube-api-access-xzhdm\") pod \"2e3a581f-ea17-4285-809c-e2a662710ed7\" (UID: \"2e3a581f-ea17-4285-809c-e2a662710ed7\") " Nov 22 11:01:36 crc kubenswrapper[4772]: I1122 11:01:36.014236 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e3a581f-ea17-4285-809c-e2a662710ed7-combined-ca-bundle\") pod \"2e3a581f-ea17-4285-809c-e2a662710ed7\" (UID: \"2e3a581f-ea17-4285-809c-e2a662710ed7\") " Nov 22 11:01:36 crc kubenswrapper[4772]: I1122 11:01:36.014296 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e3a581f-ea17-4285-809c-e2a662710ed7-logs\") pod \"2e3a581f-ea17-4285-809c-e2a662710ed7\" (UID: \"2e3a581f-ea17-4285-809c-e2a662710ed7\") " Nov 22 11:01:36 crc kubenswrapper[4772]: I1122 11:01:36.014646 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e3a581f-ea17-4285-809c-e2a662710ed7-logs" (OuterVolumeSpecName: "logs") pod "2e3a581f-ea17-4285-809c-e2a662710ed7" (UID: "2e3a581f-ea17-4285-809c-e2a662710ed7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:01:36 crc kubenswrapper[4772]: I1122 11:01:36.015055 4772 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e3a581f-ea17-4285-809c-e2a662710ed7-logs\") on node \"crc\" DevicePath \"\"" Nov 22 11:01:36 crc kubenswrapper[4772]: I1122 11:01:36.019619 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e3a581f-ea17-4285-809c-e2a662710ed7-kube-api-access-xzhdm" (OuterVolumeSpecName: "kube-api-access-xzhdm") pod "2e3a581f-ea17-4285-809c-e2a662710ed7" (UID: "2e3a581f-ea17-4285-809c-e2a662710ed7"). InnerVolumeSpecName "kube-api-access-xzhdm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:01:36 crc kubenswrapper[4772]: I1122 11:01:36.048156 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e3a581f-ea17-4285-809c-e2a662710ed7-config-data" (OuterVolumeSpecName: "config-data") pod "2e3a581f-ea17-4285-809c-e2a662710ed7" (UID: "2e3a581f-ea17-4285-809c-e2a662710ed7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:01:36 crc kubenswrapper[4772]: I1122 11:01:36.063034 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e3a581f-ea17-4285-809c-e2a662710ed7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2e3a581f-ea17-4285-809c-e2a662710ed7" (UID: "2e3a581f-ea17-4285-809c-e2a662710ed7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:01:36 crc kubenswrapper[4772]: I1122 11:01:36.116856 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e3a581f-ea17-4285-809c-e2a662710ed7-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 11:01:36 crc kubenswrapper[4772]: I1122 11:01:36.116903 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzhdm\" (UniqueName: \"kubernetes.io/projected/2e3a581f-ea17-4285-809c-e2a662710ed7-kube-api-access-xzhdm\") on node \"crc\" DevicePath \"\"" Nov 22 11:01:36 crc kubenswrapper[4772]: I1122 11:01:36.116917 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e3a581f-ea17-4285-809c-e2a662710ed7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 11:01:36 crc kubenswrapper[4772]: I1122 11:01:36.431404 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 11:01:36 crc kubenswrapper[4772]: I1122 11:01:36.431970 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="492839a5-207f-4770-9335-1117c1c33fe7" containerName="ceilometer-central-agent" containerID="cri-o://46fe9527cb5db65e1060d147cf125fce69eaf4aa4e546d3e5b290376b707615f" gracePeriod=30 Nov 22 11:01:36 crc kubenswrapper[4772]: I1122 11:01:36.432132 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="492839a5-207f-4770-9335-1117c1c33fe7" containerName="proxy-httpd" containerID="cri-o://766f6ebeab638b9c3a70b74ad8f6b9b78186ca6e10add33ac371e60ae666632a" gracePeriod=30 Nov 22 11:01:36 crc kubenswrapper[4772]: I1122 11:01:36.432176 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="492839a5-207f-4770-9335-1117c1c33fe7" containerName="sg-core" containerID="cri-o://988a07756a369a57d58f99da792b3741eaaa1df45c237b422e107b7ef7dc2b41" gracePeriod=30 Nov 22 11:01:36 crc kubenswrapper[4772]: I1122 11:01:36.432208 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="492839a5-207f-4770-9335-1117c1c33fe7" containerName="ceilometer-notification-agent" containerID="cri-o://883ea789b9af8213e4ecd7cd6aeeefb1bfa50fbe0fa63ec35b57a545c6f246a4" gracePeriod=30 Nov 22 11:01:36 crc kubenswrapper[4772]: I1122 11:01:36.552252 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2e3a581f-ea17-4285-809c-e2a662710ed7","Type":"ContainerDied","Data":"02857ad0b05a54e809f0a625bdc251849b4847cca3fe2641b8ec528ad94cdf3d"} Nov 22 11:01:36 crc kubenswrapper[4772]: I1122 11:01:36.553116 4772 scope.go:117] "RemoveContainer" containerID="623342afc166e1dec739098b6e356909b986be86898509dba8e02538790da9f9" Nov 22 11:01:36 crc kubenswrapper[4772]: I1122 11:01:36.552645 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 11:01:36 crc kubenswrapper[4772]: I1122 11:01:36.608359 4772 scope.go:117] "RemoveContainer" containerID="7ea89aa5bd7af8922700d4c912080a19e9ca6c45f2f0e644575dd4395bce3c18" Nov 22 11:01:36 crc kubenswrapper[4772]: I1122 11:01:36.619474 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 22 11:01:36 crc kubenswrapper[4772]: I1122 11:01:36.638334 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 11:01:36 crc kubenswrapper[4772]: I1122 11:01:36.649801 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 11:01:36 crc kubenswrapper[4772]: I1122 11:01:36.662135 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 22 11:01:36 crc kubenswrapper[4772]: E1122 11:01:36.666039 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e3a581f-ea17-4285-809c-e2a662710ed7" containerName="nova-metadata-metadata" Nov 22 11:01:36 crc kubenswrapper[4772]: I1122 11:01:36.666077 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e3a581f-ea17-4285-809c-e2a662710ed7" containerName="nova-metadata-metadata" Nov 22 11:01:36 crc kubenswrapper[4772]: E1122 11:01:36.666114 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e3a581f-ea17-4285-809c-e2a662710ed7" containerName="nova-metadata-log" Nov 22 11:01:36 crc kubenswrapper[4772]: I1122 11:01:36.666123 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e3a581f-ea17-4285-809c-e2a662710ed7" containerName="nova-metadata-log" Nov 22 11:01:36 crc kubenswrapper[4772]: I1122 11:01:36.670720 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e3a581f-ea17-4285-809c-e2a662710ed7" containerName="nova-metadata-metadata" Nov 22 11:01:36 crc kubenswrapper[4772]: I1122 11:01:36.670764 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e3a581f-ea17-4285-809c-e2a662710ed7" containerName="nova-metadata-log" Nov 22 11:01:36 crc kubenswrapper[4772]: I1122 11:01:36.675312 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 11:01:36 crc kubenswrapper[4772]: I1122 11:01:36.675426 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 11:01:36 crc kubenswrapper[4772]: I1122 11:01:36.679521 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 22 11:01:36 crc kubenswrapper[4772]: I1122 11:01:36.679709 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 22 11:01:36 crc kubenswrapper[4772]: I1122 11:01:36.729423 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a56e4f18-2d41-4fb4-88a3-8889d928298d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a56e4f18-2d41-4fb4-88a3-8889d928298d\") " pod="openstack/nova-metadata-0" Nov 22 11:01:36 crc kubenswrapper[4772]: I1122 11:01:36.729538 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxqkm\" (UniqueName: \"kubernetes.io/projected/a56e4f18-2d41-4fb4-88a3-8889d928298d-kube-api-access-zxqkm\") pod \"nova-metadata-0\" (UID: \"a56e4f18-2d41-4fb4-88a3-8889d928298d\") " pod="openstack/nova-metadata-0" Nov 22 11:01:36 crc kubenswrapper[4772]: I1122 11:01:36.729578 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a56e4f18-2d41-4fb4-88a3-8889d928298d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a56e4f18-2d41-4fb4-88a3-8889d928298d\") " pod="openstack/nova-metadata-0" Nov 22 11:01:36 crc kubenswrapper[4772]: I1122 11:01:36.729909 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a56e4f18-2d41-4fb4-88a3-8889d928298d-logs\") pod \"nova-metadata-0\" (UID: \"a56e4f18-2d41-4fb4-88a3-8889d928298d\") " pod="openstack/nova-metadata-0" Nov 22 11:01:36 crc kubenswrapper[4772]: I1122 11:01:36.730026 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a56e4f18-2d41-4fb4-88a3-8889d928298d-config-data\") pod \"nova-metadata-0\" (UID: \"a56e4f18-2d41-4fb4-88a3-8889d928298d\") " pod="openstack/nova-metadata-0" Nov 22 11:01:36 crc kubenswrapper[4772]: I1122 11:01:36.831226 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxqkm\" (UniqueName: \"kubernetes.io/projected/a56e4f18-2d41-4fb4-88a3-8889d928298d-kube-api-access-zxqkm\") pod \"nova-metadata-0\" (UID: \"a56e4f18-2d41-4fb4-88a3-8889d928298d\") " pod="openstack/nova-metadata-0" Nov 22 11:01:36 crc kubenswrapper[4772]: I1122 11:01:36.831282 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a56e4f18-2d41-4fb4-88a3-8889d928298d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a56e4f18-2d41-4fb4-88a3-8889d928298d\") " pod="openstack/nova-metadata-0" Nov 22 11:01:36 crc kubenswrapper[4772]: I1122 11:01:36.831357 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a56e4f18-2d41-4fb4-88a3-8889d928298d-logs\") pod \"nova-metadata-0\" (UID: \"a56e4f18-2d41-4fb4-88a3-8889d928298d\") " pod="openstack/nova-metadata-0" Nov 22 11:01:36 crc kubenswrapper[4772]: I1122 11:01:36.831393 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/a56e4f18-2d41-4fb4-88a3-8889d928298d-config-data\") pod \"nova-metadata-0\" (UID: \"a56e4f18-2d41-4fb4-88a3-8889d928298d\") " pod="openstack/nova-metadata-0" Nov 22 11:01:36 crc kubenswrapper[4772]: I1122 11:01:36.831435 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a56e4f18-2d41-4fb4-88a3-8889d928298d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a56e4f18-2d41-4fb4-88a3-8889d928298d\") " pod="openstack/nova-metadata-0" Nov 22 11:01:36 crc kubenswrapper[4772]: I1122 11:01:36.833019 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a56e4f18-2d41-4fb4-88a3-8889d928298d-logs\") pod \"nova-metadata-0\" (UID: \"a56e4f18-2d41-4fb4-88a3-8889d928298d\") " pod="openstack/nova-metadata-0" Nov 22 11:01:36 crc kubenswrapper[4772]: I1122 11:01:36.837615 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a56e4f18-2d41-4fb4-88a3-8889d928298d-config-data\") pod \"nova-metadata-0\" (UID: \"a56e4f18-2d41-4fb4-88a3-8889d928298d\") " pod="openstack/nova-metadata-0" Nov 22 11:01:36 crc kubenswrapper[4772]: I1122 11:01:36.839362 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a56e4f18-2d41-4fb4-88a3-8889d928298d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a56e4f18-2d41-4fb4-88a3-8889d928298d\") " pod="openstack/nova-metadata-0" Nov 22 11:01:36 crc kubenswrapper[4772]: I1122 11:01:36.840712 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a56e4f18-2d41-4fb4-88a3-8889d928298d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a56e4f18-2d41-4fb4-88a3-8889d928298d\") " pod="openstack/nova-metadata-0" Nov 22 11:01:36 crc kubenswrapper[4772]: I1122 11:01:36.849853 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxqkm\" (UniqueName: \"kubernetes.io/projected/a56e4f18-2d41-4fb4-88a3-8889d928298d-kube-api-access-zxqkm\") pod \"nova-metadata-0\" (UID: \"a56e4f18-2d41-4fb4-88a3-8889d928298d\") " pod="openstack/nova-metadata-0" Nov 22 11:01:37 crc kubenswrapper[4772]: I1122 11:01:37.002796 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 11:01:37 crc kubenswrapper[4772]: I1122 11:01:37.425156 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e3a581f-ea17-4285-809c-e2a662710ed7" path="/var/lib/kubelet/pods/2e3a581f-ea17-4285-809c-e2a662710ed7/volumes" Nov 22 11:01:37 crc kubenswrapper[4772]: I1122 11:01:37.426748 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f92e46ad-ac39-4ede-9cd6-6c2fcdf71efd" path="/var/lib/kubelet/pods/f92e46ad-ac39-4ede-9cd6-6c2fcdf71efd/volumes" Nov 22 11:01:37 crc kubenswrapper[4772]: I1122 11:01:37.460623 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 11:01:37 crc kubenswrapper[4772]: W1122 11:01:37.467409 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda56e4f18_2d41_4fb4_88a3_8889d928298d.slice/crio-9ec193d8957ef1dc852365dde9453dcc629da6513cbf5842c6c429e6c28e6a1b WatchSource:0}: Error finding container 9ec193d8957ef1dc852365dde9453dcc629da6513cbf5842c6c429e6c28e6a1b: Status 404 returned error can't find the container with id 9ec193d8957ef1dc852365dde9453dcc629da6513cbf5842c6c429e6c28e6a1b Nov 22 11:01:37 crc kubenswrapper[4772]: I1122 11:01:37.538957 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 22 11:01:37 crc kubenswrapper[4772]: I1122 11:01:37.539007 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 22 11:01:37 crc kubenswrapper[4772]: I1122 11:01:37.567832 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ddb4a824-3a8a-4287-b206-94832099e15b","Type":"ContainerStarted","Data":"0b9ac1c20deb88561ffa4d4aa22d3d7b705dfba4a955494934d74483e7d263d6"} Nov 22 11:01:37 crc kubenswrapper[4772]: I1122 11:01:37.567879 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ddb4a824-3a8a-4287-b206-94832099e15b","Type":"ContainerStarted","Data":"9397793039e1f460d83c1c72ed6be565eb164d6fc94e599ab426a93c8cb5de19"} Nov 22 11:01:37 crc kubenswrapper[4772]: I1122 11:01:37.568159 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 22 11:01:37 crc kubenswrapper[4772]: I1122 11:01:37.571742 4772 generic.go:334] "Generic (PLEG): container finished" podID="492839a5-207f-4770-9335-1117c1c33fe7" containerID="766f6ebeab638b9c3a70b74ad8f6b9b78186ca6e10add33ac371e60ae666632a" exitCode=0 Nov 22 11:01:37 crc kubenswrapper[4772]: I1122 11:01:37.571769 4772 generic.go:334] "Generic (PLEG): container finished" podID="492839a5-207f-4770-9335-1117c1c33fe7" containerID="988a07756a369a57d58f99da792b3741eaaa1df45c237b422e107b7ef7dc2b41" exitCode=2 Nov 22 11:01:37 crc kubenswrapper[4772]: I1122 11:01:37.571778 4772 generic.go:334] "Generic (PLEG): container finished" podID="492839a5-207f-4770-9335-1117c1c33fe7" containerID="46fe9527cb5db65e1060d147cf125fce69eaf4aa4e546d3e5b290376b707615f" exitCode=0 Nov 22 11:01:37 crc kubenswrapper[4772]: I1122 11:01:37.571825 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"492839a5-207f-4770-9335-1117c1c33fe7","Type":"ContainerDied","Data":"766f6ebeab638b9c3a70b74ad8f6b9b78186ca6e10add33ac371e60ae666632a"} Nov 22 11:01:37 crc kubenswrapper[4772]: I1122 11:01:37.571849 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"492839a5-207f-4770-9335-1117c1c33fe7","Type":"ContainerDied","Data":"988a07756a369a57d58f99da792b3741eaaa1df45c237b422e107b7ef7dc2b41"} Nov 22 11:01:37 crc kubenswrapper[4772]: I1122 11:01:37.571862 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"492839a5-207f-4770-9335-1117c1c33fe7","Type":"ContainerDied","Data":"46fe9527cb5db65e1060d147cf125fce69eaf4aa4e546d3e5b290376b707615f"} Nov 22 11:01:37 crc kubenswrapper[4772]: I1122 11:01:37.574310 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a56e4f18-2d41-4fb4-88a3-8889d928298d","Type":"ContainerStarted","Data":"9ec193d8957ef1dc852365dde9453dcc629da6513cbf5842c6c429e6c28e6a1b"} Nov 22 11:01:37 crc kubenswrapper[4772]: I1122 11:01:37.588387 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.04847264 podStartE2EDuration="2.58836503s" podCreationTimestamp="2025-11-22 11:01:35 +0000 UTC" firstStartedPulling="2025-11-22 11:01:36.629860173 +0000 UTC m=+1416.869304667" lastFinishedPulling="2025-11-22 11:01:37.169752563 +0000 UTC m=+1417.409197057" observedRunningTime="2025-11-22 11:01:37.586488933 +0000 UTC m=+1417.825933447" watchObservedRunningTime="2025-11-22 11:01:37.58836503 +0000 UTC m=+1417.827809524" Nov 22 11:01:37 crc kubenswrapper[4772]: I1122 11:01:37.595896 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 22 11:01:37 crc kubenswrapper[4772]: I1122 11:01:37.600101 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 22 11:01:37 crc kubenswrapper[4772]: I1122 11:01:37.600152 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 22 11:01:37 crc kubenswrapper[4772]: I1122 11:01:37.632410 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 22 11:01:37 crc kubenswrapper[4772]: I1122 11:01:37.999527 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 22 11:01:38 crc kubenswrapper[4772]: I1122 11:01:38.024186 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-845d6d6f59-z97q7" Nov 22 11:01:38 crc kubenswrapper[4772]: I1122 11:01:38.089291 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-klkn5"] Nov 22 11:01:38 crc kubenswrapper[4772]: I1122 11:01:38.089560 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5784cf869f-klkn5" podUID="545348ad-f752-4463-96a2-353ba4ac1b57" containerName="dnsmasq-dns" containerID="cri-o://537f96f394f4070c28465870acd25ae88cbf9da3ae1de9169af161878a5d031f" gracePeriod=10 Nov 22 11:01:38 crc kubenswrapper[4772]: I1122 11:01:38.591587 4772 generic.go:334] "Generic (PLEG): container finished" podID="545348ad-f752-4463-96a2-353ba4ac1b57" containerID="537f96f394f4070c28465870acd25ae88cbf9da3ae1de9169af161878a5d031f" exitCode=0 Nov 22 11:01:38 crc kubenswrapper[4772]: I1122 11:01:38.591663 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-klkn5" event={"ID":"545348ad-f752-4463-96a2-353ba4ac1b57","Type":"ContainerDied","Data":"537f96f394f4070c28465870acd25ae88cbf9da3ae1de9169af161878a5d031f"} Nov 22 11:01:38 crc kubenswrapper[4772]: I1122 
11:01:38.593495 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a56e4f18-2d41-4fb4-88a3-8889d928298d","Type":"ContainerStarted","Data":"0379c1d882aa3f5655f13d2212a697bd72ca7cef524f5e73292c1207997b8582"} Nov 22 11:01:38 crc kubenswrapper[4772]: I1122 11:01:38.593538 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a56e4f18-2d41-4fb4-88a3-8889d928298d","Type":"ContainerStarted","Data":"50df8de7bdfad10213c109d8a56f3941198d42e9a426dbb23a2f485bdae4862e"} Nov 22 11:01:38 crc kubenswrapper[4772]: I1122 11:01:38.595347 4772 generic.go:334] "Generic (PLEG): container finished" podID="bc50fc65-ff5f-43d6-945b-d52d8535ccde" containerID="06728879ed3c7fff6c20631b75c9e7056b952121959c9a5bff71c4d710f58f9f" exitCode=0 Nov 22 11:01:38 crc kubenswrapper[4772]: I1122 11:01:38.595430 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-mktsv" event={"ID":"bc50fc65-ff5f-43d6-945b-d52d8535ccde","Type":"ContainerDied","Data":"06728879ed3c7fff6c20631b75c9e7056b952121959c9a5bff71c4d710f58f9f"} Nov 22 11:01:38 crc kubenswrapper[4772]: I1122 11:01:38.614565 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.614544646 podStartE2EDuration="2.614544646s" podCreationTimestamp="2025-11-22 11:01:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 11:01:38.610735712 +0000 UTC m=+1418.850180216" watchObservedRunningTime="2025-11-22 11:01:38.614544646 +0000 UTC m=+1418.853989150" Nov 22 11:01:38 crc kubenswrapper[4772]: I1122 11:01:38.682398 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a35672ad-5999-4891-972a-1461f9928b94" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.185:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 11:01:38 crc kubenswrapper[4772]: I1122 11:01:38.682409 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a35672ad-5999-4891-972a-1461f9928b94" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.185:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 11:01:39 crc kubenswrapper[4772]: I1122 11:01:39.104130 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-klkn5" Nov 22 11:01:39 crc kubenswrapper[4772]: I1122 11:01:39.188411 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/545348ad-f752-4463-96a2-353ba4ac1b57-ovsdbserver-nb\") pod \"545348ad-f752-4463-96a2-353ba4ac1b57\" (UID: \"545348ad-f752-4463-96a2-353ba4ac1b57\") " Nov 22 11:01:39 crc kubenswrapper[4772]: I1122 11:01:39.189010 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/545348ad-f752-4463-96a2-353ba4ac1b57-config\") pod \"545348ad-f752-4463-96a2-353ba4ac1b57\" (UID: \"545348ad-f752-4463-96a2-353ba4ac1b57\") " Nov 22 11:01:39 crc kubenswrapper[4772]: I1122 11:01:39.189153 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/545348ad-f752-4463-96a2-353ba4ac1b57-ovsdbserver-sb\") pod \"545348ad-f752-4463-96a2-353ba4ac1b57\" (UID: \"545348ad-f752-4463-96a2-353ba4ac1b57\") " Nov 22 11:01:39 crc kubenswrapper[4772]: I1122 11:01:39.189284 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/545348ad-f752-4463-96a2-353ba4ac1b57-dns-svc\") pod \"545348ad-f752-4463-96a2-353ba4ac1b57\" (UID: \"545348ad-f752-4463-96a2-353ba4ac1b57\") " Nov 22 11:01:39 crc kubenswrapper[4772]: I1122 11:01:39.189382 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmfwb\" (UniqueName: \"kubernetes.io/projected/545348ad-f752-4463-96a2-353ba4ac1b57-kube-api-access-lmfwb\") pod \"545348ad-f752-4463-96a2-353ba4ac1b57\" (UID: \"545348ad-f752-4463-96a2-353ba4ac1b57\") " Nov 22 11:01:39 crc kubenswrapper[4772]: I1122 11:01:39.189579 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/545348ad-f752-4463-96a2-353ba4ac1b57-dns-swift-storage-0\") pod \"545348ad-f752-4463-96a2-353ba4ac1b57\" (UID: \"545348ad-f752-4463-96a2-353ba4ac1b57\") " Nov 22 11:01:39 crc kubenswrapper[4772]: I1122 11:01:39.199237 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/545348ad-f752-4463-96a2-353ba4ac1b57-kube-api-access-lmfwb" (OuterVolumeSpecName: "kube-api-access-lmfwb") pod "545348ad-f752-4463-96a2-353ba4ac1b57" (UID: "545348ad-f752-4463-96a2-353ba4ac1b57"). InnerVolumeSpecName "kube-api-access-lmfwb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:01:39 crc kubenswrapper[4772]: I1122 11:01:39.246894 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/545348ad-f752-4463-96a2-353ba4ac1b57-config" (OuterVolumeSpecName: "config") pod "545348ad-f752-4463-96a2-353ba4ac1b57" (UID: "545348ad-f752-4463-96a2-353ba4ac1b57"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 11:01:39 crc kubenswrapper[4772]: I1122 11:01:39.248446 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/545348ad-f752-4463-96a2-353ba4ac1b57-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "545348ad-f752-4463-96a2-353ba4ac1b57" (UID: "545348ad-f752-4463-96a2-353ba4ac1b57"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 11:01:39 crc kubenswrapper[4772]: I1122 11:01:39.249739 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/545348ad-f752-4463-96a2-353ba4ac1b57-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "545348ad-f752-4463-96a2-353ba4ac1b57" (UID: "545348ad-f752-4463-96a2-353ba4ac1b57"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 11:01:39 crc kubenswrapper[4772]: I1122 11:01:39.249868 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/545348ad-f752-4463-96a2-353ba4ac1b57-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "545348ad-f752-4463-96a2-353ba4ac1b57" (UID: "545348ad-f752-4463-96a2-353ba4ac1b57"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 11:01:39 crc kubenswrapper[4772]: I1122 11:01:39.258607 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/545348ad-f752-4463-96a2-353ba4ac1b57-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "545348ad-f752-4463-96a2-353ba4ac1b57" (UID: "545348ad-f752-4463-96a2-353ba4ac1b57"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 11:01:39 crc kubenswrapper[4772]: I1122 11:01:39.292951 4772 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/545348ad-f752-4463-96a2-353ba4ac1b57-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 11:01:39 crc kubenswrapper[4772]: I1122 11:01:39.293000 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/545348ad-f752-4463-96a2-353ba4ac1b57-config\") on node \"crc\" DevicePath \"\"" Nov 22 11:01:39 crc kubenswrapper[4772]: I1122 11:01:39.293011 4772 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/545348ad-f752-4463-96a2-353ba4ac1b57-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 22 11:01:39 crc kubenswrapper[4772]: I1122 11:01:39.293020 4772 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/545348ad-f752-4463-96a2-353ba4ac1b57-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 11:01:39 crc kubenswrapper[4772]: I1122 11:01:39.293107 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmfwb\" (UniqueName: \"kubernetes.io/projected/545348ad-f752-4463-96a2-353ba4ac1b57-kube-api-access-lmfwb\") on node \"crc\" DevicePath \"\"" Nov 22 11:01:39 crc kubenswrapper[4772]: I1122 11:01:39.293120 4772 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/545348ad-f752-4463-96a2-353ba4ac1b57-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 22 11:01:39 crc kubenswrapper[4772]: I1122 11:01:39.521420 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 11:01:39 crc kubenswrapper[4772]: I1122 11:01:39.599822 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/492839a5-207f-4770-9335-1117c1c33fe7-log-httpd\") pod \"492839a5-207f-4770-9335-1117c1c33fe7\" (UID: \"492839a5-207f-4770-9335-1117c1c33fe7\") " Nov 22 11:01:39 crc kubenswrapper[4772]: I1122 11:01:39.599911 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/492839a5-207f-4770-9335-1117c1c33fe7-combined-ca-bundle\") pod \"492839a5-207f-4770-9335-1117c1c33fe7\" (UID: \"492839a5-207f-4770-9335-1117c1c33fe7\") " Nov 22 11:01:39 crc kubenswrapper[4772]: I1122 11:01:39.600026 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/492839a5-207f-4770-9335-1117c1c33fe7-config-data\") pod \"492839a5-207f-4770-9335-1117c1c33fe7\" (UID: \"492839a5-207f-4770-9335-1117c1c33fe7\") " Nov 22 11:01:39 crc kubenswrapper[4772]: I1122 11:01:39.600069 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/492839a5-207f-4770-9335-1117c1c33fe7-sg-core-conf-yaml\") pod \"492839a5-207f-4770-9335-1117c1c33fe7\" (UID: \"492839a5-207f-4770-9335-1117c1c33fe7\") " Nov 22 11:01:39 crc kubenswrapper[4772]: I1122 11:01:39.600109 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/492839a5-207f-4770-9335-1117c1c33fe7-run-httpd\") pod \"492839a5-207f-4770-9335-1117c1c33fe7\" (UID: \"492839a5-207f-4770-9335-1117c1c33fe7\") " Nov 22 11:01:39 crc kubenswrapper[4772]: I1122 11:01:39.600184 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/492839a5-207f-4770-9335-1117c1c33fe7-scripts\") pod \"492839a5-207f-4770-9335-1117c1c33fe7\" (UID: \"492839a5-207f-4770-9335-1117c1c33fe7\") " Nov 22 11:01:39 crc kubenswrapper[4772]: I1122 11:01:39.600223 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qqps\" (UniqueName: \"kubernetes.io/projected/492839a5-207f-4770-9335-1117c1c33fe7-kube-api-access-8qqps\") pod \"492839a5-207f-4770-9335-1117c1c33fe7\" (UID: \"492839a5-207f-4770-9335-1117c1c33fe7\") " Nov 22 11:01:39 crc kubenswrapper[4772]: I1122 11:01:39.601210 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/492839a5-207f-4770-9335-1117c1c33fe7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "492839a5-207f-4770-9335-1117c1c33fe7" (UID: "492839a5-207f-4770-9335-1117c1c33fe7"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:01:39 crc kubenswrapper[4772]: I1122 11:01:39.601638 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/492839a5-207f-4770-9335-1117c1c33fe7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "492839a5-207f-4770-9335-1117c1c33fe7" (UID: "492839a5-207f-4770-9335-1117c1c33fe7"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:01:39 crc kubenswrapper[4772]: I1122 11:01:39.604318 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/492839a5-207f-4770-9335-1117c1c33fe7-kube-api-access-8qqps" (OuterVolumeSpecName: "kube-api-access-8qqps") pod "492839a5-207f-4770-9335-1117c1c33fe7" (UID: "492839a5-207f-4770-9335-1117c1c33fe7"). InnerVolumeSpecName "kube-api-access-8qqps". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:01:39 crc kubenswrapper[4772]: I1122 11:01:39.605292 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/492839a5-207f-4770-9335-1117c1c33fe7-scripts" (OuterVolumeSpecName: "scripts") pod "492839a5-207f-4770-9335-1117c1c33fe7" (UID: "492839a5-207f-4770-9335-1117c1c33fe7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:01:39 crc kubenswrapper[4772]: I1122 11:01:39.631875 4772 generic.go:334] "Generic (PLEG): container finished" podID="492839a5-207f-4770-9335-1117c1c33fe7" containerID="883ea789b9af8213e4ecd7cd6aeeefb1bfa50fbe0fa63ec35b57a545c6f246a4" exitCode=0 Nov 22 11:01:39 crc kubenswrapper[4772]: I1122 11:01:39.632010 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 11:01:39 crc kubenswrapper[4772]: I1122 11:01:39.632740 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"492839a5-207f-4770-9335-1117c1c33fe7","Type":"ContainerDied","Data":"883ea789b9af8213e4ecd7cd6aeeefb1bfa50fbe0fa63ec35b57a545c6f246a4"} Nov 22 11:01:39 crc kubenswrapper[4772]: I1122 11:01:39.632772 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"492839a5-207f-4770-9335-1117c1c33fe7","Type":"ContainerDied","Data":"1d4a98ffaa9b021106018898120ea752451677dcb45f0c6f8e2be16d8f1b2b6f"} Nov 22 11:01:39 crc kubenswrapper[4772]: I1122 11:01:39.632791 4772 scope.go:117] "RemoveContainer" containerID="766f6ebeab638b9c3a70b74ad8f6b9b78186ca6e10add33ac371e60ae666632a" Nov 22 11:01:39 crc kubenswrapper[4772]: I1122 11:01:39.636888 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-klkn5" event={"ID":"545348ad-f752-4463-96a2-353ba4ac1b57","Type":"ContainerDied","Data":"0781cee4db5da21096ab2056d1f300ac0635d04141648c3059e28efc3d82f3cf"} Nov 22 11:01:39 crc kubenswrapper[4772]: I1122 11:01:39.637097 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-klkn5" Nov 22 11:01:39 crc kubenswrapper[4772]: I1122 11:01:39.657729 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/492839a5-207f-4770-9335-1117c1c33fe7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "492839a5-207f-4770-9335-1117c1c33fe7" (UID: "492839a5-207f-4770-9335-1117c1c33fe7"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:01:39 crc kubenswrapper[4772]: I1122 11:01:39.678860 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-klkn5"] Nov 22 11:01:39 crc kubenswrapper[4772]: I1122 11:01:39.685540 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-klkn5"] Nov 22 11:01:39 crc kubenswrapper[4772]: I1122 11:01:39.697538 4772 scope.go:117] "RemoveContainer" containerID="988a07756a369a57d58f99da792b3741eaaa1df45c237b422e107b7ef7dc2b41" Nov 22 11:01:39 crc kubenswrapper[4772]: I1122 11:01:39.702511 4772 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/492839a5-207f-4770-9335-1117c1c33fe7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 22 11:01:39 crc kubenswrapper[4772]: I1122 11:01:39.702532 4772 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/492839a5-207f-4770-9335-1117c1c33fe7-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 11:01:39 crc kubenswrapper[4772]: I1122 11:01:39.702542 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/492839a5-207f-4770-9335-1117c1c33fe7-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 11:01:39 crc kubenswrapper[4772]: I1122 11:01:39.702551 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qqps\" (UniqueName: \"kubernetes.io/projected/492839a5-207f-4770-9335-1117c1c33fe7-kube-api-access-8qqps\") on node \"crc\" DevicePath \"\"" Nov 22 11:01:39 crc kubenswrapper[4772]: I1122 11:01:39.702562 4772 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/492839a5-207f-4770-9335-1117c1c33fe7-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 11:01:39 crc kubenswrapper[4772]: I1122 11:01:39.724799 4772 scope.go:117] "RemoveContainer" containerID="883ea789b9af8213e4ecd7cd6aeeefb1bfa50fbe0fa63ec35b57a545c6f246a4" Nov 22 11:01:39 crc kubenswrapper[4772]: I1122 11:01:39.728745 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/492839a5-207f-4770-9335-1117c1c33fe7-config-data" (OuterVolumeSpecName: "config-data") pod "492839a5-207f-4770-9335-1117c1c33fe7" (UID: "492839a5-207f-4770-9335-1117c1c33fe7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:01:39 crc kubenswrapper[4772]: I1122 11:01:39.743137 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/492839a5-207f-4770-9335-1117c1c33fe7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "492839a5-207f-4770-9335-1117c1c33fe7" (UID: "492839a5-207f-4770-9335-1117c1c33fe7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:01:39 crc kubenswrapper[4772]: I1122 11:01:39.752496 4772 scope.go:117] "RemoveContainer" containerID="46fe9527cb5db65e1060d147cf125fce69eaf4aa4e546d3e5b290376b707615f" Nov 22 11:01:39 crc kubenswrapper[4772]: I1122 11:01:39.788654 4772 scope.go:117] "RemoveContainer" containerID="766f6ebeab638b9c3a70b74ad8f6b9b78186ca6e10add33ac371e60ae666632a" Nov 22 11:01:39 crc kubenswrapper[4772]: E1122 11:01:39.789922 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"766f6ebeab638b9c3a70b74ad8f6b9b78186ca6e10add33ac371e60ae666632a\": container with ID starting with 766f6ebeab638b9c3a70b74ad8f6b9b78186ca6e10add33ac371e60ae666632a not found: ID does not exist" containerID="766f6ebeab638b9c3a70b74ad8f6b9b78186ca6e10add33ac371e60ae666632a" Nov 22 11:01:39 crc kubenswrapper[4772]: I1122 11:01:39.789967 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"766f6ebeab638b9c3a70b74ad8f6b9b78186ca6e10add33ac371e60ae666632a"} err="failed to get container status \"766f6ebeab638b9c3a70b74ad8f6b9b78186ca6e10add33ac371e60ae666632a\": rpc error: code = NotFound desc = could not find container \"766f6ebeab638b9c3a70b74ad8f6b9b78186ca6e10add33ac371e60ae666632a\": container with ID starting with 766f6ebeab638b9c3a70b74ad8f6b9b78186ca6e10add33ac371e60ae666632a not found: ID does not exist" Nov 22 11:01:39 crc kubenswrapper[4772]: I1122 11:01:39.789994 4772 scope.go:117] "RemoveContainer" containerID="988a07756a369a57d58f99da792b3741eaaa1df45c237b422e107b7ef7dc2b41" Nov 22 11:01:39 crc kubenswrapper[4772]: E1122 11:01:39.791280 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"988a07756a369a57d58f99da792b3741eaaa1df45c237b422e107b7ef7dc2b41\": container with ID starting with 988a07756a369a57d58f99da792b3741eaaa1df45c237b422e107b7ef7dc2b41 not found: ID does not exist" containerID="988a07756a369a57d58f99da792b3741eaaa1df45c237b422e107b7ef7dc2b41" Nov 22 11:01:39 crc kubenswrapper[4772]: I1122 11:01:39.791305 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"988a07756a369a57d58f99da792b3741eaaa1df45c237b422e107b7ef7dc2b41"} err="failed to get container status \"988a07756a369a57d58f99da792b3741eaaa1df45c237b422e107b7ef7dc2b41\": rpc error: code = NotFound desc = could not find container \"988a07756a369a57d58f99da792b3741eaaa1df45c237b422e107b7ef7dc2b41\": container with ID starting with 988a07756a369a57d58f99da792b3741eaaa1df45c237b422e107b7ef7dc2b41 not found: ID does not exist" Nov 22 11:01:39 crc kubenswrapper[4772]: I1122 11:01:39.791320 4772 scope.go:117] "RemoveContainer" containerID="883ea789b9af8213e4ecd7cd6aeeefb1bfa50fbe0fa63ec35b57a545c6f246a4" Nov 22 11:01:39 crc kubenswrapper[4772]: E1122 11:01:39.796246 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"883ea789b9af8213e4ecd7cd6aeeefb1bfa50fbe0fa63ec35b57a545c6f246a4\": container with ID starting with 883ea789b9af8213e4ecd7cd6aeeefb1bfa50fbe0fa63ec35b57a545c6f246a4 not found: ID does not exist" containerID="883ea789b9af8213e4ecd7cd6aeeefb1bfa50fbe0fa63ec35b57a545c6f246a4" Nov 22 11:01:39 crc kubenswrapper[4772]: I1122 11:01:39.796288 4772 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"883ea789b9af8213e4ecd7cd6aeeefb1bfa50fbe0fa63ec35b57a545c6f246a4"} err="failed to get container status \"883ea789b9af8213e4ecd7cd6aeeefb1bfa50fbe0fa63ec35b57a545c6f246a4\": rpc error: code = NotFound desc = could not find container \"883ea789b9af8213e4ecd7cd6aeeefb1bfa50fbe0fa63ec35b57a545c6f246a4\": container with ID starting with 883ea789b9af8213e4ecd7cd6aeeefb1bfa50fbe0fa63ec35b57a545c6f246a4 not found: ID does not exist" Nov 22 11:01:39 crc kubenswrapper[4772]: I1122 11:01:39.796313 4772 scope.go:117] "RemoveContainer" containerID="46fe9527cb5db65e1060d147cf125fce69eaf4aa4e546d3e5b290376b707615f" Nov 22 11:01:39 crc kubenswrapper[4772]: E1122 11:01:39.797260 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46fe9527cb5db65e1060d147cf125fce69eaf4aa4e546d3e5b290376b707615f\": container with ID starting with 46fe9527cb5db65e1060d147cf125fce69eaf4aa4e546d3e5b290376b707615f not found: ID does not exist" containerID="46fe9527cb5db65e1060d147cf125fce69eaf4aa4e546d3e5b290376b707615f" Nov 22 11:01:39 crc kubenswrapper[4772]: I1122 11:01:39.797298 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46fe9527cb5db65e1060d147cf125fce69eaf4aa4e546d3e5b290376b707615f"} err="failed to get container status \"46fe9527cb5db65e1060d147cf125fce69eaf4aa4e546d3e5b290376b707615f\": rpc error: code = NotFound desc = could not find container \"46fe9527cb5db65e1060d147cf125fce69eaf4aa4e546d3e5b290376b707615f\": container with ID starting with 46fe9527cb5db65e1060d147cf125fce69eaf4aa4e546d3e5b290376b707615f not found: ID does not exist" Nov 22 11:01:39 crc kubenswrapper[4772]: I1122 11:01:39.797322 4772 scope.go:117] "RemoveContainer" containerID="537f96f394f4070c28465870acd25ae88cbf9da3ae1de9169af161878a5d031f" Nov 22 11:01:39 crc kubenswrapper[4772]: I1122 11:01:39.805620 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/492839a5-207f-4770-9335-1117c1c33fe7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 11:01:39 crc kubenswrapper[4772]: I1122 11:01:39.805662 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/492839a5-207f-4770-9335-1117c1c33fe7-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 11:01:39 crc kubenswrapper[4772]: I1122 11:01:39.839232 4772 scope.go:117] "RemoveContainer" containerID="83ff37bc991a4bb6d08b0c04e5b797996448a71a94f218c0477da6634e261586" Nov 22 11:01:39 crc kubenswrapper[4772]: I1122 11:01:39.933735 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-mktsv" Nov 22 11:01:39 crc kubenswrapper[4772]: I1122 11:01:39.974104 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 11:01:39 crc kubenswrapper[4772]: I1122 11:01:39.990206 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 22 11:01:40 crc kubenswrapper[4772]: I1122 11:01:40.000286 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 22 11:01:40 crc kubenswrapper[4772]: E1122 11:01:40.000784 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="492839a5-207f-4770-9335-1117c1c33fe7" containerName="proxy-httpd" Nov 22 11:01:40 crc kubenswrapper[4772]: I1122 11:01:40.000810 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="492839a5-207f-4770-9335-1117c1c33fe7" containerName="proxy-httpd" Nov 22 11:01:40 crc kubenswrapper[4772]: E1122 11:01:40.000828 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="492839a5-207f-4770-9335-1117c1c33fe7" containerName="sg-core" Nov 22 11:01:40 crc kubenswrapper[4772]: I1122 11:01:40.000836 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="492839a5-207f-4770-9335-1117c1c33fe7" containerName="sg-core" Nov 22 11:01:40 crc kubenswrapper[4772]: E1122 11:01:40.000857 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="545348ad-f752-4463-96a2-353ba4ac1b57" containerName="init" Nov 22 11:01:40 crc kubenswrapper[4772]: I1122 11:01:40.000864 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="545348ad-f752-4463-96a2-353ba4ac1b57" containerName="init" Nov 22 11:01:40 crc kubenswrapper[4772]: E1122 11:01:40.000885 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="492839a5-207f-4770-9335-1117c1c33fe7" containerName="ceilometer-central-agent" Nov 22 11:01:40 crc kubenswrapper[4772]: I1122 11:01:40.000893 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="492839a5-207f-4770-9335-1117c1c33fe7" containerName="ceilometer-central-agent" Nov 22 11:01:40 crc kubenswrapper[4772]: E1122 11:01:40.000904 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="492839a5-207f-4770-9335-1117c1c33fe7" containerName="ceilometer-notification-agent" Nov 22 11:01:40 crc kubenswrapper[4772]: I1122 11:01:40.000911 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="492839a5-207f-4770-9335-1117c1c33fe7" containerName="ceilometer-notification-agent" Nov 22 11:01:40 crc kubenswrapper[4772]: E1122 11:01:40.000939 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc50fc65-ff5f-43d6-945b-d52d8535ccde" containerName="nova-manage" Nov 22 11:01:40 crc kubenswrapper[4772]: I1122 11:01:40.000946 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc50fc65-ff5f-43d6-945b-d52d8535ccde" containerName="nova-manage" Nov 22 11:01:40 crc kubenswrapper[4772]: E1122 11:01:40.000959 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="545348ad-f752-4463-96a2-353ba4ac1b57" containerName="dnsmasq-dns" Nov 22 11:01:40 crc kubenswrapper[4772]: I1122 11:01:40.000966 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="545348ad-f752-4463-96a2-353ba4ac1b57" containerName="dnsmasq-dns" Nov 22 11:01:40 crc kubenswrapper[4772]: I1122 11:01:40.002787 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="492839a5-207f-4770-9335-1117c1c33fe7" containerName="ceilometer-central-agent" Nov 22 11:01:40 crc kubenswrapper[4772]: I1122 11:01:40.002818 4772 
memory_manager.go:354] "RemoveStaleState removing state" podUID="492839a5-207f-4770-9335-1117c1c33fe7" containerName="ceilometer-notification-agent" Nov 22 11:01:40 crc kubenswrapper[4772]: I1122 11:01:40.002840 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="492839a5-207f-4770-9335-1117c1c33fe7" containerName="sg-core" Nov 22 11:01:40 crc kubenswrapper[4772]: I1122 11:01:40.002853 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="492839a5-207f-4770-9335-1117c1c33fe7" containerName="proxy-httpd" Nov 22 11:01:40 crc kubenswrapper[4772]: I1122 11:01:40.002867 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc50fc65-ff5f-43d6-945b-d52d8535ccde" containerName="nova-manage" Nov 22 11:01:40 crc kubenswrapper[4772]: I1122 11:01:40.002892 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="545348ad-f752-4463-96a2-353ba4ac1b57" containerName="dnsmasq-dns" Nov 22 11:01:40 crc kubenswrapper[4772]: I1122 11:01:40.005210 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 11:01:40 crc kubenswrapper[4772]: I1122 11:01:40.008628 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc50fc65-ff5f-43d6-945b-d52d8535ccde-config-data\") pod \"bc50fc65-ff5f-43d6-945b-d52d8535ccde\" (UID: \"bc50fc65-ff5f-43d6-945b-d52d8535ccde\") " Nov 22 11:01:40 crc kubenswrapper[4772]: I1122 11:01:40.008804 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc50fc65-ff5f-43d6-945b-d52d8535ccde-scripts\") pod \"bc50fc65-ff5f-43d6-945b-d52d8535ccde\" (UID: \"bc50fc65-ff5f-43d6-945b-d52d8535ccde\") " Nov 22 11:01:40 crc kubenswrapper[4772]: I1122 11:01:40.008949 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8grh\" (UniqueName: \"kubernetes.io/projected/bc50fc65-ff5f-43d6-945b-d52d8535ccde-kube-api-access-n8grh\") pod \"bc50fc65-ff5f-43d6-945b-d52d8535ccde\" (UID: \"bc50fc65-ff5f-43d6-945b-d52d8535ccde\") " Nov 22 11:01:40 crc kubenswrapper[4772]: I1122 11:01:40.008985 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc50fc65-ff5f-43d6-945b-d52d8535ccde-combined-ca-bundle\") pod \"bc50fc65-ff5f-43d6-945b-d52d8535ccde\" (UID: \"bc50fc65-ff5f-43d6-945b-d52d8535ccde\") " Nov 22 11:01:40 crc kubenswrapper[4772]: I1122 11:01:40.015114 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 11:01:40 crc kubenswrapper[4772]: I1122 11:01:40.017252 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc50fc65-ff5f-43d6-945b-d52d8535ccde-kube-api-access-n8grh" (OuterVolumeSpecName: "kube-api-access-n8grh") pod "bc50fc65-ff5f-43d6-945b-d52d8535ccde" (UID: "bc50fc65-ff5f-43d6-945b-d52d8535ccde"). InnerVolumeSpecName "kube-api-access-n8grh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:01:40 crc kubenswrapper[4772]: I1122 11:01:40.017835 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc50fc65-ff5f-43d6-945b-d52d8535ccde-scripts" (OuterVolumeSpecName: "scripts") pod "bc50fc65-ff5f-43d6-945b-d52d8535ccde" (UID: "bc50fc65-ff5f-43d6-945b-d52d8535ccde"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:01:40 crc kubenswrapper[4772]: I1122 11:01:40.026392 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 22 11:01:40 crc kubenswrapper[4772]: I1122 11:01:40.028927 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 22 11:01:40 crc kubenswrapper[4772]: I1122 11:01:40.029143 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 22 11:01:40 crc kubenswrapper[4772]: I1122 11:01:40.054320 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc50fc65-ff5f-43d6-945b-d52d8535ccde-config-data" (OuterVolumeSpecName: "config-data") pod "bc50fc65-ff5f-43d6-945b-d52d8535ccde" (UID: "bc50fc65-ff5f-43d6-945b-d52d8535ccde"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:01:40 crc kubenswrapper[4772]: I1122 11:01:40.056865 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc50fc65-ff5f-43d6-945b-d52d8535ccde-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bc50fc65-ff5f-43d6-945b-d52d8535ccde" (UID: "bc50fc65-ff5f-43d6-945b-d52d8535ccde"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:01:40 crc kubenswrapper[4772]: I1122 11:01:40.111952 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/851484de-f073-4750-85c4-cdcfda340b52-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"851484de-f073-4750-85c4-cdcfda340b52\") " pod="openstack/ceilometer-0" Nov 22 11:01:40 crc kubenswrapper[4772]: I1122 11:01:40.112078 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/851484de-f073-4750-85c4-cdcfda340b52-config-data\") pod \"ceilometer-0\" (UID: \"851484de-f073-4750-85c4-cdcfda340b52\") " pod="openstack/ceilometer-0" Nov 22 11:01:40 crc kubenswrapper[4772]: I1122 11:01:40.112096 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/851484de-f073-4750-85c4-cdcfda340b52-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"851484de-f073-4750-85c4-cdcfda340b52\") " pod="openstack/ceilometer-0" Nov 22 11:01:40 crc kubenswrapper[4772]: I1122 11:01:40.112382 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfsjx\" (UniqueName: \"kubernetes.io/projected/851484de-f073-4750-85c4-cdcfda340b52-kube-api-access-cfsjx\") pod \"ceilometer-0\" (UID: \"851484de-f073-4750-85c4-cdcfda340b52\") " pod="openstack/ceilometer-0" Nov 22 11:01:40 crc kubenswrapper[4772]: I1122 11:01:40.112533 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/851484de-f073-4750-85c4-cdcfda340b52-run-httpd\") pod \"ceilometer-0\" (UID: \"851484de-f073-4750-85c4-cdcfda340b52\") " pod="openstack/ceilometer-0" Nov 22 11:01:40 crc kubenswrapper[4772]: I1122 11:01:40.112623 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/851484de-f073-4750-85c4-cdcfda340b52-scripts\") pod \"ceilometer-0\" (UID: \"851484de-f073-4750-85c4-cdcfda340b52\") " pod="openstack/ceilometer-0" Nov 22 11:01:40 crc kubenswrapper[4772]: I1122 11:01:40.112856 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/851484de-f073-4750-85c4-cdcfda340b52-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"851484de-f073-4750-85c4-cdcfda340b52\") " pod="openstack/ceilometer-0" Nov 22 11:01:40 crc kubenswrapper[4772]: I1122 11:01:40.112908 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/851484de-f073-4750-85c4-cdcfda340b52-log-httpd\") pod \"ceilometer-0\" (UID: \"851484de-f073-4750-85c4-cdcfda340b52\") " pod="openstack/ceilometer-0" Nov 22 11:01:40 crc kubenswrapper[4772]: I1122 11:01:40.113016 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc50fc65-ff5f-43d6-945b-d52d8535ccde-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 11:01:40 crc kubenswrapper[4772]: I1122 11:01:40.113038 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8grh\" (UniqueName: \"kubernetes.io/projected/bc50fc65-ff5f-43d6-945b-d52d8535ccde-kube-api-access-n8grh\") on node \"crc\" DevicePath \"\"" Nov 22 11:01:40 crc kubenswrapper[4772]: I1122 11:01:40.113069 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc50fc65-ff5f-43d6-945b-d52d8535ccde-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 11:01:40 crc kubenswrapper[4772]: I1122 11:01:40.113082 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc50fc65-ff5f-43d6-945b-d52d8535ccde-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 11:01:40 crc kubenswrapper[4772]: I1122 11:01:40.214388 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfsjx\" (UniqueName: \"kubernetes.io/projected/851484de-f073-4750-85c4-cdcfda340b52-kube-api-access-cfsjx\") pod \"ceilometer-0\" (UID: \"851484de-f073-4750-85c4-cdcfda340b52\") " pod="openstack/ceilometer-0" Nov 22 11:01:40 crc kubenswrapper[4772]: I1122 11:01:40.214445 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/851484de-f073-4750-85c4-cdcfda340b52-run-httpd\") pod \"ceilometer-0\" (UID: \"851484de-f073-4750-85c4-cdcfda340b52\") " pod="openstack/ceilometer-0" Nov 22 11:01:40 crc kubenswrapper[4772]: I1122 11:01:40.214483 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/851484de-f073-4750-85c4-cdcfda340b52-scripts\") pod \"ceilometer-0\" (UID: \"851484de-f073-4750-85c4-cdcfda340b52\") " pod="openstack/ceilometer-0" Nov 22 11:01:40 crc kubenswrapper[4772]: I1122 11:01:40.214502 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/851484de-f073-4750-85c4-cdcfda340b52-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"851484de-f073-4750-85c4-cdcfda340b52\") " pod="openstack/ceilometer-0" Nov 22 11:01:40 crc kubenswrapper[4772]: I1122 11:01:40.214562 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/851484de-f073-4750-85c4-cdcfda340b52-log-httpd\") pod \"ceilometer-0\" (UID: \"851484de-f073-4750-85c4-cdcfda340b52\") " pod="openstack/ceilometer-0" Nov 22 11:01:40 crc kubenswrapper[4772]: I1122 11:01:40.214628 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/851484de-f073-4750-85c4-cdcfda340b52-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"851484de-f073-4750-85c4-cdcfda340b52\") " pod="openstack/ceilometer-0" Nov 22 11:01:40 crc kubenswrapper[4772]: I1122 11:01:40.214678 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/851484de-f073-4750-85c4-cdcfda340b52-config-data\") pod \"ceilometer-0\" (UID: \"851484de-f073-4750-85c4-cdcfda340b52\") " pod="openstack/ceilometer-0" Nov 22 11:01:40 crc kubenswrapper[4772]: I1122 11:01:40.214694 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/851484de-f073-4750-85c4-cdcfda340b52-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"851484de-f073-4750-85c4-cdcfda340b52\") " pod="openstack/ceilometer-0" Nov 22 11:01:40 crc kubenswrapper[4772]: I1122 11:01:40.214983 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/851484de-f073-4750-85c4-cdcfda340b52-run-httpd\") pod \"ceilometer-0\" (UID: \"851484de-f073-4750-85c4-cdcfda340b52\") " pod="openstack/ceilometer-0" Nov 22 11:01:40 crc kubenswrapper[4772]: I1122 11:01:40.215093 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/851484de-f073-4750-85c4-cdcfda340b52-log-httpd\") pod \"ceilometer-0\" (UID: \"851484de-f073-4750-85c4-cdcfda340b52\") " pod="openstack/ceilometer-0" Nov 22 11:01:40 crc kubenswrapper[4772]: I1122 11:01:40.218441 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/851484de-f073-4750-85c4-cdcfda340b52-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"851484de-f073-4750-85c4-cdcfda340b52\") " pod="openstack/ceilometer-0" Nov 22 11:01:40 crc kubenswrapper[4772]: I1122 11:01:40.218490 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/851484de-f073-4750-85c4-cdcfda340b52-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"851484de-f073-4750-85c4-cdcfda340b52\") " pod="openstack/ceilometer-0" Nov 22 11:01:40 crc kubenswrapper[4772]: I1122 11:01:40.219357 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/851484de-f073-4750-85c4-cdcfda340b52-scripts\") pod \"ceilometer-0\" (UID: \"851484de-f073-4750-85c4-cdcfda340b52\") " pod="openstack/ceilometer-0" Nov 22 11:01:40 crc kubenswrapper[4772]: I1122 11:01:40.224745 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/851484de-f073-4750-85c4-cdcfda340b52-config-data\") pod \"ceilometer-0\" (UID: \"851484de-f073-4750-85c4-cdcfda340b52\") " pod="openstack/ceilometer-0" Nov 22 11:01:40 crc kubenswrapper[4772]: I1122 11:01:40.230397 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/851484de-f073-4750-85c4-cdcfda340b52-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"851484de-f073-4750-85c4-cdcfda340b52\") " pod="openstack/ceilometer-0" Nov 22 11:01:40 crc kubenswrapper[4772]: I1122 11:01:40.233345 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfsjx\" (UniqueName: \"kubernetes.io/projected/851484de-f073-4750-85c4-cdcfda340b52-kube-api-access-cfsjx\") pod \"ceilometer-0\" (UID: \"851484de-f073-4750-85c4-cdcfda340b52\") " pod="openstack/ceilometer-0" Nov 22 11:01:40 crc kubenswrapper[4772]: I1122 11:01:40.446859 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 11:01:40 crc kubenswrapper[4772]: I1122 11:01:40.672118 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-mktsv" event={"ID":"bc50fc65-ff5f-43d6-945b-d52d8535ccde","Type":"ContainerDied","Data":"f6323674c0bc33c3f665bb6d419038e96eb64e58d1fa3bb2375973d8dd4adbe4"} Nov 22 11:01:40 crc kubenswrapper[4772]: I1122 11:01:40.672641 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6323674c0bc33c3f665bb6d419038e96eb64e58d1fa3bb2375973d8dd4adbe4" Nov 22 11:01:40 crc kubenswrapper[4772]: I1122 11:01:40.672165 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-mktsv" Nov 22 11:01:40 crc kubenswrapper[4772]: I1122 11:01:40.835737 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 22 11:01:40 crc kubenswrapper[4772]: I1122 11:01:40.836229 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a35672ad-5999-4891-972a-1461f9928b94" containerName="nova-api-api" containerID="cri-o://cbe62db989db8186a1e649c0bf95120e0bcff5070bb7d166e6acee3911dfa12b" gracePeriod=30 Nov 22 11:01:40 crc kubenswrapper[4772]: I1122 11:01:40.836854 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a35672ad-5999-4891-972a-1461f9928b94" containerName="nova-api-log" containerID="cri-o://e2673e0bf6e91a0c663a0dbef2d88ab87004dd8445653d8acfc47393cb3d0f5f" gracePeriod=30 Nov 22 11:01:40 crc kubenswrapper[4772]: I1122 11:01:40.855339 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 11:01:40 crc kubenswrapper[4772]: I1122 11:01:40.855643 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="2fe6c0ca-665a-41a1-a3ea-d47bb0d8c729" containerName="nova-scheduler-scheduler" containerID="cri-o://daf101e9baca55c1359edac43394a17dbef438e47cb34e279d70d09bcfcd7020" gracePeriod=30 Nov 22 11:01:40 crc kubenswrapper[4772]: I1122 11:01:40.868903 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 11:01:40 crc kubenswrapper[4772]: I1122 11:01:40.869158 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a56e4f18-2d41-4fb4-88a3-8889d928298d" containerName="nova-metadata-log" containerID="cri-o://50df8de7bdfad10213c109d8a56f3941198d42e9a426dbb23a2f485bdae4862e" gracePeriod=30 Nov 22 11:01:40 crc kubenswrapper[4772]: I1122 11:01:40.869635 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a56e4f18-2d41-4fb4-88a3-8889d928298d" containerName="nova-metadata-metadata" 
containerID="cri-o://0379c1d882aa3f5655f13d2212a697bd72ca7cef524f5e73292c1207997b8582" gracePeriod=30 Nov 22 11:01:40 crc kubenswrapper[4772]: I1122 11:01:40.925626 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 11:01:40 crc kubenswrapper[4772]: W1122 11:01:40.926538 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod851484de_f073_4750_85c4_cdcfda340b52.slice/crio-d1909cf3cd8c75a73299edffec0cac167a960a485115a50fc300c7595203e7d1 WatchSource:0}: Error finding container d1909cf3cd8c75a73299edffec0cac167a960a485115a50fc300c7595203e7d1: Status 404 returned error can't find the container with id d1909cf3cd8c75a73299edffec0cac167a960a485115a50fc300c7595203e7d1 Nov 22 11:01:41 crc kubenswrapper[4772]: I1122 11:01:41.424764 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="492839a5-207f-4770-9335-1117c1c33fe7" path="/var/lib/kubelet/pods/492839a5-207f-4770-9335-1117c1c33fe7/volumes" Nov 22 11:01:41 crc kubenswrapper[4772]: I1122 11:01:41.425763 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="545348ad-f752-4463-96a2-353ba4ac1b57" path="/var/lib/kubelet/pods/545348ad-f752-4463-96a2-353ba4ac1b57/volumes" Nov 22 11:01:41 crc kubenswrapper[4772]: I1122 11:01:41.505393 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 11:01:41 crc kubenswrapper[4772]: I1122 11:01:41.553002 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a56e4f18-2d41-4fb4-88a3-8889d928298d-logs\") pod \"a56e4f18-2d41-4fb4-88a3-8889d928298d\" (UID: \"a56e4f18-2d41-4fb4-88a3-8889d928298d\") " Nov 22 11:01:41 crc kubenswrapper[4772]: I1122 11:01:41.553081 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a56e4f18-2d41-4fb4-88a3-8889d928298d-nova-metadata-tls-certs\") pod \"a56e4f18-2d41-4fb4-88a3-8889d928298d\" (UID: \"a56e4f18-2d41-4fb4-88a3-8889d928298d\") " Nov 22 11:01:41 crc kubenswrapper[4772]: I1122 11:01:41.553145 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a56e4f18-2d41-4fb4-88a3-8889d928298d-combined-ca-bundle\") pod \"a56e4f18-2d41-4fb4-88a3-8889d928298d\" (UID: \"a56e4f18-2d41-4fb4-88a3-8889d928298d\") " Nov 22 11:01:41 crc kubenswrapper[4772]: I1122 11:01:41.553173 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxqkm\" (UniqueName: \"kubernetes.io/projected/a56e4f18-2d41-4fb4-88a3-8889d928298d-kube-api-access-zxqkm\") pod \"a56e4f18-2d41-4fb4-88a3-8889d928298d\" (UID: \"a56e4f18-2d41-4fb4-88a3-8889d928298d\") " Nov 22 11:01:41 crc kubenswrapper[4772]: I1122 11:01:41.553205 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a56e4f18-2d41-4fb4-88a3-8889d928298d-config-data\") pod \"a56e4f18-2d41-4fb4-88a3-8889d928298d\" (UID: \"a56e4f18-2d41-4fb4-88a3-8889d928298d\") " Nov 22 11:01:41 crc kubenswrapper[4772]: I1122 11:01:41.565878 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a56e4f18-2d41-4fb4-88a3-8889d928298d-kube-api-access-zxqkm" (OuterVolumeSpecName: "kube-api-access-zxqkm") pod 
"a56e4f18-2d41-4fb4-88a3-8889d928298d" (UID: "a56e4f18-2d41-4fb4-88a3-8889d928298d"). InnerVolumeSpecName "kube-api-access-zxqkm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:01:41 crc kubenswrapper[4772]: I1122 11:01:41.570156 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a56e4f18-2d41-4fb4-88a3-8889d928298d-logs" (OuterVolumeSpecName: "logs") pod "a56e4f18-2d41-4fb4-88a3-8889d928298d" (UID: "a56e4f18-2d41-4fb4-88a3-8889d928298d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:01:41 crc kubenswrapper[4772]: I1122 11:01:41.601724 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a56e4f18-2d41-4fb4-88a3-8889d928298d-config-data" (OuterVolumeSpecName: "config-data") pod "a56e4f18-2d41-4fb4-88a3-8889d928298d" (UID: "a56e4f18-2d41-4fb4-88a3-8889d928298d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:01:41 crc kubenswrapper[4772]: I1122 11:01:41.636783 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a56e4f18-2d41-4fb4-88a3-8889d928298d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a56e4f18-2d41-4fb4-88a3-8889d928298d" (UID: "a56e4f18-2d41-4fb4-88a3-8889d928298d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:01:41 crc kubenswrapper[4772]: I1122 11:01:41.654913 4772 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a56e4f18-2d41-4fb4-88a3-8889d928298d-logs\") on node \"crc\" DevicePath \"\"" Nov 22 11:01:41 crc kubenswrapper[4772]: I1122 11:01:41.654948 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a56e4f18-2d41-4fb4-88a3-8889d928298d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 11:01:41 crc kubenswrapper[4772]: I1122 11:01:41.654962 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zxqkm\" (UniqueName: \"kubernetes.io/projected/a56e4f18-2d41-4fb4-88a3-8889d928298d-kube-api-access-zxqkm\") on node \"crc\" DevicePath \"\"" Nov 22 11:01:41 crc kubenswrapper[4772]: I1122 11:01:41.654971 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a56e4f18-2d41-4fb4-88a3-8889d928298d-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 11:01:41 crc kubenswrapper[4772]: I1122 11:01:41.655676 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a56e4f18-2d41-4fb4-88a3-8889d928298d-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "a56e4f18-2d41-4fb4-88a3-8889d928298d" (UID: "a56e4f18-2d41-4fb4-88a3-8889d928298d"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:01:41 crc kubenswrapper[4772]: I1122 11:01:41.691350 4772 generic.go:334] "Generic (PLEG): container finished" podID="a35672ad-5999-4891-972a-1461f9928b94" containerID="e2673e0bf6e91a0c663a0dbef2d88ab87004dd8445653d8acfc47393cb3d0f5f" exitCode=143 Nov 22 11:01:41 crc kubenswrapper[4772]: I1122 11:01:41.691422 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a35672ad-5999-4891-972a-1461f9928b94","Type":"ContainerDied","Data":"e2673e0bf6e91a0c663a0dbef2d88ab87004dd8445653d8acfc47393cb3d0f5f"} Nov 22 11:01:41 crc kubenswrapper[4772]: I1122 11:01:41.695078 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"851484de-f073-4750-85c4-cdcfda340b52","Type":"ContainerStarted","Data":"d1909cf3cd8c75a73299edffec0cac167a960a485115a50fc300c7595203e7d1"} Nov 22 11:01:41 crc kubenswrapper[4772]: I1122 11:01:41.698655 4772 generic.go:334] "Generic (PLEG): container finished" podID="a56e4f18-2d41-4fb4-88a3-8889d928298d" containerID="0379c1d882aa3f5655f13d2212a697bd72ca7cef524f5e73292c1207997b8582" exitCode=0 Nov 22 11:01:41 crc kubenswrapper[4772]: I1122 11:01:41.698681 4772 generic.go:334] "Generic (PLEG): container finished" podID="a56e4f18-2d41-4fb4-88a3-8889d928298d" containerID="50df8de7bdfad10213c109d8a56f3941198d42e9a426dbb23a2f485bdae4862e" exitCode=143 Nov 22 11:01:41 crc kubenswrapper[4772]: I1122 11:01:41.698697 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a56e4f18-2d41-4fb4-88a3-8889d928298d","Type":"ContainerDied","Data":"0379c1d882aa3f5655f13d2212a697bd72ca7cef524f5e73292c1207997b8582"} Nov 22 11:01:41 crc kubenswrapper[4772]: I1122 11:01:41.698707 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 11:01:41 crc kubenswrapper[4772]: I1122 11:01:41.698736 4772 scope.go:117] "RemoveContainer" containerID="0379c1d882aa3f5655f13d2212a697bd72ca7cef524f5e73292c1207997b8582" Nov 22 11:01:41 crc kubenswrapper[4772]: I1122 11:01:41.698724 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a56e4f18-2d41-4fb4-88a3-8889d928298d","Type":"ContainerDied","Data":"50df8de7bdfad10213c109d8a56f3941198d42e9a426dbb23a2f485bdae4862e"} Nov 22 11:01:41 crc kubenswrapper[4772]: I1122 11:01:41.698847 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a56e4f18-2d41-4fb4-88a3-8889d928298d","Type":"ContainerDied","Data":"9ec193d8957ef1dc852365dde9453dcc629da6513cbf5842c6c429e6c28e6a1b"} Nov 22 11:01:41 crc kubenswrapper[4772]: I1122 11:01:41.736729 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 11:01:41 crc kubenswrapper[4772]: I1122 11:01:41.747746 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 11:01:41 crc kubenswrapper[4772]: I1122 11:01:41.757255 4772 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a56e4f18-2d41-4fb4-88a3-8889d928298d-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 11:01:41 crc kubenswrapper[4772]: I1122 11:01:41.765036 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 22 11:01:41 crc kubenswrapper[4772]: E1122 11:01:41.765622 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a56e4f18-2d41-4fb4-88a3-8889d928298d" containerName="nova-metadata-metadata" Nov 22 11:01:41 crc kubenswrapper[4772]: I1122 11:01:41.765647 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="a56e4f18-2d41-4fb4-88a3-8889d928298d" containerName="nova-metadata-metadata" Nov 22 11:01:41 crc kubenswrapper[4772]: E1122 11:01:41.765672 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a56e4f18-2d41-4fb4-88a3-8889d928298d" containerName="nova-metadata-log" Nov 22 11:01:41 crc kubenswrapper[4772]: I1122 11:01:41.765681 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="a56e4f18-2d41-4fb4-88a3-8889d928298d" containerName="nova-metadata-log" Nov 22 11:01:41 crc kubenswrapper[4772]: I1122 11:01:41.765904 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="a56e4f18-2d41-4fb4-88a3-8889d928298d" containerName="nova-metadata-log" Nov 22 11:01:41 crc kubenswrapper[4772]: I1122 11:01:41.765928 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="a56e4f18-2d41-4fb4-88a3-8889d928298d" containerName="nova-metadata-metadata" Nov 22 11:01:41 crc kubenswrapper[4772]: I1122 11:01:41.769877 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 11:01:41 crc kubenswrapper[4772]: I1122 11:01:41.779609 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 22 11:01:41 crc kubenswrapper[4772]: I1122 11:01:41.780816 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 22 11:01:41 crc kubenswrapper[4772]: I1122 11:01:41.789301 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 11:01:41 crc kubenswrapper[4772]: I1122 11:01:41.801716 4772 scope.go:117] "RemoveContainer" containerID="50df8de7bdfad10213c109d8a56f3941198d42e9a426dbb23a2f485bdae4862e" Nov 22 11:01:41 crc kubenswrapper[4772]: I1122 11:01:41.860218 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsgbw\" (UniqueName: \"kubernetes.io/projected/3cffe78d-a3e8-4fd3-a97a-03c361381b8b-kube-api-access-bsgbw\") pod \"nova-metadata-0\" (UID: \"3cffe78d-a3e8-4fd3-a97a-03c361381b8b\") " pod="openstack/nova-metadata-0" Nov 22 11:01:41 crc kubenswrapper[4772]: I1122 11:01:41.860273 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3cffe78d-a3e8-4fd3-a97a-03c361381b8b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3cffe78d-a3e8-4fd3-a97a-03c361381b8b\") " pod="openstack/nova-metadata-0" Nov 22 11:01:41 crc kubenswrapper[4772]: I1122 11:01:41.860315 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3cffe78d-a3e8-4fd3-a97a-03c361381b8b-logs\") pod \"nova-metadata-0\" (UID: \"3cffe78d-a3e8-4fd3-a97a-03c361381b8b\") " pod="openstack/nova-metadata-0" Nov 22 11:01:41 crc kubenswrapper[4772]: I1122 11:01:41.860358 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cffe78d-a3e8-4fd3-a97a-03c361381b8b-config-data\") pod \"nova-metadata-0\" (UID: \"3cffe78d-a3e8-4fd3-a97a-03c361381b8b\") " pod="openstack/nova-metadata-0" Nov 22 11:01:41 crc kubenswrapper[4772]: I1122 11:01:41.860398 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cffe78d-a3e8-4fd3-a97a-03c361381b8b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3cffe78d-a3e8-4fd3-a97a-03c361381b8b\") " pod="openstack/nova-metadata-0" Nov 22 11:01:41 crc kubenswrapper[4772]: I1122 11:01:41.946664 4772 scope.go:117] "RemoveContainer" containerID="0379c1d882aa3f5655f13d2212a697bd72ca7cef524f5e73292c1207997b8582" Nov 22 11:01:41 crc kubenswrapper[4772]: E1122 11:01:41.947779 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0379c1d882aa3f5655f13d2212a697bd72ca7cef524f5e73292c1207997b8582\": container with ID starting with 0379c1d882aa3f5655f13d2212a697bd72ca7cef524f5e73292c1207997b8582 not found: ID does not exist" containerID="0379c1d882aa3f5655f13d2212a697bd72ca7cef524f5e73292c1207997b8582" Nov 22 11:01:41 crc kubenswrapper[4772]: I1122 11:01:41.947825 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0379c1d882aa3f5655f13d2212a697bd72ca7cef524f5e73292c1207997b8582"} err="failed to get container 
status \"0379c1d882aa3f5655f13d2212a697bd72ca7cef524f5e73292c1207997b8582\": rpc error: code = NotFound desc = could not find container \"0379c1d882aa3f5655f13d2212a697bd72ca7cef524f5e73292c1207997b8582\": container with ID starting with 0379c1d882aa3f5655f13d2212a697bd72ca7cef524f5e73292c1207997b8582 not found: ID does not exist" Nov 22 11:01:41 crc kubenswrapper[4772]: I1122 11:01:41.947845 4772 scope.go:117] "RemoveContainer" containerID="50df8de7bdfad10213c109d8a56f3941198d42e9a426dbb23a2f485bdae4862e" Nov 22 11:01:41 crc kubenswrapper[4772]: E1122 11:01:41.948124 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50df8de7bdfad10213c109d8a56f3941198d42e9a426dbb23a2f485bdae4862e\": container with ID starting with 50df8de7bdfad10213c109d8a56f3941198d42e9a426dbb23a2f485bdae4862e not found: ID does not exist" containerID="50df8de7bdfad10213c109d8a56f3941198d42e9a426dbb23a2f485bdae4862e" Nov 22 11:01:41 crc kubenswrapper[4772]: I1122 11:01:41.948149 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50df8de7bdfad10213c109d8a56f3941198d42e9a426dbb23a2f485bdae4862e"} err="failed to get container status \"50df8de7bdfad10213c109d8a56f3941198d42e9a426dbb23a2f485bdae4862e\": rpc error: code = NotFound desc = could not find container \"50df8de7bdfad10213c109d8a56f3941198d42e9a426dbb23a2f485bdae4862e\": container with ID starting with 50df8de7bdfad10213c109d8a56f3941198d42e9a426dbb23a2f485bdae4862e not found: ID does not exist" Nov 22 11:01:41 crc kubenswrapper[4772]: I1122 11:01:41.948164 4772 scope.go:117] "RemoveContainer" containerID="0379c1d882aa3f5655f13d2212a697bd72ca7cef524f5e73292c1207997b8582" Nov 22 11:01:41 crc kubenswrapper[4772]: I1122 11:01:41.948363 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0379c1d882aa3f5655f13d2212a697bd72ca7cef524f5e73292c1207997b8582"} err="failed to get container status \"0379c1d882aa3f5655f13d2212a697bd72ca7cef524f5e73292c1207997b8582\": rpc error: code = NotFound desc = could not find container \"0379c1d882aa3f5655f13d2212a697bd72ca7cef524f5e73292c1207997b8582\": container with ID starting with 0379c1d882aa3f5655f13d2212a697bd72ca7cef524f5e73292c1207997b8582 not found: ID does not exist" Nov 22 11:01:41 crc kubenswrapper[4772]: I1122 11:01:41.948380 4772 scope.go:117] "RemoveContainer" containerID="50df8de7bdfad10213c109d8a56f3941198d42e9a426dbb23a2f485bdae4862e" Nov 22 11:01:41 crc kubenswrapper[4772]: I1122 11:01:41.948567 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50df8de7bdfad10213c109d8a56f3941198d42e9a426dbb23a2f485bdae4862e"} err="failed to get container status \"50df8de7bdfad10213c109d8a56f3941198d42e9a426dbb23a2f485bdae4862e\": rpc error: code = NotFound desc = could not find container \"50df8de7bdfad10213c109d8a56f3941198d42e9a426dbb23a2f485bdae4862e\": container with ID starting with 50df8de7bdfad10213c109d8a56f3941198d42e9a426dbb23a2f485bdae4862e not found: ID does not exist" Nov 22 11:01:41 crc kubenswrapper[4772]: I1122 11:01:41.961851 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cffe78d-a3e8-4fd3-a97a-03c361381b8b-config-data\") pod \"nova-metadata-0\" (UID: \"3cffe78d-a3e8-4fd3-a97a-03c361381b8b\") " pod="openstack/nova-metadata-0" Nov 22 11:01:41 crc kubenswrapper[4772]: I1122 11:01:41.961924 4772 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cffe78d-a3e8-4fd3-a97a-03c361381b8b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3cffe78d-a3e8-4fd3-a97a-03c361381b8b\") " pod="openstack/nova-metadata-0" Nov 22 11:01:41 crc kubenswrapper[4772]: I1122 11:01:41.962072 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsgbw\" (UniqueName: \"kubernetes.io/projected/3cffe78d-a3e8-4fd3-a97a-03c361381b8b-kube-api-access-bsgbw\") pod \"nova-metadata-0\" (UID: \"3cffe78d-a3e8-4fd3-a97a-03c361381b8b\") " pod="openstack/nova-metadata-0" Nov 22 11:01:41 crc kubenswrapper[4772]: I1122 11:01:41.962102 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3cffe78d-a3e8-4fd3-a97a-03c361381b8b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3cffe78d-a3e8-4fd3-a97a-03c361381b8b\") " pod="openstack/nova-metadata-0" Nov 22 11:01:41 crc kubenswrapper[4772]: I1122 11:01:41.962136 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3cffe78d-a3e8-4fd3-a97a-03c361381b8b-logs\") pod \"nova-metadata-0\" (UID: \"3cffe78d-a3e8-4fd3-a97a-03c361381b8b\") " pod="openstack/nova-metadata-0" Nov 22 11:01:41 crc kubenswrapper[4772]: I1122 11:01:41.962553 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3cffe78d-a3e8-4fd3-a97a-03c361381b8b-logs\") pod \"nova-metadata-0\" (UID: \"3cffe78d-a3e8-4fd3-a97a-03c361381b8b\") " pod="openstack/nova-metadata-0" Nov 22 11:01:41 crc kubenswrapper[4772]: I1122 11:01:41.966251 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cffe78d-a3e8-4fd3-a97a-03c361381b8b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3cffe78d-a3e8-4fd3-a97a-03c361381b8b\") " pod="openstack/nova-metadata-0" Nov 22 11:01:41 crc kubenswrapper[4772]: I1122 11:01:41.966373 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3cffe78d-a3e8-4fd3-a97a-03c361381b8b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3cffe78d-a3e8-4fd3-a97a-03c361381b8b\") " pod="openstack/nova-metadata-0" Nov 22 11:01:41 crc kubenswrapper[4772]: I1122 11:01:41.966537 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cffe78d-a3e8-4fd3-a97a-03c361381b8b-config-data\") pod \"nova-metadata-0\" (UID: \"3cffe78d-a3e8-4fd3-a97a-03c361381b8b\") " pod="openstack/nova-metadata-0" Nov 22 11:01:41 crc kubenswrapper[4772]: I1122 11:01:41.979447 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsgbw\" (UniqueName: \"kubernetes.io/projected/3cffe78d-a3e8-4fd3-a97a-03c361381b8b-kube-api-access-bsgbw\") pod \"nova-metadata-0\" (UID: \"3cffe78d-a3e8-4fd3-a97a-03c361381b8b\") " pod="openstack/nova-metadata-0" Nov 22 11:01:42 crc kubenswrapper[4772]: I1122 11:01:42.118366 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 11:01:42 crc kubenswrapper[4772]: E1122 11:01:42.544569 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of daf101e9baca55c1359edac43394a17dbef438e47cb34e279d70d09bcfcd7020 is running failed: container process not found" containerID="daf101e9baca55c1359edac43394a17dbef438e47cb34e279d70d09bcfcd7020" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 11:01:42 crc kubenswrapper[4772]: E1122 11:01:42.552527 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of daf101e9baca55c1359edac43394a17dbef438e47cb34e279d70d09bcfcd7020 is running failed: container process not found" containerID="daf101e9baca55c1359edac43394a17dbef438e47cb34e279d70d09bcfcd7020" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 11:01:42 crc kubenswrapper[4772]: E1122 11:01:42.553571 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of daf101e9baca55c1359edac43394a17dbef438e47cb34e279d70d09bcfcd7020 is running failed: container process not found" containerID="daf101e9baca55c1359edac43394a17dbef438e47cb34e279d70d09bcfcd7020" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 11:01:42 crc kubenswrapper[4772]: E1122 11:01:42.553630 4772 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of daf101e9baca55c1359edac43394a17dbef438e47cb34e279d70d09bcfcd7020 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="2fe6c0ca-665a-41a1-a3ea-d47bb0d8c729" containerName="nova-scheduler-scheduler" Nov 22 11:01:42 crc kubenswrapper[4772]: I1122 11:01:42.598909 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 11:01:42 crc kubenswrapper[4772]: I1122 11:01:42.710855 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"851484de-f073-4750-85c4-cdcfda340b52","Type":"ContainerStarted","Data":"b5fcd47e65a0e4ea1964a938ce840cc6ff920ac4d63bd2832080d8d8fb916b52"} Nov 22 11:01:42 crc kubenswrapper[4772]: I1122 11:01:42.713199 4772 generic.go:334] "Generic (PLEG): container finished" podID="2fe6c0ca-665a-41a1-a3ea-d47bb0d8c729" containerID="daf101e9baca55c1359edac43394a17dbef438e47cb34e279d70d09bcfcd7020" exitCode=0 Nov 22 11:01:42 crc kubenswrapper[4772]: I1122 11:01:42.713237 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2fe6c0ca-665a-41a1-a3ea-d47bb0d8c729","Type":"ContainerDied","Data":"daf101e9baca55c1359edac43394a17dbef438e47cb34e279d70d09bcfcd7020"} Nov 22 11:01:42 crc kubenswrapper[4772]: I1122 11:01:42.713263 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2fe6c0ca-665a-41a1-a3ea-d47bb0d8c729","Type":"ContainerDied","Data":"385ca193addceaccc344a7db82c8304bb8a406af210de1e4b137a967d048e48e"} Nov 22 11:01:42 crc kubenswrapper[4772]: I1122 11:01:42.713283 4772 scope.go:117] "RemoveContainer" containerID="daf101e9baca55c1359edac43394a17dbef438e47cb34e279d70d09bcfcd7020" Nov 22 11:01:42 crc kubenswrapper[4772]: I1122 11:01:42.713288 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 11:01:42 crc kubenswrapper[4772]: I1122 11:01:42.739407 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 11:01:42 crc kubenswrapper[4772]: I1122 11:01:42.744092 4772 scope.go:117] "RemoveContainer" containerID="daf101e9baca55c1359edac43394a17dbef438e47cb34e279d70d09bcfcd7020" Nov 22 11:01:42 crc kubenswrapper[4772]: E1122 11:01:42.744651 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"daf101e9baca55c1359edac43394a17dbef438e47cb34e279d70d09bcfcd7020\": container with ID starting with daf101e9baca55c1359edac43394a17dbef438e47cb34e279d70d09bcfcd7020 not found: ID does not exist" containerID="daf101e9baca55c1359edac43394a17dbef438e47cb34e279d70d09bcfcd7020" Nov 22 11:01:42 crc kubenswrapper[4772]: I1122 11:01:42.744767 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"daf101e9baca55c1359edac43394a17dbef438e47cb34e279d70d09bcfcd7020"} err="failed to get container status \"daf101e9baca55c1359edac43394a17dbef438e47cb34e279d70d09bcfcd7020\": rpc error: code = NotFound desc = could not find container \"daf101e9baca55c1359edac43394a17dbef438e47cb34e279d70d09bcfcd7020\": container with ID starting with daf101e9baca55c1359edac43394a17dbef438e47cb34e279d70d09bcfcd7020 not found: ID does not exist" Nov 22 11:01:42 crc kubenswrapper[4772]: W1122 11:01:42.748409 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3cffe78d_a3e8_4fd3_a97a_03c361381b8b.slice/crio-8a3ae76369e3a0878c326fe484b6fa0c6f15dcc84344919f78b803107eed889b WatchSource:0}: Error finding container 8a3ae76369e3a0878c326fe484b6fa0c6f15dcc84344919f78b803107eed889b: Status 404 returned error can't find the container with id 8a3ae76369e3a0878c326fe484b6fa0c6f15dcc84344919f78b803107eed889b Nov 22 11:01:42 crc kubenswrapper[4772]: I1122 11:01:42.786198 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fe6c0ca-665a-41a1-a3ea-d47bb0d8c729-combined-ca-bundle\") pod \"2fe6c0ca-665a-41a1-a3ea-d47bb0d8c729\" (UID: \"2fe6c0ca-665a-41a1-a3ea-d47bb0d8c729\") " Nov 22 11:01:42 crc kubenswrapper[4772]: I1122 11:01:42.786312 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fe6c0ca-665a-41a1-a3ea-d47bb0d8c729-config-data\") pod \"2fe6c0ca-665a-41a1-a3ea-d47bb0d8c729\" (UID: \"2fe6c0ca-665a-41a1-a3ea-d47bb0d8c729\") " Nov 22 11:01:42 crc kubenswrapper[4772]: I1122 11:01:42.786474 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fx46f\" (UniqueName: \"kubernetes.io/projected/2fe6c0ca-665a-41a1-a3ea-d47bb0d8c729-kube-api-access-fx46f\") pod \"2fe6c0ca-665a-41a1-a3ea-d47bb0d8c729\" (UID: \"2fe6c0ca-665a-41a1-a3ea-d47bb0d8c729\") " Nov 22 11:01:42 crc kubenswrapper[4772]: I1122 11:01:42.792383 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fe6c0ca-665a-41a1-a3ea-d47bb0d8c729-kube-api-access-fx46f" (OuterVolumeSpecName: "kube-api-access-fx46f") pod "2fe6c0ca-665a-41a1-a3ea-d47bb0d8c729" (UID: "2fe6c0ca-665a-41a1-a3ea-d47bb0d8c729"). InnerVolumeSpecName "kube-api-access-fx46f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:01:42 crc kubenswrapper[4772]: I1122 11:01:42.813071 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fe6c0ca-665a-41a1-a3ea-d47bb0d8c729-config-data" (OuterVolumeSpecName: "config-data") pod "2fe6c0ca-665a-41a1-a3ea-d47bb0d8c729" (UID: "2fe6c0ca-665a-41a1-a3ea-d47bb0d8c729"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:01:42 crc kubenswrapper[4772]: I1122 11:01:42.819583 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fe6c0ca-665a-41a1-a3ea-d47bb0d8c729-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2fe6c0ca-665a-41a1-a3ea-d47bb0d8c729" (UID: "2fe6c0ca-665a-41a1-a3ea-d47bb0d8c729"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:01:42 crc kubenswrapper[4772]: I1122 11:01:42.888514 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fx46f\" (UniqueName: \"kubernetes.io/projected/2fe6c0ca-665a-41a1-a3ea-d47bb0d8c729-kube-api-access-fx46f\") on node \"crc\" DevicePath \"\"" Nov 22 11:01:42 crc kubenswrapper[4772]: I1122 11:01:42.888745 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fe6c0ca-665a-41a1-a3ea-d47bb0d8c729-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 11:01:42 crc kubenswrapper[4772]: I1122 11:01:42.888813 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fe6c0ca-665a-41a1-a3ea-d47bb0d8c729-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 11:01:43 crc kubenswrapper[4772]: I1122 11:01:43.271112 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 11:01:43 crc kubenswrapper[4772]: I1122 11:01:43.280115 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 11:01:43 crc kubenswrapper[4772]: I1122 11:01:43.294037 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 11:01:43 crc kubenswrapper[4772]: E1122 11:01:43.294477 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fe6c0ca-665a-41a1-a3ea-d47bb0d8c729" containerName="nova-scheduler-scheduler" Nov 22 11:01:43 crc kubenswrapper[4772]: I1122 11:01:43.294489 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fe6c0ca-665a-41a1-a3ea-d47bb0d8c729" containerName="nova-scheduler-scheduler" Nov 22 11:01:43 crc kubenswrapper[4772]: I1122 11:01:43.294689 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fe6c0ca-665a-41a1-a3ea-d47bb0d8c729" containerName="nova-scheduler-scheduler" Nov 22 11:01:43 crc kubenswrapper[4772]: I1122 11:01:43.295349 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 11:01:43 crc kubenswrapper[4772]: I1122 11:01:43.299701 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f2864f9-ec60-4f46-b8be-e83e62cbea68-config-data\") pod \"nova-scheduler-0\" (UID: \"1f2864f9-ec60-4f46-b8be-e83e62cbea68\") " pod="openstack/nova-scheduler-0" Nov 22 11:01:43 crc kubenswrapper[4772]: I1122 11:01:43.299742 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlbb5\" (UniqueName: \"kubernetes.io/projected/1f2864f9-ec60-4f46-b8be-e83e62cbea68-kube-api-access-vlbb5\") pod \"nova-scheduler-0\" (UID: \"1f2864f9-ec60-4f46-b8be-e83e62cbea68\") " pod="openstack/nova-scheduler-0" Nov 22 11:01:43 crc kubenswrapper[4772]: I1122 11:01:43.299766 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f2864f9-ec60-4f46-b8be-e83e62cbea68-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1f2864f9-ec60-4f46-b8be-e83e62cbea68\") " pod="openstack/nova-scheduler-0" Nov 22 11:01:43 crc kubenswrapper[4772]: I1122 11:01:43.301230 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 11:01:43 crc kubenswrapper[4772]: I1122 11:01:43.328542 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 22 11:01:43 crc kubenswrapper[4772]: I1122 11:01:43.401577 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f2864f9-ec60-4f46-b8be-e83e62cbea68-config-data\") pod \"nova-scheduler-0\" (UID: \"1f2864f9-ec60-4f46-b8be-e83e62cbea68\") " pod="openstack/nova-scheduler-0" Nov 22 11:01:43 crc kubenswrapper[4772]: I1122 11:01:43.401967 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlbb5\" (UniqueName: \"kubernetes.io/projected/1f2864f9-ec60-4f46-b8be-e83e62cbea68-kube-api-access-vlbb5\") pod \"nova-scheduler-0\" (UID: \"1f2864f9-ec60-4f46-b8be-e83e62cbea68\") " pod="openstack/nova-scheduler-0" Nov 22 11:01:43 crc kubenswrapper[4772]: I1122 11:01:43.401998 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f2864f9-ec60-4f46-b8be-e83e62cbea68-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1f2864f9-ec60-4f46-b8be-e83e62cbea68\") " pod="openstack/nova-scheduler-0" Nov 22 11:01:43 crc kubenswrapper[4772]: I1122 11:01:43.408001 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f2864f9-ec60-4f46-b8be-e83e62cbea68-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1f2864f9-ec60-4f46-b8be-e83e62cbea68\") " pod="openstack/nova-scheduler-0" Nov 22 11:01:43 crc kubenswrapper[4772]: I1122 11:01:43.409711 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f2864f9-ec60-4f46-b8be-e83e62cbea68-config-data\") pod \"nova-scheduler-0\" (UID: \"1f2864f9-ec60-4f46-b8be-e83e62cbea68\") " pod="openstack/nova-scheduler-0" Nov 22 11:01:43 crc kubenswrapper[4772]: I1122 11:01:43.420677 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlbb5\" (UniqueName: 
\"kubernetes.io/projected/1f2864f9-ec60-4f46-b8be-e83e62cbea68-kube-api-access-vlbb5\") pod \"nova-scheduler-0\" (UID: \"1f2864f9-ec60-4f46-b8be-e83e62cbea68\") " pod="openstack/nova-scheduler-0" Nov 22 11:01:43 crc kubenswrapper[4772]: I1122 11:01:43.429616 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fe6c0ca-665a-41a1-a3ea-d47bb0d8c729" path="/var/lib/kubelet/pods/2fe6c0ca-665a-41a1-a3ea-d47bb0d8c729/volumes" Nov 22 11:01:43 crc kubenswrapper[4772]: I1122 11:01:43.430251 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a56e4f18-2d41-4fb4-88a3-8889d928298d" path="/var/lib/kubelet/pods/a56e4f18-2d41-4fb4-88a3-8889d928298d/volumes" Nov 22 11:01:43 crc kubenswrapper[4772]: I1122 11:01:43.638293 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 11:01:43 crc kubenswrapper[4772]: I1122 11:01:43.726855 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"851484de-f073-4750-85c4-cdcfda340b52","Type":"ContainerStarted","Data":"8053e588b79aeda4083795e3cb87d3690c2e049625e6c7c4bbdecd0aaef89d29"} Nov 22 11:01:43 crc kubenswrapper[4772]: I1122 11:01:43.728373 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3cffe78d-a3e8-4fd3-a97a-03c361381b8b","Type":"ContainerStarted","Data":"31f7057dd4180088b13431bf5ba33899a475abdef2f3ebd4e1c314141844cc53"} Nov 22 11:01:43 crc kubenswrapper[4772]: I1122 11:01:43.728405 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3cffe78d-a3e8-4fd3-a97a-03c361381b8b","Type":"ContainerStarted","Data":"d25429e38c703e93c2adcab07f77d6c9e3e6238496e08aacd006a6c74b47ed9e"} Nov 22 11:01:43 crc kubenswrapper[4772]: I1122 11:01:43.728418 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3cffe78d-a3e8-4fd3-a97a-03c361381b8b","Type":"ContainerStarted","Data":"8a3ae76369e3a0878c326fe484b6fa0c6f15dcc84344919f78b803107eed889b"} Nov 22 11:01:43 crc kubenswrapper[4772]: I1122 11:01:43.751996 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.751976714 podStartE2EDuration="2.751976714s" podCreationTimestamp="2025-11-22 11:01:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 11:01:43.74418925 +0000 UTC m=+1423.983633754" watchObservedRunningTime="2025-11-22 11:01:43.751976714 +0000 UTC m=+1423.991421208" Nov 22 11:01:44 crc kubenswrapper[4772]: I1122 11:01:44.092622 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 11:01:44 crc kubenswrapper[4772]: W1122 11:01:44.100285 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f2864f9_ec60_4f46_b8be_e83e62cbea68.slice/crio-047577aec939000fc071b50546e33ebbec88f9f8f7b3beab1106b3f2db21b3a6 WatchSource:0}: Error finding container 047577aec939000fc071b50546e33ebbec88f9f8f7b3beab1106b3f2db21b3a6: Status 404 returned error can't find the container with id 047577aec939000fc071b50546e33ebbec88f9f8f7b3beab1106b3f2db21b3a6 Nov 22 11:01:44 crc kubenswrapper[4772]: I1122 11:01:44.724667 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 22 11:01:44 crc kubenswrapper[4772]: I1122 11:01:44.752475 4772 generic.go:334] "Generic (PLEG): container finished" podID="a35672ad-5999-4891-972a-1461f9928b94" containerID="cbe62db989db8186a1e649c0bf95120e0bcff5070bb7d166e6acee3911dfa12b" exitCode=0 Nov 22 11:01:44 crc kubenswrapper[4772]: I1122 11:01:44.752559 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a35672ad-5999-4891-972a-1461f9928b94","Type":"ContainerDied","Data":"cbe62db989db8186a1e649c0bf95120e0bcff5070bb7d166e6acee3911dfa12b"} Nov 22 11:01:44 crc kubenswrapper[4772]: I1122 11:01:44.752593 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a35672ad-5999-4891-972a-1461f9928b94","Type":"ContainerDied","Data":"668b5fdf88e8782a5bfb9ec67cc7cfaf2550de069464b67c7d970733153434c9"} Nov 22 11:01:44 crc kubenswrapper[4772]: I1122 11:01:44.752613 4772 scope.go:117] "RemoveContainer" containerID="cbe62db989db8186a1e649c0bf95120e0bcff5070bb7d166e6acee3911dfa12b" Nov 22 11:01:44 crc kubenswrapper[4772]: I1122 11:01:44.752800 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 22 11:01:44 crc kubenswrapper[4772]: I1122 11:01:44.765218 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"851484de-f073-4750-85c4-cdcfda340b52","Type":"ContainerStarted","Data":"e4fb40fcf554fb6c5016c56c3011ab05b4f070d2be2f22bb270e51e64aff16f3"} Nov 22 11:01:44 crc kubenswrapper[4772]: I1122 11:01:44.769119 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1f2864f9-ec60-4f46-b8be-e83e62cbea68","Type":"ContainerStarted","Data":"ebdf6e23cc34d161b60670d04c1ec3b3fef6eec3102b2bcefd3bdd42321cee78"} Nov 22 11:01:44 crc kubenswrapper[4772]: I1122 11:01:44.769210 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1f2864f9-ec60-4f46-b8be-e83e62cbea68","Type":"ContainerStarted","Data":"047577aec939000fc071b50546e33ebbec88f9f8f7b3beab1106b3f2db21b3a6"} Nov 22 11:01:44 crc kubenswrapper[4772]: I1122 11:01:44.788999 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.788982021 podStartE2EDuration="1.788982021s" podCreationTimestamp="2025-11-22 11:01:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 11:01:44.785386851 +0000 UTC m=+1425.024831355" watchObservedRunningTime="2025-11-22 11:01:44.788982021 +0000 UTC m=+1425.028426515" Nov 22 11:01:44 crc kubenswrapper[4772]: I1122 11:01:44.797978 4772 scope.go:117] "RemoveContainer" containerID="e2673e0bf6e91a0c663a0dbef2d88ab87004dd8445653d8acfc47393cb3d0f5f" Nov 22 11:01:44 crc kubenswrapper[4772]: I1122 11:01:44.815550 4772 scope.go:117] "RemoveContainer" containerID="cbe62db989db8186a1e649c0bf95120e0bcff5070bb7d166e6acee3911dfa12b" Nov 22 11:01:44 crc kubenswrapper[4772]: E1122 11:01:44.816002 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbe62db989db8186a1e649c0bf95120e0bcff5070bb7d166e6acee3911dfa12b\": container with ID starting with cbe62db989db8186a1e649c0bf95120e0bcff5070bb7d166e6acee3911dfa12b not found: ID does not exist" containerID="cbe62db989db8186a1e649c0bf95120e0bcff5070bb7d166e6acee3911dfa12b" Nov 22 11:01:44 crc 
kubenswrapper[4772]: I1122 11:01:44.816050 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbe62db989db8186a1e649c0bf95120e0bcff5070bb7d166e6acee3911dfa12b"} err="failed to get container status \"cbe62db989db8186a1e649c0bf95120e0bcff5070bb7d166e6acee3911dfa12b\": rpc error: code = NotFound desc = could not find container \"cbe62db989db8186a1e649c0bf95120e0bcff5070bb7d166e6acee3911dfa12b\": container with ID starting with cbe62db989db8186a1e649c0bf95120e0bcff5070bb7d166e6acee3911dfa12b not found: ID does not exist" Nov 22 11:01:44 crc kubenswrapper[4772]: I1122 11:01:44.816095 4772 scope.go:117] "RemoveContainer" containerID="e2673e0bf6e91a0c663a0dbef2d88ab87004dd8445653d8acfc47393cb3d0f5f" Nov 22 11:01:44 crc kubenswrapper[4772]: E1122 11:01:44.816406 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2673e0bf6e91a0c663a0dbef2d88ab87004dd8445653d8acfc47393cb3d0f5f\": container with ID starting with e2673e0bf6e91a0c663a0dbef2d88ab87004dd8445653d8acfc47393cb3d0f5f not found: ID does not exist" containerID="e2673e0bf6e91a0c663a0dbef2d88ab87004dd8445653d8acfc47393cb3d0f5f" Nov 22 11:01:44 crc kubenswrapper[4772]: I1122 11:01:44.816427 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2673e0bf6e91a0c663a0dbef2d88ab87004dd8445653d8acfc47393cb3d0f5f"} err="failed to get container status \"e2673e0bf6e91a0c663a0dbef2d88ab87004dd8445653d8acfc47393cb3d0f5f\": rpc error: code = NotFound desc = could not find container \"e2673e0bf6e91a0c663a0dbef2d88ab87004dd8445653d8acfc47393cb3d0f5f\": container with ID starting with e2673e0bf6e91a0c663a0dbef2d88ab87004dd8445653d8acfc47393cb3d0f5f not found: ID does not exist" Nov 22 11:01:44 crc kubenswrapper[4772]: I1122 11:01:44.826462 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a35672ad-5999-4891-972a-1461f9928b94-config-data\") pod \"a35672ad-5999-4891-972a-1461f9928b94\" (UID: \"a35672ad-5999-4891-972a-1461f9928b94\") " Nov 22 11:01:44 crc kubenswrapper[4772]: I1122 11:01:44.826529 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6q5g4\" (UniqueName: \"kubernetes.io/projected/a35672ad-5999-4891-972a-1461f9928b94-kube-api-access-6q5g4\") pod \"a35672ad-5999-4891-972a-1461f9928b94\" (UID: \"a35672ad-5999-4891-972a-1461f9928b94\") " Nov 22 11:01:44 crc kubenswrapper[4772]: I1122 11:01:44.826613 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a35672ad-5999-4891-972a-1461f9928b94-logs\") pod \"a35672ad-5999-4891-972a-1461f9928b94\" (UID: \"a35672ad-5999-4891-972a-1461f9928b94\") " Nov 22 11:01:44 crc kubenswrapper[4772]: I1122 11:01:44.826637 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a35672ad-5999-4891-972a-1461f9928b94-combined-ca-bundle\") pod \"a35672ad-5999-4891-972a-1461f9928b94\" (UID: \"a35672ad-5999-4891-972a-1461f9928b94\") " Nov 22 11:01:44 crc kubenswrapper[4772]: I1122 11:01:44.827830 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a35672ad-5999-4891-972a-1461f9928b94-logs" (OuterVolumeSpecName: "logs") pod "a35672ad-5999-4891-972a-1461f9928b94" (UID: "a35672ad-5999-4891-972a-1461f9928b94"). 
InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:01:44 crc kubenswrapper[4772]: I1122 11:01:44.837616 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a35672ad-5999-4891-972a-1461f9928b94-kube-api-access-6q5g4" (OuterVolumeSpecName: "kube-api-access-6q5g4") pod "a35672ad-5999-4891-972a-1461f9928b94" (UID: "a35672ad-5999-4891-972a-1461f9928b94"). InnerVolumeSpecName "kube-api-access-6q5g4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:01:44 crc kubenswrapper[4772]: I1122 11:01:44.853112 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a35672ad-5999-4891-972a-1461f9928b94-config-data" (OuterVolumeSpecName: "config-data") pod "a35672ad-5999-4891-972a-1461f9928b94" (UID: "a35672ad-5999-4891-972a-1461f9928b94"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:01:44 crc kubenswrapper[4772]: I1122 11:01:44.866583 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a35672ad-5999-4891-972a-1461f9928b94-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a35672ad-5999-4891-972a-1461f9928b94" (UID: "a35672ad-5999-4891-972a-1461f9928b94"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:01:44 crc kubenswrapper[4772]: I1122 11:01:44.930754 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6q5g4\" (UniqueName: \"kubernetes.io/projected/a35672ad-5999-4891-972a-1461f9928b94-kube-api-access-6q5g4\") on node \"crc\" DevicePath \"\"" Nov 22 11:01:44 crc kubenswrapper[4772]: I1122 11:01:44.930788 4772 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a35672ad-5999-4891-972a-1461f9928b94-logs\") on node \"crc\" DevicePath \"\"" Nov 22 11:01:44 crc kubenswrapper[4772]: I1122 11:01:44.930798 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a35672ad-5999-4891-972a-1461f9928b94-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 11:01:44 crc kubenswrapper[4772]: I1122 11:01:44.930807 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a35672ad-5999-4891-972a-1461f9928b94-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 11:01:45 crc kubenswrapper[4772]: I1122 11:01:45.103144 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 22 11:01:45 crc kubenswrapper[4772]: I1122 11:01:45.110553 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 22 11:01:45 crc kubenswrapper[4772]: I1122 11:01:45.129478 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 22 11:01:45 crc kubenswrapper[4772]: E1122 11:01:45.129986 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a35672ad-5999-4891-972a-1461f9928b94" containerName="nova-api-log" Nov 22 11:01:45 crc kubenswrapper[4772]: I1122 11:01:45.130007 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="a35672ad-5999-4891-972a-1461f9928b94" containerName="nova-api-log" Nov 22 11:01:45 crc kubenswrapper[4772]: E1122 11:01:45.130322 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a35672ad-5999-4891-972a-1461f9928b94" containerName="nova-api-api" Nov 22 11:01:45 crc kubenswrapper[4772]: 
I1122 11:01:45.130336 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="a35672ad-5999-4891-972a-1461f9928b94" containerName="nova-api-api" Nov 22 11:01:45 crc kubenswrapper[4772]: I1122 11:01:45.130628 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="a35672ad-5999-4891-972a-1461f9928b94" containerName="nova-api-api" Nov 22 11:01:45 crc kubenswrapper[4772]: I1122 11:01:45.130657 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="a35672ad-5999-4891-972a-1461f9928b94" containerName="nova-api-log" Nov 22 11:01:45 crc kubenswrapper[4772]: I1122 11:01:45.131915 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 22 11:01:45 crc kubenswrapper[4772]: I1122 11:01:45.134515 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 22 11:01:45 crc kubenswrapper[4772]: I1122 11:01:45.143678 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 22 11:01:45 crc kubenswrapper[4772]: I1122 11:01:45.235708 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlgqf\" (UniqueName: \"kubernetes.io/projected/7502e19b-9558-477a-86b6-d5c0da99a0b0-kube-api-access-dlgqf\") pod \"nova-api-0\" (UID: \"7502e19b-9558-477a-86b6-d5c0da99a0b0\") " pod="openstack/nova-api-0" Nov 22 11:01:45 crc kubenswrapper[4772]: I1122 11:01:45.235868 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7502e19b-9558-477a-86b6-d5c0da99a0b0-logs\") pod \"nova-api-0\" (UID: \"7502e19b-9558-477a-86b6-d5c0da99a0b0\") " pod="openstack/nova-api-0" Nov 22 11:01:45 crc kubenswrapper[4772]: I1122 11:01:45.235945 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7502e19b-9558-477a-86b6-d5c0da99a0b0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7502e19b-9558-477a-86b6-d5c0da99a0b0\") " pod="openstack/nova-api-0" Nov 22 11:01:45 crc kubenswrapper[4772]: I1122 11:01:45.236147 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7502e19b-9558-477a-86b6-d5c0da99a0b0-config-data\") pod \"nova-api-0\" (UID: \"7502e19b-9558-477a-86b6-d5c0da99a0b0\") " pod="openstack/nova-api-0" Nov 22 11:01:45 crc kubenswrapper[4772]: I1122 11:01:45.338725 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7502e19b-9558-477a-86b6-d5c0da99a0b0-logs\") pod \"nova-api-0\" (UID: \"7502e19b-9558-477a-86b6-d5c0da99a0b0\") " pod="openstack/nova-api-0" Nov 22 11:01:45 crc kubenswrapper[4772]: I1122 11:01:45.338843 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7502e19b-9558-477a-86b6-d5c0da99a0b0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7502e19b-9558-477a-86b6-d5c0da99a0b0\") " pod="openstack/nova-api-0" Nov 22 11:01:45 crc kubenswrapper[4772]: I1122 11:01:45.338896 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7502e19b-9558-477a-86b6-d5c0da99a0b0-config-data\") pod \"nova-api-0\" (UID: \"7502e19b-9558-477a-86b6-d5c0da99a0b0\") " pod="openstack/nova-api-0" Nov 22 11:01:45 
crc kubenswrapper[4772]: I1122 11:01:45.339013 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlgqf\" (UniqueName: \"kubernetes.io/projected/7502e19b-9558-477a-86b6-d5c0da99a0b0-kube-api-access-dlgqf\") pod \"nova-api-0\" (UID: \"7502e19b-9558-477a-86b6-d5c0da99a0b0\") " pod="openstack/nova-api-0" Nov 22 11:01:45 crc kubenswrapper[4772]: I1122 11:01:45.339233 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7502e19b-9558-477a-86b6-d5c0da99a0b0-logs\") pod \"nova-api-0\" (UID: \"7502e19b-9558-477a-86b6-d5c0da99a0b0\") " pod="openstack/nova-api-0" Nov 22 11:01:45 crc kubenswrapper[4772]: I1122 11:01:45.342456 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7502e19b-9558-477a-86b6-d5c0da99a0b0-config-data\") pod \"nova-api-0\" (UID: \"7502e19b-9558-477a-86b6-d5c0da99a0b0\") " pod="openstack/nova-api-0" Nov 22 11:01:45 crc kubenswrapper[4772]: I1122 11:01:45.342734 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7502e19b-9558-477a-86b6-d5c0da99a0b0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7502e19b-9558-477a-86b6-d5c0da99a0b0\") " pod="openstack/nova-api-0" Nov 22 11:01:45 crc kubenswrapper[4772]: I1122 11:01:45.355506 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlgqf\" (UniqueName: \"kubernetes.io/projected/7502e19b-9558-477a-86b6-d5c0da99a0b0-kube-api-access-dlgqf\") pod \"nova-api-0\" (UID: \"7502e19b-9558-477a-86b6-d5c0da99a0b0\") " pod="openstack/nova-api-0" Nov 22 11:01:45 crc kubenswrapper[4772]: I1122 11:01:45.423960 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a35672ad-5999-4891-972a-1461f9928b94" path="/var/lib/kubelet/pods/a35672ad-5999-4891-972a-1461f9928b94/volumes" Nov 22 11:01:45 crc kubenswrapper[4772]: I1122 11:01:45.452984 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 22 11:01:45 crc kubenswrapper[4772]: W1122 11:01:45.960782 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7502e19b_9558_477a_86b6_d5c0da99a0b0.slice/crio-d2237b402c61b747f5e270bbf070c2a57cfe044c147ee5b1957d7122c2f11b96 WatchSource:0}: Error finding container d2237b402c61b747f5e270bbf070c2a57cfe044c147ee5b1957d7122c2f11b96: Status 404 returned error can't find the container with id d2237b402c61b747f5e270bbf070c2a57cfe044c147ee5b1957d7122c2f11b96 Nov 22 11:01:45 crc kubenswrapper[4772]: I1122 11:01:45.962048 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 22 11:01:46 crc kubenswrapper[4772]: I1122 11:01:46.027383 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 22 11:01:46 crc kubenswrapper[4772]: I1122 11:01:46.792449 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7502e19b-9558-477a-86b6-d5c0da99a0b0","Type":"ContainerStarted","Data":"3231890003bdf085bcf4cea7ee03ec19bfc2779a7d5c24558315441b09c1a730"} Nov 22 11:01:46 crc kubenswrapper[4772]: I1122 11:01:46.792807 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7502e19b-9558-477a-86b6-d5c0da99a0b0","Type":"ContainerStarted","Data":"68ad3b8db21e8b2c5122aac8528c489459cfc44c5e0ece7cdb40e9fe0b8c4db3"} Nov 22 11:01:46 crc kubenswrapper[4772]: I1122 11:01:46.792821 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7502e19b-9558-477a-86b6-d5c0da99a0b0","Type":"ContainerStarted","Data":"d2237b402c61b747f5e270bbf070c2a57cfe044c147ee5b1957d7122c2f11b96"} Nov 22 11:01:46 crc kubenswrapper[4772]: I1122 11:01:46.794796 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"851484de-f073-4750-85c4-cdcfda340b52","Type":"ContainerStarted","Data":"a1494096381354b7ffde89c679004222772a1c84a5a27fb783df077eef2dc9ec"} Nov 22 11:01:46 crc kubenswrapper[4772]: I1122 11:01:46.794989 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 22 11:01:46 crc kubenswrapper[4772]: I1122 11:01:46.820228 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.820191522 podStartE2EDuration="1.820191522s" podCreationTimestamp="2025-11-22 11:01:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 11:01:46.811754321 +0000 UTC m=+1427.051198825" watchObservedRunningTime="2025-11-22 11:01:46.820191522 +0000 UTC m=+1427.059636016" Nov 22 11:01:46 crc kubenswrapper[4772]: I1122 11:01:46.837938 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.801294408 podStartE2EDuration="7.837916825s" podCreationTimestamp="2025-11-22 11:01:39 +0000 UTC" firstStartedPulling="2025-11-22 11:01:40.928968881 +0000 UTC m=+1421.168413375" lastFinishedPulling="2025-11-22 11:01:45.965591298 +0000 UTC m=+1426.205035792" observedRunningTime="2025-11-22 11:01:46.834257254 +0000 UTC m=+1427.073701748" watchObservedRunningTime="2025-11-22 11:01:46.837916825 +0000 UTC m=+1427.077361319" Nov 22 11:01:47 crc kubenswrapper[4772]: I1122 11:01:47.118855 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/nova-metadata-0" Nov 22 11:01:47 crc kubenswrapper[4772]: I1122 11:01:47.119652 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 22 11:01:48 crc kubenswrapper[4772]: I1122 11:01:48.639589 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 22 11:01:48 crc kubenswrapper[4772]: I1122 11:01:48.816529 4772 generic.go:334] "Generic (PLEG): container finished" podID="3f52d3ae-6917-4b9a-9f1e-533a35e47aaf" containerID="1deddd5e209d6dd7109a121b569853e5e7e08e0d376b5e08b2b32dcb070598a2" exitCode=0 Nov 22 11:01:48 crc kubenswrapper[4772]: I1122 11:01:48.816681 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-669bd" event={"ID":"3f52d3ae-6917-4b9a-9f1e-533a35e47aaf","Type":"ContainerDied","Data":"1deddd5e209d6dd7109a121b569853e5e7e08e0d376b5e08b2b32dcb070598a2"} Nov 22 11:01:50 crc kubenswrapper[4772]: I1122 11:01:50.209537 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-669bd" Nov 22 11:01:50 crc kubenswrapper[4772]: I1122 11:01:50.335739 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f52d3ae-6917-4b9a-9f1e-533a35e47aaf-config-data\") pod \"3f52d3ae-6917-4b9a-9f1e-533a35e47aaf\" (UID: \"3f52d3ae-6917-4b9a-9f1e-533a35e47aaf\") " Nov 22 11:01:50 crc kubenswrapper[4772]: I1122 11:01:50.335883 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f52d3ae-6917-4b9a-9f1e-533a35e47aaf-combined-ca-bundle\") pod \"3f52d3ae-6917-4b9a-9f1e-533a35e47aaf\" (UID: \"3f52d3ae-6917-4b9a-9f1e-533a35e47aaf\") " Nov 22 11:01:50 crc kubenswrapper[4772]: I1122 11:01:50.336072 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbflh\" (UniqueName: \"kubernetes.io/projected/3f52d3ae-6917-4b9a-9f1e-533a35e47aaf-kube-api-access-rbflh\") pod \"3f52d3ae-6917-4b9a-9f1e-533a35e47aaf\" (UID: \"3f52d3ae-6917-4b9a-9f1e-533a35e47aaf\") " Nov 22 11:01:50 crc kubenswrapper[4772]: I1122 11:01:50.336187 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f52d3ae-6917-4b9a-9f1e-533a35e47aaf-scripts\") pod \"3f52d3ae-6917-4b9a-9f1e-533a35e47aaf\" (UID: \"3f52d3ae-6917-4b9a-9f1e-533a35e47aaf\") " Nov 22 11:01:50 crc kubenswrapper[4772]: I1122 11:01:50.341850 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f52d3ae-6917-4b9a-9f1e-533a35e47aaf-kube-api-access-rbflh" (OuterVolumeSpecName: "kube-api-access-rbflh") pod "3f52d3ae-6917-4b9a-9f1e-533a35e47aaf" (UID: "3f52d3ae-6917-4b9a-9f1e-533a35e47aaf"). InnerVolumeSpecName "kube-api-access-rbflh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:01:50 crc kubenswrapper[4772]: I1122 11:01:50.342303 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f52d3ae-6917-4b9a-9f1e-533a35e47aaf-scripts" (OuterVolumeSpecName: "scripts") pod "3f52d3ae-6917-4b9a-9f1e-533a35e47aaf" (UID: "3f52d3ae-6917-4b9a-9f1e-533a35e47aaf"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:01:50 crc kubenswrapper[4772]: I1122 11:01:50.368547 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f52d3ae-6917-4b9a-9f1e-533a35e47aaf-config-data" (OuterVolumeSpecName: "config-data") pod "3f52d3ae-6917-4b9a-9f1e-533a35e47aaf" (UID: "3f52d3ae-6917-4b9a-9f1e-533a35e47aaf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:01:50 crc kubenswrapper[4772]: I1122 11:01:50.384074 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f52d3ae-6917-4b9a-9f1e-533a35e47aaf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3f52d3ae-6917-4b9a-9f1e-533a35e47aaf" (UID: "3f52d3ae-6917-4b9a-9f1e-533a35e47aaf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:01:50 crc kubenswrapper[4772]: I1122 11:01:50.438212 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f52d3ae-6917-4b9a-9f1e-533a35e47aaf-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 11:01:50 crc kubenswrapper[4772]: I1122 11:01:50.438252 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f52d3ae-6917-4b9a-9f1e-533a35e47aaf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 11:01:50 crc kubenswrapper[4772]: I1122 11:01:50.438264 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbflh\" (UniqueName: \"kubernetes.io/projected/3f52d3ae-6917-4b9a-9f1e-533a35e47aaf-kube-api-access-rbflh\") on node \"crc\" DevicePath \"\"" Nov 22 11:01:50 crc kubenswrapper[4772]: I1122 11:01:50.438276 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f52d3ae-6917-4b9a-9f1e-533a35e47aaf-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 11:01:50 crc kubenswrapper[4772]: I1122 11:01:50.839174 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-669bd" event={"ID":"3f52d3ae-6917-4b9a-9f1e-533a35e47aaf","Type":"ContainerDied","Data":"82e836b3a6638e27de9173243af781ec57b2cc4710a2ce17ad59b2121134529b"} Nov 22 11:01:50 crc kubenswrapper[4772]: I1122 11:01:50.839225 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82e836b3a6638e27de9173243af781ec57b2cc4710a2ce17ad59b2121134529b" Nov 22 11:01:50 crc kubenswrapper[4772]: I1122 11:01:50.839503 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-669bd" Nov 22 11:01:50 crc kubenswrapper[4772]: I1122 11:01:50.914990 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 22 11:01:50 crc kubenswrapper[4772]: E1122 11:01:50.915692 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f52d3ae-6917-4b9a-9f1e-533a35e47aaf" containerName="nova-cell1-conductor-db-sync" Nov 22 11:01:50 crc kubenswrapper[4772]: I1122 11:01:50.915777 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f52d3ae-6917-4b9a-9f1e-533a35e47aaf" containerName="nova-cell1-conductor-db-sync" Nov 22 11:01:50 crc kubenswrapper[4772]: I1122 11:01:50.916043 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f52d3ae-6917-4b9a-9f1e-533a35e47aaf" containerName="nova-cell1-conductor-db-sync" Nov 22 11:01:50 crc kubenswrapper[4772]: I1122 11:01:50.916885 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 22 11:01:50 crc kubenswrapper[4772]: I1122 11:01:50.918812 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 22 11:01:50 crc kubenswrapper[4772]: I1122 11:01:50.925836 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 22 11:01:51 crc kubenswrapper[4772]: I1122 11:01:51.046393 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0a147b4-4445-4f7b-b22f-97db02340306-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"a0a147b4-4445-4f7b-b22f-97db02340306\") " pod="openstack/nova-cell1-conductor-0" Nov 22 11:01:51 crc kubenswrapper[4772]: I1122 11:01:51.046508 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ght4g\" (UniqueName: \"kubernetes.io/projected/a0a147b4-4445-4f7b-b22f-97db02340306-kube-api-access-ght4g\") pod \"nova-cell1-conductor-0\" (UID: \"a0a147b4-4445-4f7b-b22f-97db02340306\") " pod="openstack/nova-cell1-conductor-0" Nov 22 11:01:51 crc kubenswrapper[4772]: I1122 11:01:51.046846 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0a147b4-4445-4f7b-b22f-97db02340306-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"a0a147b4-4445-4f7b-b22f-97db02340306\") " pod="openstack/nova-cell1-conductor-0" Nov 22 11:01:51 crc kubenswrapper[4772]: I1122 11:01:51.149213 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0a147b4-4445-4f7b-b22f-97db02340306-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"a0a147b4-4445-4f7b-b22f-97db02340306\") " pod="openstack/nova-cell1-conductor-0" Nov 22 11:01:51 crc kubenswrapper[4772]: I1122 11:01:51.149560 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ght4g\" (UniqueName: \"kubernetes.io/projected/a0a147b4-4445-4f7b-b22f-97db02340306-kube-api-access-ght4g\") pod \"nova-cell1-conductor-0\" (UID: \"a0a147b4-4445-4f7b-b22f-97db02340306\") " pod="openstack/nova-cell1-conductor-0" Nov 22 11:01:51 crc kubenswrapper[4772]: I1122 11:01:51.149631 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a0a147b4-4445-4f7b-b22f-97db02340306-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"a0a147b4-4445-4f7b-b22f-97db02340306\") " pod="openstack/nova-cell1-conductor-0" Nov 22 11:01:51 crc kubenswrapper[4772]: I1122 11:01:51.154635 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0a147b4-4445-4f7b-b22f-97db02340306-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"a0a147b4-4445-4f7b-b22f-97db02340306\") " pod="openstack/nova-cell1-conductor-0" Nov 22 11:01:51 crc kubenswrapper[4772]: I1122 11:01:51.161698 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0a147b4-4445-4f7b-b22f-97db02340306-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"a0a147b4-4445-4f7b-b22f-97db02340306\") " pod="openstack/nova-cell1-conductor-0" Nov 22 11:01:51 crc kubenswrapper[4772]: I1122 11:01:51.166534 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ght4g\" (UniqueName: \"kubernetes.io/projected/a0a147b4-4445-4f7b-b22f-97db02340306-kube-api-access-ght4g\") pod \"nova-cell1-conductor-0\" (UID: \"a0a147b4-4445-4f7b-b22f-97db02340306\") " pod="openstack/nova-cell1-conductor-0" Nov 22 11:01:51 crc kubenswrapper[4772]: I1122 11:01:51.233072 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 22 11:01:51 crc kubenswrapper[4772]: W1122 11:01:51.691064 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda0a147b4_4445_4f7b_b22f_97db02340306.slice/crio-42fdc2a0bd3748a49952bb49944162680a11c90848fd8eff65c46bc287c5d970 WatchSource:0}: Error finding container 42fdc2a0bd3748a49952bb49944162680a11c90848fd8eff65c46bc287c5d970: Status 404 returned error can't find the container with id 42fdc2a0bd3748a49952bb49944162680a11c90848fd8eff65c46bc287c5d970 Nov 22 11:01:51 crc kubenswrapper[4772]: I1122 11:01:51.695192 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 22 11:01:51 crc kubenswrapper[4772]: I1122 11:01:51.852542 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"a0a147b4-4445-4f7b-b22f-97db02340306","Type":"ContainerStarted","Data":"42fdc2a0bd3748a49952bb49944162680a11c90848fd8eff65c46bc287c5d970"} Nov 22 11:01:52 crc kubenswrapper[4772]: I1122 11:01:52.119113 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 22 11:01:52 crc kubenswrapper[4772]: I1122 11:01:52.119415 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 22 11:01:52 crc kubenswrapper[4772]: I1122 11:01:52.863235 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"a0a147b4-4445-4f7b-b22f-97db02340306","Type":"ContainerStarted","Data":"75123019921bf278dc20f04bfe0c16e7ca301815e19432fc55e95e02d9391c0e"} Nov 22 11:01:53 crc kubenswrapper[4772]: I1122 11:01:53.131428 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="3cffe78d-a3e8-4fd3-a97a-03c361381b8b" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.193:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 22 11:01:53 crc 
kubenswrapper[4772]: I1122 11:01:53.131428 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="3cffe78d-a3e8-4fd3-a97a-03c361381b8b" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.193:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 22 11:01:53 crc kubenswrapper[4772]: I1122 11:01:53.639277 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 22 11:01:53 crc kubenswrapper[4772]: I1122 11:01:53.677373 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 22 11:01:53 crc kubenswrapper[4772]: I1122 11:01:53.697926 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=3.697904706 podStartE2EDuration="3.697904706s" podCreationTimestamp="2025-11-22 11:01:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 11:01:52.891559289 +0000 UTC m=+1433.131003783" watchObservedRunningTime="2025-11-22 11:01:53.697904706 +0000 UTC m=+1433.937349200" Nov 22 11:01:53 crc kubenswrapper[4772]: I1122 11:01:53.871709 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Nov 22 11:01:53 crc kubenswrapper[4772]: I1122 11:01:53.906789 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 22 11:01:55 crc kubenswrapper[4772]: I1122 11:01:55.453618 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 22 11:01:55 crc kubenswrapper[4772]: I1122 11:01:55.454016 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 22 11:01:56 crc kubenswrapper[4772]: I1122 11:01:56.258666 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Nov 22 11:01:56 crc kubenswrapper[4772]: I1122 11:01:56.536310 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7502e19b-9558-477a-86b6-d5c0da99a0b0" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.195:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 11:01:56 crc kubenswrapper[4772]: I1122 11:01:56.536341 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7502e19b-9558-477a-86b6-d5c0da99a0b0" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.195:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 11:02:02 crc kubenswrapper[4772]: I1122 11:02:02.123379 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 22 11:02:02 crc kubenswrapper[4772]: I1122 11:02:02.124370 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 22 11:02:02 crc kubenswrapper[4772]: I1122 11:02:02.129319 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 22 11:02:02 crc kubenswrapper[4772]: I1122 11:02:02.958251 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 22 11:02:04 crc kubenswrapper[4772]: 
I1122 11:02:04.865965 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 22 11:02:04 crc kubenswrapper[4772]: I1122 11:02:04.930971 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd173c58-6b77-43c7-b3de-6b358dab5df2-combined-ca-bundle\") pod \"bd173c58-6b77-43c7-b3de-6b358dab5df2\" (UID: \"bd173c58-6b77-43c7-b3de-6b358dab5df2\") " Nov 22 11:02:04 crc kubenswrapper[4772]: I1122 11:02:04.931133 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd173c58-6b77-43c7-b3de-6b358dab5df2-config-data\") pod \"bd173c58-6b77-43c7-b3de-6b358dab5df2\" (UID: \"bd173c58-6b77-43c7-b3de-6b358dab5df2\") " Nov 22 11:02:04 crc kubenswrapper[4772]: I1122 11:02:04.931186 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ng8m\" (UniqueName: \"kubernetes.io/projected/bd173c58-6b77-43c7-b3de-6b358dab5df2-kube-api-access-7ng8m\") pod \"bd173c58-6b77-43c7-b3de-6b358dab5df2\" (UID: \"bd173c58-6b77-43c7-b3de-6b358dab5df2\") " Nov 22 11:02:04 crc kubenswrapper[4772]: I1122 11:02:04.936341 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd173c58-6b77-43c7-b3de-6b358dab5df2-kube-api-access-7ng8m" (OuterVolumeSpecName: "kube-api-access-7ng8m") pod "bd173c58-6b77-43c7-b3de-6b358dab5df2" (UID: "bd173c58-6b77-43c7-b3de-6b358dab5df2"). InnerVolumeSpecName "kube-api-access-7ng8m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:02:04 crc kubenswrapper[4772]: I1122 11:02:04.960255 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd173c58-6b77-43c7-b3de-6b358dab5df2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bd173c58-6b77-43c7-b3de-6b358dab5df2" (UID: "bd173c58-6b77-43c7-b3de-6b358dab5df2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:02:04 crc kubenswrapper[4772]: I1122 11:02:04.962233 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd173c58-6b77-43c7-b3de-6b358dab5df2-config-data" (OuterVolumeSpecName: "config-data") pod "bd173c58-6b77-43c7-b3de-6b358dab5df2" (UID: "bd173c58-6b77-43c7-b3de-6b358dab5df2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:02:05 crc kubenswrapper[4772]: I1122 11:02:05.014447 4772 generic.go:334] "Generic (PLEG): container finished" podID="bd173c58-6b77-43c7-b3de-6b358dab5df2" containerID="24ca65f5e6edfcc4edd9eea79766d66d6b57fa5f7c391ba129feeee0f2b360a4" exitCode=137 Nov 22 11:02:05 crc kubenswrapper[4772]: I1122 11:02:05.014482 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"bd173c58-6b77-43c7-b3de-6b358dab5df2","Type":"ContainerDied","Data":"24ca65f5e6edfcc4edd9eea79766d66d6b57fa5f7c391ba129feeee0f2b360a4"} Nov 22 11:02:05 crc kubenswrapper[4772]: I1122 11:02:05.014522 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"bd173c58-6b77-43c7-b3de-6b358dab5df2","Type":"ContainerDied","Data":"ef2f6c56d074b107ca2af9d5f11fec0b2cbae7aeffd4d28b499c90faa46f299b"} Nov 22 11:02:05 crc kubenswrapper[4772]: I1122 11:02:05.014522 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 22 11:02:05 crc kubenswrapper[4772]: I1122 11:02:05.014537 4772 scope.go:117] "RemoveContainer" containerID="24ca65f5e6edfcc4edd9eea79766d66d6b57fa5f7c391ba129feeee0f2b360a4" Nov 22 11:02:05 crc kubenswrapper[4772]: I1122 11:02:05.036314 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd173c58-6b77-43c7-b3de-6b358dab5df2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 11:02:05 crc kubenswrapper[4772]: I1122 11:02:05.036366 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd173c58-6b77-43c7-b3de-6b358dab5df2-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 11:02:05 crc kubenswrapper[4772]: I1122 11:02:05.036376 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ng8m\" (UniqueName: \"kubernetes.io/projected/bd173c58-6b77-43c7-b3de-6b358dab5df2-kube-api-access-7ng8m\") on node \"crc\" DevicePath \"\"" Nov 22 11:02:05 crc kubenswrapper[4772]: I1122 11:02:05.054620 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 11:02:05 crc kubenswrapper[4772]: I1122 11:02:05.056229 4772 scope.go:117] "RemoveContainer" containerID="24ca65f5e6edfcc4edd9eea79766d66d6b57fa5f7c391ba129feeee0f2b360a4" Nov 22 11:02:05 crc kubenswrapper[4772]: E1122 11:02:05.056874 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24ca65f5e6edfcc4edd9eea79766d66d6b57fa5f7c391ba129feeee0f2b360a4\": container with ID starting with 24ca65f5e6edfcc4edd9eea79766d66d6b57fa5f7c391ba129feeee0f2b360a4 not found: ID does not exist" containerID="24ca65f5e6edfcc4edd9eea79766d66d6b57fa5f7c391ba129feeee0f2b360a4" Nov 22 11:02:05 crc kubenswrapper[4772]: I1122 11:02:05.056915 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24ca65f5e6edfcc4edd9eea79766d66d6b57fa5f7c391ba129feeee0f2b360a4"} err="failed to get container status \"24ca65f5e6edfcc4edd9eea79766d66d6b57fa5f7c391ba129feeee0f2b360a4\": rpc error: code = NotFound desc = could not find container \"24ca65f5e6edfcc4edd9eea79766d66d6b57fa5f7c391ba129feeee0f2b360a4\": container with ID starting with 24ca65f5e6edfcc4edd9eea79766d66d6b57fa5f7c391ba129feeee0f2b360a4 not found: ID does not exist" Nov 22 11:02:05 crc kubenswrapper[4772]: I1122 11:02:05.071034 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 11:02:05 crc kubenswrapper[4772]: I1122 11:02:05.078784 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 11:02:05 crc kubenswrapper[4772]: E1122 11:02:05.079277 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd173c58-6b77-43c7-b3de-6b358dab5df2" containerName="nova-cell1-novncproxy-novncproxy" Nov 22 11:02:05 crc kubenswrapper[4772]: I1122 11:02:05.079295 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd173c58-6b77-43c7-b3de-6b358dab5df2" containerName="nova-cell1-novncproxy-novncproxy" Nov 22 11:02:05 crc kubenswrapper[4772]: I1122 11:02:05.079547 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd173c58-6b77-43c7-b3de-6b358dab5df2" containerName="nova-cell1-novncproxy-novncproxy" Nov 22 11:02:05 crc kubenswrapper[4772]: I1122 11:02:05.080335 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 22 11:02:05 crc kubenswrapper[4772]: I1122 11:02:05.089287 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 11:02:05 crc kubenswrapper[4772]: I1122 11:02:05.103026 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Nov 22 11:02:05 crc kubenswrapper[4772]: I1122 11:02:05.103344 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Nov 22 11:02:05 crc kubenswrapper[4772]: I1122 11:02:05.103577 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 22 11:02:05 crc kubenswrapper[4772]: I1122 11:02:05.138098 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/5fbd4e9d-9635-462d-abba-763daf0da369-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"5fbd4e9d-9635-462d-abba-763daf0da369\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 11:02:05 crc kubenswrapper[4772]: I1122 11:02:05.138156 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fbd4e9d-9635-462d-abba-763daf0da369-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"5fbd4e9d-9635-462d-abba-763daf0da369\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 11:02:05 crc kubenswrapper[4772]: I1122 11:02:05.138199 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/5fbd4e9d-9635-462d-abba-763daf0da369-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"5fbd4e9d-9635-462d-abba-763daf0da369\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 11:02:05 crc kubenswrapper[4772]: I1122 11:02:05.138596 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vq4mt\" (UniqueName: \"kubernetes.io/projected/5fbd4e9d-9635-462d-abba-763daf0da369-kube-api-access-vq4mt\") pod \"nova-cell1-novncproxy-0\" (UID: \"5fbd4e9d-9635-462d-abba-763daf0da369\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 11:02:05 crc kubenswrapper[4772]: I1122 11:02:05.138774 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fbd4e9d-9635-462d-abba-763daf0da369-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"5fbd4e9d-9635-462d-abba-763daf0da369\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 11:02:05 crc kubenswrapper[4772]: I1122 11:02:05.240259 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fbd4e9d-9635-462d-abba-763daf0da369-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"5fbd4e9d-9635-462d-abba-763daf0da369\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 11:02:05 crc kubenswrapper[4772]: I1122 11:02:05.240665 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/5fbd4e9d-9635-462d-abba-763daf0da369-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"5fbd4e9d-9635-462d-abba-763daf0da369\") " 
pod="openstack/nova-cell1-novncproxy-0" Nov 22 11:02:05 crc kubenswrapper[4772]: I1122 11:02:05.240691 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fbd4e9d-9635-462d-abba-763daf0da369-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"5fbd4e9d-9635-462d-abba-763daf0da369\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 11:02:05 crc kubenswrapper[4772]: I1122 11:02:05.240726 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/5fbd4e9d-9635-462d-abba-763daf0da369-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"5fbd4e9d-9635-462d-abba-763daf0da369\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 11:02:05 crc kubenswrapper[4772]: I1122 11:02:05.240794 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vq4mt\" (UniqueName: \"kubernetes.io/projected/5fbd4e9d-9635-462d-abba-763daf0da369-kube-api-access-vq4mt\") pod \"nova-cell1-novncproxy-0\" (UID: \"5fbd4e9d-9635-462d-abba-763daf0da369\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 11:02:05 crc kubenswrapper[4772]: I1122 11:02:05.246141 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fbd4e9d-9635-462d-abba-763daf0da369-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"5fbd4e9d-9635-462d-abba-763daf0da369\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 11:02:05 crc kubenswrapper[4772]: I1122 11:02:05.246422 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fbd4e9d-9635-462d-abba-763daf0da369-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"5fbd4e9d-9635-462d-abba-763daf0da369\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 11:02:05 crc kubenswrapper[4772]: I1122 11:02:05.246765 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/5fbd4e9d-9635-462d-abba-763daf0da369-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"5fbd4e9d-9635-462d-abba-763daf0da369\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 11:02:05 crc kubenswrapper[4772]: I1122 11:02:05.247580 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/5fbd4e9d-9635-462d-abba-763daf0da369-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"5fbd4e9d-9635-462d-abba-763daf0da369\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 11:02:05 crc kubenswrapper[4772]: I1122 11:02:05.257460 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vq4mt\" (UniqueName: \"kubernetes.io/projected/5fbd4e9d-9635-462d-abba-763daf0da369-kube-api-access-vq4mt\") pod \"nova-cell1-novncproxy-0\" (UID: \"5fbd4e9d-9635-462d-abba-763daf0da369\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 11:02:05 crc kubenswrapper[4772]: I1122 11:02:05.420616 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 22 11:02:05 crc kubenswrapper[4772]: I1122 11:02:05.424241 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd173c58-6b77-43c7-b3de-6b358dab5df2" path="/var/lib/kubelet/pods/bd173c58-6b77-43c7-b3de-6b358dab5df2/volumes" Nov 22 11:02:05 crc kubenswrapper[4772]: I1122 11:02:05.458633 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 22 11:02:05 crc kubenswrapper[4772]: I1122 11:02:05.458715 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 22 11:02:05 crc kubenswrapper[4772]: I1122 11:02:05.459627 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 22 11:02:05 crc kubenswrapper[4772]: I1122 11:02:05.459656 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 22 11:02:05 crc kubenswrapper[4772]: I1122 11:02:05.463630 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 22 11:02:05 crc kubenswrapper[4772]: I1122 11:02:05.465007 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 22 11:02:05 crc kubenswrapper[4772]: I1122 11:02:05.670064 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-7f4sv"] Nov 22 11:02:05 crc kubenswrapper[4772]: I1122 11:02:05.681234 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-7f4sv" Nov 22 11:02:05 crc kubenswrapper[4772]: I1122 11:02:05.697016 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-7f4sv"] Nov 22 11:02:05 crc kubenswrapper[4772]: I1122 11:02:05.754464 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/464d950a-e1bb-4efb-afdf-37b97a62a42c-dns-swift-storage-0\") pod \"dnsmasq-dns-59cf4bdb65-7f4sv\" (UID: \"464d950a-e1bb-4efb-afdf-37b97a62a42c\") " pod="openstack/dnsmasq-dns-59cf4bdb65-7f4sv" Nov 22 11:02:05 crc kubenswrapper[4772]: I1122 11:02:05.754526 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/464d950a-e1bb-4efb-afdf-37b97a62a42c-ovsdbserver-sb\") pod \"dnsmasq-dns-59cf4bdb65-7f4sv\" (UID: \"464d950a-e1bb-4efb-afdf-37b97a62a42c\") " pod="openstack/dnsmasq-dns-59cf4bdb65-7f4sv" Nov 22 11:02:05 crc kubenswrapper[4772]: I1122 11:02:05.754581 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/464d950a-e1bb-4efb-afdf-37b97a62a42c-dns-svc\") pod \"dnsmasq-dns-59cf4bdb65-7f4sv\" (UID: \"464d950a-e1bb-4efb-afdf-37b97a62a42c\") " pod="openstack/dnsmasq-dns-59cf4bdb65-7f4sv" Nov 22 11:02:05 crc kubenswrapper[4772]: I1122 11:02:05.754602 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxzxk\" (UniqueName: \"kubernetes.io/projected/464d950a-e1bb-4efb-afdf-37b97a62a42c-kube-api-access-kxzxk\") pod \"dnsmasq-dns-59cf4bdb65-7f4sv\" (UID: \"464d950a-e1bb-4efb-afdf-37b97a62a42c\") " pod="openstack/dnsmasq-dns-59cf4bdb65-7f4sv" Nov 22 11:02:05 crc kubenswrapper[4772]: I1122 11:02:05.754633 4772 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/464d950a-e1bb-4efb-afdf-37b97a62a42c-config\") pod \"dnsmasq-dns-59cf4bdb65-7f4sv\" (UID: \"464d950a-e1bb-4efb-afdf-37b97a62a42c\") " pod="openstack/dnsmasq-dns-59cf4bdb65-7f4sv" Nov 22 11:02:05 crc kubenswrapper[4772]: I1122 11:02:05.754714 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/464d950a-e1bb-4efb-afdf-37b97a62a42c-ovsdbserver-nb\") pod \"dnsmasq-dns-59cf4bdb65-7f4sv\" (UID: \"464d950a-e1bb-4efb-afdf-37b97a62a42c\") " pod="openstack/dnsmasq-dns-59cf4bdb65-7f4sv" Nov 22 11:02:05 crc kubenswrapper[4772]: I1122 11:02:05.856343 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/464d950a-e1bb-4efb-afdf-37b97a62a42c-ovsdbserver-nb\") pod \"dnsmasq-dns-59cf4bdb65-7f4sv\" (UID: \"464d950a-e1bb-4efb-afdf-37b97a62a42c\") " pod="openstack/dnsmasq-dns-59cf4bdb65-7f4sv" Nov 22 11:02:05 crc kubenswrapper[4772]: I1122 11:02:05.856452 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/464d950a-e1bb-4efb-afdf-37b97a62a42c-dns-swift-storage-0\") pod \"dnsmasq-dns-59cf4bdb65-7f4sv\" (UID: \"464d950a-e1bb-4efb-afdf-37b97a62a42c\") " pod="openstack/dnsmasq-dns-59cf4bdb65-7f4sv" Nov 22 11:02:05 crc kubenswrapper[4772]: I1122 11:02:05.856490 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/464d950a-e1bb-4efb-afdf-37b97a62a42c-ovsdbserver-sb\") pod \"dnsmasq-dns-59cf4bdb65-7f4sv\" (UID: \"464d950a-e1bb-4efb-afdf-37b97a62a42c\") " pod="openstack/dnsmasq-dns-59cf4bdb65-7f4sv" Nov 22 11:02:05 crc kubenswrapper[4772]: I1122 11:02:05.856541 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/464d950a-e1bb-4efb-afdf-37b97a62a42c-dns-svc\") pod \"dnsmasq-dns-59cf4bdb65-7f4sv\" (UID: \"464d950a-e1bb-4efb-afdf-37b97a62a42c\") " pod="openstack/dnsmasq-dns-59cf4bdb65-7f4sv" Nov 22 11:02:05 crc kubenswrapper[4772]: I1122 11:02:05.856565 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxzxk\" (UniqueName: \"kubernetes.io/projected/464d950a-e1bb-4efb-afdf-37b97a62a42c-kube-api-access-kxzxk\") pod \"dnsmasq-dns-59cf4bdb65-7f4sv\" (UID: \"464d950a-e1bb-4efb-afdf-37b97a62a42c\") " pod="openstack/dnsmasq-dns-59cf4bdb65-7f4sv" Nov 22 11:02:05 crc kubenswrapper[4772]: I1122 11:02:05.856592 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/464d950a-e1bb-4efb-afdf-37b97a62a42c-config\") pod \"dnsmasq-dns-59cf4bdb65-7f4sv\" (UID: \"464d950a-e1bb-4efb-afdf-37b97a62a42c\") " pod="openstack/dnsmasq-dns-59cf4bdb65-7f4sv" Nov 22 11:02:05 crc kubenswrapper[4772]: I1122 11:02:05.857495 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/464d950a-e1bb-4efb-afdf-37b97a62a42c-ovsdbserver-nb\") pod \"dnsmasq-dns-59cf4bdb65-7f4sv\" (UID: \"464d950a-e1bb-4efb-afdf-37b97a62a42c\") " pod="openstack/dnsmasq-dns-59cf4bdb65-7f4sv" Nov 22 11:02:05 crc kubenswrapper[4772]: I1122 11:02:05.857683 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/464d950a-e1bb-4efb-afdf-37b97a62a42c-dns-svc\") pod \"dnsmasq-dns-59cf4bdb65-7f4sv\" (UID: \"464d950a-e1bb-4efb-afdf-37b97a62a42c\") " pod="openstack/dnsmasq-dns-59cf4bdb65-7f4sv" Nov 22 11:02:05 crc kubenswrapper[4772]: I1122 11:02:05.857693 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/464d950a-e1bb-4efb-afdf-37b97a62a42c-config\") pod \"dnsmasq-dns-59cf4bdb65-7f4sv\" (UID: \"464d950a-e1bb-4efb-afdf-37b97a62a42c\") " pod="openstack/dnsmasq-dns-59cf4bdb65-7f4sv" Nov 22 11:02:05 crc kubenswrapper[4772]: I1122 11:02:05.857823 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/464d950a-e1bb-4efb-afdf-37b97a62a42c-dns-swift-storage-0\") pod \"dnsmasq-dns-59cf4bdb65-7f4sv\" (UID: \"464d950a-e1bb-4efb-afdf-37b97a62a42c\") " pod="openstack/dnsmasq-dns-59cf4bdb65-7f4sv" Nov 22 11:02:05 crc kubenswrapper[4772]: I1122 11:02:05.858152 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/464d950a-e1bb-4efb-afdf-37b97a62a42c-ovsdbserver-sb\") pod \"dnsmasq-dns-59cf4bdb65-7f4sv\" (UID: \"464d950a-e1bb-4efb-afdf-37b97a62a42c\") " pod="openstack/dnsmasq-dns-59cf4bdb65-7f4sv" Nov 22 11:02:05 crc kubenswrapper[4772]: I1122 11:02:05.887261 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxzxk\" (UniqueName: \"kubernetes.io/projected/464d950a-e1bb-4efb-afdf-37b97a62a42c-kube-api-access-kxzxk\") pod \"dnsmasq-dns-59cf4bdb65-7f4sv\" (UID: \"464d950a-e1bb-4efb-afdf-37b97a62a42c\") " pod="openstack/dnsmasq-dns-59cf4bdb65-7f4sv" Nov 22 11:02:05 crc kubenswrapper[4772]: W1122 11:02:05.957805 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5fbd4e9d_9635_462d_abba_763daf0da369.slice/crio-b2026cf4626b6475bc4df4f61b59a362897ae9c87c8c3483f94afec6886981b9 WatchSource:0}: Error finding container b2026cf4626b6475bc4df4f61b59a362897ae9c87c8c3483f94afec6886981b9: Status 404 returned error can't find the container with id b2026cf4626b6475bc4df4f61b59a362897ae9c87c8c3483f94afec6886981b9 Nov 22 11:02:05 crc kubenswrapper[4772]: I1122 11:02:05.965923 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 11:02:06 crc kubenswrapper[4772]: I1122 11:02:06.016637 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-7f4sv" Nov 22 11:02:06 crc kubenswrapper[4772]: I1122 11:02:06.027948 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"5fbd4e9d-9635-462d-abba-763daf0da369","Type":"ContainerStarted","Data":"b2026cf4626b6475bc4df4f61b59a362897ae9c87c8c3483f94afec6886981b9"} Nov 22 11:02:06 crc kubenswrapper[4772]: I1122 11:02:06.485313 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-7f4sv"] Nov 22 11:02:06 crc kubenswrapper[4772]: W1122 11:02:06.485785 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod464d950a_e1bb_4efb_afdf_37b97a62a42c.slice/crio-690442cf60d33fc446c07120637e389eebffb80dc60308153c0aba4261d3f584 WatchSource:0}: Error finding container 690442cf60d33fc446c07120637e389eebffb80dc60308153c0aba4261d3f584: Status 404 returned error can't find the container with id 690442cf60d33fc446c07120637e389eebffb80dc60308153c0aba4261d3f584 Nov 22 11:02:07 crc kubenswrapper[4772]: I1122 11:02:07.036655 4772 generic.go:334] "Generic (PLEG): container finished" podID="464d950a-e1bb-4efb-afdf-37b97a62a42c" containerID="8e13976df3c8001f16e78d82b5bbd4129ae0c1c72af18a09b2f2744b1feeb66a" exitCode=0 Nov 22 11:02:07 crc kubenswrapper[4772]: I1122 11:02:07.036709 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-7f4sv" event={"ID":"464d950a-e1bb-4efb-afdf-37b97a62a42c","Type":"ContainerDied","Data":"8e13976df3c8001f16e78d82b5bbd4129ae0c1c72af18a09b2f2744b1feeb66a"} Nov 22 11:02:07 crc kubenswrapper[4772]: I1122 11:02:07.038008 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-7f4sv" event={"ID":"464d950a-e1bb-4efb-afdf-37b97a62a42c","Type":"ContainerStarted","Data":"690442cf60d33fc446c07120637e389eebffb80dc60308153c0aba4261d3f584"} Nov 22 11:02:07 crc kubenswrapper[4772]: I1122 11:02:07.040285 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"5fbd4e9d-9635-462d-abba-763daf0da369","Type":"ContainerStarted","Data":"38b682e02083c87881ca0840c5c8512286c619c93d70f45cc0f97311ece2d602"} Nov 22 11:02:07 crc kubenswrapper[4772]: I1122 11:02:07.086363 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.086344806 podStartE2EDuration="2.086344806s" podCreationTimestamp="2025-11-22 11:02:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 11:02:07.080414248 +0000 UTC m=+1447.319858742" watchObservedRunningTime="2025-11-22 11:02:07.086344806 +0000 UTC m=+1447.325789300" Nov 22 11:02:07 crc kubenswrapper[4772]: I1122 11:02:07.721447 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 11:02:07 crc kubenswrapper[4772]: I1122 11:02:07.722011 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="851484de-f073-4750-85c4-cdcfda340b52" containerName="ceilometer-central-agent" containerID="cri-o://b5fcd47e65a0e4ea1964a938ce840cc6ff920ac4d63bd2832080d8d8fb916b52" gracePeriod=30 Nov 22 11:02:07 crc kubenswrapper[4772]: I1122 11:02:07.722091 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="851484de-f073-4750-85c4-cdcfda340b52" containerName="proxy-httpd" containerID="cri-o://a1494096381354b7ffde89c679004222772a1c84a5a27fb783df077eef2dc9ec" gracePeriod=30 Nov 22 11:02:07 crc kubenswrapper[4772]: I1122 11:02:07.722137 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="851484de-f073-4750-85c4-cdcfda340b52" containerName="sg-core" containerID="cri-o://e4fb40fcf554fb6c5016c56c3011ab05b4f070d2be2f22bb270e51e64aff16f3" gracePeriod=30 Nov 22 11:02:07 crc kubenswrapper[4772]: I1122 11:02:07.722262 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="851484de-f073-4750-85c4-cdcfda340b52" containerName="ceilometer-notification-agent" containerID="cri-o://8053e588b79aeda4083795e3cb87d3690c2e049625e6c7c4bbdecd0aaef89d29" gracePeriod=30 Nov 22 11:02:07 crc kubenswrapper[4772]: I1122 11:02:07.822950 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="851484de-f073-4750-85c4-cdcfda340b52" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.192:3000/\": read tcp 10.217.0.2:33500->10.217.0.192:3000: read: connection reset by peer" Nov 22 11:02:08 crc kubenswrapper[4772]: I1122 11:02:08.055811 4772 generic.go:334] "Generic (PLEG): container finished" podID="851484de-f073-4750-85c4-cdcfda340b52" containerID="a1494096381354b7ffde89c679004222772a1c84a5a27fb783df077eef2dc9ec" exitCode=0 Nov 22 11:02:08 crc kubenswrapper[4772]: I1122 11:02:08.055849 4772 generic.go:334] "Generic (PLEG): container finished" podID="851484de-f073-4750-85c4-cdcfda340b52" containerID="e4fb40fcf554fb6c5016c56c3011ab05b4f070d2be2f22bb270e51e64aff16f3" exitCode=2 Nov 22 11:02:08 crc kubenswrapper[4772]: I1122 11:02:08.055884 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"851484de-f073-4750-85c4-cdcfda340b52","Type":"ContainerDied","Data":"a1494096381354b7ffde89c679004222772a1c84a5a27fb783df077eef2dc9ec"} Nov 22 11:02:08 crc kubenswrapper[4772]: I1122 11:02:08.055933 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"851484de-f073-4750-85c4-cdcfda340b52","Type":"ContainerDied","Data":"e4fb40fcf554fb6c5016c56c3011ab05b4f070d2be2f22bb270e51e64aff16f3"} Nov 22 11:02:08 crc kubenswrapper[4772]: I1122 11:02:08.059696 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-7f4sv" event={"ID":"464d950a-e1bb-4efb-afdf-37b97a62a42c","Type":"ContainerStarted","Data":"c427c59ce5190c28d69f76cce2242b1dd0afaeda292d697cff4fa07ba14a6523"} Nov 22 11:02:08 crc kubenswrapper[4772]: I1122 11:02:08.081374 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-59cf4bdb65-7f4sv" podStartSLOduration=3.081356452 podStartE2EDuration="3.081356452s" podCreationTimestamp="2025-11-22 11:02:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 11:02:08.075461995 +0000 UTC m=+1448.314906489" watchObservedRunningTime="2025-11-22 11:02:08.081356452 +0000 UTC m=+1448.320800946" Nov 22 11:02:08 crc kubenswrapper[4772]: I1122 11:02:08.095859 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 22 11:02:08 crc kubenswrapper[4772]: I1122 11:02:08.096282 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" 
podUID="7502e19b-9558-477a-86b6-d5c0da99a0b0" containerName="nova-api-api" containerID="cri-o://3231890003bdf085bcf4cea7ee03ec19bfc2779a7d5c24558315441b09c1a730" gracePeriod=30 Nov 22 11:02:08 crc kubenswrapper[4772]: I1122 11:02:08.096138 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7502e19b-9558-477a-86b6-d5c0da99a0b0" containerName="nova-api-log" containerID="cri-o://68ad3b8db21e8b2c5122aac8528c489459cfc44c5e0ece7cdb40e9fe0b8c4db3" gracePeriod=30 Nov 22 11:02:09 crc kubenswrapper[4772]: I1122 11:02:09.070813 4772 generic.go:334] "Generic (PLEG): container finished" podID="7502e19b-9558-477a-86b6-d5c0da99a0b0" containerID="68ad3b8db21e8b2c5122aac8528c489459cfc44c5e0ece7cdb40e9fe0b8c4db3" exitCode=143 Nov 22 11:02:09 crc kubenswrapper[4772]: I1122 11:02:09.070999 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7502e19b-9558-477a-86b6-d5c0da99a0b0","Type":"ContainerDied","Data":"68ad3b8db21e8b2c5122aac8528c489459cfc44c5e0ece7cdb40e9fe0b8c4db3"} Nov 22 11:02:09 crc kubenswrapper[4772]: I1122 11:02:09.074921 4772 generic.go:334] "Generic (PLEG): container finished" podID="851484de-f073-4750-85c4-cdcfda340b52" containerID="b5fcd47e65a0e4ea1964a938ce840cc6ff920ac4d63bd2832080d8d8fb916b52" exitCode=0 Nov 22 11:02:09 crc kubenswrapper[4772]: I1122 11:02:09.075919 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"851484de-f073-4750-85c4-cdcfda340b52","Type":"ContainerDied","Data":"b5fcd47e65a0e4ea1964a938ce840cc6ff920ac4d63bd2832080d8d8fb916b52"} Nov 22 11:02:09 crc kubenswrapper[4772]: I1122 11:02:09.075963 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-59cf4bdb65-7f4sv" Nov 22 11:02:10 crc kubenswrapper[4772]: I1122 11:02:10.096024 4772 generic.go:334] "Generic (PLEG): container finished" podID="851484de-f073-4750-85c4-cdcfda340b52" containerID="8053e588b79aeda4083795e3cb87d3690c2e049625e6c7c4bbdecd0aaef89d29" exitCode=0 Nov 22 11:02:10 crc kubenswrapper[4772]: I1122 11:02:10.097680 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"851484de-f073-4750-85c4-cdcfda340b52","Type":"ContainerDied","Data":"8053e588b79aeda4083795e3cb87d3690c2e049625e6c7c4bbdecd0aaef89d29"} Nov 22 11:02:10 crc kubenswrapper[4772]: I1122 11:02:10.207374 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 11:02:10 crc kubenswrapper[4772]: I1122 11:02:10.247386 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/851484de-f073-4750-85c4-cdcfda340b52-combined-ca-bundle\") pod \"851484de-f073-4750-85c4-cdcfda340b52\" (UID: \"851484de-f073-4750-85c4-cdcfda340b52\") " Nov 22 11:02:10 crc kubenswrapper[4772]: I1122 11:02:10.247437 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/851484de-f073-4750-85c4-cdcfda340b52-sg-core-conf-yaml\") pod \"851484de-f073-4750-85c4-cdcfda340b52\" (UID: \"851484de-f073-4750-85c4-cdcfda340b52\") " Nov 22 11:02:10 crc kubenswrapper[4772]: I1122 11:02:10.247501 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/851484de-f073-4750-85c4-cdcfda340b52-log-httpd\") pod \"851484de-f073-4750-85c4-cdcfda340b52\" (UID: \"851484de-f073-4750-85c4-cdcfda340b52\") " Nov 22 11:02:10 crc kubenswrapper[4772]: I1122 11:02:10.247523 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfsjx\" (UniqueName: \"kubernetes.io/projected/851484de-f073-4750-85c4-cdcfda340b52-kube-api-access-cfsjx\") pod \"851484de-f073-4750-85c4-cdcfda340b52\" (UID: \"851484de-f073-4750-85c4-cdcfda340b52\") " Nov 22 11:02:10 crc kubenswrapper[4772]: I1122 11:02:10.247578 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/851484de-f073-4750-85c4-cdcfda340b52-run-httpd\") pod \"851484de-f073-4750-85c4-cdcfda340b52\" (UID: \"851484de-f073-4750-85c4-cdcfda340b52\") " Nov 22 11:02:10 crc kubenswrapper[4772]: I1122 11:02:10.247600 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/851484de-f073-4750-85c4-cdcfda340b52-config-data\") pod \"851484de-f073-4750-85c4-cdcfda340b52\" (UID: \"851484de-f073-4750-85c4-cdcfda340b52\") " Nov 22 11:02:10 crc kubenswrapper[4772]: I1122 11:02:10.247661 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/851484de-f073-4750-85c4-cdcfda340b52-ceilometer-tls-certs\") pod \"851484de-f073-4750-85c4-cdcfda340b52\" (UID: \"851484de-f073-4750-85c4-cdcfda340b52\") " Nov 22 11:02:10 crc kubenswrapper[4772]: I1122 11:02:10.247726 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/851484de-f073-4750-85c4-cdcfda340b52-scripts\") pod \"851484de-f073-4750-85c4-cdcfda340b52\" (UID: \"851484de-f073-4750-85c4-cdcfda340b52\") " Nov 22 11:02:10 crc kubenswrapper[4772]: I1122 11:02:10.249406 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/851484de-f073-4750-85c4-cdcfda340b52-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "851484de-f073-4750-85c4-cdcfda340b52" (UID: "851484de-f073-4750-85c4-cdcfda340b52"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:02:10 crc kubenswrapper[4772]: I1122 11:02:10.249527 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/851484de-f073-4750-85c4-cdcfda340b52-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "851484de-f073-4750-85c4-cdcfda340b52" (UID: "851484de-f073-4750-85c4-cdcfda340b52"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:02:10 crc kubenswrapper[4772]: I1122 11:02:10.254788 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/851484de-f073-4750-85c4-cdcfda340b52-scripts" (OuterVolumeSpecName: "scripts") pod "851484de-f073-4750-85c4-cdcfda340b52" (UID: "851484de-f073-4750-85c4-cdcfda340b52"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:02:10 crc kubenswrapper[4772]: I1122 11:02:10.254816 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/851484de-f073-4750-85c4-cdcfda340b52-kube-api-access-cfsjx" (OuterVolumeSpecName: "kube-api-access-cfsjx") pod "851484de-f073-4750-85c4-cdcfda340b52" (UID: "851484de-f073-4750-85c4-cdcfda340b52"). InnerVolumeSpecName "kube-api-access-cfsjx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:02:10 crc kubenswrapper[4772]: I1122 11:02:10.290087 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/851484de-f073-4750-85c4-cdcfda340b52-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "851484de-f073-4750-85c4-cdcfda340b52" (UID: "851484de-f073-4750-85c4-cdcfda340b52"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:02:10 crc kubenswrapper[4772]: I1122 11:02:10.301686 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/851484de-f073-4750-85c4-cdcfda340b52-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "851484de-f073-4750-85c4-cdcfda340b52" (UID: "851484de-f073-4750-85c4-cdcfda340b52"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:02:10 crc kubenswrapper[4772]: I1122 11:02:10.325482 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/851484de-f073-4750-85c4-cdcfda340b52-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "851484de-f073-4750-85c4-cdcfda340b52" (UID: "851484de-f073-4750-85c4-cdcfda340b52"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:02:10 crc kubenswrapper[4772]: I1122 11:02:10.348669 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/851484de-f073-4750-85c4-cdcfda340b52-config-data" (OuterVolumeSpecName: "config-data") pod "851484de-f073-4750-85c4-cdcfda340b52" (UID: "851484de-f073-4750-85c4-cdcfda340b52"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:02:10 crc kubenswrapper[4772]: I1122 11:02:10.348773 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/851484de-f073-4750-85c4-cdcfda340b52-config-data\") pod \"851484de-f073-4750-85c4-cdcfda340b52\" (UID: \"851484de-f073-4750-85c4-cdcfda340b52\") " Nov 22 11:02:10 crc kubenswrapper[4772]: W1122 11:02:10.349353 4772 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/851484de-f073-4750-85c4-cdcfda340b52/volumes/kubernetes.io~secret/config-data Nov 22 11:02:10 crc kubenswrapper[4772]: I1122 11:02:10.349372 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/851484de-f073-4750-85c4-cdcfda340b52-config-data" (OuterVolumeSpecName: "config-data") pod "851484de-f073-4750-85c4-cdcfda340b52" (UID: "851484de-f073-4750-85c4-cdcfda340b52"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:02:10 crc kubenswrapper[4772]: I1122 11:02:10.349557 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/851484de-f073-4750-85c4-cdcfda340b52-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 11:02:10 crc kubenswrapper[4772]: I1122 11:02:10.349575 4772 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/851484de-f073-4750-85c4-cdcfda340b52-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 22 11:02:10 crc kubenswrapper[4772]: I1122 11:02:10.349585 4772 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/851484de-f073-4750-85c4-cdcfda340b52-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 11:02:10 crc kubenswrapper[4772]: I1122 11:02:10.349593 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfsjx\" (UniqueName: \"kubernetes.io/projected/851484de-f073-4750-85c4-cdcfda340b52-kube-api-access-cfsjx\") on node \"crc\" DevicePath \"\"" Nov 22 11:02:10 crc kubenswrapper[4772]: I1122 11:02:10.349604 4772 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/851484de-f073-4750-85c4-cdcfda340b52-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 11:02:10 crc kubenswrapper[4772]: I1122 11:02:10.349611 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/851484de-f073-4750-85c4-cdcfda340b52-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 11:02:10 crc kubenswrapper[4772]: I1122 11:02:10.349619 4772 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/851484de-f073-4750-85c4-cdcfda340b52-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 11:02:10 crc kubenswrapper[4772]: I1122 11:02:10.349628 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/851484de-f073-4750-85c4-cdcfda340b52-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 11:02:10 crc kubenswrapper[4772]: I1122 11:02:10.422123 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 22 11:02:11 crc kubenswrapper[4772]: I1122 11:02:11.106279 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"851484de-f073-4750-85c4-cdcfda340b52","Type":"ContainerDied","Data":"d1909cf3cd8c75a73299edffec0cac167a960a485115a50fc300c7595203e7d1"} Nov 22 11:02:11 crc kubenswrapper[4772]: I1122 11:02:11.106334 4772 scope.go:117] "RemoveContainer" containerID="a1494096381354b7ffde89c679004222772a1c84a5a27fb783df077eef2dc9ec" Nov 22 11:02:11 crc kubenswrapper[4772]: I1122 11:02:11.106469 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 11:02:11 crc kubenswrapper[4772]: I1122 11:02:11.129444 4772 scope.go:117] "RemoveContainer" containerID="e4fb40fcf554fb6c5016c56c3011ab05b4f070d2be2f22bb270e51e64aff16f3" Nov 22 11:02:11 crc kubenswrapper[4772]: I1122 11:02:11.161330 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 11:02:11 crc kubenswrapper[4772]: I1122 11:02:11.164594 4772 scope.go:117] "RemoveContainer" containerID="8053e588b79aeda4083795e3cb87d3690c2e049625e6c7c4bbdecd0aaef89d29" Nov 22 11:02:11 crc kubenswrapper[4772]: I1122 11:02:11.175089 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 22 11:02:11 crc kubenswrapper[4772]: I1122 11:02:11.187858 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 22 11:02:11 crc kubenswrapper[4772]: E1122 11:02:11.188283 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="851484de-f073-4750-85c4-cdcfda340b52" containerName="sg-core" Nov 22 11:02:11 crc kubenswrapper[4772]: I1122 11:02:11.188305 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="851484de-f073-4750-85c4-cdcfda340b52" containerName="sg-core" Nov 22 11:02:11 crc kubenswrapper[4772]: E1122 11:02:11.188351 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="851484de-f073-4750-85c4-cdcfda340b52" containerName="ceilometer-notification-agent" Nov 22 11:02:11 crc kubenswrapper[4772]: I1122 11:02:11.188360 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="851484de-f073-4750-85c4-cdcfda340b52" containerName="ceilometer-notification-agent" Nov 22 11:02:11 crc kubenswrapper[4772]: E1122 11:02:11.188376 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="851484de-f073-4750-85c4-cdcfda340b52" containerName="proxy-httpd" Nov 22 11:02:11 crc kubenswrapper[4772]: I1122 11:02:11.188381 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="851484de-f073-4750-85c4-cdcfda340b52" containerName="proxy-httpd" Nov 22 11:02:11 crc kubenswrapper[4772]: E1122 11:02:11.188392 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="851484de-f073-4750-85c4-cdcfda340b52" containerName="ceilometer-central-agent" Nov 22 11:02:11 crc kubenswrapper[4772]: I1122 11:02:11.188397 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="851484de-f073-4750-85c4-cdcfda340b52" containerName="ceilometer-central-agent" Nov 22 11:02:11 crc kubenswrapper[4772]: I1122 11:02:11.188612 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="851484de-f073-4750-85c4-cdcfda340b52" containerName="sg-core" Nov 22 11:02:11 crc kubenswrapper[4772]: I1122 11:02:11.188629 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="851484de-f073-4750-85c4-cdcfda340b52" containerName="ceilometer-notification-agent" Nov 22 11:02:11 crc kubenswrapper[4772]: I1122 11:02:11.188649 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="851484de-f073-4750-85c4-cdcfda340b52" containerName="ceilometer-central-agent" Nov 22 11:02:11 crc 
kubenswrapper[4772]: I1122 11:02:11.188659 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="851484de-f073-4750-85c4-cdcfda340b52" containerName="proxy-httpd" Nov 22 11:02:11 crc kubenswrapper[4772]: I1122 11:02:11.190369 4772 scope.go:117] "RemoveContainer" containerID="b5fcd47e65a0e4ea1964a938ce840cc6ff920ac4d63bd2832080d8d8fb916b52" Nov 22 11:02:11 crc kubenswrapper[4772]: I1122 11:02:11.191597 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 11:02:11 crc kubenswrapper[4772]: I1122 11:02:11.194379 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 22 11:02:11 crc kubenswrapper[4772]: I1122 11:02:11.194639 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 22 11:02:11 crc kubenswrapper[4772]: I1122 11:02:11.194754 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 22 11:02:11 crc kubenswrapper[4772]: I1122 11:02:11.198427 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 11:02:11 crc kubenswrapper[4772]: I1122 11:02:11.267432 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdbgm\" (UniqueName: \"kubernetes.io/projected/2c6ce3fb-4529-4856-a326-bb0e9ea0ae40-kube-api-access-bdbgm\") pod \"ceilometer-0\" (UID: \"2c6ce3fb-4529-4856-a326-bb0e9ea0ae40\") " pod="openstack/ceilometer-0" Nov 22 11:02:11 crc kubenswrapper[4772]: I1122 11:02:11.267484 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c6ce3fb-4529-4856-a326-bb0e9ea0ae40-scripts\") pod \"ceilometer-0\" (UID: \"2c6ce3fb-4529-4856-a326-bb0e9ea0ae40\") " pod="openstack/ceilometer-0" Nov 22 11:02:11 crc kubenswrapper[4772]: I1122 11:02:11.267661 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2c6ce3fb-4529-4856-a326-bb0e9ea0ae40-log-httpd\") pod \"ceilometer-0\" (UID: \"2c6ce3fb-4529-4856-a326-bb0e9ea0ae40\") " pod="openstack/ceilometer-0" Nov 22 11:02:11 crc kubenswrapper[4772]: I1122 11:02:11.267699 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c6ce3fb-4529-4856-a326-bb0e9ea0ae40-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2c6ce3fb-4529-4856-a326-bb0e9ea0ae40\") " pod="openstack/ceilometer-0" Nov 22 11:02:11 crc kubenswrapper[4772]: I1122 11:02:11.267782 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c6ce3fb-4529-4856-a326-bb0e9ea0ae40-config-data\") pod \"ceilometer-0\" (UID: \"2c6ce3fb-4529-4856-a326-bb0e9ea0ae40\") " pod="openstack/ceilometer-0" Nov 22 11:02:11 crc kubenswrapper[4772]: I1122 11:02:11.267819 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2c6ce3fb-4529-4856-a326-bb0e9ea0ae40-run-httpd\") pod \"ceilometer-0\" (UID: \"2c6ce3fb-4529-4856-a326-bb0e9ea0ae40\") " pod="openstack/ceilometer-0" Nov 22 11:02:11 crc kubenswrapper[4772]: I1122 11:02:11.267955 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c6ce3fb-4529-4856-a326-bb0e9ea0ae40-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2c6ce3fb-4529-4856-a326-bb0e9ea0ae40\") " pod="openstack/ceilometer-0" Nov 22 11:02:11 crc kubenswrapper[4772]: I1122 11:02:11.268063 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2c6ce3fb-4529-4856-a326-bb0e9ea0ae40-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2c6ce3fb-4529-4856-a326-bb0e9ea0ae40\") " pod="openstack/ceilometer-0" Nov 22 11:02:11 crc kubenswrapper[4772]: I1122 11:02:11.377270 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2c6ce3fb-4529-4856-a326-bb0e9ea0ae40-log-httpd\") pod \"ceilometer-0\" (UID: \"2c6ce3fb-4529-4856-a326-bb0e9ea0ae40\") " pod="openstack/ceilometer-0" Nov 22 11:02:11 crc kubenswrapper[4772]: I1122 11:02:11.377331 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c6ce3fb-4529-4856-a326-bb0e9ea0ae40-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2c6ce3fb-4529-4856-a326-bb0e9ea0ae40\") " pod="openstack/ceilometer-0" Nov 22 11:02:11 crc kubenswrapper[4772]: I1122 11:02:11.377408 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c6ce3fb-4529-4856-a326-bb0e9ea0ae40-config-data\") pod \"ceilometer-0\" (UID: \"2c6ce3fb-4529-4856-a326-bb0e9ea0ae40\") " pod="openstack/ceilometer-0" Nov 22 11:02:11 crc kubenswrapper[4772]: I1122 11:02:11.377448 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2c6ce3fb-4529-4856-a326-bb0e9ea0ae40-run-httpd\") pod \"ceilometer-0\" (UID: \"2c6ce3fb-4529-4856-a326-bb0e9ea0ae40\") " pod="openstack/ceilometer-0" Nov 22 11:02:11 crc kubenswrapper[4772]: I1122 11:02:11.377525 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c6ce3fb-4529-4856-a326-bb0e9ea0ae40-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2c6ce3fb-4529-4856-a326-bb0e9ea0ae40\") " pod="openstack/ceilometer-0" Nov 22 11:02:11 crc kubenswrapper[4772]: I1122 11:02:11.377595 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2c6ce3fb-4529-4856-a326-bb0e9ea0ae40-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2c6ce3fb-4529-4856-a326-bb0e9ea0ae40\") " pod="openstack/ceilometer-0" Nov 22 11:02:11 crc kubenswrapper[4772]: I1122 11:02:11.377643 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdbgm\" (UniqueName: \"kubernetes.io/projected/2c6ce3fb-4529-4856-a326-bb0e9ea0ae40-kube-api-access-bdbgm\") pod \"ceilometer-0\" (UID: \"2c6ce3fb-4529-4856-a326-bb0e9ea0ae40\") " pod="openstack/ceilometer-0" Nov 22 11:02:11 crc kubenswrapper[4772]: I1122 11:02:11.377675 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c6ce3fb-4529-4856-a326-bb0e9ea0ae40-scripts\") pod \"ceilometer-0\" (UID: \"2c6ce3fb-4529-4856-a326-bb0e9ea0ae40\") " pod="openstack/ceilometer-0" Nov 22 11:02:11 crc kubenswrapper[4772]: I1122 11:02:11.378419 4772 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2c6ce3fb-4529-4856-a326-bb0e9ea0ae40-run-httpd\") pod \"ceilometer-0\" (UID: \"2c6ce3fb-4529-4856-a326-bb0e9ea0ae40\") " pod="openstack/ceilometer-0" Nov 22 11:02:11 crc kubenswrapper[4772]: I1122 11:02:11.378734 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2c6ce3fb-4529-4856-a326-bb0e9ea0ae40-log-httpd\") pod \"ceilometer-0\" (UID: \"2c6ce3fb-4529-4856-a326-bb0e9ea0ae40\") " pod="openstack/ceilometer-0" Nov 22 11:02:11 crc kubenswrapper[4772]: I1122 11:02:11.383801 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c6ce3fb-4529-4856-a326-bb0e9ea0ae40-scripts\") pod \"ceilometer-0\" (UID: \"2c6ce3fb-4529-4856-a326-bb0e9ea0ae40\") " pod="openstack/ceilometer-0" Nov 22 11:02:11 crc kubenswrapper[4772]: I1122 11:02:11.384900 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c6ce3fb-4529-4856-a326-bb0e9ea0ae40-config-data\") pod \"ceilometer-0\" (UID: \"2c6ce3fb-4529-4856-a326-bb0e9ea0ae40\") " pod="openstack/ceilometer-0" Nov 22 11:02:11 crc kubenswrapper[4772]: I1122 11:02:11.386893 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c6ce3fb-4529-4856-a326-bb0e9ea0ae40-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2c6ce3fb-4529-4856-a326-bb0e9ea0ae40\") " pod="openstack/ceilometer-0" Nov 22 11:02:11 crc kubenswrapper[4772]: I1122 11:02:11.386970 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2c6ce3fb-4529-4856-a326-bb0e9ea0ae40-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2c6ce3fb-4529-4856-a326-bb0e9ea0ae40\") " pod="openstack/ceilometer-0" Nov 22 11:02:11 crc kubenswrapper[4772]: I1122 11:02:11.393287 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c6ce3fb-4529-4856-a326-bb0e9ea0ae40-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2c6ce3fb-4529-4856-a326-bb0e9ea0ae40\") " pod="openstack/ceilometer-0" Nov 22 11:02:11 crc kubenswrapper[4772]: I1122 11:02:11.396175 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdbgm\" (UniqueName: \"kubernetes.io/projected/2c6ce3fb-4529-4856-a326-bb0e9ea0ae40-kube-api-access-bdbgm\") pod \"ceilometer-0\" (UID: \"2c6ce3fb-4529-4856-a326-bb0e9ea0ae40\") " pod="openstack/ceilometer-0" Nov 22 11:02:11 crc kubenswrapper[4772]: I1122 11:02:11.427287 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="851484de-f073-4750-85c4-cdcfda340b52" path="/var/lib/kubelet/pods/851484de-f073-4750-85c4-cdcfda340b52/volumes" Nov 22 11:02:11 crc kubenswrapper[4772]: I1122 11:02:11.519513 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 11:02:12 crc kubenswrapper[4772]: I1122 11:02:12.003560 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 11:02:12 crc kubenswrapper[4772]: I1122 11:02:12.119422 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2c6ce3fb-4529-4856-a326-bb0e9ea0ae40","Type":"ContainerStarted","Data":"cb05dc364fcba82f1cea9840e9418bf6120dd59be237c253b9dcfbd83c44aac8"} Nov 22 11:02:12 crc kubenswrapper[4772]: I1122 11:02:12.127392 4772 generic.go:334] "Generic (PLEG): container finished" podID="7502e19b-9558-477a-86b6-d5c0da99a0b0" containerID="3231890003bdf085bcf4cea7ee03ec19bfc2779a7d5c24558315441b09c1a730" exitCode=0 Nov 22 11:02:12 crc kubenswrapper[4772]: I1122 11:02:12.127476 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7502e19b-9558-477a-86b6-d5c0da99a0b0","Type":"ContainerDied","Data":"3231890003bdf085bcf4cea7ee03ec19bfc2779a7d5c24558315441b09c1a730"} Nov 22 11:02:12 crc kubenswrapper[4772]: I1122 11:02:12.252281 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 22 11:02:12 crc kubenswrapper[4772]: I1122 11:02:12.399081 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlgqf\" (UniqueName: \"kubernetes.io/projected/7502e19b-9558-477a-86b6-d5c0da99a0b0-kube-api-access-dlgqf\") pod \"7502e19b-9558-477a-86b6-d5c0da99a0b0\" (UID: \"7502e19b-9558-477a-86b6-d5c0da99a0b0\") " Nov 22 11:02:12 crc kubenswrapper[4772]: I1122 11:02:12.399193 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7502e19b-9558-477a-86b6-d5c0da99a0b0-config-data\") pod \"7502e19b-9558-477a-86b6-d5c0da99a0b0\" (UID: \"7502e19b-9558-477a-86b6-d5c0da99a0b0\") " Nov 22 11:02:12 crc kubenswrapper[4772]: I1122 11:02:12.399245 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7502e19b-9558-477a-86b6-d5c0da99a0b0-combined-ca-bundle\") pod \"7502e19b-9558-477a-86b6-d5c0da99a0b0\" (UID: \"7502e19b-9558-477a-86b6-d5c0da99a0b0\") " Nov 22 11:02:12 crc kubenswrapper[4772]: I1122 11:02:12.399374 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7502e19b-9558-477a-86b6-d5c0da99a0b0-logs\") pod \"7502e19b-9558-477a-86b6-d5c0da99a0b0\" (UID: \"7502e19b-9558-477a-86b6-d5c0da99a0b0\") " Nov 22 11:02:12 crc kubenswrapper[4772]: I1122 11:02:12.400189 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7502e19b-9558-477a-86b6-d5c0da99a0b0-logs" (OuterVolumeSpecName: "logs") pod "7502e19b-9558-477a-86b6-d5c0da99a0b0" (UID: "7502e19b-9558-477a-86b6-d5c0da99a0b0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:02:12 crc kubenswrapper[4772]: I1122 11:02:12.423430 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7502e19b-9558-477a-86b6-d5c0da99a0b0-kube-api-access-dlgqf" (OuterVolumeSpecName: "kube-api-access-dlgqf") pod "7502e19b-9558-477a-86b6-d5c0da99a0b0" (UID: "7502e19b-9558-477a-86b6-d5c0da99a0b0"). InnerVolumeSpecName "kube-api-access-dlgqf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:02:12 crc kubenswrapper[4772]: I1122 11:02:12.431741 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7502e19b-9558-477a-86b6-d5c0da99a0b0-config-data" (OuterVolumeSpecName: "config-data") pod "7502e19b-9558-477a-86b6-d5c0da99a0b0" (UID: "7502e19b-9558-477a-86b6-d5c0da99a0b0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:02:12 crc kubenswrapper[4772]: I1122 11:02:12.433709 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7502e19b-9558-477a-86b6-d5c0da99a0b0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7502e19b-9558-477a-86b6-d5c0da99a0b0" (UID: "7502e19b-9558-477a-86b6-d5c0da99a0b0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:02:12 crc kubenswrapper[4772]: I1122 11:02:12.503108 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7502e19b-9558-477a-86b6-d5c0da99a0b0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 11:02:12 crc kubenswrapper[4772]: I1122 11:02:12.503151 4772 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7502e19b-9558-477a-86b6-d5c0da99a0b0-logs\") on node \"crc\" DevicePath \"\"" Nov 22 11:02:12 crc kubenswrapper[4772]: I1122 11:02:12.503164 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlgqf\" (UniqueName: \"kubernetes.io/projected/7502e19b-9558-477a-86b6-d5c0da99a0b0-kube-api-access-dlgqf\") on node \"crc\" DevicePath \"\"" Nov 22 11:02:12 crc kubenswrapper[4772]: I1122 11:02:12.503179 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7502e19b-9558-477a-86b6-d5c0da99a0b0-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 11:02:12 crc kubenswrapper[4772]: I1122 11:02:12.647375 4772 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 11:02:13 crc kubenswrapper[4772]: I1122 11:02:13.138706 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7502e19b-9558-477a-86b6-d5c0da99a0b0","Type":"ContainerDied","Data":"d2237b402c61b747f5e270bbf070c2a57cfe044c147ee5b1957d7122c2f11b96"} Nov 22 11:02:13 crc kubenswrapper[4772]: I1122 11:02:13.138780 4772 scope.go:117] "RemoveContainer" containerID="3231890003bdf085bcf4cea7ee03ec19bfc2779a7d5c24558315441b09c1a730" Nov 22 11:02:13 crc kubenswrapper[4772]: I1122 11:02:13.138730 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 22 11:02:13 crc kubenswrapper[4772]: I1122 11:02:13.141583 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2c6ce3fb-4529-4856-a326-bb0e9ea0ae40","Type":"ContainerStarted","Data":"10ca89615b0daee19e2ff35e6822a5cb0d8fad7528fc2fae6ffe6155cd72db74"} Nov 22 11:02:13 crc kubenswrapper[4772]: I1122 11:02:13.161482 4772 scope.go:117] "RemoveContainer" containerID="68ad3b8db21e8b2c5122aac8528c489459cfc44c5e0ece7cdb40e9fe0b8c4db3" Nov 22 11:02:13 crc kubenswrapper[4772]: I1122 11:02:13.177502 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 22 11:02:13 crc kubenswrapper[4772]: I1122 11:02:13.197033 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 22 11:02:13 crc kubenswrapper[4772]: I1122 11:02:13.211419 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 22 11:02:13 crc kubenswrapper[4772]: E1122 11:02:13.211997 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7502e19b-9558-477a-86b6-d5c0da99a0b0" containerName="nova-api-log" Nov 22 11:02:13 crc kubenswrapper[4772]: I1122 11:02:13.212022 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="7502e19b-9558-477a-86b6-d5c0da99a0b0" containerName="nova-api-log" Nov 22 11:02:13 crc kubenswrapper[4772]: E1122 11:02:13.212036 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7502e19b-9558-477a-86b6-d5c0da99a0b0" containerName="nova-api-api" Nov 22 11:02:13 crc kubenswrapper[4772]: I1122 11:02:13.212059 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="7502e19b-9558-477a-86b6-d5c0da99a0b0" containerName="nova-api-api" Nov 22 11:02:13 crc kubenswrapper[4772]: I1122 11:02:13.212310 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="7502e19b-9558-477a-86b6-d5c0da99a0b0" containerName="nova-api-api" Nov 22 11:02:13 crc kubenswrapper[4772]: I1122 11:02:13.212335 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="7502e19b-9558-477a-86b6-d5c0da99a0b0" containerName="nova-api-log" Nov 22 11:02:13 crc kubenswrapper[4772]: I1122 11:02:13.213324 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 22 11:02:13 crc kubenswrapper[4772]: I1122 11:02:13.215333 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 22 11:02:13 crc kubenswrapper[4772]: I1122 11:02:13.219922 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 22 11:02:13 crc kubenswrapper[4772]: I1122 11:02:13.220038 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 22 11:02:13 crc kubenswrapper[4772]: I1122 11:02:13.221994 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 22 11:02:13 crc kubenswrapper[4772]: I1122 11:02:13.319826 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7-public-tls-certs\") pod \"nova-api-0\" (UID: \"eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7\") " pod="openstack/nova-api-0" Nov 22 11:02:13 crc kubenswrapper[4772]: I1122 11:02:13.319892 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6229f\" (UniqueName: \"kubernetes.io/projected/eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7-kube-api-access-6229f\") pod \"nova-api-0\" (UID: \"eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7\") " pod="openstack/nova-api-0" Nov 22 11:02:13 crc kubenswrapper[4772]: I1122 11:02:13.319946 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7\") " pod="openstack/nova-api-0" Nov 22 11:02:13 crc kubenswrapper[4772]: I1122 11:02:13.319992 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7-config-data\") pod \"nova-api-0\" (UID: \"eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7\") " pod="openstack/nova-api-0" Nov 22 11:02:13 crc kubenswrapper[4772]: I1122 11:02:13.320019 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7-internal-tls-certs\") pod \"nova-api-0\" (UID: \"eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7\") " pod="openstack/nova-api-0" Nov 22 11:02:13 crc kubenswrapper[4772]: I1122 11:02:13.320176 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7-logs\") pod \"nova-api-0\" (UID: \"eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7\") " pod="openstack/nova-api-0" Nov 22 11:02:13 crc kubenswrapper[4772]: I1122 11:02:13.421575 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7-public-tls-certs\") pod \"nova-api-0\" (UID: \"eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7\") " pod="openstack/nova-api-0" Nov 22 11:02:13 crc kubenswrapper[4772]: I1122 11:02:13.421646 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6229f\" (UniqueName: \"kubernetes.io/projected/eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7-kube-api-access-6229f\") pod 
\"nova-api-0\" (UID: \"eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7\") " pod="openstack/nova-api-0" Nov 22 11:02:13 crc kubenswrapper[4772]: I1122 11:02:13.421680 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7\") " pod="openstack/nova-api-0" Nov 22 11:02:13 crc kubenswrapper[4772]: I1122 11:02:13.421734 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7-config-data\") pod \"nova-api-0\" (UID: \"eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7\") " pod="openstack/nova-api-0" Nov 22 11:02:13 crc kubenswrapper[4772]: I1122 11:02:13.421757 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7-internal-tls-certs\") pod \"nova-api-0\" (UID: \"eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7\") " pod="openstack/nova-api-0" Nov 22 11:02:13 crc kubenswrapper[4772]: I1122 11:02:13.421800 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7-logs\") pod \"nova-api-0\" (UID: \"eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7\") " pod="openstack/nova-api-0" Nov 22 11:02:13 crc kubenswrapper[4772]: I1122 11:02:13.422306 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7-logs\") pod \"nova-api-0\" (UID: \"eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7\") " pod="openstack/nova-api-0" Nov 22 11:02:13 crc kubenswrapper[4772]: I1122 11:02:13.426669 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7\") " pod="openstack/nova-api-0" Nov 22 11:02:13 crc kubenswrapper[4772]: I1122 11:02:13.426886 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7-public-tls-certs\") pod \"nova-api-0\" (UID: \"eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7\") " pod="openstack/nova-api-0" Nov 22 11:02:13 crc kubenswrapper[4772]: I1122 11:02:13.427425 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7-internal-tls-certs\") pod \"nova-api-0\" (UID: \"eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7\") " pod="openstack/nova-api-0" Nov 22 11:02:13 crc kubenswrapper[4772]: I1122 11:02:13.428211 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7-config-data\") pod \"nova-api-0\" (UID: \"eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7\") " pod="openstack/nova-api-0" Nov 22 11:02:13 crc kubenswrapper[4772]: I1122 11:02:13.432384 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7502e19b-9558-477a-86b6-d5c0da99a0b0" path="/var/lib/kubelet/pods/7502e19b-9558-477a-86b6-d5c0da99a0b0/volumes" Nov 22 11:02:13 crc kubenswrapper[4772]: I1122 11:02:13.439374 4772 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-6229f\" (UniqueName: \"kubernetes.io/projected/eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7-kube-api-access-6229f\") pod \"nova-api-0\" (UID: \"eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7\") " pod="openstack/nova-api-0" Nov 22 11:02:13 crc kubenswrapper[4772]: I1122 11:02:13.548403 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 22 11:02:14 crc kubenswrapper[4772]: I1122 11:02:14.023654 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 22 11:02:14 crc kubenswrapper[4772]: W1122 11:02:14.043316 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb6b69ed_e10f_4cde_9fcc_5ab41e148bb7.slice/crio-7c7350b0591f413c11070c9e8aa5526b12c89f3c07b3ee44570042e9d6a94c01 WatchSource:0}: Error finding container 7c7350b0591f413c11070c9e8aa5526b12c89f3c07b3ee44570042e9d6a94c01: Status 404 returned error can't find the container with id 7c7350b0591f413c11070c9e8aa5526b12c89f3c07b3ee44570042e9d6a94c01 Nov 22 11:02:14 crc kubenswrapper[4772]: I1122 11:02:14.164486 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2c6ce3fb-4529-4856-a326-bb0e9ea0ae40","Type":"ContainerStarted","Data":"f9a0eee7dffa91a6e45f6416ea3a94139da8a50ce7e232bc83c7eb4c3723250c"} Nov 22 11:02:14 crc kubenswrapper[4772]: I1122 11:02:14.168928 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7","Type":"ContainerStarted","Data":"7c7350b0591f413c11070c9e8aa5526b12c89f3c07b3ee44570042e9d6a94c01"} Nov 22 11:02:15 crc kubenswrapper[4772]: I1122 11:02:15.182109 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7","Type":"ContainerStarted","Data":"056ff354a11939a6cb7c6f7eb4674cafa978bb4d3dd22bcba14b2e68a86c3cd4"} Nov 22 11:02:15 crc kubenswrapper[4772]: I1122 11:02:15.182668 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7","Type":"ContainerStarted","Data":"24760fac5e2b3913833d69c7cbcc08ecc9881ed61545a901a8685580249aad35"} Nov 22 11:02:15 crc kubenswrapper[4772]: I1122 11:02:15.187809 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2c6ce3fb-4529-4856-a326-bb0e9ea0ae40","Type":"ContainerStarted","Data":"5e0e21415f001e21917dd3a32819ab9cb5db83f0c62abce63b6c361f7db09c0d"} Nov 22 11:02:15 crc kubenswrapper[4772]: I1122 11:02:15.209790 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.209763917 podStartE2EDuration="2.209763917s" podCreationTimestamp="2025-11-22 11:02:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 11:02:15.209129811 +0000 UTC m=+1455.448574325" watchObservedRunningTime="2025-11-22 11:02:15.209763917 +0000 UTC m=+1455.449208421" Nov 22 11:02:15 crc kubenswrapper[4772]: I1122 11:02:15.440124 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Nov 22 11:02:15 crc kubenswrapper[4772]: I1122 11:02:15.445527 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Nov 22 11:02:16 crc kubenswrapper[4772]: 
I1122 11:02:16.018254 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-59cf4bdb65-7f4sv" Nov 22 11:02:16 crc kubenswrapper[4772]: I1122 11:02:16.106294 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-z97q7"] Nov 22 11:02:16 crc kubenswrapper[4772]: I1122 11:02:16.107069 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-845d6d6f59-z97q7" podUID="7e0a890a-43a4-4264-8347-9c7421a368d5" containerName="dnsmasq-dns" containerID="cri-o://259f7deb3da83686c595241a814a4f052df1c38fd218c7700632102292c54993" gracePeriod=10 Nov 22 11:02:16 crc kubenswrapper[4772]: I1122 11:02:16.200010 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2c6ce3fb-4529-4856-a326-bb0e9ea0ae40","Type":"ContainerStarted","Data":"eb806120517e41fc28276787644b8c800bdb795e19ae53859d7279e00db21cb7"} Nov 22 11:02:16 crc kubenswrapper[4772]: I1122 11:02:16.201038 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 22 11:02:16 crc kubenswrapper[4772]: I1122 11:02:16.226650 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.93360965 podStartE2EDuration="5.22663351s" podCreationTimestamp="2025-11-22 11:02:11 +0000 UTC" firstStartedPulling="2025-11-22 11:02:12.020311637 +0000 UTC m=+1452.259756131" lastFinishedPulling="2025-11-22 11:02:15.313335497 +0000 UTC m=+1455.552779991" observedRunningTime="2025-11-22 11:02:16.22346599 +0000 UTC m=+1456.462910484" watchObservedRunningTime="2025-11-22 11:02:16.22663351 +0000 UTC m=+1456.466078004" Nov 22 11:02:16 crc kubenswrapper[4772]: I1122 11:02:16.230397 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Nov 22 11:02:16 crc kubenswrapper[4772]: I1122 11:02:16.488146 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-x8m56"] Nov 22 11:02:16 crc kubenswrapper[4772]: I1122 11:02:16.489549 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-x8m56" Nov 22 11:02:16 crc kubenswrapper[4772]: I1122 11:02:16.495518 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Nov 22 11:02:16 crc kubenswrapper[4772]: I1122 11:02:16.496214 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Nov 22 11:02:16 crc kubenswrapper[4772]: I1122 11:02:16.518005 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-x8m56"] Nov 22 11:02:16 crc kubenswrapper[4772]: I1122 11:02:16.622774 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c276d580-d011-43e7-b79a-f584ec487dd0-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-x8m56\" (UID: \"c276d580-d011-43e7-b79a-f584ec487dd0\") " pod="openstack/nova-cell1-cell-mapping-x8m56" Nov 22 11:02:16 crc kubenswrapper[4772]: I1122 11:02:16.623093 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c276d580-d011-43e7-b79a-f584ec487dd0-scripts\") pod \"nova-cell1-cell-mapping-x8m56\" (UID: \"c276d580-d011-43e7-b79a-f584ec487dd0\") " pod="openstack/nova-cell1-cell-mapping-x8m56" Nov 22 11:02:16 crc kubenswrapper[4772]: I1122 11:02:16.623231 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8h9z\" (UniqueName: \"kubernetes.io/projected/c276d580-d011-43e7-b79a-f584ec487dd0-kube-api-access-t8h9z\") pod \"nova-cell1-cell-mapping-x8m56\" (UID: \"c276d580-d011-43e7-b79a-f584ec487dd0\") " pod="openstack/nova-cell1-cell-mapping-x8m56" Nov 22 11:02:16 crc kubenswrapper[4772]: I1122 11:02:16.623331 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c276d580-d011-43e7-b79a-f584ec487dd0-config-data\") pod \"nova-cell1-cell-mapping-x8m56\" (UID: \"c276d580-d011-43e7-b79a-f584ec487dd0\") " pod="openstack/nova-cell1-cell-mapping-x8m56" Nov 22 11:02:16 crc kubenswrapper[4772]: I1122 11:02:16.703107 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-z97q7" Nov 22 11:02:16 crc kubenswrapper[4772]: I1122 11:02:16.724857 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7e0a890a-43a4-4264-8347-9c7421a368d5-ovsdbserver-nb\") pod \"7e0a890a-43a4-4264-8347-9c7421a368d5\" (UID: \"7e0a890a-43a4-4264-8347-9c7421a368d5\") " Nov 22 11:02:16 crc kubenswrapper[4772]: I1122 11:02:16.724948 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7e0a890a-43a4-4264-8347-9c7421a368d5-dns-svc\") pod \"7e0a890a-43a4-4264-8347-9c7421a368d5\" (UID: \"7e0a890a-43a4-4264-8347-9c7421a368d5\") " Nov 22 11:02:16 crc kubenswrapper[4772]: I1122 11:02:16.725138 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7e0a890a-43a4-4264-8347-9c7421a368d5-dns-swift-storage-0\") pod \"7e0a890a-43a4-4264-8347-9c7421a368d5\" (UID: \"7e0a890a-43a4-4264-8347-9c7421a368d5\") " Nov 22 11:02:16 crc kubenswrapper[4772]: I1122 11:02:16.725206 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7e0a890a-43a4-4264-8347-9c7421a368d5-ovsdbserver-sb\") pod \"7e0a890a-43a4-4264-8347-9c7421a368d5\" (UID: \"7e0a890a-43a4-4264-8347-9c7421a368d5\") " Nov 22 11:02:16 crc kubenswrapper[4772]: I1122 11:02:16.725290 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e0a890a-43a4-4264-8347-9c7421a368d5-config\") pod \"7e0a890a-43a4-4264-8347-9c7421a368d5\" (UID: \"7e0a890a-43a4-4264-8347-9c7421a368d5\") " Nov 22 11:02:16 crc kubenswrapper[4772]: I1122 11:02:16.725605 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c276d580-d011-43e7-b79a-f584ec487dd0-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-x8m56\" (UID: \"c276d580-d011-43e7-b79a-f584ec487dd0\") " pod="openstack/nova-cell1-cell-mapping-x8m56" Nov 22 11:02:16 crc kubenswrapper[4772]: I1122 11:02:16.725689 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c276d580-d011-43e7-b79a-f584ec487dd0-scripts\") pod \"nova-cell1-cell-mapping-x8m56\" (UID: \"c276d580-d011-43e7-b79a-f584ec487dd0\") " pod="openstack/nova-cell1-cell-mapping-x8m56" Nov 22 11:02:16 crc kubenswrapper[4772]: I1122 11:02:16.725728 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8h9z\" (UniqueName: \"kubernetes.io/projected/c276d580-d011-43e7-b79a-f584ec487dd0-kube-api-access-t8h9z\") pod \"nova-cell1-cell-mapping-x8m56\" (UID: \"c276d580-d011-43e7-b79a-f584ec487dd0\") " pod="openstack/nova-cell1-cell-mapping-x8m56" Nov 22 11:02:16 crc kubenswrapper[4772]: I1122 11:02:16.725766 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c276d580-d011-43e7-b79a-f584ec487dd0-config-data\") pod \"nova-cell1-cell-mapping-x8m56\" (UID: \"c276d580-d011-43e7-b79a-f584ec487dd0\") " pod="openstack/nova-cell1-cell-mapping-x8m56" Nov 22 11:02:16 crc kubenswrapper[4772]: I1122 11:02:16.738872 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c276d580-d011-43e7-b79a-f584ec487dd0-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-x8m56\" (UID: \"c276d580-d011-43e7-b79a-f584ec487dd0\") " pod="openstack/nova-cell1-cell-mapping-x8m56" Nov 22 11:02:16 crc kubenswrapper[4772]: I1122 11:02:16.739014 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c276d580-d011-43e7-b79a-f584ec487dd0-config-data\") pod \"nova-cell1-cell-mapping-x8m56\" (UID: \"c276d580-d011-43e7-b79a-f584ec487dd0\") " pod="openstack/nova-cell1-cell-mapping-x8m56" Nov 22 11:02:16 crc kubenswrapper[4772]: I1122 11:02:16.761278 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c276d580-d011-43e7-b79a-f584ec487dd0-scripts\") pod \"nova-cell1-cell-mapping-x8m56\" (UID: \"c276d580-d011-43e7-b79a-f584ec487dd0\") " pod="openstack/nova-cell1-cell-mapping-x8m56" Nov 22 11:02:16 crc kubenswrapper[4772]: I1122 11:02:16.766707 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8h9z\" (UniqueName: \"kubernetes.io/projected/c276d580-d011-43e7-b79a-f584ec487dd0-kube-api-access-t8h9z\") pod \"nova-cell1-cell-mapping-x8m56\" (UID: \"c276d580-d011-43e7-b79a-f584ec487dd0\") " pod="openstack/nova-cell1-cell-mapping-x8m56" Nov 22 11:02:16 crc kubenswrapper[4772]: I1122 11:02:16.810662 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e0a890a-43a4-4264-8347-9c7421a368d5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7e0a890a-43a4-4264-8347-9c7421a368d5" (UID: "7e0a890a-43a4-4264-8347-9c7421a368d5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 11:02:16 crc kubenswrapper[4772]: I1122 11:02:16.812136 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e0a890a-43a4-4264-8347-9c7421a368d5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7e0a890a-43a4-4264-8347-9c7421a368d5" (UID: "7e0a890a-43a4-4264-8347-9c7421a368d5"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 11:02:16 crc kubenswrapper[4772]: I1122 11:02:16.822477 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e0a890a-43a4-4264-8347-9c7421a368d5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7e0a890a-43a4-4264-8347-9c7421a368d5" (UID: "7e0a890a-43a4-4264-8347-9c7421a368d5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 11:02:16 crc kubenswrapper[4772]: I1122 11:02:16.825264 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e0a890a-43a4-4264-8347-9c7421a368d5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7e0a890a-43a4-4264-8347-9c7421a368d5" (UID: "7e0a890a-43a4-4264-8347-9c7421a368d5"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 11:02:16 crc kubenswrapper[4772]: I1122 11:02:16.827233 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dz6xl\" (UniqueName: \"kubernetes.io/projected/7e0a890a-43a4-4264-8347-9c7421a368d5-kube-api-access-dz6xl\") pod \"7e0a890a-43a4-4264-8347-9c7421a368d5\" (UID: \"7e0a890a-43a4-4264-8347-9c7421a368d5\") " Nov 22 11:02:16 crc kubenswrapper[4772]: I1122 11:02:16.827752 4772 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7e0a890a-43a4-4264-8347-9c7421a368d5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 22 11:02:16 crc kubenswrapper[4772]: I1122 11:02:16.827771 4772 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7e0a890a-43a4-4264-8347-9c7421a368d5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 22 11:02:16 crc kubenswrapper[4772]: I1122 11:02:16.827780 4772 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7e0a890a-43a4-4264-8347-9c7421a368d5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 11:02:16 crc kubenswrapper[4772]: I1122 11:02:16.827791 4772 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7e0a890a-43a4-4264-8347-9c7421a368d5-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 11:02:16 crc kubenswrapper[4772]: I1122 11:02:16.831109 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e0a890a-43a4-4264-8347-9c7421a368d5-kube-api-access-dz6xl" (OuterVolumeSpecName: "kube-api-access-dz6xl") pod "7e0a890a-43a4-4264-8347-9c7421a368d5" (UID: "7e0a890a-43a4-4264-8347-9c7421a368d5"). InnerVolumeSpecName "kube-api-access-dz6xl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:02:16 crc kubenswrapper[4772]: I1122 11:02:16.840726 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e0a890a-43a4-4264-8347-9c7421a368d5-config" (OuterVolumeSpecName: "config") pod "7e0a890a-43a4-4264-8347-9c7421a368d5" (UID: "7e0a890a-43a4-4264-8347-9c7421a368d5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 11:02:16 crc kubenswrapper[4772]: I1122 11:02:16.863728 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-x8m56" Nov 22 11:02:16 crc kubenswrapper[4772]: I1122 11:02:16.929309 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e0a890a-43a4-4264-8347-9c7421a368d5-config\") on node \"crc\" DevicePath \"\"" Nov 22 11:02:16 crc kubenswrapper[4772]: I1122 11:02:16.929348 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dz6xl\" (UniqueName: \"kubernetes.io/projected/7e0a890a-43a4-4264-8347-9c7421a368d5-kube-api-access-dz6xl\") on node \"crc\" DevicePath \"\"" Nov 22 11:02:17 crc kubenswrapper[4772]: I1122 11:02:17.211585 4772 generic.go:334] "Generic (PLEG): container finished" podID="7e0a890a-43a4-4264-8347-9c7421a368d5" containerID="259f7deb3da83686c595241a814a4f052df1c38fd218c7700632102292c54993" exitCode=0 Nov 22 11:02:17 crc kubenswrapper[4772]: I1122 11:02:17.211670 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-z97q7" Nov 22 11:02:17 crc kubenswrapper[4772]: I1122 11:02:17.211676 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-z97q7" event={"ID":"7e0a890a-43a4-4264-8347-9c7421a368d5","Type":"ContainerDied","Data":"259f7deb3da83686c595241a814a4f052df1c38fd218c7700632102292c54993"} Nov 22 11:02:17 crc kubenswrapper[4772]: I1122 11:02:17.212307 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-z97q7" event={"ID":"7e0a890a-43a4-4264-8347-9c7421a368d5","Type":"ContainerDied","Data":"b52f8d8106fb0a21625d7b71580afc4a90c5fafb7217256aa108ac5b3df0dcbc"} Nov 22 11:02:17 crc kubenswrapper[4772]: I1122 11:02:17.212344 4772 scope.go:117] "RemoveContainer" containerID="259f7deb3da83686c595241a814a4f052df1c38fd218c7700632102292c54993" Nov 22 11:02:17 crc kubenswrapper[4772]: I1122 11:02:17.286526 4772 scope.go:117] "RemoveContainer" containerID="52511b45b7647d1f39a9a699ced02c2266949083ee910ea2fa97306ad5f5af18" Nov 22 11:02:17 crc kubenswrapper[4772]: I1122 11:02:17.293165 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-x8m56"] Nov 22 11:02:17 crc kubenswrapper[4772]: W1122 11:02:17.296983 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc276d580_d011_43e7_b79a_f584ec487dd0.slice/crio-fe6ed2cf050c4273bfc638d8ebf8b013a3e9e32711a8d4a3959ed2cd0fd2f47a WatchSource:0}: Error finding container fe6ed2cf050c4273bfc638d8ebf8b013a3e9e32711a8d4a3959ed2cd0fd2f47a: Status 404 returned error can't find the container with id fe6ed2cf050c4273bfc638d8ebf8b013a3e9e32711a8d4a3959ed2cd0fd2f47a Nov 22 11:02:17 crc kubenswrapper[4772]: I1122 11:02:17.310787 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-z97q7"] Nov 22 11:02:17 crc kubenswrapper[4772]: I1122 11:02:17.312785 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-z97q7"] Nov 22 11:02:17 crc kubenswrapper[4772]: I1122 11:02:17.312806 4772 scope.go:117] "RemoveContainer" containerID="259f7deb3da83686c595241a814a4f052df1c38fd218c7700632102292c54993" Nov 22 11:02:17 crc kubenswrapper[4772]: E1122 11:02:17.313436 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"259f7deb3da83686c595241a814a4f052df1c38fd218c7700632102292c54993\": container with ID starting with 259f7deb3da83686c595241a814a4f052df1c38fd218c7700632102292c54993 not found: ID does not exist" containerID="259f7deb3da83686c595241a814a4f052df1c38fd218c7700632102292c54993" Nov 22 11:02:17 crc kubenswrapper[4772]: I1122 11:02:17.313463 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"259f7deb3da83686c595241a814a4f052df1c38fd218c7700632102292c54993"} err="failed to get container status \"259f7deb3da83686c595241a814a4f052df1c38fd218c7700632102292c54993\": rpc error: code = NotFound desc = could not find container \"259f7deb3da83686c595241a814a4f052df1c38fd218c7700632102292c54993\": container with ID starting with 259f7deb3da83686c595241a814a4f052df1c38fd218c7700632102292c54993 not found: ID does not exist" Nov 22 11:02:17 crc kubenswrapper[4772]: I1122 11:02:17.313506 4772 scope.go:117] "RemoveContainer" containerID="52511b45b7647d1f39a9a699ced02c2266949083ee910ea2fa97306ad5f5af18" Nov 22 11:02:17 crc kubenswrapper[4772]: E1122 
11:02:17.315468 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52511b45b7647d1f39a9a699ced02c2266949083ee910ea2fa97306ad5f5af18\": container with ID starting with 52511b45b7647d1f39a9a699ced02c2266949083ee910ea2fa97306ad5f5af18 not found: ID does not exist" containerID="52511b45b7647d1f39a9a699ced02c2266949083ee910ea2fa97306ad5f5af18" Nov 22 11:02:17 crc kubenswrapper[4772]: I1122 11:02:17.315495 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52511b45b7647d1f39a9a699ced02c2266949083ee910ea2fa97306ad5f5af18"} err="failed to get container status \"52511b45b7647d1f39a9a699ced02c2266949083ee910ea2fa97306ad5f5af18\": rpc error: code = NotFound desc = could not find container \"52511b45b7647d1f39a9a699ced02c2266949083ee910ea2fa97306ad5f5af18\": container with ID starting with 52511b45b7647d1f39a9a699ced02c2266949083ee910ea2fa97306ad5f5af18 not found: ID does not exist" Nov 22 11:02:17 crc kubenswrapper[4772]: I1122 11:02:17.424942 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e0a890a-43a4-4264-8347-9c7421a368d5" path="/var/lib/kubelet/pods/7e0a890a-43a4-4264-8347-9c7421a368d5/volumes" Nov 22 11:02:18 crc kubenswrapper[4772]: I1122 11:02:18.225285 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-x8m56" event={"ID":"c276d580-d011-43e7-b79a-f584ec487dd0","Type":"ContainerStarted","Data":"a1427fd2be53da6cc2337bfb8c045b8983f1ac992ce43349fdf6205ac77e793b"} Nov 22 11:02:18 crc kubenswrapper[4772]: I1122 11:02:18.225334 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-x8m56" event={"ID":"c276d580-d011-43e7-b79a-f584ec487dd0","Type":"ContainerStarted","Data":"fe6ed2cf050c4273bfc638d8ebf8b013a3e9e32711a8d4a3959ed2cd0fd2f47a"} Nov 22 11:02:18 crc kubenswrapper[4772]: I1122 11:02:18.239638 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-x8m56" podStartSLOduration=2.239623504 podStartE2EDuration="2.239623504s" podCreationTimestamp="2025-11-22 11:02:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 11:02:18.238859205 +0000 UTC m=+1458.478303699" watchObservedRunningTime="2025-11-22 11:02:18.239623504 +0000 UTC m=+1458.479067998" Nov 22 11:02:18 crc kubenswrapper[4772]: I1122 11:02:18.549230 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lcbnc"] Nov 22 11:02:18 crc kubenswrapper[4772]: E1122 11:02:18.549614 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e0a890a-43a4-4264-8347-9c7421a368d5" containerName="dnsmasq-dns" Nov 22 11:02:18 crc kubenswrapper[4772]: I1122 11:02:18.549630 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e0a890a-43a4-4264-8347-9c7421a368d5" containerName="dnsmasq-dns" Nov 22 11:02:18 crc kubenswrapper[4772]: E1122 11:02:18.549659 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e0a890a-43a4-4264-8347-9c7421a368d5" containerName="init" Nov 22 11:02:18 crc kubenswrapper[4772]: I1122 11:02:18.549666 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e0a890a-43a4-4264-8347-9c7421a368d5" containerName="init" Nov 22 11:02:18 crc kubenswrapper[4772]: I1122 11:02:18.549853 4772 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="7e0a890a-43a4-4264-8347-9c7421a368d5" containerName="dnsmasq-dns" Nov 22 11:02:18 crc kubenswrapper[4772]: I1122 11:02:18.551240 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lcbnc" Nov 22 11:02:18 crc kubenswrapper[4772]: I1122 11:02:18.562614 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lcbnc"] Nov 22 11:02:18 crc kubenswrapper[4772]: I1122 11:02:18.660497 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61b050a7-334a-4bc6-916e-c3431375e87f-utilities\") pod \"redhat-operators-lcbnc\" (UID: \"61b050a7-334a-4bc6-916e-c3431375e87f\") " pod="openshift-marketplace/redhat-operators-lcbnc" Nov 22 11:02:18 crc kubenswrapper[4772]: I1122 11:02:18.660740 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61b050a7-334a-4bc6-916e-c3431375e87f-catalog-content\") pod \"redhat-operators-lcbnc\" (UID: \"61b050a7-334a-4bc6-916e-c3431375e87f\") " pod="openshift-marketplace/redhat-operators-lcbnc" Nov 22 11:02:18 crc kubenswrapper[4772]: I1122 11:02:18.660834 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpmqp\" (UniqueName: \"kubernetes.io/projected/61b050a7-334a-4bc6-916e-c3431375e87f-kube-api-access-fpmqp\") pod \"redhat-operators-lcbnc\" (UID: \"61b050a7-334a-4bc6-916e-c3431375e87f\") " pod="openshift-marketplace/redhat-operators-lcbnc" Nov 22 11:02:18 crc kubenswrapper[4772]: I1122 11:02:18.762898 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61b050a7-334a-4bc6-916e-c3431375e87f-utilities\") pod \"redhat-operators-lcbnc\" (UID: \"61b050a7-334a-4bc6-916e-c3431375e87f\") " pod="openshift-marketplace/redhat-operators-lcbnc" Nov 22 11:02:18 crc kubenswrapper[4772]: I1122 11:02:18.763554 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61b050a7-334a-4bc6-916e-c3431375e87f-catalog-content\") pod \"redhat-operators-lcbnc\" (UID: \"61b050a7-334a-4bc6-916e-c3431375e87f\") " pod="openshift-marketplace/redhat-operators-lcbnc" Nov 22 11:02:18 crc kubenswrapper[4772]: I1122 11:02:18.763725 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpmqp\" (UniqueName: \"kubernetes.io/projected/61b050a7-334a-4bc6-916e-c3431375e87f-kube-api-access-fpmqp\") pod \"redhat-operators-lcbnc\" (UID: \"61b050a7-334a-4bc6-916e-c3431375e87f\") " pod="openshift-marketplace/redhat-operators-lcbnc" Nov 22 11:02:18 crc kubenswrapper[4772]: I1122 11:02:18.763891 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61b050a7-334a-4bc6-916e-c3431375e87f-catalog-content\") pod \"redhat-operators-lcbnc\" (UID: \"61b050a7-334a-4bc6-916e-c3431375e87f\") " pod="openshift-marketplace/redhat-operators-lcbnc" Nov 22 11:02:18 crc kubenswrapper[4772]: I1122 11:02:18.764003 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61b050a7-334a-4bc6-916e-c3431375e87f-utilities\") pod \"redhat-operators-lcbnc\" (UID: \"61b050a7-334a-4bc6-916e-c3431375e87f\") " 
pod="openshift-marketplace/redhat-operators-lcbnc" Nov 22 11:02:18 crc kubenswrapper[4772]: I1122 11:02:18.789978 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpmqp\" (UniqueName: \"kubernetes.io/projected/61b050a7-334a-4bc6-916e-c3431375e87f-kube-api-access-fpmqp\") pod \"redhat-operators-lcbnc\" (UID: \"61b050a7-334a-4bc6-916e-c3431375e87f\") " pod="openshift-marketplace/redhat-operators-lcbnc" Nov 22 11:02:18 crc kubenswrapper[4772]: I1122 11:02:18.870258 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lcbnc" Nov 22 11:02:19 crc kubenswrapper[4772]: I1122 11:02:19.378590 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lcbnc"] Nov 22 11:02:20 crc kubenswrapper[4772]: I1122 11:02:20.243712 4772 generic.go:334] "Generic (PLEG): container finished" podID="61b050a7-334a-4bc6-916e-c3431375e87f" containerID="b61240af1f0df40b6fadcdb9169198033926a1f59e6f4f9d64af0e0b93c4c221" exitCode=0 Nov 22 11:02:20 crc kubenswrapper[4772]: I1122 11:02:20.243774 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lcbnc" event={"ID":"61b050a7-334a-4bc6-916e-c3431375e87f","Type":"ContainerDied","Data":"b61240af1f0df40b6fadcdb9169198033926a1f59e6f4f9d64af0e0b93c4c221"} Nov 22 11:02:20 crc kubenswrapper[4772]: I1122 11:02:20.243990 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lcbnc" event={"ID":"61b050a7-334a-4bc6-916e-c3431375e87f","Type":"ContainerStarted","Data":"2164f4a9765c63c28a857583105108b0cfd16ca8eab3ba37af27514dfa228871"} Nov 22 11:02:21 crc kubenswrapper[4772]: I1122 11:02:21.254979 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lcbnc" event={"ID":"61b050a7-334a-4bc6-916e-c3431375e87f","Type":"ContainerStarted","Data":"9c855462a4841be90a10bfb78f6fcddc6b1ef7a6e63261ae31c25cdddc2ffc87"} Nov 22 11:02:22 crc kubenswrapper[4772]: I1122 11:02:22.265757 4772 generic.go:334] "Generic (PLEG): container finished" podID="61b050a7-334a-4bc6-916e-c3431375e87f" containerID="9c855462a4841be90a10bfb78f6fcddc6b1ef7a6e63261ae31c25cdddc2ffc87" exitCode=0 Nov 22 11:02:22 crc kubenswrapper[4772]: I1122 11:02:22.265899 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lcbnc" event={"ID":"61b050a7-334a-4bc6-916e-c3431375e87f","Type":"ContainerDied","Data":"9c855462a4841be90a10bfb78f6fcddc6b1ef7a6e63261ae31c25cdddc2ffc87"} Nov 22 11:02:23 crc kubenswrapper[4772]: I1122 11:02:23.549032 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 22 11:02:23 crc kubenswrapper[4772]: I1122 11:02:23.549350 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 22 11:02:24 crc kubenswrapper[4772]: I1122 11:02:24.283335 4772 generic.go:334] "Generic (PLEG): container finished" podID="c276d580-d011-43e7-b79a-f584ec487dd0" containerID="a1427fd2be53da6cc2337bfb8c045b8983f1ac992ce43349fdf6205ac77e793b" exitCode=0 Nov 22 11:02:24 crc kubenswrapper[4772]: I1122 11:02:24.283390 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-x8m56" event={"ID":"c276d580-d011-43e7-b79a-f584ec487dd0","Type":"ContainerDied","Data":"a1427fd2be53da6cc2337bfb8c045b8983f1ac992ce43349fdf6205ac77e793b"} Nov 22 11:02:24 crc kubenswrapper[4772]: I1122 
11:02:24.563175 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.200:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 11:02:24 crc kubenswrapper[4772]: I1122 11:02:24.563227 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.200:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 22 11:02:25 crc kubenswrapper[4772]: I1122 11:02:25.322562 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lcbnc" event={"ID":"61b050a7-334a-4bc6-916e-c3431375e87f","Type":"ContainerStarted","Data":"81afaf5cd86b4a9c34a0fac01c6a4783589cadbbc398e6c9f76f6f621b3a5a1a"} Nov 22 11:02:25 crc kubenswrapper[4772]: I1122 11:02:25.339682 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lcbnc" podStartSLOduration=2.571453864 podStartE2EDuration="7.339664019s" podCreationTimestamp="2025-11-22 11:02:18 +0000 UTC" firstStartedPulling="2025-11-22 11:02:20.245493212 +0000 UTC m=+1460.484937706" lastFinishedPulling="2025-11-22 11:02:25.013703367 +0000 UTC m=+1465.253147861" observedRunningTime="2025-11-22 11:02:25.338531061 +0000 UTC m=+1465.577975545" watchObservedRunningTime="2025-11-22 11:02:25.339664019 +0000 UTC m=+1465.579108523" Nov 22 11:02:26 crc kubenswrapper[4772]: I1122 11:02:25.691882 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-x8m56" Nov 22 11:02:26 crc kubenswrapper[4772]: I1122 11:02:25.729742 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c276d580-d011-43e7-b79a-f584ec487dd0-scripts\") pod \"c276d580-d011-43e7-b79a-f584ec487dd0\" (UID: \"c276d580-d011-43e7-b79a-f584ec487dd0\") " Nov 22 11:02:26 crc kubenswrapper[4772]: I1122 11:02:25.729900 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8h9z\" (UniqueName: \"kubernetes.io/projected/c276d580-d011-43e7-b79a-f584ec487dd0-kube-api-access-t8h9z\") pod \"c276d580-d011-43e7-b79a-f584ec487dd0\" (UID: \"c276d580-d011-43e7-b79a-f584ec487dd0\") " Nov 22 11:02:26 crc kubenswrapper[4772]: I1122 11:02:25.729978 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c276d580-d011-43e7-b79a-f584ec487dd0-combined-ca-bundle\") pod \"c276d580-d011-43e7-b79a-f584ec487dd0\" (UID: \"c276d580-d011-43e7-b79a-f584ec487dd0\") " Nov 22 11:02:26 crc kubenswrapper[4772]: I1122 11:02:25.730179 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c276d580-d011-43e7-b79a-f584ec487dd0-config-data\") pod \"c276d580-d011-43e7-b79a-f584ec487dd0\" (UID: \"c276d580-d011-43e7-b79a-f584ec487dd0\") " Nov 22 11:02:26 crc kubenswrapper[4772]: I1122 11:02:25.737188 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c276d580-d011-43e7-b79a-f584ec487dd0-scripts" (OuterVolumeSpecName: "scripts") pod "c276d580-d011-43e7-b79a-f584ec487dd0" (UID: 
"c276d580-d011-43e7-b79a-f584ec487dd0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:02:26 crc kubenswrapper[4772]: I1122 11:02:25.737320 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c276d580-d011-43e7-b79a-f584ec487dd0-kube-api-access-t8h9z" (OuterVolumeSpecName: "kube-api-access-t8h9z") pod "c276d580-d011-43e7-b79a-f584ec487dd0" (UID: "c276d580-d011-43e7-b79a-f584ec487dd0"). InnerVolumeSpecName "kube-api-access-t8h9z". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:02:26 crc kubenswrapper[4772]: I1122 11:02:25.770190 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c276d580-d011-43e7-b79a-f584ec487dd0-config-data" (OuterVolumeSpecName: "config-data") pod "c276d580-d011-43e7-b79a-f584ec487dd0" (UID: "c276d580-d011-43e7-b79a-f584ec487dd0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:02:26 crc kubenswrapper[4772]: I1122 11:02:25.778076 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c276d580-d011-43e7-b79a-f584ec487dd0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c276d580-d011-43e7-b79a-f584ec487dd0" (UID: "c276d580-d011-43e7-b79a-f584ec487dd0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:02:26 crc kubenswrapper[4772]: I1122 11:02:25.831951 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c276d580-d011-43e7-b79a-f584ec487dd0-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 11:02:26 crc kubenswrapper[4772]: I1122 11:02:25.831986 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c276d580-d011-43e7-b79a-f584ec487dd0-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 11:02:26 crc kubenswrapper[4772]: I1122 11:02:25.831998 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8h9z\" (UniqueName: \"kubernetes.io/projected/c276d580-d011-43e7-b79a-f584ec487dd0-kube-api-access-t8h9z\") on node \"crc\" DevicePath \"\"" Nov 22 11:02:26 crc kubenswrapper[4772]: I1122 11:02:25.832012 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c276d580-d011-43e7-b79a-f584ec487dd0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 11:02:26 crc kubenswrapper[4772]: I1122 11:02:26.338357 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-x8m56" event={"ID":"c276d580-d011-43e7-b79a-f584ec487dd0","Type":"ContainerDied","Data":"fe6ed2cf050c4273bfc638d8ebf8b013a3e9e32711a8d4a3959ed2cd0fd2f47a"} Nov 22 11:02:26 crc kubenswrapper[4772]: I1122 11:02:26.338395 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-x8m56" Nov 22 11:02:26 crc kubenswrapper[4772]: I1122 11:02:26.338414 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe6ed2cf050c4273bfc638d8ebf8b013a3e9e32711a8d4a3959ed2cd0fd2f47a" Nov 22 11:02:26 crc kubenswrapper[4772]: I1122 11:02:26.486770 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 22 11:02:26 crc kubenswrapper[4772]: I1122 11:02:26.487039 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7" containerName="nova-api-log" containerID="cri-o://24760fac5e2b3913833d69c7cbcc08ecc9881ed61545a901a8685580249aad35" gracePeriod=30 Nov 22 11:02:26 crc kubenswrapper[4772]: I1122 11:02:26.487110 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7" containerName="nova-api-api" containerID="cri-o://056ff354a11939a6cb7c6f7eb4674cafa978bb4d3dd22bcba14b2e68a86c3cd4" gracePeriod=30 Nov 22 11:02:26 crc kubenswrapper[4772]: I1122 11:02:26.506038 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 11:02:26 crc kubenswrapper[4772]: I1122 11:02:26.506350 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="1f2864f9-ec60-4f46-b8be-e83e62cbea68" containerName="nova-scheduler-scheduler" containerID="cri-o://ebdf6e23cc34d161b60670d04c1ec3b3fef6eec3102b2bcefd3bdd42321cee78" gracePeriod=30 Nov 22 11:02:26 crc kubenswrapper[4772]: I1122 11:02:26.519109 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 11:02:26 crc kubenswrapper[4772]: I1122 11:02:26.519525 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3cffe78d-a3e8-4fd3-a97a-03c361381b8b" containerName="nova-metadata-metadata" containerID="cri-o://31f7057dd4180088b13431bf5ba33899a475abdef2f3ebd4e1c314141844cc53" gracePeriod=30 Nov 22 11:02:26 crc kubenswrapper[4772]: I1122 11:02:26.519386 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3cffe78d-a3e8-4fd3-a97a-03c361381b8b" containerName="nova-metadata-log" containerID="cri-o://d25429e38c703e93c2adcab07f77d6c9e3e6238496e08aacd006a6c74b47ed9e" gracePeriod=30 Nov 22 11:02:27 crc kubenswrapper[4772]: I1122 11:02:27.357263 4772 generic.go:334] "Generic (PLEG): container finished" podID="eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7" containerID="24760fac5e2b3913833d69c7cbcc08ecc9881ed61545a901a8685580249aad35" exitCode=143 Nov 22 11:02:27 crc kubenswrapper[4772]: I1122 11:02:27.357370 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7","Type":"ContainerDied","Data":"24760fac5e2b3913833d69c7cbcc08ecc9881ed61545a901a8685580249aad35"} Nov 22 11:02:27 crc kubenswrapper[4772]: I1122 11:02:27.370410 4772 generic.go:334] "Generic (PLEG): container finished" podID="3cffe78d-a3e8-4fd3-a97a-03c361381b8b" containerID="d25429e38c703e93c2adcab07f77d6c9e3e6238496e08aacd006a6c74b47ed9e" exitCode=143 Nov 22 11:02:27 crc kubenswrapper[4772]: I1122 11:02:27.370452 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"3cffe78d-a3e8-4fd3-a97a-03c361381b8b","Type":"ContainerDied","Data":"d25429e38c703e93c2adcab07f77d6c9e3e6238496e08aacd006a6c74b47ed9e"} Nov 22 11:02:28 crc kubenswrapper[4772]: I1122 11:02:28.219972 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 11:02:28 crc kubenswrapper[4772]: I1122 11:02:28.273960 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f2864f9-ec60-4f46-b8be-e83e62cbea68-combined-ca-bundle\") pod \"1f2864f9-ec60-4f46-b8be-e83e62cbea68\" (UID: \"1f2864f9-ec60-4f46-b8be-e83e62cbea68\") " Nov 22 11:02:28 crc kubenswrapper[4772]: I1122 11:02:28.274348 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vlbb5\" (UniqueName: \"kubernetes.io/projected/1f2864f9-ec60-4f46-b8be-e83e62cbea68-kube-api-access-vlbb5\") pod \"1f2864f9-ec60-4f46-b8be-e83e62cbea68\" (UID: \"1f2864f9-ec60-4f46-b8be-e83e62cbea68\") " Nov 22 11:02:28 crc kubenswrapper[4772]: I1122 11:02:28.274409 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f2864f9-ec60-4f46-b8be-e83e62cbea68-config-data\") pod \"1f2864f9-ec60-4f46-b8be-e83e62cbea68\" (UID: \"1f2864f9-ec60-4f46-b8be-e83e62cbea68\") " Nov 22 11:02:28 crc kubenswrapper[4772]: I1122 11:02:28.287842 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f2864f9-ec60-4f46-b8be-e83e62cbea68-kube-api-access-vlbb5" (OuterVolumeSpecName: "kube-api-access-vlbb5") pod "1f2864f9-ec60-4f46-b8be-e83e62cbea68" (UID: "1f2864f9-ec60-4f46-b8be-e83e62cbea68"). InnerVolumeSpecName "kube-api-access-vlbb5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:02:28 crc kubenswrapper[4772]: I1122 11:02:28.303658 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f2864f9-ec60-4f46-b8be-e83e62cbea68-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f2864f9-ec60-4f46-b8be-e83e62cbea68" (UID: "1f2864f9-ec60-4f46-b8be-e83e62cbea68"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:02:28 crc kubenswrapper[4772]: I1122 11:02:28.304946 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f2864f9-ec60-4f46-b8be-e83e62cbea68-config-data" (OuterVolumeSpecName: "config-data") pod "1f2864f9-ec60-4f46-b8be-e83e62cbea68" (UID: "1f2864f9-ec60-4f46-b8be-e83e62cbea68"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:02:28 crc kubenswrapper[4772]: I1122 11:02:28.377024 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f2864f9-ec60-4f46-b8be-e83e62cbea68-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 11:02:28 crc kubenswrapper[4772]: I1122 11:02:28.377169 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f2864f9-ec60-4f46-b8be-e83e62cbea68-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 11:02:28 crc kubenswrapper[4772]: I1122 11:02:28.377185 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vlbb5\" (UniqueName: \"kubernetes.io/projected/1f2864f9-ec60-4f46-b8be-e83e62cbea68-kube-api-access-vlbb5\") on node \"crc\" DevicePath \"\"" Nov 22 11:02:28 crc kubenswrapper[4772]: I1122 11:02:28.381457 4772 generic.go:334] "Generic (PLEG): container finished" podID="1f2864f9-ec60-4f46-b8be-e83e62cbea68" containerID="ebdf6e23cc34d161b60670d04c1ec3b3fef6eec3102b2bcefd3bdd42321cee78" exitCode=0 Nov 22 11:02:28 crc kubenswrapper[4772]: I1122 11:02:28.381519 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 11:02:28 crc kubenswrapper[4772]: I1122 11:02:28.381533 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1f2864f9-ec60-4f46-b8be-e83e62cbea68","Type":"ContainerDied","Data":"ebdf6e23cc34d161b60670d04c1ec3b3fef6eec3102b2bcefd3bdd42321cee78"} Nov 22 11:02:28 crc kubenswrapper[4772]: I1122 11:02:28.381627 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1f2864f9-ec60-4f46-b8be-e83e62cbea68","Type":"ContainerDied","Data":"047577aec939000fc071b50546e33ebbec88f9f8f7b3beab1106b3f2db21b3a6"} Nov 22 11:02:28 crc kubenswrapper[4772]: I1122 11:02:28.381660 4772 scope.go:117] "RemoveContainer" containerID="ebdf6e23cc34d161b60670d04c1ec3b3fef6eec3102b2bcefd3bdd42321cee78" Nov 22 11:02:28 crc kubenswrapper[4772]: I1122 11:02:28.405246 4772 scope.go:117] "RemoveContainer" containerID="ebdf6e23cc34d161b60670d04c1ec3b3fef6eec3102b2bcefd3bdd42321cee78" Nov 22 11:02:28 crc kubenswrapper[4772]: E1122 11:02:28.405781 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebdf6e23cc34d161b60670d04c1ec3b3fef6eec3102b2bcefd3bdd42321cee78\": container with ID starting with ebdf6e23cc34d161b60670d04c1ec3b3fef6eec3102b2bcefd3bdd42321cee78 not found: ID does not exist" containerID="ebdf6e23cc34d161b60670d04c1ec3b3fef6eec3102b2bcefd3bdd42321cee78" Nov 22 11:02:28 crc kubenswrapper[4772]: I1122 11:02:28.405816 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebdf6e23cc34d161b60670d04c1ec3b3fef6eec3102b2bcefd3bdd42321cee78"} err="failed to get container status \"ebdf6e23cc34d161b60670d04c1ec3b3fef6eec3102b2bcefd3bdd42321cee78\": rpc error: code = NotFound desc = could not find container \"ebdf6e23cc34d161b60670d04c1ec3b3fef6eec3102b2bcefd3bdd42321cee78\": container with ID starting with ebdf6e23cc34d161b60670d04c1ec3b3fef6eec3102b2bcefd3bdd42321cee78 not found: ID does not exist" Nov 22 11:02:28 crc kubenswrapper[4772]: I1122 11:02:28.420314 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 11:02:28 crc kubenswrapper[4772]: I1122 11:02:28.427737 4772 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 11:02:28 crc kubenswrapper[4772]: I1122 11:02:28.453208 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 11:02:28 crc kubenswrapper[4772]: E1122 11:02:28.453762 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c276d580-d011-43e7-b79a-f584ec487dd0" containerName="nova-manage" Nov 22 11:02:28 crc kubenswrapper[4772]: I1122 11:02:28.453784 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="c276d580-d011-43e7-b79a-f584ec487dd0" containerName="nova-manage" Nov 22 11:02:28 crc kubenswrapper[4772]: E1122 11:02:28.453821 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f2864f9-ec60-4f46-b8be-e83e62cbea68" containerName="nova-scheduler-scheduler" Nov 22 11:02:28 crc kubenswrapper[4772]: I1122 11:02:28.453829 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f2864f9-ec60-4f46-b8be-e83e62cbea68" containerName="nova-scheduler-scheduler" Nov 22 11:02:28 crc kubenswrapper[4772]: I1122 11:02:28.454083 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f2864f9-ec60-4f46-b8be-e83e62cbea68" containerName="nova-scheduler-scheduler" Nov 22 11:02:28 crc kubenswrapper[4772]: I1122 11:02:28.454107 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="c276d580-d011-43e7-b79a-f584ec487dd0" containerName="nova-manage" Nov 22 11:02:28 crc kubenswrapper[4772]: I1122 11:02:28.454964 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 11:02:28 crc kubenswrapper[4772]: I1122 11:02:28.469551 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 11:02:28 crc kubenswrapper[4772]: I1122 11:02:28.470566 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 22 11:02:28 crc kubenswrapper[4772]: I1122 11:02:28.481964 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vs4g\" (UniqueName: \"kubernetes.io/projected/51f59313-1e0d-4877-9141-c32a7f72f84f-kube-api-access-2vs4g\") pod \"nova-scheduler-0\" (UID: \"51f59313-1e0d-4877-9141-c32a7f72f84f\") " pod="openstack/nova-scheduler-0" Nov 22 11:02:28 crc kubenswrapper[4772]: I1122 11:02:28.482240 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51f59313-1e0d-4877-9141-c32a7f72f84f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"51f59313-1e0d-4877-9141-c32a7f72f84f\") " pod="openstack/nova-scheduler-0" Nov 22 11:02:28 crc kubenswrapper[4772]: I1122 11:02:28.482666 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51f59313-1e0d-4877-9141-c32a7f72f84f-config-data\") pod \"nova-scheduler-0\" (UID: \"51f59313-1e0d-4877-9141-c32a7f72f84f\") " pod="openstack/nova-scheduler-0" Nov 22 11:02:28 crc kubenswrapper[4772]: I1122 11:02:28.587428 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51f59313-1e0d-4877-9141-c32a7f72f84f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"51f59313-1e0d-4877-9141-c32a7f72f84f\") " pod="openstack/nova-scheduler-0" Nov 22 11:02:28 crc kubenswrapper[4772]: I1122 11:02:28.587863 4772 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51f59313-1e0d-4877-9141-c32a7f72f84f-config-data\") pod \"nova-scheduler-0\" (UID: \"51f59313-1e0d-4877-9141-c32a7f72f84f\") " pod="openstack/nova-scheduler-0" Nov 22 11:02:28 crc kubenswrapper[4772]: I1122 11:02:28.588072 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vs4g\" (UniqueName: \"kubernetes.io/projected/51f59313-1e0d-4877-9141-c32a7f72f84f-kube-api-access-2vs4g\") pod \"nova-scheduler-0\" (UID: \"51f59313-1e0d-4877-9141-c32a7f72f84f\") " pod="openstack/nova-scheduler-0" Nov 22 11:02:28 crc kubenswrapper[4772]: I1122 11:02:28.591397 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51f59313-1e0d-4877-9141-c32a7f72f84f-config-data\") pod \"nova-scheduler-0\" (UID: \"51f59313-1e0d-4877-9141-c32a7f72f84f\") " pod="openstack/nova-scheduler-0" Nov 22 11:02:28 crc kubenswrapper[4772]: I1122 11:02:28.591548 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51f59313-1e0d-4877-9141-c32a7f72f84f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"51f59313-1e0d-4877-9141-c32a7f72f84f\") " pod="openstack/nova-scheduler-0" Nov 22 11:02:28 crc kubenswrapper[4772]: I1122 11:02:28.604100 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vs4g\" (UniqueName: \"kubernetes.io/projected/51f59313-1e0d-4877-9141-c32a7f72f84f-kube-api-access-2vs4g\") pod \"nova-scheduler-0\" (UID: \"51f59313-1e0d-4877-9141-c32a7f72f84f\") " pod="openstack/nova-scheduler-0" Nov 22 11:02:28 crc kubenswrapper[4772]: I1122 11:02:28.787650 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 11:02:28 crc kubenswrapper[4772]: I1122 11:02:28.871665 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lcbnc" Nov 22 11:02:28 crc kubenswrapper[4772]: I1122 11:02:28.871715 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lcbnc" Nov 22 11:02:29 crc kubenswrapper[4772]: I1122 11:02:29.324106 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 11:02:29 crc kubenswrapper[4772]: I1122 11:02:29.393341 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"51f59313-1e0d-4877-9141-c32a7f72f84f","Type":"ContainerStarted","Data":"6ceddf383a371a1e227e0abb2847453f9f6994bf12fc4eac453c227718d98eba"} Nov 22 11:02:29 crc kubenswrapper[4772]: I1122 11:02:29.425792 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f2864f9-ec60-4f46-b8be-e83e62cbea68" path="/var/lib/kubelet/pods/1f2864f9-ec60-4f46-b8be-e83e62cbea68/volumes" Nov 22 11:02:29 crc kubenswrapper[4772]: I1122 11:02:29.647415 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="3cffe78d-a3e8-4fd3-a97a-03c361381b8b" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.193:8775/\": read tcp 10.217.0.2:47360->10.217.0.193:8775: read: connection reset by peer" Nov 22 11:02:29 crc kubenswrapper[4772]: I1122 11:02:29.647416 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="3cffe78d-a3e8-4fd3-a97a-03c361381b8b" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.193:8775/\": read tcp 10.217.0.2:47358->10.217.0.193:8775: read: connection reset by peer" Nov 22 11:02:29 crc kubenswrapper[4772]: I1122 11:02:29.923329 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lcbnc" podUID="61b050a7-334a-4bc6-916e-c3431375e87f" containerName="registry-server" probeResult="failure" output=< Nov 22 11:02:29 crc kubenswrapper[4772]: timeout: failed to connect service ":50051" within 1s Nov 22 11:02:29 crc kubenswrapper[4772]: > Nov 22 11:02:30 crc kubenswrapper[4772]: I1122 11:02:30.410879 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"51f59313-1e0d-4877-9141-c32a7f72f84f","Type":"ContainerStarted","Data":"e5c751528fea2ac722ee321494f6ac8ae1afd4e1ad69103eb66eda03840cc558"} Nov 22 11:02:30 crc kubenswrapper[4772]: I1122 11:02:30.413496 4772 generic.go:334] "Generic (PLEG): container finished" podID="3cffe78d-a3e8-4fd3-a97a-03c361381b8b" containerID="31f7057dd4180088b13431bf5ba33899a475abdef2f3ebd4e1c314141844cc53" exitCode=0 Nov 22 11:02:30 crc kubenswrapper[4772]: I1122 11:02:30.413591 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3cffe78d-a3e8-4fd3-a97a-03c361381b8b","Type":"ContainerDied","Data":"31f7057dd4180088b13431bf5ba33899a475abdef2f3ebd4e1c314141844cc53"} Nov 22 11:02:30 crc kubenswrapper[4772]: I1122 11:02:30.417182 4772 generic.go:334] "Generic (PLEG): container finished" podID="eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7" containerID="056ff354a11939a6cb7c6f7eb4674cafa978bb4d3dd22bcba14b2e68a86c3cd4" exitCode=0 Nov 22 11:02:30 crc kubenswrapper[4772]: I1122 11:02:30.417208 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-api-0" event={"ID":"eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7","Type":"ContainerDied","Data":"056ff354a11939a6cb7c6f7eb4674cafa978bb4d3dd22bcba14b2e68a86c3cd4"} Nov 22 11:02:30 crc kubenswrapper[4772]: I1122 11:02:30.446087 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.446069713 podStartE2EDuration="2.446069713s" podCreationTimestamp="2025-11-22 11:02:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 11:02:30.438789111 +0000 UTC m=+1470.678233595" watchObservedRunningTime="2025-11-22 11:02:30.446069713 +0000 UTC m=+1470.685514207" Nov 22 11:02:30 crc kubenswrapper[4772]: I1122 11:02:30.709331 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 11:02:30 crc kubenswrapper[4772]: I1122 11:02:30.731908 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bsgbw\" (UniqueName: \"kubernetes.io/projected/3cffe78d-a3e8-4fd3-a97a-03c361381b8b-kube-api-access-bsgbw\") pod \"3cffe78d-a3e8-4fd3-a97a-03c361381b8b\" (UID: \"3cffe78d-a3e8-4fd3-a97a-03c361381b8b\") " Nov 22 11:02:30 crc kubenswrapper[4772]: I1122 11:02:30.732039 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3cffe78d-a3e8-4fd3-a97a-03c361381b8b-nova-metadata-tls-certs\") pod \"3cffe78d-a3e8-4fd3-a97a-03c361381b8b\" (UID: \"3cffe78d-a3e8-4fd3-a97a-03c361381b8b\") " Nov 22 11:02:30 crc kubenswrapper[4772]: I1122 11:02:30.732125 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cffe78d-a3e8-4fd3-a97a-03c361381b8b-combined-ca-bundle\") pod \"3cffe78d-a3e8-4fd3-a97a-03c361381b8b\" (UID: \"3cffe78d-a3e8-4fd3-a97a-03c361381b8b\") " Nov 22 11:02:30 crc kubenswrapper[4772]: I1122 11:02:30.732182 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cffe78d-a3e8-4fd3-a97a-03c361381b8b-config-data\") pod \"3cffe78d-a3e8-4fd3-a97a-03c361381b8b\" (UID: \"3cffe78d-a3e8-4fd3-a97a-03c361381b8b\") " Nov 22 11:02:30 crc kubenswrapper[4772]: I1122 11:02:30.732234 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3cffe78d-a3e8-4fd3-a97a-03c361381b8b-logs\") pod \"3cffe78d-a3e8-4fd3-a97a-03c361381b8b\" (UID: \"3cffe78d-a3e8-4fd3-a97a-03c361381b8b\") " Nov 22 11:02:30 crc kubenswrapper[4772]: I1122 11:02:30.733680 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cffe78d-a3e8-4fd3-a97a-03c361381b8b-logs" (OuterVolumeSpecName: "logs") pod "3cffe78d-a3e8-4fd3-a97a-03c361381b8b" (UID: "3cffe78d-a3e8-4fd3-a97a-03c361381b8b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:02:30 crc kubenswrapper[4772]: I1122 11:02:30.758221 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cffe78d-a3e8-4fd3-a97a-03c361381b8b-kube-api-access-bsgbw" (OuterVolumeSpecName: "kube-api-access-bsgbw") pod "3cffe78d-a3e8-4fd3-a97a-03c361381b8b" (UID: "3cffe78d-a3e8-4fd3-a97a-03c361381b8b"). InnerVolumeSpecName "kube-api-access-bsgbw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:02:30 crc kubenswrapper[4772]: I1122 11:02:30.807564 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cffe78d-a3e8-4fd3-a97a-03c361381b8b-config-data" (OuterVolumeSpecName: "config-data") pod "3cffe78d-a3e8-4fd3-a97a-03c361381b8b" (UID: "3cffe78d-a3e8-4fd3-a97a-03c361381b8b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:02:30 crc kubenswrapper[4772]: I1122 11:02:30.807597 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cffe78d-a3e8-4fd3-a97a-03c361381b8b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3cffe78d-a3e8-4fd3-a97a-03c361381b8b" (UID: "3cffe78d-a3e8-4fd3-a97a-03c361381b8b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:02:30 crc kubenswrapper[4772]: I1122 11:02:30.816966 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cffe78d-a3e8-4fd3-a97a-03c361381b8b-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "3cffe78d-a3e8-4fd3-a97a-03c361381b8b" (UID: "3cffe78d-a3e8-4fd3-a97a-03c361381b8b"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:02:30 crc kubenswrapper[4772]: I1122 11:02:30.845310 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bsgbw\" (UniqueName: \"kubernetes.io/projected/3cffe78d-a3e8-4fd3-a97a-03c361381b8b-kube-api-access-bsgbw\") on node \"crc\" DevicePath \"\"" Nov 22 11:02:30 crc kubenswrapper[4772]: I1122 11:02:30.845349 4772 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3cffe78d-a3e8-4fd3-a97a-03c361381b8b-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 11:02:30 crc kubenswrapper[4772]: I1122 11:02:30.845359 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cffe78d-a3e8-4fd3-a97a-03c361381b8b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 11:02:30 crc kubenswrapper[4772]: I1122 11:02:30.845368 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cffe78d-a3e8-4fd3-a97a-03c361381b8b-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 11:02:30 crc kubenswrapper[4772]: I1122 11:02:30.845378 4772 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3cffe78d-a3e8-4fd3-a97a-03c361381b8b-logs\") on node \"crc\" DevicePath \"\"" Nov 22 11:02:30 crc kubenswrapper[4772]: I1122 11:02:30.970090 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.049514 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7-public-tls-certs\") pod \"eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7\" (UID: \"eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7\") " Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.049582 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7-logs\") pod \"eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7\" (UID: \"eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7\") " Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.049652 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6229f\" (UniqueName: \"kubernetes.io/projected/eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7-kube-api-access-6229f\") pod \"eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7\" (UID: \"eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7\") " Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.049783 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7-combined-ca-bundle\") pod \"eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7\" (UID: \"eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7\") " Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.049908 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7-internal-tls-certs\") pod \"eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7\" (UID: \"eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7\") " Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.049950 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7-config-data\") pod \"eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7\" (UID: \"eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7\") " Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.050179 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7-logs" (OuterVolumeSpecName: "logs") pod "eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7" (UID: "eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.058077 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7-kube-api-access-6229f" (OuterVolumeSpecName: "kube-api-access-6229f") pod "eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7" (UID: "eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7"). InnerVolumeSpecName "kube-api-access-6229f". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.075858 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7-config-data" (OuterVolumeSpecName: "config-data") pod "eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7" (UID: "eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.090730 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7" (UID: "eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.112578 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7" (UID: "eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.123745 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7" (UID: "eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.152305 4772 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7-logs\") on node \"crc\" DevicePath \"\"" Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.152347 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6229f\" (UniqueName: \"kubernetes.io/projected/eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7-kube-api-access-6229f\") on node \"crc\" DevicePath \"\"" Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.152364 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.152379 4772 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.152390 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.152401 4772 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.438634 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.438632 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3cffe78d-a3e8-4fd3-a97a-03c361381b8b","Type":"ContainerDied","Data":"8a3ae76369e3a0878c326fe484b6fa0c6f15dcc84344919f78b803107eed889b"} Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.439966 4772 scope.go:117] "RemoveContainer" containerID="31f7057dd4180088b13431bf5ba33899a475abdef2f3ebd4e1c314141844cc53" Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.441116 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.441103 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7","Type":"ContainerDied","Data":"7c7350b0591f413c11070c9e8aa5526b12c89f3c07b3ee44570042e9d6a94c01"} Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.460536 4772 scope.go:117] "RemoveContainer" containerID="d25429e38c703e93c2adcab07f77d6c9e3e6238496e08aacd006a6c74b47ed9e" Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.483623 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.490325 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.508161 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 22 11:02:31 crc kubenswrapper[4772]: E1122 11:02:31.508980 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cffe78d-a3e8-4fd3-a97a-03c361381b8b" containerName="nova-metadata-log" Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.509016 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cffe78d-a3e8-4fd3-a97a-03c361381b8b" containerName="nova-metadata-log" Nov 22 11:02:31 crc kubenswrapper[4772]: E1122 11:02:31.509076 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cffe78d-a3e8-4fd3-a97a-03c361381b8b" containerName="nova-metadata-metadata" Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.509088 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cffe78d-a3e8-4fd3-a97a-03c361381b8b" containerName="nova-metadata-metadata" Nov 22 11:02:31 crc kubenswrapper[4772]: E1122 11:02:31.509106 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7" containerName="nova-api-log" Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.509113 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7" containerName="nova-api-log" Nov 22 11:02:31 crc kubenswrapper[4772]: E1122 11:02:31.509154 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7" containerName="nova-api-api" Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.509163 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7" containerName="nova-api-api" Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.509489 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7" containerName="nova-api-log" Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.509524 4772 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="3cffe78d-a3e8-4fd3-a97a-03c361381b8b" containerName="nova-metadata-metadata" Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.509551 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cffe78d-a3e8-4fd3-a97a-03c361381b8b" containerName="nova-metadata-log" Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.509572 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7" containerName="nova-api-api" Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.511409 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.517500 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.517728 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.526332 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.533878 4772 scope.go:117] "RemoveContainer" containerID="056ff354a11939a6cb7c6f7eb4674cafa978bb4d3dd22bcba14b2e68a86c3cd4" Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.538709 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.554547 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.572767 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkczn\" (UniqueName: \"kubernetes.io/projected/b8b92f55-36d8-4358-9b57-734762f225c4-kube-api-access-pkczn\") pod \"nova-metadata-0\" (UID: \"b8b92f55-36d8-4358-9b57-734762f225c4\") " pod="openstack/nova-metadata-0" Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.581347 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8b92f55-36d8-4358-9b57-734762f225c4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b8b92f55-36d8-4358-9b57-734762f225c4\") " pod="openstack/nova-metadata-0" Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.581457 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8b92f55-36d8-4358-9b57-734762f225c4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b8b92f55-36d8-4358-9b57-734762f225c4\") " pod="openstack/nova-metadata-0" Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.581672 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8b92f55-36d8-4358-9b57-734762f225c4-config-data\") pod \"nova-metadata-0\" (UID: \"b8b92f55-36d8-4358-9b57-734762f225c4\") " pod="openstack/nova-metadata-0" Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.581697 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8b92f55-36d8-4358-9b57-734762f225c4-logs\") pod \"nova-metadata-0\" (UID: \"b8b92f55-36d8-4358-9b57-734762f225c4\") " pod="openstack/nova-metadata-0" Nov 22 
11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.586596 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.603704 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.606664 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.607014 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.607563 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.627968 4772 scope.go:117] "RemoveContainer" containerID="24760fac5e2b3913833d69c7cbcc08ecc9881ed61545a901a8685580249aad35" Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.658136 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.684580 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56fb678a-814f-4328-8b49-9226512bf10e-logs\") pod \"nova-api-0\" (UID: \"56fb678a-814f-4328-8b49-9226512bf10e\") " pod="openstack/nova-api-0" Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.684718 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2c6dt\" (UniqueName: \"kubernetes.io/projected/56fb678a-814f-4328-8b49-9226512bf10e-kube-api-access-2c6dt\") pod \"nova-api-0\" (UID: \"56fb678a-814f-4328-8b49-9226512bf10e\") " pod="openstack/nova-api-0" Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.684824 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkczn\" (UniqueName: \"kubernetes.io/projected/b8b92f55-36d8-4358-9b57-734762f225c4-kube-api-access-pkczn\") pod \"nova-metadata-0\" (UID: \"b8b92f55-36d8-4358-9b57-734762f225c4\") " pod="openstack/nova-metadata-0" Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.684862 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56fb678a-814f-4328-8b49-9226512bf10e-config-data\") pod \"nova-api-0\" (UID: \"56fb678a-814f-4328-8b49-9226512bf10e\") " pod="openstack/nova-api-0" Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.684898 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8b92f55-36d8-4358-9b57-734762f225c4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b8b92f55-36d8-4358-9b57-734762f225c4\") " pod="openstack/nova-metadata-0" Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.684961 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8b92f55-36d8-4358-9b57-734762f225c4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b8b92f55-36d8-4358-9b57-734762f225c4\") " pod="openstack/nova-metadata-0" Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.685071 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/56fb678a-814f-4328-8b49-9226512bf10e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"56fb678a-814f-4328-8b49-9226512bf10e\") " pod="openstack/nova-api-0" Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.685132 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/56fb678a-814f-4328-8b49-9226512bf10e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"56fb678a-814f-4328-8b49-9226512bf10e\") " pod="openstack/nova-api-0" Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.685165 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/56fb678a-814f-4328-8b49-9226512bf10e-public-tls-certs\") pod \"nova-api-0\" (UID: \"56fb678a-814f-4328-8b49-9226512bf10e\") " pod="openstack/nova-api-0" Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.685201 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8b92f55-36d8-4358-9b57-734762f225c4-config-data\") pod \"nova-metadata-0\" (UID: \"b8b92f55-36d8-4358-9b57-734762f225c4\") " pod="openstack/nova-metadata-0" Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.685221 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8b92f55-36d8-4358-9b57-734762f225c4-logs\") pod \"nova-metadata-0\" (UID: \"b8b92f55-36d8-4358-9b57-734762f225c4\") " pod="openstack/nova-metadata-0" Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.685907 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8b92f55-36d8-4358-9b57-734762f225c4-logs\") pod \"nova-metadata-0\" (UID: \"b8b92f55-36d8-4358-9b57-734762f225c4\") " pod="openstack/nova-metadata-0" Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.691959 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8b92f55-36d8-4358-9b57-734762f225c4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b8b92f55-36d8-4358-9b57-734762f225c4\") " pod="openstack/nova-metadata-0" Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.691994 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8b92f55-36d8-4358-9b57-734762f225c4-config-data\") pod \"nova-metadata-0\" (UID: \"b8b92f55-36d8-4358-9b57-734762f225c4\") " pod="openstack/nova-metadata-0" Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.694542 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8b92f55-36d8-4358-9b57-734762f225c4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b8b92f55-36d8-4358-9b57-734762f225c4\") " pod="openstack/nova-metadata-0" Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.706633 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkczn\" (UniqueName: \"kubernetes.io/projected/b8b92f55-36d8-4358-9b57-734762f225c4-kube-api-access-pkczn\") pod \"nova-metadata-0\" (UID: \"b8b92f55-36d8-4358-9b57-734762f225c4\") " pod="openstack/nova-metadata-0" Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.786590 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/56fb678a-814f-4328-8b49-9226512bf10e-config-data\") pod \"nova-api-0\" (UID: \"56fb678a-814f-4328-8b49-9226512bf10e\") " pod="openstack/nova-api-0" Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.786787 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56fb678a-814f-4328-8b49-9226512bf10e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"56fb678a-814f-4328-8b49-9226512bf10e\") " pod="openstack/nova-api-0" Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.786834 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/56fb678a-814f-4328-8b49-9226512bf10e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"56fb678a-814f-4328-8b49-9226512bf10e\") " pod="openstack/nova-api-0" Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.786862 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/56fb678a-814f-4328-8b49-9226512bf10e-public-tls-certs\") pod \"nova-api-0\" (UID: \"56fb678a-814f-4328-8b49-9226512bf10e\") " pod="openstack/nova-api-0" Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.786930 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56fb678a-814f-4328-8b49-9226512bf10e-logs\") pod \"nova-api-0\" (UID: \"56fb678a-814f-4328-8b49-9226512bf10e\") " pod="openstack/nova-api-0" Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.786956 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2c6dt\" (UniqueName: \"kubernetes.io/projected/56fb678a-814f-4328-8b49-9226512bf10e-kube-api-access-2c6dt\") pod \"nova-api-0\" (UID: \"56fb678a-814f-4328-8b49-9226512bf10e\") " pod="openstack/nova-api-0" Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.787705 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56fb678a-814f-4328-8b49-9226512bf10e-logs\") pod \"nova-api-0\" (UID: \"56fb678a-814f-4328-8b49-9226512bf10e\") " pod="openstack/nova-api-0" Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.789871 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56fb678a-814f-4328-8b49-9226512bf10e-config-data\") pod \"nova-api-0\" (UID: \"56fb678a-814f-4328-8b49-9226512bf10e\") " pod="openstack/nova-api-0" Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.792121 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/56fb678a-814f-4328-8b49-9226512bf10e-public-tls-certs\") pod \"nova-api-0\" (UID: \"56fb678a-814f-4328-8b49-9226512bf10e\") " pod="openstack/nova-api-0" Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.793136 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56fb678a-814f-4328-8b49-9226512bf10e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"56fb678a-814f-4328-8b49-9226512bf10e\") " pod="openstack/nova-api-0" Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.801849 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/56fb678a-814f-4328-8b49-9226512bf10e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"56fb678a-814f-4328-8b49-9226512bf10e\") " pod="openstack/nova-api-0" Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.805385 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2c6dt\" (UniqueName: \"kubernetes.io/projected/56fb678a-814f-4328-8b49-9226512bf10e-kube-api-access-2c6dt\") pod \"nova-api-0\" (UID: \"56fb678a-814f-4328-8b49-9226512bf10e\") " pod="openstack/nova-api-0" Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.886459 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 11:02:31 crc kubenswrapper[4772]: I1122 11:02:31.930898 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 22 11:02:32 crc kubenswrapper[4772]: I1122 11:02:32.326484 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 11:02:32 crc kubenswrapper[4772]: I1122 11:02:32.421693 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 22 11:02:32 crc kubenswrapper[4772]: W1122 11:02:32.426953 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod56fb678a_814f_4328_8b49_9226512bf10e.slice/crio-788db6820dc60e8f870454dad1039f3b9cb38f35945534a401b7ac384d1fc2d5 WatchSource:0}: Error finding container 788db6820dc60e8f870454dad1039f3b9cb38f35945534a401b7ac384d1fc2d5: Status 404 returned error can't find the container with id 788db6820dc60e8f870454dad1039f3b9cb38f35945534a401b7ac384d1fc2d5 Nov 22 11:02:32 crc kubenswrapper[4772]: I1122 11:02:32.452382 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"56fb678a-814f-4328-8b49-9226512bf10e","Type":"ContainerStarted","Data":"788db6820dc60e8f870454dad1039f3b9cb38f35945534a401b7ac384d1fc2d5"} Nov 22 11:02:32 crc kubenswrapper[4772]: I1122 11:02:32.453619 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b8b92f55-36d8-4358-9b57-734762f225c4","Type":"ContainerStarted","Data":"764b6bb6a5ee39e7047bd726ab180a64aa79251d5579b833c4a5dceea602f6f4"} Nov 22 11:02:33 crc kubenswrapper[4772]: I1122 11:02:33.423819 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cffe78d-a3e8-4fd3-a97a-03c361381b8b" path="/var/lib/kubelet/pods/3cffe78d-a3e8-4fd3-a97a-03c361381b8b/volumes" Nov 22 11:02:33 crc kubenswrapper[4772]: I1122 11:02:33.425027 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7" path="/var/lib/kubelet/pods/eb6b69ed-e10f-4cde-9fcc-5ab41e148bb7/volumes" Nov 22 11:02:33 crc kubenswrapper[4772]: I1122 11:02:33.464967 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"56fb678a-814f-4328-8b49-9226512bf10e","Type":"ContainerStarted","Data":"87ddd492904d37b209c29a1ac6b15eb1b0ea478fe5009f7ccf2abbf8a98a52e8"} Nov 22 11:02:33 crc kubenswrapper[4772]: I1122 11:02:33.465013 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"56fb678a-814f-4328-8b49-9226512bf10e","Type":"ContainerStarted","Data":"ab42698c086434daa53f114239b03f9f09d42a90f526e9da455ca5fa44319783"} Nov 22 11:02:33 crc kubenswrapper[4772]: I1122 11:02:33.467896 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"b8b92f55-36d8-4358-9b57-734762f225c4","Type":"ContainerStarted","Data":"241a8eaefd8f667894261b22a62dc20eac70a2ffa0b3309654a9b9bcc88514de"} Nov 22 11:02:33 crc kubenswrapper[4772]: I1122 11:02:33.467941 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b8b92f55-36d8-4358-9b57-734762f225c4","Type":"ContainerStarted","Data":"214c2acb33ea5a782af6be55ddcf02954762b130146c3e714c36840852bfafb4"} Nov 22 11:02:33 crc kubenswrapper[4772]: I1122 11:02:33.487397 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.487376627 podStartE2EDuration="2.487376627s" podCreationTimestamp="2025-11-22 11:02:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 11:02:33.48150179 +0000 UTC m=+1473.720946304" watchObservedRunningTime="2025-11-22 11:02:33.487376627 +0000 UTC m=+1473.726821121" Nov 22 11:02:33 crc kubenswrapper[4772]: I1122 11:02:33.506422 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.506400623 podStartE2EDuration="2.506400623s" podCreationTimestamp="2025-11-22 11:02:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 11:02:33.500083035 +0000 UTC m=+1473.739527529" watchObservedRunningTime="2025-11-22 11:02:33.506400623 +0000 UTC m=+1473.745845137" Nov 22 11:02:33 crc kubenswrapper[4772]: I1122 11:02:33.788510 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 22 11:02:36 crc kubenswrapper[4772]: I1122 11:02:36.887670 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 22 11:02:36 crc kubenswrapper[4772]: I1122 11:02:36.888077 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 22 11:02:38 crc kubenswrapper[4772]: I1122 11:02:38.789248 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 22 11:02:38 crc kubenswrapper[4772]: I1122 11:02:38.822786 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 22 11:02:38 crc kubenswrapper[4772]: I1122 11:02:38.922146 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lcbnc" Nov 22 11:02:38 crc kubenswrapper[4772]: I1122 11:02:38.976221 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lcbnc" Nov 22 11:02:39 crc kubenswrapper[4772]: I1122 11:02:39.555958 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 22 11:02:39 crc kubenswrapper[4772]: I1122 11:02:39.843008 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lcbnc"] Nov 22 11:02:40 crc kubenswrapper[4772]: I1122 11:02:40.533149 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lcbnc" podUID="61b050a7-334a-4bc6-916e-c3431375e87f" containerName="registry-server" containerID="cri-o://81afaf5cd86b4a9c34a0fac01c6a4783589cadbbc398e6c9f76f6f621b3a5a1a" gracePeriod=2 Nov 22 11:02:41 crc kubenswrapper[4772]: I1122 11:02:41.553843 4772 
generic.go:334] "Generic (PLEG): container finished" podID="61b050a7-334a-4bc6-916e-c3431375e87f" containerID="81afaf5cd86b4a9c34a0fac01c6a4783589cadbbc398e6c9f76f6f621b3a5a1a" exitCode=0 Nov 22 11:02:41 crc kubenswrapper[4772]: I1122 11:02:41.553928 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lcbnc" event={"ID":"61b050a7-334a-4bc6-916e-c3431375e87f","Type":"ContainerDied","Data":"81afaf5cd86b4a9c34a0fac01c6a4783589cadbbc398e6c9f76f6f621b3a5a1a"} Nov 22 11:02:41 crc kubenswrapper[4772]: I1122 11:02:41.554866 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 22 11:02:41 crc kubenswrapper[4772]: I1122 11:02:41.659797 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lcbnc" Nov 22 11:02:41 crc kubenswrapper[4772]: I1122 11:02:41.783884 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61b050a7-334a-4bc6-916e-c3431375e87f-utilities\") pod \"61b050a7-334a-4bc6-916e-c3431375e87f\" (UID: \"61b050a7-334a-4bc6-916e-c3431375e87f\") " Nov 22 11:02:41 crc kubenswrapper[4772]: I1122 11:02:41.784099 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpmqp\" (UniqueName: \"kubernetes.io/projected/61b050a7-334a-4bc6-916e-c3431375e87f-kube-api-access-fpmqp\") pod \"61b050a7-334a-4bc6-916e-c3431375e87f\" (UID: \"61b050a7-334a-4bc6-916e-c3431375e87f\") " Nov 22 11:02:41 crc kubenswrapper[4772]: I1122 11:02:41.784160 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61b050a7-334a-4bc6-916e-c3431375e87f-catalog-content\") pod \"61b050a7-334a-4bc6-916e-c3431375e87f\" (UID: \"61b050a7-334a-4bc6-916e-c3431375e87f\") " Nov 22 11:02:41 crc kubenswrapper[4772]: I1122 11:02:41.788722 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61b050a7-334a-4bc6-916e-c3431375e87f-utilities" (OuterVolumeSpecName: "utilities") pod "61b050a7-334a-4bc6-916e-c3431375e87f" (UID: "61b050a7-334a-4bc6-916e-c3431375e87f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:02:41 crc kubenswrapper[4772]: I1122 11:02:41.886249 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61b050a7-334a-4bc6-916e-c3431375e87f-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 11:02:41 crc kubenswrapper[4772]: I1122 11:02:41.891725 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 22 11:02:41 crc kubenswrapper[4772]: I1122 11:02:41.892608 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 22 11:02:41 crc kubenswrapper[4772]: I1122 11:02:41.900213 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61b050a7-334a-4bc6-916e-c3431375e87f-kube-api-access-fpmqp" (OuterVolumeSpecName: "kube-api-access-fpmqp") pod "61b050a7-334a-4bc6-916e-c3431375e87f" (UID: "61b050a7-334a-4bc6-916e-c3431375e87f"). InnerVolumeSpecName "kube-api-access-fpmqp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:02:41 crc kubenswrapper[4772]: I1122 11:02:41.932043 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 22 11:02:41 crc kubenswrapper[4772]: I1122 11:02:41.932118 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 22 11:02:41 crc kubenswrapper[4772]: I1122 11:02:41.942035 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61b050a7-334a-4bc6-916e-c3431375e87f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "61b050a7-334a-4bc6-916e-c3431375e87f" (UID: "61b050a7-334a-4bc6-916e-c3431375e87f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:02:41 crc kubenswrapper[4772]: I1122 11:02:41.988628 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpmqp\" (UniqueName: \"kubernetes.io/projected/61b050a7-334a-4bc6-916e-c3431375e87f-kube-api-access-fpmqp\") on node \"crc\" DevicePath \"\"" Nov 22 11:02:41 crc kubenswrapper[4772]: I1122 11:02:41.988674 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61b050a7-334a-4bc6-916e-c3431375e87f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 11:02:42 crc kubenswrapper[4772]: I1122 11:02:42.566929 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lcbnc" event={"ID":"61b050a7-334a-4bc6-916e-c3431375e87f","Type":"ContainerDied","Data":"2164f4a9765c63c28a857583105108b0cfd16ca8eab3ba37af27514dfa228871"} Nov 22 11:02:42 crc kubenswrapper[4772]: I1122 11:02:42.566960 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lcbnc" Nov 22 11:02:42 crc kubenswrapper[4772]: I1122 11:02:42.566998 4772 scope.go:117] "RemoveContainer" containerID="81afaf5cd86b4a9c34a0fac01c6a4783589cadbbc398e6c9f76f6f621b3a5a1a" Nov 22 11:02:42 crc kubenswrapper[4772]: I1122 11:02:42.588520 4772 scope.go:117] "RemoveContainer" containerID="9c855462a4841be90a10bfb78f6fcddc6b1ef7a6e63261ae31c25cdddc2ffc87" Nov 22 11:02:42 crc kubenswrapper[4772]: I1122 11:02:42.615709 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lcbnc"] Nov 22 11:02:42 crc kubenswrapper[4772]: I1122 11:02:42.621373 4772 scope.go:117] "RemoveContainer" containerID="b61240af1f0df40b6fadcdb9169198033926a1f59e6f4f9d64af0e0b93c4c221" Nov 22 11:02:42 crc kubenswrapper[4772]: I1122 11:02:42.624961 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lcbnc"] Nov 22 11:02:42 crc kubenswrapper[4772]: I1122 11:02:42.904349 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b8b92f55-36d8-4358-9b57-734762f225c4" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.204:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 22 11:02:42 crc kubenswrapper[4772]: I1122 11:02:42.904364 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b8b92f55-36d8-4358-9b57-734762f225c4" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.204:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 22 11:02:42 crc kubenswrapper[4772]: I1122 11:02:42.946330 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="56fb678a-814f-4328-8b49-9226512bf10e" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.205:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 22 11:02:42 crc kubenswrapper[4772]: I1122 11:02:42.946573 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="56fb678a-814f-4328-8b49-9226512bf10e" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.205:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 22 11:02:43 crc kubenswrapper[4772]: I1122 11:02:43.426110 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61b050a7-334a-4bc6-916e-c3431375e87f" path="/var/lib/kubelet/pods/61b050a7-334a-4bc6-916e-c3431375e87f/volumes" Nov 22 11:02:45 crc kubenswrapper[4772]: I1122 11:02:45.431769 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2pnvv"] Nov 22 11:02:45 crc kubenswrapper[4772]: E1122 11:02:45.433475 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61b050a7-334a-4bc6-916e-c3431375e87f" containerName="registry-server" Nov 22 11:02:45 crc kubenswrapper[4772]: I1122 11:02:45.433641 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="61b050a7-334a-4bc6-916e-c3431375e87f" containerName="registry-server" Nov 22 11:02:45 crc kubenswrapper[4772]: E1122 11:02:45.433757 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61b050a7-334a-4bc6-916e-c3431375e87f" containerName="extract-content" Nov 22 11:02:45 crc kubenswrapper[4772]: I1122 11:02:45.433828 4772 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="61b050a7-334a-4bc6-916e-c3431375e87f" containerName="extract-content" Nov 22 11:02:45 crc kubenswrapper[4772]: E1122 11:02:45.433930 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61b050a7-334a-4bc6-916e-c3431375e87f" containerName="extract-utilities" Nov 22 11:02:45 crc kubenswrapper[4772]: I1122 11:02:45.434004 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="61b050a7-334a-4bc6-916e-c3431375e87f" containerName="extract-utilities" Nov 22 11:02:45 crc kubenswrapper[4772]: I1122 11:02:45.434335 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="61b050a7-334a-4bc6-916e-c3431375e87f" containerName="registry-server" Nov 22 11:02:45 crc kubenswrapper[4772]: I1122 11:02:45.436084 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2pnvv"] Nov 22 11:02:45 crc kubenswrapper[4772]: I1122 11:02:45.436268 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2pnvv" Nov 22 11:02:45 crc kubenswrapper[4772]: I1122 11:02:45.587232 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4455aa9-5591-4f78-bcdd-21ae5f4078ac-utilities\") pod \"community-operators-2pnvv\" (UID: \"a4455aa9-5591-4f78-bcdd-21ae5f4078ac\") " pod="openshift-marketplace/community-operators-2pnvv" Nov 22 11:02:45 crc kubenswrapper[4772]: I1122 11:02:45.587327 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4455aa9-5591-4f78-bcdd-21ae5f4078ac-catalog-content\") pod \"community-operators-2pnvv\" (UID: \"a4455aa9-5591-4f78-bcdd-21ae5f4078ac\") " pod="openshift-marketplace/community-operators-2pnvv" Nov 22 11:02:45 crc kubenswrapper[4772]: I1122 11:02:45.587352 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kk6l\" (UniqueName: \"kubernetes.io/projected/a4455aa9-5591-4f78-bcdd-21ae5f4078ac-kube-api-access-5kk6l\") pod \"community-operators-2pnvv\" (UID: \"a4455aa9-5591-4f78-bcdd-21ae5f4078ac\") " pod="openshift-marketplace/community-operators-2pnvv" Nov 22 11:02:45 crc kubenswrapper[4772]: I1122 11:02:45.688580 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4455aa9-5591-4f78-bcdd-21ae5f4078ac-utilities\") pod \"community-operators-2pnvv\" (UID: \"a4455aa9-5591-4f78-bcdd-21ae5f4078ac\") " pod="openshift-marketplace/community-operators-2pnvv" Nov 22 11:02:45 crc kubenswrapper[4772]: I1122 11:02:45.688652 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4455aa9-5591-4f78-bcdd-21ae5f4078ac-catalog-content\") pod \"community-operators-2pnvv\" (UID: \"a4455aa9-5591-4f78-bcdd-21ae5f4078ac\") " pod="openshift-marketplace/community-operators-2pnvv" Nov 22 11:02:45 crc kubenswrapper[4772]: I1122 11:02:45.688674 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kk6l\" (UniqueName: \"kubernetes.io/projected/a4455aa9-5591-4f78-bcdd-21ae5f4078ac-kube-api-access-5kk6l\") pod \"community-operators-2pnvv\" (UID: \"a4455aa9-5591-4f78-bcdd-21ae5f4078ac\") " pod="openshift-marketplace/community-operators-2pnvv" Nov 22 11:02:45 crc kubenswrapper[4772]: I1122 
11:02:45.689334 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4455aa9-5591-4f78-bcdd-21ae5f4078ac-catalog-content\") pod \"community-operators-2pnvv\" (UID: \"a4455aa9-5591-4f78-bcdd-21ae5f4078ac\") " pod="openshift-marketplace/community-operators-2pnvv" Nov 22 11:02:45 crc kubenswrapper[4772]: I1122 11:02:45.689378 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4455aa9-5591-4f78-bcdd-21ae5f4078ac-utilities\") pod \"community-operators-2pnvv\" (UID: \"a4455aa9-5591-4f78-bcdd-21ae5f4078ac\") " pod="openshift-marketplace/community-operators-2pnvv" Nov 22 11:02:45 crc kubenswrapper[4772]: I1122 11:02:45.714829 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kk6l\" (UniqueName: \"kubernetes.io/projected/a4455aa9-5591-4f78-bcdd-21ae5f4078ac-kube-api-access-5kk6l\") pod \"community-operators-2pnvv\" (UID: \"a4455aa9-5591-4f78-bcdd-21ae5f4078ac\") " pod="openshift-marketplace/community-operators-2pnvv" Nov 22 11:02:45 crc kubenswrapper[4772]: I1122 11:02:45.765835 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2pnvv" Nov 22 11:02:46 crc kubenswrapper[4772]: I1122 11:02:46.316211 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2pnvv"] Nov 22 11:02:46 crc kubenswrapper[4772]: I1122 11:02:46.603002 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2pnvv" event={"ID":"a4455aa9-5591-4f78-bcdd-21ae5f4078ac","Type":"ContainerStarted","Data":"be1f62233adddfd24b5e57f709004647e848835d00d776483c21d1ca2e519b96"} Nov 22 11:02:47 crc kubenswrapper[4772]: I1122 11:02:47.614840 4772 generic.go:334] "Generic (PLEG): container finished" podID="a4455aa9-5591-4f78-bcdd-21ae5f4078ac" containerID="1118452e3039d27b08163bf8bdf34e34f7d96a4dd31c9ddae80f8623892df1a4" exitCode=0 Nov 22 11:02:47 crc kubenswrapper[4772]: I1122 11:02:47.614880 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2pnvv" event={"ID":"a4455aa9-5591-4f78-bcdd-21ae5f4078ac","Type":"ContainerDied","Data":"1118452e3039d27b08163bf8bdf34e34f7d96a4dd31c9ddae80f8623892df1a4"} Nov 22 11:02:48 crc kubenswrapper[4772]: I1122 11:02:48.626857 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2pnvv" event={"ID":"a4455aa9-5591-4f78-bcdd-21ae5f4078ac","Type":"ContainerStarted","Data":"25e38baee70855577366294828d6c5113afa3ab1673eb711834adf4f6af46e32"} Nov 22 11:02:49 crc kubenswrapper[4772]: I1122 11:02:49.636218 4772 generic.go:334] "Generic (PLEG): container finished" podID="a4455aa9-5591-4f78-bcdd-21ae5f4078ac" containerID="25e38baee70855577366294828d6c5113afa3ab1673eb711834adf4f6af46e32" exitCode=0 Nov 22 11:02:49 crc kubenswrapper[4772]: I1122 11:02:49.636282 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2pnvv" event={"ID":"a4455aa9-5591-4f78-bcdd-21ae5f4078ac","Type":"ContainerDied","Data":"25e38baee70855577366294828d6c5113afa3ab1673eb711834adf4f6af46e32"} Nov 22 11:02:50 crc kubenswrapper[4772]: I1122 11:02:50.648260 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2pnvv" 
event={"ID":"a4455aa9-5591-4f78-bcdd-21ae5f4078ac","Type":"ContainerStarted","Data":"e4aaaf2e3f28879af0b8c4d938221c741025251a1807568042e9eda3e4c9260a"} Nov 22 11:02:50 crc kubenswrapper[4772]: I1122 11:02:50.674124 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2pnvv" podStartSLOduration=3.211583665 podStartE2EDuration="5.674104754s" podCreationTimestamp="2025-11-22 11:02:45 +0000 UTC" firstStartedPulling="2025-11-22 11:02:47.617268131 +0000 UTC m=+1487.856712615" lastFinishedPulling="2025-11-22 11:02:50.07978922 +0000 UTC m=+1490.319233704" observedRunningTime="2025-11-22 11:02:50.665411127 +0000 UTC m=+1490.904855621" watchObservedRunningTime="2025-11-22 11:02:50.674104754 +0000 UTC m=+1490.913549238" Nov 22 11:02:51 crc kubenswrapper[4772]: I1122 11:02:51.895088 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 22 11:02:51 crc kubenswrapper[4772]: I1122 11:02:51.897120 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 22 11:02:51 crc kubenswrapper[4772]: I1122 11:02:51.901919 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 22 11:02:51 crc kubenswrapper[4772]: I1122 11:02:51.938060 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 22 11:02:51 crc kubenswrapper[4772]: I1122 11:02:51.938844 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 22 11:02:51 crc kubenswrapper[4772]: I1122 11:02:51.947111 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 22 11:02:51 crc kubenswrapper[4772]: I1122 11:02:51.952486 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 22 11:02:52 crc kubenswrapper[4772]: I1122 11:02:52.666152 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 22 11:02:52 crc kubenswrapper[4772]: I1122 11:02:52.670184 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 22 11:02:52 crc kubenswrapper[4772]: I1122 11:02:52.671145 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 22 11:02:55 crc kubenswrapper[4772]: I1122 11:02:55.766323 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2pnvv" Nov 22 11:02:55 crc kubenswrapper[4772]: I1122 11:02:55.766675 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2pnvv" Nov 22 11:02:55 crc kubenswrapper[4772]: I1122 11:02:55.811319 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2pnvv" Nov 22 11:02:56 crc kubenswrapper[4772]: I1122 11:02:56.748023 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2pnvv" Nov 22 11:02:56 crc kubenswrapper[4772]: I1122 11:02:56.790463 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2pnvv"] Nov 22 11:02:58 crc kubenswrapper[4772]: I1122 11:02:58.717380 4772 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/community-operators-2pnvv" podUID="a4455aa9-5591-4f78-bcdd-21ae5f4078ac" containerName="registry-server" containerID="cri-o://e4aaaf2e3f28879af0b8c4d938221c741025251a1807568042e9eda3e4c9260a" gracePeriod=2 Nov 22 11:02:59 crc kubenswrapper[4772]: I1122 11:02:59.186113 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2pnvv" Nov 22 11:02:59 crc kubenswrapper[4772]: I1122 11:02:59.356591 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4455aa9-5591-4f78-bcdd-21ae5f4078ac-catalog-content\") pod \"a4455aa9-5591-4f78-bcdd-21ae5f4078ac\" (UID: \"a4455aa9-5591-4f78-bcdd-21ae5f4078ac\") " Nov 22 11:02:59 crc kubenswrapper[4772]: I1122 11:02:59.356674 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kk6l\" (UniqueName: \"kubernetes.io/projected/a4455aa9-5591-4f78-bcdd-21ae5f4078ac-kube-api-access-5kk6l\") pod \"a4455aa9-5591-4f78-bcdd-21ae5f4078ac\" (UID: \"a4455aa9-5591-4f78-bcdd-21ae5f4078ac\") " Nov 22 11:02:59 crc kubenswrapper[4772]: I1122 11:02:59.356924 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4455aa9-5591-4f78-bcdd-21ae5f4078ac-utilities\") pod \"a4455aa9-5591-4f78-bcdd-21ae5f4078ac\" (UID: \"a4455aa9-5591-4f78-bcdd-21ae5f4078ac\") " Nov 22 11:02:59 crc kubenswrapper[4772]: I1122 11:02:59.357877 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4455aa9-5591-4f78-bcdd-21ae5f4078ac-utilities" (OuterVolumeSpecName: "utilities") pod "a4455aa9-5591-4f78-bcdd-21ae5f4078ac" (UID: "a4455aa9-5591-4f78-bcdd-21ae5f4078ac"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:02:59 crc kubenswrapper[4772]: I1122 11:02:59.364420 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4455aa9-5591-4f78-bcdd-21ae5f4078ac-kube-api-access-5kk6l" (OuterVolumeSpecName: "kube-api-access-5kk6l") pod "a4455aa9-5591-4f78-bcdd-21ae5f4078ac" (UID: "a4455aa9-5591-4f78-bcdd-21ae5f4078ac"). InnerVolumeSpecName "kube-api-access-5kk6l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:02:59 crc kubenswrapper[4772]: I1122 11:02:59.422447 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4455aa9-5591-4f78-bcdd-21ae5f4078ac-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a4455aa9-5591-4f78-bcdd-21ae5f4078ac" (UID: "a4455aa9-5591-4f78-bcdd-21ae5f4078ac"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:02:59 crc kubenswrapper[4772]: I1122 11:02:59.459216 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4455aa9-5591-4f78-bcdd-21ae5f4078ac-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 11:02:59 crc kubenswrapper[4772]: I1122 11:02:59.459247 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4455aa9-5591-4f78-bcdd-21ae5f4078ac-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 11:02:59 crc kubenswrapper[4772]: I1122 11:02:59.459260 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5kk6l\" (UniqueName: \"kubernetes.io/projected/a4455aa9-5591-4f78-bcdd-21ae5f4078ac-kube-api-access-5kk6l\") on node \"crc\" DevicePath \"\"" Nov 22 11:02:59 crc kubenswrapper[4772]: I1122 11:02:59.730009 4772 generic.go:334] "Generic (PLEG): container finished" podID="a4455aa9-5591-4f78-bcdd-21ae5f4078ac" containerID="e4aaaf2e3f28879af0b8c4d938221c741025251a1807568042e9eda3e4c9260a" exitCode=0 Nov 22 11:02:59 crc kubenswrapper[4772]: I1122 11:02:59.730140 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2pnvv" Nov 22 11:02:59 crc kubenswrapper[4772]: I1122 11:02:59.730097 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2pnvv" event={"ID":"a4455aa9-5591-4f78-bcdd-21ae5f4078ac","Type":"ContainerDied","Data":"e4aaaf2e3f28879af0b8c4d938221c741025251a1807568042e9eda3e4c9260a"} Nov 22 11:02:59 crc kubenswrapper[4772]: I1122 11:02:59.730203 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2pnvv" event={"ID":"a4455aa9-5591-4f78-bcdd-21ae5f4078ac","Type":"ContainerDied","Data":"be1f62233adddfd24b5e57f709004647e848835d00d776483c21d1ca2e519b96"} Nov 22 11:02:59 crc kubenswrapper[4772]: I1122 11:02:59.730231 4772 scope.go:117] "RemoveContainer" containerID="e4aaaf2e3f28879af0b8c4d938221c741025251a1807568042e9eda3e4c9260a" Nov 22 11:02:59 crc kubenswrapper[4772]: I1122 11:02:59.754941 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2pnvv"] Nov 22 11:02:59 crc kubenswrapper[4772]: I1122 11:02:59.763164 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2pnvv"] Nov 22 11:02:59 crc kubenswrapper[4772]: I1122 11:02:59.771417 4772 scope.go:117] "RemoveContainer" containerID="25e38baee70855577366294828d6c5113afa3ab1673eb711834adf4f6af46e32" Nov 22 11:02:59 crc kubenswrapper[4772]: I1122 11:02:59.799445 4772 scope.go:117] "RemoveContainer" containerID="1118452e3039d27b08163bf8bdf34e34f7d96a4dd31c9ddae80f8623892df1a4" Nov 22 11:02:59 crc kubenswrapper[4772]: I1122 11:02:59.844366 4772 scope.go:117] "RemoveContainer" containerID="e4aaaf2e3f28879af0b8c4d938221c741025251a1807568042e9eda3e4c9260a" Nov 22 11:02:59 crc kubenswrapper[4772]: E1122 11:02:59.844884 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4aaaf2e3f28879af0b8c4d938221c741025251a1807568042e9eda3e4c9260a\": container with ID starting with e4aaaf2e3f28879af0b8c4d938221c741025251a1807568042e9eda3e4c9260a not found: ID does not exist" containerID="e4aaaf2e3f28879af0b8c4d938221c741025251a1807568042e9eda3e4c9260a" Nov 22 11:02:59 crc kubenswrapper[4772]: I1122 11:02:59.844921 
4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4aaaf2e3f28879af0b8c4d938221c741025251a1807568042e9eda3e4c9260a"} err="failed to get container status \"e4aaaf2e3f28879af0b8c4d938221c741025251a1807568042e9eda3e4c9260a\": rpc error: code = NotFound desc = could not find container \"e4aaaf2e3f28879af0b8c4d938221c741025251a1807568042e9eda3e4c9260a\": container with ID starting with e4aaaf2e3f28879af0b8c4d938221c741025251a1807568042e9eda3e4c9260a not found: ID does not exist" Nov 22 11:02:59 crc kubenswrapper[4772]: I1122 11:02:59.844950 4772 scope.go:117] "RemoveContainer" containerID="25e38baee70855577366294828d6c5113afa3ab1673eb711834adf4f6af46e32" Nov 22 11:02:59 crc kubenswrapper[4772]: E1122 11:02:59.845410 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25e38baee70855577366294828d6c5113afa3ab1673eb711834adf4f6af46e32\": container with ID starting with 25e38baee70855577366294828d6c5113afa3ab1673eb711834adf4f6af46e32 not found: ID does not exist" containerID="25e38baee70855577366294828d6c5113afa3ab1673eb711834adf4f6af46e32" Nov 22 11:02:59 crc kubenswrapper[4772]: I1122 11:02:59.845463 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25e38baee70855577366294828d6c5113afa3ab1673eb711834adf4f6af46e32"} err="failed to get container status \"25e38baee70855577366294828d6c5113afa3ab1673eb711834adf4f6af46e32\": rpc error: code = NotFound desc = could not find container \"25e38baee70855577366294828d6c5113afa3ab1673eb711834adf4f6af46e32\": container with ID starting with 25e38baee70855577366294828d6c5113afa3ab1673eb711834adf4f6af46e32 not found: ID does not exist" Nov 22 11:02:59 crc kubenswrapper[4772]: I1122 11:02:59.845498 4772 scope.go:117] "RemoveContainer" containerID="1118452e3039d27b08163bf8bdf34e34f7d96a4dd31c9ddae80f8623892df1a4" Nov 22 11:02:59 crc kubenswrapper[4772]: E1122 11:02:59.845886 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1118452e3039d27b08163bf8bdf34e34f7d96a4dd31c9ddae80f8623892df1a4\": container with ID starting with 1118452e3039d27b08163bf8bdf34e34f7d96a4dd31c9ddae80f8623892df1a4 not found: ID does not exist" containerID="1118452e3039d27b08163bf8bdf34e34f7d96a4dd31c9ddae80f8623892df1a4" Nov 22 11:02:59 crc kubenswrapper[4772]: I1122 11:02:59.845908 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1118452e3039d27b08163bf8bdf34e34f7d96a4dd31c9ddae80f8623892df1a4"} err="failed to get container status \"1118452e3039d27b08163bf8bdf34e34f7d96a4dd31c9ddae80f8623892df1a4\": rpc error: code = NotFound desc = could not find container \"1118452e3039d27b08163bf8bdf34e34f7d96a4dd31c9ddae80f8623892df1a4\": container with ID starting with 1118452e3039d27b08163bf8bdf34e34f7d96a4dd31c9ddae80f8623892df1a4 not found: ID does not exist" Nov 22 11:03:01 crc kubenswrapper[4772]: I1122 11:03:01.425071 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4455aa9-5591-4f78-bcdd-21ae5f4078ac" path="/var/lib/kubelet/pods/a4455aa9-5591-4f78-bcdd-21ae5f4078ac/volumes" Nov 22 11:03:14 crc kubenswrapper[4772]: I1122 11:03:14.943000 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Nov 22 11:03:14 crc kubenswrapper[4772]: I1122 11:03:14.943694 4772 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/openstackclient" podUID="a4e681ba-088a-41b1-9b89-8bac928038e5" containerName="openstackclient" containerID="cri-o://94c9532e47a3e8f2deba93d357f982767f3bc9fd612be2d3ed8cd1f182488992" gracePeriod=2 Nov 22 11:03:14 crc kubenswrapper[4772]: I1122 11:03:14.981299 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Nov 22 11:03:15 crc kubenswrapper[4772]: I1122 11:03:15.044764 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 22 11:03:15 crc kubenswrapper[4772]: I1122 11:03:15.144537 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glancee71c-account-delete-64r2z"] Nov 22 11:03:15 crc kubenswrapper[4772]: E1122 11:03:15.145197 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4e681ba-088a-41b1-9b89-8bac928038e5" containerName="openstackclient" Nov 22 11:03:15 crc kubenswrapper[4772]: I1122 11:03:15.145215 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4e681ba-088a-41b1-9b89-8bac928038e5" containerName="openstackclient" Nov 22 11:03:15 crc kubenswrapper[4772]: E1122 11:03:15.145229 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4455aa9-5591-4f78-bcdd-21ae5f4078ac" containerName="registry-server" Nov 22 11:03:15 crc kubenswrapper[4772]: I1122 11:03:15.145237 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4455aa9-5591-4f78-bcdd-21ae5f4078ac" containerName="registry-server" Nov 22 11:03:15 crc kubenswrapper[4772]: E1122 11:03:15.145265 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4455aa9-5591-4f78-bcdd-21ae5f4078ac" containerName="extract-utilities" Nov 22 11:03:15 crc kubenswrapper[4772]: I1122 11:03:15.145274 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4455aa9-5591-4f78-bcdd-21ae5f4078ac" containerName="extract-utilities" Nov 22 11:03:15 crc kubenswrapper[4772]: E1122 11:03:15.145327 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4455aa9-5591-4f78-bcdd-21ae5f4078ac" containerName="extract-content" Nov 22 11:03:15 crc kubenswrapper[4772]: I1122 11:03:15.145335 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4455aa9-5591-4f78-bcdd-21ae5f4078ac" containerName="extract-content" Nov 22 11:03:15 crc kubenswrapper[4772]: I1122 11:03:15.145685 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4455aa9-5591-4f78-bcdd-21ae5f4078ac" containerName="registry-server" Nov 22 11:03:15 crc kubenswrapper[4772]: I1122 11:03:15.145718 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4e681ba-088a-41b1-9b89-8bac928038e5" containerName="openstackclient" Nov 22 11:03:15 crc kubenswrapper[4772]: I1122 11:03:15.146633 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glancee71c-account-delete-64r2z" Nov 22 11:03:15 crc kubenswrapper[4772]: I1122 11:03:15.174154 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glancee71c-account-delete-64r2z"] Nov 22 11:03:15 crc kubenswrapper[4772]: I1122 11:03:15.189914 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-267ms"] Nov 22 11:03:15 crc kubenswrapper[4772]: I1122 11:03:15.227297 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement8972-account-delete-pqznd"] Nov 22 11:03:15 crc kubenswrapper[4772]: I1122 11:03:15.228575 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement8972-account-delete-pqznd" Nov 22 11:03:15 crc kubenswrapper[4772]: I1122 11:03:15.260383 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Nov 22 11:03:15 crc kubenswrapper[4772]: I1122 11:03:15.260637 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="4f74827a-8354-492b-b09d-350768ba912d" containerName="ovn-northd" containerID="cri-o://bf3d996a1f9a43837e5f7fe9fc44fb298b29ccf7aa7545bca094b47b3b199409" gracePeriod=30 Nov 22 11:03:15 crc kubenswrapper[4772]: I1122 11:03:15.260775 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="4f74827a-8354-492b-b09d-350768ba912d" containerName="openstack-network-exporter" containerID="cri-o://56a525a9356c41405cf6232508a4af9e3b589cb3a12221c13f572a5936890d76" gracePeriod=30 Nov 22 11:03:15 crc kubenswrapper[4772]: I1122 11:03:15.281801 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7kqn\" (UniqueName: \"kubernetes.io/projected/706b8e5a-87b8-429e-aea7-e7e5f161182f-kube-api-access-b7kqn\") pod \"glancee71c-account-delete-64r2z\" (UID: \"706b8e5a-87b8-429e-aea7-e7e5f161182f\") " pod="openstack/glancee71c-account-delete-64r2z" Nov 22 11:03:15 crc kubenswrapper[4772]: E1122 11:03:15.282127 4772 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Nov 22 11:03:15 crc kubenswrapper[4772]: E1122 11:03:15.282198 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5ce19f6b-73e1-48b9-810a-f9d97a14fe7b-config-data podName:5ce19f6b-73e1-48b9-810a-f9d97a14fe7b nodeName:}" failed. No retries permitted until 2025-11-22 11:03:15.782173212 +0000 UTC m=+1516.021617696 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/5ce19f6b-73e1-48b9-810a-f9d97a14fe7b-config-data") pod "rabbitmq-cell1-server-0" (UID: "5ce19f6b-73e1-48b9-810a-f9d97a14fe7b") : configmap "rabbitmq-cell1-config-data" not found Nov 22 11:03:15 crc kubenswrapper[4772]: I1122 11:03:15.321104 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement8972-account-delete-pqznd"] Nov 22 11:03:15 crc kubenswrapper[4772]: I1122 11:03:15.390485 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6plx\" (UniqueName: \"kubernetes.io/projected/e723031c-0772-49f7-ba16-f635ddd53dcc-kube-api-access-b6plx\") pod \"placement8972-account-delete-pqznd\" (UID: \"e723031c-0772-49f7-ba16-f635ddd53dcc\") " pod="openstack/placement8972-account-delete-pqznd" Nov 22 11:03:15 crc kubenswrapper[4772]: I1122 11:03:15.390881 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7kqn\" (UniqueName: \"kubernetes.io/projected/706b8e5a-87b8-429e-aea7-e7e5f161182f-kube-api-access-b7kqn\") pod \"glancee71c-account-delete-64r2z\" (UID: \"706b8e5a-87b8-429e-aea7-e7e5f161182f\") " pod="openstack/glancee71c-account-delete-64r2z" Nov 22 11:03:15 crc kubenswrapper[4772]: I1122 11:03:15.398184 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-qvtmm"] Nov 22 11:03:15 crc kubenswrapper[4772]: I1122 11:03:15.488370 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7kqn\" (UniqueName: \"kubernetes.io/projected/706b8e5a-87b8-429e-aea7-e7e5f161182f-kube-api-access-b7kqn\") pod \"glancee71c-account-delete-64r2z\" (UID: \"706b8e5a-87b8-429e-aea7-e7e5f161182f\") " pod="openstack/glancee71c-account-delete-64r2z" Nov 22 11:03:15 crc kubenswrapper[4772]: I1122 11:03:15.493131 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6plx\" (UniqueName: \"kubernetes.io/projected/e723031c-0772-49f7-ba16-f635ddd53dcc-kube-api-access-b6plx\") pod \"placement8972-account-delete-pqznd\" (UID: \"e723031c-0772-49f7-ba16-f635ddd53dcc\") " pod="openstack/placement8972-account-delete-pqznd" Nov 22 11:03:15 crc kubenswrapper[4772]: I1122 11:03:15.501792 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glancee71c-account-delete-64r2z" Nov 22 11:03:15 crc kubenswrapper[4772]: I1122 11:03:15.535777 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6plx\" (UniqueName: \"kubernetes.io/projected/e723031c-0772-49f7-ba16-f635ddd53dcc-kube-api-access-b6plx\") pod \"placement8972-account-delete-pqznd\" (UID: \"e723031c-0772-49f7-ba16-f635ddd53dcc\") " pod="openstack/placement8972-account-delete-pqznd" Nov 22 11:03:15 crc kubenswrapper[4772]: I1122 11:03:15.536213 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-rh9km"] Nov 22 11:03:15 crc kubenswrapper[4772]: I1122 11:03:15.536450 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-metrics-rh9km" podUID="6f8983d0-bcff-45de-b158-351e12a0b0f3" containerName="openstack-network-exporter" containerID="cri-o://909a2fa4b2deee761e0eb8564a7f465913c706c02a0b3226886d10405cca066b" gracePeriod=30 Nov 22 11:03:15 crc kubenswrapper[4772]: I1122 11:03:15.597971 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement8972-account-delete-pqznd" Nov 22 11:03:15 crc kubenswrapper[4772]: I1122 11:03:15.625094 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/novaapi0c20-account-delete-g5lnb"] Nov 22 11:03:15 crc kubenswrapper[4772]: I1122 11:03:15.626817 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/novaapi0c20-account-delete-g5lnb" Nov 22 11:03:15 crc kubenswrapper[4772]: I1122 11:03:15.641135 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novaapi0c20-account-delete-g5lnb"] Nov 22 11:03:15 crc kubenswrapper[4772]: I1122 11:03:15.724579 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-mr98f"] Nov 22 11:03:15 crc kubenswrapper[4772]: I1122 11:03:15.757325 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-mr98f"] Nov 22 11:03:15 crc kubenswrapper[4772]: I1122 11:03:15.786366 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/novacell05903-account-delete-c7857"] Nov 22 11:03:15 crc kubenswrapper[4772]: I1122 11:03:15.788494 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/novacell05903-account-delete-c7857" Nov 22 11:03:15 crc kubenswrapper[4772]: I1122 11:03:15.805935 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwcq6\" (UniqueName: \"kubernetes.io/projected/9aeb3608-353b-4b44-8797-46affdc587a7-kube-api-access-gwcq6\") pod \"novaapi0c20-account-delete-g5lnb\" (UID: \"9aeb3608-353b-4b44-8797-46affdc587a7\") " pod="openstack/novaapi0c20-account-delete-g5lnb" Nov 22 11:03:15 crc kubenswrapper[4772]: E1122 11:03:15.806004 4772 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err="command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: " execCommand=["/usr/share/ovn/scripts/ovn-ctl","stop_controller"] containerName="ovn-controller" pod="openstack/ovn-controller-267ms" message="Exiting ovn-controller (1) " Nov 22 11:03:15 crc kubenswrapper[4772]: E1122 11:03:15.806071 4772 kuberuntime_container.go:691] "PreStop hook failed" err="command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: " pod="openstack/ovn-controller-267ms" podUID="25144c09-6edb-4bd3-89b2-99db486e733b" containerName="ovn-controller" containerID="cri-o://35b6749f76efa9aad573074ef76caffc7e6b722245c95ae6b0c92a029ed2f9ca" Nov 22 11:03:15 crc kubenswrapper[4772]: E1122 11:03:15.806105 4772 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Nov 22 11:03:15 crc kubenswrapper[4772]: E1122 11:03:15.806164 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5ce19f6b-73e1-48b9-810a-f9d97a14fe7b-config-data podName:5ce19f6b-73e1-48b9-810a-f9d97a14fe7b nodeName:}" failed. No retries permitted until 2025-11-22 11:03:16.806146917 +0000 UTC m=+1517.045591411 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/5ce19f6b-73e1-48b9-810a-f9d97a14fe7b-config-data") pod "rabbitmq-cell1-server-0" (UID: "5ce19f6b-73e1-48b9-810a-f9d97a14fe7b") : configmap "rabbitmq-cell1-config-data" not found Nov 22 11:03:15 crc kubenswrapper[4772]: I1122 11:03:15.806108 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-267ms" podUID="25144c09-6edb-4bd3-89b2-99db486e733b" containerName="ovn-controller" containerID="cri-o://35b6749f76efa9aad573074ef76caffc7e6b722245c95ae6b0c92a029ed2f9ca" gracePeriod=30 Nov 22 11:03:15 crc kubenswrapper[4772]: I1122 11:03:15.823892 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novacell05903-account-delete-c7857"] Nov 22 11:03:15 crc kubenswrapper[4772]: I1122 11:03:15.873666 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-tzx4g"] Nov 22 11:03:15 crc kubenswrapper[4772]: I1122 11:03:15.899147 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-tzx4g"] Nov 22 11:03:15 crc kubenswrapper[4772]: I1122 11:03:15.923549 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smb8k\" (UniqueName: \"kubernetes.io/projected/7d122410-121a-47cd-9465-e5c6f85cf2b2-kube-api-access-smb8k\") pod \"novacell05903-account-delete-c7857\" (UID: \"7d122410-121a-47cd-9465-e5c6f85cf2b2\") " pod="openstack/novacell05903-account-delete-c7857" Nov 22 11:03:15 crc kubenswrapper[4772]: I1122 11:03:15.923960 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwcq6\" (UniqueName: \"kubernetes.io/projected/9aeb3608-353b-4b44-8797-46affdc587a7-kube-api-access-gwcq6\") pod \"novaapi0c20-account-delete-g5lnb\" (UID: \"9aeb3608-353b-4b44-8797-46affdc587a7\") " pod="openstack/novaapi0c20-account-delete-g5lnb" Nov 22 11:03:15 crc kubenswrapper[4772]: I1122 11:03:15.928984 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican3436-account-delete-4w4qv"] Nov 22 11:03:15 crc kubenswrapper[4772]: I1122 11:03:15.952491 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican3436-account-delete-4w4qv"] Nov 22 11:03:15 crc kubenswrapper[4772]: I1122 11:03:15.952581 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican3436-account-delete-4w4qv" Nov 22 11:03:16 crc kubenswrapper[4772]: I1122 11:03:16.019172 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder2d34-account-delete-7qhqb"] Nov 22 11:03:16 crc kubenswrapper[4772]: I1122 11:03:16.020635 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder2d34-account-delete-7qhqb" Nov 22 11:03:16 crc kubenswrapper[4772]: I1122 11:03:16.025270 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smb8k\" (UniqueName: \"kubernetes.io/projected/7d122410-121a-47cd-9465-e5c6f85cf2b2-kube-api-access-smb8k\") pod \"novacell05903-account-delete-c7857\" (UID: \"7d122410-121a-47cd-9465-e5c6f85cf2b2\") " pod="openstack/novacell05903-account-delete-c7857" Nov 22 11:03:16 crc kubenswrapper[4772]: I1122 11:03:16.035274 4772 generic.go:334] "Generic (PLEG): container finished" podID="4f74827a-8354-492b-b09d-350768ba912d" containerID="56a525a9356c41405cf6232508a4af9e3b589cb3a12221c13f572a5936890d76" exitCode=2 Nov 22 11:03:16 crc kubenswrapper[4772]: I1122 11:03:16.035311 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"4f74827a-8354-492b-b09d-350768ba912d","Type":"ContainerDied","Data":"56a525a9356c41405cf6232508a4af9e3b589cb3a12221c13f572a5936890d76"} Nov 22 11:03:16 crc kubenswrapper[4772]: I1122 11:03:16.038140 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder2d34-account-delete-7qhqb"] Nov 22 11:03:16 crc kubenswrapper[4772]: I1122 11:03:16.059278 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwcq6\" (UniqueName: \"kubernetes.io/projected/9aeb3608-353b-4b44-8797-46affdc587a7-kube-api-access-gwcq6\") pod \"novaapi0c20-account-delete-g5lnb\" (UID: \"9aeb3608-353b-4b44-8797-46affdc587a7\") " pod="openstack/novaapi0c20-account-delete-g5lnb" Nov 22 11:03:16 crc kubenswrapper[4772]: I1122 11:03:16.060454 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 22 11:03:16 crc kubenswrapper[4772]: I1122 11:03:16.072709 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smb8k\" (UniqueName: \"kubernetes.io/projected/7d122410-121a-47cd-9465-e5c6f85cf2b2-kube-api-access-smb8k\") pod \"novacell05903-account-delete-c7857\" (UID: \"7d122410-121a-47cd-9465-e5c6f85cf2b2\") " pod="openstack/novacell05903-account-delete-c7857" Nov 22 11:03:16 crc kubenswrapper[4772]: I1122 11:03:16.081512 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-x8m56"] Nov 22 11:03:16 crc kubenswrapper[4772]: I1122 11:03:16.095440 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/novaapi0c20-account-delete-g5lnb" Nov 22 11:03:16 crc kubenswrapper[4772]: I1122 11:03:16.097633 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-x8m56"] Nov 22 11:03:16 crc kubenswrapper[4772]: I1122 11:03:16.128885 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rdns\" (UniqueName: \"kubernetes.io/projected/cb612c10-4436-4c79-b990-cbc7b403eed5-kube-api-access-7rdns\") pod \"barbican3436-account-delete-4w4qv\" (UID: \"cb612c10-4436-4c79-b990-cbc7b403eed5\") " pod="openstack/barbican3436-account-delete-4w4qv" Nov 22 11:03:16 crc kubenswrapper[4772]: I1122 11:03:16.129451 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5mrl\" (UniqueName: \"kubernetes.io/projected/30007403-085b-4874-88b7-8b27426fd4f7-kube-api-access-g5mrl\") pod \"cinder2d34-account-delete-7qhqb\" (UID: \"30007403-085b-4874-88b7-8b27426fd4f7\") " pod="openstack/cinder2d34-account-delete-7qhqb" Nov 22 11:03:16 crc kubenswrapper[4772]: I1122 11:03:16.136577 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron45e9-account-delete-9pn28"] Nov 22 11:03:16 crc kubenswrapper[4772]: I1122 11:03:16.138633 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron45e9-account-delete-9pn28" Nov 22 11:03:16 crc kubenswrapper[4772]: I1122 11:03:16.155255 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-mktsv"] Nov 22 11:03:16 crc kubenswrapper[4772]: I1122 11:03:16.204264 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-mktsv"] Nov 22 11:03:16 crc kubenswrapper[4772]: I1122 11:03:16.225429 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/novacell05903-account-delete-c7857" Nov 22 11:03:16 crc kubenswrapper[4772]: I1122 11:03:16.232059 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rdns\" (UniqueName: \"kubernetes.io/projected/cb612c10-4436-4c79-b990-cbc7b403eed5-kube-api-access-7rdns\") pod \"barbican3436-account-delete-4w4qv\" (UID: \"cb612c10-4436-4c79-b990-cbc7b403eed5\") " pod="openstack/barbican3436-account-delete-4w4qv" Nov 22 11:03:16 crc kubenswrapper[4772]: I1122 11:03:16.232156 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5mrl\" (UniqueName: \"kubernetes.io/projected/30007403-085b-4874-88b7-8b27426fd4f7-kube-api-access-g5mrl\") pod \"cinder2d34-account-delete-7qhqb\" (UID: \"30007403-085b-4874-88b7-8b27426fd4f7\") " pod="openstack/cinder2d34-account-delete-7qhqb" Nov 22 11:03:16 crc kubenswrapper[4772]: E1122 11:03:16.234737 4772 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Nov 22 11:03:16 crc kubenswrapper[4772]: E1122 11:03:16.234790 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea-config-data podName:468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea nodeName:}" failed. No retries permitted until 2025-11-22 11:03:16.734774837 +0000 UTC m=+1516.974219331 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea-config-data") pod "rabbitmq-server-0" (UID: "468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea") : configmap "rabbitmq-config-data" not found Nov 22 11:03:16 crc kubenswrapper[4772]: I1122 11:03:16.241814 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron45e9-account-delete-9pn28"] Nov 22 11:03:16 crc kubenswrapper[4772]: I1122 11:03:16.256109 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-7f4sv"] Nov 22 11:03:16 crc kubenswrapper[4772]: I1122 11:03:16.256343 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-59cf4bdb65-7f4sv" podUID="464d950a-e1bb-4efb-afdf-37b97a62a42c" containerName="dnsmasq-dns" containerID="cri-o://c427c59ce5190c28d69f76cce2242b1dd0afaeda292d697cff4fa07ba14a6523" gracePeriod=10 Nov 22 11:03:16 crc kubenswrapper[4772]: I1122 11:03:16.273488 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rdns\" (UniqueName: \"kubernetes.io/projected/cb612c10-4436-4c79-b990-cbc7b403eed5-kube-api-access-7rdns\") pod \"barbican3436-account-delete-4w4qv\" (UID: \"cb612c10-4436-4c79-b990-cbc7b403eed5\") " pod="openstack/barbican3436-account-delete-4w4qv" Nov 22 11:03:16 crc kubenswrapper[4772]: I1122 11:03:16.289773 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-6tjbt"] Nov 22 11:03:16 crc kubenswrapper[4772]: I1122 11:03:16.296306 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5mrl\" (UniqueName: \"kubernetes.io/projected/30007403-085b-4874-88b7-8b27426fd4f7-kube-api-access-g5mrl\") pod \"cinder2d34-account-delete-7qhqb\" (UID: \"30007403-085b-4874-88b7-8b27426fd4f7\") " pod="openstack/cinder2d34-account-delete-7qhqb" Nov 22 11:03:16 crc kubenswrapper[4772]: I1122 11:03:16.303028 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder2d34-account-delete-7qhqb" Nov 22 11:03:16 crc kubenswrapper[4772]: I1122 11:03:16.309200 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-6tjbt"] Nov 22 11:03:16 crc kubenswrapper[4772]: I1122 11:03:16.318941 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-6mvdg"] Nov 22 11:03:16 crc kubenswrapper[4772]: I1122 11:03:16.336393 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsd46\" (UniqueName: \"kubernetes.io/projected/941a38a8-56e0-4061-8891-0cd3815477a4-kube-api-access-vsd46\") pod \"neutron45e9-account-delete-9pn28\" (UID: \"941a38a8-56e0-4061-8891-0cd3815477a4\") " pod="openstack/neutron45e9-account-delete-9pn28" Nov 22 11:03:16 crc kubenswrapper[4772]: I1122 11:03:16.343501 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-6mvdg"] Nov 22 11:03:16 crc kubenswrapper[4772]: I1122 11:03:16.430243 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 22 11:03:16 crc kubenswrapper[4772]: I1122 11:03:16.430938 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="62770fd6-1000-4477-ac95-7a4eaa489732" containerName="openstack-network-exporter" containerID="cri-o://ccf4d7895ba000a2440b35a71263cfe3dbaecea5399c4efbf9b39db555947784" gracePeriod=300 Nov 22 11:03:16 crc kubenswrapper[4772]: I1122 11:03:16.453434 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsd46\" (UniqueName: \"kubernetes.io/projected/941a38a8-56e0-4061-8891-0cd3815477a4-kube-api-access-vsd46\") pod \"neutron45e9-account-delete-9pn28\" (UID: \"941a38a8-56e0-4061-8891-0cd3815477a4\") " pod="openstack/neutron45e9-account-delete-9pn28" Nov 22 11:03:16 crc kubenswrapper[4772]: I1122 11:03:16.528174 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsd46\" (UniqueName: \"kubernetes.io/projected/941a38a8-56e0-4061-8891-0cd3815477a4-kube-api-access-vsd46\") pod \"neutron45e9-account-delete-9pn28\" (UID: \"941a38a8-56e0-4061-8891-0cd3815477a4\") " pod="openstack/neutron45e9-account-delete-9pn28" Nov 22 11:03:16 crc kubenswrapper[4772]: I1122 11:03:16.548097 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="62770fd6-1000-4477-ac95-7a4eaa489732" containerName="ovsdbserver-sb" containerID="cri-o://2d37779b5504f0db6d4ca7ce06d1ae18228fa452bc3e7a1ebaf2109ab95e3d37" gracePeriod=300 Nov 22 11:03:16 crc kubenswrapper[4772]: I1122 11:03:16.566565 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican3436-account-delete-4w4qv" Nov 22 11:03:16 crc kubenswrapper[4772]: I1122 11:03:16.622077 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-9mzxx"] Nov 22 11:03:16 crc kubenswrapper[4772]: I1122 11:03:16.664154 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-9mzxx"] Nov 22 11:03:16 crc kubenswrapper[4772]: I1122 11:03:16.705397 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron45e9-account-delete-9pn28" Nov 22 11:03:16 crc kubenswrapper[4772]: E1122 11:03:16.782142 4772 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Nov 22 11:03:16 crc kubenswrapper[4772]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Nov 22 11:03:16 crc kubenswrapper[4772]: + source /usr/local/bin/container-scripts/functions Nov 22 11:03:16 crc kubenswrapper[4772]: ++ OVNBridge=br-int Nov 22 11:03:16 crc kubenswrapper[4772]: ++ OVNRemote=tcp:localhost:6642 Nov 22 11:03:16 crc kubenswrapper[4772]: ++ OVNEncapType=geneve Nov 22 11:03:16 crc kubenswrapper[4772]: ++ OVNAvailabilityZones= Nov 22 11:03:16 crc kubenswrapper[4772]: ++ EnableChassisAsGateway=true Nov 22 11:03:16 crc kubenswrapper[4772]: ++ PhysicalNetworks= Nov 22 11:03:16 crc kubenswrapper[4772]: ++ OVNHostName= Nov 22 11:03:16 crc kubenswrapper[4772]: ++ DB_FILE=/etc/openvswitch/conf.db Nov 22 11:03:16 crc kubenswrapper[4772]: ++ ovs_dir=/var/lib/openvswitch Nov 22 11:03:16 crc kubenswrapper[4772]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Nov 22 11:03:16 crc kubenswrapper[4772]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Nov 22 11:03:16 crc kubenswrapper[4772]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Nov 22 11:03:16 crc kubenswrapper[4772]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Nov 22 11:03:16 crc kubenswrapper[4772]: + sleep 0.5 Nov 22 11:03:16 crc kubenswrapper[4772]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Nov 22 11:03:16 crc kubenswrapper[4772]: + sleep 0.5 Nov 22 11:03:16 crc kubenswrapper[4772]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Nov 22 11:03:16 crc kubenswrapper[4772]: + cleanup_ovsdb_server_semaphore Nov 22 11:03:16 crc kubenswrapper[4772]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Nov 22 11:03:16 crc kubenswrapper[4772]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Nov 22 11:03:16 crc kubenswrapper[4772]: > execCommand=["/usr/local/bin/container-scripts/stop-ovsdb-server.sh"] containerName="ovsdb-server" pod="openstack/ovn-controller-ovs-qvtmm" message=< Nov 22 11:03:16 crc kubenswrapper[4772]: Exiting ovsdb-server (5) [ OK ] Nov 22 11:03:16 crc kubenswrapper[4772]: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Nov 22 11:03:16 crc kubenswrapper[4772]: + source /usr/local/bin/container-scripts/functions Nov 22 11:03:16 crc kubenswrapper[4772]: ++ OVNBridge=br-int Nov 22 11:03:16 crc kubenswrapper[4772]: ++ OVNRemote=tcp:localhost:6642 Nov 22 11:03:16 crc kubenswrapper[4772]: ++ OVNEncapType=geneve Nov 22 11:03:16 crc kubenswrapper[4772]: ++ OVNAvailabilityZones= Nov 22 11:03:16 crc kubenswrapper[4772]: ++ EnableChassisAsGateway=true Nov 22 11:03:16 crc kubenswrapper[4772]: ++ PhysicalNetworks= Nov 22 11:03:16 crc kubenswrapper[4772]: ++ OVNHostName= Nov 22 11:03:16 crc kubenswrapper[4772]: ++ DB_FILE=/etc/openvswitch/conf.db Nov 22 11:03:16 crc kubenswrapper[4772]: ++ ovs_dir=/var/lib/openvswitch Nov 22 11:03:16 crc kubenswrapper[4772]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Nov 22 11:03:16 crc kubenswrapper[4772]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Nov 22 11:03:16 crc kubenswrapper[4772]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Nov 22 11:03:16 crc kubenswrapper[4772]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Nov 22 11:03:16 crc kubenswrapper[4772]: + sleep 0.5 Nov 22 11:03:16 crc kubenswrapper[4772]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Nov 22 11:03:16 crc kubenswrapper[4772]: + sleep 0.5 Nov 22 11:03:16 crc kubenswrapper[4772]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Nov 22 11:03:16 crc kubenswrapper[4772]: + cleanup_ovsdb_server_semaphore Nov 22 11:03:16 crc kubenswrapper[4772]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Nov 22 11:03:16 crc kubenswrapper[4772]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Nov 22 11:03:16 crc kubenswrapper[4772]: > Nov 22 11:03:16 crc kubenswrapper[4772]: E1122 11:03:16.782205 4772 kuberuntime_container.go:691] "PreStop hook failed" err=< Nov 22 11:03:16 crc kubenswrapper[4772]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Nov 22 11:03:16 crc kubenswrapper[4772]: + source /usr/local/bin/container-scripts/functions Nov 22 11:03:16 crc kubenswrapper[4772]: ++ OVNBridge=br-int Nov 22 11:03:16 crc kubenswrapper[4772]: ++ OVNRemote=tcp:localhost:6642 Nov 22 11:03:16 crc kubenswrapper[4772]: ++ OVNEncapType=geneve Nov 22 11:03:16 crc kubenswrapper[4772]: ++ OVNAvailabilityZones= Nov 22 11:03:16 crc kubenswrapper[4772]: ++ EnableChassisAsGateway=true Nov 22 11:03:16 crc kubenswrapper[4772]: ++ PhysicalNetworks= Nov 22 11:03:16 crc kubenswrapper[4772]: ++ OVNHostName= Nov 22 11:03:16 crc kubenswrapper[4772]: ++ DB_FILE=/etc/openvswitch/conf.db Nov 22 11:03:16 crc kubenswrapper[4772]: ++ ovs_dir=/var/lib/openvswitch Nov 22 11:03:16 crc kubenswrapper[4772]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Nov 22 11:03:16 crc kubenswrapper[4772]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Nov 22 11:03:16 crc kubenswrapper[4772]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Nov 22 11:03:16 crc kubenswrapper[4772]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Nov 22 11:03:16 crc kubenswrapper[4772]: + sleep 0.5 Nov 22 11:03:16 crc kubenswrapper[4772]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Nov 22 11:03:16 crc kubenswrapper[4772]: + sleep 0.5 Nov 22 11:03:16 crc kubenswrapper[4772]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Nov 22 11:03:16 crc kubenswrapper[4772]: + cleanup_ovsdb_server_semaphore Nov 22 11:03:16 crc kubenswrapper[4772]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Nov 22 11:03:16 crc kubenswrapper[4772]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Nov 22 11:03:16 crc kubenswrapper[4772]: > pod="openstack/ovn-controller-ovs-qvtmm" podUID="5eaf9da0-a00f-4251-ae11-31ccc3e237e1" containerName="ovsdb-server" containerID="cri-o://b35066231a06831ac6cbdd40d94a47f17dbd0bd89c978d8091d18097c6bdc840" Nov 22 11:03:16 crc kubenswrapper[4772]: I1122 11:03:16.782278 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-qvtmm" podUID="5eaf9da0-a00f-4251-ae11-31ccc3e237e1" containerName="ovsdb-server" containerID="cri-o://b35066231a06831ac6cbdd40d94a47f17dbd0bd89c978d8091d18097c6bdc840" gracePeriod=29 Nov 22 11:03:16 crc kubenswrapper[4772]: E1122 11:03:16.829210 4772 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Nov 22 11:03:16 crc kubenswrapper[4772]: E1122 11:03:16.829290 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5ce19f6b-73e1-48b9-810a-f9d97a14fe7b-config-data podName:5ce19f6b-73e1-48b9-810a-f9d97a14fe7b nodeName:}" failed. 
No retries permitted until 2025-11-22 11:03:18.829273766 +0000 UTC m=+1519.068718250 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/5ce19f6b-73e1-48b9-810a-f9d97a14fe7b-config-data") pod "rabbitmq-cell1-server-0" (UID: "5ce19f6b-73e1-48b9-810a-f9d97a14fe7b") : configmap "rabbitmq-cell1-config-data" not found Nov 22 11:03:16 crc kubenswrapper[4772]: E1122 11:03:16.829500 4772 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Nov 22 11:03:16 crc kubenswrapper[4772]: E1122 11:03:16.829529 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea-config-data podName:468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea nodeName:}" failed. No retries permitted until 2025-11-22 11:03:17.829520422 +0000 UTC m=+1518.068964916 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea-config-data") pod "rabbitmq-server-0" (UID: "468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea") : configmap "rabbitmq-config-data" not found Nov 22 11:03:16 crc kubenswrapper[4772]: I1122 11:03:16.845762 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 22 11:03:16 crc kubenswrapper[4772]: I1122 11:03:16.846417 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="4ce40448-07b1-492e-bb7c-48aaf2bb3ce9" containerName="openstack-network-exporter" containerID="cri-o://a439d4617a333c8788694a3ee6b6ab83b29b0e870445341c5c8b15c85d68a363" gracePeriod=300 Nov 22 11:03:16 crc kubenswrapper[4772]: I1122 11:03:16.875539 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5cd786c776-rmj8k"] Nov 22 11:03:16 crc kubenswrapper[4772]: I1122 11:03:16.875825 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-5cd786c776-rmj8k" podUID="ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7" containerName="placement-log" containerID="cri-o://b95964ad218161628ba9d6df7f28f6d82327565a7c20500e4082e1b8d7b0c9c3" gracePeriod=30 Nov 22 11:03:16 crc kubenswrapper[4772]: I1122 11:03:16.875968 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-5cd786c776-rmj8k" podUID="ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7" containerName="placement-api" containerID="cri-o://c3ff3bf5f8075eb82c99bb11cb292366102815d9d517ae626765713d727efaff" gracePeriod=30 Nov 22 11:03:16 crc kubenswrapper[4772]: I1122 11:03:16.897278 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 11:03:16 crc kubenswrapper[4772]: I1122 11:03:16.897629 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50" containerName="glance-log" containerID="cri-o://d2ff64d96dfba7abbcacbdddbc68b2ab55e205bcc9422fdf3d5388dd6cf5273f" gracePeriod=30 Nov 22 11:03:16 crc kubenswrapper[4772]: I1122 11:03:16.897899 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50" containerName="glance-httpd" containerID="cri-o://59510dd7b9579831eb08695d39ab17e9efd8ec1346988dbb2e6af8437f7ad097" gracePeriod=30 Nov 22 11:03:16 crc kubenswrapper[4772]: I1122 11:03:16.920776 4772 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-267ms" Nov 22 11:03:16 crc kubenswrapper[4772]: I1122 11:03:16.929864 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/25144c09-6edb-4bd3-89b2-99db486e733b-var-run\") pod \"25144c09-6edb-4bd3-89b2-99db486e733b\" (UID: \"25144c09-6edb-4bd3-89b2-99db486e733b\") " Nov 22 11:03:16 crc kubenswrapper[4772]: I1122 11:03:16.932730 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25144c09-6edb-4bd3-89b2-99db486e733b-var-run" (OuterVolumeSpecName: "var-run") pod "25144c09-6edb-4bd3-89b2-99db486e733b" (UID: "25144c09-6edb-4bd3-89b2-99db486e733b"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 11:03:16 crc kubenswrapper[4772]: I1122 11:03:16.935177 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25144c09-6edb-4bd3-89b2-99db486e733b-combined-ca-bundle\") pod \"25144c09-6edb-4bd3-89b2-99db486e733b\" (UID: \"25144c09-6edb-4bd3-89b2-99db486e733b\") " Nov 22 11:03:16 crc kubenswrapper[4772]: I1122 11:03:16.935343 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/25144c09-6edb-4bd3-89b2-99db486e733b-ovn-controller-tls-certs\") pod \"25144c09-6edb-4bd3-89b2-99db486e733b\" (UID: \"25144c09-6edb-4bd3-89b2-99db486e733b\") " Nov 22 11:03:16 crc kubenswrapper[4772]: I1122 11:03:16.935399 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/25144c09-6edb-4bd3-89b2-99db486e733b-var-log-ovn\") pod \"25144c09-6edb-4bd3-89b2-99db486e733b\" (UID: \"25144c09-6edb-4bd3-89b2-99db486e733b\") " Nov 22 11:03:16 crc kubenswrapper[4772]: I1122 11:03:16.935462 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4bfb\" (UniqueName: \"kubernetes.io/projected/25144c09-6edb-4bd3-89b2-99db486e733b-kube-api-access-c4bfb\") pod \"25144c09-6edb-4bd3-89b2-99db486e733b\" (UID: \"25144c09-6edb-4bd3-89b2-99db486e733b\") " Nov 22 11:03:16 crc kubenswrapper[4772]: I1122 11:03:16.935489 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/25144c09-6edb-4bd3-89b2-99db486e733b-scripts\") pod \"25144c09-6edb-4bd3-89b2-99db486e733b\" (UID: \"25144c09-6edb-4bd3-89b2-99db486e733b\") " Nov 22 11:03:16 crc kubenswrapper[4772]: I1122 11:03:16.935506 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/25144c09-6edb-4bd3-89b2-99db486e733b-var-run-ovn\") pod \"25144c09-6edb-4bd3-89b2-99db486e733b\" (UID: \"25144c09-6edb-4bd3-89b2-99db486e733b\") " Nov 22 11:03:16 crc kubenswrapper[4772]: I1122 11:03:16.936111 4772 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/25144c09-6edb-4bd3-89b2-99db486e733b-var-run\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:16 crc kubenswrapper[4772]: I1122 11:03:16.936172 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25144c09-6edb-4bd3-89b2-99db486e733b-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod 
"25144c09-6edb-4bd3-89b2-99db486e733b" (UID: "25144c09-6edb-4bd3-89b2-99db486e733b"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 11:03:16 crc kubenswrapper[4772]: I1122 11:03:16.937299 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25144c09-6edb-4bd3-89b2-99db486e733b-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "25144c09-6edb-4bd3-89b2-99db486e733b" (UID: "25144c09-6edb-4bd3-89b2-99db486e733b"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 11:03:16 crc kubenswrapper[4772]: I1122 11:03:16.943159 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25144c09-6edb-4bd3-89b2-99db486e733b-scripts" (OuterVolumeSpecName: "scripts") pod "25144c09-6edb-4bd3-89b2-99db486e733b" (UID: "25144c09-6edb-4bd3-89b2-99db486e733b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 11:03:16 crc kubenswrapper[4772]: I1122 11:03:16.958630 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-4gl45"] Nov 22 11:03:16 crc kubenswrapper[4772]: I1122 11:03:16.976891 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-4gl45"] Nov 22 11:03:16 crc kubenswrapper[4772]: I1122 11:03:16.981452 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25144c09-6edb-4bd3-89b2-99db486e733b-kube-api-access-c4bfb" (OuterVolumeSpecName: "kube-api-access-c4bfb") pod "25144c09-6edb-4bd3-89b2-99db486e733b" (UID: "25144c09-6edb-4bd3-89b2-99db486e733b"). InnerVolumeSpecName "kube-api-access-c4bfb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:03:16 crc kubenswrapper[4772]: I1122 11:03:16.983450 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-qvtmm" podUID="5eaf9da0-a00f-4251-ae11-31ccc3e237e1" containerName="ovs-vswitchd" containerID="cri-o://b453a8cf27cf323a7ca0a34df6781dcd755a821c7866c6dbdecdad4ea153f3ea" gracePeriod=29 Nov 22 11:03:16 crc kubenswrapper[4772]: I1122 11:03:16.993792 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 11:03:16 crc kubenswrapper[4772]: I1122 11:03:16.993999 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="14ed2945-ef18-49de-9c18-679e011d3df5" containerName="glance-log" containerID="cri-o://88ecd0459f0ac9488f0cd3eb8c402462803c773cf6ef7940b9aa2db2abf09dea" gracePeriod=30 Nov 22 11:03:16 crc kubenswrapper[4772]: I1122 11:03:16.994462 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="14ed2945-ef18-49de-9c18-679e011d3df5" containerName="glance-httpd" containerID="cri-o://dd542af28bce5c278e708a047b0757d9812c5e11e9dc0dff83889ad014c4b497" gracePeriod=30 Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.038152 4772 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/25144c09-6edb-4bd3-89b2-99db486e733b-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.038189 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4bfb\" (UniqueName: 
\"kubernetes.io/projected/25144c09-6edb-4bd3-89b2-99db486e733b-kube-api-access-c4bfb\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.038203 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/25144c09-6edb-4bd3-89b2-99db486e733b-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.038213 4772 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/25144c09-6edb-4bd3-89b2-99db486e733b-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.049744 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.057689 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.057913 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b8b92f55-36d8-4358-9b57-734762f225c4" containerName="nova-metadata-log" containerID="cri-o://214c2acb33ea5a782af6be55ddcf02954762b130146c3e714c36840852bfafb4" gracePeriod=30 Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.059959 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b8b92f55-36d8-4358-9b57-734762f225c4" containerName="nova-metadata-metadata" containerID="cri-o://241a8eaefd8f667894261b22a62dc20eac70a2ffa0b3309654a9b9bcc88514de" gracePeriod=30 Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.063576 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="354e52a7-830a-43a1-ad15-a13fe2a07222" containerName="account-server" containerID="cri-o://6c148c107a290e32467067529f8845e2cb396a10c79f269871d8b6dfe85c8538" gracePeriod=30 Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.063681 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="354e52a7-830a-43a1-ad15-a13fe2a07222" containerName="swift-recon-cron" containerID="cri-o://aa61d5f2be67c3162272b709160c16d2b8cb7b6652be46a7ec677336065aa1ac" gracePeriod=30 Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.063739 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="354e52a7-830a-43a1-ad15-a13fe2a07222" containerName="rsync" containerID="cri-o://d6a9ec495836f1834c42245ba492aa4e9ebb76dac50305dd8790c68f739b9277" gracePeriod=30 Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.063781 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="354e52a7-830a-43a1-ad15-a13fe2a07222" containerName="object-updater" containerID="cri-o://bbe601a3871553a3d7d70b6b470ceebc522ddd2ed4d9823f1a644a888ecc03ff" gracePeriod=30 Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.063820 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="354e52a7-830a-43a1-ad15-a13fe2a07222" containerName="object-auditor" containerID="cri-o://5b75dbfffe8e1c9d76456ff93f1c9c0f4bf16b63767e3e50ca83e8220c802021" gracePeriod=30 Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.063861 4772 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/swift-storage-0" podUID="354e52a7-830a-43a1-ad15-a13fe2a07222" containerName="object-replicator" containerID="cri-o://20bfb81f40cbc7f6e93279c153402c5ff3a9a24099eb68f5d5aa22251bdfd63e" gracePeriod=30 Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.063903 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="354e52a7-830a-43a1-ad15-a13fe2a07222" containerName="object-server" containerID="cri-o://2aa38c5be9a613db8f9237edd2803eb9c3a6730dcaecfe363e7fdf603819c43c" gracePeriod=30 Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.063942 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="354e52a7-830a-43a1-ad15-a13fe2a07222" containerName="object-expirer" containerID="cri-o://4bb4a8713445b470473fcae6ce67f357ca8b02ad81def05ee9cb94ebea5ebf50" gracePeriod=30 Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.063986 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="354e52a7-830a-43a1-ad15-a13fe2a07222" containerName="container-updater" containerID="cri-o://04dbfb695ea067220096d958b7c9f722332cd1273836354417eb1f3cad0efc67" gracePeriod=30 Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.064029 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="354e52a7-830a-43a1-ad15-a13fe2a07222" containerName="container-auditor" containerID="cri-o://dd0dd2fc38d88b49b28baf152ee2dced29cbe3336d9498d4ade9ee3c9adf12ee" gracePeriod=30 Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.064093 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="354e52a7-830a-43a1-ad15-a13fe2a07222" containerName="container-replicator" containerID="cri-o://71199f24f24db6b2a98a516ab206a62b13cd49e1da1c3a11e4c911c568e4f32b" gracePeriod=30 Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.064135 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="354e52a7-830a-43a1-ad15-a13fe2a07222" containerName="container-server" containerID="cri-o://d99d87346c3370f3f3fba6fb00e3db55bfc3eab865e86ac6c50b67b5247c7837" gracePeriod=30 Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.064195 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="354e52a7-830a-43a1-ad15-a13fe2a07222" containerName="account-reaper" containerID="cri-o://6aeb1397a245f1d928583c71247392a935b287c4678959e266ee42e4285547dd" gracePeriod=30 Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.064232 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="354e52a7-830a-43a1-ad15-a13fe2a07222" containerName="account-auditor" containerID="cri-o://e643a9463572f69ee79ae91f043796a6db50894ddf2c847ea4435b5d3f1f8d4b" gracePeriod=30 Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.064273 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="354e52a7-830a-43a1-ad15-a13fe2a07222" containerName="account-replicator" containerID="cri-o://05273857f1de6f10d05b451d131edb562b0d717aa0fb09e91569b386fad68432" gracePeriod=30 Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.106842 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" 
podUID="4ce40448-07b1-492e-bb7c-48aaf2bb3ce9" containerName="ovsdbserver-nb" containerID="cri-o://c85d9df2828594224901e788e87c8476ae3cdb0ddfb531bc6c92887e8f178a94" gracePeriod=300 Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.130621 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement8972-account-delete-pqznd"] Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.192152 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.193351 4772 generic.go:334] "Generic (PLEG): container finished" podID="464d950a-e1bb-4efb-afdf-37b97a62a42c" containerID="c427c59ce5190c28d69f76cce2242b1dd0afaeda292d697cff4fa07ba14a6523" exitCode=0 Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.193409 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-7f4sv" event={"ID":"464d950a-e1bb-4efb-afdf-37b97a62a42c","Type":"ContainerDied","Data":"c427c59ce5190c28d69f76cce2242b1dd0afaeda292d697cff4fa07ba14a6523"} Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.212038 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-rh9km_6f8983d0-bcff-45de-b158-351e12a0b0f3/openstack-network-exporter/0.log" Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.212354 4772 generic.go:334] "Generic (PLEG): container finished" podID="6f8983d0-bcff-45de-b158-351e12a0b0f3" containerID="909a2fa4b2deee761e0eb8564a7f465913c706c02a0b3226886d10405cca066b" exitCode=2 Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.212419 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-rh9km" event={"ID":"6f8983d0-bcff-45de-b158-351e12a0b0f3","Type":"ContainerDied","Data":"909a2fa4b2deee761e0eb8564a7f465913c706c02a0b3226886d10405cca066b"} Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.221338 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25144c09-6edb-4bd3-89b2-99db486e733b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "25144c09-6edb-4bd3-89b2-99db486e733b" (UID: "25144c09-6edb-4bd3-89b2-99db486e733b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.221436 4772 generic.go:334] "Generic (PLEG): container finished" podID="5eaf9da0-a00f-4251-ae11-31ccc3e237e1" containerID="b35066231a06831ac6cbdd40d94a47f17dbd0bd89c978d8091d18097c6bdc840" exitCode=0 Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.221500 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-qvtmm" event={"ID":"5eaf9da0-a00f-4251-ae11-31ccc3e237e1","Type":"ContainerDied","Data":"b35066231a06831ac6cbdd40d94a47f17dbd0bd89c978d8091d18097c6bdc840"} Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.231998 4772 generic.go:334] "Generic (PLEG): container finished" podID="4ce40448-07b1-492e-bb7c-48aaf2bb3ce9" containerID="a439d4617a333c8788694a3ee6b6ab83b29b0e870445341c5c8b15c85d68a363" exitCode=2 Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.232082 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"4ce40448-07b1-492e-bb7c-48aaf2bb3ce9","Type":"ContainerDied","Data":"a439d4617a333c8788694a3ee6b6ab83b29b0e870445341c5c8b15c85d68a363"} Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.244958 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25144c09-6edb-4bd3-89b2-99db486e733b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.250868 4772 generic.go:334] "Generic (PLEG): container finished" podID="25144c09-6edb-4bd3-89b2-99db486e733b" containerID="35b6749f76efa9aad573074ef76caffc7e6b722245c95ae6b0c92a029ed2f9ca" exitCode=0 Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.251227 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-267ms" Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.251409 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-267ms" event={"ID":"25144c09-6edb-4bd3-89b2-99db486e733b","Type":"ContainerDied","Data":"35b6749f76efa9aad573074ef76caffc7e6b722245c95ae6b0c92a029ed2f9ca"} Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.251470 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-267ms" event={"ID":"25144c09-6edb-4bd3-89b2-99db486e733b","Type":"ContainerDied","Data":"8d98079661eb31b43e8762d65ae663b2a8e031a99edf213a3ada95232b4f871b"} Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.251494 4772 scope.go:117] "RemoveContainer" containerID="35b6749f76efa9aad573074ef76caffc7e6b722245c95ae6b0c92a029ed2f9ca" Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.254611 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.254893 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="56fb678a-814f-4328-8b49-9226512bf10e" containerName="nova-api-log" containerID="cri-o://ab42698c086434daa53f114239b03f9f09d42a90f526e9da455ca5fa44319783" gracePeriod=30 Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.255003 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="56fb678a-814f-4328-8b49-9226512bf10e" containerName="nova-api-api" containerID="cri-o://87ddd492904d37b209c29a1ac6b15eb1b0ea478fe5009f7ccf2abbf8a98a52e8" gracePeriod=30 Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.258873 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25144c09-6edb-4bd3-89b2-99db486e733b-ovn-controller-tls-certs" (OuterVolumeSpecName: "ovn-controller-tls-certs") pod "25144c09-6edb-4bd3-89b2-99db486e733b" (UID: "25144c09-6edb-4bd3-89b2-99db486e733b"). InnerVolumeSpecName "ovn-controller-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.268539 4772 generic.go:334] "Generic (PLEG): container finished" podID="ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7" containerID="b95964ad218161628ba9d6df7f28f6d82327565a7c20500e4082e1b8d7b0c9c3" exitCode=143 Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.268648 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5cd786c776-rmj8k" event={"ID":"ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7","Type":"ContainerDied","Data":"b95964ad218161628ba9d6df7f28f6d82327565a7c20500e4082e1b8d7b0c9c3"} Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.275231 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_62770fd6-1000-4477-ac95-7a4eaa489732/ovsdbserver-sb/0.log" Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.275275 4772 generic.go:334] "Generic (PLEG): container finished" podID="62770fd6-1000-4477-ac95-7a4eaa489732" containerID="ccf4d7895ba000a2440b35a71263cfe3dbaecea5399c4efbf9b39db555947784" exitCode=2 Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.275292 4772 generic.go:334] "Generic (PLEG): container finished" podID="62770fd6-1000-4477-ac95-7a4eaa489732" containerID="2d37779b5504f0db6d4ca7ce06d1ae18228fa452bc3e7a1ebaf2109ab95e3d37" exitCode=143 Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.275334 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"62770fd6-1000-4477-ac95-7a4eaa489732","Type":"ContainerDied","Data":"ccf4d7895ba000a2440b35a71263cfe3dbaecea5399c4efbf9b39db555947784"} Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.275359 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"62770fd6-1000-4477-ac95-7a4eaa489732","Type":"ContainerDied","Data":"2d37779b5504f0db6d4ca7ce06d1ae18228fa452bc3e7a1ebaf2109ab95e3d37"} Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.277151 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-01a0-account-create-4cpjc"] Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.305128 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-01a0-account-create-4cpjc"] Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.307179 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement8972-account-delete-pqznd" event={"ID":"e723031c-0772-49f7-ba16-f635ddd53dcc","Type":"ContainerStarted","Data":"5c113279f0d6744fe689f5ad8c86ea24f34df31e0c63ed1f4d60ee8223914dfd"} Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.319285 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.319631 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="17f0d5ca-99e5-47c6-9fdf-1932956cff3e" containerName="cinder-api-log" containerID="cri-o://41caed95f9f668a055e34288ae91bcce5a6f3ea58f05250f45efb84f1f1c0fbf" gracePeriod=30 Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.319682 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="17f0d5ca-99e5-47c6-9fdf-1932956cff3e" containerName="cinder-api" containerID="cri-o://a687188010e7d1b6b6e71ce02eb4abc2bad75aaad585c817273d9a77d8fbf014" gracePeriod=30 Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.337148 4772 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/nova-cell1-db-create-qhm8v"] Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.349181 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-qhm8v"] Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.355433 4772 reconciler_common.go:293] "Volume detached for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/25144c09-6edb-4bd3-89b2-99db486e733b-ovn-controller-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.362106 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-8c79f8b65-qn7q9"] Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.362404 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-8c79f8b65-qn7q9" podUID="865ca651-4e53-4ac9-946d-31c1e485d91d" containerName="neutron-api" containerID="cri-o://6bdbd4c4929eabf6a133a2e818bd65ac8febe68d8843b6b4e67d0a024f4e743f" gracePeriod=30 Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.362640 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-8c79f8b65-qn7q9" podUID="865ca651-4e53-4ac9-946d-31c1e485d91d" containerName="neutron-httpd" containerID="cri-o://89b92e0a1e681be8f4f78a508d0ebcba29af7864b3c2db95e3d23d573dc85c86" gracePeriod=30 Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.402547 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.402851 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="45d574ce-36bc-461c-a85a-738b71392ed6" containerName="cinder-scheduler" containerID="cri-o://89f3af719d3b34aa755006c4c157b86e9e231adc44922aa49a262366c3fbab3e" gracePeriod=30 Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.403037 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="45d574ce-36bc-461c-a85a-738b71392ed6" containerName="probe" containerID="cri-o://508aa44be1af6ce7429f5cfe8151bfa33738cc58c627ec663c79b706c344ddb4" gracePeriod=30 Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.714485 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="146226df-eb2f-4fd3-a175-bccca5de564e" path="/var/lib/kubelet/pods/146226df-eb2f-4fd3-a175-bccca5de564e/volumes" Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.722925 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c352191-5a61-4dc6-ba16-6c82cb0fdedf" path="/var/lib/kubelet/pods/1c352191-5a61-4dc6-ba16-6c82cb0fdedf/volumes" Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.728923 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8aae7bbd-5de9-46d8-83d6-80f97bed0bf4" path="/var/lib/kubelet/pods/8aae7bbd-5de9-46d8-83d6-80f97bed0bf4/volumes" Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.769285 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aaa05e69-3162-4e89-925c-dd99d6a35bba" path="/var/lib/kubelet/pods/aaa05e69-3162-4e89-925c-dd99d6a35bba/volumes" Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.770433 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3" path="/var/lib/kubelet/pods/ac5f43d3-c6db-43c9-bb90-1d22bbe94aa3/volumes" Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.771201 4772 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc50fc65-ff5f-43d6-945b-d52d8535ccde" path="/var/lib/kubelet/pods/bc50fc65-ff5f-43d6-945b-d52d8535ccde/volumes" Nov 22 11:03:17 crc kubenswrapper[4772]: E1122 11:03:17.834913 4772 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Nov 22 11:03:17 crc kubenswrapper[4772]: E1122 11:03:17.835001 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea-config-data podName:468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea nodeName:}" failed. No retries permitted until 2025-11-22 11:03:19.83498144 +0000 UTC m=+1520.074425934 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea-config-data") pod "rabbitmq-server-0" (UID: "468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea") : configmap "rabbitmq-config-data" not found Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.798542 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c276d580-d011-43e7-b79a-f584ec487dd0" path="/var/lib/kubelet/pods/c276d580-d011-43e7-b79a-f584ec487dd0/volumes" Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.852395 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dadab934-440c-4b33-8f4a-1790b0040061" path="/var/lib/kubelet/pods/dadab934-440c-4b33-8f4a-1790b0040061/volumes" Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.853173 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfb3b51a-da06-4a18-bc47-225aa06fff04" path="/var/lib/kubelet/pods/dfb3b51a-da06-4a18-bc47-225aa06fff04/volumes" Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.853907 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb489d0e-dc04-4a25-8e89-ec9ede81a3cb" path="/var/lib/kubelet/pods/fb489d0e-dc04-4a25-8e89-ec9ede81a3cb/volumes" Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.856559 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-9846d"] Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.856587 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-8972-account-create-7lpfn"] Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.856599 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-9846d"] Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.856698 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-8972-account-create-7lpfn"] Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.856723 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement8972-account-delete-pqznd"] Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.856740 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.856755 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-57b6cb6667-w95sj"] Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.856776 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-c2l2l"] Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.856787 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-5903-account-create-hhqx8"] Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 
11:03:17.856800 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-c2l2l"] Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.856815 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-5903-account-create-hhqx8"] Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.856830 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novacell05903-account-delete-c7857"] Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.856846 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-jqkv7"] Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.856856 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-jqkv7"] Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.856866 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.856886 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican3436-account-delete-4w4qv"] Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.856904 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-79459755b6-xsvzh"] Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.856919 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-3436-account-create-n227m"] Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.857214 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-79459755b6-xsvzh" podUID="1c994b4f-e182-481a-a3ba-17dc9656c70c" containerName="barbican-keystone-listener-log" containerID="cri-o://791a1dda016da276f7e60912d835e500d93685bd5cc31d54d2b396d39fcc8af1" gracePeriod=30 Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.857778 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="5fbd4e9d-9635-462d-abba-763daf0da369" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://38b682e02083c87881ca0840c5c8512286c619c93d70f45cc0f97311ece2d602" gracePeriod=30 Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.858477 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-57b6cb6667-w95sj" podUID="13c1f859-42ed-484f-88cb-5349a7b64dda" containerName="proxy-httpd" containerID="cri-o://f2df47654803d93eba038dcb4866e8ad0d2e7d308fb39560cb0091e112aadb72" gracePeriod=30 Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.858559 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-79459755b6-xsvzh" podUID="1c994b4f-e182-481a-a3ba-17dc9656c70c" containerName="barbican-keystone-listener" containerID="cri-o://844e023e7c3fccc854525c2a694623fa1a3482bbdd36a977a83ba8eb6cf3ab4b" gracePeriod=30 Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.858600 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-57b6cb6667-w95sj" podUID="13c1f859-42ed-484f-88cb-5349a7b64dda" containerName="proxy-server" containerID="cri-o://dfd79733bb340ac1878b2e319236d06e3fb8878f7900376f4a7fb9aa84b8711a" gracePeriod=30 Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 11:03:17.915147 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-3436-account-create-n227m"] Nov 22 11:03:17 crc kubenswrapper[4772]: I1122 
11:03:17.989408 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="5ce19f6b-73e1-48b9-810a-f9d97a14fe7b" containerName="rabbitmq" containerID="cri-o://6d2c4827d4cb49d5883df31ba20e437493bf839f2d63f4c0a3fdbedc0e23ec2c" gracePeriod=604800 Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.003317 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-6b5c6"] Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.026222 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder2d34-account-delete-7qhqb"] Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.033332 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="c4828519-a6ad-4851-b9c2-134a12f373ac" containerName="galera" containerID="cri-o://98cde3628a695695947f910d4b7bcf78b5a6744c0635767c30d7327e627bd98b" gracePeriod=30 Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.037394 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-2d34-account-create-mmrc4"] Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.047403 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-6b5c6"] Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.062199 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-2d34-account-create-mmrc4"] Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.062259 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-5f797948bc-dk5pr"] Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.062529 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-5f797948bc-dk5pr" podUID="027dc32b-06dd-45bf-9aad-8e0c92b44a2b" containerName="barbican-worker-log" containerID="cri-o://deb156a613d4b361e96cb60957d663ef36ea4eb59d8168309e1f3c8cbbf8914f" gracePeriod=30 Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.063012 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-5f797948bc-dk5pr" podUID="027dc32b-06dd-45bf-9aad-8e0c92b44a2b" containerName="barbican-worker" containerID="cri-o://e40f637f7b43ff915a3b153426def590c2d29d02ddeac886a688e0d9bf7a29a8" gracePeriod=30 Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.072472 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-45e9-account-create-ttrmb"] Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.090631 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-45e9-account-create-ttrmb"] Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.113386 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-46ff9"] Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.139888 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-46ff9"] Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.160322 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron45e9-account-delete-9pn28"] Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.176408 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6876658948-bzr5z"] Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.176670 4772 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/barbican-api-6876658948-bzr5z" podUID="86139aa9-cd30-4d97-833e-a26562aebf92" containerName="barbican-api-log" containerID="cri-o://b5adee60fbbe04c2b0f6e677dd915852ef48874363c5ddda10829f388b38decb" gracePeriod=30 Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.176811 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6876658948-bzr5z" podUID="86139aa9-cd30-4d97-833e-a26562aebf92" containerName="barbican-api" containerID="cri-o://a06360ee3022a654f156c3386f22cd5fd488251afc8543f8c37cbc65fc693984" gracePeriod=30 Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.194458 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.194675 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="51f59313-1e0d-4877-9141-c32a7f72f84f" containerName="nova-scheduler-scheduler" containerID="cri-o://e5c751528fea2ac722ee321494f6ac8ae1afd4e1ad69103eb66eda03840cc558" gracePeriod=30 Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.218118 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.238413 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-7q2nv"] Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.267308 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.267547 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="33b26633-94ac-4439-b1ab-ab225d2e562b" containerName="nova-cell0-conductor-conductor" containerID="cri-o://08c0c6e64c972cfd07e310e4abfe3d9a7361c0e9c7848ec91a7d29025e8bfaf9" gracePeriod=30 Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.276347 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-7q2nv"] Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.284833 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.285037 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="a0a147b4-4445-4f7b-b22f-97db02340306" containerName="nova-cell1-conductor-conductor" containerID="cri-o://75123019921bf278dc20f04bfe0c16e7ca301815e19432fc55e95e02d9391c0e" gracePeriod=30 Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.306431 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-669bd"] Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.338485 4772 generic.go:334] "Generic (PLEG): container finished" podID="13c1f859-42ed-484f-88cb-5349a7b64dda" containerID="f2df47654803d93eba038dcb4866e8ad0d2e7d308fb39560cb0091e112aadb72" exitCode=0 Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.339198 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-57b6cb6667-w95sj" event={"ID":"13c1f859-42ed-484f-88cb-5349a7b64dda","Type":"ContainerDied","Data":"f2df47654803d93eba038dcb4866e8ad0d2e7d308fb39560cb0091e112aadb72"} Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.340846 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-cell1-conductor-db-sync-669bd"] Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.354235 4772 generic.go:334] "Generic (PLEG): container finished" podID="56fb678a-814f-4328-8b49-9226512bf10e" containerID="ab42698c086434daa53f114239b03f9f09d42a90f526e9da455ca5fa44319783" exitCode=143 Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.354328 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"56fb678a-814f-4328-8b49-9226512bf10e","Type":"ContainerDied","Data":"ab42698c086434daa53f114239b03f9f09d42a90f526e9da455ca5fa44319783"} Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.374565 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"17f0d5ca-99e5-47c6-9fdf-1932956cff3e","Type":"ContainerDied","Data":"41caed95f9f668a055e34288ae91bcce5a6f3ea58f05250f45efb84f1f1c0fbf"} Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.374624 4772 generic.go:334] "Generic (PLEG): container finished" podID="17f0d5ca-99e5-47c6-9fdf-1932956cff3e" containerID="41caed95f9f668a055e34288ae91bcce5a6f3ea58f05250f45efb84f1f1c0fbf" exitCode=143 Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.392380 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-7f4sv" event={"ID":"464d950a-e1bb-4efb-afdf-37b97a62a42c","Type":"ContainerDied","Data":"690442cf60d33fc446c07120637e389eebffb80dc60308153c0aba4261d3f584"} Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.392419 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="690442cf60d33fc446c07120637e389eebffb80dc60308153c0aba4261d3f584" Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.410787 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-rh9km_6f8983d0-bcff-45de-b158-351e12a0b0f3/openstack-network-exporter/0.log" Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.410927 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-rh9km" event={"ID":"6f8983d0-bcff-45de-b158-351e12a0b0f3","Type":"ContainerDied","Data":"a28ec3d3a2caa07f24ab430c4d4ad0be9e899c0e714b45a649f92e7bbae39317"} Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.410954 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a28ec3d3a2caa07f24ab430c4d4ad0be9e899c0e714b45a649f92e7bbae39317" Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.417335 4772 generic.go:334] "Generic (PLEG): container finished" podID="027dc32b-06dd-45bf-9aad-8e0c92b44a2b" containerID="deb156a613d4b361e96cb60957d663ef36ea4eb59d8168309e1f3c8cbbf8914f" exitCode=143 Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.417432 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5f797948bc-dk5pr" event={"ID":"027dc32b-06dd-45bf-9aad-8e0c92b44a2b","Type":"ContainerDied","Data":"deb156a613d4b361e96cb60957d663ef36ea4eb59d8168309e1f3c8cbbf8914f"} Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.442793 4772 generic.go:334] "Generic (PLEG): container finished" podID="1c994b4f-e182-481a-a3ba-17dc9656c70c" containerID="791a1dda016da276f7e60912d835e500d93685bd5cc31d54d2b396d39fcc8af1" exitCode=143 Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.442847 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-79459755b6-xsvzh" 
event={"ID":"1c994b4f-e182-481a-a3ba-17dc9656c70c","Type":"ContainerDied","Data":"791a1dda016da276f7e60912d835e500d93685bd5cc31d54d2b396d39fcc8af1"} Nov 22 11:03:18 crc kubenswrapper[4772]: E1122 11:03:18.494221 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2d37779b5504f0db6d4ca7ce06d1ae18228fa452bc3e7a1ebaf2109ab95e3d37 is running failed: container process not found" containerID="2d37779b5504f0db6d4ca7ce06d1ae18228fa452bc3e7a1ebaf2109ab95e3d37" cmd=["/usr/bin/pidof","ovsdb-server"] Nov 22 11:03:18 crc kubenswrapper[4772]: E1122 11:03:18.497552 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2d37779b5504f0db6d4ca7ce06d1ae18228fa452bc3e7a1ebaf2109ab95e3d37 is running failed: container process not found" containerID="2d37779b5504f0db6d4ca7ce06d1ae18228fa452bc3e7a1ebaf2109ab95e3d37" cmd=["/usr/bin/pidof","ovsdb-server"] Nov 22 11:03:18 crc kubenswrapper[4772]: E1122 11:03:18.498743 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2d37779b5504f0db6d4ca7ce06d1ae18228fa452bc3e7a1ebaf2109ab95e3d37 is running failed: container process not found" containerID="2d37779b5504f0db6d4ca7ce06d1ae18228fa452bc3e7a1ebaf2109ab95e3d37" cmd=["/usr/bin/pidof","ovsdb-server"] Nov 22 11:03:18 crc kubenswrapper[4772]: E1122 11:03:18.498784 4772 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2d37779b5504f0db6d4ca7ce06d1ae18228fa452bc3e7a1ebaf2109ab95e3d37 is running failed: container process not found" probeType="Readiness" pod="openstack/ovsdbserver-sb-0" podUID="62770fd6-1000-4477-ac95-7a4eaa489732" containerName="ovsdbserver-sb" Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.499870 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"354e52a7-830a-43a1-ad15-a13fe2a07222","Type":"ContainerDied","Data":"d6a9ec495836f1834c42245ba492aa4e9ebb76dac50305dd8790c68f739b9277"} Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.502104 4772 generic.go:334] "Generic (PLEG): container finished" podID="354e52a7-830a-43a1-ad15-a13fe2a07222" containerID="d6a9ec495836f1834c42245ba492aa4e9ebb76dac50305dd8790c68f739b9277" exitCode=0 Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.502302 4772 generic.go:334] "Generic (PLEG): container finished" podID="354e52a7-830a-43a1-ad15-a13fe2a07222" containerID="bbe601a3871553a3d7d70b6b470ceebc522ddd2ed4d9823f1a644a888ecc03ff" exitCode=0 Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.502311 4772 generic.go:334] "Generic (PLEG): container finished" podID="354e52a7-830a-43a1-ad15-a13fe2a07222" containerID="5b75dbfffe8e1c9d76456ff93f1c9c0f4bf16b63767e3e50ca83e8220c802021" exitCode=0 Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.502318 4772 generic.go:334] "Generic (PLEG): container finished" podID="354e52a7-830a-43a1-ad15-a13fe2a07222" containerID="20bfb81f40cbc7f6e93279c153402c5ff3a9a24099eb68f5d5aa22251bdfd63e" exitCode=0 Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.502331 4772 generic.go:334] "Generic (PLEG): container finished" podID="354e52a7-830a-43a1-ad15-a13fe2a07222" containerID="2aa38c5be9a613db8f9237edd2803eb9c3a6730dcaecfe363e7fdf603819c43c" exitCode=0 Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 
11:03:18.502338 4772 generic.go:334] "Generic (PLEG): container finished" podID="354e52a7-830a-43a1-ad15-a13fe2a07222" containerID="4bb4a8713445b470473fcae6ce67f357ca8b02ad81def05ee9cb94ebea5ebf50" exitCode=0 Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.502345 4772 generic.go:334] "Generic (PLEG): container finished" podID="354e52a7-830a-43a1-ad15-a13fe2a07222" containerID="04dbfb695ea067220096d958b7c9f722332cd1273836354417eb1f3cad0efc67" exitCode=0 Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.502353 4772 generic.go:334] "Generic (PLEG): container finished" podID="354e52a7-830a-43a1-ad15-a13fe2a07222" containerID="dd0dd2fc38d88b49b28baf152ee2dced29cbe3336d9498d4ade9ee3c9adf12ee" exitCode=0 Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.502359 4772 generic.go:334] "Generic (PLEG): container finished" podID="354e52a7-830a-43a1-ad15-a13fe2a07222" containerID="71199f24f24db6b2a98a516ab206a62b13cd49e1da1c3a11e4c911c568e4f32b" exitCode=0 Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.502366 4772 generic.go:334] "Generic (PLEG): container finished" podID="354e52a7-830a-43a1-ad15-a13fe2a07222" containerID="d99d87346c3370f3f3fba6fb00e3db55bfc3eab865e86ac6c50b67b5247c7837" exitCode=0 Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.502373 4772 generic.go:334] "Generic (PLEG): container finished" podID="354e52a7-830a-43a1-ad15-a13fe2a07222" containerID="6aeb1397a245f1d928583c71247392a935b287c4678959e266ee42e4285547dd" exitCode=0 Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.502383 4772 generic.go:334] "Generic (PLEG): container finished" podID="354e52a7-830a-43a1-ad15-a13fe2a07222" containerID="e643a9463572f69ee79ae91f043796a6db50894ddf2c847ea4435b5d3f1f8d4b" exitCode=0 Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.502389 4772 generic.go:334] "Generic (PLEG): container finished" podID="354e52a7-830a-43a1-ad15-a13fe2a07222" containerID="05273857f1de6f10d05b451d131edb562b0d717aa0fb09e91569b386fad68432" exitCode=0 Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.502397 4772 generic.go:334] "Generic (PLEG): container finished" podID="354e52a7-830a-43a1-ad15-a13fe2a07222" containerID="6c148c107a290e32467067529f8845e2cb396a10c79f269871d8b6dfe85c8538" exitCode=0 Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.502869 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"354e52a7-830a-43a1-ad15-a13fe2a07222","Type":"ContainerDied","Data":"bbe601a3871553a3d7d70b6b470ceebc522ddd2ed4d9823f1a644a888ecc03ff"} Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.502952 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"354e52a7-830a-43a1-ad15-a13fe2a07222","Type":"ContainerDied","Data":"5b75dbfffe8e1c9d76456ff93f1c9c0f4bf16b63767e3e50ca83e8220c802021"} Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.502994 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"354e52a7-830a-43a1-ad15-a13fe2a07222","Type":"ContainerDied","Data":"20bfb81f40cbc7f6e93279c153402c5ff3a9a24099eb68f5d5aa22251bdfd63e"} Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.503009 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"354e52a7-830a-43a1-ad15-a13fe2a07222","Type":"ContainerDied","Data":"2aa38c5be9a613db8f9237edd2803eb9c3a6730dcaecfe363e7fdf603819c43c"} Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.503020 4772 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"354e52a7-830a-43a1-ad15-a13fe2a07222","Type":"ContainerDied","Data":"4bb4a8713445b470473fcae6ce67f357ca8b02ad81def05ee9cb94ebea5ebf50"} Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.503031 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"354e52a7-830a-43a1-ad15-a13fe2a07222","Type":"ContainerDied","Data":"04dbfb695ea067220096d958b7c9f722332cd1273836354417eb1f3cad0efc67"} Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.503189 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"354e52a7-830a-43a1-ad15-a13fe2a07222","Type":"ContainerDied","Data":"dd0dd2fc38d88b49b28baf152ee2dced29cbe3336d9498d4ade9ee3c9adf12ee"} Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.503232 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"354e52a7-830a-43a1-ad15-a13fe2a07222","Type":"ContainerDied","Data":"71199f24f24db6b2a98a516ab206a62b13cd49e1da1c3a11e4c911c568e4f32b"} Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.503249 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"354e52a7-830a-43a1-ad15-a13fe2a07222","Type":"ContainerDied","Data":"d99d87346c3370f3f3fba6fb00e3db55bfc3eab865e86ac6c50b67b5247c7837"} Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.503260 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"354e52a7-830a-43a1-ad15-a13fe2a07222","Type":"ContainerDied","Data":"6aeb1397a245f1d928583c71247392a935b287c4678959e266ee42e4285547dd"} Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.503273 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"354e52a7-830a-43a1-ad15-a13fe2a07222","Type":"ContainerDied","Data":"e643a9463572f69ee79ae91f043796a6db50894ddf2c847ea4435b5d3f1f8d4b"} Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.503284 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"354e52a7-830a-43a1-ad15-a13fe2a07222","Type":"ContainerDied","Data":"05273857f1de6f10d05b451d131edb562b0d717aa0fb09e91569b386fad68432"} Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.503321 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"354e52a7-830a-43a1-ad15-a13fe2a07222","Type":"ContainerDied","Data":"6c148c107a290e32467067529f8845e2cb396a10c79f269871d8b6dfe85c8538"} Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.514365 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_62770fd6-1000-4477-ac95-7a4eaa489732/ovsdbserver-sb/0.log" Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.514454 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"62770fd6-1000-4477-ac95-7a4eaa489732","Type":"ContainerDied","Data":"09491c40dac74dd8b3e4ad14b3220ce685e25da19709537bae0eb2fa8ee9cebb"} Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.514535 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09491c40dac74dd8b3e4ad14b3220ce685e25da19709537bae0eb2fa8ee9cebb" Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.519397 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder2d34-account-delete-7qhqb"] Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 
11:03:18.523002 4772 generic.go:334] "Generic (PLEG): container finished" podID="14ed2945-ef18-49de-9c18-679e011d3df5" containerID="88ecd0459f0ac9488f0cd3eb8c402462803c773cf6ef7940b9aa2db2abf09dea" exitCode=143 Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.523080 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"14ed2945-ef18-49de-9c18-679e011d3df5","Type":"ContainerDied","Data":"88ecd0459f0ac9488f0cd3eb8c402462803c773cf6ef7940b9aa2db2abf09dea"} Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.529253 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novaapi0c20-account-delete-g5lnb"] Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.531195 4772 generic.go:334] "Generic (PLEG): container finished" podID="865ca651-4e53-4ac9-946d-31c1e485d91d" containerID="89b92e0a1e681be8f4f78a508d0ebcba29af7864b3c2db95e3d23d573dc85c86" exitCode=0 Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.531263 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8c79f8b65-qn7q9" event={"ID":"865ca651-4e53-4ac9-946d-31c1e485d91d","Type":"ContainerDied","Data":"89b92e0a1e681be8f4f78a508d0ebcba29af7864b3c2db95e3d23d573dc85c86"} Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.537748 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novacell05903-account-delete-c7857"] Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.538649 4772 generic.go:334] "Generic (PLEG): container finished" podID="a4e681ba-088a-41b1-9b89-8bac928038e5" containerID="94c9532e47a3e8f2deba93d357f982767f3bc9fd612be2d3ed8cd1f182488992" exitCode=137 Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.538749 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64bec24f572b153df8531886ba9a03f64e4b68cce9b1ba8c4457ff097024b967" Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.547936 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glancee71c-account-delete-64r2z"] Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.559828 4772 generic.go:334] "Generic (PLEG): container finished" podID="93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50" containerID="d2ff64d96dfba7abbcacbdddbc68b2ab55e205bcc9422fdf3d5388dd6cf5273f" exitCode=143 Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.559897 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50","Type":"ContainerDied","Data":"d2ff64d96dfba7abbcacbdddbc68b2ab55e205bcc9422fdf3d5388dd6cf5273f"} Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.578364 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-57b6cb6667-w95sj" podUID="13c1f859-42ed-484f-88cb-5349a7b64dda" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.166:8080/healthcheck\": dial tcp 10.217.0.166:8080: connect: connection refused" Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.578464 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-57b6cb6667-w95sj" podUID="13c1f859-42ed-484f-88cb-5349a7b64dda" containerName="proxy-server" probeResult="failure" output="Get \"https://10.217.0.166:8080/healthcheck\": dial tcp 10.217.0.166:8080: connect: connection refused" Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.639683 4772 generic.go:334] "Generic (PLEG): container finished" podID="b8b92f55-36d8-4358-9b57-734762f225c4" 
containerID="214c2acb33ea5a782af6be55ddcf02954762b130146c3e714c36840852bfafb4" exitCode=143 Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.639980 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b8b92f55-36d8-4358-9b57-734762f225c4","Type":"ContainerDied","Data":"214c2acb33ea5a782af6be55ddcf02954762b130146c3e714c36840852bfafb4"} Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.656335 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_4ce40448-07b1-492e-bb7c-48aaf2bb3ce9/ovsdbserver-nb/0.log" Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.656384 4772 generic.go:334] "Generic (PLEG): container finished" podID="4ce40448-07b1-492e-bb7c-48aaf2bb3ce9" containerID="c85d9df2828594224901e788e87c8476ae3cdb0ddfb531bc6c92887e8f178a94" exitCode=143 Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.656414 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"4ce40448-07b1-492e-bb7c-48aaf2bb3ce9","Type":"ContainerDied","Data":"c85d9df2828594224901e788e87c8476ae3cdb0ddfb531bc6c92887e8f178a94"} Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.656440 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"4ce40448-07b1-492e-bb7c-48aaf2bb3ce9","Type":"ContainerDied","Data":"e9a91d5e5afc84cce47e4a33ec4add6c8a0eaf6d89cea22ece11f3feb6d26602"} Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.656450 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9a91d5e5afc84cce47e4a33ec4add6c8a0eaf6d89cea22ece11f3feb6d26602" Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.713261 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea" containerName="rabbitmq" containerID="cri-o://96e2450010f46499b0808158113b617f9b05995cffcb394c9f26383aeac1a85f" gracePeriod=604800 Nov 22 11:03:18 crc kubenswrapper[4772]: E1122 11:03:18.851209 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e5c751528fea2ac722ee321494f6ac8ae1afd4e1ad69103eb66eda03840cc558" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.853316 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican3436-account-delete-4w4qv"] Nov 22 11:03:18 crc kubenswrapper[4772]: E1122 11:03:18.856386 4772 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Nov 22 11:03:18 crc kubenswrapper[4772]: E1122 11:03:18.856672 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5ce19f6b-73e1-48b9-810a-f9d97a14fe7b-config-data podName:5ce19f6b-73e1-48b9-810a-f9d97a14fe7b nodeName:}" failed. No retries permitted until 2025-11-22 11:03:22.856651351 +0000 UTC m=+1523.096095845 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/5ce19f6b-73e1-48b9-810a-f9d97a14fe7b-config-data") pod "rabbitmq-cell1-server-0" (UID: "5ce19f6b-73e1-48b9-810a-f9d97a14fe7b") : configmap "rabbitmq-cell1-config-data" not found Nov 22 11:03:18 crc kubenswrapper[4772]: E1122 11:03:18.874114 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e5c751528fea2ac722ee321494f6ac8ae1afd4e1ad69103eb66eda03840cc558" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 11:03:18 crc kubenswrapper[4772]: I1122 11:03:18.882924 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron45e9-account-delete-9pn28"] Nov 22 11:03:18 crc kubenswrapper[4772]: E1122 11:03:18.895994 4772 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5eaf9da0_a00f_4251_ae11_31ccc3e237e1.slice/crio-b35066231a06831ac6cbdd40d94a47f17dbd0bd89c978d8091d18097c6bdc840.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec4f25b4_f811_4d5a_8d3e_00e8adbd5bc7.slice/crio-b95964ad218161628ba9d6df7f28f6d82327565a7c20500e4082e1b8d7b0c9c3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod354e52a7_830a_43a1_ad15_a13fe2a07222.slice/crio-conmon-d6a9ec495836f1834c42245ba492aa4e9ebb76dac50305dd8790c68f739b9277.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod354e52a7_830a_43a1_ad15_a13fe2a07222.slice/crio-conmon-05273857f1de6f10d05b451d131edb562b0d717aa0fb09e91569b386fad68432.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod354e52a7_830a_43a1_ad15_a13fe2a07222.slice/crio-dd0dd2fc38d88b49b28baf152ee2dced29cbe3336d9498d4ade9ee3c9adf12ee.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ce40448_07b1_492e_bb7c_48aaf2bb3ce9.slice/crio-conmon-c85d9df2828594224901e788e87c8476ae3cdb0ddfb531bc6c92887e8f178a94.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod354e52a7_830a_43a1_ad15_a13fe2a07222.slice/crio-conmon-2aa38c5be9a613db8f9237edd2803eb9c3a6730dcaecfe363e7fdf603819c43c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62770fd6_1000_4477_ac95_7a4eaa489732.slice/crio-conmon-ccf4d7895ba000a2440b35a71263cfe3dbaecea5399c4efbf9b39db555947784.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod354e52a7_830a_43a1_ad15_a13fe2a07222.slice/crio-20bfb81f40cbc7f6e93279c153402c5ff3a9a24099eb68f5d5aa22251bdfd63e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod354e52a7_830a_43a1_ad15_a13fe2a07222.slice/crio-5b75dbfffe8e1c9d76456ff93f1c9c0f4bf16b63767e3e50ca83e8220c802021.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5eaf9da0_a00f_4251_ae11_31ccc3e237e1.slice/crio-conmon-b35066231a06831ac6cbdd40d94a47f17dbd0bd89c978d8091d18097c6bdc840.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod354e52a7_830a_43a1_ad15_a13fe2a07222.slice/crio-71199f24f24db6b2a98a516ab206a62b13cd49e1da1c3a11e4c911c568e4f32b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod17f0d5ca_99e5_47c6_9fdf_1932956cff3e.slice/crio-conmon-41caed95f9f668a055e34288ae91bcce5a6f3ea58f05250f45efb84f1f1c0fbf.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod45d574ce_36bc_461c_a85a_738b71392ed6.slice/crio-508aa44be1af6ce7429f5cfe8151bfa33738cc58c627ec663c79b706c344ddb4.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod56fb678a_814f_4328_8b49_9226512bf10e.slice/crio-ab42698c086434daa53f114239b03f9f09d42a90f526e9da455ca5fa44319783.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod354e52a7_830a_43a1_ad15_a13fe2a07222.slice/crio-e643a9463572f69ee79ae91f043796a6db50894ddf2c847ea4435b5d3f1f8d4b.scope\": RecentStats: unable to find data in memory cache]" Nov 22 11:03:18 crc kubenswrapper[4772]: E1122 11:03:18.930212 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e5c751528fea2ac722ee321494f6ac8ae1afd4e1ad69103eb66eda03840cc558" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 11:03:18 crc kubenswrapper[4772]: E1122 11:03:18.930284 4772 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="51f59313-1e0d-4877-9141-c32a7f72f84f" containerName="nova-scheduler-scheduler" Nov 22 11:03:18 crc kubenswrapper[4772]: E1122 11:03:18.935867 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="98cde3628a695695947f910d4b7bcf78b5a6744c0635767c30d7327e627bd98b" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Nov 22 11:03:18 crc kubenswrapper[4772]: E1122 11:03:18.943690 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="98cde3628a695695947f910d4b7bcf78b5a6744c0635767c30d7327e627bd98b" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Nov 22 11:03:18 crc kubenswrapper[4772]: E1122 11:03:18.962503 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="98cde3628a695695947f910d4b7bcf78b5a6744c0635767c30d7327e627bd98b" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Nov 22 11:03:18 crc kubenswrapper[4772]: E1122 11:03:18.962577 4772 prober.go:104] 
"Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="c4828519-a6ad-4851-b9c2-134a12f373ac" containerName="galera" Nov 22 11:03:19 crc kubenswrapper[4772]: E1122 11:03:19.003505 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bf3d996a1f9a43837e5f7fe9fc44fb298b29ccf7aa7545bca094b47b3b199409" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 22 11:03:19 crc kubenswrapper[4772]: E1122 11:03:19.026650 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bf3d996a1f9a43837e5f7fe9fc44fb298b29ccf7aa7545bca094b47b3b199409" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 22 11:03:19 crc kubenswrapper[4772]: E1122 11:03:19.028330 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bf3d996a1f9a43837e5f7fe9fc44fb298b29ccf7aa7545bca094b47b3b199409" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 22 11:03:19 crc kubenswrapper[4772]: E1122 11:03:19.028453 4772 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="4f74827a-8354-492b-b09d-350768ba912d" containerName="ovn-northd" Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.374109 4772 scope.go:117] "RemoveContainer" containerID="35b6749f76efa9aad573074ef76caffc7e6b722245c95ae6b0c92a029ed2f9ca" Nov 22 11:03:19 crc kubenswrapper[4772]: E1122 11:03:19.374568 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35b6749f76efa9aad573074ef76caffc7e6b722245c95ae6b0c92a029ed2f9ca\": container with ID starting with 35b6749f76efa9aad573074ef76caffc7e6b722245c95ae6b0c92a029ed2f9ca not found: ID does not exist" containerID="35b6749f76efa9aad573074ef76caffc7e6b722245c95ae6b0c92a029ed2f9ca" Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.374687 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35b6749f76efa9aad573074ef76caffc7e6b722245c95ae6b0c92a029ed2f9ca"} err="failed to get container status \"35b6749f76efa9aad573074ef76caffc7e6b722245c95ae6b0c92a029ed2f9ca\": rpc error: code = NotFound desc = could not find container \"35b6749f76efa9aad573074ef76caffc7e6b722245c95ae6b0c92a029ed2f9ca\": container with ID starting with 35b6749f76efa9aad573074ef76caffc7e6b722245c95ae6b0c92a029ed2f9ca not found: ID does not exist" Nov 22 11:03:19 crc kubenswrapper[4772]: E1122 11:03:19.418506 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b35066231a06831ac6cbdd40d94a47f17dbd0bd89c978d8091d18097c6bdc840 is running failed: container process not found" containerID="b35066231a06831ac6cbdd40d94a47f17dbd0bd89c978d8091d18097c6bdc840" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] 
Nov 22 11:03:19 crc kubenswrapper[4772]: E1122 11:03:19.418998 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b35066231a06831ac6cbdd40d94a47f17dbd0bd89c978d8091d18097c6bdc840 is running failed: container process not found" containerID="b35066231a06831ac6cbdd40d94a47f17dbd0bd89c978d8091d18097c6bdc840" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 22 11:03:19 crc kubenswrapper[4772]: E1122 11:03:19.419300 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b35066231a06831ac6cbdd40d94a47f17dbd0bd89c978d8091d18097c6bdc840 is running failed: container process not found" containerID="b35066231a06831ac6cbdd40d94a47f17dbd0bd89c978d8091d18097c6bdc840" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 22 11:03:19 crc kubenswrapper[4772]: E1122 11:03:19.419332 4772 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b35066231a06831ac6cbdd40d94a47f17dbd0bd89c978d8091d18097c6bdc840 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-qvtmm" podUID="5eaf9da0-a00f-4251-ae11-31ccc3e237e1" containerName="ovsdb-server" Nov 22 11:03:19 crc kubenswrapper[4772]: E1122 11:03:19.419954 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b453a8cf27cf323a7ca0a34df6781dcd755a821c7866c6dbdecdad4ea153f3ea" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 22 11:03:19 crc kubenswrapper[4772]: E1122 11:03:19.430315 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b453a8cf27cf323a7ca0a34df6781dcd755a821c7866c6dbdecdad4ea153f3ea" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 22 11:03:19 crc kubenswrapper[4772]: E1122 11:03:19.436543 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b453a8cf27cf323a7ca0a34df6781dcd755a821c7866c6dbdecdad4ea153f3ea" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 22 11:03:19 crc kubenswrapper[4772]: E1122 11:03:19.436632 4772 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-qvtmm" podUID="5eaf9da0-a00f-4251-ae11-31ccc3e237e1" containerName="ovs-vswitchd" Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.472613 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fb39191-1ec6-4ea4-84d4-8c4dc36f1031" path="/var/lib/kubelet/pods/0fb39191-1ec6-4ea4-84d4-8c4dc36f1031/volumes" Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.475148 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c8ec99a-e207-4183-b904-60921f754abf" path="/var/lib/kubelet/pods/3c8ec99a-e207-4183-b904-60921f754abf/volumes" Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.475866 4772 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f52d3ae-6917-4b9a-9f1e-533a35e47aaf" path="/var/lib/kubelet/pods/3f52d3ae-6917-4b9a-9f1e-533a35e47aaf/volumes" Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.476589 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50cba9f3-9cc7-4000-a0f7-76159920d53a" path="/var/lib/kubelet/pods/50cba9f3-9cc7-4000-a0f7-76159920d53a/volumes" Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.477770 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6851a02e-cce0-44b6-af9a-f6ab928ddbe1" path="/var/lib/kubelet/pods/6851a02e-cce0-44b6-af9a-f6ab928ddbe1/volumes" Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.478406 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e3d510f-2f1d-4d21-ae11-55ba98067c9e" path="/var/lib/kubelet/pods/7e3d510f-2f1d-4d21-ae11-55ba98067c9e/volumes" Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.478908 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="849e3f53-4dc6-4e20-aa04-0e2a0ae14427" path="/var/lib/kubelet/pods/849e3f53-4dc6-4e20-aa04-0e2a0ae14427/volumes" Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.479581 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5abd352-83cb-40b7-9f68-c5635c1b5066" path="/var/lib/kubelet/pods/a5abd352-83cb-40b7-9f68-c5635c1b5066/volumes" Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.480776 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1b00c76-f9b7-470b-ba50-4d88c28f02f1" path="/var/lib/kubelet/pods/b1b00c76-f9b7-470b-ba50-4d88c28f02f1/volumes" Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.481489 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c81b0bbf-5434-4a0d-94b1-8d2129c846b8" path="/var/lib/kubelet/pods/c81b0bbf-5434-4a0d-94b1-8d2129c846b8/volumes" Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.482039 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d583d5fa-33d5-4225-8d1b-2f389e2f35ea" path="/var/lib/kubelet/pods/d583d5fa-33d5-4225-8d1b-2f389e2f35ea/volumes" Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.482545 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d76996e4-c174-46ae-87f6-4e7ee60f7863" path="/var/lib/kubelet/pods/d76996e4-c174-46ae-87f6-4e7ee60f7863/volumes" Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.579975 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-rh9km_6f8983d0-bcff-45de-b158-351e12a0b0f3/openstack-network-exporter/0.log" Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.580065 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-rh9km" Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.683191 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f8983d0-bcff-45de-b158-351e12a0b0f3-metrics-certs-tls-certs\") pod \"6f8983d0-bcff-45de-b158-351e12a0b0f3\" (UID: \"6f8983d0-bcff-45de-b158-351e12a0b0f3\") " Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.683357 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f8983d0-bcff-45de-b158-351e12a0b0f3-config\") pod \"6f8983d0-bcff-45de-b158-351e12a0b0f3\" (UID: \"6f8983d0-bcff-45de-b158-351e12a0b0f3\") " Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.683419 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/6f8983d0-bcff-45de-b158-351e12a0b0f3-ovn-rundir\") pod \"6f8983d0-bcff-45de-b158-351e12a0b0f3\" (UID: \"6f8983d0-bcff-45de-b158-351e12a0b0f3\") " Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.683448 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjnc8\" (UniqueName: \"kubernetes.io/projected/6f8983d0-bcff-45de-b158-351e12a0b0f3-kube-api-access-kjnc8\") pod \"6f8983d0-bcff-45de-b158-351e12a0b0f3\" (UID: \"6f8983d0-bcff-45de-b158-351e12a0b0f3\") " Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.683486 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f8983d0-bcff-45de-b158-351e12a0b0f3-combined-ca-bundle\") pod \"6f8983d0-bcff-45de-b158-351e12a0b0f3\" (UID: \"6f8983d0-bcff-45de-b158-351e12a0b0f3\") " Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.683638 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f8983d0-bcff-45de-b158-351e12a0b0f3-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "6f8983d0-bcff-45de-b158-351e12a0b0f3" (UID: "6f8983d0-bcff-45de-b158-351e12a0b0f3"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.684037 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/6f8983d0-bcff-45de-b158-351e12a0b0f3-ovs-rundir\") pod \"6f8983d0-bcff-45de-b158-351e12a0b0f3\" (UID: \"6f8983d0-bcff-45de-b158-351e12a0b0f3\") " Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.684251 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f8983d0-bcff-45de-b158-351e12a0b0f3-ovs-rundir" (OuterVolumeSpecName: "ovs-rundir") pod "6f8983d0-bcff-45de-b158-351e12a0b0f3" (UID: "6f8983d0-bcff-45de-b158-351e12a0b0f3"). InnerVolumeSpecName "ovs-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.684771 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f8983d0-bcff-45de-b158-351e12a0b0f3-config" (OuterVolumeSpecName: "config") pod "6f8983d0-bcff-45de-b158-351e12a0b0f3" (UID: "6f8983d0-bcff-45de-b158-351e12a0b0f3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.684957 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f8983d0-bcff-45de-b158-351e12a0b0f3-config\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.684980 4772 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/6f8983d0-bcff-45de-b158-351e12a0b0f3-ovn-rundir\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.684994 4772 reconciler_common.go:293] "Volume detached for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/6f8983d0-bcff-45de-b158-351e12a0b0f3-ovs-rundir\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.685406 4772 generic.go:334] "Generic (PLEG): container finished" podID="45d574ce-36bc-461c-a85a-738b71392ed6" containerID="508aa44be1af6ce7429f5cfe8151bfa33738cc58c627ec663c79b706c344ddb4" exitCode=0 Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.685429 4772 generic.go:334] "Generic (PLEG): container finished" podID="45d574ce-36bc-461c-a85a-738b71392ed6" containerID="89f3af719d3b34aa755006c4c157b86e9e231adc44922aa49a262366c3fbab3e" exitCode=0 Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.685497 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"45d574ce-36bc-461c-a85a-738b71392ed6","Type":"ContainerDied","Data":"508aa44be1af6ce7429f5cfe8151bfa33738cc58c627ec663c79b706c344ddb4"} Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.685523 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"45d574ce-36bc-461c-a85a-738b71392ed6","Type":"ContainerDied","Data":"89f3af719d3b34aa755006c4c157b86e9e231adc44922aa49a262366c3fbab3e"} Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.685535 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"45d574ce-36bc-461c-a85a-738b71392ed6","Type":"ContainerDied","Data":"a7bd35683aa493abba935fb75f72561e11aecf2eb922bb1c3c4116705ab119ec"} Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.685543 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7bd35683aa493abba935fb75f72561e11aecf2eb922bb1c3c4116705ab119ec" Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.687697 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron45e9-account-delete-9pn28" event={"ID":"941a38a8-56e0-4061-8891-0cd3815477a4","Type":"ContainerStarted","Data":"b71da73b4899febd24e2c5357d90a96d5107fcdd87b03943f84a1fc4a8d49d41"} Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.689666 4772 generic.go:334] "Generic (PLEG): container finished" podID="13c1f859-42ed-484f-88cb-5349a7b64dda" containerID="dfd79733bb340ac1878b2e319236d06e3fb8878f7900376f4a7fb9aa84b8711a" exitCode=0 Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.689777 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-57b6cb6667-w95sj" event={"ID":"13c1f859-42ed-484f-88cb-5349a7b64dda","Type":"ContainerDied","Data":"dfd79733bb340ac1878b2e319236d06e3fb8878f7900376f4a7fb9aa84b8711a"} Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.689856 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-57b6cb6667-w95sj" 
event={"ID":"13c1f859-42ed-484f-88cb-5349a7b64dda","Type":"ContainerDied","Data":"5bd36ebd6bca70b4df2f24c903f872e6d9abb67e84bec4659a1b85154225ff22"} Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.689921 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5bd36ebd6bca70b4df2f24c903f872e6d9abb67e84bec4659a1b85154225ff22" Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.691338 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f8983d0-bcff-45de-b158-351e12a0b0f3-kube-api-access-kjnc8" (OuterVolumeSpecName: "kube-api-access-kjnc8") pod "6f8983d0-bcff-45de-b158-351e12a0b0f3" (UID: "6f8983d0-bcff-45de-b158-351e12a0b0f3"). InnerVolumeSpecName "kube-api-access-kjnc8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.692584 4772 generic.go:334] "Generic (PLEG): container finished" podID="027dc32b-06dd-45bf-9aad-8e0c92b44a2b" containerID="e40f637f7b43ff915a3b153426def590c2d29d02ddeac886a688e0d9bf7a29a8" exitCode=0 Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.692700 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5f797948bc-dk5pr" event={"ID":"027dc32b-06dd-45bf-9aad-8e0c92b44a2b","Type":"ContainerDied","Data":"e40f637f7b43ff915a3b153426def590c2d29d02ddeac886a688e0d9bf7a29a8"} Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.692774 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5f797948bc-dk5pr" event={"ID":"027dc32b-06dd-45bf-9aad-8e0c92b44a2b","Type":"ContainerDied","Data":"32687ed9dbee22e504bb16cb3da9dcafc367491193ac75c34bed15a7cae78a1e"} Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.692834 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32687ed9dbee22e504bb16cb3da9dcafc367491193ac75c34bed15a7cae78a1e" Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.694800 4772 generic.go:334] "Generic (PLEG): container finished" podID="1c994b4f-e182-481a-a3ba-17dc9656c70c" containerID="844e023e7c3fccc854525c2a694623fa1a3482bbdd36a977a83ba8eb6cf3ab4b" exitCode=0 Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.694958 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-79459755b6-xsvzh" event={"ID":"1c994b4f-e182-481a-a3ba-17dc9656c70c","Type":"ContainerDied","Data":"844e023e7c3fccc854525c2a694623fa1a3482bbdd36a977a83ba8eb6cf3ab4b"} Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.695133 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-79459755b6-xsvzh" event={"ID":"1c994b4f-e182-481a-a3ba-17dc9656c70c","Type":"ContainerDied","Data":"daeca1907ee532d9e66ed188e4355f4840d194cba884c816d09e13c98ee51293"} Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.695225 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="daeca1907ee532d9e66ed188e4355f4840d194cba884c816d09e13c98ee51293" Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.702963 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican3436-account-delete-4w4qv" event={"ID":"cb612c10-4436-4c79-b990-cbc7b403eed5","Type":"ContainerStarted","Data":"49263e5e29c7946d5bdeb6bfdf5992208357e642c0c3494147ca9a5754236ea4"} Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.705251 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glancee71c-account-delete-64r2z" 
event={"ID":"706b8e5a-87b8-429e-aea7-e7e5f161182f","Type":"ContainerStarted","Data":"a4f65283c36fd34461435fb05fb618636388babba3099fd1c115d101d9ef2ccb"} Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.708678 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell05903-account-delete-c7857" event={"ID":"7d122410-121a-47cd-9465-e5c6f85cf2b2","Type":"ContainerStarted","Data":"5ef68a720c9d2658b5c899583f8c1082387b678b82271a393a16dadff1cd9d3c"} Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.710428 4772 generic.go:334] "Generic (PLEG): container finished" podID="86139aa9-cd30-4d97-833e-a26562aebf92" containerID="b5adee60fbbe04c2b0f6e677dd915852ef48874363c5ddda10829f388b38decb" exitCode=143 Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.710519 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6876658948-bzr5z" event={"ID":"86139aa9-cd30-4d97-833e-a26562aebf92","Type":"ContainerDied","Data":"b5adee60fbbe04c2b0f6e677dd915852ef48874363c5ddda10829f388b38decb"} Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.712797 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novaapi0c20-account-delete-g5lnb" event={"ID":"9aeb3608-353b-4b44-8797-46affdc587a7","Type":"ContainerStarted","Data":"25d10aa38ba618bf1da0a67f0b55cf1c1dd9aae8cfe0a5a83b1c79f39efb8e3a"} Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.719149 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c4828519-a6ad-4851-b9c2-134a12f373ac","Type":"ContainerDied","Data":"98cde3628a695695947f910d4b7bcf78b5a6744c0635767c30d7327e627bd98b"} Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.719095 4772 generic.go:334] "Generic (PLEG): container finished" podID="c4828519-a6ad-4851-b9c2-134a12f373ac" containerID="98cde3628a695695947f910d4b7bcf78b5a6744c0635767c30d7327e627bd98b" exitCode=0 Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.719324 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c4828519-a6ad-4851-b9c2-134a12f373ac","Type":"ContainerDied","Data":"86d874a41642369dce4dd58887083ba2b388bcb17f891ead4338f7e5766be95a"} Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.719342 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86d874a41642369dce4dd58887083ba2b388bcb17f891ead4338f7e5766be95a" Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.724424 4772 generic.go:334] "Generic (PLEG): container finished" podID="5fbd4e9d-9635-462d-abba-763daf0da369" containerID="38b682e02083c87881ca0840c5c8512286c619c93d70f45cc0f97311ece2d602" exitCode=0 Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.724503 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"5fbd4e9d-9635-462d-abba-763daf0da369","Type":"ContainerDied","Data":"38b682e02083c87881ca0840c5c8512286c619c93d70f45cc0f97311ece2d602"} Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.724549 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"5fbd4e9d-9635-462d-abba-763daf0da369","Type":"ContainerDied","Data":"b2026cf4626b6475bc4df4f61b59a362897ae9c87c8c3483f94afec6886981b9"} Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.724563 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2026cf4626b6475bc4df4f61b59a362897ae9c87c8c3483f94afec6886981b9" Nov 22 
11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.727436 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-rh9km" Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.727448 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder2d34-account-delete-7qhqb" event={"ID":"30007403-085b-4874-88b7-8b27426fd4f7","Type":"ContainerStarted","Data":"389d71be2ab45daef9f146b117f3ae303e117cc7f3b03f21be0899adc4dc525c"} Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.757991 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-7f4sv" Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.793765 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjnc8\" (UniqueName: \"kubernetes.io/projected/6f8983d0-bcff-45de-b158-351e12a0b0f3-kube-api-access-kjnc8\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.832313 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f8983d0-bcff-45de-b158-351e12a0b0f3-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "6f8983d0-bcff-45de-b158-351e12a0b0f3" (UID: "6f8983d0-bcff-45de-b158-351e12a0b0f3"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.836740 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f8983d0-bcff-45de-b158-351e12a0b0f3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6f8983d0-bcff-45de-b158-351e12a0b0f3" (UID: "6f8983d0-bcff-45de-b158-351e12a0b0f3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.894946 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/464d950a-e1bb-4efb-afdf-37b97a62a42c-dns-svc\") pod \"464d950a-e1bb-4efb-afdf-37b97a62a42c\" (UID: \"464d950a-e1bb-4efb-afdf-37b97a62a42c\") " Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.895020 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/464d950a-e1bb-4efb-afdf-37b97a62a42c-dns-swift-storage-0\") pod \"464d950a-e1bb-4efb-afdf-37b97a62a42c\" (UID: \"464d950a-e1bb-4efb-afdf-37b97a62a42c\") " Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.895208 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxzxk\" (UniqueName: \"kubernetes.io/projected/464d950a-e1bb-4efb-afdf-37b97a62a42c-kube-api-access-kxzxk\") pod \"464d950a-e1bb-4efb-afdf-37b97a62a42c\" (UID: \"464d950a-e1bb-4efb-afdf-37b97a62a42c\") " Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.895242 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/464d950a-e1bb-4efb-afdf-37b97a62a42c-ovsdbserver-sb\") pod \"464d950a-e1bb-4efb-afdf-37b97a62a42c\" (UID: \"464d950a-e1bb-4efb-afdf-37b97a62a42c\") " Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.895282 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/464d950a-e1bb-4efb-afdf-37b97a62a42c-config\") pod \"464d950a-e1bb-4efb-afdf-37b97a62a42c\" (UID: \"464d950a-e1bb-4efb-afdf-37b97a62a42c\") " Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.895316 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/464d950a-e1bb-4efb-afdf-37b97a62a42c-ovsdbserver-nb\") pod \"464d950a-e1bb-4efb-afdf-37b97a62a42c\" (UID: \"464d950a-e1bb-4efb-afdf-37b97a62a42c\") " Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.895810 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f8983d0-bcff-45de-b158-351e12a0b0f3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.895824 4772 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f8983d0-bcff-45de-b158-351e12a0b0f3-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:19 crc kubenswrapper[4772]: E1122 11:03:19.895887 4772 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Nov 22 11:03:19 crc kubenswrapper[4772]: E1122 11:03:19.895933 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea-config-data podName:468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea nodeName:}" failed. No retries permitted until 2025-11-22 11:03:23.895918564 +0000 UTC m=+1524.135363058 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea-config-data") pod "rabbitmq-server-0" (UID: "468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea") : configmap "rabbitmq-config-data" not found Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.914027 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/464d950a-e1bb-4efb-afdf-37b97a62a42c-kube-api-access-kxzxk" (OuterVolumeSpecName: "kube-api-access-kxzxk") pod "464d950a-e1bb-4efb-afdf-37b97a62a42c" (UID: "464d950a-e1bb-4efb-afdf-37b97a62a42c"). InnerVolumeSpecName "kube-api-access-kxzxk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.959333 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/464d950a-e1bb-4efb-afdf-37b97a62a42c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "464d950a-e1bb-4efb-afdf-37b97a62a42c" (UID: "464d950a-e1bb-4efb-afdf-37b97a62a42c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.985252 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/464d950a-e1bb-4efb-afdf-37b97a62a42c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "464d950a-e1bb-4efb-afdf-37b97a62a42c" (UID: "464d950a-e1bb-4efb-afdf-37b97a62a42c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.989289 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/464d950a-e1bb-4efb-afdf-37b97a62a42c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "464d950a-e1bb-4efb-afdf-37b97a62a42c" (UID: "464d950a-e1bb-4efb-afdf-37b97a62a42c"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.999355 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxzxk\" (UniqueName: \"kubernetes.io/projected/464d950a-e1bb-4efb-afdf-37b97a62a42c-kube-api-access-kxzxk\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.999798 4772 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/464d950a-e1bb-4efb-afdf-37b97a62a42c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.999893 4772 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/464d950a-e1bb-4efb-afdf-37b97a62a42c-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:19 crc kubenswrapper[4772]: I1122 11:03:19.999948 4772 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/464d950a-e1bb-4efb-afdf-37b97a62a42c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:20 crc kubenswrapper[4772]: E1122 11:03:20.043618 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c85d9df2828594224901e788e87c8476ae3cdb0ddfb531bc6c92887e8f178a94 is running failed: container process not found" containerID="c85d9df2828594224901e788e87c8476ae3cdb0ddfb531bc6c92887e8f178a94" cmd=["/usr/bin/pidof","ovsdb-server"] Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.045783 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/464d950a-e1bb-4efb-afdf-37b97a62a42c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "464d950a-e1bb-4efb-afdf-37b97a62a42c" (UID: "464d950a-e1bb-4efb-afdf-37b97a62a42c"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 11:03:20 crc kubenswrapper[4772]: E1122 11:03:20.049787 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c85d9df2828594224901e788e87c8476ae3cdb0ddfb531bc6c92887e8f178a94 is running failed: container process not found" containerID="c85d9df2828594224901e788e87c8476ae3cdb0ddfb531bc6c92887e8f178a94" cmd=["/usr/bin/pidof","ovsdb-server"] Nov 22 11:03:20 crc kubenswrapper[4772]: E1122 11:03:20.051599 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c85d9df2828594224901e788e87c8476ae3cdb0ddfb531bc6c92887e8f178a94 is running failed: container process not found" containerID="c85d9df2828594224901e788e87c8476ae3cdb0ddfb531bc6c92887e8f178a94" cmd=["/usr/bin/pidof","ovsdb-server"] Nov 22 11:03:20 crc kubenswrapper[4772]: E1122 11:03:20.051957 4772 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c85d9df2828594224901e788e87c8476ae3cdb0ddfb531bc6c92887e8f178a94 is running failed: container process not found" probeType="Readiness" pod="openstack/ovsdbserver-nb-0" podUID="4ce40448-07b1-492e-bb7c-48aaf2bb3ce9" containerName="ovsdbserver-nb" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.113702 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.114018 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2c6ce3fb-4529-4856-a326-bb0e9ea0ae40" containerName="ceilometer-central-agent" containerID="cri-o://10ca89615b0daee19e2ff35e6822a5cb0d8fad7528fc2fae6ffe6155cd72db74" gracePeriod=30 Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.114171 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2c6ce3fb-4529-4856-a326-bb0e9ea0ae40" containerName="proxy-httpd" containerID="cri-o://eb806120517e41fc28276787644b8c800bdb795e19ae53859d7279e00db21cb7" gracePeriod=30 Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.114219 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2c6ce3fb-4529-4856-a326-bb0e9ea0ae40" containerName="sg-core" containerID="cri-o://5e0e21415f001e21917dd3a32819ab9cb5db83f0c62abce63b6c361f7db09c0d" gracePeriod=30 Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.114260 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2c6ce3fb-4529-4856-a326-bb0e9ea0ae40" containerName="ceilometer-notification-agent" containerID="cri-o://f9a0eee7dffa91a6e45f6416ea3a94139da8a50ce7e232bc83c7eb4c3723250c" gracePeriod=30 Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.122767 4772 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/464d950a-e1bb-4efb-afdf-37b97a62a42c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.149101 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.157393 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" 
podUID="ddb4a824-3a8a-4287-b206-94832099e15b" containerName="kube-state-metrics" containerID="cri-o://0b9ac1c20deb88561ffa4d4aa22d3d7b705dfba4a955494934d74483e7d263d6" gracePeriod=30 Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.186810 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.192070 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-rh9km"] Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.193773 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_62770fd6-1000-4477-ac95-7a4eaa489732/ovsdbserver-sb/0.log" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.193930 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.206742 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_4ce40448-07b1-492e-bb7c-48aaf2bb3ce9/ovsdbserver-nb/0.log" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.206848 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.218244 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-metrics-rh9km"] Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.224008 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-sb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"62770fd6-1000-4477-ac95-7a4eaa489732\" (UID: \"62770fd6-1000-4477-ac95-7a4eaa489732\") " Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.224283 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4ce40448-07b1-492e-bb7c-48aaf2bb3ce9-ovsdb-rundir\") pod \"4ce40448-07b1-492e-bb7c-48aaf2bb3ce9\" (UID: \"4ce40448-07b1-492e-bb7c-48aaf2bb3ce9\") " Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.224329 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62770fd6-1000-4477-ac95-7a4eaa489732-config\") pod \"62770fd6-1000-4477-ac95-7a4eaa489732\" (UID: \"62770fd6-1000-4477-ac95-7a4eaa489732\") " Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.224378 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/62770fd6-1000-4477-ac95-7a4eaa489732-ovsdb-rundir\") pod \"62770fd6-1000-4477-ac95-7a4eaa489732\" (UID: \"62770fd6-1000-4477-ac95-7a4eaa489732\") " Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.224413 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a4e681ba-088a-41b1-9b89-8bac928038e5-openstack-config\") pod \"a4e681ba-088a-41b1-9b89-8bac928038e5\" (UID: \"a4e681ba-088a-41b1-9b89-8bac928038e5\") " Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.224442 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ce40448-07b1-492e-bb7c-48aaf2bb3ce9-config\") pod \"4ce40448-07b1-492e-bb7c-48aaf2bb3ce9\" (UID: \"4ce40448-07b1-492e-bb7c-48aaf2bb3ce9\") " Nov 22 11:03:20 
crc kubenswrapper[4772]: I1122 11:03:20.224466 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/62770fd6-1000-4477-ac95-7a4eaa489732-scripts\") pod \"62770fd6-1000-4477-ac95-7a4eaa489732\" (UID: \"62770fd6-1000-4477-ac95-7a4eaa489732\") " Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.224500 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ce40448-07b1-492e-bb7c-48aaf2bb3ce9-ovsdbserver-nb-tls-certs\") pod \"4ce40448-07b1-492e-bb7c-48aaf2bb3ce9\" (UID: \"4ce40448-07b1-492e-bb7c-48aaf2bb3ce9\") " Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.224536 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/62770fd6-1000-4477-ac95-7a4eaa489732-metrics-certs-tls-certs\") pod \"62770fd6-1000-4477-ac95-7a4eaa489732\" (UID: \"62770fd6-1000-4477-ac95-7a4eaa489732\") " Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.224560 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ce40448-07b1-492e-bb7c-48aaf2bb3ce9-combined-ca-bundle\") pod \"4ce40448-07b1-492e-bb7c-48aaf2bb3ce9\" (UID: \"4ce40448-07b1-492e-bb7c-48aaf2bb3ce9\") " Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.224599 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d52kl\" (UniqueName: \"kubernetes.io/projected/62770fd6-1000-4477-ac95-7a4eaa489732-kube-api-access-d52kl\") pod \"62770fd6-1000-4477-ac95-7a4eaa489732\" (UID: \"62770fd6-1000-4477-ac95-7a4eaa489732\") " Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.224627 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a4e681ba-088a-41b1-9b89-8bac928038e5-openstack-config-secret\") pod \"a4e681ba-088a-41b1-9b89-8bac928038e5\" (UID: \"a4e681ba-088a-41b1-9b89-8bac928038e5\") " Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.224663 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/62770fd6-1000-4477-ac95-7a4eaa489732-ovsdbserver-sb-tls-certs\") pod \"62770fd6-1000-4477-ac95-7a4eaa489732\" (UID: \"62770fd6-1000-4477-ac95-7a4eaa489732\") " Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.224690 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4e681ba-088a-41b1-9b89-8bac928038e5-combined-ca-bundle\") pod \"a4e681ba-088a-41b1-9b89-8bac928038e5\" (UID: \"a4e681ba-088a-41b1-9b89-8bac928038e5\") " Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.224724 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-nb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"4ce40448-07b1-492e-bb7c-48aaf2bb3ce9\" (UID: \"4ce40448-07b1-492e-bb7c-48aaf2bb3ce9\") " Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.224750 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdncq\" (UniqueName: \"kubernetes.io/projected/4ce40448-07b1-492e-bb7c-48aaf2bb3ce9-kube-api-access-cdncq\") pod \"4ce40448-07b1-492e-bb7c-48aaf2bb3ce9\" (UID: 
\"4ce40448-07b1-492e-bb7c-48aaf2bb3ce9\") " Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.224771 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ce40448-07b1-492e-bb7c-48aaf2bb3ce9-metrics-certs-tls-certs\") pod \"4ce40448-07b1-492e-bb7c-48aaf2bb3ce9\" (UID: \"4ce40448-07b1-492e-bb7c-48aaf2bb3ce9\") " Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.224789 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62770fd6-1000-4477-ac95-7a4eaa489732-combined-ca-bundle\") pod \"62770fd6-1000-4477-ac95-7a4eaa489732\" (UID: \"62770fd6-1000-4477-ac95-7a4eaa489732\") " Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.224871 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5lvr\" (UniqueName: \"kubernetes.io/projected/a4e681ba-088a-41b1-9b89-8bac928038e5-kube-api-access-b5lvr\") pod \"a4e681ba-088a-41b1-9b89-8bac928038e5\" (UID: \"a4e681ba-088a-41b1-9b89-8bac928038e5\") " Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.224908 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4ce40448-07b1-492e-bb7c-48aaf2bb3ce9-scripts\") pod \"4ce40448-07b1-492e-bb7c-48aaf2bb3ce9\" (UID: \"4ce40448-07b1-492e-bb7c-48aaf2bb3ce9\") " Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.228895 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ce40448-07b1-492e-bb7c-48aaf2bb3ce9-scripts" (OuterVolumeSpecName: "scripts") pod "4ce40448-07b1-492e-bb7c-48aaf2bb3ce9" (UID: "4ce40448-07b1-492e-bb7c-48aaf2bb3ce9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.229501 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4ce40448-07b1-492e-bb7c-48aaf2bb3ce9-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.241077 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ce40448-07b1-492e-bb7c-48aaf2bb3ce9-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "4ce40448-07b1-492e-bb7c-48aaf2bb3ce9" (UID: "4ce40448-07b1-492e-bb7c-48aaf2bb3ce9"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.245775 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62770fd6-1000-4477-ac95-7a4eaa489732-config" (OuterVolumeSpecName: "config") pod "62770fd6-1000-4477-ac95-7a4eaa489732" (UID: "62770fd6-1000-4477-ac95-7a4eaa489732"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.248972 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-7x5tj"] Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.249433 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ce40448-07b1-492e-bb7c-48aaf2bb3ce9-config" (OuterVolumeSpecName: "config") pod "4ce40448-07b1-492e-bb7c-48aaf2bb3ce9" (UID: "4ce40448-07b1-492e-bb7c-48aaf2bb3ce9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.249598 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62770fd6-1000-4477-ac95-7a4eaa489732-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "62770fd6-1000-4477-ac95-7a4eaa489732" (UID: "62770fd6-1000-4477-ac95-7a4eaa489732"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.258927 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/464d950a-e1bb-4efb-afdf-37b97a62a42c-config" (OuterVolumeSpecName: "config") pod "464d950a-e1bb-4efb-afdf-37b97a62a42c" (UID: "464d950a-e1bb-4efb-afdf-37b97a62a42c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.263173 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-7x5tj"] Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.270504 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62770fd6-1000-4477-ac95-7a4eaa489732-scripts" (OuterVolumeSpecName: "scripts") pod "62770fd6-1000-4477-ac95-7a4eaa489732" (UID: "62770fd6-1000-4477-ac95-7a4eaa489732"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.276014 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-e71c-account-create-kr4nm"] Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.285784 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glancee71c-account-delete-64r2z"] Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.289945 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ce40448-07b1-492e-bb7c-48aaf2bb3ce9-kube-api-access-cdncq" (OuterVolumeSpecName: "kube-api-access-cdncq") pod "4ce40448-07b1-492e-bb7c-48aaf2bb3ce9" (UID: "4ce40448-07b1-492e-bb7c-48aaf2bb3ce9"). InnerVolumeSpecName "kube-api-access-cdncq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.292024 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62770fd6-1000-4477-ac95-7a4eaa489732-kube-api-access-d52kl" (OuterVolumeSpecName: "kube-api-access-d52kl") pod "62770fd6-1000-4477-ac95-7a4eaa489732" (UID: "62770fd6-1000-4477-ac95-7a4eaa489732"). InnerVolumeSpecName "kube-api-access-d52kl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.294848 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-e71c-account-create-kr4nm"] Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.318585 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4e681ba-088a-41b1-9b89-8bac928038e5-kube-api-access-b5lvr" (OuterVolumeSpecName: "kube-api-access-b5lvr") pod "a4e681ba-088a-41b1-9b89-8bac928038e5" (UID: "a4e681ba-088a-41b1-9b89-8bac928038e5"). InnerVolumeSpecName "kube-api-access-b5lvr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.318588 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "ovndbcluster-nb-etc-ovn") pod "4ce40448-07b1-492e-bb7c-48aaf2bb3ce9" (UID: "4ce40448-07b1-492e-bb7c-48aaf2bb3ce9"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.324863 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "ovndbcluster-sb-etc-ovn") pod "62770fd6-1000-4477-ac95-7a4eaa489732" (UID: "62770fd6-1000-4477-ac95-7a4eaa489732"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.330756 4772 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.330781 4772 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4ce40448-07b1-492e-bb7c-48aaf2bb3ce9-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.330793 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62770fd6-1000-4477-ac95-7a4eaa489732-config\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.330802 4772 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/62770fd6-1000-4477-ac95-7a4eaa489732-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.330812 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ce40448-07b1-492e-bb7c-48aaf2bb3ce9-config\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.330820 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/62770fd6-1000-4477-ac95-7a4eaa489732-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.330828 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d52kl\" (UniqueName: \"kubernetes.io/projected/62770fd6-1000-4477-ac95-7a4eaa489732-kube-api-access-d52kl\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.330842 4772 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.330850 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdncq\" (UniqueName: \"kubernetes.io/projected/4ce40448-07b1-492e-bb7c-48aaf2bb3ce9-kube-api-access-cdncq\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.330861 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/464d950a-e1bb-4efb-afdf-37b97a62a42c-config\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:20 crc 
kubenswrapper[4772]: I1122 11:03:20.330869 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5lvr\" (UniqueName: \"kubernetes.io/projected/a4e681ba-088a-41b1-9b89-8bac928038e5-kube-api-access-b5lvr\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.342797 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-79459755b6-xsvzh" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.354482 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5f797948bc-dk5pr" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.369100 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-jr9dl"] Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.371388 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.380810 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-jr9dl"] Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.383881 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.390670 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0c20-account-create-g4m74"] Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.405930 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-57b6cb6667-w95sj" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.410002 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.427765 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0c20-account-create-g4m74"] Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.432254 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13c1f859-42ed-484f-88cb-5349a7b64dda-config-data\") pod \"13c1f859-42ed-484f-88cb-5349a7b64dda\" (UID: \"13c1f859-42ed-484f-88cb-5349a7b64dda\") " Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.432396 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/13c1f859-42ed-484f-88cb-5349a7b64dda-etc-swift\") pod \"13c1f859-42ed-484f-88cb-5349a7b64dda\" (UID: \"13c1f859-42ed-484f-88cb-5349a7b64dda\") " Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.432472 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/13c1f859-42ed-484f-88cb-5349a7b64dda-log-httpd\") pod \"13c1f859-42ed-484f-88cb-5349a7b64dda\" (UID: \"13c1f859-42ed-484f-88cb-5349a7b64dda\") " Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.432576 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fbd4e9d-9635-462d-abba-763daf0da369-config-data\") pod \"5fbd4e9d-9635-462d-abba-763daf0da369\" (UID: \"5fbd4e9d-9635-462d-abba-763daf0da369\") " Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.432664 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c994b4f-e182-481a-a3ba-17dc9656c70c-config-data\") pod \"1c994b4f-e182-481a-a3ba-17dc9656c70c\" (UID: \"1c994b4f-e182-481a-a3ba-17dc9656c70c\") " Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.434362 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c4828519-a6ad-4851-b9c2-134a12f373ac-config-data-default\") pod \"c4828519-a6ad-4851-b9c2-134a12f373ac\" (UID: \"c4828519-a6ad-4851-b9c2-134a12f373ac\") " Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.434452 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/027dc32b-06dd-45bf-9aad-8e0c92b44a2b-config-data\") pod \"027dc32b-06dd-45bf-9aad-8e0c92b44a2b\" (UID: \"027dc32b-06dd-45bf-9aad-8e0c92b44a2b\") " Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.434533 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/c4828519-a6ad-4851-b9c2-134a12f373ac-secrets\") pod \"c4828519-a6ad-4851-b9c2-134a12f373ac\" (UID: \"c4828519-a6ad-4851-b9c2-134a12f373ac\") " Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.434634 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/45d574ce-36bc-461c-a85a-738b71392ed6-config-data-custom\") pod \"45d574ce-36bc-461c-a85a-738b71392ed6\" (UID: \"45d574ce-36bc-461c-a85a-738b71392ed6\") " Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.434719 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/027dc32b-06dd-45bf-9aad-8e0c92b44a2b-combined-ca-bundle\") pod \"027dc32b-06dd-45bf-9aad-8e0c92b44a2b\" (UID: \"027dc32b-06dd-45bf-9aad-8e0c92b44a2b\") " Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.434816 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4828519-a6ad-4851-b9c2-134a12f373ac-galera-tls-certs\") pod \"c4828519-a6ad-4851-b9c2-134a12f373ac\" (UID: \"c4828519-a6ad-4851-b9c2-134a12f373ac\") " Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.434895 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13c1f859-42ed-484f-88cb-5349a7b64dda-combined-ca-bundle\") pod \"13c1f859-42ed-484f-88cb-5349a7b64dda\" (UID: \"13c1f859-42ed-484f-88cb-5349a7b64dda\") " Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.434974 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2265k\" (UniqueName: \"kubernetes.io/projected/c4828519-a6ad-4851-b9c2-134a12f373ac-kube-api-access-2265k\") pod \"c4828519-a6ad-4851-b9c2-134a12f373ac\" (UID: \"c4828519-a6ad-4851-b9c2-134a12f373ac\") " Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.435069 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/027dc32b-06dd-45bf-9aad-8e0c92b44a2b-config-data-custom\") pod \"027dc32b-06dd-45bf-9aad-8e0c92b44a2b\" (UID: \"027dc32b-06dd-45bf-9aad-8e0c92b44a2b\") " Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.435147 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45d574ce-36bc-461c-a85a-738b71392ed6-config-data\") pod \"45d574ce-36bc-461c-a85a-738b71392ed6\" (UID: \"45d574ce-36bc-461c-a85a-738b71392ed6\") " Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.435265 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c4828519-a6ad-4851-b9c2-134a12f373ac-config-data-generated\") pod \"c4828519-a6ad-4851-b9c2-134a12f373ac\" (UID: \"c4828519-a6ad-4851-b9c2-134a12f373ac\") " Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.435349 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2zlx\" (UniqueName: \"kubernetes.io/projected/1c994b4f-e182-481a-a3ba-17dc9656c70c-kube-api-access-q2zlx\") pod \"1c994b4f-e182-481a-a3ba-17dc9656c70c\" (UID: \"1c994b4f-e182-481a-a3ba-17dc9656c70c\") " Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.435419 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxlr4\" (UniqueName: \"kubernetes.io/projected/13c1f859-42ed-484f-88cb-5349a7b64dda-kube-api-access-qxlr4\") pod \"13c1f859-42ed-484f-88cb-5349a7b64dda\" (UID: \"13c1f859-42ed-484f-88cb-5349a7b64dda\") " Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.435491 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c994b4f-e182-481a-a3ba-17dc9656c70c-combined-ca-bundle\") pod \"1c994b4f-e182-481a-a3ba-17dc9656c70c\" (UID: \"1c994b4f-e182-481a-a3ba-17dc9656c70c\") " Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.435560 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/45d574ce-36bc-461c-a85a-738b71392ed6-etc-machine-id\") pod \"45d574ce-36bc-461c-a85a-738b71392ed6\" (UID: \"45d574ce-36bc-461c-a85a-738b71392ed6\") " Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.435632 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n45wg\" (UniqueName: \"kubernetes.io/projected/45d574ce-36bc-461c-a85a-738b71392ed6-kube-api-access-n45wg\") pod \"45d574ce-36bc-461c-a85a-738b71392ed6\" (UID: \"45d574ce-36bc-461c-a85a-738b71392ed6\") " Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.435695 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"c4828519-a6ad-4851-b9c2-134a12f373ac\" (UID: \"c4828519-a6ad-4851-b9c2-134a12f373ac\") " Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.435760 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fbd4e9d-9635-462d-abba-763daf0da369-combined-ca-bundle\") pod \"5fbd4e9d-9635-462d-abba-763daf0da369\" (UID: \"5fbd4e9d-9635-462d-abba-763daf0da369\") " Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.435826 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vq4mt\" (UniqueName: \"kubernetes.io/projected/5fbd4e9d-9635-462d-abba-763daf0da369-kube-api-access-vq4mt\") pod \"5fbd4e9d-9635-462d-abba-763daf0da369\" (UID: \"5fbd4e9d-9635-462d-abba-763daf0da369\") " Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.435913 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c4828519-a6ad-4851-b9c2-134a12f373ac-kolla-config\") pod \"c4828519-a6ad-4851-b9c2-134a12f373ac\" (UID: \"c4828519-a6ad-4851-b9c2-134a12f373ac\") " Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.436007 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4828519-a6ad-4851-b9c2-134a12f373ac-operator-scripts\") pod \"c4828519-a6ad-4851-b9c2-134a12f373ac\" (UID: \"c4828519-a6ad-4851-b9c2-134a12f373ac\") " Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.436089 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45d574ce-36bc-461c-a85a-738b71392ed6-scripts\") pod \"45d574ce-36bc-461c-a85a-738b71392ed6\" (UID: \"45d574ce-36bc-461c-a85a-738b71392ed6\") " Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.440655 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4828519-a6ad-4851-b9c2-134a12f373ac-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c4828519-a6ad-4851-b9c2-134a12f373ac" (UID: "c4828519-a6ad-4851-b9c2-134a12f373ac"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.436229 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45d574ce-36bc-461c-a85a-738b71392ed6-combined-ca-bundle\") pod \"45d574ce-36bc-461c-a85a-738b71392ed6\" (UID: \"45d574ce-36bc-461c-a85a-738b71392ed6\") " Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.447025 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/5fbd4e9d-9635-462d-abba-763daf0da369-vencrypt-tls-certs\") pod \"5fbd4e9d-9635-462d-abba-763daf0da369\" (UID: \"5fbd4e9d-9635-462d-abba-763daf0da369\") " Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.447083 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/13c1f859-42ed-484f-88cb-5349a7b64dda-public-tls-certs\") pod \"13c1f859-42ed-484f-88cb-5349a7b64dda\" (UID: \"13c1f859-42ed-484f-88cb-5349a7b64dda\") " Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.447109 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1c994b4f-e182-481a-a3ba-17dc9656c70c-config-data-custom\") pod \"1c994b4f-e182-481a-a3ba-17dc9656c70c\" (UID: \"1c994b4f-e182-481a-a3ba-17dc9656c70c\") " Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.447267 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/13c1f859-42ed-484f-88cb-5349a7b64dda-run-httpd\") pod \"13c1f859-42ed-484f-88cb-5349a7b64dda\" (UID: \"13c1f859-42ed-484f-88cb-5349a7b64dda\") " Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.447305 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/13c1f859-42ed-484f-88cb-5349a7b64dda-internal-tls-certs\") pod \"13c1f859-42ed-484f-88cb-5349a7b64dda\" (UID: \"13c1f859-42ed-484f-88cb-5349a7b64dda\") " Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.447326 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/027dc32b-06dd-45bf-9aad-8e0c92b44a2b-logs\") pod \"027dc32b-06dd-45bf-9aad-8e0c92b44a2b\" (UID: \"027dc32b-06dd-45bf-9aad-8e0c92b44a2b\") " Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.447361 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4828519-a6ad-4851-b9c2-134a12f373ac-combined-ca-bundle\") pod \"c4828519-a6ad-4851-b9c2-134a12f373ac\" (UID: \"c4828519-a6ad-4851-b9c2-134a12f373ac\") " Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.447391 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/5fbd4e9d-9635-462d-abba-763daf0da369-nova-novncproxy-tls-certs\") pod \"5fbd4e9d-9635-462d-abba-763daf0da369\" (UID: \"5fbd4e9d-9635-462d-abba-763daf0da369\") " Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.449327 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c994b4f-e182-481a-a3ba-17dc9656c70c-logs\") pod \"1c994b4f-e182-481a-a3ba-17dc9656c70c\" (UID: 
\"1c994b4f-e182-481a-a3ba-17dc9656c70c\") " Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.449579 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wf9sn\" (UniqueName: \"kubernetes.io/projected/027dc32b-06dd-45bf-9aad-8e0c92b44a2b-kube-api-access-wf9sn\") pod \"027dc32b-06dd-45bf-9aad-8e0c92b44a2b\" (UID: \"027dc32b-06dd-45bf-9aad-8e0c92b44a2b\") " Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.450869 4772 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4828519-a6ad-4851-b9c2-134a12f373ac-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.452131 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45d574ce-36bc-461c-a85a-738b71392ed6-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "45d574ce-36bc-461c-a85a-738b71392ed6" (UID: "45d574ce-36bc-461c-a85a-738b71392ed6"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.456646 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4828519-a6ad-4851-b9c2-134a12f373ac-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "c4828519-a6ad-4851-b9c2-134a12f373ac" (UID: "c4828519-a6ad-4851-b9c2-134a12f373ac"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.461988 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novaapi0c20-account-delete-g5lnb"] Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.462441 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/027dc32b-06dd-45bf-9aad-8e0c92b44a2b-logs" (OuterVolumeSpecName: "logs") pod "027dc32b-06dd-45bf-9aad-8e0c92b44a2b" (UID: "027dc32b-06dd-45bf-9aad-8e0c92b44a2b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.468891 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4828519-a6ad-4851-b9c2-134a12f373ac-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "c4828519-a6ad-4851-b9c2-134a12f373ac" (UID: "c4828519-a6ad-4851-b9c2-134a12f373ac"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.475985 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4828519-a6ad-4851-b9c2-134a12f373ac-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "c4828519-a6ad-4851-b9c2-134a12f373ac" (UID: "c4828519-a6ad-4851-b9c2-134a12f373ac"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.488480 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13c1f859-42ed-484f-88cb-5349a7b64dda-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "13c1f859-42ed-484f-88cb-5349a7b64dda" (UID: "13c1f859-42ed-484f-88cb-5349a7b64dda"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.488542 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13c1f859-42ed-484f-88cb-5349a7b64dda-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "13c1f859-42ed-484f-88cb-5349a7b64dda" (UID: "13c1f859-42ed-484f-88cb-5349a7b64dda"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.488825 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c994b4f-e182-481a-a3ba-17dc9656c70c-logs" (OuterVolumeSpecName: "logs") pod "1c994b4f-e182-481a-a3ba-17dc9656c70c" (UID: "1c994b4f-e182-481a-a3ba-17dc9656c70c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.510213 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13c1f859-42ed-484f-88cb-5349a7b64dda-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "13c1f859-42ed-484f-88cb-5349a7b64dda" (UID: "13c1f859-42ed-484f-88cb-5349a7b64dda"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.525140 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/027dc32b-06dd-45bf-9aad-8e0c92b44a2b-kube-api-access-wf9sn" (OuterVolumeSpecName: "kube-api-access-wf9sn") pod "027dc32b-06dd-45bf-9aad-8e0c92b44a2b" (UID: "027dc32b-06dd-45bf-9aad-8e0c92b44a2b"). InnerVolumeSpecName "kube-api-access-wf9sn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.548513 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c994b4f-e182-481a-a3ba-17dc9656c70c-kube-api-access-q2zlx" (OuterVolumeSpecName: "kube-api-access-q2zlx") pod "1c994b4f-e182-481a-a3ba-17dc9656c70c" (UID: "1c994b4f-e182-481a-a3ba-17dc9656c70c"). InnerVolumeSpecName "kube-api-access-q2zlx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.558283 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fbd4e9d-9635-462d-abba-763daf0da369-kube-api-access-vq4mt" (OuterVolumeSpecName: "kube-api-access-vq4mt") pod "5fbd4e9d-9635-462d-abba-763daf0da369" (UID: "5fbd4e9d-9635-462d-abba-763daf0da369"). InnerVolumeSpecName "kube-api-access-vq4mt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.560351 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vq4mt\" (UniqueName: \"kubernetes.io/projected/5fbd4e9d-9635-462d-abba-763daf0da369-kube-api-access-vq4mt\") pod \"5fbd4e9d-9635-462d-abba-763daf0da369\" (UID: \"5fbd4e9d-9635-462d-abba-763daf0da369\") " Nov 22 11:03:20 crc kubenswrapper[4772]: W1122 11:03:20.560564 4772 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/5fbd4e9d-9635-462d-abba-763daf0da369/volumes/kubernetes.io~projected/kube-api-access-vq4mt Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.560628 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fbd4e9d-9635-462d-abba-763daf0da369-kube-api-access-vq4mt" (OuterVolumeSpecName: "kube-api-access-vq4mt") pod "5fbd4e9d-9635-462d-abba-763daf0da369" (UID: "5fbd4e9d-9635-462d-abba-763daf0da369"). InnerVolumeSpecName "kube-api-access-vq4mt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.561086 4772 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/13c1f859-42ed-484f-88cb-5349a7b64dda-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.561108 4772 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/027dc32b-06dd-45bf-9aad-8e0c92b44a2b-logs\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.561119 4772 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c994b4f-e182-481a-a3ba-17dc9656c70c-logs\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.561218 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wf9sn\" (UniqueName: \"kubernetes.io/projected/027dc32b-06dd-45bf-9aad-8e0c92b44a2b-kube-api-access-wf9sn\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.561244 4772 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/13c1f859-42ed-484f-88cb-5349a7b64dda-etc-swift\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.561254 4772 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/13c1f859-42ed-484f-88cb-5349a7b64dda-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.561267 4772 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c4828519-a6ad-4851-b9c2-134a12f373ac-config-data-default\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.561280 4772 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c4828519-a6ad-4851-b9c2-134a12f373ac-config-data-generated\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.561291 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2zlx\" (UniqueName: \"kubernetes.io/projected/1c994b4f-e182-481a-a3ba-17dc9656c70c-kube-api-access-q2zlx\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:20 crc 
kubenswrapper[4772]: I1122 11:03:20.561301 4772 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/45d574ce-36bc-461c-a85a-738b71392ed6-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.561311 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vq4mt\" (UniqueName: \"kubernetes.io/projected/5fbd4e9d-9635-462d-abba-763daf0da369-kube-api-access-vq4mt\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.561322 4772 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c4828519-a6ad-4851-b9c2-134a12f373ac-kolla-config\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.585880 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45d574ce-36bc-461c-a85a-738b71392ed6-kube-api-access-n45wg" (OuterVolumeSpecName: "kube-api-access-n45wg") pod "45d574ce-36bc-461c-a85a-738b71392ed6" (UID: "45d574ce-36bc-461c-a85a-738b71392ed6"). InnerVolumeSpecName "kube-api-access-n45wg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.586023 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c994b4f-e182-481a-a3ba-17dc9656c70c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1c994b4f-e182-481a-a3ba-17dc9656c70c" (UID: "1c994b4f-e182-481a-a3ba-17dc9656c70c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.593383 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45d574ce-36bc-461c-a85a-738b71392ed6-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "45d574ce-36bc-461c-a85a-738b71392ed6" (UID: "45d574ce-36bc-461c-a85a-738b71392ed6"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.585668 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45d574ce-36bc-461c-a85a-738b71392ed6-scripts" (OuterVolumeSpecName: "scripts") pod "45d574ce-36bc-461c-a85a-738b71392ed6" (UID: "45d574ce-36bc-461c-a85a-738b71392ed6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.596303 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4828519-a6ad-4851-b9c2-134a12f373ac-secrets" (OuterVolumeSpecName: "secrets") pod "c4828519-a6ad-4851-b9c2-134a12f373ac" (UID: "c4828519-a6ad-4851-b9c2-134a12f373ac"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.596923 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/027dc32b-06dd-45bf-9aad-8e0c92b44a2b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "027dc32b-06dd-45bf-9aad-8e0c92b44a2b" (UID: "027dc32b-06dd-45bf-9aad-8e0c92b44a2b"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.597116 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13c1f859-42ed-484f-88cb-5349a7b64dda-kube-api-access-qxlr4" (OuterVolumeSpecName: "kube-api-access-qxlr4") pod "13c1f859-42ed-484f-88cb-5349a7b64dda" (UID: "13c1f859-42ed-484f-88cb-5349a7b64dda"). InnerVolumeSpecName "kube-api-access-qxlr4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.597189 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4828519-a6ad-4851-b9c2-134a12f373ac-kube-api-access-2265k" (OuterVolumeSpecName: "kube-api-access-2265k") pod "c4828519-a6ad-4851-b9c2-134a12f373ac" (UID: "c4828519-a6ad-4851-b9c2-134a12f373ac"). InnerVolumeSpecName "kube-api-access-2265k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.617827 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "mysql-db") pod "c4828519-a6ad-4851-b9c2-134a12f373ac" (UID: "c4828519-a6ad-4851-b9c2-134a12f373ac"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.662763 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxlr4\" (UniqueName: \"kubernetes.io/projected/13c1f859-42ed-484f-88cb-5349a7b64dda-kube-api-access-qxlr4\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.662878 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n45wg\" (UniqueName: \"kubernetes.io/projected/45d574ce-36bc-461c-a85a-738b71392ed6-kube-api-access-n45wg\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.662945 4772 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.662998 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45d574ce-36bc-461c-a85a-738b71392ed6-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.663062 4772 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1c994b4f-e182-481a-a3ba-17dc9656c70c-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.663117 4772 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/c4828519-a6ad-4851-b9c2-134a12f373ac-secrets\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.663170 4772 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/45d574ce-36bc-461c-a85a-738b71392ed6-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.663250 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2265k\" (UniqueName: \"kubernetes.io/projected/c4828519-a6ad-4851-b9c2-134a12f373ac-kube-api-access-2265k\") on node \"crc\" DevicePath \"\"" Nov 22 
11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.663303 4772 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/027dc32b-06dd-45bf-9aad-8e0c92b44a2b-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.761229 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ce40448-07b1-492e-bb7c-48aaf2bb3ce9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4ce40448-07b1-492e-bb7c-48aaf2bb3ce9" (UID: "4ce40448-07b1-492e-bb7c-48aaf2bb3ce9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.764866 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ce40448-07b1-492e-bb7c-48aaf2bb3ce9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.768345 4772 generic.go:334] "Generic (PLEG): container finished" podID="ddb4a824-3a8a-4287-b206-94832099e15b" containerID="0b9ac1c20deb88561ffa4d4aa22d3d7b705dfba4a955494934d74483e7d263d6" exitCode=2 Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.768434 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ddb4a824-3a8a-4287-b206-94832099e15b","Type":"ContainerDied","Data":"0b9ac1c20deb88561ffa4d4aa22d3d7b705dfba4a955494934d74483e7d263d6"} Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.789143 4772 generic.go:334] "Generic (PLEG): container finished" podID="14ed2945-ef18-49de-9c18-679e011d3df5" containerID="dd542af28bce5c278e708a047b0757d9812c5e11e9dc0dff83889ad014c4b497" exitCode=0 Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.789675 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"14ed2945-ef18-49de-9c18-679e011d3df5","Type":"ContainerDied","Data":"dd542af28bce5c278e708a047b0757d9812c5e11e9dc0dff83889ad014c4b497"} Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.793945 4772 generic.go:334] "Generic (PLEG): container finished" podID="93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50" containerID="59510dd7b9579831eb08695d39ab17e9efd8ec1346988dbb2e6af8437f7ad097" exitCode=0 Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.794029 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50","Type":"ContainerDied","Data":"59510dd7b9579831eb08695d39ab17e9efd8ec1346988dbb2e6af8437f7ad097"} Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.802835 4772 generic.go:334] "Generic (PLEG): container finished" podID="b8b92f55-36d8-4358-9b57-734762f225c4" containerID="241a8eaefd8f667894261b22a62dc20eac70a2ffa0b3309654a9b9bcc88514de" exitCode=0 Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.802947 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b8b92f55-36d8-4358-9b57-734762f225c4","Type":"ContainerDied","Data":"241a8eaefd8f667894261b22a62dc20eac70a2ffa0b3309654a9b9bcc88514de"} Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.805495 4772 generic.go:334] "Generic (PLEG): container finished" podID="33b26633-94ac-4439-b1ab-ab225d2e562b" containerID="08c0c6e64c972cfd07e310e4abfe3d9a7361c0e9c7848ec91a7d29025e8bfaf9" exitCode=0 Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 
11:03:20.805600 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"33b26633-94ac-4439-b1ab-ab225d2e562b","Type":"ContainerDied","Data":"08c0c6e64c972cfd07e310e4abfe3d9a7361c0e9c7848ec91a7d29025e8bfaf9"} Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.809128 4772 generic.go:334] "Generic (PLEG): container finished" podID="ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7" containerID="c3ff3bf5f8075eb82c99bb11cb292366102815d9d517ae626765713d727efaff" exitCode=0 Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.809193 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5cd786c776-rmj8k" event={"ID":"ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7","Type":"ContainerDied","Data":"c3ff3bf5f8075eb82c99bb11cb292366102815d9d517ae626765713d727efaff"} Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.822764 4772 generic.go:334] "Generic (PLEG): container finished" podID="2c6ce3fb-4529-4856-a326-bb0e9ea0ae40" containerID="eb806120517e41fc28276787644b8c800bdb795e19ae53859d7279e00db21cb7" exitCode=0 Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.822802 4772 generic.go:334] "Generic (PLEG): container finished" podID="2c6ce3fb-4529-4856-a326-bb0e9ea0ae40" containerID="5e0e21415f001e21917dd3a32819ab9cb5db83f0c62abce63b6c361f7db09c0d" exitCode=2 Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.822902 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-57b6cb6667-w95sj" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.823618 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.823881 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2c6ce3fb-4529-4856-a326-bb0e9ea0ae40","Type":"ContainerDied","Data":"eb806120517e41fc28276787644b8c800bdb795e19ae53859d7279e00db21cb7"} Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.824244 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.824263 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2c6ce3fb-4529-4856-a326-bb0e9ea0ae40","Type":"ContainerDied","Data":"5e0e21415f001e21917dd3a32819ab9cb5db83f0c62abce63b6c361f7db09c0d"} Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.824206 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-7f4sv" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.824286 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.824313 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.824828 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-79459755b6-xsvzh" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.824868 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.824898 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.824927 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5f797948bc-dk5pr" Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.984644 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Nov 22 11:03:20 crc kubenswrapper[4772]: I1122 11:03:20.984868 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/memcached-0" podUID="3fbd9ebd-2c62-4336-9946-792e4b3c83db" containerName="memcached" containerID="cri-o://98e7f008c22226185b9fc4da3d836b571692bf43deee9c747a6c68730a3601a5" gracePeriod=30 Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.059860 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-hqp6h"] Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.089819 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-hqp6h"] Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.136470 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-xnbhf"] Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.174943 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-xnbhf"] Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.228126 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone7a1c-account-delete-b8xp2"] Nov 22 11:03:21 crc kubenswrapper[4772]: E1122 11:03:21.228623 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13c1f859-42ed-484f-88cb-5349a7b64dda" containerName="proxy-httpd" Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.228644 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="13c1f859-42ed-484f-88cb-5349a7b64dda" containerName="proxy-httpd" Nov 22 11:03:21 crc kubenswrapper[4772]: E1122 11:03:21.228659 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45d574ce-36bc-461c-a85a-738b71392ed6" containerName="probe" Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.228666 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="45d574ce-36bc-461c-a85a-738b71392ed6" containerName="probe" Nov 22 11:03:21 crc kubenswrapper[4772]: E1122 11:03:21.228680 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c994b4f-e182-481a-a3ba-17dc9656c70c" containerName="barbican-keystone-listener-log" Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.228689 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c994b4f-e182-481a-a3ba-17dc9656c70c" containerName="barbican-keystone-listener-log" Nov 22 11:03:21 crc kubenswrapper[4772]: E1122 11:03:21.228702 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fbd4e9d-9635-462d-abba-763daf0da369" containerName="nova-cell1-novncproxy-novncproxy" Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.228710 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fbd4e9d-9635-462d-abba-763daf0da369" containerName="nova-cell1-novncproxy-novncproxy" Nov 22 11:03:21 crc kubenswrapper[4772]: E1122 11:03:21.228724 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="464d950a-e1bb-4efb-afdf-37b97a62a42c" containerName="dnsmasq-dns" Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.228731 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="464d950a-e1bb-4efb-afdf-37b97a62a42c" containerName="dnsmasq-dns" 
Nov 22 11:03:21 crc kubenswrapper[4772]: E1122 11:03:21.228747 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ce40448-07b1-492e-bb7c-48aaf2bb3ce9" containerName="ovsdbserver-nb" Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.228754 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ce40448-07b1-492e-bb7c-48aaf2bb3ce9" containerName="ovsdbserver-nb" Nov 22 11:03:21 crc kubenswrapper[4772]: E1122 11:03:21.228764 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ce40448-07b1-492e-bb7c-48aaf2bb3ce9" containerName="openstack-network-exporter" Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.228770 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ce40448-07b1-492e-bb7c-48aaf2bb3ce9" containerName="openstack-network-exporter" Nov 22 11:03:21 crc kubenswrapper[4772]: E1122 11:03:21.228788 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f8983d0-bcff-45de-b158-351e12a0b0f3" containerName="openstack-network-exporter" Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.228797 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f8983d0-bcff-45de-b158-351e12a0b0f3" containerName="openstack-network-exporter" Nov 22 11:03:21 crc kubenswrapper[4772]: E1122 11:03:21.228822 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62770fd6-1000-4477-ac95-7a4eaa489732" containerName="openstack-network-exporter" Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.228831 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="62770fd6-1000-4477-ac95-7a4eaa489732" containerName="openstack-network-exporter" Nov 22 11:03:21 crc kubenswrapper[4772]: E1122 11:03:21.228844 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45d574ce-36bc-461c-a85a-738b71392ed6" containerName="cinder-scheduler" Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.228852 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="45d574ce-36bc-461c-a85a-738b71392ed6" containerName="cinder-scheduler" Nov 22 11:03:21 crc kubenswrapper[4772]: E1122 11:03:21.228863 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4828519-a6ad-4851-b9c2-134a12f373ac" containerName="galera" Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.228871 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4828519-a6ad-4851-b9c2-134a12f373ac" containerName="galera" Nov 22 11:03:21 crc kubenswrapper[4772]: E1122 11:03:21.228881 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="464d950a-e1bb-4efb-afdf-37b97a62a42c" containerName="init" Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.228888 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="464d950a-e1bb-4efb-afdf-37b97a62a42c" containerName="init" Nov 22 11:03:21 crc kubenswrapper[4772]: E1122 11:03:21.228907 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25144c09-6edb-4bd3-89b2-99db486e733b" containerName="ovn-controller" Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.228914 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="25144c09-6edb-4bd3-89b2-99db486e733b" containerName="ovn-controller" Nov 22 11:03:21 crc kubenswrapper[4772]: E1122 11:03:21.228928 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62770fd6-1000-4477-ac95-7a4eaa489732" containerName="ovsdbserver-sb" Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.228935 4772 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="62770fd6-1000-4477-ac95-7a4eaa489732" containerName="ovsdbserver-sb" Nov 22 11:03:21 crc kubenswrapper[4772]: E1122 11:03:21.228948 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c994b4f-e182-481a-a3ba-17dc9656c70c" containerName="barbican-keystone-listener" Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.228954 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c994b4f-e182-481a-a3ba-17dc9656c70c" containerName="barbican-keystone-listener" Nov 22 11:03:21 crc kubenswrapper[4772]: E1122 11:03:21.228969 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="027dc32b-06dd-45bf-9aad-8e0c92b44a2b" containerName="barbican-worker-log" Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.228977 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="027dc32b-06dd-45bf-9aad-8e0c92b44a2b" containerName="barbican-worker-log" Nov 22 11:03:21 crc kubenswrapper[4772]: E1122 11:03:21.229009 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13c1f859-42ed-484f-88cb-5349a7b64dda" containerName="proxy-server" Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.229016 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="13c1f859-42ed-484f-88cb-5349a7b64dda" containerName="proxy-server" Nov 22 11:03:21 crc kubenswrapper[4772]: E1122 11:03:21.229028 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="027dc32b-06dd-45bf-9aad-8e0c92b44a2b" containerName="barbican-worker" Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.229035 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="027dc32b-06dd-45bf-9aad-8e0c92b44a2b" containerName="barbican-worker" Nov 22 11:03:21 crc kubenswrapper[4772]: E1122 11:03:21.229061 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4828519-a6ad-4851-b9c2-134a12f373ac" containerName="mysql-bootstrap" Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.229069 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4828519-a6ad-4851-b9c2-134a12f373ac" containerName="mysql-bootstrap" Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.229327 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="45d574ce-36bc-461c-a85a-738b71392ed6" containerName="probe" Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.229346 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="62770fd6-1000-4477-ac95-7a4eaa489732" containerName="ovsdbserver-sb" Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.229355 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4828519-a6ad-4851-b9c2-134a12f373ac" containerName="galera" Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.229365 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ce40448-07b1-492e-bb7c-48aaf2bb3ce9" containerName="ovsdbserver-nb" Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.229376 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="62770fd6-1000-4477-ac95-7a4eaa489732" containerName="openstack-network-exporter" Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.229384 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="45d574ce-36bc-461c-a85a-738b71392ed6" containerName="cinder-scheduler" Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.229396 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="25144c09-6edb-4bd3-89b2-99db486e733b" containerName="ovn-controller" Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 
11:03:21.229411 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="13c1f859-42ed-484f-88cb-5349a7b64dda" containerName="proxy-server" Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.229426 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="027dc32b-06dd-45bf-9aad-8e0c92b44a2b" containerName="barbican-worker-log" Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.229436 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ce40448-07b1-492e-bb7c-48aaf2bb3ce9" containerName="openstack-network-exporter" Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.229445 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c994b4f-e182-481a-a3ba-17dc9656c70c" containerName="barbican-keystone-listener-log" Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.229462 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="464d950a-e1bb-4efb-afdf-37b97a62a42c" containerName="dnsmasq-dns" Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.229470 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="027dc32b-06dd-45bf-9aad-8e0c92b44a2b" containerName="barbican-worker" Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.229479 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c994b4f-e182-481a-a3ba-17dc9656c70c" containerName="barbican-keystone-listener" Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.229488 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="13c1f859-42ed-484f-88cb-5349a7b64dda" containerName="proxy-httpd" Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.229499 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f8983d0-bcff-45de-b158-351e12a0b0f3" containerName="openstack-network-exporter" Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.229513 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fbd4e9d-9635-462d-abba-763daf0da369" containerName="nova-cell1-novncproxy-novncproxy" Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.230264 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone7a1c-account-delete-b8xp2" Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.239401 4772 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.254246 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-cron-29396821-s9nxs"] Nov 22 11:03:21 crc kubenswrapper[4772]: E1122 11:03:21.264208 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="75123019921bf278dc20f04bfe0c16e7ca301815e19432fc55e95e02d9391c0e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 22 11:03:21 crc kubenswrapper[4772]: E1122 11:03:21.265915 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="75123019921bf278dc20f04bfe0c16e7ca301815e19432fc55e95e02d9391c0e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.286104 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-cfdd58ff7-mgd8m"] Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.286351 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/keystone-cfdd58ff7-mgd8m" podUID="020f49e7-c73f-460c-a068-75051e73cf90" containerName="keystone-api" containerID="cri-o://8e6b764b39cbb94f171e4a3905646fc7d88ee4093584bde74bbc1f1676a19df8" gracePeriod=30 Nov 22 11:03:21 crc kubenswrapper[4772]: E1122 11:03:21.288134 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="75123019921bf278dc20f04bfe0c16e7ca301815e19432fc55e95e02d9391c0e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 22 11:03:21 crc kubenswrapper[4772]: E1122 11:03:21.288182 4772 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="a0a147b4-4445-4f7b-b22f-97db02340306" containerName="nova-cell1-conductor-conductor" Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.298395 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdqvq\" (UniqueName: \"kubernetes.io/projected/3d9c0ba3-ac92-4821-acb3-fd40e750bdae-kube-api-access-xdqvq\") pod \"keystone7a1c-account-delete-b8xp2\" (UID: \"3d9c0ba3-ac92-4821-acb3-fd40e750bdae\") " pod="openstack/keystone7a1c-account-delete-b8xp2" Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.298530 4772 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.336363 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.349540 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-cron-29396821-s9nxs"] Nov 22 
11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.359448 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone7a1c-account-delete-b8xp2"] Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.377125 4772 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.388454 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6876658948-bzr5z" podUID="86139aa9-cd30-4d97-833e-a26562aebf92" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.165:9311/healthcheck\": read tcp 10.217.0.2:33140->10.217.0.165:9311: read: connection reset by peer" Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.388700 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6876658948-bzr5z" podUID="86139aa9-cd30-4d97-833e-a26562aebf92" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.165:9311/healthcheck\": read tcp 10.217.0.2:33148->10.217.0.165:9311: read: connection reset by peer" Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.388784 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-fphqf"] Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.399037 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-fphqf"] Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.401669 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdqvq\" (UniqueName: \"kubernetes.io/projected/3d9c0ba3-ac92-4821-acb3-fd40e750bdae-kube-api-access-xdqvq\") pod \"keystone7a1c-account-delete-b8xp2\" (UID: \"3d9c0ba3-ac92-4821-acb3-fd40e750bdae\") " pod="openstack/keystone7a1c-account-delete-b8xp2" Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.402586 4772 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:21 crc kubenswrapper[4772]: E1122 11:03:21.409935 4772 projected.go:194] Error preparing data for projected volume kube-api-access-xdqvq for pod openstack/keystone7a1c-account-delete-b8xp2: failed to fetch token: serviceaccounts "galera-openstack" not found Nov 22 11:03:21 crc kubenswrapper[4772]: E1122 11:03:21.410113 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3d9c0ba3-ac92-4821-acb3-fd40e750bdae-kube-api-access-xdqvq podName:3d9c0ba3-ac92-4821-acb3-fd40e750bdae nodeName:}" failed. No retries permitted until 2025-11-22 11:03:21.910090634 +0000 UTC m=+1522.149535139 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-xdqvq" (UniqueName: "kubernetes.io/projected/3d9c0ba3-ac92-4821-acb3-fd40e750bdae-kube-api-access-xdqvq") pod "keystone7a1c-account-delete-b8xp2" (UID: "3d9c0ba3-ac92-4821-acb3-fd40e750bdae") : failed to fetch token: serviceaccounts "galera-openstack" not found Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.444867 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4e681ba-088a-41b1-9b89-8bac928038e5-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "a4e681ba-088a-41b1-9b89-8bac928038e5" (UID: "a4e681ba-088a-41b1-9b89-8bac928038e5"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.471443 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a0e82a7-4323-4948-9e3c-3dc8a9df0c87" path="/var/lib/kubelet/pods/0a0e82a7-4323-4948-9e3c-3dc8a9df0c87/volumes" Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.479509 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="107a304c-250a-4df8-8a5e-5ffc7449cdc6" path="/var/lib/kubelet/pods/107a304c-250a-4df8-8a5e-5ffc7449cdc6/volumes" Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.484083 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f8983d0-bcff-45de-b158-351e12a0b0f3" path="/var/lib/kubelet/pods/6f8983d0-bcff-45de-b158-351e12a0b0f3/volumes" Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.486635 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="736d298e-3dc7-460e-a12e-bb29c4364e85" path="/var/lib/kubelet/pods/736d298e-3dc7-460e-a12e-bb29c4364e85/volumes" Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.497800 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="866e2068-dd82-486f-b191-c39d54a86533" path="/var/lib/kubelet/pods/866e2068-dd82-486f-b191-c39d54a86533/volumes" Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.515296 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4828519-a6ad-4851-b9c2-134a12f373ac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c4828519-a6ad-4851-b9c2-134a12f373ac" (UID: "c4828519-a6ad-4851-b9c2-134a12f373ac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.517466 4772 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a4e681ba-088a-41b1-9b89-8bac928038e5-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.519264 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae389e1d-b9a4-4ba1-a746-023c65c68e15" path="/var/lib/kubelet/pods/ae389e1d-b9a4-4ba1-a746-023c65c68e15/volumes" Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.520654 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b998bcc0-7358-4f93-9584-b1b99829108f" path="/var/lib/kubelet/pods/b998bcc0-7358-4f93-9584-b1b99829108f/volumes" Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.524893 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbef192e-2ee8-4ff4-aad2-4eb3f61e39e2" path="/var/lib/kubelet/pods/bbef192e-2ee8-4ff4-aad2-4eb3f61e39e2/volumes" Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.525757 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9e8aace-e7f4-4ca7-902b-c20672965240" path="/var/lib/kubelet/pods/f9e8aace-e7f4-4ca7-902b-c20672965240/volumes" Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.527953 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="17f0d5ca-99e5-47c6-9fdf-1932956cff3e" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.163:8776/healthcheck\": dial tcp 10.217.0.163:8776: connect: connection refused" Nov 22 11:03:21 crc kubenswrapper[4772]: E1122 11:03:21.566835 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container 
is not created or running: checking if PID of 08c0c6e64c972cfd07e310e4abfe3d9a7361c0e9c7848ec91a7d29025e8bfaf9 is running failed: container process not found" containerID="08c0c6e64c972cfd07e310e4abfe3d9a7361c0e9c7848ec91a7d29025e8bfaf9" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 22 11:03:21 crc kubenswrapper[4772]: E1122 11:03:21.567468 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 08c0c6e64c972cfd07e310e4abfe3d9a7361c0e9c7848ec91a7d29025e8bfaf9 is running failed: container process not found" containerID="08c0c6e64c972cfd07e310e4abfe3d9a7361c0e9c7848ec91a7d29025e8bfaf9" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 22 11:03:21 crc kubenswrapper[4772]: E1122 11:03:21.569512 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 08c0c6e64c972cfd07e310e4abfe3d9a7361c0e9c7848ec91a7d29025e8bfaf9 is running failed: container process not found" containerID="08c0c6e64c972cfd07e310e4abfe3d9a7361c0e9c7848ec91a7d29025e8bfaf9" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 22 11:03:21 crc kubenswrapper[4772]: E1122 11:03:21.569566 4772 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 08c0c6e64c972cfd07e310e4abfe3d9a7361c0e9c7848ec91a7d29025e8bfaf9 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="33b26633-94ac-4439-b1ab-ab225d2e562b" containerName="nova-cell0-conductor-conductor" Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.617815 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fbd4e9d-9635-462d-abba-763daf0da369-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5fbd4e9d-9635-462d-abba-763daf0da369" (UID: "5fbd4e9d-9635-462d-abba-763daf0da369"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.618957 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fbd4e9d-9635-462d-abba-763daf0da369-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.618984 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4828519-a6ad-4851-b9c2-134a12f373ac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.745937 4772 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.838455 4772 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.889096 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c994b4f-e182-481a-a3ba-17dc9656c70c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1c994b4f-e182-481a-a3ba-17dc9656c70c" (UID: "1c994b4f-e182-481a-a3ba-17dc9656c70c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.892654 4772 generic.go:334] "Generic (PLEG): container finished" podID="56fb678a-814f-4328-8b49-9226512bf10e" containerID="87ddd492904d37b209c29a1ac6b15eb1b0ea478fe5009f7ccf2abbf8a98a52e8" exitCode=0 Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.903249 4772 generic.go:334] "Generic (PLEG): container finished" podID="17f0d5ca-99e5-47c6-9fdf-1932956cff3e" containerID="a687188010e7d1b6b6e71ce02eb4abc2bad75aaad585c817273d9a77d8fbf014" exitCode=0 Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.937580 4772 generic.go:334] "Generic (PLEG): container finished" podID="2c6ce3fb-4529-4856-a326-bb0e9ea0ae40" containerID="10ca89615b0daee19e2ff35e6822a5cb0d8fad7528fc2fae6ffe6155cd72db74" exitCode=0 Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.939535 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdqvq\" (UniqueName: \"kubernetes.io/projected/3d9c0ba3-ac92-4821-acb3-fd40e750bdae-kube-api-access-xdqvq\") pod \"keystone7a1c-account-delete-b8xp2\" (UID: \"3d9c0ba3-ac92-4821-acb3-fd40e750bdae\") " pod="openstack/keystone7a1c-account-delete-b8xp2" Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.939679 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c994b4f-e182-481a-a3ba-17dc9656c70c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.941679 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="c9a852d5-2258-45b4-9076-95740059eecd" containerName="galera" containerID="cri-o://fb76271eaf5dc62949aa9d4033c28c01d565faf421a33ec480046d5a9c4ab96d" gracePeriod=30 Nov 22 11:03:21 crc kubenswrapper[4772]: E1122 11:03:21.943132 4772 projected.go:194] Error preparing data for projected volume kube-api-access-xdqvq for pod openstack/keystone7a1c-account-delete-b8xp2: failed to fetch token: serviceaccounts "galera-openstack" not found Nov 22 11:03:21 crc kubenswrapper[4772]: E1122 11:03:21.943204 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3d9c0ba3-ac92-4821-acb3-fd40e750bdae-kube-api-access-xdqvq podName:3d9c0ba3-ac92-4821-acb3-fd40e750bdae nodeName:}" failed. No retries permitted until 2025-11-22 11:03:22.943183977 +0000 UTC m=+1523.182628471 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-xdqvq" (UniqueName: "kubernetes.io/projected/3d9c0ba3-ac92-4821-acb3-fd40e750bdae-kube-api-access-xdqvq") pod "keystone7a1c-account-delete-b8xp2" (UID: "3d9c0ba3-ac92-4821-acb3-fd40e750bdae") : failed to fetch token: serviceaccounts "galera-openstack" not found Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.960340 4772 generic.go:334] "Generic (PLEG): container finished" podID="30007403-085b-4874-88b7-8b27426fd4f7" containerID="d2ca6179053090103083ab2df5c267a331e4c9cb421a6856a8ede33c2154df6c" exitCode=1 Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.972115 4772 generic.go:334] "Generic (PLEG): container finished" podID="941a38a8-56e0-4061-8891-0cd3815477a4" containerID="13ecba734c345ef3d469fb45e441d0e233378d129f7ea9238f342cb8ddae536e" exitCode=1 Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.986305 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4e681ba-088a-41b1-9b89-8bac928038e5-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "a4e681ba-088a-41b1-9b89-8bac928038e5" (UID: "a4e681ba-088a-41b1-9b89-8bac928038e5"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.987699 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13c1f859-42ed-484f-88cb-5349a7b64dda-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "13c1f859-42ed-484f-88cb-5349a7b64dda" (UID: "13c1f859-42ed-484f-88cb-5349a7b64dda"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:21 crc kubenswrapper[4772]: I1122 11:03:21.998928 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/027dc32b-06dd-45bf-9aad-8e0c92b44a2b-config-data" (OuterVolumeSpecName: "config-data") pod "027dc32b-06dd-45bf-9aad-8e0c92b44a2b" (UID: "027dc32b-06dd-45bf-9aad-8e0c92b44a2b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:22 crc kubenswrapper[4772]: I1122 11:03:22.016809 4772 generic.go:334] "Generic (PLEG): container finished" podID="9aeb3608-353b-4b44-8797-46affdc587a7" containerID="3d173607bb5dc318429a013351d1676c6d42fb4927a52f88588e3167331f4341" exitCode=1 Nov 22 11:03:22 crc kubenswrapper[4772]: I1122 11:03:22.038885 4772 generic.go:334] "Generic (PLEG): container finished" podID="706b8e5a-87b8-429e-aea7-e7e5f161182f" containerID="b77b969953fe9c8048b8ff277fe2329156f5363070399e9e09c0f7223bcb8d6e" exitCode=1 Nov 22 11:03:22 crc kubenswrapper[4772]: I1122 11:03:22.041574 4772 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a4e681ba-088a-41b1-9b89-8bac928038e5-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:22 crc kubenswrapper[4772]: I1122 11:03:22.041603 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/027dc32b-06dd-45bf-9aad-8e0c92b44a2b-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:22 crc kubenswrapper[4772]: I1122 11:03:22.041615 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13c1f859-42ed-484f-88cb-5349a7b64dda-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:22 crc kubenswrapper[4772]: I1122 11:03:22.082233 4772 generic.go:334] "Generic (PLEG): container finished" podID="a0a147b4-4445-4f7b-b22f-97db02340306" containerID="75123019921bf278dc20f04bfe0c16e7ca301815e19432fc55e95e02d9391c0e" exitCode=0 Nov 22 11:03:22 crc kubenswrapper[4772]: I1122 11:03:22.104552 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/027dc32b-06dd-45bf-9aad-8e0c92b44a2b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "027dc32b-06dd-45bf-9aad-8e0c92b44a2b" (UID: "027dc32b-06dd-45bf-9aad-8e0c92b44a2b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:22 crc kubenswrapper[4772]: I1122 11:03:22.120677 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fbd4e9d-9635-462d-abba-763daf0da369-config-data" (OuterVolumeSpecName: "config-data") pod "5fbd4e9d-9635-462d-abba-763daf0da369" (UID: "5fbd4e9d-9635-462d-abba-763daf0da369"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:22 crc kubenswrapper[4772]: I1122 11:03:22.140660 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ce40448-07b1-492e-bb7c-48aaf2bb3ce9-ovsdbserver-nb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-nb-tls-certs") pod "4ce40448-07b1-492e-bb7c-48aaf2bb3ce9" (UID: "4ce40448-07b1-492e-bb7c-48aaf2bb3ce9"). InnerVolumeSpecName "ovsdbserver-nb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:22 crc kubenswrapper[4772]: I1122 11:03:22.143731 4772 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ce40448-07b1-492e-bb7c-48aaf2bb3ce9-ovsdbserver-nb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:22 crc kubenswrapper[4772]: I1122 11:03:22.143757 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fbd4e9d-9635-462d-abba-763daf0da369-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:22 crc kubenswrapper[4772]: I1122 11:03:22.143766 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/027dc32b-06dd-45bf-9aad-8e0c92b44a2b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:22 crc kubenswrapper[4772]: I1122 11:03:22.147560 4772 generic.go:334] "Generic (PLEG): container finished" podID="e723031c-0772-49f7-ba16-f635ddd53dcc" containerID="e28b401e3c853589d7a264d1dc93faf87588bdab270334394035a96b16630d72" exitCode=1 Nov 22 11:03:22 crc kubenswrapper[4772]: I1122 11:03:22.157705 4772 generic.go:334] "Generic (PLEG): container finished" podID="cb612c10-4436-4c79-b990-cbc7b403eed5" containerID="fdf7bdb90ea53df33a248c12c99b072c760a9ac7d7af309e4e196c43accf6f02" exitCode=1 Nov 22 11:03:22 crc kubenswrapper[4772]: I1122 11:03:22.158496 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62770fd6-1000-4477-ac95-7a4eaa489732-ovsdbserver-sb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-sb-tls-certs") pod "62770fd6-1000-4477-ac95-7a4eaa489732" (UID: "62770fd6-1000-4477-ac95-7a4eaa489732"). InnerVolumeSpecName "ovsdbserver-sb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:22 crc kubenswrapper[4772]: I1122 11:03:22.161456 4772 generic.go:334] "Generic (PLEG): container finished" podID="7d122410-121a-47cd-9465-e5c6f85cf2b2" containerID="9b4a372c6969edd5b4fcbcfc5857268afac7433a5a8b32510bbb740f15100717" exitCode=1 Nov 22 11:03:22 crc kubenswrapper[4772]: I1122 11:03:22.175278 4772 generic.go:334] "Generic (PLEG): container finished" podID="86139aa9-cd30-4d97-833e-a26562aebf92" containerID="a06360ee3022a654f156c3386f22cd5fd488251afc8543f8c37cbc65fc693984" exitCode=0 Nov 22 11:03:22 crc kubenswrapper[4772]: I1122 11:03:22.209359 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4e681ba-088a-41b1-9b89-8bac928038e5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a4e681ba-088a-41b1-9b89-8bac928038e5" (UID: "a4e681ba-088a-41b1-9b89-8bac928038e5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:22 crc kubenswrapper[4772]: I1122 11:03:22.244279 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45d574ce-36bc-461c-a85a-738b71392ed6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "45d574ce-36bc-461c-a85a-738b71392ed6" (UID: "45d574ce-36bc-461c-a85a-738b71392ed6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:22 crc kubenswrapper[4772]: I1122 11:03:22.245499 4772 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/62770fd6-1000-4477-ac95-7a4eaa489732-ovsdbserver-sb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:22 crc kubenswrapper[4772]: I1122 11:03:22.245514 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4e681ba-088a-41b1-9b89-8bac928038e5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:22 crc kubenswrapper[4772]: I1122 11:03:22.245523 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45d574ce-36bc-461c-a85a-738b71392ed6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:22 crc kubenswrapper[4772]: I1122 11:03:22.297174 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62770fd6-1000-4477-ac95-7a4eaa489732-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "62770fd6-1000-4477-ac95-7a4eaa489732" (UID: "62770fd6-1000-4477-ac95-7a4eaa489732"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:22 crc kubenswrapper[4772]: I1122 11:03:22.328585 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13c1f859-42ed-484f-88cb-5349a7b64dda-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "13c1f859-42ed-484f-88cb-5349a7b64dda" (UID: "13c1f859-42ed-484f-88cb-5349a7b64dda"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:22 crc kubenswrapper[4772]: I1122 11:03:22.331252 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fbd4e9d-9635-462d-abba-763daf0da369-nova-novncproxy-tls-certs" (OuterVolumeSpecName: "nova-novncproxy-tls-certs") pod "5fbd4e9d-9635-462d-abba-763daf0da369" (UID: "5fbd4e9d-9635-462d-abba-763daf0da369"). InnerVolumeSpecName "nova-novncproxy-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:22 crc kubenswrapper[4772]: I1122 11:03:22.344797 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fbd4e9d-9635-462d-abba-763daf0da369-vencrypt-tls-certs" (OuterVolumeSpecName: "vencrypt-tls-certs") pod "5fbd4e9d-9635-462d-abba-763daf0da369" (UID: "5fbd4e9d-9635-462d-abba-763daf0da369"). InnerVolumeSpecName "vencrypt-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:22 crc kubenswrapper[4772]: I1122 11:03:22.347672 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62770fd6-1000-4477-ac95-7a4eaa489732-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:22 crc kubenswrapper[4772]: I1122 11:03:22.347828 4772 reconciler_common.go:293] "Volume detached for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/5fbd4e9d-9635-462d-abba-763daf0da369-vencrypt-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:22 crc kubenswrapper[4772]: I1122 11:03:22.347846 4772 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/13c1f859-42ed-484f-88cb-5349a7b64dda-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:22 crc kubenswrapper[4772]: I1122 11:03:22.347857 4772 reconciler_common.go:293] "Volume detached for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/5fbd4e9d-9635-462d-abba-763daf0da369-nova-novncproxy-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:22 crc kubenswrapper[4772]: I1122 11:03:22.348986 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13c1f859-42ed-484f-88cb-5349a7b64dda-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "13c1f859-42ed-484f-88cb-5349a7b64dda" (UID: "13c1f859-42ed-484f-88cb-5349a7b64dda"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:22 crc kubenswrapper[4772]: I1122 11:03:22.367290 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c994b4f-e182-481a-a3ba-17dc9656c70c-config-data" (OuterVolumeSpecName: "config-data") pod "1c994b4f-e182-481a-a3ba-17dc9656c70c" (UID: "1c994b4f-e182-481a-a3ba-17dc9656c70c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:22 crc kubenswrapper[4772]: I1122 11:03:22.395130 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ce40448-07b1-492e-bb7c-48aaf2bb3ce9-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "4ce40448-07b1-492e-bb7c-48aaf2bb3ce9" (UID: "4ce40448-07b1-492e-bb7c-48aaf2bb3ce9"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:22 crc kubenswrapper[4772]: I1122 11:03:22.396681 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4828519-a6ad-4851-b9c2-134a12f373ac-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "c4828519-a6ad-4851-b9c2-134a12f373ac" (UID: "c4828519-a6ad-4851-b9c2-134a12f373ac"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:22 crc kubenswrapper[4772]: I1122 11:03:22.398908 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45d574ce-36bc-461c-a85a-738b71392ed6-config-data" (OuterVolumeSpecName: "config-data") pod "45d574ce-36bc-461c-a85a-738b71392ed6" (UID: "45d574ce-36bc-461c-a85a-738b71392ed6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:22 crc kubenswrapper[4772]: I1122 11:03:22.399561 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13c1f859-42ed-484f-88cb-5349a7b64dda-config-data" (OuterVolumeSpecName: "config-data") pod "13c1f859-42ed-484f-88cb-5349a7b64dda" (UID: "13c1f859-42ed-484f-88cb-5349a7b64dda"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:22 crc kubenswrapper[4772]: I1122 11:03:22.446296 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62770fd6-1000-4477-ac95-7a4eaa489732-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "62770fd6-1000-4477-ac95-7a4eaa489732" (UID: "62770fd6-1000-4477-ac95-7a4eaa489732"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:22 crc kubenswrapper[4772]: I1122 11:03:22.452817 4772 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/13c1f859-42ed-484f-88cb-5349a7b64dda-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:22 crc kubenswrapper[4772]: I1122 11:03:22.452850 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13c1f859-42ed-484f-88cb-5349a7b64dda-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:22 crc kubenswrapper[4772]: I1122 11:03:22.452861 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c994b4f-e182-481a-a3ba-17dc9656c70c-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:22 crc kubenswrapper[4772]: I1122 11:03:22.452873 4772 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4828519-a6ad-4851-b9c2-134a12f373ac-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:22 crc kubenswrapper[4772]: I1122 11:03:22.452884 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45d574ce-36bc-461c-a85a-738b71392ed6-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:22 crc kubenswrapper[4772]: I1122 11:03:22.452895 4772 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/62770fd6-1000-4477-ac95-7a4eaa489732-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:22 crc kubenswrapper[4772]: I1122 11:03:22.452908 4772 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ce40448-07b1-492e-bb7c-48aaf2bb3ce9-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:22 crc kubenswrapper[4772]: E1122 11:03:22.865300 4772 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Nov 22 11:03:22 crc kubenswrapper[4772]: E1122 11:03:22.865379 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5ce19f6b-73e1-48b9-810a-f9d97a14fe7b-config-data podName:5ce19f6b-73e1-48b9-810a-f9d97a14fe7b nodeName:}" failed. No retries permitted until 2025-11-22 11:03:30.865363091 +0000 UTC m=+1531.104807585 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/5ce19f6b-73e1-48b9-810a-f9d97a14fe7b-config-data") pod "rabbitmq-cell1-server-0" (UID: "5ce19f6b-73e1-48b9-810a-f9d97a14fe7b") : configmap "rabbitmq-cell1-config-data" not found Nov 22 11:03:22 crc kubenswrapper[4772]: I1122 11:03:22.967297 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdqvq\" (UniqueName: \"kubernetes.io/projected/3d9c0ba3-ac92-4821-acb3-fd40e750bdae-kube-api-access-xdqvq\") pod \"keystone7a1c-account-delete-b8xp2\" (UID: \"3d9c0ba3-ac92-4821-acb3-fd40e750bdae\") " pod="openstack/keystone7a1c-account-delete-b8xp2" Nov 22 11:03:22 crc kubenswrapper[4772]: E1122 11:03:22.971192 4772 projected.go:194] Error preparing data for projected volume kube-api-access-xdqvq for pod openstack/keystone7a1c-account-delete-b8xp2: failed to fetch token: serviceaccounts "galera-openstack" not found Nov 22 11:03:22 crc kubenswrapper[4772]: E1122 11:03:22.971283 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3d9c0ba3-ac92-4821-acb3-fd40e750bdae-kube-api-access-xdqvq podName:3d9c0ba3-ac92-4821-acb3-fd40e750bdae nodeName:}" failed. No retries permitted until 2025-11-22 11:03:24.971258789 +0000 UTC m=+1525.210703353 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-xdqvq" (UniqueName: "kubernetes.io/projected/3d9c0ba3-ac92-4821-acb3-fd40e750bdae-kube-api-access-xdqvq") pod "keystone7a1c-account-delete-b8xp2" (UID: "3d9c0ba3-ac92-4821-acb3-fd40e750bdae") : failed to fetch token: serviceaccounts "galera-openstack" not found Nov 22 11:03:23 crc kubenswrapper[4772]: I1122 11:03:23.194487 4772 generic.go:334] "Generic (PLEG): container finished" podID="3fbd9ebd-2c62-4336-9946-792e4b3c83db" containerID="98e7f008c22226185b9fc4da3d836b571692bf43deee9c747a6c68730a3601a5" exitCode=0 Nov 22 11:03:23 crc kubenswrapper[4772]: I1122 11:03:23.231370 4772 generic.go:334] "Generic (PLEG): container finished" podID="2c6ce3fb-4529-4856-a326-bb0e9ea0ae40" containerID="f9a0eee7dffa91a6e45f6416ea3a94139da8a50ce7e232bc83c7eb4c3723250c" exitCode=0 Nov 22 11:03:23 crc kubenswrapper[4772]: E1122 11:03:23.469786 4772 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.057s" Nov 22 11:03:23 crc kubenswrapper[4772]: I1122 11:03:23.469823 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"56fb678a-814f-4328-8b49-9226512bf10e","Type":"ContainerDied","Data":"87ddd492904d37b209c29a1ac6b15eb1b0ea478fe5009f7ccf2abbf8a98a52e8"} Nov 22 11:03:23 crc kubenswrapper[4772]: I1122 11:03:23.469883 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-7a1c-account-create-x6bkv"] Nov 22 11:03:23 crc kubenswrapper[4772]: I1122 11:03:23.469902 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"17f0d5ca-99e5-47c6-9fdf-1932956cff3e","Type":"ContainerDied","Data":"a687188010e7d1b6b6e71ce02eb4abc2bad75aaad585c817273d9a77d8fbf014"} Nov 22 11:03:23 crc kubenswrapper[4772]: I1122 11:03:23.485415 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4e681ba-088a-41b1-9b89-8bac928038e5" path="/var/lib/kubelet/pods/a4e681ba-088a-41b1-9b89-8bac928038e5/volumes" Nov 22 11:03:23 crc kubenswrapper[4772]: I1122 11:03:23.487488 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone7a1c-account-delete-b8xp2"] Nov 22 
11:03:23 crc kubenswrapper[4772]: I1122 11:03:23.487532 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50","Type":"ContainerDied","Data":"0908d1f79b72482026c58a702cd21bf200bf6a3eb6efd4c6e210385b1b85d1fc"} Nov 22 11:03:23 crc kubenswrapper[4772]: I1122 11:03:23.487559 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0908d1f79b72482026c58a702cd21bf200bf6a3eb6efd4c6e210385b1b85d1fc" Nov 22 11:03:23 crc kubenswrapper[4772]: I1122 11:03:23.487574 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b8b92f55-36d8-4358-9b57-734762f225c4","Type":"ContainerDied","Data":"764b6bb6a5ee39e7047bd726ab180a64aa79251d5579b833c4a5dceea602f6f4"} Nov 22 11:03:23 crc kubenswrapper[4772]: I1122 11:03:23.487587 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="764b6bb6a5ee39e7047bd726ab180a64aa79251d5579b833c4a5dceea602f6f4" Nov 22 11:03:23 crc kubenswrapper[4772]: I1122 11:03:23.487642 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2c6ce3fb-4529-4856-a326-bb0e9ea0ae40","Type":"ContainerDied","Data":"10ca89615b0daee19e2ff35e6822a5cb0d8fad7528fc2fae6ffe6155cd72db74"} Nov 22 11:03:23 crc kubenswrapper[4772]: I1122 11:03:23.487661 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5cd786c776-rmj8k" event={"ID":"ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7","Type":"ContainerDied","Data":"c4b8d10e4960a23d8266b692f1975f266a727e1c5d93dba9ff2f57317ee82d59"} Nov 22 11:03:23 crc kubenswrapper[4772]: I1122 11:03:23.487675 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4b8d10e4960a23d8266b692f1975f266a727e1c5d93dba9ff2f57317ee82d59" Nov 22 11:03:23 crc kubenswrapper[4772]: I1122 11:03:23.487683 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder2d34-account-delete-7qhqb" event={"ID":"30007403-085b-4874-88b7-8b27426fd4f7","Type":"ContainerDied","Data":"d2ca6179053090103083ab2df5c267a331e4c9cb421a6856a8ede33c2154df6c"} Nov 22 11:03:23 crc kubenswrapper[4772]: I1122 11:03:23.487700 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-7a1c-account-create-x6bkv"] Nov 22 11:03:23 crc kubenswrapper[4772]: I1122 11:03:23.487736 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron45e9-account-delete-9pn28" event={"ID":"941a38a8-56e0-4061-8891-0cd3815477a4","Type":"ContainerDied","Data":"13ecba734c345ef3d469fb45e441d0e233378d129f7ea9238f342cb8ddae536e"} Nov 22 11:03:23 crc kubenswrapper[4772]: I1122 11:03:23.487752 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novaapi0c20-account-delete-g5lnb" event={"ID":"9aeb3608-353b-4b44-8797-46affdc587a7","Type":"ContainerDied","Data":"3d173607bb5dc318429a013351d1676c6d42fb4927a52f88588e3167331f4341"} Nov 22 11:03:23 crc kubenswrapper[4772]: I1122 11:03:23.487773 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glancee71c-account-delete-64r2z" event={"ID":"706b8e5a-87b8-429e-aea7-e7e5f161182f","Type":"ContainerDied","Data":"b77b969953fe9c8048b8ff277fe2329156f5363070399e9e09c0f7223bcb8d6e"} Nov 22 11:03:23 crc kubenswrapper[4772]: I1122 11:03:23.487787 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" 
event={"ID":"33b26633-94ac-4439-b1ab-ab225d2e562b","Type":"ContainerDied","Data":"cdefe172ea5ef24ff2edbeb32a6b1a6f62488c1e44102ccfed8a9bd20edbd16e"} Nov 22 11:03:23 crc kubenswrapper[4772]: I1122 11:03:23.487798 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cdefe172ea5ef24ff2edbeb32a6b1a6f62488c1e44102ccfed8a9bd20edbd16e" Nov 22 11:03:23 crc kubenswrapper[4772]: I1122 11:03:23.487812 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"a0a147b4-4445-4f7b-b22f-97db02340306","Type":"ContainerDied","Data":"75123019921bf278dc20f04bfe0c16e7ca301815e19432fc55e95e02d9391c0e"} Nov 22 11:03:23 crc kubenswrapper[4772]: I1122 11:03:23.487824 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"14ed2945-ef18-49de-9c18-679e011d3df5","Type":"ContainerDied","Data":"1c3f5bced2f15e83de4da4eba1c5544977260d6d5e2b4f4ca22c417c52946598"} Nov 22 11:03:23 crc kubenswrapper[4772]: I1122 11:03:23.487838 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c3f5bced2f15e83de4da4eba1c5544977260d6d5e2b4f4ca22c417c52946598" Nov 22 11:03:23 crc kubenswrapper[4772]: I1122 11:03:23.487858 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ddb4a824-3a8a-4287-b206-94832099e15b","Type":"ContainerDied","Data":"9397793039e1f460d83c1c72ed6be565eb164d6fc94e599ab426a93c8cb5de19"} Nov 22 11:03:23 crc kubenswrapper[4772]: I1122 11:03:23.487872 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9397793039e1f460d83c1c72ed6be565eb164d6fc94e599ab426a93c8cb5de19" Nov 22 11:03:23 crc kubenswrapper[4772]: I1122 11:03:23.487889 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement8972-account-delete-pqznd" event={"ID":"e723031c-0772-49f7-ba16-f635ddd53dcc","Type":"ContainerDied","Data":"e28b401e3c853589d7a264d1dc93faf87588bdab270334394035a96b16630d72"} Nov 22 11:03:23 crc kubenswrapper[4772]: I1122 11:03:23.487902 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican3436-account-delete-4w4qv" event={"ID":"cb612c10-4436-4c79-b990-cbc7b403eed5","Type":"ContainerDied","Data":"fdf7bdb90ea53df33a248c12c99b072c760a9ac7d7af309e4e196c43accf6f02"} Nov 22 11:03:23 crc kubenswrapper[4772]: I1122 11:03:23.487918 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell05903-account-delete-c7857" event={"ID":"7d122410-121a-47cd-9465-e5c6f85cf2b2","Type":"ContainerDied","Data":"9b4a372c6969edd5b4fcbcfc5857268afac7433a5a8b32510bbb740f15100717"} Nov 22 11:03:23 crc kubenswrapper[4772]: I1122 11:03:23.487948 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6876658948-bzr5z" event={"ID":"86139aa9-cd30-4d97-833e-a26562aebf92","Type":"ContainerDied","Data":"a06360ee3022a654f156c3386f22cd5fd488251afc8543f8c37cbc65fc693984"} Nov 22 11:03:23 crc kubenswrapper[4772]: I1122 11:03:23.487963 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6876658948-bzr5z" event={"ID":"86139aa9-cd30-4d97-833e-a26562aebf92","Type":"ContainerDied","Data":"3c987b57d62e3696c05299c2a6bb0caee472f688983f0e5a21b4159de13a7513"} Nov 22 11:03:23 crc kubenswrapper[4772]: I1122 11:03:23.487973 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c987b57d62e3696c05299c2a6bb0caee472f688983f0e5a21b4159de13a7513" Nov 22 11:03:23 crc 
kubenswrapper[4772]: I1122 11:03:23.487984 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novaapi0c20-account-delete-g5lnb" event={"ID":"9aeb3608-353b-4b44-8797-46affdc587a7","Type":"ContainerDied","Data":"25d10aa38ba618bf1da0a67f0b55cf1c1dd9aae8cfe0a5a83b1c79f39efb8e3a"} Nov 22 11:03:23 crc kubenswrapper[4772]: I1122 11:03:23.487996 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25d10aa38ba618bf1da0a67f0b55cf1c1dd9aae8cfe0a5a83b1c79f39efb8e3a" Nov 22 11:03:23 crc kubenswrapper[4772]: I1122 11:03:23.488005 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"56fb678a-814f-4328-8b49-9226512bf10e","Type":"ContainerDied","Data":"788db6820dc60e8f870454dad1039f3b9cb38f35945534a401b7ac384d1fc2d5"} Nov 22 11:03:23 crc kubenswrapper[4772]: I1122 11:03:23.488015 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="788db6820dc60e8f870454dad1039f3b9cb38f35945534a401b7ac384d1fc2d5" Nov 22 11:03:23 crc kubenswrapper[4772]: I1122 11:03:23.488025 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"3fbd9ebd-2c62-4336-9946-792e4b3c83db","Type":"ContainerDied","Data":"98e7f008c22226185b9fc4da3d836b571692bf43deee9c747a6c68730a3601a5"} Nov 22 11:03:23 crc kubenswrapper[4772]: I1122 11:03:23.488039 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glancee71c-account-delete-64r2z" event={"ID":"706b8e5a-87b8-429e-aea7-e7e5f161182f","Type":"ContainerDied","Data":"a4f65283c36fd34461435fb05fb618636388babba3099fd1c115d101d9ef2ccb"} Nov 22 11:03:23 crc kubenswrapper[4772]: I1122 11:03:23.488425 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4f65283c36fd34461435fb05fb618636388babba3099fd1c115d101d9ef2ccb" Nov 22 11:03:23 crc kubenswrapper[4772]: I1122 11:03:23.488436 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell05903-account-delete-c7857" event={"ID":"7d122410-121a-47cd-9465-e5c6f85cf2b2","Type":"ContainerDied","Data":"5ef68a720c9d2658b5c899583f8c1082387b678b82271a393a16dadff1cd9d3c"} Nov 22 11:03:23 crc kubenswrapper[4772]: I1122 11:03:23.488445 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ef68a720c9d2658b5c899583f8c1082387b678b82271a393a16dadff1cd9d3c" Nov 22 11:03:23 crc kubenswrapper[4772]: I1122 11:03:23.488454 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder2d34-account-delete-7qhqb" event={"ID":"30007403-085b-4874-88b7-8b27426fd4f7","Type":"ContainerDied","Data":"389d71be2ab45daef9f146b117f3ae303e117cc7f3b03f21be0899adc4dc525c"} Nov 22 11:03:23 crc kubenswrapper[4772]: I1122 11:03:23.488461 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="389d71be2ab45daef9f146b117f3ae303e117cc7f3b03f21be0899adc4dc525c" Nov 22 11:03:23 crc kubenswrapper[4772]: I1122 11:03:23.488469 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron45e9-account-delete-9pn28" event={"ID":"941a38a8-56e0-4061-8891-0cd3815477a4","Type":"ContainerDied","Data":"b71da73b4899febd24e2c5357d90a96d5107fcdd87b03943f84a1fc4a8d49d41"} Nov 22 11:03:23 crc kubenswrapper[4772]: I1122 11:03:23.488477 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b71da73b4899febd24e2c5357d90a96d5107fcdd87b03943f84a1fc4a8d49d41" Nov 22 11:03:23 crc kubenswrapper[4772]: I1122 11:03:23.488485 4772 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/placement8972-account-delete-pqznd" event={"ID":"e723031c-0772-49f7-ba16-f635ddd53dcc","Type":"ContainerDied","Data":"5c113279f0d6744fe689f5ad8c86ea24f34df31e0c63ed1f4d60ee8223914dfd"} Nov 22 11:03:23 crc kubenswrapper[4772]: I1122 11:03:23.488494 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c113279f0d6744fe689f5ad8c86ea24f34df31e0c63ed1f4d60ee8223914dfd" Nov 22 11:03:23 crc kubenswrapper[4772]: I1122 11:03:23.488503 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"17f0d5ca-99e5-47c6-9fdf-1932956cff3e","Type":"ContainerDied","Data":"3d270d31d14fce4c62bab4b1f7f271368d062a7d95474934f968e68a8ba5539f"} Nov 22 11:03:23 crc kubenswrapper[4772]: I1122 11:03:23.488514 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d270d31d14fce4c62bab4b1f7f271368d062a7d95474934f968e68a8ba5539f" Nov 22 11:03:23 crc kubenswrapper[4772]: I1122 11:03:23.488525 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican3436-account-delete-4w4qv" event={"ID":"cb612c10-4436-4c79-b990-cbc7b403eed5","Type":"ContainerDied","Data":"49263e5e29c7946d5bdeb6bfdf5992208357e642c0c3494147ca9a5754236ea4"} Nov 22 11:03:23 crc kubenswrapper[4772]: I1122 11:03:23.488534 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49263e5e29c7946d5bdeb6bfdf5992208357e642c0c3494147ca9a5754236ea4" Nov 22 11:03:23 crc kubenswrapper[4772]: I1122 11:03:23.488542 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"a0a147b4-4445-4f7b-b22f-97db02340306","Type":"ContainerDied","Data":"42fdc2a0bd3748a49952bb49944162680a11c90848fd8eff65c46bc287c5d970"} Nov 22 11:03:23 crc kubenswrapper[4772]: I1122 11:03:23.488549 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42fdc2a0bd3748a49952bb49944162680a11c90848fd8eff65c46bc287c5d970" Nov 22 11:03:23 crc kubenswrapper[4772]: I1122 11:03:23.488557 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2c6ce3fb-4529-4856-a326-bb0e9ea0ae40","Type":"ContainerDied","Data":"f9a0eee7dffa91a6e45f6416ea3a94139da8a50ce7e232bc83c7eb4c3723250c"} Nov 22 11:03:23 crc kubenswrapper[4772]: I1122 11:03:23.488567 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2c6ce3fb-4529-4856-a326-bb0e9ea0ae40","Type":"ContainerDied","Data":"cb05dc364fcba82f1cea9840e9418bf6120dd59be237c253b9dcfbd83c44aac8"} Nov 22 11:03:23 crc kubenswrapper[4772]: I1122 11:03:23.488575 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb05dc364fcba82f1cea9840e9418bf6120dd59be237c253b9dcfbd83c44aac8" Nov 22 11:03:23 crc kubenswrapper[4772]: E1122 11:03:23.793012 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e5c751528fea2ac722ee321494f6ac8ae1afd4e1ad69103eb66eda03840cc558" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 11:03:23 crc kubenswrapper[4772]: E1122 11:03:23.801941 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="e5c751528fea2ac722ee321494f6ac8ae1afd4e1ad69103eb66eda03840cc558" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 11:03:23 crc kubenswrapper[4772]: E1122 11:03:23.810548 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e5c751528fea2ac722ee321494f6ac8ae1afd4e1ad69103eb66eda03840cc558" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 11:03:23 crc kubenswrapper[4772]: E1122 11:03:23.810643 4772 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="51f59313-1e0d-4877-9141-c32a7f72f84f" containerName="nova-scheduler-scheduler" Nov 22 11:03:23 crc kubenswrapper[4772]: E1122 11:03:23.899773 4772 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Nov 22 11:03:23 crc kubenswrapper[4772]: E1122 11:03:23.900210 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea-config-data podName:468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea nodeName:}" failed. No retries permitted until 2025-11-22 11:03:31.900190953 +0000 UTC m=+1532.139635447 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea-config-data") pod "rabbitmq-server-0" (UID: "468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea") : configmap "rabbitmq-config-data" not found Nov 22 11:03:23 crc kubenswrapper[4772]: I1122 11:03:23.900911 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 22 11:03:23 crc kubenswrapper[4772]: I1122 11:03:23.944003 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5cd786c776-rmj8k" Nov 22 11:03:23 crc kubenswrapper[4772]: I1122 11:03:23.968811 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 22 11:03:23 crc kubenswrapper[4772]: I1122 11:03:23.969753 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 11:03:23 crc kubenswrapper[4772]: E1122 11:03:23.986725 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bf3d996a1f9a43837e5f7fe9fc44fb298b29ccf7aa7545bca094b47b3b199409 is running failed: container process not found" containerID="bf3d996a1f9a43837e5f7fe9fc44fb298b29ccf7aa7545bca094b47b3b199409" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 22 11:03:23 crc kubenswrapper[4772]: E1122 11:03:23.987401 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bf3d996a1f9a43837e5f7fe9fc44fb298b29ccf7aa7545bca094b47b3b199409 is running failed: container process not found" containerID="bf3d996a1f9a43837e5f7fe9fc44fb298b29ccf7aa7545bca094b47b3b199409" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 22 11:03:23 crc kubenswrapper[4772]: E1122 11:03:23.989258 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bf3d996a1f9a43837e5f7fe9fc44fb298b29ccf7aa7545bca094b47b3b199409 is running failed: container process not found" containerID="bf3d996a1f9a43837e5f7fe9fc44fb298b29ccf7aa7545bca094b47b3b199409" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 22 11:03:23 crc kubenswrapper[4772]: E1122 11:03:23.989291 4772 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bf3d996a1f9a43837e5f7fe9fc44fb298b29ccf7aa7545bca094b47b3b199409 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="4f74827a-8354-492b-b09d-350768ba912d" containerName="ovn-northd" Nov 22 11:03:23 crc kubenswrapper[4772]: I1122 11:03:23.992290 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 22 11:03:23 crc kubenswrapper[4772]: I1122 11:03:23.996310 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 22 11:03:23 crc kubenswrapper[4772]: I1122 11:03:23.997528 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.001628 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7-internal-tls-certs\") pod \"ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7\" (UID: \"ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.001676 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8b92f55-36d8-4358-9b57-734762f225c4-nova-metadata-tls-certs\") pod \"b8b92f55-36d8-4358-9b57-734762f225c4\" (UID: \"b8b92f55-36d8-4358-9b57-734762f225c4\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.001721 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7-scripts\") pod \"ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7\" (UID: \"ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.001754 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87khb\" (UniqueName: \"kubernetes.io/projected/ddb4a824-3a8a-4287-b206-94832099e15b-kube-api-access-87khb\") pod \"ddb4a824-3a8a-4287-b206-94832099e15b\" (UID: \"ddb4a824-3a8a-4287-b206-94832099e15b\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.001781 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7-public-tls-certs\") pod \"ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7\" (UID: \"ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.001805 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7-config-data\") pod \"ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7\" (UID: \"ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.001850 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkczn\" (UniqueName: \"kubernetes.io/projected/b8b92f55-36d8-4358-9b57-734762f225c4-kube-api-access-pkczn\") pod \"b8b92f55-36d8-4358-9b57-734762f225c4\" (UID: \"b8b92f55-36d8-4358-9b57-734762f225c4\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.001881 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8b92f55-36d8-4358-9b57-734762f225c4-logs\") pod \"b8b92f55-36d8-4358-9b57-734762f225c4\" (UID: \"b8b92f55-36d8-4358-9b57-734762f225c4\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.001898 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7-combined-ca-bundle\") pod \"ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7\" (UID: \"ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.001923 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/ddb4a824-3a8a-4287-b206-94832099e15b-kube-state-metrics-tls-certs\") pod 
\"ddb4a824-3a8a-4287-b206-94832099e15b\" (UID: \"ddb4a824-3a8a-4287-b206-94832099e15b\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.001983 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7-logs\") pod \"ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7\" (UID: \"ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.002001 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8b92f55-36d8-4358-9b57-734762f225c4-combined-ca-bundle\") pod \"b8b92f55-36d8-4358-9b57-734762f225c4\" (UID: \"b8b92f55-36d8-4358-9b57-734762f225c4\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.002037 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8b92f55-36d8-4358-9b57-734762f225c4-config-data\") pod \"b8b92f55-36d8-4358-9b57-734762f225c4\" (UID: \"b8b92f55-36d8-4358-9b57-734762f225c4\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.002081 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/ddb4a824-3a8a-4287-b206-94832099e15b-kube-state-metrics-tls-config\") pod \"ddb4a824-3a8a-4287-b206-94832099e15b\" (UID: \"ddb4a824-3a8a-4287-b206-94832099e15b\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.002107 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddb4a824-3a8a-4287-b206-94832099e15b-combined-ca-bundle\") pod \"ddb4a824-3a8a-4287-b206-94832099e15b\" (UID: \"ddb4a824-3a8a-4287-b206-94832099e15b\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.002131 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqdwg\" (UniqueName: \"kubernetes.io/projected/ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7-kube-api-access-bqdwg\") pod \"ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7\" (UID: \"ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.003894 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8b92f55-36d8-4358-9b57-734762f225c4-logs" (OuterVolumeSpecName: "logs") pod "b8b92f55-36d8-4358-9b57-734762f225c4" (UID: "b8b92f55-36d8-4358-9b57-734762f225c4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.008913 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7-scripts" (OuterVolumeSpecName: "scripts") pod "ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7" (UID: "ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.009804 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddb4a824-3a8a-4287-b206-94832099e15b-kube-api-access-87khb" (OuterVolumeSpecName: "kube-api-access-87khb") pod "ddb4a824-3a8a-4287-b206-94832099e15b" (UID: "ddb4a824-3a8a-4287-b206-94832099e15b"). InnerVolumeSpecName "kube-api-access-87khb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.010816 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7-logs" (OuterVolumeSpecName: "logs") pod "ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7" (UID: "ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.012187 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8b92f55-36d8-4358-9b57-734762f225c4-kube-api-access-pkczn" (OuterVolumeSpecName: "kube-api-access-pkczn") pod "b8b92f55-36d8-4358-9b57-734762f225c4" (UID: "b8b92f55-36d8-4358-9b57-734762f225c4"). InnerVolumeSpecName "kube-api-access-pkczn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.025884 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7-kube-api-access-bqdwg" (OuterVolumeSpecName: "kube-api-access-bqdwg") pod "ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7" (UID: "ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7"). InnerVolumeSpecName "kube-api-access-bqdwg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.028865 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-57b6cb6667-w95sj"] Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.035440 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-57b6cb6667-w95sj"] Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.043181 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddb4a824-3a8a-4287-b206-94832099e15b-kube-state-metrics-tls-config" (OuterVolumeSpecName: "kube-state-metrics-tls-config") pod "ddb4a824-3a8a-4287-b206-94832099e15b" (UID: "ddb4a824-3a8a-4287-b206-94832099e15b"). InnerVolumeSpecName "kube-state-metrics-tls-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.055420 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-79459755b6-xsvzh"] Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.079709 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-79459755b6-xsvzh"] Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.097162 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.103705 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/14ed2945-ef18-49de-9c18-679e011d3df5-public-tls-certs\") pod \"14ed2945-ef18-49de-9c18-679e011d3df5\" (UID: \"14ed2945-ef18-49de-9c18-679e011d3df5\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.103797 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9nvs\" (UniqueName: \"kubernetes.io/projected/33b26633-94ac-4439-b1ab-ab225d2e562b-kube-api-access-g9nvs\") pod \"33b26633-94ac-4439-b1ab-ab225d2e562b\" (UID: \"33b26633-94ac-4439-b1ab-ab225d2e562b\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.103903 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14ed2945-ef18-49de-9c18-679e011d3df5-scripts\") pod \"14ed2945-ef18-49de-9c18-679e011d3df5\" (UID: \"14ed2945-ef18-49de-9c18-679e011d3df5\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.103946 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14ed2945-ef18-49de-9c18-679e011d3df5-config-data\") pod \"14ed2945-ef18-49de-9c18-679e011d3df5\" (UID: \"14ed2945-ef18-49de-9c18-679e011d3df5\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.109184 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/14ed2945-ef18-49de-9c18-679e011d3df5-httpd-run\") pod \"14ed2945-ef18-49de-9c18-679e011d3df5\" (UID: \"14ed2945-ef18-49de-9c18-679e011d3df5\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.109279 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14ed2945-ef18-49de-9c18-679e011d3df5-logs\") pod \"14ed2945-ef18-49de-9c18-679e011d3df5\" (UID: \"14ed2945-ef18-49de-9c18-679e011d3df5\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.109356 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33b26633-94ac-4439-b1ab-ab225d2e562b-config-data\") pod \"33b26633-94ac-4439-b1ab-ab225d2e562b\" (UID: \"33b26633-94ac-4439-b1ab-ab225d2e562b\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.109389 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33b26633-94ac-4439-b1ab-ab225d2e562b-combined-ca-bundle\") pod \"33b26633-94ac-4439-b1ab-ab225d2e562b\" (UID: \"33b26633-94ac-4439-b1ab-ab225d2e562b\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.109438 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdt7g\" (UniqueName: 
\"kubernetes.io/projected/14ed2945-ef18-49de-9c18-679e011d3df5-kube-api-access-cdt7g\") pod \"14ed2945-ef18-49de-9c18-679e011d3df5\" (UID: \"14ed2945-ef18-49de-9c18-679e011d3df5\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.109468 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14ed2945-ef18-49de-9c18-679e011d3df5-combined-ca-bundle\") pod \"14ed2945-ef18-49de-9c18-679e011d3df5\" (UID: \"14ed2945-ef18-49de-9c18-679e011d3df5\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.109521 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"14ed2945-ef18-49de-9c18-679e011d3df5\" (UID: \"14ed2945-ef18-49de-9c18-679e011d3df5\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.110253 4772 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8b92f55-36d8-4358-9b57-734762f225c4-logs\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.110277 4772 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7-logs\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.110289 4772 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/ddb4a824-3a8a-4287-b206-94832099e15b-kube-state-metrics-tls-config\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.110303 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqdwg\" (UniqueName: \"kubernetes.io/projected/ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7-kube-api-access-bqdwg\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.110314 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.110325 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87khb\" (UniqueName: \"kubernetes.io/projected/ddb4a824-3a8a-4287-b206-94832099e15b-kube-api-access-87khb\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.110336 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkczn\" (UniqueName: \"kubernetes.io/projected/b8b92f55-36d8-4358-9b57-734762f225c4-kube-api-access-pkczn\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.110855 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.110896 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-5f797948bc-dk5pr"] Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.111303 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14ed2945-ef18-49de-9c18-679e011d3df5-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "14ed2945-ef18-49de-9c18-679e011d3df5" (UID: "14ed2945-ef18-49de-9c18-679e011d3df5"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.118965 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14ed2945-ef18-49de-9c18-679e011d3df5-logs" (OuterVolumeSpecName: "logs") pod "14ed2945-ef18-49de-9c18-679e011d3df5" (UID: "14ed2945-ef18-49de-9c18-679e011d3df5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.121451 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "14ed2945-ef18-49de-9c18-679e011d3df5" (UID: "14ed2945-ef18-49de-9c18-679e011d3df5"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.123129 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14ed2945-ef18-49de-9c18-679e011d3df5-kube-api-access-cdt7g" (OuterVolumeSpecName: "kube-api-access-cdt7g") pod "14ed2945-ef18-49de-9c18-679e011d3df5" (UID: "14ed2945-ef18-49de-9c18-679e011d3df5"). InnerVolumeSpecName "kube-api-access-cdt7g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.128089 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-5f797948bc-dk5pr"] Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.138022 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7" (UID: "ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.145726 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33b26633-94ac-4439-b1ab-ab225d2e562b-kube-api-access-g9nvs" (OuterVolumeSpecName: "kube-api-access-g9nvs") pod "33b26633-94ac-4439-b1ab-ab225d2e562b" (UID: "33b26633-94ac-4439-b1ab-ab225d2e562b"). InnerVolumeSpecName "kube-api-access-g9nvs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.149223 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14ed2945-ef18-49de-9c18-679e011d3df5-scripts" (OuterVolumeSpecName: "scripts") pod "14ed2945-ef18-49de-9c18-679e011d3df5" (UID: "14ed2945-ef18-49de-9c18-679e011d3df5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.172366 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8b92f55-36d8-4358-9b57-734762f225c4-config-data" (OuterVolumeSpecName: "config-data") pod "b8b92f55-36d8-4358-9b57-734762f225c4" (UID: "b8b92f55-36d8-4358-9b57-734762f225c4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.184850 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-7f4sv"] Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.192839 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-7f4sv"] Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.206079 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddb4a824-3a8a-4287-b206-94832099e15b-kube-state-metrics-tls-certs" (OuterVolumeSpecName: "kube-state-metrics-tls-certs") pod "ddb4a824-3a8a-4287-b206-94832099e15b" (UID: "ddb4a824-3a8a-4287-b206-94832099e15b"). InnerVolumeSpecName "kube-state-metrics-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.213712 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.215817 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.217819 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14ed2945-ef18-49de-9c18-679e011d3df5-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.217839 4772 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/14ed2945-ef18-49de-9c18-679e011d3df5-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.217851 4772 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14ed2945-ef18-49de-9c18-679e011d3df5-logs\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.217860 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.217873 4772 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/ddb4a824-3a8a-4287-b206-94832099e15b-kube-state-metrics-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.217887 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdt7g\" (UniqueName: \"kubernetes.io/projected/14ed2945-ef18-49de-9c18-679e011d3df5-kube-api-access-cdt7g\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.217915 4772 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.217925 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8b92f55-36d8-4358-9b57-734762f225c4-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.217933 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9nvs\" (UniqueName: \"kubernetes.io/projected/33b26633-94ac-4439-b1ab-ab225d2e562b-kube-api-access-g9nvs\") on node \"crc\" 
DevicePath \"\"" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.222752 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddb4a824-3a8a-4287-b206-94832099e15b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ddb4a824-3a8a-4287-b206-94832099e15b" (UID: "ddb4a824-3a8a-4287-b206-94832099e15b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.228221 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.229306 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.248822 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_4f74827a-8354-492b-b09d-350768ba912d/ovn-northd/0.log" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.249241 4772 generic.go:334] "Generic (PLEG): container finished" podID="4f74827a-8354-492b-b09d-350768ba912d" containerID="bf3d996a1f9a43837e5f7fe9fc44fb298b29ccf7aa7545bca094b47b3b199409" exitCode=139 Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.249396 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"4f74827a-8354-492b-b09d-350768ba912d","Type":"ContainerDied","Data":"bf3d996a1f9a43837e5f7fe9fc44fb298b29ccf7aa7545bca094b47b3b199409"} Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.249669 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"4f74827a-8354-492b-b09d-350768ba912d","Type":"ContainerDied","Data":"af7b8a2cc9f093f027c6ab19ddc6451bc602844205a3f0d9d50640b5c2990e6a"} Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.249843 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af7b8a2cc9f093f027c6ab19ddc6451bc602844205a3f0d9d50640b5c2990e6a" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.260123 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.260262 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.260587 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"3fbd9ebd-2c62-4336-9946-792e4b3c83db","Type":"ContainerDied","Data":"61f7b5d99d2452de6e4d8656605e6960b108cae6255016243acdf0a52cab0bb3"} Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.260700 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61f7b5d99d2452de6e4d8656605e6960b108cae6255016243acdf0a52cab0bb3" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.260814 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.261306 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5cd786c776-rmj8k" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.261543 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.275302 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8b92f55-36d8-4358-9b57-734762f225c4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b8b92f55-36d8-4358-9b57-734762f225c4" (UID: "b8b92f55-36d8-4358-9b57-734762f225c4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.287214 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33b26633-94ac-4439-b1ab-ab225d2e562b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "33b26633-94ac-4439-b1ab-ab225d2e562b" (UID: "33b26633-94ac-4439-b1ab-ab225d2e562b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.295040 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7-config-data" (OuterVolumeSpecName: "config-data") pod "ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7" (UID: "ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.295419 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14ed2945-ef18-49de-9c18-679e011d3df5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "14ed2945-ef18-49de-9c18-679e011d3df5" (UID: "14ed2945-ef18-49de-9c18-679e011d3df5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.304211 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33b26633-94ac-4439-b1ab-ab225d2e562b-config-data" (OuterVolumeSpecName: "config-data") pod "33b26633-94ac-4439-b1ab-ab225d2e562b" (UID: "33b26633-94ac-4439-b1ab-ab225d2e562b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.306271 4772 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.307155 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8b92f55-36d8-4358-9b57-734762f225c4-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "b8b92f55-36d8-4358-9b57-734762f225c4" (UID: "b8b92f55-36d8-4358-9b57-734762f225c4"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.312024 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14ed2945-ef18-49de-9c18-679e011d3df5-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "14ed2945-ef18-49de-9c18-679e011d3df5" (UID: "14ed2945-ef18-49de-9c18-679e011d3df5"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.319925 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.319971 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33b26633-94ac-4439-b1ab-ab225d2e562b-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.319982 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33b26633-94ac-4439-b1ab-ab225d2e562b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.320009 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14ed2945-ef18-49de-9c18-679e011d3df5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.320018 4772 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.320027 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8b92f55-36d8-4358-9b57-734762f225c4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.320035 4772 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/14ed2945-ef18-49de-9c18-679e011d3df5-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.320065 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddb4a824-3a8a-4287-b206-94832099e15b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.320076 4772 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8b92f55-36d8-4358-9b57-734762f225c4-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.347308 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14ed2945-ef18-49de-9c18-679e011d3df5-config-data" (OuterVolumeSpecName: "config-data") pod "14ed2945-ef18-49de-9c18-679e011d3df5" (UID: "14ed2945-ef18-49de-9c18-679e011d3df5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.364443 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7" (UID: "ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.379005 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7" (UID: "ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.396970 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 11:03:24 crc kubenswrapper[4772]: E1122 11:03:24.422196 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b35066231a06831ac6cbdd40d94a47f17dbd0bd89c978d8091d18097c6bdc840 is running failed: container process not found" containerID="b35066231a06831ac6cbdd40d94a47f17dbd0bd89c978d8091d18097c6bdc840" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.422966 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50-combined-ca-bundle\") pod \"93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50\" (UID: \"93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.423024 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50\" (UID: \"93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.423081 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50-httpd-run\") pod \"93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50\" (UID: \"93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.423170 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50-scripts\") pod \"93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50\" (UID: \"93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.423211 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50-internal-tls-certs\") pod \"93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50\" (UID: \"93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.423230 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50-config-data\") pod \"93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50\" (UID: \"93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.423272 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prsb8\" (UniqueName: \"kubernetes.io/projected/93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50-kube-api-access-prsb8\") pod \"93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50\" (UID: 
\"93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.423320 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50-logs\") pod \"93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50\" (UID: \"93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.423723 4772 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.423739 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14ed2945-ef18-49de-9c18-679e011d3df5-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.423749 4772 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:24 crc kubenswrapper[4772]: E1122 11:03:24.424070 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b453a8cf27cf323a7ca0a34df6781dcd755a821c7866c6dbdecdad4ea153f3ea" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.424682 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50-logs" (OuterVolumeSpecName: "logs") pod "93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50" (UID: "93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:03:24 crc kubenswrapper[4772]: E1122 11:03:24.426282 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b35066231a06831ac6cbdd40d94a47f17dbd0bd89c978d8091d18097c6bdc840 is running failed: container process not found" containerID="b35066231a06831ac6cbdd40d94a47f17dbd0bd89c978d8091d18097c6bdc840" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.426717 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 22 11:03:24 crc kubenswrapper[4772]: E1122 11:03:24.430290 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b453a8cf27cf323a7ca0a34df6781dcd755a821c7866c6dbdecdad4ea153f3ea" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.431612 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50-scripts" (OuterVolumeSpecName: "scripts") pod "93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50" (UID: "93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:24 crc kubenswrapper[4772]: E1122 11:03:24.431647 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b35066231a06831ac6cbdd40d94a47f17dbd0bd89c978d8091d18097c6bdc840 is running failed: container process not found" containerID="b35066231a06831ac6cbdd40d94a47f17dbd0bd89c978d8091d18097c6bdc840" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 22 11:03:24 crc kubenswrapper[4772]: E1122 11:03:24.431684 4772 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b35066231a06831ac6cbdd40d94a47f17dbd0bd89c978d8091d18097c6bdc840 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-qvtmm" podUID="5eaf9da0-a00f-4251-ae11-31ccc3e237e1" containerName="ovsdb-server" Nov 22 11:03:24 crc kubenswrapper[4772]: E1122 11:03:24.439332 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b453a8cf27cf323a7ca0a34df6781dcd755a821c7866c6dbdecdad4ea153f3ea" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 22 11:03:24 crc kubenswrapper[4772]: E1122 11:03:24.439419 4772 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-qvtmm" podUID="5eaf9da0-a00f-4251-ae11-31ccc3e237e1" containerName="ovs-vswitchd" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.441234 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50" (UID: "93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.445522 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50-kube-api-access-prsb8" (OuterVolumeSpecName: "kube-api-access-prsb8") pod "93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50" (UID: "93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50"). InnerVolumeSpecName "kube-api-access-prsb8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.446741 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50" (UID: "93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 22 11:03:24 crc kubenswrapper[4772]: E1122 11:03:24.455816 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-xdqvq], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/keystone7a1c-account-delete-b8xp2" podUID="3d9c0ba3-ac92-4821-acb3-fd40e750bdae" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.515611 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.526881 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6876658948-bzr5z" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.529770 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56fb678a-814f-4328-8b49-9226512bf10e-combined-ca-bundle\") pod \"56fb678a-814f-4328-8b49-9226512bf10e\" (UID: \"56fb678a-814f-4328-8b49-9226512bf10e\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.529844 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56fb678a-814f-4328-8b49-9226512bf10e-logs\") pod \"56fb678a-814f-4328-8b49-9226512bf10e\" (UID: \"56fb678a-814f-4328-8b49-9226512bf10e\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.529901 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/56fb678a-814f-4328-8b49-9226512bf10e-public-tls-certs\") pod \"56fb678a-814f-4328-8b49-9226512bf10e\" (UID: \"56fb678a-814f-4328-8b49-9226512bf10e\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.529942 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56fb678a-814f-4328-8b49-9226512bf10e-config-data\") pod \"56fb678a-814f-4328-8b49-9226512bf10e\" (UID: \"56fb678a-814f-4328-8b49-9226512bf10e\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.530066 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2c6dt\" (UniqueName: \"kubernetes.io/projected/56fb678a-814f-4328-8b49-9226512bf10e-kube-api-access-2c6dt\") pod \"56fb678a-814f-4328-8b49-9226512bf10e\" (UID: \"56fb678a-814f-4328-8b49-9226512bf10e\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.530121 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/56fb678a-814f-4328-8b49-9226512bf10e-internal-tls-certs\") pod \"56fb678a-814f-4328-8b49-9226512bf10e\" (UID: \"56fb678a-814f-4328-8b49-9226512bf10e\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.531016 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56fb678a-814f-4328-8b49-9226512bf10e-logs" (OuterVolumeSpecName: "logs") pod "56fb678a-814f-4328-8b49-9226512bf10e" (UID: "56fb678a-814f-4328-8b49-9226512bf10e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.546163 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.555735 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.555768 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prsb8\" (UniqueName: \"kubernetes.io/projected/93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50-kube-api-access-prsb8\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.555779 4772 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50-logs\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.651191 4772 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.651243 4772 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.651282 4772 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56fb678a-814f-4328-8b49-9226512bf10e-logs\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.592452 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican3436-account-delete-4w4qv" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.561858 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50" (UID: "93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.570380 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50-config-data" (OuterVolumeSpecName: "config-data") pod "93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50" (UID: "93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.578786 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56fb678a-814f-4328-8b49-9226512bf10e-kube-api-access-2c6dt" (OuterVolumeSpecName: "kube-api-access-2c6dt") pod "56fb678a-814f-4328-8b49-9226512bf10e" (UID: "56fb678a-814f-4328-8b49-9226512bf10e"). InnerVolumeSpecName "kube-api-access-2c6dt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.579406 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/novaapi0c20-account-delete-g5lnb" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.671648 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.588218 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement8972-account-delete-pqznd" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.681165 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glancee71c-account-delete-64r2z" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.695340 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50" (UID: "93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.699563 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder2d34-account-delete-7qhqb" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.700143 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.709854 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56fb678a-814f-4328-8b49-9226512bf10e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "56fb678a-814f-4328-8b49-9226512bf10e" (UID: "56fb678a-814f-4328-8b49-9226512bf10e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.716209 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron45e9-account-delete-9pn28" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.743374 4772 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.745323 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/novacell05903-account-delete-c7857" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.754170 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86139aa9-cd30-4d97-833e-a26562aebf92-config-data\") pod \"86139aa9-cd30-4d97-833e-a26562aebf92\" (UID: \"86139aa9-cd30-4d97-833e-a26562aebf92\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.754256 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zt5xr\" (UniqueName: \"kubernetes.io/projected/86139aa9-cd30-4d97-833e-a26562aebf92-kube-api-access-zt5xr\") pod \"86139aa9-cd30-4d97-833e-a26562aebf92\" (UID: \"86139aa9-cd30-4d97-833e-a26562aebf92\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.754287 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7kqn\" (UniqueName: \"kubernetes.io/projected/706b8e5a-87b8-429e-aea7-e7e5f161182f-kube-api-access-b7kqn\") pod \"706b8e5a-87b8-429e-aea7-e7e5f161182f\" (UID: \"706b8e5a-87b8-429e-aea7-e7e5f161182f\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.754306 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/17f0d5ca-99e5-47c6-9fdf-1932956cff3e-internal-tls-certs\") pod \"17f0d5ca-99e5-47c6-9fdf-1932956cff3e\" (UID: \"17f0d5ca-99e5-47c6-9fdf-1932956cff3e\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.754331 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/86139aa9-cd30-4d97-833e-a26562aebf92-config-data-custom\") pod \"86139aa9-cd30-4d97-833e-a26562aebf92\" (UID: \"86139aa9-cd30-4d97-833e-a26562aebf92\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.754381 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17f0d5ca-99e5-47c6-9fdf-1932956cff3e-scripts\") pod \"17f0d5ca-99e5-47c6-9fdf-1932956cff3e\" (UID: \"17f0d5ca-99e5-47c6-9fdf-1932956cff3e\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.754438 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/17f0d5ca-99e5-47c6-9fdf-1932956cff3e-config-data-custom\") pod \"17f0d5ca-99e5-47c6-9fdf-1932956cff3e\" (UID: \"17f0d5ca-99e5-47c6-9fdf-1932956cff3e\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.754468 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/86139aa9-cd30-4d97-833e-a26562aebf92-internal-tls-certs\") pod \"86139aa9-cd30-4d97-833e-a26562aebf92\" (UID: \"86139aa9-cd30-4d97-833e-a26562aebf92\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.754497 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6plx\" (UniqueName: \"kubernetes.io/projected/e723031c-0772-49f7-ba16-f635ddd53dcc-kube-api-access-b6plx\") pod \"e723031c-0772-49f7-ba16-f635ddd53dcc\" (UID: \"e723031c-0772-49f7-ba16-f635ddd53dcc\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.754518 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0a147b4-4445-4f7b-b22f-97db02340306-config-data\") pod 
\"a0a147b4-4445-4f7b-b22f-97db02340306\" (UID: \"a0a147b4-4445-4f7b-b22f-97db02340306\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.754640 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ght4g\" (UniqueName: \"kubernetes.io/projected/a0a147b4-4445-4f7b-b22f-97db02340306-kube-api-access-ght4g\") pod \"a0a147b4-4445-4f7b-b22f-97db02340306\" (UID: \"a0a147b4-4445-4f7b-b22f-97db02340306\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.754758 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rdns\" (UniqueName: \"kubernetes.io/projected/cb612c10-4436-4c79-b990-cbc7b403eed5-kube-api-access-7rdns\") pod \"cb612c10-4436-4c79-b990-cbc7b403eed5\" (UID: \"cb612c10-4436-4c79-b990-cbc7b403eed5\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.754812 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/86139aa9-cd30-4d97-833e-a26562aebf92-public-tls-certs\") pod \"86139aa9-cd30-4d97-833e-a26562aebf92\" (UID: \"86139aa9-cd30-4d97-833e-a26562aebf92\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.754856 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17f0d5ca-99e5-47c6-9fdf-1932956cff3e-logs\") pod \"17f0d5ca-99e5-47c6-9fdf-1932956cff3e\" (UID: \"17f0d5ca-99e5-47c6-9fdf-1932956cff3e\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.754902 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86139aa9-cd30-4d97-833e-a26562aebf92-combined-ca-bundle\") pod \"86139aa9-cd30-4d97-833e-a26562aebf92\" (UID: \"86139aa9-cd30-4d97-833e-a26562aebf92\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.755059 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zdwh\" (UniqueName: \"kubernetes.io/projected/17f0d5ca-99e5-47c6-9fdf-1932956cff3e-kube-api-access-6zdwh\") pod \"17f0d5ca-99e5-47c6-9fdf-1932956cff3e\" (UID: \"17f0d5ca-99e5-47c6-9fdf-1932956cff3e\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.755100 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17f0d5ca-99e5-47c6-9fdf-1932956cff3e-combined-ca-bundle\") pod \"17f0d5ca-99e5-47c6-9fdf-1932956cff3e\" (UID: \"17f0d5ca-99e5-47c6-9fdf-1932956cff3e\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.755129 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0a147b4-4445-4f7b-b22f-97db02340306-combined-ca-bundle\") pod \"a0a147b4-4445-4f7b-b22f-97db02340306\" (UID: \"a0a147b4-4445-4f7b-b22f-97db02340306\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.755352 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86139aa9-cd30-4d97-833e-a26562aebf92-logs\") pod \"86139aa9-cd30-4d97-833e-a26562aebf92\" (UID: \"86139aa9-cd30-4d97-833e-a26562aebf92\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.755378 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/17f0d5ca-99e5-47c6-9fdf-1932956cff3e-etc-machine-id\") 
pod \"17f0d5ca-99e5-47c6-9fdf-1932956cff3e\" (UID: \"17f0d5ca-99e5-47c6-9fdf-1932956cff3e\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.755422 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17f0d5ca-99e5-47c6-9fdf-1932956cff3e-config-data\") pod \"17f0d5ca-99e5-47c6-9fdf-1932956cff3e\" (UID: \"17f0d5ca-99e5-47c6-9fdf-1932956cff3e\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.755475 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/17f0d5ca-99e5-47c6-9fdf-1932956cff3e-public-tls-certs\") pod \"17f0d5ca-99e5-47c6-9fdf-1932956cff3e\" (UID: \"17f0d5ca-99e5-47c6-9fdf-1932956cff3e\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.755509 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwcq6\" (UniqueName: \"kubernetes.io/projected/9aeb3608-353b-4b44-8797-46affdc587a7-kube-api-access-gwcq6\") pod \"9aeb3608-353b-4b44-8797-46affdc587a7\" (UID: \"9aeb3608-353b-4b44-8797-46affdc587a7\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.756664 4772 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.756685 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.756695 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2c6dt\" (UniqueName: \"kubernetes.io/projected/56fb678a-814f-4328-8b49-9226512bf10e-kube-api-access-2c6dt\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.756710 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.756720 4772 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.756729 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56fb678a-814f-4328-8b49-9226512bf10e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.759136 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56fb678a-814f-4328-8b49-9226512bf10e-config-data" (OuterVolumeSpecName: "config-data") pod "56fb678a-814f-4328-8b49-9226512bf10e" (UID: "56fb678a-814f-4328-8b49-9226512bf10e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.781165 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17f0d5ca-99e5-47c6-9fdf-1932956cff3e-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "17f0d5ca-99e5-47c6-9fdf-1932956cff3e" (UID: "17f0d5ca-99e5-47c6-9fdf-1932956cff3e"). 
InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.792495 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86139aa9-cd30-4d97-833e-a26562aebf92-logs" (OuterVolumeSpecName: "logs") pod "86139aa9-cd30-4d97-833e-a26562aebf92" (UID: "86139aa9-cd30-4d97-833e-a26562aebf92"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.795984 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17f0d5ca-99e5-47c6-9fdf-1932956cff3e-logs" (OuterVolumeSpecName: "logs") pod "17f0d5ca-99e5-47c6-9fdf-1932956cff3e" (UID: "17f0d5ca-99e5-47c6-9fdf-1932956cff3e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.796137 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/706b8e5a-87b8-429e-aea7-e7e5f161182f-kube-api-access-b7kqn" (OuterVolumeSpecName: "kube-api-access-b7kqn") pod "706b8e5a-87b8-429e-aea7-e7e5f161182f" (UID: "706b8e5a-87b8-429e-aea7-e7e5f161182f"). InnerVolumeSpecName "kube-api-access-b7kqn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.797629 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb612c10-4436-4c79-b990-cbc7b403eed5-kube-api-access-7rdns" (OuterVolumeSpecName: "kube-api-access-7rdns") pod "cb612c10-4436-4c79-b990-cbc7b403eed5" (UID: "cb612c10-4436-4c79-b990-cbc7b403eed5"). InnerVolumeSpecName "kube-api-access-7rdns". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.798539 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.798886 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9aeb3608-353b-4b44-8797-46affdc587a7-kube-api-access-gwcq6" (OuterVolumeSpecName: "kube-api-access-gwcq6") pod "9aeb3608-353b-4b44-8797-46affdc587a7" (UID: "9aeb3608-353b-4b44-8797-46affdc587a7"). InnerVolumeSpecName "kube-api-access-gwcq6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.802028 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.802978 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86139aa9-cd30-4d97-833e-a26562aebf92-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "86139aa9-cd30-4d97-833e-a26562aebf92" (UID: "86139aa9-cd30-4d97-833e-a26562aebf92"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.804931 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e723031c-0772-49f7-ba16-f635ddd53dcc-kube-api-access-b6plx" (OuterVolumeSpecName: "kube-api-access-b6plx") pod "e723031c-0772-49f7-ba16-f635ddd53dcc" (UID: "e723031c-0772-49f7-ba16-f635ddd53dcc"). InnerVolumeSpecName "kube-api-access-b6plx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.814819 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17f0d5ca-99e5-47c6-9fdf-1932956cff3e-scripts" (OuterVolumeSpecName: "scripts") pod "17f0d5ca-99e5-47c6-9fdf-1932956cff3e" (UID: "17f0d5ca-99e5-47c6-9fdf-1932956cff3e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.815245 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_4f74827a-8354-492b-b09d-350768ba912d/ovn-northd/0.log" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.815323 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.819377 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0a147b4-4445-4f7b-b22f-97db02340306-kube-api-access-ght4g" (OuterVolumeSpecName: "kube-api-access-ght4g") pod "a0a147b4-4445-4f7b-b22f-97db02340306" (UID: "a0a147b4-4445-4f7b-b22f-97db02340306"). InnerVolumeSpecName "kube-api-access-ght4g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.826299 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17f0d5ca-99e5-47c6-9fdf-1932956cff3e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "17f0d5ca-99e5-47c6-9fdf-1932956cff3e" (UID: "17f0d5ca-99e5-47c6-9fdf-1932956cff3e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.826496 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.830332 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86139aa9-cd30-4d97-833e-a26562aebf92-kube-api-access-zt5xr" (OuterVolumeSpecName: "kube-api-access-zt5xr") pod "86139aa9-cd30-4d97-833e-a26562aebf92" (UID: "86139aa9-cd30-4d97-833e-a26562aebf92"). InnerVolumeSpecName "kube-api-access-zt5xr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.834168 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56fb678a-814f-4328-8b49-9226512bf10e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "56fb678a-814f-4328-8b49-9226512bf10e" (UID: "56fb678a-814f-4328-8b49-9226512bf10e"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.848872 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.860552 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5mrl\" (UniqueName: \"kubernetes.io/projected/30007403-085b-4874-88b7-8b27426fd4f7-kube-api-access-g5mrl\") pod \"30007403-085b-4874-88b7-8b27426fd4f7\" (UID: \"30007403-085b-4874-88b7-8b27426fd4f7\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.860697 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsd46\" (UniqueName: \"kubernetes.io/projected/941a38a8-56e0-4061-8891-0cd3815477a4-kube-api-access-vsd46\") pod \"941a38a8-56e0-4061-8891-0cd3815477a4\" (UID: \"941a38a8-56e0-4061-8891-0cd3815477a4\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.860888 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smb8k\" (UniqueName: \"kubernetes.io/projected/7d122410-121a-47cd-9465-e5c6f85cf2b2-kube-api-access-smb8k\") pod \"7d122410-121a-47cd-9465-e5c6f85cf2b2\" (UID: \"7d122410-121a-47cd-9465-e5c6f85cf2b2\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.861437 4772 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86139aa9-cd30-4d97-833e-a26562aebf92-logs\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.861453 4772 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/17f0d5ca-99e5-47c6-9fdf-1932956cff3e-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.861463 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwcq6\" (UniqueName: \"kubernetes.io/projected/9aeb3608-353b-4b44-8797-46affdc587a7-kube-api-access-gwcq6\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.861472 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zt5xr\" (UniqueName: \"kubernetes.io/projected/86139aa9-cd30-4d97-833e-a26562aebf92-kube-api-access-zt5xr\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.861483 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7kqn\" (UniqueName: \"kubernetes.io/projected/706b8e5a-87b8-429e-aea7-e7e5f161182f-kube-api-access-b7kqn\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.861492 4772 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/86139aa9-cd30-4d97-833e-a26562aebf92-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.861499 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17f0d5ca-99e5-47c6-9fdf-1932956cff3e-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.861507 4772 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/17f0d5ca-99e5-47c6-9fdf-1932956cff3e-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.861516 4772 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6plx\" (UniqueName: \"kubernetes.io/projected/e723031c-0772-49f7-ba16-f635ddd53dcc-kube-api-access-b6plx\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.861524 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ght4g\" (UniqueName: \"kubernetes.io/projected/a0a147b4-4445-4f7b-b22f-97db02340306-kube-api-access-ght4g\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.861532 4772 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/56fb678a-814f-4328-8b49-9226512bf10e-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.861540 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rdns\" (UniqueName: \"kubernetes.io/projected/cb612c10-4436-4c79-b990-cbc7b403eed5-kube-api-access-7rdns\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.861547 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56fb678a-814f-4328-8b49-9226512bf10e-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.861557 4772 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17f0d5ca-99e5-47c6-9fdf-1932956cff3e-logs\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.871939 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/941a38a8-56e0-4061-8891-0cd3815477a4-kube-api-access-vsd46" (OuterVolumeSpecName: "kube-api-access-vsd46") pod "941a38a8-56e0-4061-8891-0cd3815477a4" (UID: "941a38a8-56e0-4061-8891-0cd3815477a4"). InnerVolumeSpecName "kube-api-access-vsd46". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.876845 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17f0d5ca-99e5-47c6-9fdf-1932956cff3e-kube-api-access-6zdwh" (OuterVolumeSpecName: "kube-api-access-6zdwh") pod "17f0d5ca-99e5-47c6-9fdf-1932956cff3e" (UID: "17f0d5ca-99e5-47c6-9fdf-1932956cff3e"). InnerVolumeSpecName "kube-api-access-6zdwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.886353 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d122410-121a-47cd-9465-e5c6f85cf2b2-kube-api-access-smb8k" (OuterVolumeSpecName: "kube-api-access-smb8k") pod "7d122410-121a-47cd-9465-e5c6f85cf2b2" (UID: "7d122410-121a-47cd-9465-e5c6f85cf2b2"). InnerVolumeSpecName "kube-api-access-smb8k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.914896 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17f0d5ca-99e5-47c6-9fdf-1932956cff3e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "17f0d5ca-99e5-47c6-9fdf-1932956cff3e" (UID: "17f0d5ca-99e5-47c6-9fdf-1932956cff3e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.918534 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30007403-085b-4874-88b7-8b27426fd4f7-kube-api-access-g5mrl" (OuterVolumeSpecName: "kube-api-access-g5mrl") pod "30007403-085b-4874-88b7-8b27426fd4f7" (UID: "30007403-085b-4874-88b7-8b27426fd4f7"). InnerVolumeSpecName "kube-api-access-g5mrl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.935006 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5cd786c776-rmj8k"] Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.943500 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-5cd786c776-rmj8k"] Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.962466 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c6ce3fb-4529-4856-a326-bb0e9ea0ae40-scripts\") pod \"2c6ce3fb-4529-4856-a326-bb0e9ea0ae40\" (UID: \"2c6ce3fb-4529-4856-a326-bb0e9ea0ae40\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.962505 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4f74827a-8354-492b-b09d-350768ba912d-scripts\") pod \"4f74827a-8354-492b-b09d-350768ba912d\" (UID: \"4f74827a-8354-492b-b09d-350768ba912d\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.962544 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3fbd9ebd-2c62-4336-9946-792e4b3c83db-kolla-config\") pod \"3fbd9ebd-2c62-4336-9946-792e4b3c83db\" (UID: \"3fbd9ebd-2c62-4336-9946-792e4b3c83db\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.962581 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c6ce3fb-4529-4856-a326-bb0e9ea0ae40-combined-ca-bundle\") pod \"2c6ce3fb-4529-4856-a326-bb0e9ea0ae40\" (UID: \"2c6ce3fb-4529-4856-a326-bb0e9ea0ae40\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.962598 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f74827a-8354-492b-b09d-350768ba912d-combined-ca-bundle\") pod \"4f74827a-8354-492b-b09d-350768ba912d\" (UID: \"4f74827a-8354-492b-b09d-350768ba912d\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.962635 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxdh7\" (UniqueName: \"kubernetes.io/projected/3fbd9ebd-2c62-4336-9946-792e4b3c83db-kube-api-access-wxdh7\") pod \"3fbd9ebd-2c62-4336-9946-792e4b3c83db\" (UID: \"3fbd9ebd-2c62-4336-9946-792e4b3c83db\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.962677 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjcvn\" (UniqueName: \"kubernetes.io/projected/4f74827a-8354-492b-b09d-350768ba912d-kube-api-access-rjcvn\") pod \"4f74827a-8354-492b-b09d-350768ba912d\" (UID: \"4f74827a-8354-492b-b09d-350768ba912d\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.962692 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bdbgm\" (UniqueName: 
\"kubernetes.io/projected/2c6ce3fb-4529-4856-a326-bb0e9ea0ae40-kube-api-access-bdbgm\") pod \"2c6ce3fb-4529-4856-a326-bb0e9ea0ae40\" (UID: \"2c6ce3fb-4529-4856-a326-bb0e9ea0ae40\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.962713 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3fbd9ebd-2c62-4336-9946-792e4b3c83db-config-data\") pod \"3fbd9ebd-2c62-4336-9946-792e4b3c83db\" (UID: \"3fbd9ebd-2c62-4336-9946-792e4b3c83db\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.962743 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2c6ce3fb-4529-4856-a326-bb0e9ea0ae40-sg-core-conf-yaml\") pod \"2c6ce3fb-4529-4856-a326-bb0e9ea0ae40\" (UID: \"2c6ce3fb-4529-4856-a326-bb0e9ea0ae40\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.962757 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f74827a-8354-492b-b09d-350768ba912d-metrics-certs-tls-certs\") pod \"4f74827a-8354-492b-b09d-350768ba912d\" (UID: \"4f74827a-8354-492b-b09d-350768ba912d\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.962790 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c6ce3fb-4529-4856-a326-bb0e9ea0ae40-config-data\") pod \"2c6ce3fb-4529-4856-a326-bb0e9ea0ae40\" (UID: \"2c6ce3fb-4529-4856-a326-bb0e9ea0ae40\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.962859 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2c6ce3fb-4529-4856-a326-bb0e9ea0ae40-log-httpd\") pod \"2c6ce3fb-4529-4856-a326-bb0e9ea0ae40\" (UID: \"2c6ce3fb-4529-4856-a326-bb0e9ea0ae40\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.962887 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f74827a-8354-492b-b09d-350768ba912d-ovn-northd-tls-certs\") pod \"4f74827a-8354-492b-b09d-350768ba912d\" (UID: \"4f74827a-8354-492b-b09d-350768ba912d\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.962904 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2c6ce3fb-4529-4856-a326-bb0e9ea0ae40-run-httpd\") pod \"2c6ce3fb-4529-4856-a326-bb0e9ea0ae40\" (UID: \"2c6ce3fb-4529-4856-a326-bb0e9ea0ae40\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.962944 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fbd9ebd-2c62-4336-9946-792e4b3c83db-combined-ca-bundle\") pod \"3fbd9ebd-2c62-4336-9946-792e4b3c83db\" (UID: \"3fbd9ebd-2c62-4336-9946-792e4b3c83db\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.962963 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/3fbd9ebd-2c62-4336-9946-792e4b3c83db-memcached-tls-certs\") pod \"3fbd9ebd-2c62-4336-9946-792e4b3c83db\" (UID: \"3fbd9ebd-2c62-4336-9946-792e4b3c83db\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.962980 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/4f74827a-8354-492b-b09d-350768ba912d-ovn-rundir\") pod \"4f74827a-8354-492b-b09d-350768ba912d\" (UID: \"4f74827a-8354-492b-b09d-350768ba912d\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.962998 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c6ce3fb-4529-4856-a326-bb0e9ea0ae40-ceilometer-tls-certs\") pod \"2c6ce3fb-4529-4856-a326-bb0e9ea0ae40\" (UID: \"2c6ce3fb-4529-4856-a326-bb0e9ea0ae40\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.963027 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f74827a-8354-492b-b09d-350768ba912d-config\") pod \"4f74827a-8354-492b-b09d-350768ba912d\" (UID: \"4f74827a-8354-492b-b09d-350768ba912d\") " Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.963545 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smb8k\" (UniqueName: \"kubernetes.io/projected/7d122410-121a-47cd-9465-e5c6f85cf2b2-kube-api-access-smb8k\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.963568 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5mrl\" (UniqueName: \"kubernetes.io/projected/30007403-085b-4874-88b7-8b27426fd4f7-kube-api-access-g5mrl\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.963580 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zdwh\" (UniqueName: \"kubernetes.io/projected/17f0d5ca-99e5-47c6-9fdf-1932956cff3e-kube-api-access-6zdwh\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.963589 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17f0d5ca-99e5-47c6-9fdf-1932956cff3e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.963598 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vsd46\" (UniqueName: \"kubernetes.io/projected/941a38a8-56e0-4061-8891-0cd3815477a4-kube-api-access-vsd46\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.964204 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f74827a-8354-492b-b09d-350768ba912d-config" (OuterVolumeSpecName: "config") pod "4f74827a-8354-492b-b09d-350768ba912d" (UID: "4f74827a-8354-492b-b09d-350768ba912d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.966867 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fbd9ebd-2c62-4336-9946-792e4b3c83db-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "3fbd9ebd-2c62-4336-9946-792e4b3c83db" (UID: "3fbd9ebd-2c62-4336-9946-792e4b3c83db"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.967654 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f74827a-8354-492b-b09d-350768ba912d-scripts" (OuterVolumeSpecName: "scripts") pod "4f74827a-8354-492b-b09d-350768ba912d" (UID: "4f74827a-8354-492b-b09d-350768ba912d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.970392 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f74827a-8354-492b-b09d-350768ba912d-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "4f74827a-8354-492b-b09d-350768ba912d" (UID: "4f74827a-8354-492b-b09d-350768ba912d"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.971110 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c6ce3fb-4529-4856-a326-bb0e9ea0ae40-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2c6ce3fb-4529-4856-a326-bb0e9ea0ae40" (UID: "2c6ce3fb-4529-4856-a326-bb0e9ea0ae40"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.971691 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fbd9ebd-2c62-4336-9946-792e4b3c83db-config-data" (OuterVolumeSpecName: "config-data") pod "3fbd9ebd-2c62-4336-9946-792e4b3c83db" (UID: "3fbd9ebd-2c62-4336-9946-792e4b3c83db"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.972104 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c6ce3fb-4529-4856-a326-bb0e9ea0ae40-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2c6ce3fb-4529-4856-a326-bb0e9ea0ae40" (UID: "2c6ce3fb-4529-4856-a326-bb0e9ea0ae40"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.979682 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fbd9ebd-2c62-4336-9946-792e4b3c83db-kube-api-access-wxdh7" (OuterVolumeSpecName: "kube-api-access-wxdh7") pod "3fbd9ebd-2c62-4336-9946-792e4b3c83db" (UID: "3fbd9ebd-2c62-4336-9946-792e4b3c83db"). InnerVolumeSpecName "kube-api-access-wxdh7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:03:24 crc kubenswrapper[4772]: I1122 11:03:24.986558 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f74827a-8354-492b-b09d-350768ba912d-kube-api-access-rjcvn" (OuterVolumeSpecName: "kube-api-access-rjcvn") pod "4f74827a-8354-492b-b09d-350768ba912d" (UID: "4f74827a-8354-492b-b09d-350768ba912d"). InnerVolumeSpecName "kube-api-access-rjcvn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.000230 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.011956 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.032193 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c6ce3fb-4529-4856-a326-bb0e9ea0ae40-scripts" (OuterVolumeSpecName: "scripts") pod "2c6ce3fb-4529-4856-a326-bb0e9ea0ae40" (UID: "2c6ce3fb-4529-4856-a326-bb0e9ea0ae40"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.032847 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c6ce3fb-4529-4856-a326-bb0e9ea0ae40-kube-api-access-bdbgm" (OuterVolumeSpecName: "kube-api-access-bdbgm") pod "2c6ce3fb-4529-4856-a326-bb0e9ea0ae40" (UID: "2c6ce3fb-4529-4856-a326-bb0e9ea0ae40"). InnerVolumeSpecName "kube-api-access-bdbgm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.040711 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56fb678a-814f-4328-8b49-9226512bf10e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "56fb678a-814f-4328-8b49-9226512bf10e" (UID: "56fb678a-814f-4328-8b49-9226512bf10e"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.049283 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0a147b4-4445-4f7b-b22f-97db02340306-config-data" (OuterVolumeSpecName: "config-data") pod "a0a147b4-4445-4f7b-b22f-97db02340306" (UID: "a0a147b4-4445-4f7b-b22f-97db02340306"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.068250 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdqvq\" (UniqueName: \"kubernetes.io/projected/3d9c0ba3-ac92-4821-acb3-fd40e750bdae-kube-api-access-xdqvq\") pod \"keystone7a1c-account-delete-b8xp2\" (UID: \"3d9c0ba3-ac92-4821-acb3-fd40e750bdae\") " pod="openstack/keystone7a1c-account-delete-b8xp2" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.068632 4772 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3fbd9ebd-2c62-4336-9946-792e4b3c83db-kolla-config\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.068741 4772 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/56fb678a-814f-4328-8b49-9226512bf10e-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.068838 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxdh7\" (UniqueName: \"kubernetes.io/projected/3fbd9ebd-2c62-4336-9946-792e4b3c83db-kube-api-access-wxdh7\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.068917 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjcvn\" (UniqueName: \"kubernetes.io/projected/4f74827a-8354-492b-b09d-350768ba912d-kube-api-access-rjcvn\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.068996 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bdbgm\" (UniqueName: \"kubernetes.io/projected/2c6ce3fb-4529-4856-a326-bb0e9ea0ae40-kube-api-access-bdbgm\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.069091 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3fbd9ebd-2c62-4336-9946-792e4b3c83db-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.069185 4772 reconciler_common.go:293] "Volume 
detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0a147b4-4445-4f7b-b22f-97db02340306-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.069264 4772 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2c6ce3fb-4529-4856-a326-bb0e9ea0ae40-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.069336 4772 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2c6ce3fb-4529-4856-a326-bb0e9ea0ae40-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.069410 4772 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4f74827a-8354-492b-b09d-350768ba912d-ovn-rundir\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.069480 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f74827a-8354-492b-b09d-350768ba912d-config\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.069549 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4f74827a-8354-492b-b09d-350768ba912d-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.069634 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c6ce3fb-4529-4856-a326-bb0e9ea0ae40-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:25 crc kubenswrapper[4772]: E1122 11:03:25.078750 4772 projected.go:194] Error preparing data for projected volume kube-api-access-xdqvq for pod openstack/keystone7a1c-account-delete-b8xp2: failed to fetch token: serviceaccounts "galera-openstack" not found Nov 22 11:03:25 crc kubenswrapper[4772]: E1122 11:03:25.078839 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3d9c0ba3-ac92-4821-acb3-fd40e750bdae-kube-api-access-xdqvq podName:3d9c0ba3-ac92-4821-acb3-fd40e750bdae nodeName:}" failed. No retries permitted until 2025-11-22 11:03:29.078816391 +0000 UTC m=+1529.318260885 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-xdqvq" (UniqueName: "kubernetes.io/projected/3d9c0ba3-ac92-4821-acb3-fd40e750bdae-kube-api-access-xdqvq") pod "keystone7a1c-account-delete-b8xp2" (UID: "3d9c0ba3-ac92-4821-acb3-fd40e750bdae") : failed to fetch token: serviceaccounts "galera-openstack" not found Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.142135 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.142207 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.144773 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0a147b4-4445-4f7b-b22f-97db02340306-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a0a147b4-4445-4f7b-b22f-97db02340306" (UID: "a0a147b4-4445-4f7b-b22f-97db02340306"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.148405 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.156636 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.192596 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0a147b4-4445-4f7b-b22f-97db02340306-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.257417 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fbd9ebd-2c62-4336-9946-792e4b3c83db-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3fbd9ebd-2c62-4336-9946-792e4b3c83db" (UID: "3fbd9ebd-2c62-4336-9946-792e4b3c83db"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.278511 4772 generic.go:334] "Generic (PLEG): container finished" podID="5ce19f6b-73e1-48b9-810a-f9d97a14fe7b" containerID="6d2c4827d4cb49d5883df31ba20e437493bf839f2d63f4c0a3fdbedc0e23ec2c" exitCode=0 Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.278646 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"5ce19f6b-73e1-48b9-810a-f9d97a14fe7b","Type":"ContainerDied","Data":"6d2c4827d4cb49d5883df31ba20e437493bf839f2d63f4c0a3fdbedc0e23ec2c"} Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.278677 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"5ce19f6b-73e1-48b9-810a-f9d97a14fe7b","Type":"ContainerDied","Data":"7dfba9745ae45e12db2392840e83c2d00d1c654dbb39032ca5fbad4ebe6fdf68"} Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.278711 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7dfba9745ae45e12db2392840e83c2d00d1c654dbb39032ca5fbad4ebe6fdf68" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.291270 4772 generic.go:334] "Generic (PLEG): container finished" podID="468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea" containerID="96e2450010f46499b0808158113b617f9b05995cffcb394c9f26383aeac1a85f" exitCode=0 Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.291361 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea","Type":"ContainerDied","Data":"96e2450010f46499b0808158113b617f9b05995cffcb394c9f26383aeac1a85f"} Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.293589 4772 generic.go:334] "Generic (PLEG): container finished" podID="020f49e7-c73f-460c-a068-75051e73cf90" containerID="8e6b764b39cbb94f171e4a3905646fc7d88ee4093584bde74bbc1f1676a19df8" exitCode=0 Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.293634 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cfdd58ff7-mgd8m" event={"ID":"020f49e7-c73f-460c-a068-75051e73cf90","Type":"ContainerDied","Data":"8e6b764b39cbb94f171e4a3905646fc7d88ee4093584bde74bbc1f1676a19df8"} Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.293733 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.293768 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.293853 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glancee71c-account-delete-64r2z" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.293877 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.293911 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.293925 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.293965 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican3436-account-delete-4w4qv" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.294006 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement8972-account-delete-pqznd" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.294176 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron45e9-account-delete-9pn28" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.294188 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone7a1c-account-delete-b8xp2" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.294220 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.294255 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/novaapi0c20-account-delete-g5lnb" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.294289 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/novacell05903-account-delete-c7857" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.294321 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder2d34-account-delete-7qhqb" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.294371 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6876658948-bzr5z" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.294828 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fbd9ebd-2c62-4336-9946-792e4b3c83db-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.303228 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.314287 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f74827a-8354-492b-b09d-350768ba912d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4f74827a-8354-492b-b09d-350768ba912d" (UID: "4f74827a-8354-492b-b09d-350768ba912d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.341244 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fbd9ebd-2c62-4336-9946-792e4b3c83db-memcached-tls-certs" (OuterVolumeSpecName: "memcached-tls-certs") pod "3fbd9ebd-2c62-4336-9946-792e4b3c83db" (UID: "3fbd9ebd-2c62-4336-9946-792e4b3c83db"). InnerVolumeSpecName "memcached-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.345699 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86139aa9-cd30-4d97-833e-a26562aebf92-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "86139aa9-cd30-4d97-833e-a26562aebf92" (UID: "86139aa9-cd30-4d97-833e-a26562aebf92"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.347260 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86139aa9-cd30-4d97-833e-a26562aebf92-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "86139aa9-cd30-4d97-833e-a26562aebf92" (UID: "86139aa9-cd30-4d97-833e-a26562aebf92"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.349351 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86139aa9-cd30-4d97-833e-a26562aebf92-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "86139aa9-cd30-4d97-833e-a26562aebf92" (UID: "86139aa9-cd30-4d97-833e-a26562aebf92"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.352287 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c6ce3fb-4529-4856-a326-bb0e9ea0ae40-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2c6ce3fb-4529-4856-a326-bb0e9ea0ae40" (UID: "2c6ce3fb-4529-4856-a326-bb0e9ea0ae40"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.353239 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17f0d5ca-99e5-47c6-9fdf-1932956cff3e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "17f0d5ca-99e5-47c6-9fdf-1932956cff3e" (UID: "17f0d5ca-99e5-47c6-9fdf-1932956cff3e"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.362357 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86139aa9-cd30-4d97-833e-a26562aebf92-config-data" (OuterVolumeSpecName: "config-data") pod "86139aa9-cd30-4d97-833e-a26562aebf92" (UID: "86139aa9-cd30-4d97-833e-a26562aebf92"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.365146 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17f0d5ca-99e5-47c6-9fdf-1932956cff3e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "17f0d5ca-99e5-47c6-9fdf-1932956cff3e" (UID: "17f0d5ca-99e5-47c6-9fdf-1932956cff3e"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.374655 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f74827a-8354-492b-b09d-350768ba912d-ovn-northd-tls-certs" (OuterVolumeSpecName: "ovn-northd-tls-certs") pod "4f74827a-8354-492b-b09d-350768ba912d" (UID: "4f74827a-8354-492b-b09d-350768ba912d"). InnerVolumeSpecName "ovn-northd-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.392744 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17f0d5ca-99e5-47c6-9fdf-1932956cff3e-config-data" (OuterVolumeSpecName: "config-data") pod "17f0d5ca-99e5-47c6-9fdf-1932956cff3e" (UID: "17f0d5ca-99e5-47c6-9fdf-1932956cff3e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.392878 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f74827a-8354-492b-b09d-350768ba912d-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "4f74827a-8354-492b-b09d-350768ba912d" (UID: "4f74827a-8354-492b-b09d-350768ba912d"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.396252 4772 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/17f0d5ca-99e5-47c6-9fdf-1932956cff3e-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.396283 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86139aa9-cd30-4d97-833e-a26562aebf92-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.396296 4772 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/17f0d5ca-99e5-47c6-9fdf-1932956cff3e-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.396311 4772 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f74827a-8354-492b-b09d-350768ba912d-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.396323 4772 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2c6ce3fb-4529-4856-a326-bb0e9ea0ae40-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.396335 4772 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/86139aa9-cd30-4d97-833e-a26562aebf92-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.396346 4772 reconciler_common.go:293] "Volume detached for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f74827a-8354-492b-b09d-350768ba912d-ovn-northd-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.396358 4772 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/86139aa9-cd30-4d97-833e-a26562aebf92-public-tls-certs\") on node \"crc\" 
DevicePath \"\"" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.396369 4772 reconciler_common.go:293] "Volume detached for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/3fbd9ebd-2c62-4336-9946-792e4b3c83db-memcached-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.396381 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86139aa9-cd30-4d97-833e-a26562aebf92-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.396392 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f74827a-8354-492b-b09d-350768ba912d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.396405 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17f0d5ca-99e5-47c6-9fdf-1932956cff3e-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.423109 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c6ce3fb-4529-4856-a326-bb0e9ea0ae40-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "2c6ce3fb-4529-4856-a326-bb0e9ea0ae40" (UID: "2c6ce3fb-4529-4856-a326-bb0e9ea0ae40"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.425420 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="027dc32b-06dd-45bf-9aad-8e0c92b44a2b" path="/var/lib/kubelet/pods/027dc32b-06dd-45bf-9aad-8e0c92b44a2b/volumes" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.426022 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13c1f859-42ed-484f-88cb-5349a7b64dda" path="/var/lib/kubelet/pods/13c1f859-42ed-484f-88cb-5349a7b64dda/volumes" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.426625 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14ed2945-ef18-49de-9c18-679e011d3df5" path="/var/lib/kubelet/pods/14ed2945-ef18-49de-9c18-679e011d3df5/volumes" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.428036 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c994b4f-e182-481a-a3ba-17dc9656c70c" path="/var/lib/kubelet/pods/1c994b4f-e182-481a-a3ba-17dc9656c70c/volumes" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.428624 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33b26633-94ac-4439-b1ab-ab225d2e562b" path="/var/lib/kubelet/pods/33b26633-94ac-4439-b1ab-ab225d2e562b/volumes" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.429077 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45d574ce-36bc-461c-a85a-738b71392ed6" path="/var/lib/kubelet/pods/45d574ce-36bc-461c-a85a-738b71392ed6/volumes" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.430059 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="464d950a-e1bb-4efb-afdf-37b97a62a42c" path="/var/lib/kubelet/pods/464d950a-e1bb-4efb-afdf-37b97a62a42c/volumes" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.430659 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ce40448-07b1-492e-bb7c-48aaf2bb3ce9" path="/var/lib/kubelet/pods/4ce40448-07b1-492e-bb7c-48aaf2bb3ce9/volumes" Nov 
22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.431695 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fbd4e9d-9635-462d-abba-763daf0da369" path="/var/lib/kubelet/pods/5fbd4e9d-9635-462d-abba-763daf0da369/volumes" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.432432 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62770fd6-1000-4477-ac95-7a4eaa489732" path="/var/lib/kubelet/pods/62770fd6-1000-4477-ac95-7a4eaa489732/volumes" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.432986 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8b92f55-36d8-4358-9b57-734762f225c4" path="/var/lib/kubelet/pods/b8b92f55-36d8-4358-9b57-734762f225c4/volumes" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.434370 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4828519-a6ad-4851-b9c2-134a12f373ac" path="/var/lib/kubelet/pods/c4828519-a6ad-4851-b9c2-134a12f373ac/volumes" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.434994 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddb4a824-3a8a-4287-b206-94832099e15b" path="/var/lib/kubelet/pods/ddb4a824-3a8a-4287-b206-94832099e15b/volumes" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.435198 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c6ce3fb-4529-4856-a326-bb0e9ea0ae40-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2c6ce3fb-4529-4856-a326-bb0e9ea0ae40" (UID: "2c6ce3fb-4529-4856-a326-bb0e9ea0ae40"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.435633 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7" path="/var/lib/kubelet/pods/ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7/volumes" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.436706 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0a6ce78-ec31-4452-8a8b-e07a29d72200" path="/var/lib/kubelet/pods/f0a6ce78-ec31-4452-8a8b-e07a29d72200/volumes" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.477262 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c6ce3fb-4529-4856-a326-bb0e9ea0ae40-config-data" (OuterVolumeSpecName: "config-data") pod "2c6ce3fb-4529-4856-a326-bb0e9ea0ae40" (UID: "2c6ce3fb-4529-4856-a326-bb0e9ea0ae40"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.481809 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.498012 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c6ce3fb-4529-4856-a326-bb0e9ea0ae40-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.498060 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c6ce3fb-4529-4856-a326-bb0e9ea0ae40-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.498069 4772 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c6ce3fb-4529-4856-a326-bb0e9ea0ae40-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.552995 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone7a1c-account-delete-b8xp2" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.599399 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"5ce19f6b-73e1-48b9-810a-f9d97a14fe7b\" (UID: \"5ce19f6b-73e1-48b9-810a-f9d97a14fe7b\") " Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.599495 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5ce19f6b-73e1-48b9-810a-f9d97a14fe7b-rabbitmq-tls\") pod \"5ce19f6b-73e1-48b9-810a-f9d97a14fe7b\" (UID: \"5ce19f6b-73e1-48b9-810a-f9d97a14fe7b\") " Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.599559 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5ce19f6b-73e1-48b9-810a-f9d97a14fe7b-rabbitmq-confd\") pod \"5ce19f6b-73e1-48b9-810a-f9d97a14fe7b\" (UID: \"5ce19f6b-73e1-48b9-810a-f9d97a14fe7b\") " Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.599588 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5ce19f6b-73e1-48b9-810a-f9d97a14fe7b-pod-info\") pod \"5ce19f6b-73e1-48b9-810a-f9d97a14fe7b\" (UID: \"5ce19f6b-73e1-48b9-810a-f9d97a14fe7b\") " Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.599645 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfb6z\" (UniqueName: \"kubernetes.io/projected/5ce19f6b-73e1-48b9-810a-f9d97a14fe7b-kube-api-access-mfb6z\") pod \"5ce19f6b-73e1-48b9-810a-f9d97a14fe7b\" (UID: \"5ce19f6b-73e1-48b9-810a-f9d97a14fe7b\") " Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.599688 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5ce19f6b-73e1-48b9-810a-f9d97a14fe7b-server-conf\") pod \"5ce19f6b-73e1-48b9-810a-f9d97a14fe7b\" (UID: \"5ce19f6b-73e1-48b9-810a-f9d97a14fe7b\") " Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.599766 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5ce19f6b-73e1-48b9-810a-f9d97a14fe7b-rabbitmq-erlang-cookie\") pod \"5ce19f6b-73e1-48b9-810a-f9d97a14fe7b\" (UID: \"5ce19f6b-73e1-48b9-810a-f9d97a14fe7b\") " Nov 22 11:03:25 crc 
kubenswrapper[4772]: I1122 11:03:25.599818 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5ce19f6b-73e1-48b9-810a-f9d97a14fe7b-rabbitmq-plugins\") pod \"5ce19f6b-73e1-48b9-810a-f9d97a14fe7b\" (UID: \"5ce19f6b-73e1-48b9-810a-f9d97a14fe7b\") " Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.599878 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5ce19f6b-73e1-48b9-810a-f9d97a14fe7b-config-data\") pod \"5ce19f6b-73e1-48b9-810a-f9d97a14fe7b\" (UID: \"5ce19f6b-73e1-48b9-810a-f9d97a14fe7b\") " Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.599917 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5ce19f6b-73e1-48b9-810a-f9d97a14fe7b-plugins-conf\") pod \"5ce19f6b-73e1-48b9-810a-f9d97a14fe7b\" (UID: \"5ce19f6b-73e1-48b9-810a-f9d97a14fe7b\") " Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.599951 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5ce19f6b-73e1-48b9-810a-f9d97a14fe7b-erlang-cookie-secret\") pod \"5ce19f6b-73e1-48b9-810a-f9d97a14fe7b\" (UID: \"5ce19f6b-73e1-48b9-810a-f9d97a14fe7b\") " Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.613679 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ce19f6b-73e1-48b9-810a-f9d97a14fe7b-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "5ce19f6b-73e1-48b9-810a-f9d97a14fe7b" (UID: "5ce19f6b-73e1-48b9-810a-f9d97a14fe7b"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.615812 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "persistence") pod "5ce19f6b-73e1-48b9-810a-f9d97a14fe7b" (UID: "5ce19f6b-73e1-48b9-810a-f9d97a14fe7b"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.619069 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ce19f6b-73e1-48b9-810a-f9d97a14fe7b-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "5ce19f6b-73e1-48b9-810a-f9d97a14fe7b" (UID: "5ce19f6b-73e1-48b9-810a-f9d97a14fe7b"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.630754 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ce19f6b-73e1-48b9-810a-f9d97a14fe7b-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "5ce19f6b-73e1-48b9-810a-f9d97a14fe7b" (UID: "5ce19f6b-73e1-48b9-810a-f9d97a14fe7b"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.635765 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ce19f6b-73e1-48b9-810a-f9d97a14fe7b-kube-api-access-mfb6z" (OuterVolumeSpecName: "kube-api-access-mfb6z") pod "5ce19f6b-73e1-48b9-810a-f9d97a14fe7b" (UID: "5ce19f6b-73e1-48b9-810a-f9d97a14fe7b"). 
InnerVolumeSpecName "kube-api-access-mfb6z". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.639573 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ce19f6b-73e1-48b9-810a-f9d97a14fe7b-config-data" (OuterVolumeSpecName: "config-data") pod "5ce19f6b-73e1-48b9-810a-f9d97a14fe7b" (UID: "5ce19f6b-73e1-48b9-810a-f9d97a14fe7b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.641492 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.646376 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ce19f6b-73e1-48b9-810a-f9d97a14fe7b-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "5ce19f6b-73e1-48b9-810a-f9d97a14fe7b" (UID: "5ce19f6b-73e1-48b9-810a-f9d97a14fe7b"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.661134 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ce19f6b-73e1-48b9-810a-f9d97a14fe7b-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "5ce19f6b-73e1-48b9-810a-f9d97a14fe7b" (UID: "5ce19f6b-73e1-48b9-810a-f9d97a14fe7b"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.661219 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/5ce19f6b-73e1-48b9-810a-f9d97a14fe7b-pod-info" (OuterVolumeSpecName: "pod-info") pod "5ce19f6b-73e1-48b9-810a-f9d97a14fe7b" (UID: "5ce19f6b-73e1-48b9-810a-f9d97a14fe7b"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.684666 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ce19f6b-73e1-48b9-810a-f9d97a14fe7b-server-conf" (OuterVolumeSpecName: "server-conf") pod "5ce19f6b-73e1-48b9-810a-f9d97a14fe7b" (UID: "5ce19f6b-73e1-48b9-810a-f9d97a14fe7b"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.684697 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cfdd58ff7-mgd8m" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.701944 4772 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.702001 4772 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5ce19f6b-73e1-48b9-810a-f9d97a14fe7b-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.702012 4772 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5ce19f6b-73e1-48b9-810a-f9d97a14fe7b-pod-info\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.702079 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfb6z\" (UniqueName: \"kubernetes.io/projected/5ce19f6b-73e1-48b9-810a-f9d97a14fe7b-kube-api-access-mfb6z\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.702088 4772 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5ce19f6b-73e1-48b9-810a-f9d97a14fe7b-server-conf\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.702097 4772 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5ce19f6b-73e1-48b9-810a-f9d97a14fe7b-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.702105 4772 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5ce19f6b-73e1-48b9-810a-f9d97a14fe7b-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.702113 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5ce19f6b-73e1-48b9-810a-f9d97a14fe7b-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.702123 4772 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5ce19f6b-73e1-48b9-810a-f9d97a14fe7b-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.702143 4772 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5ce19f6b-73e1-48b9-810a-f9d97a14fe7b-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.734197 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement8972-account-delete-pqznd"] Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.742846 4772 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.770619 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement8972-account-delete-pqznd"] Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.770625 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ce19f6b-73e1-48b9-810a-f9d97a14fe7b-rabbitmq-confd" (OuterVolumeSpecName: 
"rabbitmq-confd") pod "5ce19f6b-73e1-48b9-810a-f9d97a14fe7b" (UID: "5ce19f6b-73e1-48b9-810a-f9d97a14fe7b"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.781560 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.798936 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.805157 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea-server-conf\") pod \"468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea\" (UID: \"468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea\") " Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.805215 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/020f49e7-c73f-460c-a068-75051e73cf90-internal-tls-certs\") pod \"020f49e7-c73f-460c-a068-75051e73cf90\" (UID: \"020f49e7-c73f-460c-a068-75051e73cf90\") " Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.805280 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/020f49e7-c73f-460c-a068-75051e73cf90-scripts\") pod \"020f49e7-c73f-460c-a068-75051e73cf90\" (UID: \"020f49e7-c73f-460c-a068-75051e73cf90\") " Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.805339 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea-rabbitmq-confd\") pod \"468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea\" (UID: \"468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea\") " Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.805362 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea-rabbitmq-tls\") pod \"468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea\" (UID: \"468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea\") " Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.805389 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/020f49e7-c73f-460c-a068-75051e73cf90-config-data\") pod \"020f49e7-c73f-460c-a068-75051e73cf90\" (UID: \"020f49e7-c73f-460c-a068-75051e73cf90\") " Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.805437 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea-rabbitmq-plugins\") pod \"468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea\" (UID: \"468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea\") " Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.805474 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea-plugins-conf\") pod \"468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea\" (UID: \"468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea\") " Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.805504 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/020f49e7-c73f-460c-a068-75051e73cf90-credential-keys\") pod \"020f49e7-c73f-460c-a068-75051e73cf90\" (UID: \"020f49e7-c73f-460c-a068-75051e73cf90\") " Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.805531 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea-erlang-cookie-secret\") pod \"468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea\" (UID: \"468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea\") " Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.805566 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea-pod-info\") pod \"468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea\" (UID: \"468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea\") " Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.805620 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea-rabbitmq-erlang-cookie\") pod \"468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea\" (UID: \"468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea\") " Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.805676 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/020f49e7-c73f-460c-a068-75051e73cf90-public-tls-certs\") pod \"020f49e7-c73f-460c-a068-75051e73cf90\" (UID: \"020f49e7-c73f-460c-a068-75051e73cf90\") " Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.805704 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/020f49e7-c73f-460c-a068-75051e73cf90-fernet-keys\") pod \"020f49e7-c73f-460c-a068-75051e73cf90\" (UID: \"020f49e7-c73f-460c-a068-75051e73cf90\") " Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.805735 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldtrw\" (UniqueName: \"kubernetes.io/projected/020f49e7-c73f-460c-a068-75051e73cf90-kube-api-access-ldtrw\") pod \"020f49e7-c73f-460c-a068-75051e73cf90\" (UID: \"020f49e7-c73f-460c-a068-75051e73cf90\") " Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.805765 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea-config-data\") pod \"468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea\" (UID: \"468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea\") " Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.805789 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/020f49e7-c73f-460c-a068-75051e73cf90-combined-ca-bundle\") pod \"020f49e7-c73f-460c-a068-75051e73cf90\" (UID: \"020f49e7-c73f-460c-a068-75051e73cf90\") " Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.805822 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea\" (UID: \"468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea\") " Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.805864 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ld7l6\" (UniqueName: 
\"kubernetes.io/projected/468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea-kube-api-access-ld7l6\") pod \"468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea\" (UID: \"468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea\") " Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.806424 4772 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.806449 4772 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5ce19f6b-73e1-48b9-810a-f9d97a14fe7b-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.809479 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/020f49e7-c73f-460c-a068-75051e73cf90-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "020f49e7-c73f-460c-a068-75051e73cf90" (UID: "020f49e7-c73f-460c-a068-75051e73cf90"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.810236 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea-kube-api-access-ld7l6" (OuterVolumeSpecName: "kube-api-access-ld7l6") pod "468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea" (UID: "468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea"). InnerVolumeSpecName "kube-api-access-ld7l6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.810994 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/020f49e7-c73f-460c-a068-75051e73cf90-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "020f49e7-c73f-460c-a068-75051e73cf90" (UID: "020f49e7-c73f-460c-a068-75051e73cf90"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.813405 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea" (UID: "468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.817113 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea" (UID: "468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.820168 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea" (UID: "468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.821587 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea" (UID: "468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.823188 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.823241 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea-pod-info" (OuterVolumeSpecName: "pod-info") pod "468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea" (UID: "468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.823659 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/020f49e7-c73f-460c-a068-75051e73cf90-kube-api-access-ldtrw" (OuterVolumeSpecName: "kube-api-access-ldtrw") pod "020f49e7-c73f-460c-a068-75051e73cf90" (UID: "020f49e7-c73f-460c-a068-75051e73cf90"). InnerVolumeSpecName "kube-api-access-ldtrw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.823951 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/020f49e7-c73f-460c-a068-75051e73cf90-scripts" (OuterVolumeSpecName: "scripts") pod "020f49e7-c73f-460c-a068-75051e73cf90" (UID: "020f49e7-c73f-460c-a068-75051e73cf90"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.826491 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.829943 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea" (UID: "468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.833499 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.839445 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.845073 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glancee71c-account-delete-64r2z"] Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.849088 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea" (UID: "468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea"). InnerVolumeSpecName "local-storage01-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.852669 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glancee71c-account-delete-64r2z"] Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.857267 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/020f49e7-c73f-460c-a068-75051e73cf90-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "020f49e7-c73f-460c-a068-75051e73cf90" (UID: "020f49e7-c73f-460c-a068-75051e73cf90"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.859868 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder2d34-account-delete-7qhqb"] Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.864735 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder2d34-account-delete-7qhqb"] Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.901239 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/020f49e7-c73f-460c-a068-75051e73cf90-config-data" (OuterVolumeSpecName: "config-data") pod "020f49e7-c73f-460c-a068-75051e73cf90" (UID: "020f49e7-c73f-460c-a068-75051e73cf90"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.902520 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron45e9-account-delete-9pn28"] Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.910109 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron45e9-account-delete-9pn28"] Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.911453 4772 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.911482 4772 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.911492 4772 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/020f49e7-c73f-460c-a068-75051e73cf90-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.911501 4772 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.911510 4772 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea-pod-info\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.911518 4772 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.911527 4772 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/020f49e7-c73f-460c-a068-75051e73cf90-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.911536 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldtrw\" (UniqueName: \"kubernetes.io/projected/020f49e7-c73f-460c-a068-75051e73cf90-kube-api-access-ldtrw\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.911546 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/020f49e7-c73f-460c-a068-75051e73cf90-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.911566 4772 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.911574 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ld7l6\" (UniqueName: \"kubernetes.io/projected/468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea-kube-api-access-ld7l6\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.911583 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/020f49e7-c73f-460c-a068-75051e73cf90-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.911591 4772 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.911599 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/020f49e7-c73f-460c-a068-75051e73cf90-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.922614 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea-config-data" (OuterVolumeSpecName: "config-data") pod "468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea" (UID: "468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.923621 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea-server-conf" (OuterVolumeSpecName: "server-conf") pod "468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea" (UID: "468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.924652 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novaapi0c20-account-delete-g5lnb"] Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.948213 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/novaapi0c20-account-delete-g5lnb"] Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.956262 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/020f49e7-c73f-460c-a068-75051e73cf90-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "020f49e7-c73f-460c-a068-75051e73cf90" (UID: "020f49e7-c73f-460c-a068-75051e73cf90"). 
InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.968279 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/020f49e7-c73f-460c-a068-75051e73cf90-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "020f49e7-c73f-460c-a068-75051e73cf90" (UID: "020f49e7-c73f-460c-a068-75051e73cf90"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.972369 4772 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Nov 22 11:03:25 crc kubenswrapper[4772]: I1122 11:03:25.993113 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novacell05903-account-delete-c7857"] Nov 22 11:03:26 crc kubenswrapper[4772]: I1122 11:03:26.010225 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/novacell05903-account-delete-c7857"] Nov 22 11:03:26 crc kubenswrapper[4772]: I1122 11:03:26.013080 4772 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/020f49e7-c73f-460c-a068-75051e73cf90-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:26 crc kubenswrapper[4772]: I1122 11:03:26.013109 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:26 crc kubenswrapper[4772]: I1122 11:03:26.013118 4772 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:26 crc kubenswrapper[4772]: I1122 11:03:26.013128 4772 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea-server-conf\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:26 crc kubenswrapper[4772]: I1122 11:03:26.013138 4772 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/020f49e7-c73f-460c-a068-75051e73cf90-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:26 crc kubenswrapper[4772]: I1122 11:03:26.016089 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican3436-account-delete-4w4qv"] Nov 22 11:03:26 crc kubenswrapper[4772]: I1122 11:03:26.038693 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican3436-account-delete-4w4qv"] Nov 22 11:03:26 crc kubenswrapper[4772]: I1122 11:03:26.051136 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6876658948-bzr5z"] Nov 22 11:03:26 crc kubenswrapper[4772]: I1122 11:03:26.072274 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea" (UID: "468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:03:26 crc kubenswrapper[4772]: I1122 11:03:26.072358 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-6876658948-bzr5z"] Nov 22 11:03:26 crc kubenswrapper[4772]: I1122 11:03:26.079186 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 22 11:03:26 crc kubenswrapper[4772]: I1122 11:03:26.086115 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Nov 22 11:03:26 crc kubenswrapper[4772]: I1122 11:03:26.103109 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Nov 22 11:03:26 crc kubenswrapper[4772]: I1122 11:03:26.103187 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/memcached-0"] Nov 22 11:03:26 crc kubenswrapper[4772]: I1122 11:03:26.112105 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 11:03:26 crc kubenswrapper[4772]: I1122 11:03:26.114692 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 22 11:03:26 crc kubenswrapper[4772]: I1122 11:03:26.115504 4772 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:26 crc kubenswrapper[4772]: I1122 11:03:26.124101 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Nov 22 11:03:26 crc kubenswrapper[4772]: I1122 11:03:26.130186 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-northd-0"] Nov 22 11:03:26 crc kubenswrapper[4772]: I1122 11:03:26.305650 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cfdd58ff7-mgd8m" event={"ID":"020f49e7-c73f-460c-a068-75051e73cf90","Type":"ContainerDied","Data":"786aa308dc19bae115f095f03da57b3c3fcf2a13b679e54a1690daaa522f200c"} Nov 22 11:03:26 crc kubenswrapper[4772]: I1122 11:03:26.305717 4772 scope.go:117] "RemoveContainer" containerID="8e6b764b39cbb94f171e4a3905646fc7d88ee4093584bde74bbc1f1676a19df8" Nov 22 11:03:26 crc kubenswrapper[4772]: I1122 11:03:26.305720 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cfdd58ff7-mgd8m" Nov 22 11:03:26 crc kubenswrapper[4772]: I1122 11:03:26.311026 4772 generic.go:334] "Generic (PLEG): container finished" podID="865ca651-4e53-4ac9-946d-31c1e485d91d" containerID="6bdbd4c4929eabf6a133a2e818bd65ac8febe68d8843b6b4e67d0a024f4e743f" exitCode=0 Nov 22 11:03:26 crc kubenswrapper[4772]: I1122 11:03:26.311779 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8c79f8b65-qn7q9" event={"ID":"865ca651-4e53-4ac9-946d-31c1e485d91d","Type":"ContainerDied","Data":"6bdbd4c4929eabf6a133a2e818bd65ac8febe68d8843b6b4e67d0a024f4e743f"} Nov 22 11:03:26 crc kubenswrapper[4772]: I1122 11:03:26.316944 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 22 11:03:26 crc kubenswrapper[4772]: I1122 11:03:26.318487 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 22 11:03:26 crc kubenswrapper[4772]: I1122 11:03:26.320924 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea","Type":"ContainerDied","Data":"771eca904cfa49996304858353ea6f3211f3d2940eaef953cb857e5108369f92"} Nov 22 11:03:26 crc kubenswrapper[4772]: I1122 11:03:26.321008 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone7a1c-account-delete-b8xp2" Nov 22 11:03:26 crc kubenswrapper[4772]: I1122 11:03:26.352658 4772 scope.go:117] "RemoveContainer" containerID="96e2450010f46499b0808158113b617f9b05995cffcb394c9f26383aeac1a85f" Nov 22 11:03:26 crc kubenswrapper[4772]: I1122 11:03:26.367362 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-cfdd58ff7-mgd8m"] Nov 22 11:03:26 crc kubenswrapper[4772]: I1122 11:03:26.374360 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-cfdd58ff7-mgd8m"] Nov 22 11:03:26 crc kubenswrapper[4772]: I1122 11:03:26.380942 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 22 11:03:26 crc kubenswrapper[4772]: I1122 11:03:26.386145 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 22 11:03:26 crc kubenswrapper[4772]: I1122 11:03:26.390711 4772 scope.go:117] "RemoveContainer" containerID="e7defe2138029a1b0c0f3a9b0ab82ba765b27b46bc0e907fdcfcb9894a4cef37" Nov 22 11:03:26 crc kubenswrapper[4772]: I1122 11:03:26.420742 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone7a1c-account-delete-b8xp2"] Nov 22 11:03:26 crc kubenswrapper[4772]: I1122 11:03:26.431878 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone7a1c-account-delete-b8xp2"] Nov 22 11:03:26 crc kubenswrapper[4772]: I1122 11:03:26.437468 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 22 11:03:26 crc kubenswrapper[4772]: I1122 11:03:26.442555 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 22 11:03:26 crc kubenswrapper[4772]: I1122 11:03:26.523968 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdqvq\" (UniqueName: \"kubernetes.io/projected/3d9c0ba3-ac92-4821-acb3-fd40e750bdae-kube-api-access-xdqvq\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:26 crc kubenswrapper[4772]: I1122 11:03:26.645681 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-8c79f8b65-qn7q9" Nov 22 11:03:26 crc kubenswrapper[4772]: I1122 11:03:26.828397 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/865ca651-4e53-4ac9-946d-31c1e485d91d-public-tls-certs\") pod \"865ca651-4e53-4ac9-946d-31c1e485d91d\" (UID: \"865ca651-4e53-4ac9-946d-31c1e485d91d\") " Nov 22 11:03:26 crc kubenswrapper[4772]: I1122 11:03:26.828723 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/865ca651-4e53-4ac9-946d-31c1e485d91d-combined-ca-bundle\") pod \"865ca651-4e53-4ac9-946d-31c1e485d91d\" (UID: \"865ca651-4e53-4ac9-946d-31c1e485d91d\") " Nov 22 11:03:26 crc kubenswrapper[4772]: I1122 11:03:26.828765 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72s96\" (UniqueName: \"kubernetes.io/projected/865ca651-4e53-4ac9-946d-31c1e485d91d-kube-api-access-72s96\") pod \"865ca651-4e53-4ac9-946d-31c1e485d91d\" (UID: \"865ca651-4e53-4ac9-946d-31c1e485d91d\") " Nov 22 11:03:26 crc kubenswrapper[4772]: I1122 11:03:26.828790 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/865ca651-4e53-4ac9-946d-31c1e485d91d-config\") pod \"865ca651-4e53-4ac9-946d-31c1e485d91d\" (UID: \"865ca651-4e53-4ac9-946d-31c1e485d91d\") " Nov 22 11:03:26 crc kubenswrapper[4772]: I1122 11:03:26.828810 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/865ca651-4e53-4ac9-946d-31c1e485d91d-ovndb-tls-certs\") pod \"865ca651-4e53-4ac9-946d-31c1e485d91d\" (UID: \"865ca651-4e53-4ac9-946d-31c1e485d91d\") " Nov 22 11:03:26 crc kubenswrapper[4772]: I1122 11:03:26.828874 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/865ca651-4e53-4ac9-946d-31c1e485d91d-httpd-config\") pod \"865ca651-4e53-4ac9-946d-31c1e485d91d\" (UID: \"865ca651-4e53-4ac9-946d-31c1e485d91d\") " Nov 22 11:03:26 crc kubenswrapper[4772]: I1122 11:03:26.828905 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/865ca651-4e53-4ac9-946d-31c1e485d91d-internal-tls-certs\") pod \"865ca651-4e53-4ac9-946d-31c1e485d91d\" (UID: \"865ca651-4e53-4ac9-946d-31c1e485d91d\") " Nov 22 11:03:26 crc kubenswrapper[4772]: I1122 11:03:26.833514 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/865ca651-4e53-4ac9-946d-31c1e485d91d-kube-api-access-72s96" (OuterVolumeSpecName: "kube-api-access-72s96") pod "865ca651-4e53-4ac9-946d-31c1e485d91d" (UID: "865ca651-4e53-4ac9-946d-31c1e485d91d"). InnerVolumeSpecName "kube-api-access-72s96". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:03:26 crc kubenswrapper[4772]: I1122 11:03:26.834003 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/865ca651-4e53-4ac9-946d-31c1e485d91d-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "865ca651-4e53-4ac9-946d-31c1e485d91d" (UID: "865ca651-4e53-4ac9-946d-31c1e485d91d"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:26 crc kubenswrapper[4772]: I1122 11:03:26.870915 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/865ca651-4e53-4ac9-946d-31c1e485d91d-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "865ca651-4e53-4ac9-946d-31c1e485d91d" (UID: "865ca651-4e53-4ac9-946d-31c1e485d91d"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:26 crc kubenswrapper[4772]: I1122 11:03:26.871764 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/865ca651-4e53-4ac9-946d-31c1e485d91d-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "865ca651-4e53-4ac9-946d-31c1e485d91d" (UID: "865ca651-4e53-4ac9-946d-31c1e485d91d"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:26 crc kubenswrapper[4772]: I1122 11:03:26.872470 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/865ca651-4e53-4ac9-946d-31c1e485d91d-config" (OuterVolumeSpecName: "config") pod "865ca651-4e53-4ac9-946d-31c1e485d91d" (UID: "865ca651-4e53-4ac9-946d-31c1e485d91d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:26 crc kubenswrapper[4772]: I1122 11:03:26.873245 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/865ca651-4e53-4ac9-946d-31c1e485d91d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "865ca651-4e53-4ac9-946d-31c1e485d91d" (UID: "865ca651-4e53-4ac9-946d-31c1e485d91d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:26 crc kubenswrapper[4772]: I1122 11:03:26.887702 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="b8b92f55-36d8-4358-9b57-734762f225c4" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.204:8775/\": dial tcp 10.217.0.204:8775: i/o timeout" Nov 22 11:03:26 crc kubenswrapper[4772]: I1122 11:03:26.888134 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="b8b92f55-36d8-4358-9b57-734762f225c4" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.204:8775/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 22 11:03:26 crc kubenswrapper[4772]: I1122 11:03:26.895987 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/865ca651-4e53-4ac9-946d-31c1e485d91d-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "865ca651-4e53-4ac9-946d-31c1e485d91d" (UID: "865ca651-4e53-4ac9-946d-31c1e485d91d"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:26 crc kubenswrapper[4772]: I1122 11:03:26.931340 4772 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/865ca651-4e53-4ac9-946d-31c1e485d91d-httpd-config\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:26 crc kubenswrapper[4772]: I1122 11:03:26.931601 4772 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/865ca651-4e53-4ac9-946d-31c1e485d91d-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:26 crc kubenswrapper[4772]: I1122 11:03:26.931680 4772 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/865ca651-4e53-4ac9-946d-31c1e485d91d-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:26 crc kubenswrapper[4772]: I1122 11:03:26.931763 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/865ca651-4e53-4ac9-946d-31c1e485d91d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:26 crc kubenswrapper[4772]: I1122 11:03:26.931833 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72s96\" (UniqueName: \"kubernetes.io/projected/865ca651-4e53-4ac9-946d-31c1e485d91d-kube-api-access-72s96\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:26 crc kubenswrapper[4772]: I1122 11:03:26.931902 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/865ca651-4e53-4ac9-946d-31c1e485d91d-config\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:26 crc kubenswrapper[4772]: I1122 11:03:26.931982 4772 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/865ca651-4e53-4ac9-946d-31c1e485d91d-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:27 crc kubenswrapper[4772]: I1122 11:03:27.335423 4772 generic.go:334] "Generic (PLEG): container finished" podID="51f59313-1e0d-4877-9141-c32a7f72f84f" containerID="e5c751528fea2ac722ee321494f6ac8ae1afd4e1ad69103eb66eda03840cc558" exitCode=0 Nov 22 11:03:27 crc kubenswrapper[4772]: I1122 11:03:27.335486 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"51f59313-1e0d-4877-9141-c32a7f72f84f","Type":"ContainerDied","Data":"e5c751528fea2ac722ee321494f6ac8ae1afd4e1ad69103eb66eda03840cc558"} Nov 22 11:03:27 crc kubenswrapper[4772]: I1122 11:03:27.335516 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"51f59313-1e0d-4877-9141-c32a7f72f84f","Type":"ContainerDied","Data":"6ceddf383a371a1e227e0abb2847453f9f6994bf12fc4eac453c227718d98eba"} Nov 22 11:03:27 crc kubenswrapper[4772]: I1122 11:03:27.335528 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ceddf383a371a1e227e0abb2847453f9f6994bf12fc4eac453c227718d98eba" Nov 22 11:03:27 crc kubenswrapper[4772]: I1122 11:03:27.341663 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8c79f8b65-qn7q9" event={"ID":"865ca651-4e53-4ac9-946d-31c1e485d91d","Type":"ContainerDied","Data":"f2fea7b5487f2dd96a6855359cfed99ba37dc33f03f66fdb843a16e9d7c69fcc"} Nov 22 11:03:27 crc kubenswrapper[4772]: I1122 11:03:27.341712 4772 scope.go:117] "RemoveContainer" containerID="89b92e0a1e681be8f4f78a508d0ebcba29af7864b3c2db95e3d23d573dc85c86" Nov 22 11:03:27 crc kubenswrapper[4772]: I1122 11:03:27.341714 4772 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8c79f8b65-qn7q9" Nov 22 11:03:27 crc kubenswrapper[4772]: E1122 11:03:27.360998 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fb76271eaf5dc62949aa9d4033c28c01d565faf421a33ec480046d5a9c4ab96d" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Nov 22 11:03:27 crc kubenswrapper[4772]: E1122 11:03:27.365397 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fb76271eaf5dc62949aa9d4033c28c01d565faf421a33ec480046d5a9c4ab96d" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Nov 22 11:03:27 crc kubenswrapper[4772]: E1122 11:03:27.367026 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fb76271eaf5dc62949aa9d4033c28c01d565faf421a33ec480046d5a9c4ab96d" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Nov 22 11:03:27 crc kubenswrapper[4772]: E1122 11:03:27.367094 4772 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="c9a852d5-2258-45b4-9076-95740059eecd" containerName="galera" Nov 22 11:03:27 crc kubenswrapper[4772]: I1122 11:03:27.370328 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 11:03:27 crc kubenswrapper[4772]: I1122 11:03:27.380214 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-8c79f8b65-qn7q9"] Nov 22 11:03:27 crc kubenswrapper[4772]: I1122 11:03:27.385910 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-8c79f8b65-qn7q9"] Nov 22 11:03:27 crc kubenswrapper[4772]: I1122 11:03:27.389121 4772 scope.go:117] "RemoveContainer" containerID="6bdbd4c4929eabf6a133a2e818bd65ac8febe68d8843b6b4e67d0a024f4e743f" Nov 22 11:03:27 crc kubenswrapper[4772]: I1122 11:03:27.430425 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="020f49e7-c73f-460c-a068-75051e73cf90" path="/var/lib/kubelet/pods/020f49e7-c73f-460c-a068-75051e73cf90/volumes" Nov 22 11:03:27 crc kubenswrapper[4772]: I1122 11:03:27.431143 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17f0d5ca-99e5-47c6-9fdf-1932956cff3e" path="/var/lib/kubelet/pods/17f0d5ca-99e5-47c6-9fdf-1932956cff3e/volumes" Nov 22 11:03:27 crc kubenswrapper[4772]: I1122 11:03:27.432227 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c6ce3fb-4529-4856-a326-bb0e9ea0ae40" path="/var/lib/kubelet/pods/2c6ce3fb-4529-4856-a326-bb0e9ea0ae40/volumes" Nov 22 11:03:27 crc kubenswrapper[4772]: I1122 11:03:27.433901 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30007403-085b-4874-88b7-8b27426fd4f7" path="/var/lib/kubelet/pods/30007403-085b-4874-88b7-8b27426fd4f7/volumes" Nov 22 11:03:27 crc kubenswrapper[4772]: I1122 11:03:27.434399 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d9c0ba3-ac92-4821-acb3-fd40e750bdae" path="/var/lib/kubelet/pods/3d9c0ba3-ac92-4821-acb3-fd40e750bdae/volumes" Nov 22 11:03:27 crc kubenswrapper[4772]: I1122 11:03:27.434831 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fbd9ebd-2c62-4336-9946-792e4b3c83db" path="/var/lib/kubelet/pods/3fbd9ebd-2c62-4336-9946-792e4b3c83db/volumes" Nov 22 11:03:27 crc kubenswrapper[4772]: I1122 11:03:27.436235 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea" path="/var/lib/kubelet/pods/468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea/volumes" Nov 22 11:03:27 crc kubenswrapper[4772]: I1122 11:03:27.436827 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f74827a-8354-492b-b09d-350768ba912d" path="/var/lib/kubelet/pods/4f74827a-8354-492b-b09d-350768ba912d/volumes" Nov 22 11:03:27 crc kubenswrapper[4772]: I1122 11:03:27.437406 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56fb678a-814f-4328-8b49-9226512bf10e" path="/var/lib/kubelet/pods/56fb678a-814f-4328-8b49-9226512bf10e/volumes" Nov 22 11:03:27 crc kubenswrapper[4772]: I1122 11:03:27.438713 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ce19f6b-73e1-48b9-810a-f9d97a14fe7b" path="/var/lib/kubelet/pods/5ce19f6b-73e1-48b9-810a-f9d97a14fe7b/volumes" Nov 22 11:03:27 crc kubenswrapper[4772]: I1122 11:03:27.439254 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="706b8e5a-87b8-429e-aea7-e7e5f161182f" path="/var/lib/kubelet/pods/706b8e5a-87b8-429e-aea7-e7e5f161182f/volumes" Nov 22 11:03:27 crc kubenswrapper[4772]: I1122 11:03:27.439667 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d122410-121a-47cd-9465-e5c6f85cf2b2" path="/var/lib/kubelet/pods/7d122410-121a-47cd-9465-e5c6f85cf2b2/volumes" 
Nov 22 11:03:27 crc kubenswrapper[4772]: I1122 11:03:27.440723 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86139aa9-cd30-4d97-833e-a26562aebf92" path="/var/lib/kubelet/pods/86139aa9-cd30-4d97-833e-a26562aebf92/volumes" Nov 22 11:03:27 crc kubenswrapper[4772]: I1122 11:03:27.447805 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="865ca651-4e53-4ac9-946d-31c1e485d91d" path="/var/lib/kubelet/pods/865ca651-4e53-4ac9-946d-31c1e485d91d/volumes" Nov 22 11:03:27 crc kubenswrapper[4772]: I1122 11:03:27.448618 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50" path="/var/lib/kubelet/pods/93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50/volumes" Nov 22 11:03:27 crc kubenswrapper[4772]: I1122 11:03:27.449819 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="941a38a8-56e0-4061-8891-0cd3815477a4" path="/var/lib/kubelet/pods/941a38a8-56e0-4061-8891-0cd3815477a4/volumes" Nov 22 11:03:27 crc kubenswrapper[4772]: I1122 11:03:27.450809 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9aeb3608-353b-4b44-8797-46affdc587a7" path="/var/lib/kubelet/pods/9aeb3608-353b-4b44-8797-46affdc587a7/volumes" Nov 22 11:03:27 crc kubenswrapper[4772]: I1122 11:03:27.451362 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0a147b4-4445-4f7b-b22f-97db02340306" path="/var/lib/kubelet/pods/a0a147b4-4445-4f7b-b22f-97db02340306/volumes" Nov 22 11:03:27 crc kubenswrapper[4772]: I1122 11:03:27.461550 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb612c10-4436-4c79-b990-cbc7b403eed5" path="/var/lib/kubelet/pods/cb612c10-4436-4c79-b990-cbc7b403eed5/volumes" Nov 22 11:03:27 crc kubenswrapper[4772]: I1122 11:03:27.462273 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e723031c-0772-49f7-ba16-f635ddd53dcc" path="/var/lib/kubelet/pods/e723031c-0772-49f7-ba16-f635ddd53dcc/volumes" Nov 22 11:03:27 crc kubenswrapper[4772]: I1122 11:03:27.540619 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51f59313-1e0d-4877-9141-c32a7f72f84f-config-data\") pod \"51f59313-1e0d-4877-9141-c32a7f72f84f\" (UID: \"51f59313-1e0d-4877-9141-c32a7f72f84f\") " Nov 22 11:03:27 crc kubenswrapper[4772]: I1122 11:03:27.540739 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51f59313-1e0d-4877-9141-c32a7f72f84f-combined-ca-bundle\") pod \"51f59313-1e0d-4877-9141-c32a7f72f84f\" (UID: \"51f59313-1e0d-4877-9141-c32a7f72f84f\") " Nov 22 11:03:27 crc kubenswrapper[4772]: I1122 11:03:27.540775 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vs4g\" (UniqueName: \"kubernetes.io/projected/51f59313-1e0d-4877-9141-c32a7f72f84f-kube-api-access-2vs4g\") pod \"51f59313-1e0d-4877-9141-c32a7f72f84f\" (UID: \"51f59313-1e0d-4877-9141-c32a7f72f84f\") " Nov 22 11:03:27 crc kubenswrapper[4772]: I1122 11:03:27.549216 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51f59313-1e0d-4877-9141-c32a7f72f84f-kube-api-access-2vs4g" (OuterVolumeSpecName: "kube-api-access-2vs4g") pod "51f59313-1e0d-4877-9141-c32a7f72f84f" (UID: "51f59313-1e0d-4877-9141-c32a7f72f84f"). InnerVolumeSpecName "kube-api-access-2vs4g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:03:27 crc kubenswrapper[4772]: I1122 11:03:27.561735 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51f59313-1e0d-4877-9141-c32a7f72f84f-config-data" (OuterVolumeSpecName: "config-data") pod "51f59313-1e0d-4877-9141-c32a7f72f84f" (UID: "51f59313-1e0d-4877-9141-c32a7f72f84f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:27 crc kubenswrapper[4772]: I1122 11:03:27.565869 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51f59313-1e0d-4877-9141-c32a7f72f84f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "51f59313-1e0d-4877-9141-c32a7f72f84f" (UID: "51f59313-1e0d-4877-9141-c32a7f72f84f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:27 crc kubenswrapper[4772]: I1122 11:03:27.642928 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51f59313-1e0d-4877-9141-c32a7f72f84f-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:27 crc kubenswrapper[4772]: I1122 11:03:27.642968 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51f59313-1e0d-4877-9141-c32a7f72f84f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:27 crc kubenswrapper[4772]: I1122 11:03:27.642980 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vs4g\" (UniqueName: \"kubernetes.io/projected/51f59313-1e0d-4877-9141-c32a7f72f84f-kube-api-access-2vs4g\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:28 crc kubenswrapper[4772]: I1122 11:03:28.329468 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 22 11:03:28 crc kubenswrapper[4772]: I1122 11:03:28.357168 4772 generic.go:334] "Generic (PLEG): container finished" podID="c9a852d5-2258-45b4-9076-95740059eecd" containerID="fb76271eaf5dc62949aa9d4033c28c01d565faf421a33ec480046d5a9c4ab96d" exitCode=0 Nov 22 11:03:28 crc kubenswrapper[4772]: I1122 11:03:28.357246 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"c9a852d5-2258-45b4-9076-95740059eecd","Type":"ContainerDied","Data":"fb76271eaf5dc62949aa9d4033c28c01d565faf421a33ec480046d5a9c4ab96d"} Nov 22 11:03:28 crc kubenswrapper[4772]: I1122 11:03:28.357282 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"c9a852d5-2258-45b4-9076-95740059eecd","Type":"ContainerDied","Data":"7cfebb4275ff9eec3c648197da8639ff10c44b3148b4fdcfcb116d3a32506487"} Nov 22 11:03:28 crc kubenswrapper[4772]: I1122 11:03:28.357302 4772 scope.go:117] "RemoveContainer" containerID="fb76271eaf5dc62949aa9d4033c28c01d565faf421a33ec480046d5a9c4ab96d" Nov 22 11:03:28 crc kubenswrapper[4772]: I1122 11:03:28.357431 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 22 11:03:28 crc kubenswrapper[4772]: I1122 11:03:28.360882 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 11:03:28 crc kubenswrapper[4772]: I1122 11:03:28.392463 4772 scope.go:117] "RemoveContainer" containerID="5bfc019a03efa0f0e143e3abbd577a79a47df39696c6f4570d51c4637ca56d81" Nov 22 11:03:28 crc kubenswrapper[4772]: I1122 11:03:28.431449 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 11:03:28 crc kubenswrapper[4772]: I1122 11:03:28.438479 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 11:03:28 crc kubenswrapper[4772]: I1122 11:03:28.438887 4772 scope.go:117] "RemoveContainer" containerID="fb76271eaf5dc62949aa9d4033c28c01d565faf421a33ec480046d5a9c4ab96d" Nov 22 11:03:28 crc kubenswrapper[4772]: E1122 11:03:28.439446 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb76271eaf5dc62949aa9d4033c28c01d565faf421a33ec480046d5a9c4ab96d\": container with ID starting with fb76271eaf5dc62949aa9d4033c28c01d565faf421a33ec480046d5a9c4ab96d not found: ID does not exist" containerID="fb76271eaf5dc62949aa9d4033c28c01d565faf421a33ec480046d5a9c4ab96d" Nov 22 11:03:28 crc kubenswrapper[4772]: I1122 11:03:28.439481 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb76271eaf5dc62949aa9d4033c28c01d565faf421a33ec480046d5a9c4ab96d"} err="failed to get container status \"fb76271eaf5dc62949aa9d4033c28c01d565faf421a33ec480046d5a9c4ab96d\": rpc error: code = NotFound desc = could not find container \"fb76271eaf5dc62949aa9d4033c28c01d565faf421a33ec480046d5a9c4ab96d\": container with ID starting with fb76271eaf5dc62949aa9d4033c28c01d565faf421a33ec480046d5a9c4ab96d not found: ID does not exist" Nov 22 11:03:28 crc kubenswrapper[4772]: I1122 11:03:28.439505 4772 scope.go:117] "RemoveContainer" containerID="5bfc019a03efa0f0e143e3abbd577a79a47df39696c6f4570d51c4637ca56d81" Nov 22 11:03:28 crc kubenswrapper[4772]: E1122 11:03:28.439965 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5bfc019a03efa0f0e143e3abbd577a79a47df39696c6f4570d51c4637ca56d81\": container with ID starting with 5bfc019a03efa0f0e143e3abbd577a79a47df39696c6f4570d51c4637ca56d81 not found: ID does not exist" containerID="5bfc019a03efa0f0e143e3abbd577a79a47df39696c6f4570d51c4637ca56d81" Nov 22 11:03:28 crc kubenswrapper[4772]: I1122 11:03:28.439993 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bfc019a03efa0f0e143e3abbd577a79a47df39696c6f4570d51c4637ca56d81"} err="failed to get container status \"5bfc019a03efa0f0e143e3abbd577a79a47df39696c6f4570d51c4637ca56d81\": rpc error: code = NotFound desc = could not find container \"5bfc019a03efa0f0e143e3abbd577a79a47df39696c6f4570d51c4637ca56d81\": container with ID starting with 5bfc019a03efa0f0e143e3abbd577a79a47df39696c6f4570d51c4637ca56d81 not found: ID does not exist" Nov 22 11:03:28 crc kubenswrapper[4772]: I1122 11:03:28.465431 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9a852d5-2258-45b4-9076-95740059eecd-operator-scripts\") pod \"c9a852d5-2258-45b4-9076-95740059eecd\" (UID: \"c9a852d5-2258-45b4-9076-95740059eecd\") " Nov 22 11:03:28 crc kubenswrapper[4772]: I1122 11:03:28.465610 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" 
(UniqueName: \"kubernetes.io/configmap/c9a852d5-2258-45b4-9076-95740059eecd-config-data-default\") pod \"c9a852d5-2258-45b4-9076-95740059eecd\" (UID: \"c9a852d5-2258-45b4-9076-95740059eecd\") " Nov 22 11:03:28 crc kubenswrapper[4772]: I1122 11:03:28.465673 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c9a852d5-2258-45b4-9076-95740059eecd-config-data-generated\") pod \"c9a852d5-2258-45b4-9076-95740059eecd\" (UID: \"c9a852d5-2258-45b4-9076-95740059eecd\") " Nov 22 11:03:28 crc kubenswrapper[4772]: I1122 11:03:28.465697 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c9a852d5-2258-45b4-9076-95740059eecd-kolla-config\") pod \"c9a852d5-2258-45b4-9076-95740059eecd\" (UID: \"c9a852d5-2258-45b4-9076-95740059eecd\") " Nov 22 11:03:28 crc kubenswrapper[4772]: I1122 11:03:28.465736 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"c9a852d5-2258-45b4-9076-95740059eecd\" (UID: \"c9a852d5-2258-45b4-9076-95740059eecd\") " Nov 22 11:03:28 crc kubenswrapper[4772]: I1122 11:03:28.465879 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/c9a852d5-2258-45b4-9076-95740059eecd-secrets\") pod \"c9a852d5-2258-45b4-9076-95740059eecd\" (UID: \"c9a852d5-2258-45b4-9076-95740059eecd\") " Nov 22 11:03:28 crc kubenswrapper[4772]: I1122 11:03:28.465910 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfws4\" (UniqueName: \"kubernetes.io/projected/c9a852d5-2258-45b4-9076-95740059eecd-kube-api-access-vfws4\") pod \"c9a852d5-2258-45b4-9076-95740059eecd\" (UID: \"c9a852d5-2258-45b4-9076-95740059eecd\") " Nov 22 11:03:28 crc kubenswrapper[4772]: I1122 11:03:28.465933 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9a852d5-2258-45b4-9076-95740059eecd-galera-tls-certs\") pod \"c9a852d5-2258-45b4-9076-95740059eecd\" (UID: \"c9a852d5-2258-45b4-9076-95740059eecd\") " Nov 22 11:03:28 crc kubenswrapper[4772]: I1122 11:03:28.465961 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9a852d5-2258-45b4-9076-95740059eecd-combined-ca-bundle\") pod \"c9a852d5-2258-45b4-9076-95740059eecd\" (UID: \"c9a852d5-2258-45b4-9076-95740059eecd\") " Nov 22 11:03:28 crc kubenswrapper[4772]: I1122 11:03:28.467775 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9a852d5-2258-45b4-9076-95740059eecd-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "c9a852d5-2258-45b4-9076-95740059eecd" (UID: "c9a852d5-2258-45b4-9076-95740059eecd"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 11:03:28 crc kubenswrapper[4772]: I1122 11:03:28.468240 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9a852d5-2258-45b4-9076-95740059eecd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c9a852d5-2258-45b4-9076-95740059eecd" (UID: "c9a852d5-2258-45b4-9076-95740059eecd"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 11:03:28 crc kubenswrapper[4772]: I1122 11:03:28.468551 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9a852d5-2258-45b4-9076-95740059eecd-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "c9a852d5-2258-45b4-9076-95740059eecd" (UID: "c9a852d5-2258-45b4-9076-95740059eecd"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 11:03:28 crc kubenswrapper[4772]: I1122 11:03:28.468744 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9a852d5-2258-45b4-9076-95740059eecd-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "c9a852d5-2258-45b4-9076-95740059eecd" (UID: "c9a852d5-2258-45b4-9076-95740059eecd"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:03:28 crc kubenswrapper[4772]: I1122 11:03:28.474345 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9a852d5-2258-45b4-9076-95740059eecd-secrets" (OuterVolumeSpecName: "secrets") pod "c9a852d5-2258-45b4-9076-95740059eecd" (UID: "c9a852d5-2258-45b4-9076-95740059eecd"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:28 crc kubenswrapper[4772]: I1122 11:03:28.475603 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9a852d5-2258-45b4-9076-95740059eecd-kube-api-access-vfws4" (OuterVolumeSpecName: "kube-api-access-vfws4") pod "c9a852d5-2258-45b4-9076-95740059eecd" (UID: "c9a852d5-2258-45b4-9076-95740059eecd"). InnerVolumeSpecName "kube-api-access-vfws4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:03:28 crc kubenswrapper[4772]: I1122 11:03:28.481147 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "mysql-db") pod "c9a852d5-2258-45b4-9076-95740059eecd" (UID: "c9a852d5-2258-45b4-9076-95740059eecd"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 22 11:03:28 crc kubenswrapper[4772]: I1122 11:03:28.492371 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9a852d5-2258-45b4-9076-95740059eecd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c9a852d5-2258-45b4-9076-95740059eecd" (UID: "c9a852d5-2258-45b4-9076-95740059eecd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:28 crc kubenswrapper[4772]: I1122 11:03:28.527550 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9a852d5-2258-45b4-9076-95740059eecd-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "c9a852d5-2258-45b4-9076-95740059eecd" (UID: "c9a852d5-2258-45b4-9076-95740059eecd"). InnerVolumeSpecName "galera-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:03:28 crc kubenswrapper[4772]: I1122 11:03:28.568203 4772 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/c9a852d5-2258-45b4-9076-95740059eecd-secrets\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:28 crc kubenswrapper[4772]: I1122 11:03:28.568246 4772 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9a852d5-2258-45b4-9076-95740059eecd-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:28 crc kubenswrapper[4772]: I1122 11:03:28.568262 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfws4\" (UniqueName: \"kubernetes.io/projected/c9a852d5-2258-45b4-9076-95740059eecd-kube-api-access-vfws4\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:28 crc kubenswrapper[4772]: I1122 11:03:28.568272 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9a852d5-2258-45b4-9076-95740059eecd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:28 crc kubenswrapper[4772]: I1122 11:03:28.568283 4772 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9a852d5-2258-45b4-9076-95740059eecd-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:28 crc kubenswrapper[4772]: I1122 11:03:28.568293 4772 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c9a852d5-2258-45b4-9076-95740059eecd-config-data-default\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:28 crc kubenswrapper[4772]: I1122 11:03:28.568304 4772 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c9a852d5-2258-45b4-9076-95740059eecd-config-data-generated\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:28 crc kubenswrapper[4772]: I1122 11:03:28.568315 4772 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c9a852d5-2258-45b4-9076-95740059eecd-kolla-config\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:28 crc kubenswrapper[4772]: I1122 11:03:28.568340 4772 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Nov 22 11:03:28 crc kubenswrapper[4772]: I1122 11:03:28.582961 4772 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Nov 22 11:03:28 crc kubenswrapper[4772]: I1122 11:03:28.670215 4772 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:28 crc kubenswrapper[4772]: I1122 11:03:28.688908 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Nov 22 11:03:28 crc kubenswrapper[4772]: I1122 11:03:28.693776 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-galera-0"] Nov 22 11:03:29 crc kubenswrapper[4772]: I1122 11:03:29.185839 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/memcached-0" podUID="3fbd9ebd-2c62-4336-9946-792e4b3c83db" containerName="memcached" probeResult="failure" output="dial tcp 10.217.0.105:11211: i/o timeout" Nov 
22 11:03:29 crc kubenswrapper[4772]: E1122 11:03:29.418555 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b35066231a06831ac6cbdd40d94a47f17dbd0bd89c978d8091d18097c6bdc840 is running failed: container process not found" containerID="b35066231a06831ac6cbdd40d94a47f17dbd0bd89c978d8091d18097c6bdc840" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 22 11:03:29 crc kubenswrapper[4772]: E1122 11:03:29.419881 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b453a8cf27cf323a7ca0a34df6781dcd755a821c7866c6dbdecdad4ea153f3ea" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 22 11:03:29 crc kubenswrapper[4772]: E1122 11:03:29.419954 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b35066231a06831ac6cbdd40d94a47f17dbd0bd89c978d8091d18097c6bdc840 is running failed: container process not found" containerID="b35066231a06831ac6cbdd40d94a47f17dbd0bd89c978d8091d18097c6bdc840" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 22 11:03:29 crc kubenswrapper[4772]: E1122 11:03:29.420806 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b35066231a06831ac6cbdd40d94a47f17dbd0bd89c978d8091d18097c6bdc840 is running failed: container process not found" containerID="b35066231a06831ac6cbdd40d94a47f17dbd0bd89c978d8091d18097c6bdc840" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 22 11:03:29 crc kubenswrapper[4772]: E1122 11:03:29.420851 4772 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b35066231a06831ac6cbdd40d94a47f17dbd0bd89c978d8091d18097c6bdc840 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-qvtmm" podUID="5eaf9da0-a00f-4251-ae11-31ccc3e237e1" containerName="ovsdb-server" Nov 22 11:03:29 crc kubenswrapper[4772]: I1122 11:03:29.424372 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51f59313-1e0d-4877-9141-c32a7f72f84f" path="/var/lib/kubelet/pods/51f59313-1e0d-4877-9141-c32a7f72f84f/volumes" Nov 22 11:03:29 crc kubenswrapper[4772]: I1122 11:03:29.425163 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9a852d5-2258-45b4-9076-95740059eecd" path="/var/lib/kubelet/pods/c9a852d5-2258-45b4-9076-95740059eecd/volumes" Nov 22 11:03:29 crc kubenswrapper[4772]: E1122 11:03:29.427447 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b453a8cf27cf323a7ca0a34df6781dcd755a821c7866c6dbdecdad4ea153f3ea" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 22 11:03:29 crc kubenswrapper[4772]: E1122 11:03:29.431199 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b453a8cf27cf323a7ca0a34df6781dcd755a821c7866c6dbdecdad4ea153f3ea" 
cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 22 11:03:29 crc kubenswrapper[4772]: E1122 11:03:29.431240 4772 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-qvtmm" podUID="5eaf9da0-a00f-4251-ae11-31ccc3e237e1" containerName="ovs-vswitchd" Nov 22 11:03:34 crc kubenswrapper[4772]: E1122 11:03:34.418134 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b35066231a06831ac6cbdd40d94a47f17dbd0bd89c978d8091d18097c6bdc840 is running failed: container process not found" containerID="b35066231a06831ac6cbdd40d94a47f17dbd0bd89c978d8091d18097c6bdc840" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 22 11:03:34 crc kubenswrapper[4772]: E1122 11:03:34.418769 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b35066231a06831ac6cbdd40d94a47f17dbd0bd89c978d8091d18097c6bdc840 is running failed: container process not found" containerID="b35066231a06831ac6cbdd40d94a47f17dbd0bd89c978d8091d18097c6bdc840" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 22 11:03:34 crc kubenswrapper[4772]: E1122 11:03:34.419103 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b453a8cf27cf323a7ca0a34df6781dcd755a821c7866c6dbdecdad4ea153f3ea" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 22 11:03:34 crc kubenswrapper[4772]: E1122 11:03:34.419205 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b35066231a06831ac6cbdd40d94a47f17dbd0bd89c978d8091d18097c6bdc840 is running failed: container process not found" containerID="b35066231a06831ac6cbdd40d94a47f17dbd0bd89c978d8091d18097c6bdc840" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 22 11:03:34 crc kubenswrapper[4772]: E1122 11:03:34.419249 4772 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b35066231a06831ac6cbdd40d94a47f17dbd0bd89c978d8091d18097c6bdc840 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-qvtmm" podUID="5eaf9da0-a00f-4251-ae11-31ccc3e237e1" containerName="ovsdb-server" Nov 22 11:03:34 crc kubenswrapper[4772]: E1122 11:03:34.420292 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b453a8cf27cf323a7ca0a34df6781dcd755a821c7866c6dbdecdad4ea153f3ea" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 22 11:03:34 crc kubenswrapper[4772]: E1122 11:03:34.421564 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b453a8cf27cf323a7ca0a34df6781dcd755a821c7866c6dbdecdad4ea153f3ea" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 22 11:03:34 crc 
kubenswrapper[4772]: E1122 11:03:34.421615 4772 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-qvtmm" podUID="5eaf9da0-a00f-4251-ae11-31ccc3e237e1" containerName="ovs-vswitchd" Nov 22 11:03:39 crc kubenswrapper[4772]: E1122 11:03:39.418097 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b35066231a06831ac6cbdd40d94a47f17dbd0bd89c978d8091d18097c6bdc840 is running failed: container process not found" containerID="b35066231a06831ac6cbdd40d94a47f17dbd0bd89c978d8091d18097c6bdc840" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 22 11:03:39 crc kubenswrapper[4772]: E1122 11:03:39.419759 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b35066231a06831ac6cbdd40d94a47f17dbd0bd89c978d8091d18097c6bdc840 is running failed: container process not found" containerID="b35066231a06831ac6cbdd40d94a47f17dbd0bd89c978d8091d18097c6bdc840" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 22 11:03:39 crc kubenswrapper[4772]: E1122 11:03:39.419884 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b453a8cf27cf323a7ca0a34df6781dcd755a821c7866c6dbdecdad4ea153f3ea" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 22 11:03:39 crc kubenswrapper[4772]: E1122 11:03:39.420546 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b35066231a06831ac6cbdd40d94a47f17dbd0bd89c978d8091d18097c6bdc840 is running failed: container process not found" containerID="b35066231a06831ac6cbdd40d94a47f17dbd0bd89c978d8091d18097c6bdc840" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 22 11:03:39 crc kubenswrapper[4772]: E1122 11:03:39.420602 4772 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b35066231a06831ac6cbdd40d94a47f17dbd0bd89c978d8091d18097c6bdc840 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-qvtmm" podUID="5eaf9da0-a00f-4251-ae11-31ccc3e237e1" containerName="ovsdb-server" Nov 22 11:03:39 crc kubenswrapper[4772]: E1122 11:03:39.421415 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b453a8cf27cf323a7ca0a34df6781dcd755a821c7866c6dbdecdad4ea153f3ea" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 22 11:03:39 crc kubenswrapper[4772]: E1122 11:03:39.422800 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b453a8cf27cf323a7ca0a34df6781dcd755a821c7866c6dbdecdad4ea153f3ea" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 22 11:03:39 crc kubenswrapper[4772]: E1122 11:03:39.422845 4772 prober.go:104] "Probe errored" err="rpc error: 
code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-qvtmm" podUID="5eaf9da0-a00f-4251-ae11-31ccc3e237e1" containerName="ovs-vswitchd" Nov 22 11:03:44 crc kubenswrapper[4772]: E1122 11:03:44.418413 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b35066231a06831ac6cbdd40d94a47f17dbd0bd89c978d8091d18097c6bdc840 is running failed: container process not found" containerID="b35066231a06831ac6cbdd40d94a47f17dbd0bd89c978d8091d18097c6bdc840" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 22 11:03:44 crc kubenswrapper[4772]: E1122 11:03:44.419012 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b35066231a06831ac6cbdd40d94a47f17dbd0bd89c978d8091d18097c6bdc840 is running failed: container process not found" containerID="b35066231a06831ac6cbdd40d94a47f17dbd0bd89c978d8091d18097c6bdc840" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 22 11:03:44 crc kubenswrapper[4772]: E1122 11:03:44.419236 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b35066231a06831ac6cbdd40d94a47f17dbd0bd89c978d8091d18097c6bdc840 is running failed: container process not found" containerID="b35066231a06831ac6cbdd40d94a47f17dbd0bd89c978d8091d18097c6bdc840" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 22 11:03:44 crc kubenswrapper[4772]: E1122 11:03:44.419297 4772 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b35066231a06831ac6cbdd40d94a47f17dbd0bd89c978d8091d18097c6bdc840 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-qvtmm" podUID="5eaf9da0-a00f-4251-ae11-31ccc3e237e1" containerName="ovsdb-server" Nov 22 11:03:44 crc kubenswrapper[4772]: E1122 11:03:44.419537 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b453a8cf27cf323a7ca0a34df6781dcd755a821c7866c6dbdecdad4ea153f3ea" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 22 11:03:44 crc kubenswrapper[4772]: E1122 11:03:44.420695 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b453a8cf27cf323a7ca0a34df6781dcd755a821c7866c6dbdecdad4ea153f3ea" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 22 11:03:44 crc kubenswrapper[4772]: E1122 11:03:44.421821 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b453a8cf27cf323a7ca0a34df6781dcd755a821c7866c6dbdecdad4ea153f3ea" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 22 11:03:44 crc kubenswrapper[4772]: E1122 11:03:44.421867 4772 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: 
, stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-qvtmm" podUID="5eaf9da0-a00f-4251-ae11-31ccc3e237e1" containerName="ovs-vswitchd" Nov 22 11:03:46 crc kubenswrapper[4772]: I1122 11:03:46.468792 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-qvtmm_5eaf9da0-a00f-4251-ae11-31ccc3e237e1/ovs-vswitchd/0.log" Nov 22 11:03:46 crc kubenswrapper[4772]: I1122 11:03:46.470483 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-qvtmm" Nov 22 11:03:46 crc kubenswrapper[4772]: I1122 11:03:46.532993 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-qvtmm_5eaf9da0-a00f-4251-ae11-31ccc3e237e1/ovs-vswitchd/0.log" Nov 22 11:03:46 crc kubenswrapper[4772]: I1122 11:03:46.533614 4772 generic.go:334] "Generic (PLEG): container finished" podID="5eaf9da0-a00f-4251-ae11-31ccc3e237e1" containerID="b453a8cf27cf323a7ca0a34df6781dcd755a821c7866c6dbdecdad4ea153f3ea" exitCode=137 Nov 22 11:03:46 crc kubenswrapper[4772]: I1122 11:03:46.533653 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-qvtmm" event={"ID":"5eaf9da0-a00f-4251-ae11-31ccc3e237e1","Type":"ContainerDied","Data":"b453a8cf27cf323a7ca0a34df6781dcd755a821c7866c6dbdecdad4ea153f3ea"} Nov 22 11:03:46 crc kubenswrapper[4772]: I1122 11:03:46.533690 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-qvtmm" event={"ID":"5eaf9da0-a00f-4251-ae11-31ccc3e237e1","Type":"ContainerDied","Data":"2f1c4527ea91dc9820b66da4572441c5dffdde6775cce4e11bab19a3a1e11f20"} Nov 22 11:03:46 crc kubenswrapper[4772]: I1122 11:03:46.533697 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-qvtmm" Nov 22 11:03:46 crc kubenswrapper[4772]: I1122 11:03:46.533709 4772 scope.go:117] "RemoveContainer" containerID="b453a8cf27cf323a7ca0a34df6781dcd755a821c7866c6dbdecdad4ea153f3ea" Nov 22 11:03:46 crc kubenswrapper[4772]: I1122 11:03:46.564913 4772 scope.go:117] "RemoveContainer" containerID="b35066231a06831ac6cbdd40d94a47f17dbd0bd89c978d8091d18097c6bdc840" Nov 22 11:03:46 crc kubenswrapper[4772]: I1122 11:03:46.586324 4772 scope.go:117] "RemoveContainer" containerID="8c396a5866de12ccf9e258373d3e13a6be2d4d04b4db9ac1007cdb810eeaa0d5" Nov 22 11:03:46 crc kubenswrapper[4772]: I1122 11:03:46.616220 4772 scope.go:117] "RemoveContainer" containerID="b453a8cf27cf323a7ca0a34df6781dcd755a821c7866c6dbdecdad4ea153f3ea" Nov 22 11:03:46 crc kubenswrapper[4772]: E1122 11:03:46.616695 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b453a8cf27cf323a7ca0a34df6781dcd755a821c7866c6dbdecdad4ea153f3ea\": container with ID starting with b453a8cf27cf323a7ca0a34df6781dcd755a821c7866c6dbdecdad4ea153f3ea not found: ID does not exist" containerID="b453a8cf27cf323a7ca0a34df6781dcd755a821c7866c6dbdecdad4ea153f3ea" Nov 22 11:03:46 crc kubenswrapper[4772]: I1122 11:03:46.616744 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b453a8cf27cf323a7ca0a34df6781dcd755a821c7866c6dbdecdad4ea153f3ea"} err="failed to get container status \"b453a8cf27cf323a7ca0a34df6781dcd755a821c7866c6dbdecdad4ea153f3ea\": rpc error: code = NotFound desc = could not find container \"b453a8cf27cf323a7ca0a34df6781dcd755a821c7866c6dbdecdad4ea153f3ea\": container with ID starting with b453a8cf27cf323a7ca0a34df6781dcd755a821c7866c6dbdecdad4ea153f3ea not found: ID does not exist" Nov 22 11:03:46 crc kubenswrapper[4772]: I1122 11:03:46.616776 4772 scope.go:117] "RemoveContainer" containerID="b35066231a06831ac6cbdd40d94a47f17dbd0bd89c978d8091d18097c6bdc840" Nov 22 11:03:46 crc kubenswrapper[4772]: E1122 11:03:46.617200 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b35066231a06831ac6cbdd40d94a47f17dbd0bd89c978d8091d18097c6bdc840\": container with ID starting with b35066231a06831ac6cbdd40d94a47f17dbd0bd89c978d8091d18097c6bdc840 not found: ID does not exist" containerID="b35066231a06831ac6cbdd40d94a47f17dbd0bd89c978d8091d18097c6bdc840" Nov 22 11:03:46 crc kubenswrapper[4772]: I1122 11:03:46.617251 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b35066231a06831ac6cbdd40d94a47f17dbd0bd89c978d8091d18097c6bdc840"} err="failed to get container status \"b35066231a06831ac6cbdd40d94a47f17dbd0bd89c978d8091d18097c6bdc840\": rpc error: code = NotFound desc = could not find container \"b35066231a06831ac6cbdd40d94a47f17dbd0bd89c978d8091d18097c6bdc840\": container with ID starting with b35066231a06831ac6cbdd40d94a47f17dbd0bd89c978d8091d18097c6bdc840 not found: ID does not exist" Nov 22 11:03:46 crc kubenswrapper[4772]: I1122 11:03:46.617274 4772 scope.go:117] "RemoveContainer" containerID="8c396a5866de12ccf9e258373d3e13a6be2d4d04b4db9ac1007cdb810eeaa0d5" Nov 22 11:03:46 crc kubenswrapper[4772]: E1122 11:03:46.617540 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c396a5866de12ccf9e258373d3e13a6be2d4d04b4db9ac1007cdb810eeaa0d5\": container with ID starting with 
8c396a5866de12ccf9e258373d3e13a6be2d4d04b4db9ac1007cdb810eeaa0d5 not found: ID does not exist" containerID="8c396a5866de12ccf9e258373d3e13a6be2d4d04b4db9ac1007cdb810eeaa0d5" Nov 22 11:03:46 crc kubenswrapper[4772]: I1122 11:03:46.617562 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c396a5866de12ccf9e258373d3e13a6be2d4d04b4db9ac1007cdb810eeaa0d5"} err="failed to get container status \"8c396a5866de12ccf9e258373d3e13a6be2d4d04b4db9ac1007cdb810eeaa0d5\": rpc error: code = NotFound desc = could not find container \"8c396a5866de12ccf9e258373d3e13a6be2d4d04b4db9ac1007cdb810eeaa0d5\": container with ID starting with 8c396a5866de12ccf9e258373d3e13a6be2d4d04b4db9ac1007cdb810eeaa0d5 not found: ID does not exist" Nov 22 11:03:46 crc kubenswrapper[4772]: I1122 11:03:46.646845 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5eaf9da0-a00f-4251-ae11-31ccc3e237e1-scripts\") pod \"5eaf9da0-a00f-4251-ae11-31ccc3e237e1\" (UID: \"5eaf9da0-a00f-4251-ae11-31ccc3e237e1\") " Nov 22 11:03:46 crc kubenswrapper[4772]: I1122 11:03:46.646890 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5eaf9da0-a00f-4251-ae11-31ccc3e237e1-var-log\") pod \"5eaf9da0-a00f-4251-ae11-31ccc3e237e1\" (UID: \"5eaf9da0-a00f-4251-ae11-31ccc3e237e1\") " Nov 22 11:03:46 crc kubenswrapper[4772]: I1122 11:03:46.646961 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/5eaf9da0-a00f-4251-ae11-31ccc3e237e1-var-lib\") pod \"5eaf9da0-a00f-4251-ae11-31ccc3e237e1\" (UID: \"5eaf9da0-a00f-4251-ae11-31ccc3e237e1\") " Nov 22 11:03:46 crc kubenswrapper[4772]: I1122 11:03:46.646991 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqvzv\" (UniqueName: \"kubernetes.io/projected/5eaf9da0-a00f-4251-ae11-31ccc3e237e1-kube-api-access-bqvzv\") pod \"5eaf9da0-a00f-4251-ae11-31ccc3e237e1\" (UID: \"5eaf9da0-a00f-4251-ae11-31ccc3e237e1\") " Nov 22 11:03:46 crc kubenswrapper[4772]: I1122 11:03:46.647077 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5eaf9da0-a00f-4251-ae11-31ccc3e237e1-var-log" (OuterVolumeSpecName: "var-log") pod "5eaf9da0-a00f-4251-ae11-31ccc3e237e1" (UID: "5eaf9da0-a00f-4251-ae11-31ccc3e237e1"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 11:03:46 crc kubenswrapper[4772]: I1122 11:03:46.647101 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5eaf9da0-a00f-4251-ae11-31ccc3e237e1-var-lib" (OuterVolumeSpecName: "var-lib") pod "5eaf9da0-a00f-4251-ae11-31ccc3e237e1" (UID: "5eaf9da0-a00f-4251-ae11-31ccc3e237e1"). InnerVolumeSpecName "var-lib". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 11:03:46 crc kubenswrapper[4772]: I1122 11:03:46.647099 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5eaf9da0-a00f-4251-ae11-31ccc3e237e1-var-run\") pod \"5eaf9da0-a00f-4251-ae11-31ccc3e237e1\" (UID: \"5eaf9da0-a00f-4251-ae11-31ccc3e237e1\") " Nov 22 11:03:46 crc kubenswrapper[4772]: I1122 11:03:46.647122 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5eaf9da0-a00f-4251-ae11-31ccc3e237e1-var-run" (OuterVolumeSpecName: "var-run") pod "5eaf9da0-a00f-4251-ae11-31ccc3e237e1" (UID: "5eaf9da0-a00f-4251-ae11-31ccc3e237e1"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 11:03:46 crc kubenswrapper[4772]: I1122 11:03:46.647175 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/5eaf9da0-a00f-4251-ae11-31ccc3e237e1-etc-ovs\") pod \"5eaf9da0-a00f-4251-ae11-31ccc3e237e1\" (UID: \"5eaf9da0-a00f-4251-ae11-31ccc3e237e1\") " Nov 22 11:03:46 crc kubenswrapper[4772]: I1122 11:03:46.647261 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5eaf9da0-a00f-4251-ae11-31ccc3e237e1-etc-ovs" (OuterVolumeSpecName: "etc-ovs") pod "5eaf9da0-a00f-4251-ae11-31ccc3e237e1" (UID: "5eaf9da0-a00f-4251-ae11-31ccc3e237e1"). InnerVolumeSpecName "etc-ovs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 11:03:46 crc kubenswrapper[4772]: I1122 11:03:46.647676 4772 reconciler_common.go:293] "Volume detached for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/5eaf9da0-a00f-4251-ae11-31ccc3e237e1-var-lib\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:46 crc kubenswrapper[4772]: I1122 11:03:46.647700 4772 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5eaf9da0-a00f-4251-ae11-31ccc3e237e1-var-run\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:46 crc kubenswrapper[4772]: I1122 11:03:46.647711 4772 reconciler_common.go:293] "Volume detached for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/5eaf9da0-a00f-4251-ae11-31ccc3e237e1-etc-ovs\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:46 crc kubenswrapper[4772]: I1122 11:03:46.647722 4772 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5eaf9da0-a00f-4251-ae11-31ccc3e237e1-var-log\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:46 crc kubenswrapper[4772]: I1122 11:03:46.648087 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5eaf9da0-a00f-4251-ae11-31ccc3e237e1-scripts" (OuterVolumeSpecName: "scripts") pod "5eaf9da0-a00f-4251-ae11-31ccc3e237e1" (UID: "5eaf9da0-a00f-4251-ae11-31ccc3e237e1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 11:03:46 crc kubenswrapper[4772]: I1122 11:03:46.652139 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5eaf9da0-a00f-4251-ae11-31ccc3e237e1-kube-api-access-bqvzv" (OuterVolumeSpecName: "kube-api-access-bqvzv") pod "5eaf9da0-a00f-4251-ae11-31ccc3e237e1" (UID: "5eaf9da0-a00f-4251-ae11-31ccc3e237e1"). InnerVolumeSpecName "kube-api-access-bqvzv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:03:46 crc kubenswrapper[4772]: I1122 11:03:46.749228 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqvzv\" (UniqueName: \"kubernetes.io/projected/5eaf9da0-a00f-4251-ae11-31ccc3e237e1-kube-api-access-bqvzv\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:46 crc kubenswrapper[4772]: I1122 11:03:46.749257 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5eaf9da0-a00f-4251-ae11-31ccc3e237e1-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:46 crc kubenswrapper[4772]: I1122 11:03:46.869562 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-qvtmm"] Nov 22 11:03:46 crc kubenswrapper[4772]: I1122 11:03:46.874654 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ovs-qvtmm"] Nov 22 11:03:47 crc kubenswrapper[4772]: I1122 11:03:47.423232 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5eaf9da0-a00f-4251-ae11-31ccc3e237e1" path="/var/lib/kubelet/pods/5eaf9da0-a00f-4251-ae11-31ccc3e237e1/volumes" Nov 22 11:03:47 crc kubenswrapper[4772]: I1122 11:03:47.566436 4772 generic.go:334] "Generic (PLEG): container finished" podID="354e52a7-830a-43a1-ad15-a13fe2a07222" containerID="aa61d5f2be67c3162272b709160c16d2b8cb7b6652be46a7ec677336065aa1ac" exitCode=137 Nov 22 11:03:47 crc kubenswrapper[4772]: I1122 11:03:47.566532 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"354e52a7-830a-43a1-ad15-a13fe2a07222","Type":"ContainerDied","Data":"aa61d5f2be67c3162272b709160c16d2b8cb7b6652be46a7ec677336065aa1ac"} Nov 22 11:03:48 crc kubenswrapper[4772]: I1122 11:03:48.000518 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Nov 22 11:03:48 crc kubenswrapper[4772]: I1122 11:03:48.168668 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/354e52a7-830a-43a1-ad15-a13fe2a07222-cache\") pod \"354e52a7-830a-43a1-ad15-a13fe2a07222\" (UID: \"354e52a7-830a-43a1-ad15-a13fe2a07222\") " Nov 22 11:03:48 crc kubenswrapper[4772]: I1122 11:03:48.168719 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kl22s\" (UniqueName: \"kubernetes.io/projected/354e52a7-830a-43a1-ad15-a13fe2a07222-kube-api-access-kl22s\") pod \"354e52a7-830a-43a1-ad15-a13fe2a07222\" (UID: \"354e52a7-830a-43a1-ad15-a13fe2a07222\") " Nov 22 11:03:48 crc kubenswrapper[4772]: I1122 11:03:48.168743 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/354e52a7-830a-43a1-ad15-a13fe2a07222-etc-swift\") pod \"354e52a7-830a-43a1-ad15-a13fe2a07222\" (UID: \"354e52a7-830a-43a1-ad15-a13fe2a07222\") " Nov 22 11:03:48 crc kubenswrapper[4772]: I1122 11:03:48.168817 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/354e52a7-830a-43a1-ad15-a13fe2a07222-lock\") pod \"354e52a7-830a-43a1-ad15-a13fe2a07222\" (UID: \"354e52a7-830a-43a1-ad15-a13fe2a07222\") " Nov 22 11:03:48 crc kubenswrapper[4772]: I1122 11:03:48.168859 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"354e52a7-830a-43a1-ad15-a13fe2a07222\" (UID: \"354e52a7-830a-43a1-ad15-a13fe2a07222\") " Nov 22 11:03:48 crc kubenswrapper[4772]: I1122 11:03:48.169381 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/354e52a7-830a-43a1-ad15-a13fe2a07222-cache" (OuterVolumeSpecName: "cache") pod "354e52a7-830a-43a1-ad15-a13fe2a07222" (UID: "354e52a7-830a-43a1-ad15-a13fe2a07222"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:03:48 crc kubenswrapper[4772]: I1122 11:03:48.169600 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/354e52a7-830a-43a1-ad15-a13fe2a07222-lock" (OuterVolumeSpecName: "lock") pod "354e52a7-830a-43a1-ad15-a13fe2a07222" (UID: "354e52a7-830a-43a1-ad15-a13fe2a07222"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:03:48 crc kubenswrapper[4772]: I1122 11:03:48.172996 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/354e52a7-830a-43a1-ad15-a13fe2a07222-kube-api-access-kl22s" (OuterVolumeSpecName: "kube-api-access-kl22s") pod "354e52a7-830a-43a1-ad15-a13fe2a07222" (UID: "354e52a7-830a-43a1-ad15-a13fe2a07222"). InnerVolumeSpecName "kube-api-access-kl22s". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:03:48 crc kubenswrapper[4772]: I1122 11:03:48.176621 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "swift") pod "354e52a7-830a-43a1-ad15-a13fe2a07222" (UID: "354e52a7-830a-43a1-ad15-a13fe2a07222"). InnerVolumeSpecName "local-storage05-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 22 11:03:48 crc kubenswrapper[4772]: I1122 11:03:48.176789 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/354e52a7-830a-43a1-ad15-a13fe2a07222-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "354e52a7-830a-43a1-ad15-a13fe2a07222" (UID: "354e52a7-830a-43a1-ad15-a13fe2a07222"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:03:48 crc kubenswrapper[4772]: I1122 11:03:48.271035 4772 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/354e52a7-830a-43a1-ad15-a13fe2a07222-lock\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:48 crc kubenswrapper[4772]: I1122 11:03:48.271372 4772 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Nov 22 11:03:48 crc kubenswrapper[4772]: I1122 11:03:48.271494 4772 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/354e52a7-830a-43a1-ad15-a13fe2a07222-cache\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:48 crc kubenswrapper[4772]: I1122 11:03:48.271609 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kl22s\" (UniqueName: \"kubernetes.io/projected/354e52a7-830a-43a1-ad15-a13fe2a07222-kube-api-access-kl22s\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:48 crc kubenswrapper[4772]: I1122 11:03:48.271720 4772 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/354e52a7-830a-43a1-ad15-a13fe2a07222-etc-swift\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:48 crc kubenswrapper[4772]: I1122 11:03:48.285206 4772 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Nov 22 11:03:48 crc kubenswrapper[4772]: I1122 11:03:48.373654 4772 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Nov 22 11:03:48 crc kubenswrapper[4772]: I1122 11:03:48.583479 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"354e52a7-830a-43a1-ad15-a13fe2a07222","Type":"ContainerDied","Data":"9fc3b7ecf199fcf85e664ba067edbc8415aed4c149de9e4cb2b2d5bf7ab8f75d"} Nov 22 11:03:48 crc kubenswrapper[4772]: I1122 11:03:48.583539 4772 scope.go:117] "RemoveContainer" containerID="aa61d5f2be67c3162272b709160c16d2b8cb7b6652be46a7ec677336065aa1ac" Nov 22 11:03:48 crc kubenswrapper[4772]: I1122 11:03:48.583620 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Nov 22 11:03:48 crc kubenswrapper[4772]: I1122 11:03:48.613306 4772 scope.go:117] "RemoveContainer" containerID="d6a9ec495836f1834c42245ba492aa4e9ebb76dac50305dd8790c68f739b9277" Nov 22 11:03:48 crc kubenswrapper[4772]: I1122 11:03:48.622950 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Nov 22 11:03:48 crc kubenswrapper[4772]: I1122 11:03:48.628452 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-storage-0"] Nov 22 11:03:48 crc kubenswrapper[4772]: I1122 11:03:48.639959 4772 scope.go:117] "RemoveContainer" containerID="bbe601a3871553a3d7d70b6b470ceebc522ddd2ed4d9823f1a644a888ecc03ff" Nov 22 11:03:48 crc kubenswrapper[4772]: I1122 11:03:48.655855 4772 scope.go:117] "RemoveContainer" containerID="5b75dbfffe8e1c9d76456ff93f1c9c0f4bf16b63767e3e50ca83e8220c802021" Nov 22 11:03:48 crc kubenswrapper[4772]: I1122 11:03:48.672914 4772 scope.go:117] "RemoveContainer" containerID="20bfb81f40cbc7f6e93279c153402c5ff3a9a24099eb68f5d5aa22251bdfd63e" Nov 22 11:03:48 crc kubenswrapper[4772]: I1122 11:03:48.692807 4772 scope.go:117] "RemoveContainer" containerID="2aa38c5be9a613db8f9237edd2803eb9c3a6730dcaecfe363e7fdf603819c43c" Nov 22 11:03:48 crc kubenswrapper[4772]: I1122 11:03:48.722405 4772 scope.go:117] "RemoveContainer" containerID="4bb4a8713445b470473fcae6ce67f357ca8b02ad81def05ee9cb94ebea5ebf50" Nov 22 11:03:48 crc kubenswrapper[4772]: I1122 11:03:48.740833 4772 scope.go:117] "RemoveContainer" containerID="04dbfb695ea067220096d958b7c9f722332cd1273836354417eb1f3cad0efc67" Nov 22 11:03:48 crc kubenswrapper[4772]: I1122 11:03:48.758277 4772 scope.go:117] "RemoveContainer" containerID="dd0dd2fc38d88b49b28baf152ee2dced29cbe3336d9498d4ade9ee3c9adf12ee" Nov 22 11:03:48 crc kubenswrapper[4772]: I1122 11:03:48.775776 4772 scope.go:117] "RemoveContainer" containerID="71199f24f24db6b2a98a516ab206a62b13cd49e1da1c3a11e4c911c568e4f32b" Nov 22 11:03:48 crc kubenswrapper[4772]: I1122 11:03:48.792959 4772 scope.go:117] "RemoveContainer" containerID="d99d87346c3370f3f3fba6fb00e3db55bfc3eab865e86ac6c50b67b5247c7837" Nov 22 11:03:48 crc kubenswrapper[4772]: I1122 11:03:48.809958 4772 scope.go:117] "RemoveContainer" containerID="6aeb1397a245f1d928583c71247392a935b287c4678959e266ee42e4285547dd" Nov 22 11:03:48 crc kubenswrapper[4772]: I1122 11:03:48.827072 4772 scope.go:117] "RemoveContainer" containerID="e643a9463572f69ee79ae91f043796a6db50894ddf2c847ea4435b5d3f1f8d4b" Nov 22 11:03:48 crc kubenswrapper[4772]: I1122 11:03:48.843151 4772 scope.go:117] "RemoveContainer" containerID="05273857f1de6f10d05b451d131edb562b0d717aa0fb09e91569b386fad68432" Nov 22 11:03:48 crc kubenswrapper[4772]: I1122 11:03:48.860677 4772 scope.go:117] "RemoveContainer" containerID="6c148c107a290e32467067529f8845e2cb396a10c79f269871d8b6dfe85c8538" Nov 22 11:03:49 crc kubenswrapper[4772]: I1122 11:03:49.389519 4772 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod25144c09-6edb-4bd3-89b2-99db486e733b"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod25144c09-6edb-4bd3-89b2-99db486e733b] : Timed out while waiting for systemd to remove kubepods-besteffort-pod25144c09_6edb_4bd3_89b2_99db486e733b.slice" Nov 22 11:03:49 crc kubenswrapper[4772]: E1122 11:03:49.389597 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort pod25144c09-6edb-4bd3-89b2-99db486e733b] : unable to destroy cgroup 
paths for cgroup [kubepods besteffort pod25144c09-6edb-4bd3-89b2-99db486e733b] : Timed out while waiting for systemd to remove kubepods-besteffort-pod25144c09_6edb_4bd3_89b2_99db486e733b.slice" pod="openstack/ovn-controller-267ms" podUID="25144c09-6edb-4bd3-89b2-99db486e733b" Nov 22 11:03:49 crc kubenswrapper[4772]: I1122 11:03:49.399421 4772 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod25144c09-6edb-4bd3-89b2-99db486e733b"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod25144c09-6edb-4bd3-89b2-99db486e733b] : Timed out while waiting for systemd to remove kubepods-besteffort-pod25144c09_6edb_4bd3_89b2_99db486e733b.slice" Nov 22 11:03:49 crc kubenswrapper[4772]: I1122 11:03:49.423375 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="354e52a7-830a-43a1-ad15-a13fe2a07222" path="/var/lib/kubelet/pods/354e52a7-830a-43a1-ad15-a13fe2a07222/volumes" Nov 22 11:03:49 crc kubenswrapper[4772]: I1122 11:03:49.601657 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-267ms" Nov 22 11:03:49 crc kubenswrapper[4772]: I1122 11:03:49.630780 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-267ms"] Nov 22 11:03:49 crc kubenswrapper[4772]: I1122 11:03:49.636039 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-267ms"] Nov 22 11:03:51 crc kubenswrapper[4772]: I1122 11:03:51.425630 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25144c09-6edb-4bd3-89b2-99db486e733b" path="/var/lib/kubelet/pods/25144c09-6edb-4bd3-89b2-99db486e733b/volumes" Nov 22 11:03:53 crc kubenswrapper[4772]: I1122 11:03:53.963166 4772 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod464d950a-e1bb-4efb-afdf-37b97a62a42c"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod464d950a-e1bb-4efb-afdf-37b97a62a42c] : Timed out while waiting for systemd to remove kubepods-besteffort-pod464d950a_e1bb_4efb_afdf_37b97a62a42c.slice" Nov 22 11:03:53 crc kubenswrapper[4772]: I1122 11:03:53.989539 4772 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod13c1f859-42ed-484f-88cb-5349a7b64dda"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod13c1f859-42ed-484f-88cb-5349a7b64dda] : Timed out while waiting for systemd to remove kubepods-besteffort-pod13c1f859_42ed_484f_88cb_5349a7b64dda.slice" Nov 22 11:03:54 crc kubenswrapper[4772]: I1122 11:03:54.418086 4772 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","podc4828519-a6ad-4851-b9c2-134a12f373ac"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort podc4828519-a6ad-4851-b9c2-134a12f373ac] : Timed out while waiting for systemd to remove kubepods-besteffort-podc4828519_a6ad_4851_b9c2_134a12f373ac.slice" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.589629 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kh47g"] Nov 22 11:03:56 crc kubenswrapper[4772]: E1122 11:03:56.589969 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5eaf9da0-a00f-4251-ae11-31ccc3e237e1" containerName="ovs-vswitchd" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.589988 4772 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="5eaf9da0-a00f-4251-ae11-31ccc3e237e1" containerName="ovs-vswitchd" Nov 22 11:03:56 crc kubenswrapper[4772]: E1122 11:03:56.590009 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="354e52a7-830a-43a1-ad15-a13fe2a07222" containerName="object-expirer" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.590017 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="354e52a7-830a-43a1-ad15-a13fe2a07222" containerName="object-expirer" Nov 22 11:03:56 crc kubenswrapper[4772]: E1122 11:03:56.590025 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c6ce3fb-4529-4856-a326-bb0e9ea0ae40" containerName="ceilometer-central-agent" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.590033 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c6ce3fb-4529-4856-a326-bb0e9ea0ae40" containerName="ceilometer-central-agent" Nov 22 11:03:56 crc kubenswrapper[4772]: E1122 11:03:56.590060 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea" containerName="rabbitmq" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.590068 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea" containerName="rabbitmq" Nov 22 11:03:56 crc kubenswrapper[4772]: E1122 11:03:56.590077 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="354e52a7-830a-43a1-ad15-a13fe2a07222" containerName="account-server" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.590084 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="354e52a7-830a-43a1-ad15-a13fe2a07222" containerName="account-server" Nov 22 11:03:56 crc kubenswrapper[4772]: E1122 11:03:56.590096 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f74827a-8354-492b-b09d-350768ba912d" containerName="ovn-northd" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.590103 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f74827a-8354-492b-b09d-350768ba912d" containerName="ovn-northd" Nov 22 11:03:56 crc kubenswrapper[4772]: E1122 11:03:56.590114 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c6ce3fb-4529-4856-a326-bb0e9ea0ae40" containerName="proxy-httpd" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.590122 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c6ce3fb-4529-4856-a326-bb0e9ea0ae40" containerName="proxy-httpd" Nov 22 11:03:56 crc kubenswrapper[4772]: E1122 11:03:56.590134 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="354e52a7-830a-43a1-ad15-a13fe2a07222" containerName="account-reaper" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.590142 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="354e52a7-830a-43a1-ad15-a13fe2a07222" containerName="account-reaper" Nov 22 11:03:56 crc kubenswrapper[4772]: E1122 11:03:56.590203 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c6ce3fb-4529-4856-a326-bb0e9ea0ae40" containerName="sg-core" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.590214 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c6ce3fb-4529-4856-a326-bb0e9ea0ae40" containerName="sg-core" Nov 22 11:03:56 crc kubenswrapper[4772]: E1122 11:03:56.590259 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="354e52a7-830a-43a1-ad15-a13fe2a07222" containerName="container-server" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.590267 4772 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="354e52a7-830a-43a1-ad15-a13fe2a07222" containerName="container-server" Nov 22 11:03:56 crc kubenswrapper[4772]: E1122 11:03:56.590277 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50" containerName="glance-log" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.590287 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50" containerName="glance-log" Nov 22 11:03:56 crc kubenswrapper[4772]: E1122 11:03:56.590296 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9a852d5-2258-45b4-9076-95740059eecd" containerName="mysql-bootstrap" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.590304 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9a852d5-2258-45b4-9076-95740059eecd" containerName="mysql-bootstrap" Nov 22 11:03:56 crc kubenswrapper[4772]: E1122 11:03:56.590317 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14ed2945-ef18-49de-9c18-679e011d3df5" containerName="glance-log" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.590325 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="14ed2945-ef18-49de-9c18-679e011d3df5" containerName="glance-log" Nov 22 11:03:56 crc kubenswrapper[4772]: E1122 11:03:56.590334 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50" containerName="glance-httpd" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.590343 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50" containerName="glance-httpd" Nov 22 11:03:56 crc kubenswrapper[4772]: E1122 11:03:56.590358 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="865ca651-4e53-4ac9-946d-31c1e485d91d" containerName="neutron-httpd" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.590366 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="865ca651-4e53-4ac9-946d-31c1e485d91d" containerName="neutron-httpd" Nov 22 11:03:56 crc kubenswrapper[4772]: E1122 11:03:56.590375 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d122410-121a-47cd-9465-e5c6f85cf2b2" containerName="mariadb-account-delete" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.590387 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d122410-121a-47cd-9465-e5c6f85cf2b2" containerName="mariadb-account-delete" Nov 22 11:03:56 crc kubenswrapper[4772]: E1122 11:03:56.590400 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8b92f55-36d8-4358-9b57-734762f225c4" containerName="nova-metadata-metadata" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.590409 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8b92f55-36d8-4358-9b57-734762f225c4" containerName="nova-metadata-metadata" Nov 22 11:03:56 crc kubenswrapper[4772]: E1122 11:03:56.590424 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30007403-085b-4874-88b7-8b27426fd4f7" containerName="mariadb-account-delete" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.590432 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="30007403-085b-4874-88b7-8b27426fd4f7" containerName="mariadb-account-delete" Nov 22 11:03:56 crc kubenswrapper[4772]: E1122 11:03:56.590444 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7" containerName="placement-log" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.590452 4772 
state_mem.go:107] "Deleted CPUSet assignment" podUID="ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7" containerName="placement-log" Nov 22 11:03:56 crc kubenswrapper[4772]: E1122 11:03:56.590462 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17f0d5ca-99e5-47c6-9fdf-1932956cff3e" containerName="cinder-api" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.590470 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="17f0d5ca-99e5-47c6-9fdf-1932956cff3e" containerName="cinder-api" Nov 22 11:03:56 crc kubenswrapper[4772]: E1122 11:03:56.590484 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb612c10-4436-4c79-b990-cbc7b403eed5" containerName="mariadb-account-delete" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.590492 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb612c10-4436-4c79-b990-cbc7b403eed5" containerName="mariadb-account-delete" Nov 22 11:03:56 crc kubenswrapper[4772]: E1122 11:03:56.590509 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86139aa9-cd30-4d97-833e-a26562aebf92" containerName="barbican-api" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.590516 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="86139aa9-cd30-4d97-833e-a26562aebf92" containerName="barbican-api" Nov 22 11:03:56 crc kubenswrapper[4772]: E1122 11:03:56.590532 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9aeb3608-353b-4b44-8797-46affdc587a7" containerName="mariadb-account-delete" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.590540 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="9aeb3608-353b-4b44-8797-46affdc587a7" containerName="mariadb-account-delete" Nov 22 11:03:56 crc kubenswrapper[4772]: E1122 11:03:56.590553 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="865ca651-4e53-4ac9-946d-31c1e485d91d" containerName="neutron-api" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.590561 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="865ca651-4e53-4ac9-946d-31c1e485d91d" containerName="neutron-api" Nov 22 11:03:56 crc kubenswrapper[4772]: E1122 11:03:56.590576 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9a852d5-2258-45b4-9076-95740059eecd" containerName="galera" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.590584 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9a852d5-2258-45b4-9076-95740059eecd" containerName="galera" Nov 22 11:03:56 crc kubenswrapper[4772]: E1122 11:03:56.590598 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5eaf9da0-a00f-4251-ae11-31ccc3e237e1" containerName="ovsdb-server-init" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.590606 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="5eaf9da0-a00f-4251-ae11-31ccc3e237e1" containerName="ovsdb-server-init" Nov 22 11:03:56 crc kubenswrapper[4772]: E1122 11:03:56.590617 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="706b8e5a-87b8-429e-aea7-e7e5f161182f" containerName="mariadb-account-delete" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.590624 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="706b8e5a-87b8-429e-aea7-e7e5f161182f" containerName="mariadb-account-delete" Nov 22 11:03:56 crc kubenswrapper[4772]: E1122 11:03:56.590638 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="354e52a7-830a-43a1-ad15-a13fe2a07222" containerName="account-auditor" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 
11:03:56.590646 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="354e52a7-830a-43a1-ad15-a13fe2a07222" containerName="account-auditor" Nov 22 11:03:56 crc kubenswrapper[4772]: E1122 11:03:56.590656 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="020f49e7-c73f-460c-a068-75051e73cf90" containerName="keystone-api" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.590664 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="020f49e7-c73f-460c-a068-75051e73cf90" containerName="keystone-api" Nov 22 11:03:56 crc kubenswrapper[4772]: E1122 11:03:56.590674 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="354e52a7-830a-43a1-ad15-a13fe2a07222" containerName="object-updater" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.590681 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="354e52a7-830a-43a1-ad15-a13fe2a07222" containerName="object-updater" Nov 22 11:03:56 crc kubenswrapper[4772]: E1122 11:03:56.590691 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddb4a824-3a8a-4287-b206-94832099e15b" containerName="kube-state-metrics" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.590699 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddb4a824-3a8a-4287-b206-94832099e15b" containerName="kube-state-metrics" Nov 22 11:03:56 crc kubenswrapper[4772]: E1122 11:03:56.590709 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c6ce3fb-4529-4856-a326-bb0e9ea0ae40" containerName="ceilometer-notification-agent" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.590717 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c6ce3fb-4529-4856-a326-bb0e9ea0ae40" containerName="ceilometer-notification-agent" Nov 22 11:03:56 crc kubenswrapper[4772]: E1122 11:03:56.590729 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33b26633-94ac-4439-b1ab-ab225d2e562b" containerName="nova-cell0-conductor-conductor" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.590736 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="33b26633-94ac-4439-b1ab-ab225d2e562b" containerName="nova-cell0-conductor-conductor" Nov 22 11:03:56 crc kubenswrapper[4772]: E1122 11:03:56.590749 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0a147b4-4445-4f7b-b22f-97db02340306" containerName="nova-cell1-conductor-conductor" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.590756 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0a147b4-4445-4f7b-b22f-97db02340306" containerName="nova-cell1-conductor-conductor" Nov 22 11:03:56 crc kubenswrapper[4772]: E1122 11:03:56.590767 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="354e52a7-830a-43a1-ad15-a13fe2a07222" containerName="object-auditor" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.590774 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="354e52a7-830a-43a1-ad15-a13fe2a07222" containerName="object-auditor" Nov 22 11:03:56 crc kubenswrapper[4772]: E1122 11:03:56.590783 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e723031c-0772-49f7-ba16-f635ddd53dcc" containerName="mariadb-account-delete" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.590790 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="e723031c-0772-49f7-ba16-f635ddd53dcc" containerName="mariadb-account-delete" Nov 22 11:03:56 crc kubenswrapper[4772]: E1122 11:03:56.590801 4772 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="86139aa9-cd30-4d97-833e-a26562aebf92" containerName="barbican-api-log" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.590808 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="86139aa9-cd30-4d97-833e-a26562aebf92" containerName="barbican-api-log" Nov 22 11:03:56 crc kubenswrapper[4772]: E1122 11:03:56.590821 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56fb678a-814f-4328-8b49-9226512bf10e" containerName="nova-api-log" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.590827 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="56fb678a-814f-4328-8b49-9226512bf10e" containerName="nova-api-log" Nov 22 11:03:56 crc kubenswrapper[4772]: E1122 11:03:56.590835 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fbd9ebd-2c62-4336-9946-792e4b3c83db" containerName="memcached" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.590842 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fbd9ebd-2c62-4336-9946-792e4b3c83db" containerName="memcached" Nov 22 11:03:56 crc kubenswrapper[4772]: E1122 11:03:56.590854 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51f59313-1e0d-4877-9141-c32a7f72f84f" containerName="nova-scheduler-scheduler" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.590861 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="51f59313-1e0d-4877-9141-c32a7f72f84f" containerName="nova-scheduler-scheduler" Nov 22 11:03:56 crc kubenswrapper[4772]: E1122 11:03:56.590874 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="941a38a8-56e0-4061-8891-0cd3815477a4" containerName="mariadb-account-delete" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.590881 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="941a38a8-56e0-4061-8891-0cd3815477a4" containerName="mariadb-account-delete" Nov 22 11:03:56 crc kubenswrapper[4772]: E1122 11:03:56.590892 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ce19f6b-73e1-48b9-810a-f9d97a14fe7b" containerName="setup-container" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.590897 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ce19f6b-73e1-48b9-810a-f9d97a14fe7b" containerName="setup-container" Nov 22 11:03:56 crc kubenswrapper[4772]: E1122 11:03:56.590906 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="354e52a7-830a-43a1-ad15-a13fe2a07222" containerName="container-auditor" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.590912 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="354e52a7-830a-43a1-ad15-a13fe2a07222" containerName="container-auditor" Nov 22 11:03:56 crc kubenswrapper[4772]: E1122 11:03:56.590919 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5eaf9da0-a00f-4251-ae11-31ccc3e237e1" containerName="ovsdb-server" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.590924 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="5eaf9da0-a00f-4251-ae11-31ccc3e237e1" containerName="ovsdb-server" Nov 22 11:03:56 crc kubenswrapper[4772]: E1122 11:03:56.590932 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="354e52a7-830a-43a1-ad15-a13fe2a07222" containerName="rsync" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.590937 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="354e52a7-830a-43a1-ad15-a13fe2a07222" containerName="rsync" Nov 22 11:03:56 crc kubenswrapper[4772]: E1122 11:03:56.590945 4772 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea" containerName="setup-container" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.590950 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea" containerName="setup-container" Nov 22 11:03:56 crc kubenswrapper[4772]: E1122 11:03:56.590958 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="354e52a7-830a-43a1-ad15-a13fe2a07222" containerName="account-replicator" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.590965 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="354e52a7-830a-43a1-ad15-a13fe2a07222" containerName="account-replicator" Nov 22 11:03:56 crc kubenswrapper[4772]: E1122 11:03:56.590975 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="354e52a7-830a-43a1-ad15-a13fe2a07222" containerName="swift-recon-cron" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.590981 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="354e52a7-830a-43a1-ad15-a13fe2a07222" containerName="swift-recon-cron" Nov 22 11:03:56 crc kubenswrapper[4772]: E1122 11:03:56.590989 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8b92f55-36d8-4358-9b57-734762f225c4" containerName="nova-metadata-log" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.590996 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8b92f55-36d8-4358-9b57-734762f225c4" containerName="nova-metadata-log" Nov 22 11:03:56 crc kubenswrapper[4772]: E1122 11:03:56.591006 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ce19f6b-73e1-48b9-810a-f9d97a14fe7b" containerName="rabbitmq" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.591011 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ce19f6b-73e1-48b9-810a-f9d97a14fe7b" containerName="rabbitmq" Nov 22 11:03:56 crc kubenswrapper[4772]: E1122 11:03:56.591020 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17f0d5ca-99e5-47c6-9fdf-1932956cff3e" containerName="cinder-api-log" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.591025 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="17f0d5ca-99e5-47c6-9fdf-1932956cff3e" containerName="cinder-api-log" Nov 22 11:03:56 crc kubenswrapper[4772]: E1122 11:03:56.591033 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="354e52a7-830a-43a1-ad15-a13fe2a07222" containerName="container-updater" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.591038 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="354e52a7-830a-43a1-ad15-a13fe2a07222" containerName="container-updater" Nov 22 11:03:56 crc kubenswrapper[4772]: E1122 11:03:56.591112 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="354e52a7-830a-43a1-ad15-a13fe2a07222" containerName="object-replicator" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.591120 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="354e52a7-830a-43a1-ad15-a13fe2a07222" containerName="object-replicator" Nov 22 11:03:56 crc kubenswrapper[4772]: E1122 11:03:56.591131 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14ed2945-ef18-49de-9c18-679e011d3df5" containerName="glance-httpd" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.591140 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="14ed2945-ef18-49de-9c18-679e011d3df5" containerName="glance-httpd" Nov 22 11:03:56 crc kubenswrapper[4772]: E1122 11:03:56.591179 4772 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="354e52a7-830a-43a1-ad15-a13fe2a07222" containerName="container-replicator" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.591187 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="354e52a7-830a-43a1-ad15-a13fe2a07222" containerName="container-replicator" Nov 22 11:03:56 crc kubenswrapper[4772]: E1122 11:03:56.591198 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f74827a-8354-492b-b09d-350768ba912d" containerName="openstack-network-exporter" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.591206 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f74827a-8354-492b-b09d-350768ba912d" containerName="openstack-network-exporter" Nov 22 11:03:56 crc kubenswrapper[4772]: E1122 11:03:56.591214 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7" containerName="placement-api" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.591220 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7" containerName="placement-api" Nov 22 11:03:56 crc kubenswrapper[4772]: E1122 11:03:56.591229 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="354e52a7-830a-43a1-ad15-a13fe2a07222" containerName="object-server" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.591235 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="354e52a7-830a-43a1-ad15-a13fe2a07222" containerName="object-server" Nov 22 11:03:56 crc kubenswrapper[4772]: E1122 11:03:56.591243 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56fb678a-814f-4328-8b49-9226512bf10e" containerName="nova-api-api" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.591248 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="56fb678a-814f-4328-8b49-9226512bf10e" containerName="nova-api-api" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.591414 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7" containerName="placement-log" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.591425 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="14ed2945-ef18-49de-9c18-679e011d3df5" containerName="glance-log" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.591446 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="30007403-085b-4874-88b7-8b27426fd4f7" containerName="mariadb-account-delete" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.591459 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="354e52a7-830a-43a1-ad15-a13fe2a07222" containerName="container-auditor" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.591465 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="17f0d5ca-99e5-47c6-9fdf-1932956cff3e" containerName="cinder-api-log" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.591472 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="86139aa9-cd30-4d97-833e-a26562aebf92" containerName="barbican-api" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.591480 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="354e52a7-830a-43a1-ad15-a13fe2a07222" containerName="swift-recon-cron" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.591494 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c6ce3fb-4529-4856-a326-bb0e9ea0ae40" containerName="proxy-httpd" Nov 22 11:03:56 crc 
kubenswrapper[4772]: I1122 11:03:56.591504 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="14ed2945-ef18-49de-9c18-679e011d3df5" containerName="glance-httpd" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.591514 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="5eaf9da0-a00f-4251-ae11-31ccc3e237e1" containerName="ovsdb-server" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.591522 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50" containerName="glance-httpd" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.591534 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="86139aa9-cd30-4d97-833e-a26562aebf92" containerName="barbican-api-log" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.591542 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8b92f55-36d8-4358-9b57-734762f225c4" containerName="nova-metadata-metadata" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.591556 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="468ec0b6-bd36-4ff5-afe9-a6aa7bb4f5ea" containerName="rabbitmq" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.591568 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="e723031c-0772-49f7-ba16-f635ddd53dcc" containerName="mariadb-account-delete" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.591580 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="354e52a7-830a-43a1-ad15-a13fe2a07222" containerName="account-reaper" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.591591 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="706b8e5a-87b8-429e-aea7-e7e5f161182f" containerName="mariadb-account-delete" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.591600 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="354e52a7-830a-43a1-ad15-a13fe2a07222" containerName="account-auditor" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.591611 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddb4a824-3a8a-4287-b206-94832099e15b" containerName="kube-state-metrics" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.591622 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="354e52a7-830a-43a1-ad15-a13fe2a07222" containerName="object-replicator" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.591636 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="56fb678a-814f-4328-8b49-9226512bf10e" containerName="nova-api-api" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.591647 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="865ca651-4e53-4ac9-946d-31c1e485d91d" containerName="neutron-api" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.591656 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="33b26633-94ac-4439-b1ab-ab225d2e562b" containerName="nova-cell0-conductor-conductor" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.591666 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="020f49e7-c73f-460c-a068-75051e73cf90" containerName="keystone-api" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.591677 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c6ce3fb-4529-4856-a326-bb0e9ea0ae40" containerName="sg-core" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.591689 4772 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="354e52a7-830a-43a1-ad15-a13fe2a07222" containerName="object-server" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.591700 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="354e52a7-830a-43a1-ad15-a13fe2a07222" containerName="rsync" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.591711 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="941a38a8-56e0-4061-8891-0cd3815477a4" containerName="mariadb-account-delete" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.591721 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8b92f55-36d8-4358-9b57-734762f225c4" containerName="nova-metadata-log" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.591732 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="354e52a7-830a-43a1-ad15-a13fe2a07222" containerName="object-expirer" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.591743 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="354e52a7-830a-43a1-ad15-a13fe2a07222" containerName="object-auditor" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.591752 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="5eaf9da0-a00f-4251-ae11-31ccc3e237e1" containerName="ovs-vswitchd" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.591761 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb612c10-4436-4c79-b990-cbc7b403eed5" containerName="mariadb-account-delete" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.591769 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="354e52a7-830a-43a1-ad15-a13fe2a07222" containerName="container-server" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.591781 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="865ca651-4e53-4ac9-946d-31c1e485d91d" containerName="neutron-httpd" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.591791 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ce19f6b-73e1-48b9-810a-f9d97a14fe7b" containerName="rabbitmq" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.591800 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f74827a-8354-492b-b09d-350768ba912d" containerName="openstack-network-exporter" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.591807 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="51f59313-1e0d-4877-9141-c32a7f72f84f" containerName="nova-scheduler-scheduler" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.591822 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="9aeb3608-353b-4b44-8797-46affdc587a7" containerName="mariadb-account-delete" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.591835 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="93fdf03c-7fe4-4f0f-ac9f-a5ea6783fc50" containerName="glance-log" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.591847 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c6ce3fb-4529-4856-a326-bb0e9ea0ae40" containerName="ceilometer-central-agent" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.591860 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fbd9ebd-2c62-4336-9946-792e4b3c83db" containerName="memcached" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.591868 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="354e52a7-830a-43a1-ad15-a13fe2a07222" 
containerName="container-replicator" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.591874 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0a147b4-4445-4f7b-b22f-97db02340306" containerName="nova-cell1-conductor-conductor" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.591885 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="354e52a7-830a-43a1-ad15-a13fe2a07222" containerName="account-replicator" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.591896 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="354e52a7-830a-43a1-ad15-a13fe2a07222" containerName="account-server" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.591905 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9a852d5-2258-45b4-9076-95740059eecd" containerName="galera" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.591915 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c6ce3fb-4529-4856-a326-bb0e9ea0ae40" containerName="ceilometer-notification-agent" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.591923 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d122410-121a-47cd-9465-e5c6f85cf2b2" containerName="mariadb-account-delete" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.591934 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f74827a-8354-492b-b09d-350768ba912d" containerName="ovn-northd" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.591942 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="354e52a7-830a-43a1-ad15-a13fe2a07222" containerName="container-updater" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.591951 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="354e52a7-830a-43a1-ad15-a13fe2a07222" containerName="object-updater" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.591963 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec4f25b4-f811-4d5a-8d3e-00e8adbd5bc7" containerName="placement-api" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.591970 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="56fb678a-814f-4328-8b49-9226512bf10e" containerName="nova-api-log" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.591980 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="17f0d5ca-99e5-47c6-9fdf-1932956cff3e" containerName="cinder-api" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.593154 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kh47g" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.612857 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kh47g"] Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.717451 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c4e5c22-7201-4d38-9562-9775f618f3e1-utilities\") pod \"certified-operators-kh47g\" (UID: \"3c4e5c22-7201-4d38-9562-9775f618f3e1\") " pod="openshift-marketplace/certified-operators-kh47g" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.717542 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jp4rx\" (UniqueName: \"kubernetes.io/projected/3c4e5c22-7201-4d38-9562-9775f618f3e1-kube-api-access-jp4rx\") pod \"certified-operators-kh47g\" (UID: \"3c4e5c22-7201-4d38-9562-9775f618f3e1\") " pod="openshift-marketplace/certified-operators-kh47g" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.717840 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c4e5c22-7201-4d38-9562-9775f618f3e1-catalog-content\") pod \"certified-operators-kh47g\" (UID: \"3c4e5c22-7201-4d38-9562-9775f618f3e1\") " pod="openshift-marketplace/certified-operators-kh47g" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.819786 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c4e5c22-7201-4d38-9562-9775f618f3e1-catalog-content\") pod \"certified-operators-kh47g\" (UID: \"3c4e5c22-7201-4d38-9562-9775f618f3e1\") " pod="openshift-marketplace/certified-operators-kh47g" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.819861 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c4e5c22-7201-4d38-9562-9775f618f3e1-utilities\") pod \"certified-operators-kh47g\" (UID: \"3c4e5c22-7201-4d38-9562-9775f618f3e1\") " pod="openshift-marketplace/certified-operators-kh47g" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.819908 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jp4rx\" (UniqueName: \"kubernetes.io/projected/3c4e5c22-7201-4d38-9562-9775f618f3e1-kube-api-access-jp4rx\") pod \"certified-operators-kh47g\" (UID: \"3c4e5c22-7201-4d38-9562-9775f618f3e1\") " pod="openshift-marketplace/certified-operators-kh47g" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.820296 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c4e5c22-7201-4d38-9562-9775f618f3e1-catalog-content\") pod \"certified-operators-kh47g\" (UID: \"3c4e5c22-7201-4d38-9562-9775f618f3e1\") " pod="openshift-marketplace/certified-operators-kh47g" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.820540 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c4e5c22-7201-4d38-9562-9775f618f3e1-utilities\") pod \"certified-operators-kh47g\" (UID: \"3c4e5c22-7201-4d38-9562-9775f618f3e1\") " pod="openshift-marketplace/certified-operators-kh47g" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.841156 4772 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-jp4rx\" (UniqueName: \"kubernetes.io/projected/3c4e5c22-7201-4d38-9562-9775f618f3e1-kube-api-access-jp4rx\") pod \"certified-operators-kh47g\" (UID: \"3c4e5c22-7201-4d38-9562-9775f618f3e1\") " pod="openshift-marketplace/certified-operators-kh47g" Nov 22 11:03:56 crc kubenswrapper[4772]: I1122 11:03:56.931812 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kh47g" Nov 22 11:03:57 crc kubenswrapper[4772]: I1122 11:03:57.178635 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kh47g"] Nov 22 11:03:57 crc kubenswrapper[4772]: I1122 11:03:57.691149 4772 generic.go:334] "Generic (PLEG): container finished" podID="3c4e5c22-7201-4d38-9562-9775f618f3e1" containerID="4055a8aebc911b7f49224d4e0db769f0d70b45c4dd361abfa41f5ac40ea029df" exitCode=0 Nov 22 11:03:57 crc kubenswrapper[4772]: I1122 11:03:57.691207 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kh47g" event={"ID":"3c4e5c22-7201-4d38-9562-9775f618f3e1","Type":"ContainerDied","Data":"4055a8aebc911b7f49224d4e0db769f0d70b45c4dd361abfa41f5ac40ea029df"} Nov 22 11:03:57 crc kubenswrapper[4772]: I1122 11:03:57.691437 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kh47g" event={"ID":"3c4e5c22-7201-4d38-9562-9775f618f3e1","Type":"ContainerStarted","Data":"0190afb37cf39e2ecd38e21874ef73250c7f065f95640f1aa15f19f9d2ea5f49"} Nov 22 11:03:58 crc kubenswrapper[4772]: I1122 11:03:58.704334 4772 generic.go:334] "Generic (PLEG): container finished" podID="3c4e5c22-7201-4d38-9562-9775f618f3e1" containerID="2f1d8f88a76152be424204a22d80f92b332ef64ca0625b6212a3f10f0c98f632" exitCode=0 Nov 22 11:03:58 crc kubenswrapper[4772]: I1122 11:03:58.704439 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kh47g" event={"ID":"3c4e5c22-7201-4d38-9562-9775f618f3e1","Type":"ContainerDied","Data":"2f1d8f88a76152be424204a22d80f92b332ef64ca0625b6212a3f10f0c98f632"} Nov 22 11:03:59 crc kubenswrapper[4772]: I1122 11:03:59.719096 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kh47g" event={"ID":"3c4e5c22-7201-4d38-9562-9775f618f3e1","Type":"ContainerStarted","Data":"c7043c6d29a62427892f07883e52590d4e24043c1b7ae77f9de53e20c344a164"} Nov 22 11:04:01 crc kubenswrapper[4772]: I1122 11:04:01.532689 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 11:04:01 crc kubenswrapper[4772]: I1122 11:04:01.533533 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 11:04:05 crc kubenswrapper[4772]: I1122 11:04:05.955243 4772 scope.go:117] "RemoveContainer" containerID="ccf4d7895ba000a2440b35a71263cfe3dbaecea5399c4efbf9b39db555947784" Nov 22 11:04:05 crc kubenswrapper[4772]: I1122 11:04:05.975709 4772 scope.go:117] "RemoveContainer" 
containerID="016163b669ebde3aeefc4073aae297eb488d84964eb236594d2492e583139946" Nov 22 11:04:05 crc kubenswrapper[4772]: I1122 11:04:05.997250 4772 scope.go:117] "RemoveContainer" containerID="c85d9df2828594224901e788e87c8476ae3cdb0ddfb531bc6c92887e8f178a94" Nov 22 11:04:06 crc kubenswrapper[4772]: I1122 11:04:06.019735 4772 scope.go:117] "RemoveContainer" containerID="2d37779b5504f0db6d4ca7ce06d1ae18228fa452bc3e7a1ebaf2109ab95e3d37" Nov 22 11:04:06 crc kubenswrapper[4772]: I1122 11:04:06.038730 4772 scope.go:117] "RemoveContainer" containerID="98cde3628a695695947f910d4b7bcf78b5a6744c0635767c30d7327e627bd98b" Nov 22 11:04:06 crc kubenswrapper[4772]: I1122 11:04:06.065274 4772 scope.go:117] "RemoveContainer" containerID="98e7f008c22226185b9fc4da3d836b571692bf43deee9c747a6c68730a3601a5" Nov 22 11:04:06 crc kubenswrapper[4772]: I1122 11:04:06.087909 4772 scope.go:117] "RemoveContainer" containerID="a439d4617a333c8788694a3ee6b6ab83b29b0e870445341c5c8b15c85d68a363" Nov 22 11:04:06 crc kubenswrapper[4772]: I1122 11:04:06.110369 4772 scope.go:117] "RemoveContainer" containerID="615036dea9b9e2690ff9781db1af8c7a6e8ede28c160b447b036e61977da8a12" Nov 22 11:04:06 crc kubenswrapper[4772]: I1122 11:04:06.126429 4772 scope.go:117] "RemoveContainer" containerID="5a27c5f251697b251ad3b6e3e178de4fa4415aca1c15a4e7c11f74e3fb032c2d" Nov 22 11:04:06 crc kubenswrapper[4772]: I1122 11:04:06.932314 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kh47g" Nov 22 11:04:06 crc kubenswrapper[4772]: I1122 11:04:06.932664 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kh47g" Nov 22 11:04:06 crc kubenswrapper[4772]: I1122 11:04:06.980072 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kh47g" Nov 22 11:04:07 crc kubenswrapper[4772]: I1122 11:04:07.002824 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kh47g" podStartSLOduration=9.380656513 podStartE2EDuration="11.002804345s" podCreationTimestamp="2025-11-22 11:03:56 +0000 UTC" firstStartedPulling="2025-11-22 11:03:57.69269855 +0000 UTC m=+1557.932143044" lastFinishedPulling="2025-11-22 11:03:59.314846392 +0000 UTC m=+1559.554290876" observedRunningTime="2025-11-22 11:03:59.755829842 +0000 UTC m=+1559.995274336" watchObservedRunningTime="2025-11-22 11:04:07.002804345 +0000 UTC m=+1567.242248839" Nov 22 11:04:07 crc kubenswrapper[4772]: I1122 11:04:07.838166 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kh47g" Nov 22 11:04:07 crc kubenswrapper[4772]: I1122 11:04:07.881517 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kh47g"] Nov 22 11:04:09 crc kubenswrapper[4772]: I1122 11:04:09.805483 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kh47g" podUID="3c4e5c22-7201-4d38-9562-9775f618f3e1" containerName="registry-server" containerID="cri-o://c7043c6d29a62427892f07883e52590d4e24043c1b7ae77f9de53e20c344a164" gracePeriod=2 Nov 22 11:04:10 crc kubenswrapper[4772]: I1122 11:04:10.213291 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kh47g" Nov 22 11:04:10 crc kubenswrapper[4772]: I1122 11:04:10.220716 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c4e5c22-7201-4d38-9562-9775f618f3e1-catalog-content\") pod \"3c4e5c22-7201-4d38-9562-9775f618f3e1\" (UID: \"3c4e5c22-7201-4d38-9562-9775f618f3e1\") " Nov 22 11:04:10 crc kubenswrapper[4772]: I1122 11:04:10.220886 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c4e5c22-7201-4d38-9562-9775f618f3e1-utilities\") pod \"3c4e5c22-7201-4d38-9562-9775f618f3e1\" (UID: \"3c4e5c22-7201-4d38-9562-9775f618f3e1\") " Nov 22 11:04:10 crc kubenswrapper[4772]: I1122 11:04:10.220948 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jp4rx\" (UniqueName: \"kubernetes.io/projected/3c4e5c22-7201-4d38-9562-9775f618f3e1-kube-api-access-jp4rx\") pod \"3c4e5c22-7201-4d38-9562-9775f618f3e1\" (UID: \"3c4e5c22-7201-4d38-9562-9775f618f3e1\") " Nov 22 11:04:10 crc kubenswrapper[4772]: I1122 11:04:10.221673 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c4e5c22-7201-4d38-9562-9775f618f3e1-utilities" (OuterVolumeSpecName: "utilities") pod "3c4e5c22-7201-4d38-9562-9775f618f3e1" (UID: "3c4e5c22-7201-4d38-9562-9775f618f3e1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:04:10 crc kubenswrapper[4772]: I1122 11:04:10.226239 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c4e5c22-7201-4d38-9562-9775f618f3e1-kube-api-access-jp4rx" (OuterVolumeSpecName: "kube-api-access-jp4rx") pod "3c4e5c22-7201-4d38-9562-9775f618f3e1" (UID: "3c4e5c22-7201-4d38-9562-9775f618f3e1"). InnerVolumeSpecName "kube-api-access-jp4rx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:04:10 crc kubenswrapper[4772]: I1122 11:04:10.272540 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c4e5c22-7201-4d38-9562-9775f618f3e1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3c4e5c22-7201-4d38-9562-9775f618f3e1" (UID: "3c4e5c22-7201-4d38-9562-9775f618f3e1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:04:10 crc kubenswrapper[4772]: I1122 11:04:10.322382 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c4e5c22-7201-4d38-9562-9775f618f3e1-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 11:04:10 crc kubenswrapper[4772]: I1122 11:04:10.322418 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jp4rx\" (UniqueName: \"kubernetes.io/projected/3c4e5c22-7201-4d38-9562-9775f618f3e1-kube-api-access-jp4rx\") on node \"crc\" DevicePath \"\"" Nov 22 11:04:10 crc kubenswrapper[4772]: I1122 11:04:10.322429 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c4e5c22-7201-4d38-9562-9775f618f3e1-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 11:04:10 crc kubenswrapper[4772]: I1122 11:04:10.815327 4772 generic.go:334] "Generic (PLEG): container finished" podID="3c4e5c22-7201-4d38-9562-9775f618f3e1" containerID="c7043c6d29a62427892f07883e52590d4e24043c1b7ae77f9de53e20c344a164" exitCode=0 Nov 22 11:04:10 crc kubenswrapper[4772]: I1122 11:04:10.815370 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kh47g" event={"ID":"3c4e5c22-7201-4d38-9562-9775f618f3e1","Type":"ContainerDied","Data":"c7043c6d29a62427892f07883e52590d4e24043c1b7ae77f9de53e20c344a164"} Nov 22 11:04:10 crc kubenswrapper[4772]: I1122 11:04:10.815379 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kh47g" Nov 22 11:04:10 crc kubenswrapper[4772]: I1122 11:04:10.815397 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kh47g" event={"ID":"3c4e5c22-7201-4d38-9562-9775f618f3e1","Type":"ContainerDied","Data":"0190afb37cf39e2ecd38e21874ef73250c7f065f95640f1aa15f19f9d2ea5f49"} Nov 22 11:04:10 crc kubenswrapper[4772]: I1122 11:04:10.815414 4772 scope.go:117] "RemoveContainer" containerID="c7043c6d29a62427892f07883e52590d4e24043c1b7ae77f9de53e20c344a164" Nov 22 11:04:10 crc kubenswrapper[4772]: I1122 11:04:10.837926 4772 scope.go:117] "RemoveContainer" containerID="2f1d8f88a76152be424204a22d80f92b332ef64ca0625b6212a3f10f0c98f632" Nov 22 11:04:10 crc kubenswrapper[4772]: I1122 11:04:10.853849 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kh47g"] Nov 22 11:04:10 crc kubenswrapper[4772]: I1122 11:04:10.861487 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kh47g"] Nov 22 11:04:10 crc kubenswrapper[4772]: I1122 11:04:10.871147 4772 scope.go:117] "RemoveContainer" containerID="4055a8aebc911b7f49224d4e0db769f0d70b45c4dd361abfa41f5ac40ea029df" Nov 22 11:04:10 crc kubenswrapper[4772]: I1122 11:04:10.922290 4772 scope.go:117] "RemoveContainer" containerID="c7043c6d29a62427892f07883e52590d4e24043c1b7ae77f9de53e20c344a164" Nov 22 11:04:10 crc kubenswrapper[4772]: E1122 11:04:10.923179 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7043c6d29a62427892f07883e52590d4e24043c1b7ae77f9de53e20c344a164\": container with ID starting with c7043c6d29a62427892f07883e52590d4e24043c1b7ae77f9de53e20c344a164 not found: ID does not exist" containerID="c7043c6d29a62427892f07883e52590d4e24043c1b7ae77f9de53e20c344a164" Nov 22 11:04:10 crc kubenswrapper[4772]: I1122 11:04:10.923206 
4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7043c6d29a62427892f07883e52590d4e24043c1b7ae77f9de53e20c344a164"} err="failed to get container status \"c7043c6d29a62427892f07883e52590d4e24043c1b7ae77f9de53e20c344a164\": rpc error: code = NotFound desc = could not find container \"c7043c6d29a62427892f07883e52590d4e24043c1b7ae77f9de53e20c344a164\": container with ID starting with c7043c6d29a62427892f07883e52590d4e24043c1b7ae77f9de53e20c344a164 not found: ID does not exist" Nov 22 11:04:10 crc kubenswrapper[4772]: I1122 11:04:10.923229 4772 scope.go:117] "RemoveContainer" containerID="2f1d8f88a76152be424204a22d80f92b332ef64ca0625b6212a3f10f0c98f632" Nov 22 11:04:10 crc kubenswrapper[4772]: E1122 11:04:10.923516 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f1d8f88a76152be424204a22d80f92b332ef64ca0625b6212a3f10f0c98f632\": container with ID starting with 2f1d8f88a76152be424204a22d80f92b332ef64ca0625b6212a3f10f0c98f632 not found: ID does not exist" containerID="2f1d8f88a76152be424204a22d80f92b332ef64ca0625b6212a3f10f0c98f632" Nov 22 11:04:10 crc kubenswrapper[4772]: I1122 11:04:10.923561 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f1d8f88a76152be424204a22d80f92b332ef64ca0625b6212a3f10f0c98f632"} err="failed to get container status \"2f1d8f88a76152be424204a22d80f92b332ef64ca0625b6212a3f10f0c98f632\": rpc error: code = NotFound desc = could not find container \"2f1d8f88a76152be424204a22d80f92b332ef64ca0625b6212a3f10f0c98f632\": container with ID starting with 2f1d8f88a76152be424204a22d80f92b332ef64ca0625b6212a3f10f0c98f632 not found: ID does not exist" Nov 22 11:04:10 crc kubenswrapper[4772]: I1122 11:04:10.923592 4772 scope.go:117] "RemoveContainer" containerID="4055a8aebc911b7f49224d4e0db769f0d70b45c4dd361abfa41f5ac40ea029df" Nov 22 11:04:10 crc kubenswrapper[4772]: E1122 11:04:10.923849 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4055a8aebc911b7f49224d4e0db769f0d70b45c4dd361abfa41f5ac40ea029df\": container with ID starting with 4055a8aebc911b7f49224d4e0db769f0d70b45c4dd361abfa41f5ac40ea029df not found: ID does not exist" containerID="4055a8aebc911b7f49224d4e0db769f0d70b45c4dd361abfa41f5ac40ea029df" Nov 22 11:04:10 crc kubenswrapper[4772]: I1122 11:04:10.923881 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4055a8aebc911b7f49224d4e0db769f0d70b45c4dd361abfa41f5ac40ea029df"} err="failed to get container status \"4055a8aebc911b7f49224d4e0db769f0d70b45c4dd361abfa41f5ac40ea029df\": rpc error: code = NotFound desc = could not find container \"4055a8aebc911b7f49224d4e0db769f0d70b45c4dd361abfa41f5ac40ea029df\": container with ID starting with 4055a8aebc911b7f49224d4e0db769f0d70b45c4dd361abfa41f5ac40ea029df not found: ID does not exist" Nov 22 11:04:11 crc kubenswrapper[4772]: I1122 11:04:11.421754 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c4e5c22-7201-4d38-9562-9775f618f3e1" path="/var/lib/kubelet/pods/3c4e5c22-7201-4d38-9562-9775f618f3e1/volumes" Nov 22 11:04:31 crc kubenswrapper[4772]: I1122 11:04:31.533316 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 11:04:31 crc kubenswrapper[4772]: I1122 11:04:31.533947 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 11:05:01 crc kubenswrapper[4772]: I1122 11:05:01.533062 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 11:05:01 crc kubenswrapper[4772]: I1122 11:05:01.533535 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 11:05:01 crc kubenswrapper[4772]: I1122 11:05:01.533584 4772 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" Nov 22 11:05:01 crc kubenswrapper[4772]: I1122 11:05:01.534186 4772 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3976a411c521a8e3e125420aaff2223fb8a2c4167e8b50aefbc7378d6da33709"} pod="openshift-machine-config-operator/machine-config-daemon-wwshd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 11:05:01 crc kubenswrapper[4772]: I1122 11:05:01.534244 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" containerID="cri-o://3976a411c521a8e3e125420aaff2223fb8a2c4167e8b50aefbc7378d6da33709" gracePeriod=600 Nov 22 11:05:01 crc kubenswrapper[4772]: E1122 11:05:01.675132 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:05:02 crc kubenswrapper[4772]: I1122 11:05:02.293343 4772 generic.go:334] "Generic (PLEG): container finished" podID="2386c238-461f-4956-940f-ac3c26eb052e" containerID="3976a411c521a8e3e125420aaff2223fb8a2c4167e8b50aefbc7378d6da33709" exitCode=0 Nov 22 11:05:02 crc kubenswrapper[4772]: I1122 11:05:02.293392 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" event={"ID":"2386c238-461f-4956-940f-ac3c26eb052e","Type":"ContainerDied","Data":"3976a411c521a8e3e125420aaff2223fb8a2c4167e8b50aefbc7378d6da33709"} Nov 22 11:05:02 crc kubenswrapper[4772]: I1122 11:05:02.293427 4772 scope.go:117] "RemoveContainer" containerID="95c963c954cabecad461116172cc9ea88ff81fed386024819164050fa8d713ce" Nov 22 11:05:02 crc kubenswrapper[4772]: I1122 
11:05:02.294006 4772 scope.go:117] "RemoveContainer" containerID="3976a411c521a8e3e125420aaff2223fb8a2c4167e8b50aefbc7378d6da33709" Nov 22 11:05:02 crc kubenswrapper[4772]: E1122 11:05:02.294307 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:05:06 crc kubenswrapper[4772]: I1122 11:05:06.357077 4772 scope.go:117] "RemoveContainer" containerID="3c4482e8b22e1ee45e4019fe7ed707b6cf62746e3b8a3874e155bfa807d1bfbe" Nov 22 11:05:06 crc kubenswrapper[4772]: I1122 11:05:06.380524 4772 scope.go:117] "RemoveContainer" containerID="56a525a9356c41405cf6232508a4af9e3b589cb3a12221c13f572a5936890d76" Nov 22 11:05:06 crc kubenswrapper[4772]: I1122 11:05:06.402280 4772 scope.go:117] "RemoveContainer" containerID="cfe11706a0850a328cd5f9163468f6f13ff412c16e00bdd5d4c3a82bca60e5d7" Nov 22 11:05:06 crc kubenswrapper[4772]: I1122 11:05:06.428140 4772 scope.go:117] "RemoveContainer" containerID="317d3b7bdcfc600aafe9a478674845c1d53093f09801f5b8a951fd84549db2a1" Nov 22 11:05:06 crc kubenswrapper[4772]: I1122 11:05:06.449007 4772 scope.go:117] "RemoveContainer" containerID="497163a333b7d91b1c734f1d6710d16007dc40d4076faa300e1f8ea20b04136f" Nov 22 11:05:06 crc kubenswrapper[4772]: I1122 11:05:06.496370 4772 scope.go:117] "RemoveContainer" containerID="909a2fa4b2deee761e0eb8564a7f465913c706c02a0b3226886d10405cca066b" Nov 22 11:05:06 crc kubenswrapper[4772]: I1122 11:05:06.516101 4772 scope.go:117] "RemoveContainer" containerID="07c2563361cad796b9ee4cc769b13a47e6b055bb75f24278b879c0e2e480714e" Nov 22 11:05:06 crc kubenswrapper[4772]: I1122 11:05:06.550814 4772 scope.go:117] "RemoveContainer" containerID="c1d39690183ac7da3f6bb73a2b6023178bcc32f7a86f717e2c779a9bc9ab7f54" Nov 22 11:05:06 crc kubenswrapper[4772]: I1122 11:05:06.571083 4772 scope.go:117] "RemoveContainer" containerID="b7f72ba6b3c20b7717ac1f766f0474861db16d73d4738ef463c67b77caf605a8" Nov 22 11:05:06 crc kubenswrapper[4772]: I1122 11:05:06.601990 4772 scope.go:117] "RemoveContainer" containerID="fd5d4abc41639900eb146687503de734a02b0f4b820d1f2e011edaa75a39ea55" Nov 22 11:05:06 crc kubenswrapper[4772]: I1122 11:05:06.627694 4772 scope.go:117] "RemoveContainer" containerID="308573355d3c23e3680c3cf9e647dd107983fe8aae70c703e63d2815a1d712b3" Nov 22 11:05:06 crc kubenswrapper[4772]: I1122 11:05:06.659644 4772 scope.go:117] "RemoveContainer" containerID="78a2384ff1affb6057d0b82a09784a3ce79ec68a40fd733f4edb45b280fa6098" Nov 22 11:05:06 crc kubenswrapper[4772]: I1122 11:05:06.677322 4772 scope.go:117] "RemoveContainer" containerID="65405fae45c3265128be3caa906524cdbe420cae719d3d586309d72fac170304" Nov 22 11:05:06 crc kubenswrapper[4772]: I1122 11:05:06.715873 4772 scope.go:117] "RemoveContainer" containerID="cdc8584e7921c9e3e8e90db22f83100d41cfbf8055a0f4ffb89bf16b487bb3f9" Nov 22 11:05:06 crc kubenswrapper[4772]: I1122 11:05:06.731973 4772 scope.go:117] "RemoveContainer" containerID="9918d8c3fe9277db5faff872fe942b44186f5d9bd5dda4e9135f2737495ab101" Nov 22 11:05:06 crc kubenswrapper[4772]: I1122 11:05:06.766111 4772 scope.go:117] "RemoveContainer" containerID="eac157ae6a2d31f928615483a5dac165500ea2cfda55fe2b11467adfdab1e36b" Nov 22 11:05:06 crc kubenswrapper[4772]: I1122 11:05:06.795565 
4772 scope.go:117] "RemoveContainer" containerID="c6c1399307d1d09aaa50d08905f62ddc620cbd59c21efd5f9dba60c8c7eef11e" Nov 22 11:05:06 crc kubenswrapper[4772]: I1122 11:05:06.816971 4772 scope.go:117] "RemoveContainer" containerID="6d2c4827d4cb49d5883df31ba20e437493bf839f2d63f4c0a3fdbedc0e23ec2c" Nov 22 11:05:06 crc kubenswrapper[4772]: I1122 11:05:06.836773 4772 scope.go:117] "RemoveContainer" containerID="ddeb1d6df01dfd0c17fe899a1819de93c2d30aa66d19616fedb322be0b74e60d" Nov 22 11:05:06 crc kubenswrapper[4772]: I1122 11:05:06.866637 4772 scope.go:117] "RemoveContainer" containerID="eb40739d15b0abd59f1850c3167aabb90110e5977c58733f572575bdc3e9623f" Nov 22 11:05:06 crc kubenswrapper[4772]: I1122 11:05:06.906748 4772 scope.go:117] "RemoveContainer" containerID="bf3d996a1f9a43837e5f7fe9fc44fb298b29ccf7aa7545bca094b47b3b199409" Nov 22 11:05:06 crc kubenswrapper[4772]: I1122 11:05:06.930362 4772 scope.go:117] "RemoveContainer" containerID="56a8b954082c6812db97a25ba9fd58695f46ad8ff64c97e92509686b817d5837" Nov 22 11:05:11 crc kubenswrapper[4772]: I1122 11:05:11.173030 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-f2hxp"] Nov 22 11:05:11 crc kubenswrapper[4772]: E1122 11:05:11.174160 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c4e5c22-7201-4d38-9562-9775f618f3e1" containerName="registry-server" Nov 22 11:05:11 crc kubenswrapper[4772]: I1122 11:05:11.174181 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c4e5c22-7201-4d38-9562-9775f618f3e1" containerName="registry-server" Nov 22 11:05:11 crc kubenswrapper[4772]: E1122 11:05:11.174200 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c4e5c22-7201-4d38-9562-9775f618f3e1" containerName="extract-utilities" Nov 22 11:05:11 crc kubenswrapper[4772]: I1122 11:05:11.174228 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c4e5c22-7201-4d38-9562-9775f618f3e1" containerName="extract-utilities" Nov 22 11:05:11 crc kubenswrapper[4772]: E1122 11:05:11.174267 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c4e5c22-7201-4d38-9562-9775f618f3e1" containerName="extract-content" Nov 22 11:05:11 crc kubenswrapper[4772]: I1122 11:05:11.174275 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c4e5c22-7201-4d38-9562-9775f618f3e1" containerName="extract-content" Nov 22 11:05:11 crc kubenswrapper[4772]: I1122 11:05:11.174480 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c4e5c22-7201-4d38-9562-9775f618f3e1" containerName="registry-server" Nov 22 11:05:11 crc kubenswrapper[4772]: I1122 11:05:11.175868 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f2hxp" Nov 22 11:05:11 crc kubenswrapper[4772]: I1122 11:05:11.177591 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f2hxp"] Nov 22 11:05:11 crc kubenswrapper[4772]: I1122 11:05:11.270667 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smn72\" (UniqueName: \"kubernetes.io/projected/6853c19a-18f1-46ff-9bf2-539a1ce389e5-kube-api-access-smn72\") pod \"redhat-marketplace-f2hxp\" (UID: \"6853c19a-18f1-46ff-9bf2-539a1ce389e5\") " pod="openshift-marketplace/redhat-marketplace-f2hxp" Nov 22 11:05:11 crc kubenswrapper[4772]: I1122 11:05:11.270845 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6853c19a-18f1-46ff-9bf2-539a1ce389e5-utilities\") pod \"redhat-marketplace-f2hxp\" (UID: \"6853c19a-18f1-46ff-9bf2-539a1ce389e5\") " pod="openshift-marketplace/redhat-marketplace-f2hxp" Nov 22 11:05:11 crc kubenswrapper[4772]: I1122 11:05:11.270928 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6853c19a-18f1-46ff-9bf2-539a1ce389e5-catalog-content\") pod \"redhat-marketplace-f2hxp\" (UID: \"6853c19a-18f1-46ff-9bf2-539a1ce389e5\") " pod="openshift-marketplace/redhat-marketplace-f2hxp" Nov 22 11:05:11 crc kubenswrapper[4772]: I1122 11:05:11.379515 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smn72\" (UniqueName: \"kubernetes.io/projected/6853c19a-18f1-46ff-9bf2-539a1ce389e5-kube-api-access-smn72\") pod \"redhat-marketplace-f2hxp\" (UID: \"6853c19a-18f1-46ff-9bf2-539a1ce389e5\") " pod="openshift-marketplace/redhat-marketplace-f2hxp" Nov 22 11:05:11 crc kubenswrapper[4772]: I1122 11:05:11.379572 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6853c19a-18f1-46ff-9bf2-539a1ce389e5-utilities\") pod \"redhat-marketplace-f2hxp\" (UID: \"6853c19a-18f1-46ff-9bf2-539a1ce389e5\") " pod="openshift-marketplace/redhat-marketplace-f2hxp" Nov 22 11:05:11 crc kubenswrapper[4772]: I1122 11:05:11.379611 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6853c19a-18f1-46ff-9bf2-539a1ce389e5-catalog-content\") pod \"redhat-marketplace-f2hxp\" (UID: \"6853c19a-18f1-46ff-9bf2-539a1ce389e5\") " pod="openshift-marketplace/redhat-marketplace-f2hxp" Nov 22 11:05:11 crc kubenswrapper[4772]: I1122 11:05:11.380164 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6853c19a-18f1-46ff-9bf2-539a1ce389e5-catalog-content\") pod \"redhat-marketplace-f2hxp\" (UID: \"6853c19a-18f1-46ff-9bf2-539a1ce389e5\") " pod="openshift-marketplace/redhat-marketplace-f2hxp" Nov 22 11:05:11 crc kubenswrapper[4772]: I1122 11:05:11.380195 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6853c19a-18f1-46ff-9bf2-539a1ce389e5-utilities\") pod \"redhat-marketplace-f2hxp\" (UID: \"6853c19a-18f1-46ff-9bf2-539a1ce389e5\") " pod="openshift-marketplace/redhat-marketplace-f2hxp" Nov 22 11:05:11 crc kubenswrapper[4772]: I1122 11:05:11.404129 4772 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-smn72\" (UniqueName: \"kubernetes.io/projected/6853c19a-18f1-46ff-9bf2-539a1ce389e5-kube-api-access-smn72\") pod \"redhat-marketplace-f2hxp\" (UID: \"6853c19a-18f1-46ff-9bf2-539a1ce389e5\") " pod="openshift-marketplace/redhat-marketplace-f2hxp" Nov 22 11:05:11 crc kubenswrapper[4772]: I1122 11:05:11.499545 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f2hxp" Nov 22 11:05:11 crc kubenswrapper[4772]: I1122 11:05:11.921314 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f2hxp"] Nov 22 11:05:12 crc kubenswrapper[4772]: I1122 11:05:12.395923 4772 generic.go:334] "Generic (PLEG): container finished" podID="6853c19a-18f1-46ff-9bf2-539a1ce389e5" containerID="8b3d1ba2c59bbcf2494e31439452df16275ec0bd36df1eb6f8b9799080ea5f4e" exitCode=0 Nov 22 11:05:12 crc kubenswrapper[4772]: I1122 11:05:12.395967 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f2hxp" event={"ID":"6853c19a-18f1-46ff-9bf2-539a1ce389e5","Type":"ContainerDied","Data":"8b3d1ba2c59bbcf2494e31439452df16275ec0bd36df1eb6f8b9799080ea5f4e"} Nov 22 11:05:12 crc kubenswrapper[4772]: I1122 11:05:12.396325 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f2hxp" event={"ID":"6853c19a-18f1-46ff-9bf2-539a1ce389e5","Type":"ContainerStarted","Data":"0abf0574b2b7c82effd55a4d7578438ea3dee9749fc54dee6e1988d29e4580f1"} Nov 22 11:05:13 crc kubenswrapper[4772]: I1122 11:05:13.407824 4772 generic.go:334] "Generic (PLEG): container finished" podID="6853c19a-18f1-46ff-9bf2-539a1ce389e5" containerID="db7f1befa91c4b1a5646b0ea521484db7799ca03952e4a09de96aeef460c9ac7" exitCode=0 Nov 22 11:05:13 crc kubenswrapper[4772]: I1122 11:05:13.407868 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f2hxp" event={"ID":"6853c19a-18f1-46ff-9bf2-539a1ce389e5","Type":"ContainerDied","Data":"db7f1befa91c4b1a5646b0ea521484db7799ca03952e4a09de96aeef460c9ac7"} Nov 22 11:05:14 crc kubenswrapper[4772]: I1122 11:05:14.418281 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f2hxp" event={"ID":"6853c19a-18f1-46ff-9bf2-539a1ce389e5","Type":"ContainerStarted","Data":"651cfe5bed217e71960926fc5d3d99c43e9d12da1f0409b1eeef3f651840ee56"} Nov 22 11:05:14 crc kubenswrapper[4772]: I1122 11:05:14.440072 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-f2hxp" podStartSLOduration=1.9324648770000001 podStartE2EDuration="3.440029249s" podCreationTimestamp="2025-11-22 11:05:11 +0000 UTC" firstStartedPulling="2025-11-22 11:05:12.397431464 +0000 UTC m=+1632.636875958" lastFinishedPulling="2025-11-22 11:05:13.904995836 +0000 UTC m=+1634.144440330" observedRunningTime="2025-11-22 11:05:14.433534802 +0000 UTC m=+1634.672979296" watchObservedRunningTime="2025-11-22 11:05:14.440029249 +0000 UTC m=+1634.679473743" Nov 22 11:05:17 crc kubenswrapper[4772]: I1122 11:05:17.413363 4772 scope.go:117] "RemoveContainer" containerID="3976a411c521a8e3e125420aaff2223fb8a2c4167e8b50aefbc7378d6da33709" Nov 22 11:05:17 crc kubenswrapper[4772]: E1122 11:05:17.413853 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:05:21 crc kubenswrapper[4772]: I1122 11:05:21.500820 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-f2hxp" Nov 22 11:05:21 crc kubenswrapper[4772]: I1122 11:05:21.502108 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-f2hxp" Nov 22 11:05:21 crc kubenswrapper[4772]: I1122 11:05:21.565284 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-f2hxp" Nov 22 11:05:22 crc kubenswrapper[4772]: I1122 11:05:22.515938 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-f2hxp" Nov 22 11:05:22 crc kubenswrapper[4772]: I1122 11:05:22.559495 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-f2hxp"] Nov 22 11:05:24 crc kubenswrapper[4772]: I1122 11:05:24.490860 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-f2hxp" podUID="6853c19a-18f1-46ff-9bf2-539a1ce389e5" containerName="registry-server" containerID="cri-o://651cfe5bed217e71960926fc5d3d99c43e9d12da1f0409b1eeef3f651840ee56" gracePeriod=2 Nov 22 11:05:24 crc kubenswrapper[4772]: I1122 11:05:24.846178 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f2hxp" Nov 22 11:05:24 crc kubenswrapper[4772]: I1122 11:05:24.973489 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6853c19a-18f1-46ff-9bf2-539a1ce389e5-utilities\") pod \"6853c19a-18f1-46ff-9bf2-539a1ce389e5\" (UID: \"6853c19a-18f1-46ff-9bf2-539a1ce389e5\") " Nov 22 11:05:24 crc kubenswrapper[4772]: I1122 11:05:24.973617 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smn72\" (UniqueName: \"kubernetes.io/projected/6853c19a-18f1-46ff-9bf2-539a1ce389e5-kube-api-access-smn72\") pod \"6853c19a-18f1-46ff-9bf2-539a1ce389e5\" (UID: \"6853c19a-18f1-46ff-9bf2-539a1ce389e5\") " Nov 22 11:05:24 crc kubenswrapper[4772]: I1122 11:05:24.973652 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6853c19a-18f1-46ff-9bf2-539a1ce389e5-catalog-content\") pod \"6853c19a-18f1-46ff-9bf2-539a1ce389e5\" (UID: \"6853c19a-18f1-46ff-9bf2-539a1ce389e5\") " Nov 22 11:05:24 crc kubenswrapper[4772]: I1122 11:05:24.974550 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6853c19a-18f1-46ff-9bf2-539a1ce389e5-utilities" (OuterVolumeSpecName: "utilities") pod "6853c19a-18f1-46ff-9bf2-539a1ce389e5" (UID: "6853c19a-18f1-46ff-9bf2-539a1ce389e5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:05:24 crc kubenswrapper[4772]: I1122 11:05:24.981696 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6853c19a-18f1-46ff-9bf2-539a1ce389e5-kube-api-access-smn72" (OuterVolumeSpecName: "kube-api-access-smn72") pod "6853c19a-18f1-46ff-9bf2-539a1ce389e5" (UID: "6853c19a-18f1-46ff-9bf2-539a1ce389e5"). InnerVolumeSpecName "kube-api-access-smn72". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:05:24 crc kubenswrapper[4772]: I1122 11:05:24.992801 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6853c19a-18f1-46ff-9bf2-539a1ce389e5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6853c19a-18f1-46ff-9bf2-539a1ce389e5" (UID: "6853c19a-18f1-46ff-9bf2-539a1ce389e5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:05:25 crc kubenswrapper[4772]: I1122 11:05:25.075857 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6853c19a-18f1-46ff-9bf2-539a1ce389e5-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 11:05:25 crc kubenswrapper[4772]: I1122 11:05:25.075886 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smn72\" (UniqueName: \"kubernetes.io/projected/6853c19a-18f1-46ff-9bf2-539a1ce389e5-kube-api-access-smn72\") on node \"crc\" DevicePath \"\"" Nov 22 11:05:25 crc kubenswrapper[4772]: I1122 11:05:25.075898 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6853c19a-18f1-46ff-9bf2-539a1ce389e5-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 11:05:25 crc kubenswrapper[4772]: I1122 11:05:25.498734 4772 generic.go:334] "Generic (PLEG): container finished" podID="6853c19a-18f1-46ff-9bf2-539a1ce389e5" containerID="651cfe5bed217e71960926fc5d3d99c43e9d12da1f0409b1eeef3f651840ee56" exitCode=0 Nov 22 11:05:25 crc kubenswrapper[4772]: I1122 11:05:25.498777 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f2hxp" event={"ID":"6853c19a-18f1-46ff-9bf2-539a1ce389e5","Type":"ContainerDied","Data":"651cfe5bed217e71960926fc5d3d99c43e9d12da1f0409b1eeef3f651840ee56"} Nov 22 11:05:25 crc kubenswrapper[4772]: I1122 11:05:25.498804 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f2hxp" event={"ID":"6853c19a-18f1-46ff-9bf2-539a1ce389e5","Type":"ContainerDied","Data":"0abf0574b2b7c82effd55a4d7578438ea3dee9749fc54dee6e1988d29e4580f1"} Nov 22 11:05:25 crc kubenswrapper[4772]: I1122 11:05:25.498821 4772 scope.go:117] "RemoveContainer" containerID="651cfe5bed217e71960926fc5d3d99c43e9d12da1f0409b1eeef3f651840ee56" Nov 22 11:05:25 crc kubenswrapper[4772]: I1122 11:05:25.498929 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f2hxp" Nov 22 11:05:25 crc kubenswrapper[4772]: I1122 11:05:25.518907 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-f2hxp"] Nov 22 11:05:25 crc kubenswrapper[4772]: I1122 11:05:25.523494 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-f2hxp"] Nov 22 11:05:25 crc kubenswrapper[4772]: I1122 11:05:25.527491 4772 scope.go:117] "RemoveContainer" containerID="db7f1befa91c4b1a5646b0ea521484db7799ca03952e4a09de96aeef460c9ac7" Nov 22 11:05:25 crc kubenswrapper[4772]: I1122 11:05:25.545893 4772 scope.go:117] "RemoveContainer" containerID="8b3d1ba2c59bbcf2494e31439452df16275ec0bd36df1eb6f8b9799080ea5f4e" Nov 22 11:05:25 crc kubenswrapper[4772]: I1122 11:05:25.571153 4772 scope.go:117] "RemoveContainer" containerID="651cfe5bed217e71960926fc5d3d99c43e9d12da1f0409b1eeef3f651840ee56" Nov 22 11:05:25 crc kubenswrapper[4772]: E1122 11:05:25.571617 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"651cfe5bed217e71960926fc5d3d99c43e9d12da1f0409b1eeef3f651840ee56\": container with ID starting with 651cfe5bed217e71960926fc5d3d99c43e9d12da1f0409b1eeef3f651840ee56 not found: ID does not exist" containerID="651cfe5bed217e71960926fc5d3d99c43e9d12da1f0409b1eeef3f651840ee56" Nov 22 11:05:25 crc kubenswrapper[4772]: I1122 11:05:25.571655 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"651cfe5bed217e71960926fc5d3d99c43e9d12da1f0409b1eeef3f651840ee56"} err="failed to get container status \"651cfe5bed217e71960926fc5d3d99c43e9d12da1f0409b1eeef3f651840ee56\": rpc error: code = NotFound desc = could not find container \"651cfe5bed217e71960926fc5d3d99c43e9d12da1f0409b1eeef3f651840ee56\": container with ID starting with 651cfe5bed217e71960926fc5d3d99c43e9d12da1f0409b1eeef3f651840ee56 not found: ID does not exist" Nov 22 11:05:25 crc kubenswrapper[4772]: I1122 11:05:25.571714 4772 scope.go:117] "RemoveContainer" containerID="db7f1befa91c4b1a5646b0ea521484db7799ca03952e4a09de96aeef460c9ac7" Nov 22 11:05:25 crc kubenswrapper[4772]: E1122 11:05:25.572013 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db7f1befa91c4b1a5646b0ea521484db7799ca03952e4a09de96aeef460c9ac7\": container with ID starting with db7f1befa91c4b1a5646b0ea521484db7799ca03952e4a09de96aeef460c9ac7 not found: ID does not exist" containerID="db7f1befa91c4b1a5646b0ea521484db7799ca03952e4a09de96aeef460c9ac7" Nov 22 11:05:25 crc kubenswrapper[4772]: I1122 11:05:25.572040 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db7f1befa91c4b1a5646b0ea521484db7799ca03952e4a09de96aeef460c9ac7"} err="failed to get container status \"db7f1befa91c4b1a5646b0ea521484db7799ca03952e4a09de96aeef460c9ac7\": rpc error: code = NotFound desc = could not find container \"db7f1befa91c4b1a5646b0ea521484db7799ca03952e4a09de96aeef460c9ac7\": container with ID starting with db7f1befa91c4b1a5646b0ea521484db7799ca03952e4a09de96aeef460c9ac7 not found: ID does not exist" Nov 22 11:05:25 crc kubenswrapper[4772]: I1122 11:05:25.572073 4772 scope.go:117] "RemoveContainer" containerID="8b3d1ba2c59bbcf2494e31439452df16275ec0bd36df1eb6f8b9799080ea5f4e" Nov 22 11:05:25 crc kubenswrapper[4772]: E1122 11:05:25.572327 4772 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"8b3d1ba2c59bbcf2494e31439452df16275ec0bd36df1eb6f8b9799080ea5f4e\": container with ID starting with 8b3d1ba2c59bbcf2494e31439452df16275ec0bd36df1eb6f8b9799080ea5f4e not found: ID does not exist" containerID="8b3d1ba2c59bbcf2494e31439452df16275ec0bd36df1eb6f8b9799080ea5f4e" Nov 22 11:05:25 crc kubenswrapper[4772]: I1122 11:05:25.572352 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b3d1ba2c59bbcf2494e31439452df16275ec0bd36df1eb6f8b9799080ea5f4e"} err="failed to get container status \"8b3d1ba2c59bbcf2494e31439452df16275ec0bd36df1eb6f8b9799080ea5f4e\": rpc error: code = NotFound desc = could not find container \"8b3d1ba2c59bbcf2494e31439452df16275ec0bd36df1eb6f8b9799080ea5f4e\": container with ID starting with 8b3d1ba2c59bbcf2494e31439452df16275ec0bd36df1eb6f8b9799080ea5f4e not found: ID does not exist" Nov 22 11:05:27 crc kubenswrapper[4772]: I1122 11:05:27.422343 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6853c19a-18f1-46ff-9bf2-539a1ce389e5" path="/var/lib/kubelet/pods/6853c19a-18f1-46ff-9bf2-539a1ce389e5/volumes" Nov 22 11:05:32 crc kubenswrapper[4772]: I1122 11:05:32.413135 4772 scope.go:117] "RemoveContainer" containerID="3976a411c521a8e3e125420aaff2223fb8a2c4167e8b50aefbc7378d6da33709" Nov 22 11:05:32 crc kubenswrapper[4772]: E1122 11:05:32.413589 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:05:47 crc kubenswrapper[4772]: I1122 11:05:47.414224 4772 scope.go:117] "RemoveContainer" containerID="3976a411c521a8e3e125420aaff2223fb8a2c4167e8b50aefbc7378d6da33709" Nov 22 11:05:47 crc kubenswrapper[4772]: E1122 11:05:47.415146 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:06:01 crc kubenswrapper[4772]: I1122 11:06:01.413694 4772 scope.go:117] "RemoveContainer" containerID="3976a411c521a8e3e125420aaff2223fb8a2c4167e8b50aefbc7378d6da33709" Nov 22 11:06:01 crc kubenswrapper[4772]: E1122 11:06:01.414428 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:06:07 crc kubenswrapper[4772]: I1122 11:06:07.283090 4772 scope.go:117] "RemoveContainer" containerID="c3ff3bf5f8075eb82c99bb11cb292366102815d9d517ae626765713d727efaff" Nov 22 11:06:07 crc kubenswrapper[4772]: I1122 11:06:07.312999 4772 scope.go:117] "RemoveContainer" 
containerID="deb156a613d4b361e96cb60957d663ef36ea4eb59d8168309e1f3c8cbbf8914f" Nov 22 11:06:07 crc kubenswrapper[4772]: I1122 11:06:07.329710 4772 scope.go:117] "RemoveContainer" containerID="844e023e7c3fccc854525c2a694623fa1a3482bbdd36a977a83ba8eb6cf3ab4b" Nov 22 11:06:07 crc kubenswrapper[4772]: I1122 11:06:07.348129 4772 scope.go:117] "RemoveContainer" containerID="e4ca72a284735f20e63c4ba7e6d7fe626b29ac206adfeb65a675df331a59cb14" Nov 22 11:06:07 crc kubenswrapper[4772]: I1122 11:06:07.365850 4772 scope.go:117] "RemoveContainer" containerID="b95964ad218161628ba9d6df7f28f6d82327565a7c20500e4082e1b8d7b0c9c3" Nov 22 11:06:07 crc kubenswrapper[4772]: I1122 11:06:07.384123 4772 scope.go:117] "RemoveContainer" containerID="e40f637f7b43ff915a3b153426def590c2d29d02ddeac886a688e0d9bf7a29a8" Nov 22 11:06:07 crc kubenswrapper[4772]: I1122 11:06:07.403346 4772 scope.go:117] "RemoveContainer" containerID="5f3fb2bca327167ec8cd3bb459744d887ba97a57e65c5d2bd3152cd9834ea040" Nov 22 11:06:07 crc kubenswrapper[4772]: I1122 11:06:07.443170 4772 scope.go:117] "RemoveContainer" containerID="41caed95f9f668a055e34288ae91bcce5a6f3ea58f05250f45efb84f1f1c0fbf" Nov 22 11:06:07 crc kubenswrapper[4772]: I1122 11:06:07.462671 4772 scope.go:117] "RemoveContainer" containerID="c8156d4d90894d7e03b812adef952ac712e8c034874975758156306fbf6d2972" Nov 22 11:06:07 crc kubenswrapper[4772]: I1122 11:06:07.482936 4772 scope.go:117] "RemoveContainer" containerID="791a1dda016da276f7e60912d835e500d93685bd5cc31d54d2b396d39fcc8af1" Nov 22 11:06:07 crc kubenswrapper[4772]: I1122 11:06:07.498502 4772 scope.go:117] "RemoveContainer" containerID="cfbc7ad9e142e1e14a3a3e0b9cba18bd1b43f25e9b244879f3cd28edcebc8de2" Nov 22 11:06:07 crc kubenswrapper[4772]: I1122 11:06:07.525557 4772 scope.go:117] "RemoveContainer" containerID="22e2f7cca48f17c69e56813ca2b405c534cda48058fd9a577adcf8d1882eb7e1" Nov 22 11:06:07 crc kubenswrapper[4772]: I1122 11:06:07.572652 4772 scope.go:117] "RemoveContainer" containerID="f465d380c2327b6cd250787f627fb5fa4acf31137a261406640758672be2bd67" Nov 22 11:06:13 crc kubenswrapper[4772]: I1122 11:06:13.413842 4772 scope.go:117] "RemoveContainer" containerID="3976a411c521a8e3e125420aaff2223fb8a2c4167e8b50aefbc7378d6da33709" Nov 22 11:06:13 crc kubenswrapper[4772]: E1122 11:06:13.414641 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:06:28 crc kubenswrapper[4772]: I1122 11:06:28.413451 4772 scope.go:117] "RemoveContainer" containerID="3976a411c521a8e3e125420aaff2223fb8a2c4167e8b50aefbc7378d6da33709" Nov 22 11:06:28 crc kubenswrapper[4772]: E1122 11:06:28.414205 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:06:40 crc kubenswrapper[4772]: I1122 11:06:40.413909 4772 scope.go:117] "RemoveContainer" 
containerID="3976a411c521a8e3e125420aaff2223fb8a2c4167e8b50aefbc7378d6da33709" Nov 22 11:06:40 crc kubenswrapper[4772]: E1122 11:06:40.414782 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:06:53 crc kubenswrapper[4772]: I1122 11:06:53.414082 4772 scope.go:117] "RemoveContainer" containerID="3976a411c521a8e3e125420aaff2223fb8a2c4167e8b50aefbc7378d6da33709" Nov 22 11:06:53 crc kubenswrapper[4772]: E1122 11:06:53.414869 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:07:07 crc kubenswrapper[4772]: I1122 11:07:07.729823 4772 scope.go:117] "RemoveContainer" containerID="b5adee60fbbe04c2b0f6e677dd915852ef48874363c5ddda10829f388b38decb" Nov 22 11:07:07 crc kubenswrapper[4772]: I1122 11:07:07.774979 4772 scope.go:117] "RemoveContainer" containerID="5d736ec35ae6a0500e6d88c0ff90627bd4515397189cf9c8910f73ee956ed76f" Nov 22 11:07:07 crc kubenswrapper[4772]: I1122 11:07:07.795801 4772 scope.go:117] "RemoveContainer" containerID="c79cb770e1e50bba1ffd90d856560b25af0b738c945cfa34cbf164d07bd32f5a" Nov 22 11:07:07 crc kubenswrapper[4772]: I1122 11:07:07.858253 4772 scope.go:117] "RemoveContainer" containerID="a06360ee3022a654f156c3386f22cd5fd488251afc8543f8c37cbc65fc693984" Nov 22 11:07:07 crc kubenswrapper[4772]: I1122 11:07:07.889099 4772 scope.go:117] "RemoveContainer" containerID="89f3af719d3b34aa755006c4c157b86e9e231adc44922aa49a262366c3fbab3e" Nov 22 11:07:07 crc kubenswrapper[4772]: I1122 11:07:07.910899 4772 scope.go:117] "RemoveContainer" containerID="94c9532e47a3e8f2deba93d357f982767f3bc9fd612be2d3ed8cd1f182488992" Nov 22 11:07:07 crc kubenswrapper[4772]: I1122 11:07:07.933013 4772 scope.go:117] "RemoveContainer" containerID="dd542af28bce5c278e708a047b0757d9812c5e11e9dc0dff83889ad014c4b497" Nov 22 11:07:07 crc kubenswrapper[4772]: I1122 11:07:07.962336 4772 scope.go:117] "RemoveContainer" containerID="59510dd7b9579831eb08695d39ab17e9efd8ec1346988dbb2e6af8437f7ad097" Nov 22 11:07:07 crc kubenswrapper[4772]: I1122 11:07:07.984536 4772 scope.go:117] "RemoveContainer" containerID="9711b2f630abd82d5d414ec59de5c0a41437bd4931a17976d02655285045660b" Nov 22 11:07:08 crc kubenswrapper[4772]: I1122 11:07:08.013265 4772 scope.go:117] "RemoveContainer" containerID="500a054f9f59bed80e21a830df2e4586802b8b2159197cb99902d460e8a2cdb9" Nov 22 11:07:08 crc kubenswrapper[4772]: I1122 11:07:08.051370 4772 scope.go:117] "RemoveContainer" containerID="d2ff64d96dfba7abbcacbdddbc68b2ab55e205bcc9422fdf3d5388dd6cf5273f" Nov 22 11:07:08 crc kubenswrapper[4772]: I1122 11:07:08.076715 4772 scope.go:117] "RemoveContainer" containerID="4471a034f975c2eb8a95db8d1c456f43dd6aee03a3d67c268c8b29fe4e53ac0a" Nov 22 11:07:08 crc kubenswrapper[4772]: I1122 11:07:08.101469 4772 scope.go:117] "RemoveContainer" 
containerID="4f80aca9ab925ac5f7c357f391fb695a8032c9580d4de9e838c68a35fdefcdc3" Nov 22 11:07:08 crc kubenswrapper[4772]: I1122 11:07:08.150064 4772 scope.go:117] "RemoveContainer" containerID="a05e82ad97693943b66d110b04e16e64eff2aeb42ec422477aa310a4d5d06e23" Nov 22 11:07:08 crc kubenswrapper[4772]: I1122 11:07:08.169878 4772 scope.go:117] "RemoveContainer" containerID="508aa44be1af6ce7429f5cfe8151bfa33738cc58c627ec663c79b706c344ddb4" Nov 22 11:07:08 crc kubenswrapper[4772]: I1122 11:07:08.191466 4772 scope.go:117] "RemoveContainer" containerID="a687188010e7d1b6b6e71ce02eb4abc2bad75aaad585c817273d9a77d8fbf014" Nov 22 11:07:08 crc kubenswrapper[4772]: I1122 11:07:08.214988 4772 scope.go:117] "RemoveContainer" containerID="9210427ecc8309d2dac4e2a1e4343641effc83bde83c8deb3bc9b80d64ac72cf" Nov 22 11:07:08 crc kubenswrapper[4772]: I1122 11:07:08.246957 4772 scope.go:117] "RemoveContainer" containerID="f2df47654803d93eba038dcb4866e8ad0d2e7d308fb39560cb0091e112aadb72" Nov 22 11:07:08 crc kubenswrapper[4772]: I1122 11:07:08.267264 4772 scope.go:117] "RemoveContainer" containerID="88ecd0459f0ac9488f0cd3eb8c402462803c773cf6ef7940b9aa2db2abf09dea" Nov 22 11:07:08 crc kubenswrapper[4772]: I1122 11:07:08.283085 4772 scope.go:117] "RemoveContainer" containerID="dfd79733bb340ac1878b2e319236d06e3fb8878f7900376f4a7fb9aa84b8711a" Nov 22 11:07:08 crc kubenswrapper[4772]: I1122 11:07:08.413696 4772 scope.go:117] "RemoveContainer" containerID="3976a411c521a8e3e125420aaff2223fb8a2c4167e8b50aefbc7378d6da33709" Nov 22 11:07:08 crc kubenswrapper[4772]: E1122 11:07:08.413988 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:07:23 crc kubenswrapper[4772]: I1122 11:07:23.413395 4772 scope.go:117] "RemoveContainer" containerID="3976a411c521a8e3e125420aaff2223fb8a2c4167e8b50aefbc7378d6da33709" Nov 22 11:07:23 crc kubenswrapper[4772]: E1122 11:07:23.415642 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:07:38 crc kubenswrapper[4772]: I1122 11:07:38.415388 4772 scope.go:117] "RemoveContainer" containerID="3976a411c521a8e3e125420aaff2223fb8a2c4167e8b50aefbc7378d6da33709" Nov 22 11:07:38 crc kubenswrapper[4772]: E1122 11:07:38.416774 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:07:51 crc kubenswrapper[4772]: I1122 11:07:51.417623 4772 scope.go:117] "RemoveContainer" containerID="3976a411c521a8e3e125420aaff2223fb8a2c4167e8b50aefbc7378d6da33709" Nov 22 11:07:51 crc 
kubenswrapper[4772]: E1122 11:07:51.418413 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:08:06 crc kubenswrapper[4772]: I1122 11:08:06.413918 4772 scope.go:117] "RemoveContainer" containerID="3976a411c521a8e3e125420aaff2223fb8a2c4167e8b50aefbc7378d6da33709" Nov 22 11:08:06 crc kubenswrapper[4772]: E1122 11:08:06.415354 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:08:08 crc kubenswrapper[4772]: I1122 11:08:08.525714 4772 scope.go:117] "RemoveContainer" containerID="38b682e02083c87881ca0840c5c8512286c619c93d70f45cc0f97311ece2d602" Nov 22 11:08:08 crc kubenswrapper[4772]: I1122 11:08:08.547206 4772 scope.go:117] "RemoveContainer" containerID="1deddd5e209d6dd7109a121b569853e5e7e08e0d376b5e08b2b32dcb070598a2" Nov 22 11:08:08 crc kubenswrapper[4772]: I1122 11:08:08.581705 4772 scope.go:117] "RemoveContainer" containerID="06728879ed3c7fff6c20631b75c9e7056b952121959c9a5bff71c4d710f58f9f" Nov 22 11:08:08 crc kubenswrapper[4772]: I1122 11:08:08.615904 4772 scope.go:117] "RemoveContainer" containerID="0b9ac1c20deb88561ffa4d4aa22d3d7b705dfba4a955494934d74483e7d263d6" Nov 22 11:08:08 crc kubenswrapper[4772]: I1122 11:08:08.633765 4772 scope.go:117] "RemoveContainer" containerID="75123019921bf278dc20f04bfe0c16e7ca301815e19432fc55e95e02d9391c0e" Nov 22 11:08:08 crc kubenswrapper[4772]: I1122 11:08:08.649153 4772 scope.go:117] "RemoveContainer" containerID="08c0c6e64c972cfd07e310e4abfe3d9a7361c0e9c7848ec91a7d29025e8bfaf9" Nov 22 11:08:08 crc kubenswrapper[4772]: I1122 11:08:08.665175 4772 scope.go:117] "RemoveContainer" containerID="c427c59ce5190c28d69f76cce2242b1dd0afaeda292d697cff4fa07ba14a6523" Nov 22 11:08:08 crc kubenswrapper[4772]: I1122 11:08:08.691299 4772 scope.go:117] "RemoveContainer" containerID="8e13976df3c8001f16e78d82b5bbd4129ae0c1c72af18a09b2f2744b1feeb66a" Nov 22 11:08:18 crc kubenswrapper[4772]: I1122 11:08:18.414007 4772 scope.go:117] "RemoveContainer" containerID="3976a411c521a8e3e125420aaff2223fb8a2c4167e8b50aefbc7378d6da33709" Nov 22 11:08:18 crc kubenswrapper[4772]: E1122 11:08:18.414739 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:08:30 crc kubenswrapper[4772]: I1122 11:08:30.413926 4772 scope.go:117] "RemoveContainer" containerID="3976a411c521a8e3e125420aaff2223fb8a2c4167e8b50aefbc7378d6da33709" Nov 22 11:08:30 crc kubenswrapper[4772]: E1122 11:08:30.415208 4772 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:08:42 crc kubenswrapper[4772]: I1122 11:08:42.413240 4772 scope.go:117] "RemoveContainer" containerID="3976a411c521a8e3e125420aaff2223fb8a2c4167e8b50aefbc7378d6da33709" Nov 22 11:08:42 crc kubenswrapper[4772]: E1122 11:08:42.413957 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:08:56 crc kubenswrapper[4772]: I1122 11:08:56.413948 4772 scope.go:117] "RemoveContainer" containerID="3976a411c521a8e3e125420aaff2223fb8a2c4167e8b50aefbc7378d6da33709" Nov 22 11:08:56 crc kubenswrapper[4772]: E1122 11:08:56.414868 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:09:08 crc kubenswrapper[4772]: I1122 11:09:08.788628 4772 scope.go:117] "RemoveContainer" containerID="5e0e21415f001e21917dd3a32819ab9cb5db83f0c62abce63b6c361f7db09c0d" Nov 22 11:09:08 crc kubenswrapper[4772]: I1122 11:09:08.807857 4772 scope.go:117] "RemoveContainer" containerID="ab42698c086434daa53f114239b03f9f09d42a90f526e9da455ca5fa44319783" Nov 22 11:09:08 crc kubenswrapper[4772]: I1122 11:09:08.823884 4772 scope.go:117] "RemoveContainer" containerID="a1427fd2be53da6cc2337bfb8c045b8983f1ac992ce43349fdf6205ac77e793b" Nov 22 11:09:08 crc kubenswrapper[4772]: I1122 11:09:08.854659 4772 scope.go:117] "RemoveContainer" containerID="10ca89615b0daee19e2ff35e6822a5cb0d8fad7528fc2fae6ffe6155cd72db74" Nov 22 11:09:08 crc kubenswrapper[4772]: I1122 11:09:08.872318 4772 scope.go:117] "RemoveContainer" containerID="241a8eaefd8f667894261b22a62dc20eac70a2ffa0b3309654a9b9bcc88514de" Nov 22 11:09:08 crc kubenswrapper[4772]: I1122 11:09:08.890683 4772 scope.go:117] "RemoveContainer" containerID="87ddd492904d37b209c29a1ac6b15eb1b0ea478fe5009f7ccf2abbf8a98a52e8" Nov 22 11:09:08 crc kubenswrapper[4772]: I1122 11:09:08.912784 4772 scope.go:117] "RemoveContainer" containerID="eb806120517e41fc28276787644b8c800bdb795e19ae53859d7279e00db21cb7" Nov 22 11:09:08 crc kubenswrapper[4772]: I1122 11:09:08.928389 4772 scope.go:117] "RemoveContainer" containerID="214c2acb33ea5a782af6be55ddcf02954762b130146c3e714c36840852bfafb4" Nov 22 11:09:08 crc kubenswrapper[4772]: I1122 11:09:08.943746 4772 scope.go:117] "RemoveContainer" containerID="e5c751528fea2ac722ee321494f6ac8ae1afd4e1ad69103eb66eda03840cc558" Nov 22 11:09:08 crc kubenswrapper[4772]: I1122 11:09:08.958612 4772 scope.go:117] "RemoveContainer" containerID="f9a0eee7dffa91a6e45f6416ea3a94139da8a50ce7e232bc83c7eb4c3723250c" Nov 22 11:09:11 crc kubenswrapper[4772]: I1122 
11:09:11.418574 4772 scope.go:117] "RemoveContainer" containerID="3976a411c521a8e3e125420aaff2223fb8a2c4167e8b50aefbc7378d6da33709" Nov 22 11:09:11 crc kubenswrapper[4772]: E1122 11:09:11.418840 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:09:26 crc kubenswrapper[4772]: I1122 11:09:26.412887 4772 scope.go:117] "RemoveContainer" containerID="3976a411c521a8e3e125420aaff2223fb8a2c4167e8b50aefbc7378d6da33709" Nov 22 11:09:26 crc kubenswrapper[4772]: E1122 11:09:26.413602 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:09:41 crc kubenswrapper[4772]: I1122 11:09:41.418391 4772 scope.go:117] "RemoveContainer" containerID="3976a411c521a8e3e125420aaff2223fb8a2c4167e8b50aefbc7378d6da33709" Nov 22 11:09:41 crc kubenswrapper[4772]: E1122 11:09:41.419883 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:09:54 crc kubenswrapper[4772]: I1122 11:09:54.414000 4772 scope.go:117] "RemoveContainer" containerID="3976a411c521a8e3e125420aaff2223fb8a2c4167e8b50aefbc7378d6da33709" Nov 22 11:09:54 crc kubenswrapper[4772]: E1122 11:09:54.415127 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:10:08 crc kubenswrapper[4772]: I1122 11:10:08.413427 4772 scope.go:117] "RemoveContainer" containerID="3976a411c521a8e3e125420aaff2223fb8a2c4167e8b50aefbc7378d6da33709" Nov 22 11:10:08 crc kubenswrapper[4772]: I1122 11:10:08.682573 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" event={"ID":"2386c238-461f-4956-940f-ac3c26eb052e","Type":"ContainerStarted","Data":"5e4c663c90bf28427705b49fecfbb62fa2d6fccd4685ea4cb55fb6637ad86c41"} Nov 22 11:10:09 crc kubenswrapper[4772]: I1122 11:10:09.029893 4772 scope.go:117] "RemoveContainer" containerID="b77b969953fe9c8048b8ff277fe2329156f5363070399e9e09c0f7223bcb8d6e" Nov 22 11:10:09 crc kubenswrapper[4772]: I1122 11:10:09.068449 4772 scope.go:117] "RemoveContainer" containerID="9b4a372c6969edd5b4fcbcfc5857268afac7433a5a8b32510bbb740f15100717" Nov 22 11:10:09 crc 
kubenswrapper[4772]: I1122 11:10:09.085940 4772 scope.go:117] "RemoveContainer" containerID="e28b401e3c853589d7a264d1dc93faf87588bdab270334394035a96b16630d72" Nov 22 11:10:09 crc kubenswrapper[4772]: I1122 11:10:09.109653 4772 scope.go:117] "RemoveContainer" containerID="3d173607bb5dc318429a013351d1676c6d42fb4927a52f88588e3167331f4341" Nov 22 11:10:09 crc kubenswrapper[4772]: I1122 11:10:09.132374 4772 scope.go:117] "RemoveContainer" containerID="d2ca6179053090103083ab2df5c267a331e4c9cb421a6856a8ede33c2154df6c" Nov 22 11:10:09 crc kubenswrapper[4772]: I1122 11:10:09.154572 4772 scope.go:117] "RemoveContainer" containerID="fdf7bdb90ea53df33a248c12c99b072c760a9ac7d7af309e4e196c43accf6f02" Nov 22 11:10:09 crc kubenswrapper[4772]: I1122 11:10:09.179164 4772 scope.go:117] "RemoveContainer" containerID="13ecba734c345ef3d469fb45e441d0e233378d129f7ea9238f342cb8ddae536e" Nov 22 11:12:31 crc kubenswrapper[4772]: I1122 11:12:31.533258 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 11:12:31 crc kubenswrapper[4772]: I1122 11:12:31.534316 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 11:12:57 crc kubenswrapper[4772]: I1122 11:12:57.730178 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6x6r9"] Nov 22 11:12:57 crc kubenswrapper[4772]: E1122 11:12:57.730970 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6853c19a-18f1-46ff-9bf2-539a1ce389e5" containerName="extract-content" Nov 22 11:12:57 crc kubenswrapper[4772]: I1122 11:12:57.730983 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="6853c19a-18f1-46ff-9bf2-539a1ce389e5" containerName="extract-content" Nov 22 11:12:57 crc kubenswrapper[4772]: E1122 11:12:57.730996 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6853c19a-18f1-46ff-9bf2-539a1ce389e5" containerName="registry-server" Nov 22 11:12:57 crc kubenswrapper[4772]: I1122 11:12:57.731003 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="6853c19a-18f1-46ff-9bf2-539a1ce389e5" containerName="registry-server" Nov 22 11:12:57 crc kubenswrapper[4772]: E1122 11:12:57.731015 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6853c19a-18f1-46ff-9bf2-539a1ce389e5" containerName="extract-utilities" Nov 22 11:12:57 crc kubenswrapper[4772]: I1122 11:12:57.731022 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="6853c19a-18f1-46ff-9bf2-539a1ce389e5" containerName="extract-utilities" Nov 22 11:12:57 crc kubenswrapper[4772]: I1122 11:12:57.731458 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="6853c19a-18f1-46ff-9bf2-539a1ce389e5" containerName="registry-server" Nov 22 11:12:57 crc kubenswrapper[4772]: I1122 11:12:57.737457 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6x6r9" Nov 22 11:12:57 crc kubenswrapper[4772]: I1122 11:12:57.743907 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6x6r9"] Nov 22 11:12:57 crc kubenswrapper[4772]: I1122 11:12:57.859023 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aeb824a6-f5e7-4485-8509-c3f86a551276-catalog-content\") pod \"community-operators-6x6r9\" (UID: \"aeb824a6-f5e7-4485-8509-c3f86a551276\") " pod="openshift-marketplace/community-operators-6x6r9" Nov 22 11:12:57 crc kubenswrapper[4772]: I1122 11:12:57.859379 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aeb824a6-f5e7-4485-8509-c3f86a551276-utilities\") pod \"community-operators-6x6r9\" (UID: \"aeb824a6-f5e7-4485-8509-c3f86a551276\") " pod="openshift-marketplace/community-operators-6x6r9" Nov 22 11:12:57 crc kubenswrapper[4772]: I1122 11:12:57.859463 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wmzd\" (UniqueName: \"kubernetes.io/projected/aeb824a6-f5e7-4485-8509-c3f86a551276-kube-api-access-4wmzd\") pod \"community-operators-6x6r9\" (UID: \"aeb824a6-f5e7-4485-8509-c3f86a551276\") " pod="openshift-marketplace/community-operators-6x6r9" Nov 22 11:12:57 crc kubenswrapper[4772]: I1122 11:12:57.960500 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wmzd\" (UniqueName: \"kubernetes.io/projected/aeb824a6-f5e7-4485-8509-c3f86a551276-kube-api-access-4wmzd\") pod \"community-operators-6x6r9\" (UID: \"aeb824a6-f5e7-4485-8509-c3f86a551276\") " pod="openshift-marketplace/community-operators-6x6r9" Nov 22 11:12:57 crc kubenswrapper[4772]: I1122 11:12:57.960579 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aeb824a6-f5e7-4485-8509-c3f86a551276-catalog-content\") pod \"community-operators-6x6r9\" (UID: \"aeb824a6-f5e7-4485-8509-c3f86a551276\") " pod="openshift-marketplace/community-operators-6x6r9" Nov 22 11:12:57 crc kubenswrapper[4772]: I1122 11:12:57.960606 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aeb824a6-f5e7-4485-8509-c3f86a551276-utilities\") pod \"community-operators-6x6r9\" (UID: \"aeb824a6-f5e7-4485-8509-c3f86a551276\") " pod="openshift-marketplace/community-operators-6x6r9" Nov 22 11:12:57 crc kubenswrapper[4772]: I1122 11:12:57.961128 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aeb824a6-f5e7-4485-8509-c3f86a551276-utilities\") pod \"community-operators-6x6r9\" (UID: \"aeb824a6-f5e7-4485-8509-c3f86a551276\") " pod="openshift-marketplace/community-operators-6x6r9" Nov 22 11:12:57 crc kubenswrapper[4772]: I1122 11:12:57.961276 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aeb824a6-f5e7-4485-8509-c3f86a551276-catalog-content\") pod \"community-operators-6x6r9\" (UID: \"aeb824a6-f5e7-4485-8509-c3f86a551276\") " pod="openshift-marketplace/community-operators-6x6r9" Nov 22 11:12:57 crc kubenswrapper[4772]: I1122 11:12:57.980499 4772 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-4wmzd\" (UniqueName: \"kubernetes.io/projected/aeb824a6-f5e7-4485-8509-c3f86a551276-kube-api-access-4wmzd\") pod \"community-operators-6x6r9\" (UID: \"aeb824a6-f5e7-4485-8509-c3f86a551276\") " pod="openshift-marketplace/community-operators-6x6r9" Nov 22 11:12:58 crc kubenswrapper[4772]: I1122 11:12:58.069292 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6x6r9" Nov 22 11:12:58 crc kubenswrapper[4772]: I1122 11:12:58.569968 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6x6r9"] Nov 22 11:12:58 crc kubenswrapper[4772]: I1122 11:12:58.871660 4772 generic.go:334] "Generic (PLEG): container finished" podID="aeb824a6-f5e7-4485-8509-c3f86a551276" containerID="90536b8c35b79d99297bf34c903b71df0e8abf4db202c63225faaa58f3ab32c3" exitCode=0 Nov 22 11:12:58 crc kubenswrapper[4772]: I1122 11:12:58.871715 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6x6r9" event={"ID":"aeb824a6-f5e7-4485-8509-c3f86a551276","Type":"ContainerDied","Data":"90536b8c35b79d99297bf34c903b71df0e8abf4db202c63225faaa58f3ab32c3"} Nov 22 11:12:58 crc kubenswrapper[4772]: I1122 11:12:58.871764 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6x6r9" event={"ID":"aeb824a6-f5e7-4485-8509-c3f86a551276","Type":"ContainerStarted","Data":"1678e873fde71a0f0f9e01a166021b2421ef82f8fa95d82bc87a926cd7d77829"} Nov 22 11:12:58 crc kubenswrapper[4772]: I1122 11:12:58.873838 4772 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 11:12:59 crc kubenswrapper[4772]: I1122 11:12:59.886697 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6x6r9" event={"ID":"aeb824a6-f5e7-4485-8509-c3f86a551276","Type":"ContainerStarted","Data":"7e3c17fe2506ac421c640936a2b0a818b5a7f0883014942ab0483f74caedd64a"} Nov 22 11:13:00 crc kubenswrapper[4772]: I1122 11:13:00.896536 4772 generic.go:334] "Generic (PLEG): container finished" podID="aeb824a6-f5e7-4485-8509-c3f86a551276" containerID="7e3c17fe2506ac421c640936a2b0a818b5a7f0883014942ab0483f74caedd64a" exitCode=0 Nov 22 11:13:00 crc kubenswrapper[4772]: I1122 11:13:00.896637 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6x6r9" event={"ID":"aeb824a6-f5e7-4485-8509-c3f86a551276","Type":"ContainerDied","Data":"7e3c17fe2506ac421c640936a2b0a818b5a7f0883014942ab0483f74caedd64a"} Nov 22 11:13:01 crc kubenswrapper[4772]: I1122 11:13:01.533323 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 11:13:01 crc kubenswrapper[4772]: I1122 11:13:01.533990 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 11:13:01 crc kubenswrapper[4772]: I1122 11:13:01.905131 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-6x6r9" event={"ID":"aeb824a6-f5e7-4485-8509-c3f86a551276","Type":"ContainerStarted","Data":"627348a343890a5ca165aedb16e55069d9953ff6dcdae16e466a8e97c51b5d29"} Nov 22 11:13:01 crc kubenswrapper[4772]: I1122 11:13:01.931455 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6x6r9" podStartSLOduration=2.414618282 podStartE2EDuration="4.931434431s" podCreationTimestamp="2025-11-22 11:12:57 +0000 UTC" firstStartedPulling="2025-11-22 11:12:58.873589719 +0000 UTC m=+2099.113034213" lastFinishedPulling="2025-11-22 11:13:01.390405878 +0000 UTC m=+2101.629850362" observedRunningTime="2025-11-22 11:13:01.931091242 +0000 UTC m=+2102.170535746" watchObservedRunningTime="2025-11-22 11:13:01.931434431 +0000 UTC m=+2102.170878935" Nov 22 11:13:08 crc kubenswrapper[4772]: I1122 11:13:08.070210 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6x6r9" Nov 22 11:13:08 crc kubenswrapper[4772]: I1122 11:13:08.070850 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6x6r9" Nov 22 11:13:08 crc kubenswrapper[4772]: I1122 11:13:08.119034 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6x6r9" Nov 22 11:13:09 crc kubenswrapper[4772]: I1122 11:13:08.999959 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6x6r9" Nov 22 11:13:09 crc kubenswrapper[4772]: I1122 11:13:09.048238 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6x6r9"] Nov 22 11:13:10 crc kubenswrapper[4772]: I1122 11:13:10.969085 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6x6r9" podUID="aeb824a6-f5e7-4485-8509-c3f86a551276" containerName="registry-server" containerID="cri-o://627348a343890a5ca165aedb16e55069d9953ff6dcdae16e466a8e97c51b5d29" gracePeriod=2 Nov 22 11:13:11 crc kubenswrapper[4772]: I1122 11:13:11.827077 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6x6r9" Nov 22 11:13:11 crc kubenswrapper[4772]: I1122 11:13:11.952086 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aeb824a6-f5e7-4485-8509-c3f86a551276-utilities\") pod \"aeb824a6-f5e7-4485-8509-c3f86a551276\" (UID: \"aeb824a6-f5e7-4485-8509-c3f86a551276\") " Nov 22 11:13:11 crc kubenswrapper[4772]: I1122 11:13:11.952274 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4wmzd\" (UniqueName: \"kubernetes.io/projected/aeb824a6-f5e7-4485-8509-c3f86a551276-kube-api-access-4wmzd\") pod \"aeb824a6-f5e7-4485-8509-c3f86a551276\" (UID: \"aeb824a6-f5e7-4485-8509-c3f86a551276\") " Nov 22 11:13:11 crc kubenswrapper[4772]: I1122 11:13:11.952321 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aeb824a6-f5e7-4485-8509-c3f86a551276-catalog-content\") pod \"aeb824a6-f5e7-4485-8509-c3f86a551276\" (UID: \"aeb824a6-f5e7-4485-8509-c3f86a551276\") " Nov 22 11:13:11 crc kubenswrapper[4772]: I1122 11:13:11.953161 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aeb824a6-f5e7-4485-8509-c3f86a551276-utilities" (OuterVolumeSpecName: "utilities") pod "aeb824a6-f5e7-4485-8509-c3f86a551276" (UID: "aeb824a6-f5e7-4485-8509-c3f86a551276"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:13:11 crc kubenswrapper[4772]: I1122 11:13:11.957156 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aeb824a6-f5e7-4485-8509-c3f86a551276-kube-api-access-4wmzd" (OuterVolumeSpecName: "kube-api-access-4wmzd") pod "aeb824a6-f5e7-4485-8509-c3f86a551276" (UID: "aeb824a6-f5e7-4485-8509-c3f86a551276"). InnerVolumeSpecName "kube-api-access-4wmzd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:13:11 crc kubenswrapper[4772]: I1122 11:13:11.979384 4772 generic.go:334] "Generic (PLEG): container finished" podID="aeb824a6-f5e7-4485-8509-c3f86a551276" containerID="627348a343890a5ca165aedb16e55069d9953ff6dcdae16e466a8e97c51b5d29" exitCode=0 Nov 22 11:13:11 crc kubenswrapper[4772]: I1122 11:13:11.979461 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6x6r9" Nov 22 11:13:11 crc kubenswrapper[4772]: I1122 11:13:11.979484 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6x6r9" event={"ID":"aeb824a6-f5e7-4485-8509-c3f86a551276","Type":"ContainerDied","Data":"627348a343890a5ca165aedb16e55069d9953ff6dcdae16e466a8e97c51b5d29"} Nov 22 11:13:11 crc kubenswrapper[4772]: I1122 11:13:11.979558 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6x6r9" event={"ID":"aeb824a6-f5e7-4485-8509-c3f86a551276","Type":"ContainerDied","Data":"1678e873fde71a0f0f9e01a166021b2421ef82f8fa95d82bc87a926cd7d77829"} Nov 22 11:13:11 crc kubenswrapper[4772]: I1122 11:13:11.979585 4772 scope.go:117] "RemoveContainer" containerID="627348a343890a5ca165aedb16e55069d9953ff6dcdae16e466a8e97c51b5d29" Nov 22 11:13:11 crc kubenswrapper[4772]: I1122 11:13:11.997847 4772 scope.go:117] "RemoveContainer" containerID="7e3c17fe2506ac421c640936a2b0a818b5a7f0883014942ab0483f74caedd64a" Nov 22 11:13:12 crc kubenswrapper[4772]: I1122 11:13:12.004722 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aeb824a6-f5e7-4485-8509-c3f86a551276-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aeb824a6-f5e7-4485-8509-c3f86a551276" (UID: "aeb824a6-f5e7-4485-8509-c3f86a551276"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:13:12 crc kubenswrapper[4772]: I1122 11:13:12.013780 4772 scope.go:117] "RemoveContainer" containerID="90536b8c35b79d99297bf34c903b71df0e8abf4db202c63225faaa58f3ab32c3" Nov 22 11:13:12 crc kubenswrapper[4772]: I1122 11:13:12.044381 4772 scope.go:117] "RemoveContainer" containerID="627348a343890a5ca165aedb16e55069d9953ff6dcdae16e466a8e97c51b5d29" Nov 22 11:13:12 crc kubenswrapper[4772]: E1122 11:13:12.045204 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"627348a343890a5ca165aedb16e55069d9953ff6dcdae16e466a8e97c51b5d29\": container with ID starting with 627348a343890a5ca165aedb16e55069d9953ff6dcdae16e466a8e97c51b5d29 not found: ID does not exist" containerID="627348a343890a5ca165aedb16e55069d9953ff6dcdae16e466a8e97c51b5d29" Nov 22 11:13:12 crc kubenswrapper[4772]: I1122 11:13:12.045291 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"627348a343890a5ca165aedb16e55069d9953ff6dcdae16e466a8e97c51b5d29"} err="failed to get container status \"627348a343890a5ca165aedb16e55069d9953ff6dcdae16e466a8e97c51b5d29\": rpc error: code = NotFound desc = could not find container \"627348a343890a5ca165aedb16e55069d9953ff6dcdae16e466a8e97c51b5d29\": container with ID starting with 627348a343890a5ca165aedb16e55069d9953ff6dcdae16e466a8e97c51b5d29 not found: ID does not exist" Nov 22 11:13:12 crc kubenswrapper[4772]: I1122 11:13:12.045348 4772 scope.go:117] "RemoveContainer" containerID="7e3c17fe2506ac421c640936a2b0a818b5a7f0883014942ab0483f74caedd64a" Nov 22 11:13:12 crc kubenswrapper[4772]: E1122 11:13:12.045932 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e3c17fe2506ac421c640936a2b0a818b5a7f0883014942ab0483f74caedd64a\": container with ID starting with 7e3c17fe2506ac421c640936a2b0a818b5a7f0883014942ab0483f74caedd64a not found: ID does not exist" 
containerID="7e3c17fe2506ac421c640936a2b0a818b5a7f0883014942ab0483f74caedd64a" Nov 22 11:13:12 crc kubenswrapper[4772]: I1122 11:13:12.045976 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e3c17fe2506ac421c640936a2b0a818b5a7f0883014942ab0483f74caedd64a"} err="failed to get container status \"7e3c17fe2506ac421c640936a2b0a818b5a7f0883014942ab0483f74caedd64a\": rpc error: code = NotFound desc = could not find container \"7e3c17fe2506ac421c640936a2b0a818b5a7f0883014942ab0483f74caedd64a\": container with ID starting with 7e3c17fe2506ac421c640936a2b0a818b5a7f0883014942ab0483f74caedd64a not found: ID does not exist" Nov 22 11:13:12 crc kubenswrapper[4772]: I1122 11:13:12.046002 4772 scope.go:117] "RemoveContainer" containerID="90536b8c35b79d99297bf34c903b71df0e8abf4db202c63225faaa58f3ab32c3" Nov 22 11:13:12 crc kubenswrapper[4772]: E1122 11:13:12.046445 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90536b8c35b79d99297bf34c903b71df0e8abf4db202c63225faaa58f3ab32c3\": container with ID starting with 90536b8c35b79d99297bf34c903b71df0e8abf4db202c63225faaa58f3ab32c3 not found: ID does not exist" containerID="90536b8c35b79d99297bf34c903b71df0e8abf4db202c63225faaa58f3ab32c3" Nov 22 11:13:12 crc kubenswrapper[4772]: I1122 11:13:12.046556 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90536b8c35b79d99297bf34c903b71df0e8abf4db202c63225faaa58f3ab32c3"} err="failed to get container status \"90536b8c35b79d99297bf34c903b71df0e8abf4db202c63225faaa58f3ab32c3\": rpc error: code = NotFound desc = could not find container \"90536b8c35b79d99297bf34c903b71df0e8abf4db202c63225faaa58f3ab32c3\": container with ID starting with 90536b8c35b79d99297bf34c903b71df0e8abf4db202c63225faaa58f3ab32c3 not found: ID does not exist" Nov 22 11:13:12 crc kubenswrapper[4772]: I1122 11:13:12.054330 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aeb824a6-f5e7-4485-8509-c3f86a551276-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 11:13:12 crc kubenswrapper[4772]: I1122 11:13:12.054360 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4wmzd\" (UniqueName: \"kubernetes.io/projected/aeb824a6-f5e7-4485-8509-c3f86a551276-kube-api-access-4wmzd\") on node \"crc\" DevicePath \"\"" Nov 22 11:13:12 crc kubenswrapper[4772]: I1122 11:13:12.054373 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aeb824a6-f5e7-4485-8509-c3f86a551276-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 11:13:12 crc kubenswrapper[4772]: I1122 11:13:12.311958 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6x6r9"] Nov 22 11:13:12 crc kubenswrapper[4772]: I1122 11:13:12.318565 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6x6r9"] Nov 22 11:13:13 crc kubenswrapper[4772]: I1122 11:13:13.425392 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aeb824a6-f5e7-4485-8509-c3f86a551276" path="/var/lib/kubelet/pods/aeb824a6-f5e7-4485-8509-c3f86a551276/volumes" Nov 22 11:13:31 crc kubenswrapper[4772]: I1122 11:13:31.533242 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 11:13:31 crc kubenswrapper[4772]: I1122 11:13:31.533787 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 11:13:31 crc kubenswrapper[4772]: I1122 11:13:31.534357 4772 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" Nov 22 11:13:31 crc kubenswrapper[4772]: I1122 11:13:31.535024 4772 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5e4c663c90bf28427705b49fecfbb62fa2d6fccd4685ea4cb55fb6637ad86c41"} pod="openshift-machine-config-operator/machine-config-daemon-wwshd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 11:13:31 crc kubenswrapper[4772]: I1122 11:13:31.535112 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" containerID="cri-o://5e4c663c90bf28427705b49fecfbb62fa2d6fccd4685ea4cb55fb6637ad86c41" gracePeriod=600 Nov 22 11:13:32 crc kubenswrapper[4772]: I1122 11:13:32.120765 4772 generic.go:334] "Generic (PLEG): container finished" podID="2386c238-461f-4956-940f-ac3c26eb052e" containerID="5e4c663c90bf28427705b49fecfbb62fa2d6fccd4685ea4cb55fb6637ad86c41" exitCode=0 Nov 22 11:13:32 crc kubenswrapper[4772]: I1122 11:13:32.120841 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" event={"ID":"2386c238-461f-4956-940f-ac3c26eb052e","Type":"ContainerDied","Data":"5e4c663c90bf28427705b49fecfbb62fa2d6fccd4685ea4cb55fb6637ad86c41"} Nov 22 11:13:32 crc kubenswrapper[4772]: I1122 11:13:32.121606 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" event={"ID":"2386c238-461f-4956-940f-ac3c26eb052e","Type":"ContainerStarted","Data":"0bc0f3ff7ead827f9165f4665458e8a286ac7a9b3bae4573b079c545ed50c71e"} Nov 22 11:13:32 crc kubenswrapper[4772]: I1122 11:13:32.121658 4772 scope.go:117] "RemoveContainer" containerID="3976a411c521a8e3e125420aaff2223fb8a2c4167e8b50aefbc7378d6da33709" Nov 22 11:13:32 crc kubenswrapper[4772]: I1122 11:13:32.664592 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lw4hq"] Nov 22 11:13:32 crc kubenswrapper[4772]: E1122 11:13:32.665673 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aeb824a6-f5e7-4485-8509-c3f86a551276" containerName="extract-utilities" Nov 22 11:13:32 crc kubenswrapper[4772]: I1122 11:13:32.665699 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="aeb824a6-f5e7-4485-8509-c3f86a551276" containerName="extract-utilities" Nov 22 11:13:32 crc kubenswrapper[4772]: E1122 11:13:32.665735 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aeb824a6-f5e7-4485-8509-c3f86a551276" containerName="registry-server" Nov 22 11:13:32 crc kubenswrapper[4772]: I1122 11:13:32.665744 4772 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="aeb824a6-f5e7-4485-8509-c3f86a551276" containerName="registry-server" Nov 22 11:13:32 crc kubenswrapper[4772]: E1122 11:13:32.665766 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aeb824a6-f5e7-4485-8509-c3f86a551276" containerName="extract-content" Nov 22 11:13:32 crc kubenswrapper[4772]: I1122 11:13:32.665773 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="aeb824a6-f5e7-4485-8509-c3f86a551276" containerName="extract-content" Nov 22 11:13:32 crc kubenswrapper[4772]: I1122 11:13:32.668026 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="aeb824a6-f5e7-4485-8509-c3f86a551276" containerName="registry-server" Nov 22 11:13:32 crc kubenswrapper[4772]: I1122 11:13:32.669571 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lw4hq" Nov 22 11:13:32 crc kubenswrapper[4772]: I1122 11:13:32.670239 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lw4hq"] Nov 22 11:13:32 crc kubenswrapper[4772]: I1122 11:13:32.852979 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxz78\" (UniqueName: \"kubernetes.io/projected/d3d2c628-04a5-4623-b386-4a4f4b750dca-kube-api-access-sxz78\") pod \"redhat-operators-lw4hq\" (UID: \"d3d2c628-04a5-4623-b386-4a4f4b750dca\") " pod="openshift-marketplace/redhat-operators-lw4hq" Nov 22 11:13:32 crc kubenswrapper[4772]: I1122 11:13:32.853297 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3d2c628-04a5-4623-b386-4a4f4b750dca-catalog-content\") pod \"redhat-operators-lw4hq\" (UID: \"d3d2c628-04a5-4623-b386-4a4f4b750dca\") " pod="openshift-marketplace/redhat-operators-lw4hq" Nov 22 11:13:32 crc kubenswrapper[4772]: I1122 11:13:32.853417 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3d2c628-04a5-4623-b386-4a4f4b750dca-utilities\") pod \"redhat-operators-lw4hq\" (UID: \"d3d2c628-04a5-4623-b386-4a4f4b750dca\") " pod="openshift-marketplace/redhat-operators-lw4hq" Nov 22 11:13:32 crc kubenswrapper[4772]: I1122 11:13:32.954452 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxz78\" (UniqueName: \"kubernetes.io/projected/d3d2c628-04a5-4623-b386-4a4f4b750dca-kube-api-access-sxz78\") pod \"redhat-operators-lw4hq\" (UID: \"d3d2c628-04a5-4623-b386-4a4f4b750dca\") " pod="openshift-marketplace/redhat-operators-lw4hq" Nov 22 11:13:32 crc kubenswrapper[4772]: I1122 11:13:32.954795 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3d2c628-04a5-4623-b386-4a4f4b750dca-catalog-content\") pod \"redhat-operators-lw4hq\" (UID: \"d3d2c628-04a5-4623-b386-4a4f4b750dca\") " pod="openshift-marketplace/redhat-operators-lw4hq" Nov 22 11:13:32 crc kubenswrapper[4772]: I1122 11:13:32.954942 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3d2c628-04a5-4623-b386-4a4f4b750dca-utilities\") pod \"redhat-operators-lw4hq\" (UID: \"d3d2c628-04a5-4623-b386-4a4f4b750dca\") " pod="openshift-marketplace/redhat-operators-lw4hq" Nov 22 11:13:32 crc kubenswrapper[4772]: I1122 11:13:32.955517 4772 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3d2c628-04a5-4623-b386-4a4f4b750dca-utilities\") pod \"redhat-operators-lw4hq\" (UID: \"d3d2c628-04a5-4623-b386-4a4f4b750dca\") " pod="openshift-marketplace/redhat-operators-lw4hq" Nov 22 11:13:32 crc kubenswrapper[4772]: I1122 11:13:32.955602 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3d2c628-04a5-4623-b386-4a4f4b750dca-catalog-content\") pod \"redhat-operators-lw4hq\" (UID: \"d3d2c628-04a5-4623-b386-4a4f4b750dca\") " pod="openshift-marketplace/redhat-operators-lw4hq" Nov 22 11:13:32 crc kubenswrapper[4772]: I1122 11:13:32.975204 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxz78\" (UniqueName: \"kubernetes.io/projected/d3d2c628-04a5-4623-b386-4a4f4b750dca-kube-api-access-sxz78\") pod \"redhat-operators-lw4hq\" (UID: \"d3d2c628-04a5-4623-b386-4a4f4b750dca\") " pod="openshift-marketplace/redhat-operators-lw4hq" Nov 22 11:13:32 crc kubenswrapper[4772]: I1122 11:13:32.985847 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lw4hq" Nov 22 11:13:33 crc kubenswrapper[4772]: I1122 11:13:33.398420 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lw4hq"] Nov 22 11:13:34 crc kubenswrapper[4772]: I1122 11:13:34.158119 4772 generic.go:334] "Generic (PLEG): container finished" podID="d3d2c628-04a5-4623-b386-4a4f4b750dca" containerID="fcc8492f81bff94942e2f86e8eecfe7f90e5d350aca42092f7879d01615c007f" exitCode=0 Nov 22 11:13:34 crc kubenswrapper[4772]: I1122 11:13:34.158202 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lw4hq" event={"ID":"d3d2c628-04a5-4623-b386-4a4f4b750dca","Type":"ContainerDied","Data":"fcc8492f81bff94942e2f86e8eecfe7f90e5d350aca42092f7879d01615c007f"} Nov 22 11:13:34 crc kubenswrapper[4772]: I1122 11:13:34.158430 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lw4hq" event={"ID":"d3d2c628-04a5-4623-b386-4a4f4b750dca","Type":"ContainerStarted","Data":"4c3ac91296558684406fc33c6f199e99de877590af4963ff7efc693c43b059b7"} Nov 22 11:13:35 crc kubenswrapper[4772]: I1122 11:13:35.166858 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lw4hq" event={"ID":"d3d2c628-04a5-4623-b386-4a4f4b750dca","Type":"ContainerStarted","Data":"71d33320c8609c9f1976e6e9efb77cee14ccbe00be22acb67a8dbe4518c8c5f5"} Nov 22 11:13:36 crc kubenswrapper[4772]: I1122 11:13:36.175860 4772 generic.go:334] "Generic (PLEG): container finished" podID="d3d2c628-04a5-4623-b386-4a4f4b750dca" containerID="71d33320c8609c9f1976e6e9efb77cee14ccbe00be22acb67a8dbe4518c8c5f5" exitCode=0 Nov 22 11:13:36 crc kubenswrapper[4772]: I1122 11:13:36.175938 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lw4hq" event={"ID":"d3d2c628-04a5-4623-b386-4a4f4b750dca","Type":"ContainerDied","Data":"71d33320c8609c9f1976e6e9efb77cee14ccbe00be22acb67a8dbe4518c8c5f5"} Nov 22 11:13:37 crc kubenswrapper[4772]: I1122 11:13:37.185754 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lw4hq" event={"ID":"d3d2c628-04a5-4623-b386-4a4f4b750dca","Type":"ContainerStarted","Data":"dc82fc364a50a59c69af4bece182bb12a70e148697ccd0653e1507eaa9a8725c"} Nov 22 11:13:37 crc 
kubenswrapper[4772]: I1122 11:13:37.203300 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lw4hq" podStartSLOduration=2.62462644 podStartE2EDuration="5.203279763s" podCreationTimestamp="2025-11-22 11:13:32 +0000 UTC" firstStartedPulling="2025-11-22 11:13:34.161247238 +0000 UTC m=+2134.400691732" lastFinishedPulling="2025-11-22 11:13:36.739900541 +0000 UTC m=+2136.979345055" observedRunningTime="2025-11-22 11:13:37.200993056 +0000 UTC m=+2137.440437550" watchObservedRunningTime="2025-11-22 11:13:37.203279763 +0000 UTC m=+2137.442724257" Nov 22 11:13:42 crc kubenswrapper[4772]: I1122 11:13:42.986120 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lw4hq" Nov 22 11:13:42 crc kubenswrapper[4772]: I1122 11:13:42.986618 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lw4hq" Nov 22 11:13:43 crc kubenswrapper[4772]: I1122 11:13:43.024061 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lw4hq" Nov 22 11:13:43 crc kubenswrapper[4772]: I1122 11:13:43.261310 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lw4hq" Nov 22 11:13:43 crc kubenswrapper[4772]: I1122 11:13:43.305705 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lw4hq"] Nov 22 11:13:45 crc kubenswrapper[4772]: I1122 11:13:45.240610 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lw4hq" podUID="d3d2c628-04a5-4623-b386-4a4f4b750dca" containerName="registry-server" containerID="cri-o://dc82fc364a50a59c69af4bece182bb12a70e148697ccd0653e1507eaa9a8725c" gracePeriod=2 Nov 22 11:13:48 crc kubenswrapper[4772]: I1122 11:13:48.262699 4772 generic.go:334] "Generic (PLEG): container finished" podID="d3d2c628-04a5-4623-b386-4a4f4b750dca" containerID="dc82fc364a50a59c69af4bece182bb12a70e148697ccd0653e1507eaa9a8725c" exitCode=0 Nov 22 11:13:48 crc kubenswrapper[4772]: I1122 11:13:48.262781 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lw4hq" event={"ID":"d3d2c628-04a5-4623-b386-4a4f4b750dca","Type":"ContainerDied","Data":"dc82fc364a50a59c69af4bece182bb12a70e148697ccd0653e1507eaa9a8725c"} Nov 22 11:13:48 crc kubenswrapper[4772]: I1122 11:13:48.573200 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lw4hq" Nov 22 11:13:48 crc kubenswrapper[4772]: I1122 11:13:48.595327 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxz78\" (UniqueName: \"kubernetes.io/projected/d3d2c628-04a5-4623-b386-4a4f4b750dca-kube-api-access-sxz78\") pod \"d3d2c628-04a5-4623-b386-4a4f4b750dca\" (UID: \"d3d2c628-04a5-4623-b386-4a4f4b750dca\") " Nov 22 11:13:48 crc kubenswrapper[4772]: I1122 11:13:48.595391 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3d2c628-04a5-4623-b386-4a4f4b750dca-utilities\") pod \"d3d2c628-04a5-4623-b386-4a4f4b750dca\" (UID: \"d3d2c628-04a5-4623-b386-4a4f4b750dca\") " Nov 22 11:13:48 crc kubenswrapper[4772]: I1122 11:13:48.595418 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3d2c628-04a5-4623-b386-4a4f4b750dca-catalog-content\") pod \"d3d2c628-04a5-4623-b386-4a4f4b750dca\" (UID: \"d3d2c628-04a5-4623-b386-4a4f4b750dca\") " Nov 22 11:13:48 crc kubenswrapper[4772]: I1122 11:13:48.597523 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3d2c628-04a5-4623-b386-4a4f4b750dca-utilities" (OuterVolumeSpecName: "utilities") pod "d3d2c628-04a5-4623-b386-4a4f4b750dca" (UID: "d3d2c628-04a5-4623-b386-4a4f4b750dca"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:13:48 crc kubenswrapper[4772]: I1122 11:13:48.601724 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3d2c628-04a5-4623-b386-4a4f4b750dca-kube-api-access-sxz78" (OuterVolumeSpecName: "kube-api-access-sxz78") pod "d3d2c628-04a5-4623-b386-4a4f4b750dca" (UID: "d3d2c628-04a5-4623-b386-4a4f4b750dca"). InnerVolumeSpecName "kube-api-access-sxz78". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:13:48 crc kubenswrapper[4772]: I1122 11:13:48.692764 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3d2c628-04a5-4623-b386-4a4f4b750dca-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d3d2c628-04a5-4623-b386-4a4f4b750dca" (UID: "d3d2c628-04a5-4623-b386-4a4f4b750dca"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:13:48 crc kubenswrapper[4772]: I1122 11:13:48.696964 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxz78\" (UniqueName: \"kubernetes.io/projected/d3d2c628-04a5-4623-b386-4a4f4b750dca-kube-api-access-sxz78\") on node \"crc\" DevicePath \"\"" Nov 22 11:13:48 crc kubenswrapper[4772]: I1122 11:13:48.697009 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3d2c628-04a5-4623-b386-4a4f4b750dca-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 11:13:48 crc kubenswrapper[4772]: I1122 11:13:48.697023 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3d2c628-04a5-4623-b386-4a4f4b750dca-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 11:13:49 crc kubenswrapper[4772]: I1122 11:13:49.273794 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lw4hq" event={"ID":"d3d2c628-04a5-4623-b386-4a4f4b750dca","Type":"ContainerDied","Data":"4c3ac91296558684406fc33c6f199e99de877590af4963ff7efc693c43b059b7"} Nov 22 11:13:49 crc kubenswrapper[4772]: I1122 11:13:49.274145 4772 scope.go:117] "RemoveContainer" containerID="dc82fc364a50a59c69af4bece182bb12a70e148697ccd0653e1507eaa9a8725c" Nov 22 11:13:49 crc kubenswrapper[4772]: I1122 11:13:49.273872 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lw4hq" Nov 22 11:13:49 crc kubenswrapper[4772]: I1122 11:13:49.291428 4772 scope.go:117] "RemoveContainer" containerID="71d33320c8609c9f1976e6e9efb77cee14ccbe00be22acb67a8dbe4518c8c5f5" Nov 22 11:13:49 crc kubenswrapper[4772]: I1122 11:13:49.307523 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lw4hq"] Nov 22 11:13:49 crc kubenswrapper[4772]: I1122 11:13:49.313003 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lw4hq"] Nov 22 11:13:49 crc kubenswrapper[4772]: I1122 11:13:49.327544 4772 scope.go:117] "RemoveContainer" containerID="fcc8492f81bff94942e2f86e8eecfe7f90e5d350aca42092f7879d01615c007f" Nov 22 11:13:49 crc kubenswrapper[4772]: I1122 11:13:49.421763 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3d2c628-04a5-4623-b386-4a4f4b750dca" path="/var/lib/kubelet/pods/d3d2c628-04a5-4623-b386-4a4f4b750dca/volumes" Nov 22 11:14:42 crc kubenswrapper[4772]: I1122 11:14:42.173311 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-b69qp"] Nov 22 11:14:42 crc kubenswrapper[4772]: E1122 11:14:42.174299 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3d2c628-04a5-4623-b386-4a4f4b750dca" containerName="extract-utilities" Nov 22 11:14:42 crc kubenswrapper[4772]: I1122 11:14:42.174312 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3d2c628-04a5-4623-b386-4a4f4b750dca" containerName="extract-utilities" Nov 22 11:14:42 crc kubenswrapper[4772]: E1122 11:14:42.174324 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3d2c628-04a5-4623-b386-4a4f4b750dca" containerName="registry-server" Nov 22 11:14:42 crc kubenswrapper[4772]: I1122 11:14:42.174330 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3d2c628-04a5-4623-b386-4a4f4b750dca" containerName="registry-server" Nov 22 11:14:42 crc kubenswrapper[4772]: E1122 11:14:42.174352 4772 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3d2c628-04a5-4623-b386-4a4f4b750dca" containerName="extract-content" Nov 22 11:14:42 crc kubenswrapper[4772]: I1122 11:14:42.174358 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3d2c628-04a5-4623-b386-4a4f4b750dca" containerName="extract-content" Nov 22 11:14:42 crc kubenswrapper[4772]: I1122 11:14:42.174500 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3d2c628-04a5-4623-b386-4a4f4b750dca" containerName="registry-server" Nov 22 11:14:42 crc kubenswrapper[4772]: I1122 11:14:42.175508 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b69qp" Nov 22 11:14:42 crc kubenswrapper[4772]: I1122 11:14:42.188991 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b87e72c-5b30-4769-8b84-1e5876dbf75b-utilities\") pod \"certified-operators-b69qp\" (UID: \"0b87e72c-5b30-4769-8b84-1e5876dbf75b\") " pod="openshift-marketplace/certified-operators-b69qp" Nov 22 11:14:42 crc kubenswrapper[4772]: I1122 11:14:42.189281 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b87e72c-5b30-4769-8b84-1e5876dbf75b-catalog-content\") pod \"certified-operators-b69qp\" (UID: \"0b87e72c-5b30-4769-8b84-1e5876dbf75b\") " pod="openshift-marketplace/certified-operators-b69qp" Nov 22 11:14:42 crc kubenswrapper[4772]: I1122 11:14:42.189329 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crbzj\" (UniqueName: \"kubernetes.io/projected/0b87e72c-5b30-4769-8b84-1e5876dbf75b-kube-api-access-crbzj\") pod \"certified-operators-b69qp\" (UID: \"0b87e72c-5b30-4769-8b84-1e5876dbf75b\") " pod="openshift-marketplace/certified-operators-b69qp" Nov 22 11:14:42 crc kubenswrapper[4772]: I1122 11:14:42.194077 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b69qp"] Nov 22 11:14:42 crc kubenswrapper[4772]: I1122 11:14:42.290921 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b87e72c-5b30-4769-8b84-1e5876dbf75b-catalog-content\") pod \"certified-operators-b69qp\" (UID: \"0b87e72c-5b30-4769-8b84-1e5876dbf75b\") " pod="openshift-marketplace/certified-operators-b69qp" Nov 22 11:14:42 crc kubenswrapper[4772]: I1122 11:14:42.290965 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crbzj\" (UniqueName: \"kubernetes.io/projected/0b87e72c-5b30-4769-8b84-1e5876dbf75b-kube-api-access-crbzj\") pod \"certified-operators-b69qp\" (UID: \"0b87e72c-5b30-4769-8b84-1e5876dbf75b\") " pod="openshift-marketplace/certified-operators-b69qp" Nov 22 11:14:42 crc kubenswrapper[4772]: I1122 11:14:42.291058 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b87e72c-5b30-4769-8b84-1e5876dbf75b-utilities\") pod \"certified-operators-b69qp\" (UID: \"0b87e72c-5b30-4769-8b84-1e5876dbf75b\") " pod="openshift-marketplace/certified-operators-b69qp" Nov 22 11:14:42 crc kubenswrapper[4772]: I1122 11:14:42.291650 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/0b87e72c-5b30-4769-8b84-1e5876dbf75b-catalog-content\") pod \"certified-operators-b69qp\" (UID: \"0b87e72c-5b30-4769-8b84-1e5876dbf75b\") " pod="openshift-marketplace/certified-operators-b69qp" Nov 22 11:14:42 crc kubenswrapper[4772]: I1122 11:14:42.291662 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b87e72c-5b30-4769-8b84-1e5876dbf75b-utilities\") pod \"certified-operators-b69qp\" (UID: \"0b87e72c-5b30-4769-8b84-1e5876dbf75b\") " pod="openshift-marketplace/certified-operators-b69qp" Nov 22 11:14:42 crc kubenswrapper[4772]: I1122 11:14:42.311868 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crbzj\" (UniqueName: \"kubernetes.io/projected/0b87e72c-5b30-4769-8b84-1e5876dbf75b-kube-api-access-crbzj\") pod \"certified-operators-b69qp\" (UID: \"0b87e72c-5b30-4769-8b84-1e5876dbf75b\") " pod="openshift-marketplace/certified-operators-b69qp" Nov 22 11:14:42 crc kubenswrapper[4772]: I1122 11:14:42.512037 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b69qp" Nov 22 11:14:42 crc kubenswrapper[4772]: I1122 11:14:42.783460 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b69qp"] Nov 22 11:14:43 crc kubenswrapper[4772]: I1122 11:14:43.646891 4772 generic.go:334] "Generic (PLEG): container finished" podID="0b87e72c-5b30-4769-8b84-1e5876dbf75b" containerID="27d2ab6b61d7096649545f2f455f95bb2ff08f405701fae84b54d709bbdb2177" exitCode=0 Nov 22 11:14:43 crc kubenswrapper[4772]: I1122 11:14:43.647064 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b69qp" event={"ID":"0b87e72c-5b30-4769-8b84-1e5876dbf75b","Type":"ContainerDied","Data":"27d2ab6b61d7096649545f2f455f95bb2ff08f405701fae84b54d709bbdb2177"} Nov 22 11:14:43 crc kubenswrapper[4772]: I1122 11:14:43.647229 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b69qp" event={"ID":"0b87e72c-5b30-4769-8b84-1e5876dbf75b","Type":"ContainerStarted","Data":"07027bdfc310659a4b917adb5bfd103afdfc99d257f7e893a346c49916a3069b"} Nov 22 11:14:44 crc kubenswrapper[4772]: I1122 11:14:44.658184 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b69qp" event={"ID":"0b87e72c-5b30-4769-8b84-1e5876dbf75b","Type":"ContainerStarted","Data":"d63c043b1c6a4d33346f2af535b43cddaf92f0d6969ea5558910201f4511ff6c"} Nov 22 11:14:45 crc kubenswrapper[4772]: I1122 11:14:45.665831 4772 generic.go:334] "Generic (PLEG): container finished" podID="0b87e72c-5b30-4769-8b84-1e5876dbf75b" containerID="d63c043b1c6a4d33346f2af535b43cddaf92f0d6969ea5558910201f4511ff6c" exitCode=0 Nov 22 11:14:45 crc kubenswrapper[4772]: I1122 11:14:45.665883 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b69qp" event={"ID":"0b87e72c-5b30-4769-8b84-1e5876dbf75b","Type":"ContainerDied","Data":"d63c043b1c6a4d33346f2af535b43cddaf92f0d6969ea5558910201f4511ff6c"} Nov 22 11:14:46 crc kubenswrapper[4772]: I1122 11:14:46.675908 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b69qp" event={"ID":"0b87e72c-5b30-4769-8b84-1e5876dbf75b","Type":"ContainerStarted","Data":"140ced5f08d1397b4ce33ac4bab9b79c3ef63753bdb250d38e9d9ea88bfc894e"} Nov 22 11:14:52 crc kubenswrapper[4772]: I1122 11:14:52.512844 
4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-b69qp" Nov 22 11:14:52 crc kubenswrapper[4772]: I1122 11:14:52.513512 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-b69qp" Nov 22 11:14:52 crc kubenswrapper[4772]: I1122 11:14:52.556252 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-b69qp" Nov 22 11:14:52 crc kubenswrapper[4772]: I1122 11:14:52.575964 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-b69qp" podStartSLOduration=8.09695915 podStartE2EDuration="10.575945766s" podCreationTimestamp="2025-11-22 11:14:42 +0000 UTC" firstStartedPulling="2025-11-22 11:14:43.653126335 +0000 UTC m=+2203.892570829" lastFinishedPulling="2025-11-22 11:14:46.132112931 +0000 UTC m=+2206.371557445" observedRunningTime="2025-11-22 11:14:46.697427368 +0000 UTC m=+2206.936871892" watchObservedRunningTime="2025-11-22 11:14:52.575945766 +0000 UTC m=+2212.815390260" Nov 22 11:14:52 crc kubenswrapper[4772]: I1122 11:14:52.764958 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-b69qp" Nov 22 11:14:52 crc kubenswrapper[4772]: I1122 11:14:52.808989 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b69qp"] Nov 22 11:14:54 crc kubenswrapper[4772]: I1122 11:14:54.732650 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-b69qp" podUID="0b87e72c-5b30-4769-8b84-1e5876dbf75b" containerName="registry-server" containerID="cri-o://140ced5f08d1397b4ce33ac4bab9b79c3ef63753bdb250d38e9d9ea88bfc894e" gracePeriod=2 Nov 22 11:14:55 crc kubenswrapper[4772]: I1122 11:14:55.657873 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b69qp" Nov 22 11:14:55 crc kubenswrapper[4772]: I1122 11:14:55.694120 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b87e72c-5b30-4769-8b84-1e5876dbf75b-catalog-content\") pod \"0b87e72c-5b30-4769-8b84-1e5876dbf75b\" (UID: \"0b87e72c-5b30-4769-8b84-1e5876dbf75b\") " Nov 22 11:14:55 crc kubenswrapper[4772]: I1122 11:14:55.694270 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b87e72c-5b30-4769-8b84-1e5876dbf75b-utilities\") pod \"0b87e72c-5b30-4769-8b84-1e5876dbf75b\" (UID: \"0b87e72c-5b30-4769-8b84-1e5876dbf75b\") " Nov 22 11:14:55 crc kubenswrapper[4772]: I1122 11:14:55.694499 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crbzj\" (UniqueName: \"kubernetes.io/projected/0b87e72c-5b30-4769-8b84-1e5876dbf75b-kube-api-access-crbzj\") pod \"0b87e72c-5b30-4769-8b84-1e5876dbf75b\" (UID: \"0b87e72c-5b30-4769-8b84-1e5876dbf75b\") " Nov 22 11:14:55 crc kubenswrapper[4772]: I1122 11:14:55.695594 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b87e72c-5b30-4769-8b84-1e5876dbf75b-utilities" (OuterVolumeSpecName: "utilities") pod "0b87e72c-5b30-4769-8b84-1e5876dbf75b" (UID: "0b87e72c-5b30-4769-8b84-1e5876dbf75b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:14:55 crc kubenswrapper[4772]: I1122 11:14:55.696116 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b87e72c-5b30-4769-8b84-1e5876dbf75b-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 11:14:55 crc kubenswrapper[4772]: I1122 11:14:55.699900 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b87e72c-5b30-4769-8b84-1e5876dbf75b-kube-api-access-crbzj" (OuterVolumeSpecName: "kube-api-access-crbzj") pod "0b87e72c-5b30-4769-8b84-1e5876dbf75b" (UID: "0b87e72c-5b30-4769-8b84-1e5876dbf75b"). InnerVolumeSpecName "kube-api-access-crbzj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:14:55 crc kubenswrapper[4772]: I1122 11:14:55.741250 4772 generic.go:334] "Generic (PLEG): container finished" podID="0b87e72c-5b30-4769-8b84-1e5876dbf75b" containerID="140ced5f08d1397b4ce33ac4bab9b79c3ef63753bdb250d38e9d9ea88bfc894e" exitCode=0 Nov 22 11:14:55 crc kubenswrapper[4772]: I1122 11:14:55.741286 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b69qp" Nov 22 11:14:55 crc kubenswrapper[4772]: I1122 11:14:55.741296 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b69qp" event={"ID":"0b87e72c-5b30-4769-8b84-1e5876dbf75b","Type":"ContainerDied","Data":"140ced5f08d1397b4ce33ac4bab9b79c3ef63753bdb250d38e9d9ea88bfc894e"} Nov 22 11:14:55 crc kubenswrapper[4772]: I1122 11:14:55.741327 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b69qp" event={"ID":"0b87e72c-5b30-4769-8b84-1e5876dbf75b","Type":"ContainerDied","Data":"07027bdfc310659a4b917adb5bfd103afdfc99d257f7e893a346c49916a3069b"} Nov 22 11:14:55 crc kubenswrapper[4772]: I1122 11:14:55.741345 4772 scope.go:117] "RemoveContainer" containerID="140ced5f08d1397b4ce33ac4bab9b79c3ef63753bdb250d38e9d9ea88bfc894e" Nov 22 11:14:55 crc kubenswrapper[4772]: I1122 11:14:55.760707 4772 scope.go:117] "RemoveContainer" containerID="d63c043b1c6a4d33346f2af535b43cddaf92f0d6969ea5558910201f4511ff6c" Nov 22 11:14:55 crc kubenswrapper[4772]: I1122 11:14:55.776590 4772 scope.go:117] "RemoveContainer" containerID="27d2ab6b61d7096649545f2f455f95bb2ff08f405701fae84b54d709bbdb2177" Nov 22 11:14:55 crc kubenswrapper[4772]: I1122 11:14:55.797680 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crbzj\" (UniqueName: \"kubernetes.io/projected/0b87e72c-5b30-4769-8b84-1e5876dbf75b-kube-api-access-crbzj\") on node \"crc\" DevicePath \"\"" Nov 22 11:14:55 crc kubenswrapper[4772]: I1122 11:14:55.801124 4772 scope.go:117] "RemoveContainer" containerID="140ced5f08d1397b4ce33ac4bab9b79c3ef63753bdb250d38e9d9ea88bfc894e" Nov 22 11:14:55 crc kubenswrapper[4772]: E1122 11:14:55.801583 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"140ced5f08d1397b4ce33ac4bab9b79c3ef63753bdb250d38e9d9ea88bfc894e\": container with ID starting with 140ced5f08d1397b4ce33ac4bab9b79c3ef63753bdb250d38e9d9ea88bfc894e not found: ID does not exist" containerID="140ced5f08d1397b4ce33ac4bab9b79c3ef63753bdb250d38e9d9ea88bfc894e" Nov 22 11:14:55 crc kubenswrapper[4772]: I1122 11:14:55.801631 4772 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"140ced5f08d1397b4ce33ac4bab9b79c3ef63753bdb250d38e9d9ea88bfc894e"} err="failed to get container status \"140ced5f08d1397b4ce33ac4bab9b79c3ef63753bdb250d38e9d9ea88bfc894e\": rpc error: code = NotFound desc = could not find container \"140ced5f08d1397b4ce33ac4bab9b79c3ef63753bdb250d38e9d9ea88bfc894e\": container with ID starting with 140ced5f08d1397b4ce33ac4bab9b79c3ef63753bdb250d38e9d9ea88bfc894e not found: ID does not exist" Nov 22 11:14:55 crc kubenswrapper[4772]: I1122 11:14:55.801659 4772 scope.go:117] "RemoveContainer" containerID="d63c043b1c6a4d33346f2af535b43cddaf92f0d6969ea5558910201f4511ff6c" Nov 22 11:14:55 crc kubenswrapper[4772]: E1122 11:14:55.802089 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d63c043b1c6a4d33346f2af535b43cddaf92f0d6969ea5558910201f4511ff6c\": container with ID starting with d63c043b1c6a4d33346f2af535b43cddaf92f0d6969ea5558910201f4511ff6c not found: ID does not exist" containerID="d63c043b1c6a4d33346f2af535b43cddaf92f0d6969ea5558910201f4511ff6c" Nov 22 11:14:55 crc kubenswrapper[4772]: I1122 11:14:55.802124 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d63c043b1c6a4d33346f2af535b43cddaf92f0d6969ea5558910201f4511ff6c"} err="failed to get container status \"d63c043b1c6a4d33346f2af535b43cddaf92f0d6969ea5558910201f4511ff6c\": rpc error: code = NotFound desc = could not find container \"d63c043b1c6a4d33346f2af535b43cddaf92f0d6969ea5558910201f4511ff6c\": container with ID starting with d63c043b1c6a4d33346f2af535b43cddaf92f0d6969ea5558910201f4511ff6c not found: ID does not exist" Nov 22 11:14:55 crc kubenswrapper[4772]: I1122 11:14:55.802167 4772 scope.go:117] "RemoveContainer" containerID="27d2ab6b61d7096649545f2f455f95bb2ff08f405701fae84b54d709bbdb2177" Nov 22 11:14:55 crc kubenswrapper[4772]: E1122 11:14:55.802441 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27d2ab6b61d7096649545f2f455f95bb2ff08f405701fae84b54d709bbdb2177\": container with ID starting with 27d2ab6b61d7096649545f2f455f95bb2ff08f405701fae84b54d709bbdb2177 not found: ID does not exist" containerID="27d2ab6b61d7096649545f2f455f95bb2ff08f405701fae84b54d709bbdb2177" Nov 22 11:14:55 crc kubenswrapper[4772]: I1122 11:14:55.802469 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27d2ab6b61d7096649545f2f455f95bb2ff08f405701fae84b54d709bbdb2177"} err="failed to get container status \"27d2ab6b61d7096649545f2f455f95bb2ff08f405701fae84b54d709bbdb2177\": rpc error: code = NotFound desc = could not find container \"27d2ab6b61d7096649545f2f455f95bb2ff08f405701fae84b54d709bbdb2177\": container with ID starting with 27d2ab6b61d7096649545f2f455f95bb2ff08f405701fae84b54d709bbdb2177 not found: ID does not exist" Nov 22 11:14:56 crc kubenswrapper[4772]: I1122 11:14:56.684654 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b87e72c-5b30-4769-8b84-1e5876dbf75b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0b87e72c-5b30-4769-8b84-1e5876dbf75b" (UID: "0b87e72c-5b30-4769-8b84-1e5876dbf75b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:14:56 crc kubenswrapper[4772]: I1122 11:14:56.710562 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b87e72c-5b30-4769-8b84-1e5876dbf75b-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 11:14:56 crc kubenswrapper[4772]: I1122 11:14:56.972189 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b69qp"] Nov 22 11:14:56 crc kubenswrapper[4772]: I1122 11:14:56.976694 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-b69qp"] Nov 22 11:14:57 crc kubenswrapper[4772]: I1122 11:14:57.426640 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b87e72c-5b30-4769-8b84-1e5876dbf75b" path="/var/lib/kubelet/pods/0b87e72c-5b30-4769-8b84-1e5876dbf75b/volumes" Nov 22 11:15:00 crc kubenswrapper[4772]: I1122 11:15:00.138068 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396835-n9b47"] Nov 22 11:15:00 crc kubenswrapper[4772]: E1122 11:15:00.139919 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b87e72c-5b30-4769-8b84-1e5876dbf75b" containerName="registry-server" Nov 22 11:15:00 crc kubenswrapper[4772]: I1122 11:15:00.140028 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b87e72c-5b30-4769-8b84-1e5876dbf75b" containerName="registry-server" Nov 22 11:15:00 crc kubenswrapper[4772]: E1122 11:15:00.140165 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b87e72c-5b30-4769-8b84-1e5876dbf75b" containerName="extract-utilities" Nov 22 11:15:00 crc kubenswrapper[4772]: I1122 11:15:00.140240 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b87e72c-5b30-4769-8b84-1e5876dbf75b" containerName="extract-utilities" Nov 22 11:15:00 crc kubenswrapper[4772]: E1122 11:15:00.140324 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b87e72c-5b30-4769-8b84-1e5876dbf75b" containerName="extract-content" Nov 22 11:15:00 crc kubenswrapper[4772]: I1122 11:15:00.140395 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b87e72c-5b30-4769-8b84-1e5876dbf75b" containerName="extract-content" Nov 22 11:15:00 crc kubenswrapper[4772]: I1122 11:15:00.140631 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b87e72c-5b30-4769-8b84-1e5876dbf75b" containerName="registry-server" Nov 22 11:15:00 crc kubenswrapper[4772]: I1122 11:15:00.141502 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396835-n9b47" Nov 22 11:15:00 crc kubenswrapper[4772]: I1122 11:15:00.145458 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 22 11:15:00 crc kubenswrapper[4772]: I1122 11:15:00.146469 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 22 11:15:00 crc kubenswrapper[4772]: I1122 11:15:00.148770 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396835-n9b47"] Nov 22 11:15:00 crc kubenswrapper[4772]: I1122 11:15:00.154361 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac351ae8-8ede-44d3-9390-1ebadf4b42f7-config-volume\") pod \"collect-profiles-29396835-n9b47\" (UID: \"ac351ae8-8ede-44d3-9390-1ebadf4b42f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396835-n9b47" Nov 22 11:15:00 crc kubenswrapper[4772]: I1122 11:15:00.154406 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ac351ae8-8ede-44d3-9390-1ebadf4b42f7-secret-volume\") pod \"collect-profiles-29396835-n9b47\" (UID: \"ac351ae8-8ede-44d3-9390-1ebadf4b42f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396835-n9b47" Nov 22 11:15:00 crc kubenswrapper[4772]: I1122 11:15:00.154502 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gjvw\" (UniqueName: \"kubernetes.io/projected/ac351ae8-8ede-44d3-9390-1ebadf4b42f7-kube-api-access-5gjvw\") pod \"collect-profiles-29396835-n9b47\" (UID: \"ac351ae8-8ede-44d3-9390-1ebadf4b42f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396835-n9b47" Nov 22 11:15:00 crc kubenswrapper[4772]: I1122 11:15:00.255369 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gjvw\" (UniqueName: \"kubernetes.io/projected/ac351ae8-8ede-44d3-9390-1ebadf4b42f7-kube-api-access-5gjvw\") pod \"collect-profiles-29396835-n9b47\" (UID: \"ac351ae8-8ede-44d3-9390-1ebadf4b42f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396835-n9b47" Nov 22 11:15:00 crc kubenswrapper[4772]: I1122 11:15:00.255624 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac351ae8-8ede-44d3-9390-1ebadf4b42f7-config-volume\") pod \"collect-profiles-29396835-n9b47\" (UID: \"ac351ae8-8ede-44d3-9390-1ebadf4b42f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396835-n9b47" Nov 22 11:15:00 crc kubenswrapper[4772]: I1122 11:15:00.255725 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ac351ae8-8ede-44d3-9390-1ebadf4b42f7-secret-volume\") pod \"collect-profiles-29396835-n9b47\" (UID: \"ac351ae8-8ede-44d3-9390-1ebadf4b42f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396835-n9b47" Nov 22 11:15:00 crc kubenswrapper[4772]: I1122 11:15:00.256656 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac351ae8-8ede-44d3-9390-1ebadf4b42f7-config-volume\") pod 
\"collect-profiles-29396835-n9b47\" (UID: \"ac351ae8-8ede-44d3-9390-1ebadf4b42f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396835-n9b47" Nov 22 11:15:00 crc kubenswrapper[4772]: I1122 11:15:00.261083 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ac351ae8-8ede-44d3-9390-1ebadf4b42f7-secret-volume\") pod \"collect-profiles-29396835-n9b47\" (UID: \"ac351ae8-8ede-44d3-9390-1ebadf4b42f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396835-n9b47" Nov 22 11:15:00 crc kubenswrapper[4772]: I1122 11:15:00.270614 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gjvw\" (UniqueName: \"kubernetes.io/projected/ac351ae8-8ede-44d3-9390-1ebadf4b42f7-kube-api-access-5gjvw\") pod \"collect-profiles-29396835-n9b47\" (UID: \"ac351ae8-8ede-44d3-9390-1ebadf4b42f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396835-n9b47" Nov 22 11:15:00 crc kubenswrapper[4772]: I1122 11:15:00.467372 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396835-n9b47" Nov 22 11:15:00 crc kubenswrapper[4772]: I1122 11:15:00.863845 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396835-n9b47"] Nov 22 11:15:01 crc kubenswrapper[4772]: I1122 11:15:01.786242 4772 generic.go:334] "Generic (PLEG): container finished" podID="ac351ae8-8ede-44d3-9390-1ebadf4b42f7" containerID="b48d820e88511b4cd4f0f93f619a2497fcda6e5e1eca24fe089a20cd441c80f8" exitCode=0 Nov 22 11:15:01 crc kubenswrapper[4772]: I1122 11:15:01.786298 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396835-n9b47" event={"ID":"ac351ae8-8ede-44d3-9390-1ebadf4b42f7","Type":"ContainerDied","Data":"b48d820e88511b4cd4f0f93f619a2497fcda6e5e1eca24fe089a20cd441c80f8"} Nov 22 11:15:01 crc kubenswrapper[4772]: I1122 11:15:01.786337 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396835-n9b47" event={"ID":"ac351ae8-8ede-44d3-9390-1ebadf4b42f7","Type":"ContainerStarted","Data":"7c536db14014e939bf948031c2d4f61904c02611cf8438a228ca05287da63728"} Nov 22 11:15:03 crc kubenswrapper[4772]: I1122 11:15:03.049039 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396835-n9b47" Nov 22 11:15:03 crc kubenswrapper[4772]: I1122 11:15:03.099759 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gjvw\" (UniqueName: \"kubernetes.io/projected/ac351ae8-8ede-44d3-9390-1ebadf4b42f7-kube-api-access-5gjvw\") pod \"ac351ae8-8ede-44d3-9390-1ebadf4b42f7\" (UID: \"ac351ae8-8ede-44d3-9390-1ebadf4b42f7\") " Nov 22 11:15:03 crc kubenswrapper[4772]: I1122 11:15:03.099890 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ac351ae8-8ede-44d3-9390-1ebadf4b42f7-secret-volume\") pod \"ac351ae8-8ede-44d3-9390-1ebadf4b42f7\" (UID: \"ac351ae8-8ede-44d3-9390-1ebadf4b42f7\") " Nov 22 11:15:03 crc kubenswrapper[4772]: I1122 11:15:03.099945 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac351ae8-8ede-44d3-9390-1ebadf4b42f7-config-volume\") pod \"ac351ae8-8ede-44d3-9390-1ebadf4b42f7\" (UID: \"ac351ae8-8ede-44d3-9390-1ebadf4b42f7\") " Nov 22 11:15:03 crc kubenswrapper[4772]: I1122 11:15:03.100535 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac351ae8-8ede-44d3-9390-1ebadf4b42f7-config-volume" (OuterVolumeSpecName: "config-volume") pod "ac351ae8-8ede-44d3-9390-1ebadf4b42f7" (UID: "ac351ae8-8ede-44d3-9390-1ebadf4b42f7"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 11:15:03 crc kubenswrapper[4772]: I1122 11:15:03.105478 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac351ae8-8ede-44d3-9390-1ebadf4b42f7-kube-api-access-5gjvw" (OuterVolumeSpecName: "kube-api-access-5gjvw") pod "ac351ae8-8ede-44d3-9390-1ebadf4b42f7" (UID: "ac351ae8-8ede-44d3-9390-1ebadf4b42f7"). InnerVolumeSpecName "kube-api-access-5gjvw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:15:03 crc kubenswrapper[4772]: I1122 11:15:03.105782 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac351ae8-8ede-44d3-9390-1ebadf4b42f7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ac351ae8-8ede-44d3-9390-1ebadf4b42f7" (UID: "ac351ae8-8ede-44d3-9390-1ebadf4b42f7"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:15:03 crc kubenswrapper[4772]: I1122 11:15:03.201503 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gjvw\" (UniqueName: \"kubernetes.io/projected/ac351ae8-8ede-44d3-9390-1ebadf4b42f7-kube-api-access-5gjvw\") on node \"crc\" DevicePath \"\"" Nov 22 11:15:03 crc kubenswrapper[4772]: I1122 11:15:03.201546 4772 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ac351ae8-8ede-44d3-9390-1ebadf4b42f7-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 22 11:15:03 crc kubenswrapper[4772]: I1122 11:15:03.201560 4772 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac351ae8-8ede-44d3-9390-1ebadf4b42f7-config-volume\") on node \"crc\" DevicePath \"\"" Nov 22 11:15:03 crc kubenswrapper[4772]: I1122 11:15:03.800988 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396835-n9b47" event={"ID":"ac351ae8-8ede-44d3-9390-1ebadf4b42f7","Type":"ContainerDied","Data":"7c536db14014e939bf948031c2d4f61904c02611cf8438a228ca05287da63728"} Nov 22 11:15:03 crc kubenswrapper[4772]: I1122 11:15:03.801027 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c536db14014e939bf948031c2d4f61904c02611cf8438a228ca05287da63728" Nov 22 11:15:03 crc kubenswrapper[4772]: I1122 11:15:03.801042 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396835-n9b47" Nov 22 11:15:04 crc kubenswrapper[4772]: I1122 11:15:04.119839 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396790-lvhxw"] Nov 22 11:15:04 crc kubenswrapper[4772]: I1122 11:15:04.126973 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396790-lvhxw"] Nov 22 11:15:05 crc kubenswrapper[4772]: I1122 11:15:05.421961 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df5917cd-29a5-4e07-b030-24456d6b0da6" path="/var/lib/kubelet/pods/df5917cd-29a5-4e07-b030-24456d6b0da6/volumes" Nov 22 11:15:09 crc kubenswrapper[4772]: I1122 11:15:09.407332 4772 scope.go:117] "RemoveContainer" containerID="e9b2f22fc821099bb5a943fb9f80c140dab566532049aa206589d320342d7298" Nov 22 11:15:12 crc kubenswrapper[4772]: I1122 11:15:12.448858 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-74bkr"] Nov 22 11:15:12 crc kubenswrapper[4772]: E1122 11:15:12.450313 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac351ae8-8ede-44d3-9390-1ebadf4b42f7" containerName="collect-profiles" Nov 22 11:15:12 crc kubenswrapper[4772]: I1122 11:15:12.450330 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac351ae8-8ede-44d3-9390-1ebadf4b42f7" containerName="collect-profiles" Nov 22 11:15:12 crc kubenswrapper[4772]: I1122 11:15:12.450517 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac351ae8-8ede-44d3-9390-1ebadf4b42f7" containerName="collect-profiles" Nov 22 11:15:12 crc kubenswrapper[4772]: I1122 11:15:12.451611 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-74bkr" Nov 22 11:15:12 crc kubenswrapper[4772]: I1122 11:15:12.465442 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-74bkr"] Nov 22 11:15:12 crc kubenswrapper[4772]: I1122 11:15:12.536803 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6230f32f-af32-4e96-a2a3-9e9a2d0dc66b-utilities\") pod \"redhat-marketplace-74bkr\" (UID: \"6230f32f-af32-4e96-a2a3-9e9a2d0dc66b\") " pod="openshift-marketplace/redhat-marketplace-74bkr" Nov 22 11:15:12 crc kubenswrapper[4772]: I1122 11:15:12.536873 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6230f32f-af32-4e96-a2a3-9e9a2d0dc66b-catalog-content\") pod \"redhat-marketplace-74bkr\" (UID: \"6230f32f-af32-4e96-a2a3-9e9a2d0dc66b\") " pod="openshift-marketplace/redhat-marketplace-74bkr" Nov 22 11:15:12 crc kubenswrapper[4772]: I1122 11:15:12.536931 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjtrf\" (UniqueName: \"kubernetes.io/projected/6230f32f-af32-4e96-a2a3-9e9a2d0dc66b-kube-api-access-wjtrf\") pod \"redhat-marketplace-74bkr\" (UID: \"6230f32f-af32-4e96-a2a3-9e9a2d0dc66b\") " pod="openshift-marketplace/redhat-marketplace-74bkr" Nov 22 11:15:12 crc kubenswrapper[4772]: I1122 11:15:12.639956 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6230f32f-af32-4e96-a2a3-9e9a2d0dc66b-utilities\") pod \"redhat-marketplace-74bkr\" (UID: \"6230f32f-af32-4e96-a2a3-9e9a2d0dc66b\") " pod="openshift-marketplace/redhat-marketplace-74bkr" Nov 22 11:15:12 crc kubenswrapper[4772]: I1122 11:15:12.640058 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6230f32f-af32-4e96-a2a3-9e9a2d0dc66b-catalog-content\") pod \"redhat-marketplace-74bkr\" (UID: \"6230f32f-af32-4e96-a2a3-9e9a2d0dc66b\") " pod="openshift-marketplace/redhat-marketplace-74bkr" Nov 22 11:15:12 crc kubenswrapper[4772]: I1122 11:15:12.640106 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjtrf\" (UniqueName: \"kubernetes.io/projected/6230f32f-af32-4e96-a2a3-9e9a2d0dc66b-kube-api-access-wjtrf\") pod \"redhat-marketplace-74bkr\" (UID: \"6230f32f-af32-4e96-a2a3-9e9a2d0dc66b\") " pod="openshift-marketplace/redhat-marketplace-74bkr" Nov 22 11:15:12 crc kubenswrapper[4772]: I1122 11:15:12.641016 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6230f32f-af32-4e96-a2a3-9e9a2d0dc66b-utilities\") pod \"redhat-marketplace-74bkr\" (UID: \"6230f32f-af32-4e96-a2a3-9e9a2d0dc66b\") " pod="openshift-marketplace/redhat-marketplace-74bkr" Nov 22 11:15:12 crc kubenswrapper[4772]: I1122 11:15:12.641204 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6230f32f-af32-4e96-a2a3-9e9a2d0dc66b-catalog-content\") pod \"redhat-marketplace-74bkr\" (UID: \"6230f32f-af32-4e96-a2a3-9e9a2d0dc66b\") " pod="openshift-marketplace/redhat-marketplace-74bkr" Nov 22 11:15:12 crc kubenswrapper[4772]: I1122 11:15:12.664040 4772 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-wjtrf\" (UniqueName: \"kubernetes.io/projected/6230f32f-af32-4e96-a2a3-9e9a2d0dc66b-kube-api-access-wjtrf\") pod \"redhat-marketplace-74bkr\" (UID: \"6230f32f-af32-4e96-a2a3-9e9a2d0dc66b\") " pod="openshift-marketplace/redhat-marketplace-74bkr" Nov 22 11:15:12 crc kubenswrapper[4772]: I1122 11:15:12.784382 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-74bkr" Nov 22 11:15:13 crc kubenswrapper[4772]: I1122 11:15:13.211728 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-74bkr"] Nov 22 11:15:13 crc kubenswrapper[4772]: I1122 11:15:13.867823 4772 generic.go:334] "Generic (PLEG): container finished" podID="6230f32f-af32-4e96-a2a3-9e9a2d0dc66b" containerID="4ef9b31952fc052e5df9b16dfb403823c066167b982512d978b14096746d3f1f" exitCode=0 Nov 22 11:15:13 crc kubenswrapper[4772]: I1122 11:15:13.867862 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-74bkr" event={"ID":"6230f32f-af32-4e96-a2a3-9e9a2d0dc66b","Type":"ContainerDied","Data":"4ef9b31952fc052e5df9b16dfb403823c066167b982512d978b14096746d3f1f"} Nov 22 11:15:13 crc kubenswrapper[4772]: I1122 11:15:13.867890 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-74bkr" event={"ID":"6230f32f-af32-4e96-a2a3-9e9a2d0dc66b","Type":"ContainerStarted","Data":"47729e1aa195c1c5ee68b25ecfee93d5878b30dd9b3d6c8caf6ffe90dd211724"} Nov 22 11:15:14 crc kubenswrapper[4772]: I1122 11:15:14.876535 4772 generic.go:334] "Generic (PLEG): container finished" podID="6230f32f-af32-4e96-a2a3-9e9a2d0dc66b" containerID="ba93bc34548b2f49216da729abc90a14a6658fa6b2697674a0b47455229737c9" exitCode=0 Nov 22 11:15:14 crc kubenswrapper[4772]: I1122 11:15:14.876619 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-74bkr" event={"ID":"6230f32f-af32-4e96-a2a3-9e9a2d0dc66b","Type":"ContainerDied","Data":"ba93bc34548b2f49216da729abc90a14a6658fa6b2697674a0b47455229737c9"} Nov 22 11:15:16 crc kubenswrapper[4772]: I1122 11:15:16.894317 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-74bkr" event={"ID":"6230f32f-af32-4e96-a2a3-9e9a2d0dc66b","Type":"ContainerStarted","Data":"7d90a0b0991418156ae9f1905a65da4ac3cb625fbb708b744cbbf9b9426ddc94"} Nov 22 11:15:16 crc kubenswrapper[4772]: I1122 11:15:16.920328 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-74bkr" podStartSLOduration=3.14111379 podStartE2EDuration="4.920289736s" podCreationTimestamp="2025-11-22 11:15:12 +0000 UTC" firstStartedPulling="2025-11-22 11:15:13.869616873 +0000 UTC m=+2234.109061367" lastFinishedPulling="2025-11-22 11:15:15.648792819 +0000 UTC m=+2235.888237313" observedRunningTime="2025-11-22 11:15:16.915856475 +0000 UTC m=+2237.155300969" watchObservedRunningTime="2025-11-22 11:15:16.920289736 +0000 UTC m=+2237.159734260" Nov 22 11:15:22 crc kubenswrapper[4772]: I1122 11:15:22.784805 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-74bkr" Nov 22 11:15:22 crc kubenswrapper[4772]: I1122 11:15:22.785240 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-74bkr" Nov 22 11:15:22 crc kubenswrapper[4772]: I1122 11:15:22.828010 4772 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-74bkr" Nov 22 11:15:22 crc kubenswrapper[4772]: I1122 11:15:22.978178 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-74bkr" Nov 22 11:15:23 crc kubenswrapper[4772]: I1122 11:15:23.063952 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-74bkr"] Nov 22 11:15:24 crc kubenswrapper[4772]: I1122 11:15:24.944957 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-74bkr" podUID="6230f32f-af32-4e96-a2a3-9e9a2d0dc66b" containerName="registry-server" containerID="cri-o://7d90a0b0991418156ae9f1905a65da4ac3cb625fbb708b744cbbf9b9426ddc94" gracePeriod=2 Nov 22 11:15:25 crc kubenswrapper[4772]: I1122 11:15:25.308704 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-74bkr" Nov 22 11:15:25 crc kubenswrapper[4772]: I1122 11:15:25.420574 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjtrf\" (UniqueName: \"kubernetes.io/projected/6230f32f-af32-4e96-a2a3-9e9a2d0dc66b-kube-api-access-wjtrf\") pod \"6230f32f-af32-4e96-a2a3-9e9a2d0dc66b\" (UID: \"6230f32f-af32-4e96-a2a3-9e9a2d0dc66b\") " Nov 22 11:15:25 crc kubenswrapper[4772]: I1122 11:15:25.420745 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6230f32f-af32-4e96-a2a3-9e9a2d0dc66b-utilities\") pod \"6230f32f-af32-4e96-a2a3-9e9a2d0dc66b\" (UID: \"6230f32f-af32-4e96-a2a3-9e9a2d0dc66b\") " Nov 22 11:15:25 crc kubenswrapper[4772]: I1122 11:15:25.420797 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6230f32f-af32-4e96-a2a3-9e9a2d0dc66b-catalog-content\") pod \"6230f32f-af32-4e96-a2a3-9e9a2d0dc66b\" (UID: \"6230f32f-af32-4e96-a2a3-9e9a2d0dc66b\") " Nov 22 11:15:25 crc kubenswrapper[4772]: I1122 11:15:25.422758 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6230f32f-af32-4e96-a2a3-9e9a2d0dc66b-utilities" (OuterVolumeSpecName: "utilities") pod "6230f32f-af32-4e96-a2a3-9e9a2d0dc66b" (UID: "6230f32f-af32-4e96-a2a3-9e9a2d0dc66b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:15:25 crc kubenswrapper[4772]: I1122 11:15:25.430832 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6230f32f-af32-4e96-a2a3-9e9a2d0dc66b-kube-api-access-wjtrf" (OuterVolumeSpecName: "kube-api-access-wjtrf") pod "6230f32f-af32-4e96-a2a3-9e9a2d0dc66b" (UID: "6230f32f-af32-4e96-a2a3-9e9a2d0dc66b"). InnerVolumeSpecName "kube-api-access-wjtrf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:15:25 crc kubenswrapper[4772]: I1122 11:15:25.444477 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6230f32f-af32-4e96-a2a3-9e9a2d0dc66b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6230f32f-af32-4e96-a2a3-9e9a2d0dc66b" (UID: "6230f32f-af32-4e96-a2a3-9e9a2d0dc66b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:15:25 crc kubenswrapper[4772]: I1122 11:15:25.522857 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6230f32f-af32-4e96-a2a3-9e9a2d0dc66b-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 11:15:25 crc kubenswrapper[4772]: I1122 11:15:25.522894 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6230f32f-af32-4e96-a2a3-9e9a2d0dc66b-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 11:15:25 crc kubenswrapper[4772]: I1122 11:15:25.522908 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjtrf\" (UniqueName: \"kubernetes.io/projected/6230f32f-af32-4e96-a2a3-9e9a2d0dc66b-kube-api-access-wjtrf\") on node \"crc\" DevicePath \"\"" Nov 22 11:15:25 crc kubenswrapper[4772]: I1122 11:15:25.952712 4772 generic.go:334] "Generic (PLEG): container finished" podID="6230f32f-af32-4e96-a2a3-9e9a2d0dc66b" containerID="7d90a0b0991418156ae9f1905a65da4ac3cb625fbb708b744cbbf9b9426ddc94" exitCode=0 Nov 22 11:15:25 crc kubenswrapper[4772]: I1122 11:15:25.952779 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-74bkr" Nov 22 11:15:25 crc kubenswrapper[4772]: I1122 11:15:25.952788 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-74bkr" event={"ID":"6230f32f-af32-4e96-a2a3-9e9a2d0dc66b","Type":"ContainerDied","Data":"7d90a0b0991418156ae9f1905a65da4ac3cb625fbb708b744cbbf9b9426ddc94"} Nov 22 11:15:25 crc kubenswrapper[4772]: I1122 11:15:25.953434 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-74bkr" event={"ID":"6230f32f-af32-4e96-a2a3-9e9a2d0dc66b","Type":"ContainerDied","Data":"47729e1aa195c1c5ee68b25ecfee93d5878b30dd9b3d6c8caf6ffe90dd211724"} Nov 22 11:15:25 crc kubenswrapper[4772]: I1122 11:15:25.953460 4772 scope.go:117] "RemoveContainer" containerID="7d90a0b0991418156ae9f1905a65da4ac3cb625fbb708b744cbbf9b9426ddc94" Nov 22 11:15:25 crc kubenswrapper[4772]: I1122 11:15:25.983674 4772 scope.go:117] "RemoveContainer" containerID="ba93bc34548b2f49216da729abc90a14a6658fa6b2697674a0b47455229737c9" Nov 22 11:15:25 crc kubenswrapper[4772]: I1122 11:15:25.992078 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-74bkr"] Nov 22 11:15:25 crc kubenswrapper[4772]: I1122 11:15:25.998293 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-74bkr"] Nov 22 11:15:26 crc kubenswrapper[4772]: I1122 11:15:26.005773 4772 scope.go:117] "RemoveContainer" containerID="4ef9b31952fc052e5df9b16dfb403823c066167b982512d978b14096746d3f1f" Nov 22 11:15:26 crc kubenswrapper[4772]: I1122 11:15:26.043778 4772 scope.go:117] "RemoveContainer" containerID="7d90a0b0991418156ae9f1905a65da4ac3cb625fbb708b744cbbf9b9426ddc94" Nov 22 11:15:26 crc kubenswrapper[4772]: E1122 11:15:26.045923 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d90a0b0991418156ae9f1905a65da4ac3cb625fbb708b744cbbf9b9426ddc94\": container with ID starting with 7d90a0b0991418156ae9f1905a65da4ac3cb625fbb708b744cbbf9b9426ddc94 not found: ID does not exist" containerID="7d90a0b0991418156ae9f1905a65da4ac3cb625fbb708b744cbbf9b9426ddc94" Nov 22 11:15:26 crc kubenswrapper[4772]: I1122 11:15:26.045967 4772 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d90a0b0991418156ae9f1905a65da4ac3cb625fbb708b744cbbf9b9426ddc94"} err="failed to get container status \"7d90a0b0991418156ae9f1905a65da4ac3cb625fbb708b744cbbf9b9426ddc94\": rpc error: code = NotFound desc = could not find container \"7d90a0b0991418156ae9f1905a65da4ac3cb625fbb708b744cbbf9b9426ddc94\": container with ID starting with 7d90a0b0991418156ae9f1905a65da4ac3cb625fbb708b744cbbf9b9426ddc94 not found: ID does not exist" Nov 22 11:15:26 crc kubenswrapper[4772]: I1122 11:15:26.045990 4772 scope.go:117] "RemoveContainer" containerID="ba93bc34548b2f49216da729abc90a14a6658fa6b2697674a0b47455229737c9" Nov 22 11:15:26 crc kubenswrapper[4772]: E1122 11:15:26.047684 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba93bc34548b2f49216da729abc90a14a6658fa6b2697674a0b47455229737c9\": container with ID starting with ba93bc34548b2f49216da729abc90a14a6658fa6b2697674a0b47455229737c9 not found: ID does not exist" containerID="ba93bc34548b2f49216da729abc90a14a6658fa6b2697674a0b47455229737c9" Nov 22 11:15:26 crc kubenswrapper[4772]: I1122 11:15:26.047781 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba93bc34548b2f49216da729abc90a14a6658fa6b2697674a0b47455229737c9"} err="failed to get container status \"ba93bc34548b2f49216da729abc90a14a6658fa6b2697674a0b47455229737c9\": rpc error: code = NotFound desc = could not find container \"ba93bc34548b2f49216da729abc90a14a6658fa6b2697674a0b47455229737c9\": container with ID starting with ba93bc34548b2f49216da729abc90a14a6658fa6b2697674a0b47455229737c9 not found: ID does not exist" Nov 22 11:15:26 crc kubenswrapper[4772]: I1122 11:15:26.047845 4772 scope.go:117] "RemoveContainer" containerID="4ef9b31952fc052e5df9b16dfb403823c066167b982512d978b14096746d3f1f" Nov 22 11:15:26 crc kubenswrapper[4772]: E1122 11:15:26.048234 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ef9b31952fc052e5df9b16dfb403823c066167b982512d978b14096746d3f1f\": container with ID starting with 4ef9b31952fc052e5df9b16dfb403823c066167b982512d978b14096746d3f1f not found: ID does not exist" containerID="4ef9b31952fc052e5df9b16dfb403823c066167b982512d978b14096746d3f1f" Nov 22 11:15:26 crc kubenswrapper[4772]: I1122 11:15:26.048276 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ef9b31952fc052e5df9b16dfb403823c066167b982512d978b14096746d3f1f"} err="failed to get container status \"4ef9b31952fc052e5df9b16dfb403823c066167b982512d978b14096746d3f1f\": rpc error: code = NotFound desc = could not find container \"4ef9b31952fc052e5df9b16dfb403823c066167b982512d978b14096746d3f1f\": container with ID starting with 4ef9b31952fc052e5df9b16dfb403823c066167b982512d978b14096746d3f1f not found: ID does not exist" Nov 22 11:15:27 crc kubenswrapper[4772]: I1122 11:15:27.422525 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6230f32f-af32-4e96-a2a3-9e9a2d0dc66b" path="/var/lib/kubelet/pods/6230f32f-af32-4e96-a2a3-9e9a2d0dc66b/volumes" Nov 22 11:15:31 crc kubenswrapper[4772]: I1122 11:15:31.532693 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 11:15:31 crc kubenswrapper[4772]: I1122 11:15:31.533105 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 11:16:01 crc kubenswrapper[4772]: I1122 11:16:01.532966 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 11:16:01 crc kubenswrapper[4772]: I1122 11:16:01.533510 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 11:16:31 crc kubenswrapper[4772]: I1122 11:16:31.532841 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 11:16:31 crc kubenswrapper[4772]: I1122 11:16:31.534482 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 11:16:31 crc kubenswrapper[4772]: I1122 11:16:31.534631 4772 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" Nov 22 11:16:31 crc kubenswrapper[4772]: I1122 11:16:31.535446 4772 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0bc0f3ff7ead827f9165f4665458e8a286ac7a9b3bae4573b079c545ed50c71e"} pod="openshift-machine-config-operator/machine-config-daemon-wwshd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 11:16:31 crc kubenswrapper[4772]: I1122 11:16:31.535596 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" containerID="cri-o://0bc0f3ff7ead827f9165f4665458e8a286ac7a9b3bae4573b079c545ed50c71e" gracePeriod=600 Nov 22 11:16:31 crc kubenswrapper[4772]: E1122 11:16:31.663238 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:16:32 crc kubenswrapper[4772]: I1122 11:16:32.437034 4772 
generic.go:334] "Generic (PLEG): container finished" podID="2386c238-461f-4956-940f-ac3c26eb052e" containerID="0bc0f3ff7ead827f9165f4665458e8a286ac7a9b3bae4573b079c545ed50c71e" exitCode=0 Nov 22 11:16:32 crc kubenswrapper[4772]: I1122 11:16:32.437120 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" event={"ID":"2386c238-461f-4956-940f-ac3c26eb052e","Type":"ContainerDied","Data":"0bc0f3ff7ead827f9165f4665458e8a286ac7a9b3bae4573b079c545ed50c71e"} Nov 22 11:16:32 crc kubenswrapper[4772]: I1122 11:16:32.437154 4772 scope.go:117] "RemoveContainer" containerID="5e4c663c90bf28427705b49fecfbb62fa2d6fccd4685ea4cb55fb6637ad86c41" Nov 22 11:16:32 crc kubenswrapper[4772]: I1122 11:16:32.438138 4772 scope.go:117] "RemoveContainer" containerID="0bc0f3ff7ead827f9165f4665458e8a286ac7a9b3bae4573b079c545ed50c71e" Nov 22 11:16:32 crc kubenswrapper[4772]: E1122 11:16:32.438700 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:16:44 crc kubenswrapper[4772]: I1122 11:16:44.413876 4772 scope.go:117] "RemoveContainer" containerID="0bc0f3ff7ead827f9165f4665458e8a286ac7a9b3bae4573b079c545ed50c71e" Nov 22 11:16:44 crc kubenswrapper[4772]: E1122 11:16:44.414421 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:16:56 crc kubenswrapper[4772]: I1122 11:16:56.413415 4772 scope.go:117] "RemoveContainer" containerID="0bc0f3ff7ead827f9165f4665458e8a286ac7a9b3bae4573b079c545ed50c71e" Nov 22 11:16:56 crc kubenswrapper[4772]: E1122 11:16:56.414512 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:17:09 crc kubenswrapper[4772]: I1122 11:17:09.412971 4772 scope.go:117] "RemoveContainer" containerID="0bc0f3ff7ead827f9165f4665458e8a286ac7a9b3bae4573b079c545ed50c71e" Nov 22 11:17:09 crc kubenswrapper[4772]: E1122 11:17:09.413663 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:17:20 crc kubenswrapper[4772]: I1122 11:17:20.414122 4772 scope.go:117] "RemoveContainer" 
containerID="0bc0f3ff7ead827f9165f4665458e8a286ac7a9b3bae4573b079c545ed50c71e" Nov 22 11:17:20 crc kubenswrapper[4772]: E1122 11:17:20.414841 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:17:31 crc kubenswrapper[4772]: I1122 11:17:31.422467 4772 scope.go:117] "RemoveContainer" containerID="0bc0f3ff7ead827f9165f4665458e8a286ac7a9b3bae4573b079c545ed50c71e" Nov 22 11:17:31 crc kubenswrapper[4772]: E1122 11:17:31.423248 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:17:44 crc kubenswrapper[4772]: I1122 11:17:44.414207 4772 scope.go:117] "RemoveContainer" containerID="0bc0f3ff7ead827f9165f4665458e8a286ac7a9b3bae4573b079c545ed50c71e" Nov 22 11:17:44 crc kubenswrapper[4772]: E1122 11:17:44.415156 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:17:56 crc kubenswrapper[4772]: I1122 11:17:56.413592 4772 scope.go:117] "RemoveContainer" containerID="0bc0f3ff7ead827f9165f4665458e8a286ac7a9b3bae4573b079c545ed50c71e" Nov 22 11:17:56 crc kubenswrapper[4772]: E1122 11:17:56.414291 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:18:08 crc kubenswrapper[4772]: I1122 11:18:08.413553 4772 scope.go:117] "RemoveContainer" containerID="0bc0f3ff7ead827f9165f4665458e8a286ac7a9b3bae4573b079c545ed50c71e" Nov 22 11:18:08 crc kubenswrapper[4772]: E1122 11:18:08.414454 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:18:22 crc kubenswrapper[4772]: I1122 11:18:22.413861 4772 scope.go:117] "RemoveContainer" containerID="0bc0f3ff7ead827f9165f4665458e8a286ac7a9b3bae4573b079c545ed50c71e" Nov 22 11:18:22 crc kubenswrapper[4772]: E1122 11:18:22.414567 4772 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:18:33 crc kubenswrapper[4772]: I1122 11:18:33.413525 4772 scope.go:117] "RemoveContainer" containerID="0bc0f3ff7ead827f9165f4665458e8a286ac7a9b3bae4573b079c545ed50c71e" Nov 22 11:18:33 crc kubenswrapper[4772]: E1122 11:18:33.414373 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:18:44 crc kubenswrapper[4772]: I1122 11:18:44.413806 4772 scope.go:117] "RemoveContainer" containerID="0bc0f3ff7ead827f9165f4665458e8a286ac7a9b3bae4573b079c545ed50c71e" Nov 22 11:18:44 crc kubenswrapper[4772]: E1122 11:18:44.414612 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:18:58 crc kubenswrapper[4772]: I1122 11:18:58.413470 4772 scope.go:117] "RemoveContainer" containerID="0bc0f3ff7ead827f9165f4665458e8a286ac7a9b3bae4573b079c545ed50c71e" Nov 22 11:18:58 crc kubenswrapper[4772]: E1122 11:18:58.414335 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:19:12 crc kubenswrapper[4772]: I1122 11:19:12.414015 4772 scope.go:117] "RemoveContainer" containerID="0bc0f3ff7ead827f9165f4665458e8a286ac7a9b3bae4573b079c545ed50c71e" Nov 22 11:19:12 crc kubenswrapper[4772]: E1122 11:19:12.414967 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:19:27 crc kubenswrapper[4772]: I1122 11:19:27.413908 4772 scope.go:117] "RemoveContainer" containerID="0bc0f3ff7ead827f9165f4665458e8a286ac7a9b3bae4573b079c545ed50c71e" Nov 22 11:19:27 crc kubenswrapper[4772]: E1122 11:19:27.414937 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:19:39 crc kubenswrapper[4772]: I1122 11:19:39.417445 4772 scope.go:117] "RemoveContainer" containerID="0bc0f3ff7ead827f9165f4665458e8a286ac7a9b3bae4573b079c545ed50c71e" Nov 22 11:19:39 crc kubenswrapper[4772]: E1122 11:19:39.418740 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:19:50 crc kubenswrapper[4772]: I1122 11:19:50.413756 4772 scope.go:117] "RemoveContainer" containerID="0bc0f3ff7ead827f9165f4665458e8a286ac7a9b3bae4573b079c545ed50c71e" Nov 22 11:19:50 crc kubenswrapper[4772]: E1122 11:19:50.417129 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:20:04 crc kubenswrapper[4772]: I1122 11:20:04.414614 4772 scope.go:117] "RemoveContainer" containerID="0bc0f3ff7ead827f9165f4665458e8a286ac7a9b3bae4573b079c545ed50c71e" Nov 22 11:20:04 crc kubenswrapper[4772]: E1122 11:20:04.415855 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:20:18 crc kubenswrapper[4772]: I1122 11:20:18.413856 4772 scope.go:117] "RemoveContainer" containerID="0bc0f3ff7ead827f9165f4665458e8a286ac7a9b3bae4573b079c545ed50c71e" Nov 22 11:20:18 crc kubenswrapper[4772]: E1122 11:20:18.414613 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:20:31 crc kubenswrapper[4772]: I1122 11:20:31.419456 4772 scope.go:117] "RemoveContainer" containerID="0bc0f3ff7ead827f9165f4665458e8a286ac7a9b3bae4573b079c545ed50c71e" Nov 22 11:20:31 crc kubenswrapper[4772]: E1122 11:20:31.420455 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" 
podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:20:43 crc kubenswrapper[4772]: I1122 11:20:43.413726 4772 scope.go:117] "RemoveContainer" containerID="0bc0f3ff7ead827f9165f4665458e8a286ac7a9b3bae4573b079c545ed50c71e" Nov 22 11:20:43 crc kubenswrapper[4772]: E1122 11:20:43.414453 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:20:58 crc kubenswrapper[4772]: I1122 11:20:58.413715 4772 scope.go:117] "RemoveContainer" containerID="0bc0f3ff7ead827f9165f4665458e8a286ac7a9b3bae4573b079c545ed50c71e" Nov 22 11:20:58 crc kubenswrapper[4772]: E1122 11:20:58.415572 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:21:11 crc kubenswrapper[4772]: I1122 11:21:11.418497 4772 scope.go:117] "RemoveContainer" containerID="0bc0f3ff7ead827f9165f4665458e8a286ac7a9b3bae4573b079c545ed50c71e" Nov 22 11:21:11 crc kubenswrapper[4772]: E1122 11:21:11.419283 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:21:22 crc kubenswrapper[4772]: I1122 11:21:22.414107 4772 scope.go:117] "RemoveContainer" containerID="0bc0f3ff7ead827f9165f4665458e8a286ac7a9b3bae4573b079c545ed50c71e" Nov 22 11:21:22 crc kubenswrapper[4772]: E1122 11:21:22.414984 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:21:33 crc kubenswrapper[4772]: I1122 11:21:33.414295 4772 scope.go:117] "RemoveContainer" containerID="0bc0f3ff7ead827f9165f4665458e8a286ac7a9b3bae4573b079c545ed50c71e" Nov 22 11:21:33 crc kubenswrapper[4772]: I1122 11:21:33.749624 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" event={"ID":"2386c238-461f-4956-940f-ac3c26eb052e","Type":"ContainerStarted","Data":"a8e794cde97f20f186c6f9ba6fb2f3875c615dce4e7b795bab621b0e98f21876"} Nov 22 11:23:28 crc kubenswrapper[4772]: I1122 11:23:28.666618 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zbf8p"] Nov 22 11:23:28 crc kubenswrapper[4772]: E1122 11:23:28.667676 4772 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="6230f32f-af32-4e96-a2a3-9e9a2d0dc66b" containerName="extract-utilities" Nov 22 11:23:28 crc kubenswrapper[4772]: I1122 11:23:28.667694 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="6230f32f-af32-4e96-a2a3-9e9a2d0dc66b" containerName="extract-utilities" Nov 22 11:23:28 crc kubenswrapper[4772]: E1122 11:23:28.667706 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6230f32f-af32-4e96-a2a3-9e9a2d0dc66b" containerName="registry-server" Nov 22 11:23:28 crc kubenswrapper[4772]: I1122 11:23:28.667714 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="6230f32f-af32-4e96-a2a3-9e9a2d0dc66b" containerName="registry-server" Nov 22 11:23:28 crc kubenswrapper[4772]: E1122 11:23:28.667741 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6230f32f-af32-4e96-a2a3-9e9a2d0dc66b" containerName="extract-content" Nov 22 11:23:28 crc kubenswrapper[4772]: I1122 11:23:28.667749 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="6230f32f-af32-4e96-a2a3-9e9a2d0dc66b" containerName="extract-content" Nov 22 11:23:28 crc kubenswrapper[4772]: I1122 11:23:28.667930 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="6230f32f-af32-4e96-a2a3-9e9a2d0dc66b" containerName="registry-server" Nov 22 11:23:28 crc kubenswrapper[4772]: I1122 11:23:28.669289 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zbf8p" Nov 22 11:23:28 crc kubenswrapper[4772]: I1122 11:23:28.678188 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zbf8p"] Nov 22 11:23:28 crc kubenswrapper[4772]: I1122 11:23:28.776125 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccfe0f72-a621-4384-9d5d-bf86a2f6d356-catalog-content\") pod \"community-operators-zbf8p\" (UID: \"ccfe0f72-a621-4384-9d5d-bf86a2f6d356\") " pod="openshift-marketplace/community-operators-zbf8p" Nov 22 11:23:28 crc kubenswrapper[4772]: I1122 11:23:28.778563 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnskw\" (UniqueName: \"kubernetes.io/projected/ccfe0f72-a621-4384-9d5d-bf86a2f6d356-kube-api-access-xnskw\") pod \"community-operators-zbf8p\" (UID: \"ccfe0f72-a621-4384-9d5d-bf86a2f6d356\") " pod="openshift-marketplace/community-operators-zbf8p" Nov 22 11:23:28 crc kubenswrapper[4772]: I1122 11:23:28.778716 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccfe0f72-a621-4384-9d5d-bf86a2f6d356-utilities\") pod \"community-operators-zbf8p\" (UID: \"ccfe0f72-a621-4384-9d5d-bf86a2f6d356\") " pod="openshift-marketplace/community-operators-zbf8p" Nov 22 11:23:28 crc kubenswrapper[4772]: I1122 11:23:28.879912 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccfe0f72-a621-4384-9d5d-bf86a2f6d356-catalog-content\") pod \"community-operators-zbf8p\" (UID: \"ccfe0f72-a621-4384-9d5d-bf86a2f6d356\") " pod="openshift-marketplace/community-operators-zbf8p" Nov 22 11:23:28 crc kubenswrapper[4772]: I1122 11:23:28.879974 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnskw\" (UniqueName: \"kubernetes.io/projected/ccfe0f72-a621-4384-9d5d-bf86a2f6d356-kube-api-access-xnskw\") 
pod \"community-operators-zbf8p\" (UID: \"ccfe0f72-a621-4384-9d5d-bf86a2f6d356\") " pod="openshift-marketplace/community-operators-zbf8p" Nov 22 11:23:28 crc kubenswrapper[4772]: I1122 11:23:28.880027 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccfe0f72-a621-4384-9d5d-bf86a2f6d356-utilities\") pod \"community-operators-zbf8p\" (UID: \"ccfe0f72-a621-4384-9d5d-bf86a2f6d356\") " pod="openshift-marketplace/community-operators-zbf8p" Nov 22 11:23:28 crc kubenswrapper[4772]: I1122 11:23:28.880484 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccfe0f72-a621-4384-9d5d-bf86a2f6d356-catalog-content\") pod \"community-operators-zbf8p\" (UID: \"ccfe0f72-a621-4384-9d5d-bf86a2f6d356\") " pod="openshift-marketplace/community-operators-zbf8p" Nov 22 11:23:28 crc kubenswrapper[4772]: I1122 11:23:28.880559 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccfe0f72-a621-4384-9d5d-bf86a2f6d356-utilities\") pod \"community-operators-zbf8p\" (UID: \"ccfe0f72-a621-4384-9d5d-bf86a2f6d356\") " pod="openshift-marketplace/community-operators-zbf8p" Nov 22 11:23:28 crc kubenswrapper[4772]: I1122 11:23:28.900175 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnskw\" (UniqueName: \"kubernetes.io/projected/ccfe0f72-a621-4384-9d5d-bf86a2f6d356-kube-api-access-xnskw\") pod \"community-operators-zbf8p\" (UID: \"ccfe0f72-a621-4384-9d5d-bf86a2f6d356\") " pod="openshift-marketplace/community-operators-zbf8p" Nov 22 11:23:28 crc kubenswrapper[4772]: I1122 11:23:28.992914 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zbf8p" Nov 22 11:23:29 crc kubenswrapper[4772]: I1122 11:23:29.484623 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zbf8p"] Nov 22 11:23:29 crc kubenswrapper[4772]: I1122 11:23:29.641320 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zbf8p" event={"ID":"ccfe0f72-a621-4384-9d5d-bf86a2f6d356","Type":"ContainerStarted","Data":"c52a1446e00e4c9bdd951ef58b9be57f73bd68585cf247b355774d2865ba325e"} Nov 22 11:23:29 crc kubenswrapper[4772]: I1122 11:23:29.641367 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zbf8p" event={"ID":"ccfe0f72-a621-4384-9d5d-bf86a2f6d356","Type":"ContainerStarted","Data":"ed4b92d00ea5cc4abf522e622ad445471838d7b6f8f8a9bd85c5248c191a791b"} Nov 22 11:23:30 crc kubenswrapper[4772]: I1122 11:23:30.651804 4772 generic.go:334] "Generic (PLEG): container finished" podID="ccfe0f72-a621-4384-9d5d-bf86a2f6d356" containerID="c52a1446e00e4c9bdd951ef58b9be57f73bd68585cf247b355774d2865ba325e" exitCode=0 Nov 22 11:23:30 crc kubenswrapper[4772]: I1122 11:23:30.651877 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zbf8p" event={"ID":"ccfe0f72-a621-4384-9d5d-bf86a2f6d356","Type":"ContainerDied","Data":"c52a1446e00e4c9bdd951ef58b9be57f73bd68585cf247b355774d2865ba325e"} Nov 22 11:23:30 crc kubenswrapper[4772]: I1122 11:23:30.655315 4772 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 11:23:31 crc kubenswrapper[4772]: I1122 11:23:31.662961 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zbf8p" event={"ID":"ccfe0f72-a621-4384-9d5d-bf86a2f6d356","Type":"ContainerStarted","Data":"084d894e5eb0c939f625139e691b0bddb6faffc7d01f7f4297d12a09df1c1c6c"} Nov 22 11:23:32 crc kubenswrapper[4772]: I1122 11:23:32.671062 4772 generic.go:334] "Generic (PLEG): container finished" podID="ccfe0f72-a621-4384-9d5d-bf86a2f6d356" containerID="084d894e5eb0c939f625139e691b0bddb6faffc7d01f7f4297d12a09df1c1c6c" exitCode=0 Nov 22 11:23:32 crc kubenswrapper[4772]: I1122 11:23:32.671071 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zbf8p" event={"ID":"ccfe0f72-a621-4384-9d5d-bf86a2f6d356","Type":"ContainerDied","Data":"084d894e5eb0c939f625139e691b0bddb6faffc7d01f7f4297d12a09df1c1c6c"} Nov 22 11:23:33 crc kubenswrapper[4772]: I1122 11:23:33.679869 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zbf8p" event={"ID":"ccfe0f72-a621-4384-9d5d-bf86a2f6d356","Type":"ContainerStarted","Data":"2e07d11c6d17eca43e7303cd260ae0a18b7bb802c5d88699290c6af2bf2f1a3c"} Nov 22 11:23:33 crc kubenswrapper[4772]: I1122 11:23:33.701751 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zbf8p" podStartSLOduration=3.258692191 podStartE2EDuration="5.701735557s" podCreationTimestamp="2025-11-22 11:23:28 +0000 UTC" firstStartedPulling="2025-11-22 11:23:30.655074627 +0000 UTC m=+2730.894519121" lastFinishedPulling="2025-11-22 11:23:33.098117993 +0000 UTC m=+2733.337562487" observedRunningTime="2025-11-22 11:23:33.698324482 +0000 UTC m=+2733.937768976" watchObservedRunningTime="2025-11-22 11:23:33.701735557 +0000 UTC m=+2733.941180051" Nov 22 11:23:38 crc kubenswrapper[4772]: I1122 
11:23:38.994089 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zbf8p" Nov 22 11:23:38 crc kubenswrapper[4772]: I1122 11:23:38.994786 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zbf8p" Nov 22 11:23:39 crc kubenswrapper[4772]: I1122 11:23:39.051658 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zbf8p" Nov 22 11:23:39 crc kubenswrapper[4772]: I1122 11:23:39.813436 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zbf8p" Nov 22 11:23:39 crc kubenswrapper[4772]: I1122 11:23:39.874176 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zbf8p"] Nov 22 11:23:41 crc kubenswrapper[4772]: I1122 11:23:41.757459 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zbf8p" podUID="ccfe0f72-a621-4384-9d5d-bf86a2f6d356" containerName="registry-server" containerID="cri-o://2e07d11c6d17eca43e7303cd260ae0a18b7bb802c5d88699290c6af2bf2f1a3c" gracePeriod=2 Nov 22 11:23:42 crc kubenswrapper[4772]: I1122 11:23:42.230644 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zbf8p" Nov 22 11:23:42 crc kubenswrapper[4772]: I1122 11:23:42.288633 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccfe0f72-a621-4384-9d5d-bf86a2f6d356-utilities\") pod \"ccfe0f72-a621-4384-9d5d-bf86a2f6d356\" (UID: \"ccfe0f72-a621-4384-9d5d-bf86a2f6d356\") " Nov 22 11:23:42 crc kubenswrapper[4772]: I1122 11:23:42.288732 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnskw\" (UniqueName: \"kubernetes.io/projected/ccfe0f72-a621-4384-9d5d-bf86a2f6d356-kube-api-access-xnskw\") pod \"ccfe0f72-a621-4384-9d5d-bf86a2f6d356\" (UID: \"ccfe0f72-a621-4384-9d5d-bf86a2f6d356\") " Nov 22 11:23:42 crc kubenswrapper[4772]: I1122 11:23:42.288833 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccfe0f72-a621-4384-9d5d-bf86a2f6d356-catalog-content\") pod \"ccfe0f72-a621-4384-9d5d-bf86a2f6d356\" (UID: \"ccfe0f72-a621-4384-9d5d-bf86a2f6d356\") " Nov 22 11:23:42 crc kubenswrapper[4772]: I1122 11:23:42.289795 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ccfe0f72-a621-4384-9d5d-bf86a2f6d356-utilities" (OuterVolumeSpecName: "utilities") pod "ccfe0f72-a621-4384-9d5d-bf86a2f6d356" (UID: "ccfe0f72-a621-4384-9d5d-bf86a2f6d356"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:23:42 crc kubenswrapper[4772]: I1122 11:23:42.295875 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccfe0f72-a621-4384-9d5d-bf86a2f6d356-kube-api-access-xnskw" (OuterVolumeSpecName: "kube-api-access-xnskw") pod "ccfe0f72-a621-4384-9d5d-bf86a2f6d356" (UID: "ccfe0f72-a621-4384-9d5d-bf86a2f6d356"). InnerVolumeSpecName "kube-api-access-xnskw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:23:42 crc kubenswrapper[4772]: I1122 11:23:42.390266 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccfe0f72-a621-4384-9d5d-bf86a2f6d356-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 11:23:42 crc kubenswrapper[4772]: I1122 11:23:42.390566 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnskw\" (UniqueName: \"kubernetes.io/projected/ccfe0f72-a621-4384-9d5d-bf86a2f6d356-kube-api-access-xnskw\") on node \"crc\" DevicePath \"\"" Nov 22 11:23:42 crc kubenswrapper[4772]: I1122 11:23:42.766776 4772 generic.go:334] "Generic (PLEG): container finished" podID="ccfe0f72-a621-4384-9d5d-bf86a2f6d356" containerID="2e07d11c6d17eca43e7303cd260ae0a18b7bb802c5d88699290c6af2bf2f1a3c" exitCode=0 Nov 22 11:23:42 crc kubenswrapper[4772]: I1122 11:23:42.766837 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zbf8p" event={"ID":"ccfe0f72-a621-4384-9d5d-bf86a2f6d356","Type":"ContainerDied","Data":"2e07d11c6d17eca43e7303cd260ae0a18b7bb802c5d88699290c6af2bf2f1a3c"} Nov 22 11:23:42 crc kubenswrapper[4772]: I1122 11:23:42.766881 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zbf8p" Nov 22 11:23:42 crc kubenswrapper[4772]: I1122 11:23:42.766939 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zbf8p" event={"ID":"ccfe0f72-a621-4384-9d5d-bf86a2f6d356","Type":"ContainerDied","Data":"ed4b92d00ea5cc4abf522e622ad445471838d7b6f8f8a9bd85c5248c191a791b"} Nov 22 11:23:42 crc kubenswrapper[4772]: I1122 11:23:42.766969 4772 scope.go:117] "RemoveContainer" containerID="2e07d11c6d17eca43e7303cd260ae0a18b7bb802c5d88699290c6af2bf2f1a3c" Nov 22 11:23:42 crc kubenswrapper[4772]: I1122 11:23:42.793602 4772 scope.go:117] "RemoveContainer" containerID="084d894e5eb0c939f625139e691b0bddb6faffc7d01f7f4297d12a09df1c1c6c" Nov 22 11:23:42 crc kubenswrapper[4772]: I1122 11:23:42.829181 4772 scope.go:117] "RemoveContainer" containerID="c52a1446e00e4c9bdd951ef58b9be57f73bd68585cf247b355774d2865ba325e" Nov 22 11:23:42 crc kubenswrapper[4772]: I1122 11:23:42.850811 4772 scope.go:117] "RemoveContainer" containerID="2e07d11c6d17eca43e7303cd260ae0a18b7bb802c5d88699290c6af2bf2f1a3c" Nov 22 11:23:42 crc kubenswrapper[4772]: E1122 11:23:42.851432 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e07d11c6d17eca43e7303cd260ae0a18b7bb802c5d88699290c6af2bf2f1a3c\": container with ID starting with 2e07d11c6d17eca43e7303cd260ae0a18b7bb802c5d88699290c6af2bf2f1a3c not found: ID does not exist" containerID="2e07d11c6d17eca43e7303cd260ae0a18b7bb802c5d88699290c6af2bf2f1a3c" Nov 22 11:23:42 crc kubenswrapper[4772]: I1122 11:23:42.851464 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e07d11c6d17eca43e7303cd260ae0a18b7bb802c5d88699290c6af2bf2f1a3c"} err="failed to get container status \"2e07d11c6d17eca43e7303cd260ae0a18b7bb802c5d88699290c6af2bf2f1a3c\": rpc error: code = NotFound desc = could not find container \"2e07d11c6d17eca43e7303cd260ae0a18b7bb802c5d88699290c6af2bf2f1a3c\": container with ID starting with 2e07d11c6d17eca43e7303cd260ae0a18b7bb802c5d88699290c6af2bf2f1a3c not found: ID does not exist" Nov 22 11:23:42 crc kubenswrapper[4772]: I1122 11:23:42.851487 4772 scope.go:117] 
"RemoveContainer" containerID="084d894e5eb0c939f625139e691b0bddb6faffc7d01f7f4297d12a09df1c1c6c" Nov 22 11:23:42 crc kubenswrapper[4772]: E1122 11:23:42.851897 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"084d894e5eb0c939f625139e691b0bddb6faffc7d01f7f4297d12a09df1c1c6c\": container with ID starting with 084d894e5eb0c939f625139e691b0bddb6faffc7d01f7f4297d12a09df1c1c6c not found: ID does not exist" containerID="084d894e5eb0c939f625139e691b0bddb6faffc7d01f7f4297d12a09df1c1c6c" Nov 22 11:23:42 crc kubenswrapper[4772]: I1122 11:23:42.851916 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"084d894e5eb0c939f625139e691b0bddb6faffc7d01f7f4297d12a09df1c1c6c"} err="failed to get container status \"084d894e5eb0c939f625139e691b0bddb6faffc7d01f7f4297d12a09df1c1c6c\": rpc error: code = NotFound desc = could not find container \"084d894e5eb0c939f625139e691b0bddb6faffc7d01f7f4297d12a09df1c1c6c\": container with ID starting with 084d894e5eb0c939f625139e691b0bddb6faffc7d01f7f4297d12a09df1c1c6c not found: ID does not exist" Nov 22 11:23:42 crc kubenswrapper[4772]: I1122 11:23:42.851927 4772 scope.go:117] "RemoveContainer" containerID="c52a1446e00e4c9bdd951ef58b9be57f73bd68585cf247b355774d2865ba325e" Nov 22 11:23:42 crc kubenswrapper[4772]: E1122 11:23:42.852247 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c52a1446e00e4c9bdd951ef58b9be57f73bd68585cf247b355774d2865ba325e\": container with ID starting with c52a1446e00e4c9bdd951ef58b9be57f73bd68585cf247b355774d2865ba325e not found: ID does not exist" containerID="c52a1446e00e4c9bdd951ef58b9be57f73bd68585cf247b355774d2865ba325e" Nov 22 11:23:42 crc kubenswrapper[4772]: I1122 11:23:42.852284 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c52a1446e00e4c9bdd951ef58b9be57f73bd68585cf247b355774d2865ba325e"} err="failed to get container status \"c52a1446e00e4c9bdd951ef58b9be57f73bd68585cf247b355774d2865ba325e\": rpc error: code = NotFound desc = could not find container \"c52a1446e00e4c9bdd951ef58b9be57f73bd68585cf247b355774d2865ba325e\": container with ID starting with c52a1446e00e4c9bdd951ef58b9be57f73bd68585cf247b355774d2865ba325e not found: ID does not exist" Nov 22 11:23:43 crc kubenswrapper[4772]: I1122 11:23:43.497821 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ccfe0f72-a621-4384-9d5d-bf86a2f6d356-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ccfe0f72-a621-4384-9d5d-bf86a2f6d356" (UID: "ccfe0f72-a621-4384-9d5d-bf86a2f6d356"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:23:43 crc kubenswrapper[4772]: I1122 11:23:43.504469 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccfe0f72-a621-4384-9d5d-bf86a2f6d356-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 11:23:43 crc kubenswrapper[4772]: I1122 11:23:43.704269 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zbf8p"] Nov 22 11:23:43 crc kubenswrapper[4772]: I1122 11:23:43.710996 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zbf8p"] Nov 22 11:23:45 crc kubenswrapper[4772]: I1122 11:23:45.431198 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccfe0f72-a621-4384-9d5d-bf86a2f6d356" path="/var/lib/kubelet/pods/ccfe0f72-a621-4384-9d5d-bf86a2f6d356/volumes" Nov 22 11:24:01 crc kubenswrapper[4772]: I1122 11:24:01.533122 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 11:24:01 crc kubenswrapper[4772]: I1122 11:24:01.533968 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 11:24:08 crc kubenswrapper[4772]: I1122 11:24:08.523709 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-trvl6"] Nov 22 11:24:08 crc kubenswrapper[4772]: E1122 11:24:08.524729 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccfe0f72-a621-4384-9d5d-bf86a2f6d356" containerName="extract-utilities" Nov 22 11:24:08 crc kubenswrapper[4772]: I1122 11:24:08.524749 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccfe0f72-a621-4384-9d5d-bf86a2f6d356" containerName="extract-utilities" Nov 22 11:24:08 crc kubenswrapper[4772]: E1122 11:24:08.524825 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccfe0f72-a621-4384-9d5d-bf86a2f6d356" containerName="extract-content" Nov 22 11:24:08 crc kubenswrapper[4772]: I1122 11:24:08.524836 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccfe0f72-a621-4384-9d5d-bf86a2f6d356" containerName="extract-content" Nov 22 11:24:08 crc kubenswrapper[4772]: E1122 11:24:08.524851 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccfe0f72-a621-4384-9d5d-bf86a2f6d356" containerName="registry-server" Nov 22 11:24:08 crc kubenswrapper[4772]: I1122 11:24:08.524859 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccfe0f72-a621-4384-9d5d-bf86a2f6d356" containerName="registry-server" Nov 22 11:24:08 crc kubenswrapper[4772]: I1122 11:24:08.525013 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccfe0f72-a621-4384-9d5d-bf86a2f6d356" containerName="registry-server" Nov 22 11:24:08 crc kubenswrapper[4772]: I1122 11:24:08.526217 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-trvl6" Nov 22 11:24:08 crc kubenswrapper[4772]: I1122 11:24:08.538692 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-trvl6"] Nov 22 11:24:08 crc kubenswrapper[4772]: I1122 11:24:08.625272 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70dae412-1adc-41d9-b448-dc3f78a6b383-catalog-content\") pod \"redhat-operators-trvl6\" (UID: \"70dae412-1adc-41d9-b448-dc3f78a6b383\") " pod="openshift-marketplace/redhat-operators-trvl6" Nov 22 11:24:08 crc kubenswrapper[4772]: I1122 11:24:08.625424 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2mch\" (UniqueName: \"kubernetes.io/projected/70dae412-1adc-41d9-b448-dc3f78a6b383-kube-api-access-t2mch\") pod \"redhat-operators-trvl6\" (UID: \"70dae412-1adc-41d9-b448-dc3f78a6b383\") " pod="openshift-marketplace/redhat-operators-trvl6" Nov 22 11:24:08 crc kubenswrapper[4772]: I1122 11:24:08.625460 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70dae412-1adc-41d9-b448-dc3f78a6b383-utilities\") pod \"redhat-operators-trvl6\" (UID: \"70dae412-1adc-41d9-b448-dc3f78a6b383\") " pod="openshift-marketplace/redhat-operators-trvl6" Nov 22 11:24:08 crc kubenswrapper[4772]: I1122 11:24:08.726562 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2mch\" (UniqueName: \"kubernetes.io/projected/70dae412-1adc-41d9-b448-dc3f78a6b383-kube-api-access-t2mch\") pod \"redhat-operators-trvl6\" (UID: \"70dae412-1adc-41d9-b448-dc3f78a6b383\") " pod="openshift-marketplace/redhat-operators-trvl6" Nov 22 11:24:08 crc kubenswrapper[4772]: I1122 11:24:08.726622 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70dae412-1adc-41d9-b448-dc3f78a6b383-utilities\") pod \"redhat-operators-trvl6\" (UID: \"70dae412-1adc-41d9-b448-dc3f78a6b383\") " pod="openshift-marketplace/redhat-operators-trvl6" Nov 22 11:24:08 crc kubenswrapper[4772]: I1122 11:24:08.726652 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70dae412-1adc-41d9-b448-dc3f78a6b383-catalog-content\") pod \"redhat-operators-trvl6\" (UID: \"70dae412-1adc-41d9-b448-dc3f78a6b383\") " pod="openshift-marketplace/redhat-operators-trvl6" Nov 22 11:24:08 crc kubenswrapper[4772]: I1122 11:24:08.727357 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70dae412-1adc-41d9-b448-dc3f78a6b383-catalog-content\") pod \"redhat-operators-trvl6\" (UID: \"70dae412-1adc-41d9-b448-dc3f78a6b383\") " pod="openshift-marketplace/redhat-operators-trvl6" Nov 22 11:24:08 crc kubenswrapper[4772]: I1122 11:24:08.727400 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70dae412-1adc-41d9-b448-dc3f78a6b383-utilities\") pod \"redhat-operators-trvl6\" (UID: \"70dae412-1adc-41d9-b448-dc3f78a6b383\") " pod="openshift-marketplace/redhat-operators-trvl6" Nov 22 11:24:08 crc kubenswrapper[4772]: I1122 11:24:08.809722 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-t2mch\" (UniqueName: \"kubernetes.io/projected/70dae412-1adc-41d9-b448-dc3f78a6b383-kube-api-access-t2mch\") pod \"redhat-operators-trvl6\" (UID: \"70dae412-1adc-41d9-b448-dc3f78a6b383\") " pod="openshift-marketplace/redhat-operators-trvl6" Nov 22 11:24:08 crc kubenswrapper[4772]: I1122 11:24:08.848460 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-trvl6" Nov 22 11:24:09 crc kubenswrapper[4772]: I1122 11:24:09.277501 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-trvl6"] Nov 22 11:24:09 crc kubenswrapper[4772]: I1122 11:24:09.989355 4772 generic.go:334] "Generic (PLEG): container finished" podID="70dae412-1adc-41d9-b448-dc3f78a6b383" containerID="7e6eb914e03d5286e2636f8e672f2d45f721d4c4330748ac9238f4bda27c0f93" exitCode=0 Nov 22 11:24:09 crc kubenswrapper[4772]: I1122 11:24:09.989662 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-trvl6" event={"ID":"70dae412-1adc-41d9-b448-dc3f78a6b383","Type":"ContainerDied","Data":"7e6eb914e03d5286e2636f8e672f2d45f721d4c4330748ac9238f4bda27c0f93"} Nov 22 11:24:09 crc kubenswrapper[4772]: I1122 11:24:09.989781 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-trvl6" event={"ID":"70dae412-1adc-41d9-b448-dc3f78a6b383","Type":"ContainerStarted","Data":"4b2a4c674b1eb994cab11345476ad39b878bf3f32c1dcac5b20c0ceba7bd526d"} Nov 22 11:24:11 crc kubenswrapper[4772]: I1122 11:24:11.002851 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-trvl6" event={"ID":"70dae412-1adc-41d9-b448-dc3f78a6b383","Type":"ContainerStarted","Data":"9d8a05bba873917e4d86ed5feac3f2577cd6e1803a8dca3c0f2f858fb4cc44f4"} Nov 22 11:24:12 crc kubenswrapper[4772]: I1122 11:24:12.012587 4772 generic.go:334] "Generic (PLEG): container finished" podID="70dae412-1adc-41d9-b448-dc3f78a6b383" containerID="9d8a05bba873917e4d86ed5feac3f2577cd6e1803a8dca3c0f2f858fb4cc44f4" exitCode=0 Nov 22 11:24:12 crc kubenswrapper[4772]: I1122 11:24:12.012635 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-trvl6" event={"ID":"70dae412-1adc-41d9-b448-dc3f78a6b383","Type":"ContainerDied","Data":"9d8a05bba873917e4d86ed5feac3f2577cd6e1803a8dca3c0f2f858fb4cc44f4"} Nov 22 11:24:13 crc kubenswrapper[4772]: I1122 11:24:13.023401 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-trvl6" event={"ID":"70dae412-1adc-41d9-b448-dc3f78a6b383","Type":"ContainerStarted","Data":"60ad72ed6f27cbb1309a8b9be368d9e5612101dc166f3cb5506833ff25be2972"} Nov 22 11:24:13 crc kubenswrapper[4772]: I1122 11:24:13.041286 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-trvl6" podStartSLOduration=2.611154484 podStartE2EDuration="5.041258545s" podCreationTimestamp="2025-11-22 11:24:08 +0000 UTC" firstStartedPulling="2025-11-22 11:24:09.991759703 +0000 UTC m=+2770.231204197" lastFinishedPulling="2025-11-22 11:24:12.421863764 +0000 UTC m=+2772.661308258" observedRunningTime="2025-11-22 11:24:13.039217283 +0000 UTC m=+2773.278661807" watchObservedRunningTime="2025-11-22 11:24:13.041258545 +0000 UTC m=+2773.280703069" Nov 22 11:24:18 crc kubenswrapper[4772]: I1122 11:24:18.849643 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-trvl6" 
Nov 22 11:24:18 crc kubenswrapper[4772]: I1122 11:24:18.850123 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-trvl6" Nov 22 11:24:18 crc kubenswrapper[4772]: I1122 11:24:18.946457 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-trvl6" Nov 22 11:24:19 crc kubenswrapper[4772]: I1122 11:24:19.117113 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-trvl6" Nov 22 11:24:19 crc kubenswrapper[4772]: I1122 11:24:19.182823 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-trvl6"] Nov 22 11:24:21 crc kubenswrapper[4772]: I1122 11:24:21.096541 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-trvl6" podUID="70dae412-1adc-41d9-b448-dc3f78a6b383" containerName="registry-server" containerID="cri-o://60ad72ed6f27cbb1309a8b9be368d9e5612101dc166f3cb5506833ff25be2972" gracePeriod=2 Nov 22 11:24:21 crc kubenswrapper[4772]: I1122 11:24:21.613210 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-trvl6" Nov 22 11:24:21 crc kubenswrapper[4772]: I1122 11:24:21.792398 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70dae412-1adc-41d9-b448-dc3f78a6b383-catalog-content\") pod \"70dae412-1adc-41d9-b448-dc3f78a6b383\" (UID: \"70dae412-1adc-41d9-b448-dc3f78a6b383\") " Nov 22 11:24:21 crc kubenswrapper[4772]: I1122 11:24:21.792863 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2mch\" (UniqueName: \"kubernetes.io/projected/70dae412-1adc-41d9-b448-dc3f78a6b383-kube-api-access-t2mch\") pod \"70dae412-1adc-41d9-b448-dc3f78a6b383\" (UID: \"70dae412-1adc-41d9-b448-dc3f78a6b383\") " Nov 22 11:24:21 crc kubenswrapper[4772]: I1122 11:24:21.792931 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70dae412-1adc-41d9-b448-dc3f78a6b383-utilities\") pod \"70dae412-1adc-41d9-b448-dc3f78a6b383\" (UID: \"70dae412-1adc-41d9-b448-dc3f78a6b383\") " Nov 22 11:24:21 crc kubenswrapper[4772]: I1122 11:24:21.793860 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70dae412-1adc-41d9-b448-dc3f78a6b383-utilities" (OuterVolumeSpecName: "utilities") pod "70dae412-1adc-41d9-b448-dc3f78a6b383" (UID: "70dae412-1adc-41d9-b448-dc3f78a6b383"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:24:21 crc kubenswrapper[4772]: I1122 11:24:21.799894 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70dae412-1adc-41d9-b448-dc3f78a6b383-kube-api-access-t2mch" (OuterVolumeSpecName: "kube-api-access-t2mch") pod "70dae412-1adc-41d9-b448-dc3f78a6b383" (UID: "70dae412-1adc-41d9-b448-dc3f78a6b383"). InnerVolumeSpecName "kube-api-access-t2mch". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:24:21 crc kubenswrapper[4772]: I1122 11:24:21.886766 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70dae412-1adc-41d9-b448-dc3f78a6b383-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "70dae412-1adc-41d9-b448-dc3f78a6b383" (UID: "70dae412-1adc-41d9-b448-dc3f78a6b383"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:24:21 crc kubenswrapper[4772]: I1122 11:24:21.894600 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2mch\" (UniqueName: \"kubernetes.io/projected/70dae412-1adc-41d9-b448-dc3f78a6b383-kube-api-access-t2mch\") on node \"crc\" DevicePath \"\"" Nov 22 11:24:21 crc kubenswrapper[4772]: I1122 11:24:21.894643 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70dae412-1adc-41d9-b448-dc3f78a6b383-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 11:24:21 crc kubenswrapper[4772]: I1122 11:24:21.894665 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70dae412-1adc-41d9-b448-dc3f78a6b383-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 11:24:22 crc kubenswrapper[4772]: I1122 11:24:22.108933 4772 generic.go:334] "Generic (PLEG): container finished" podID="70dae412-1adc-41d9-b448-dc3f78a6b383" containerID="60ad72ed6f27cbb1309a8b9be368d9e5612101dc166f3cb5506833ff25be2972" exitCode=0 Nov 22 11:24:22 crc kubenswrapper[4772]: I1122 11:24:22.108980 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-trvl6" event={"ID":"70dae412-1adc-41d9-b448-dc3f78a6b383","Type":"ContainerDied","Data":"60ad72ed6f27cbb1309a8b9be368d9e5612101dc166f3cb5506833ff25be2972"} Nov 22 11:24:22 crc kubenswrapper[4772]: I1122 11:24:22.109009 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-trvl6" event={"ID":"70dae412-1adc-41d9-b448-dc3f78a6b383","Type":"ContainerDied","Data":"4b2a4c674b1eb994cab11345476ad39b878bf3f32c1dcac5b20c0ceba7bd526d"} Nov 22 11:24:22 crc kubenswrapper[4772]: I1122 11:24:22.109029 4772 scope.go:117] "RemoveContainer" containerID="60ad72ed6f27cbb1309a8b9be368d9e5612101dc166f3cb5506833ff25be2972" Nov 22 11:24:22 crc kubenswrapper[4772]: I1122 11:24:22.109143 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-trvl6" Nov 22 11:24:22 crc kubenswrapper[4772]: I1122 11:24:22.134897 4772 scope.go:117] "RemoveContainer" containerID="9d8a05bba873917e4d86ed5feac3f2577cd6e1803a8dca3c0f2f858fb4cc44f4" Nov 22 11:24:22 crc kubenswrapper[4772]: I1122 11:24:22.145267 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-trvl6"] Nov 22 11:24:22 crc kubenswrapper[4772]: I1122 11:24:22.151181 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-trvl6"] Nov 22 11:24:22 crc kubenswrapper[4772]: I1122 11:24:22.177250 4772 scope.go:117] "RemoveContainer" containerID="7e6eb914e03d5286e2636f8e672f2d45f721d4c4330748ac9238f4bda27c0f93" Nov 22 11:24:22 crc kubenswrapper[4772]: I1122 11:24:22.193251 4772 scope.go:117] "RemoveContainer" containerID="60ad72ed6f27cbb1309a8b9be368d9e5612101dc166f3cb5506833ff25be2972" Nov 22 11:24:22 crc kubenswrapper[4772]: E1122 11:24:22.193672 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60ad72ed6f27cbb1309a8b9be368d9e5612101dc166f3cb5506833ff25be2972\": container with ID starting with 60ad72ed6f27cbb1309a8b9be368d9e5612101dc166f3cb5506833ff25be2972 not found: ID does not exist" containerID="60ad72ed6f27cbb1309a8b9be368d9e5612101dc166f3cb5506833ff25be2972" Nov 22 11:24:22 crc kubenswrapper[4772]: I1122 11:24:22.193716 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60ad72ed6f27cbb1309a8b9be368d9e5612101dc166f3cb5506833ff25be2972"} err="failed to get container status \"60ad72ed6f27cbb1309a8b9be368d9e5612101dc166f3cb5506833ff25be2972\": rpc error: code = NotFound desc = could not find container \"60ad72ed6f27cbb1309a8b9be368d9e5612101dc166f3cb5506833ff25be2972\": container with ID starting with 60ad72ed6f27cbb1309a8b9be368d9e5612101dc166f3cb5506833ff25be2972 not found: ID does not exist" Nov 22 11:24:22 crc kubenswrapper[4772]: I1122 11:24:22.193743 4772 scope.go:117] "RemoveContainer" containerID="9d8a05bba873917e4d86ed5feac3f2577cd6e1803a8dca3c0f2f858fb4cc44f4" Nov 22 11:24:22 crc kubenswrapper[4772]: E1122 11:24:22.194033 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d8a05bba873917e4d86ed5feac3f2577cd6e1803a8dca3c0f2f858fb4cc44f4\": container with ID starting with 9d8a05bba873917e4d86ed5feac3f2577cd6e1803a8dca3c0f2f858fb4cc44f4 not found: ID does not exist" containerID="9d8a05bba873917e4d86ed5feac3f2577cd6e1803a8dca3c0f2f858fb4cc44f4" Nov 22 11:24:22 crc kubenswrapper[4772]: I1122 11:24:22.194080 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d8a05bba873917e4d86ed5feac3f2577cd6e1803a8dca3c0f2f858fb4cc44f4"} err="failed to get container status \"9d8a05bba873917e4d86ed5feac3f2577cd6e1803a8dca3c0f2f858fb4cc44f4\": rpc error: code = NotFound desc = could not find container \"9d8a05bba873917e4d86ed5feac3f2577cd6e1803a8dca3c0f2f858fb4cc44f4\": container with ID starting with 9d8a05bba873917e4d86ed5feac3f2577cd6e1803a8dca3c0f2f858fb4cc44f4 not found: ID does not exist" Nov 22 11:24:22 crc kubenswrapper[4772]: I1122 11:24:22.194102 4772 scope.go:117] "RemoveContainer" containerID="7e6eb914e03d5286e2636f8e672f2d45f721d4c4330748ac9238f4bda27c0f93" Nov 22 11:24:22 crc kubenswrapper[4772]: E1122 11:24:22.194482 4772 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"7e6eb914e03d5286e2636f8e672f2d45f721d4c4330748ac9238f4bda27c0f93\": container with ID starting with 7e6eb914e03d5286e2636f8e672f2d45f721d4c4330748ac9238f4bda27c0f93 not found: ID does not exist" containerID="7e6eb914e03d5286e2636f8e672f2d45f721d4c4330748ac9238f4bda27c0f93" Nov 22 11:24:22 crc kubenswrapper[4772]: I1122 11:24:22.194509 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e6eb914e03d5286e2636f8e672f2d45f721d4c4330748ac9238f4bda27c0f93"} err="failed to get container status \"7e6eb914e03d5286e2636f8e672f2d45f721d4c4330748ac9238f4bda27c0f93\": rpc error: code = NotFound desc = could not find container \"7e6eb914e03d5286e2636f8e672f2d45f721d4c4330748ac9238f4bda27c0f93\": container with ID starting with 7e6eb914e03d5286e2636f8e672f2d45f721d4c4330748ac9238f4bda27c0f93 not found: ID does not exist" Nov 22 11:24:23 crc kubenswrapper[4772]: I1122 11:24:23.423440 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70dae412-1adc-41d9-b448-dc3f78a6b383" path="/var/lib/kubelet/pods/70dae412-1adc-41d9-b448-dc3f78a6b383/volumes" Nov 22 11:24:31 crc kubenswrapper[4772]: I1122 11:24:31.533244 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 11:24:31 crc kubenswrapper[4772]: I1122 11:24:31.533801 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 11:25:01 crc kubenswrapper[4772]: I1122 11:25:01.532874 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 11:25:01 crc kubenswrapper[4772]: I1122 11:25:01.533541 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 11:25:01 crc kubenswrapper[4772]: I1122 11:25:01.533629 4772 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" Nov 22 11:25:01 crc kubenswrapper[4772]: I1122 11:25:01.534600 4772 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a8e794cde97f20f186c6f9ba6fb2f3875c615dce4e7b795bab621b0e98f21876"} pod="openshift-machine-config-operator/machine-config-daemon-wwshd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 11:25:01 crc kubenswrapper[4772]: I1122 11:25:01.534678 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" 
podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" containerID="cri-o://a8e794cde97f20f186c6f9ba6fb2f3875c615dce4e7b795bab621b0e98f21876" gracePeriod=600 Nov 22 11:25:02 crc kubenswrapper[4772]: I1122 11:25:02.448663 4772 generic.go:334] "Generic (PLEG): container finished" podID="2386c238-461f-4956-940f-ac3c26eb052e" containerID="a8e794cde97f20f186c6f9ba6fb2f3875c615dce4e7b795bab621b0e98f21876" exitCode=0 Nov 22 11:25:02 crc kubenswrapper[4772]: I1122 11:25:02.448787 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" event={"ID":"2386c238-461f-4956-940f-ac3c26eb052e","Type":"ContainerDied","Data":"a8e794cde97f20f186c6f9ba6fb2f3875c615dce4e7b795bab621b0e98f21876"} Nov 22 11:25:02 crc kubenswrapper[4772]: I1122 11:25:02.449708 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" event={"ID":"2386c238-461f-4956-940f-ac3c26eb052e","Type":"ContainerStarted","Data":"94c5b79944aa5d164e7f0bc3faf4242a108ea17bc3cdb7a2b82a3165e34756a4"} Nov 22 11:25:02 crc kubenswrapper[4772]: I1122 11:25:02.449747 4772 scope.go:117] "RemoveContainer" containerID="0bc0f3ff7ead827f9165f4665458e8a286ac7a9b3bae4573b079c545ed50c71e" Nov 22 11:25:55 crc kubenswrapper[4772]: I1122 11:25:55.125477 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gpgcx"] Nov 22 11:25:55 crc kubenswrapper[4772]: E1122 11:25:55.126436 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70dae412-1adc-41d9-b448-dc3f78a6b383" containerName="extract-utilities" Nov 22 11:25:55 crc kubenswrapper[4772]: I1122 11:25:55.126456 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="70dae412-1adc-41d9-b448-dc3f78a6b383" containerName="extract-utilities" Nov 22 11:25:55 crc kubenswrapper[4772]: E1122 11:25:55.126470 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70dae412-1adc-41d9-b448-dc3f78a6b383" containerName="extract-content" Nov 22 11:25:55 crc kubenswrapper[4772]: I1122 11:25:55.126479 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="70dae412-1adc-41d9-b448-dc3f78a6b383" containerName="extract-content" Nov 22 11:25:55 crc kubenswrapper[4772]: E1122 11:25:55.126503 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70dae412-1adc-41d9-b448-dc3f78a6b383" containerName="registry-server" Nov 22 11:25:55 crc kubenswrapper[4772]: I1122 11:25:55.126512 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="70dae412-1adc-41d9-b448-dc3f78a6b383" containerName="registry-server" Nov 22 11:25:55 crc kubenswrapper[4772]: I1122 11:25:55.126682 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="70dae412-1adc-41d9-b448-dc3f78a6b383" containerName="registry-server" Nov 22 11:25:55 crc kubenswrapper[4772]: I1122 11:25:55.127971 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gpgcx" Nov 22 11:25:55 crc kubenswrapper[4772]: I1122 11:25:55.139618 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gpgcx"] Nov 22 11:25:55 crc kubenswrapper[4772]: I1122 11:25:55.285416 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39961b6a-a8e8-41f4-b144-0c10ced9af89-utilities\") pod \"certified-operators-gpgcx\" (UID: \"39961b6a-a8e8-41f4-b144-0c10ced9af89\") " pod="openshift-marketplace/certified-operators-gpgcx" Nov 22 11:25:55 crc kubenswrapper[4772]: I1122 11:25:55.285470 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39961b6a-a8e8-41f4-b144-0c10ced9af89-catalog-content\") pod \"certified-operators-gpgcx\" (UID: \"39961b6a-a8e8-41f4-b144-0c10ced9af89\") " pod="openshift-marketplace/certified-operators-gpgcx" Nov 22 11:25:55 crc kubenswrapper[4772]: I1122 11:25:55.285929 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dbzq\" (UniqueName: \"kubernetes.io/projected/39961b6a-a8e8-41f4-b144-0c10ced9af89-kube-api-access-2dbzq\") pod \"certified-operators-gpgcx\" (UID: \"39961b6a-a8e8-41f4-b144-0c10ced9af89\") " pod="openshift-marketplace/certified-operators-gpgcx" Nov 22 11:25:55 crc kubenswrapper[4772]: I1122 11:25:55.387381 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dbzq\" (UniqueName: \"kubernetes.io/projected/39961b6a-a8e8-41f4-b144-0c10ced9af89-kube-api-access-2dbzq\") pod \"certified-operators-gpgcx\" (UID: \"39961b6a-a8e8-41f4-b144-0c10ced9af89\") " pod="openshift-marketplace/certified-operators-gpgcx" Nov 22 11:25:55 crc kubenswrapper[4772]: I1122 11:25:55.387440 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39961b6a-a8e8-41f4-b144-0c10ced9af89-utilities\") pod \"certified-operators-gpgcx\" (UID: \"39961b6a-a8e8-41f4-b144-0c10ced9af89\") " pod="openshift-marketplace/certified-operators-gpgcx" Nov 22 11:25:55 crc kubenswrapper[4772]: I1122 11:25:55.387467 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39961b6a-a8e8-41f4-b144-0c10ced9af89-catalog-content\") pod \"certified-operators-gpgcx\" (UID: \"39961b6a-a8e8-41f4-b144-0c10ced9af89\") " pod="openshift-marketplace/certified-operators-gpgcx" Nov 22 11:25:55 crc kubenswrapper[4772]: I1122 11:25:55.387940 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39961b6a-a8e8-41f4-b144-0c10ced9af89-catalog-content\") pod \"certified-operators-gpgcx\" (UID: \"39961b6a-a8e8-41f4-b144-0c10ced9af89\") " pod="openshift-marketplace/certified-operators-gpgcx" Nov 22 11:25:55 crc kubenswrapper[4772]: I1122 11:25:55.388066 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39961b6a-a8e8-41f4-b144-0c10ced9af89-utilities\") pod \"certified-operators-gpgcx\" (UID: \"39961b6a-a8e8-41f4-b144-0c10ced9af89\") " pod="openshift-marketplace/certified-operators-gpgcx" Nov 22 11:25:55 crc kubenswrapper[4772]: I1122 11:25:55.406921 4772 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-2dbzq\" (UniqueName: \"kubernetes.io/projected/39961b6a-a8e8-41f4-b144-0c10ced9af89-kube-api-access-2dbzq\") pod \"certified-operators-gpgcx\" (UID: \"39961b6a-a8e8-41f4-b144-0c10ced9af89\") " pod="openshift-marketplace/certified-operators-gpgcx" Nov 22 11:25:55 crc kubenswrapper[4772]: I1122 11:25:55.490027 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gpgcx" Nov 22 11:25:55 crc kubenswrapper[4772]: I1122 11:25:55.943934 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gpgcx"] Nov 22 11:25:56 crc kubenswrapper[4772]: I1122 11:25:56.863062 4772 generic.go:334] "Generic (PLEG): container finished" podID="39961b6a-a8e8-41f4-b144-0c10ced9af89" containerID="e1c980e9313890bc98ffcf02adee67227dd5869c7e48897a9fb53a9cb03d6c51" exitCode=0 Nov 22 11:25:56 crc kubenswrapper[4772]: I1122 11:25:56.863106 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gpgcx" event={"ID":"39961b6a-a8e8-41f4-b144-0c10ced9af89","Type":"ContainerDied","Data":"e1c980e9313890bc98ffcf02adee67227dd5869c7e48897a9fb53a9cb03d6c51"} Nov 22 11:25:56 crc kubenswrapper[4772]: I1122 11:25:56.863133 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gpgcx" event={"ID":"39961b6a-a8e8-41f4-b144-0c10ced9af89","Type":"ContainerStarted","Data":"18ab06394aaf40974839d876bf53e42d9dcac4f9ea76ffd5b1755aabc469aa9e"} Nov 22 11:25:57 crc kubenswrapper[4772]: I1122 11:25:57.531965 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-klqb8"] Nov 22 11:25:57 crc kubenswrapper[4772]: I1122 11:25:57.536946 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-klqb8"] Nov 22 11:25:57 crc kubenswrapper[4772]: I1122 11:25:57.537121 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-klqb8" Nov 22 11:25:57 crc kubenswrapper[4772]: I1122 11:25:57.720072 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8579d4ee-ef62-4b1d-ae92-9072ac803b63-utilities\") pod \"redhat-marketplace-klqb8\" (UID: \"8579d4ee-ef62-4b1d-ae92-9072ac803b63\") " pod="openshift-marketplace/redhat-marketplace-klqb8" Nov 22 11:25:57 crc kubenswrapper[4772]: I1122 11:25:57.720133 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8579d4ee-ef62-4b1d-ae92-9072ac803b63-catalog-content\") pod \"redhat-marketplace-klqb8\" (UID: \"8579d4ee-ef62-4b1d-ae92-9072ac803b63\") " pod="openshift-marketplace/redhat-marketplace-klqb8" Nov 22 11:25:57 crc kubenswrapper[4772]: I1122 11:25:57.720176 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtt2x\" (UniqueName: \"kubernetes.io/projected/8579d4ee-ef62-4b1d-ae92-9072ac803b63-kube-api-access-wtt2x\") pod \"redhat-marketplace-klqb8\" (UID: \"8579d4ee-ef62-4b1d-ae92-9072ac803b63\") " pod="openshift-marketplace/redhat-marketplace-klqb8" Nov 22 11:25:57 crc kubenswrapper[4772]: I1122 11:25:57.825347 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtt2x\" (UniqueName: \"kubernetes.io/projected/8579d4ee-ef62-4b1d-ae92-9072ac803b63-kube-api-access-wtt2x\") pod \"redhat-marketplace-klqb8\" (UID: \"8579d4ee-ef62-4b1d-ae92-9072ac803b63\") " pod="openshift-marketplace/redhat-marketplace-klqb8" Nov 22 11:25:57 crc kubenswrapper[4772]: I1122 11:25:57.825565 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8579d4ee-ef62-4b1d-ae92-9072ac803b63-utilities\") pod \"redhat-marketplace-klqb8\" (UID: \"8579d4ee-ef62-4b1d-ae92-9072ac803b63\") " pod="openshift-marketplace/redhat-marketplace-klqb8" Nov 22 11:25:57 crc kubenswrapper[4772]: I1122 11:25:57.825605 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8579d4ee-ef62-4b1d-ae92-9072ac803b63-catalog-content\") pod \"redhat-marketplace-klqb8\" (UID: \"8579d4ee-ef62-4b1d-ae92-9072ac803b63\") " pod="openshift-marketplace/redhat-marketplace-klqb8" Nov 22 11:25:57 crc kubenswrapper[4772]: I1122 11:25:57.826491 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8579d4ee-ef62-4b1d-ae92-9072ac803b63-utilities\") pod \"redhat-marketplace-klqb8\" (UID: \"8579d4ee-ef62-4b1d-ae92-9072ac803b63\") " pod="openshift-marketplace/redhat-marketplace-klqb8" Nov 22 11:25:57 crc kubenswrapper[4772]: I1122 11:25:57.827457 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8579d4ee-ef62-4b1d-ae92-9072ac803b63-catalog-content\") pod \"redhat-marketplace-klqb8\" (UID: \"8579d4ee-ef62-4b1d-ae92-9072ac803b63\") " pod="openshift-marketplace/redhat-marketplace-klqb8" Nov 22 11:25:57 crc kubenswrapper[4772]: I1122 11:25:57.852898 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtt2x\" (UniqueName: \"kubernetes.io/projected/8579d4ee-ef62-4b1d-ae92-9072ac803b63-kube-api-access-wtt2x\") pod 
\"redhat-marketplace-klqb8\" (UID: \"8579d4ee-ef62-4b1d-ae92-9072ac803b63\") " pod="openshift-marketplace/redhat-marketplace-klqb8" Nov 22 11:25:57 crc kubenswrapper[4772]: I1122 11:25:57.893843 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-klqb8" Nov 22 11:25:58 crc kubenswrapper[4772]: I1122 11:25:58.098139 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-klqb8"] Nov 22 11:25:58 crc kubenswrapper[4772]: W1122 11:25:58.105818 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8579d4ee_ef62_4b1d_ae92_9072ac803b63.slice/crio-68e7eeb3b3e3427aee0e17f796f1631cfa1c997ee16afb3ca078cce48ccc4a9a WatchSource:0}: Error finding container 68e7eeb3b3e3427aee0e17f796f1631cfa1c997ee16afb3ca078cce48ccc4a9a: Status 404 returned error can't find the container with id 68e7eeb3b3e3427aee0e17f796f1631cfa1c997ee16afb3ca078cce48ccc4a9a Nov 22 11:25:58 crc kubenswrapper[4772]: I1122 11:25:58.881457 4772 generic.go:334] "Generic (PLEG): container finished" podID="39961b6a-a8e8-41f4-b144-0c10ced9af89" containerID="3e7641b4db6ecc2063332a1ea335960f07b4e2a2b22cda4430cb5823e99e6c78" exitCode=0 Nov 22 11:25:58 crc kubenswrapper[4772]: I1122 11:25:58.881500 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gpgcx" event={"ID":"39961b6a-a8e8-41f4-b144-0c10ced9af89","Type":"ContainerDied","Data":"3e7641b4db6ecc2063332a1ea335960f07b4e2a2b22cda4430cb5823e99e6c78"} Nov 22 11:25:58 crc kubenswrapper[4772]: I1122 11:25:58.883932 4772 generic.go:334] "Generic (PLEG): container finished" podID="8579d4ee-ef62-4b1d-ae92-9072ac803b63" containerID="2af1034c787faf70952cc103cc777dd1d421241a738b5f5a661237fc02012d13" exitCode=0 Nov 22 11:25:58 crc kubenswrapper[4772]: I1122 11:25:58.883966 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-klqb8" event={"ID":"8579d4ee-ef62-4b1d-ae92-9072ac803b63","Type":"ContainerDied","Data":"2af1034c787faf70952cc103cc777dd1d421241a738b5f5a661237fc02012d13"} Nov 22 11:25:58 crc kubenswrapper[4772]: I1122 11:25:58.883991 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-klqb8" event={"ID":"8579d4ee-ef62-4b1d-ae92-9072ac803b63","Type":"ContainerStarted","Data":"68e7eeb3b3e3427aee0e17f796f1631cfa1c997ee16afb3ca078cce48ccc4a9a"} Nov 22 11:25:59 crc kubenswrapper[4772]: I1122 11:25:59.892737 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gpgcx" event={"ID":"39961b6a-a8e8-41f4-b144-0c10ced9af89","Type":"ContainerStarted","Data":"3c35e697797bd1df17486d5be94d52801860e78141b96f630cec0390dc28b926"} Nov 22 11:25:59 crc kubenswrapper[4772]: I1122 11:25:59.894784 4772 generic.go:334] "Generic (PLEG): container finished" podID="8579d4ee-ef62-4b1d-ae92-9072ac803b63" containerID="851a97fa0936f17908fbdc316906c42a1252e70bf83ed222ce68ee7ce128f186" exitCode=0 Nov 22 11:25:59 crc kubenswrapper[4772]: I1122 11:25:59.894825 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-klqb8" event={"ID":"8579d4ee-ef62-4b1d-ae92-9072ac803b63","Type":"ContainerDied","Data":"851a97fa0936f17908fbdc316906c42a1252e70bf83ed222ce68ee7ce128f186"} Nov 22 11:25:59 crc kubenswrapper[4772]: I1122 11:25:59.912824 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-gpgcx" podStartSLOduration=2.484449958 podStartE2EDuration="4.912803074s" podCreationTimestamp="2025-11-22 11:25:55 +0000 UTC" firstStartedPulling="2025-11-22 11:25:56.864457272 +0000 UTC m=+2877.103901766" lastFinishedPulling="2025-11-22 11:25:59.292810388 +0000 UTC m=+2879.532254882" observedRunningTime="2025-11-22 11:25:59.908975508 +0000 UTC m=+2880.148420002" watchObservedRunningTime="2025-11-22 11:25:59.912803074 +0000 UTC m=+2880.152247558" Nov 22 11:26:00 crc kubenswrapper[4772]: I1122 11:26:00.904391 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-klqb8" event={"ID":"8579d4ee-ef62-4b1d-ae92-9072ac803b63","Type":"ContainerStarted","Data":"4bc2d54992b2082dbf1a1793bd347f946cc828e192d9b588e968236fe6c1eeb4"} Nov 22 11:26:00 crc kubenswrapper[4772]: I1122 11:26:00.924793 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-klqb8" podStartSLOduration=2.468277263 podStartE2EDuration="3.924774651s" podCreationTimestamp="2025-11-22 11:25:57 +0000 UTC" firstStartedPulling="2025-11-22 11:25:58.885217015 +0000 UTC m=+2879.124661519" lastFinishedPulling="2025-11-22 11:26:00.341714413 +0000 UTC m=+2880.581158907" observedRunningTime="2025-11-22 11:26:00.918783941 +0000 UTC m=+2881.158228445" watchObservedRunningTime="2025-11-22 11:26:00.924774651 +0000 UTC m=+2881.164219145" Nov 22 11:26:05 crc kubenswrapper[4772]: I1122 11:26:05.490812 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gpgcx" Nov 22 11:26:05 crc kubenswrapper[4772]: I1122 11:26:05.491249 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gpgcx" Nov 22 11:26:05 crc kubenswrapper[4772]: I1122 11:26:05.542746 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gpgcx" Nov 22 11:26:05 crc kubenswrapper[4772]: I1122 11:26:05.984087 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gpgcx" Nov 22 11:26:06 crc kubenswrapper[4772]: I1122 11:26:06.020796 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gpgcx"] Nov 22 11:26:07 crc kubenswrapper[4772]: I1122 11:26:07.894850 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-klqb8" Nov 22 11:26:07 crc kubenswrapper[4772]: I1122 11:26:07.895215 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-klqb8" Nov 22 11:26:07 crc kubenswrapper[4772]: I1122 11:26:07.944207 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-klqb8" Nov 22 11:26:07 crc kubenswrapper[4772]: I1122 11:26:07.954505 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gpgcx" podUID="39961b6a-a8e8-41f4-b144-0c10ced9af89" containerName="registry-server" containerID="cri-o://3c35e697797bd1df17486d5be94d52801860e78141b96f630cec0390dc28b926" gracePeriod=2 Nov 22 11:26:07 crc kubenswrapper[4772]: I1122 11:26:07.998166 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-klqb8" Nov 22 11:26:08 crc 
kubenswrapper[4772]: I1122 11:26:08.333583 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gpgcx" Nov 22 11:26:08 crc kubenswrapper[4772]: I1122 11:26:08.478801 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dbzq\" (UniqueName: \"kubernetes.io/projected/39961b6a-a8e8-41f4-b144-0c10ced9af89-kube-api-access-2dbzq\") pod \"39961b6a-a8e8-41f4-b144-0c10ced9af89\" (UID: \"39961b6a-a8e8-41f4-b144-0c10ced9af89\") " Nov 22 11:26:08 crc kubenswrapper[4772]: I1122 11:26:08.478945 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39961b6a-a8e8-41f4-b144-0c10ced9af89-catalog-content\") pod \"39961b6a-a8e8-41f4-b144-0c10ced9af89\" (UID: \"39961b6a-a8e8-41f4-b144-0c10ced9af89\") " Nov 22 11:26:08 crc kubenswrapper[4772]: I1122 11:26:08.479021 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39961b6a-a8e8-41f4-b144-0c10ced9af89-utilities\") pod \"39961b6a-a8e8-41f4-b144-0c10ced9af89\" (UID: \"39961b6a-a8e8-41f4-b144-0c10ced9af89\") " Nov 22 11:26:08 crc kubenswrapper[4772]: I1122 11:26:08.479814 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39961b6a-a8e8-41f4-b144-0c10ced9af89-utilities" (OuterVolumeSpecName: "utilities") pod "39961b6a-a8e8-41f4-b144-0c10ced9af89" (UID: "39961b6a-a8e8-41f4-b144-0c10ced9af89"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:26:08 crc kubenswrapper[4772]: I1122 11:26:08.483732 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39961b6a-a8e8-41f4-b144-0c10ced9af89-kube-api-access-2dbzq" (OuterVolumeSpecName: "kube-api-access-2dbzq") pod "39961b6a-a8e8-41f4-b144-0c10ced9af89" (UID: "39961b6a-a8e8-41f4-b144-0c10ced9af89"). InnerVolumeSpecName "kube-api-access-2dbzq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:26:08 crc kubenswrapper[4772]: I1122 11:26:08.529323 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39961b6a-a8e8-41f4-b144-0c10ced9af89-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "39961b6a-a8e8-41f4-b144-0c10ced9af89" (UID: "39961b6a-a8e8-41f4-b144-0c10ced9af89"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:26:08 crc kubenswrapper[4772]: I1122 11:26:08.580803 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39961b6a-a8e8-41f4-b144-0c10ced9af89-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 11:26:08 crc kubenswrapper[4772]: I1122 11:26:08.580839 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dbzq\" (UniqueName: \"kubernetes.io/projected/39961b6a-a8e8-41f4-b144-0c10ced9af89-kube-api-access-2dbzq\") on node \"crc\" DevicePath \"\"" Nov 22 11:26:08 crc kubenswrapper[4772]: I1122 11:26:08.580849 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39961b6a-a8e8-41f4-b144-0c10ced9af89-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 11:26:08 crc kubenswrapper[4772]: I1122 11:26:08.963098 4772 generic.go:334] "Generic (PLEG): container finished" podID="39961b6a-a8e8-41f4-b144-0c10ced9af89" containerID="3c35e697797bd1df17486d5be94d52801860e78141b96f630cec0390dc28b926" exitCode=0 Nov 22 11:26:08 crc kubenswrapper[4772]: I1122 11:26:08.963149 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gpgcx" Nov 22 11:26:08 crc kubenswrapper[4772]: I1122 11:26:08.963166 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gpgcx" event={"ID":"39961b6a-a8e8-41f4-b144-0c10ced9af89","Type":"ContainerDied","Data":"3c35e697797bd1df17486d5be94d52801860e78141b96f630cec0390dc28b926"} Nov 22 11:26:08 crc kubenswrapper[4772]: I1122 11:26:08.963659 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gpgcx" event={"ID":"39961b6a-a8e8-41f4-b144-0c10ced9af89","Type":"ContainerDied","Data":"18ab06394aaf40974839d876bf53e42d9dcac4f9ea76ffd5b1755aabc469aa9e"} Nov 22 11:26:08 crc kubenswrapper[4772]: I1122 11:26:08.963707 4772 scope.go:117] "RemoveContainer" containerID="3c35e697797bd1df17486d5be94d52801860e78141b96f630cec0390dc28b926" Nov 22 11:26:08 crc kubenswrapper[4772]: I1122 11:26:08.990387 4772 scope.go:117] "RemoveContainer" containerID="3e7641b4db6ecc2063332a1ea335960f07b4e2a2b22cda4430cb5823e99e6c78" Nov 22 11:26:08 crc kubenswrapper[4772]: I1122 11:26:08.993682 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gpgcx"] Nov 22 11:26:08 crc kubenswrapper[4772]: I1122 11:26:08.998594 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gpgcx"] Nov 22 11:26:09 crc kubenswrapper[4772]: I1122 11:26:09.032277 4772 scope.go:117] "RemoveContainer" containerID="e1c980e9313890bc98ffcf02adee67227dd5869c7e48897a9fb53a9cb03d6c51" Nov 22 11:26:09 crc kubenswrapper[4772]: I1122 11:26:09.050157 4772 scope.go:117] "RemoveContainer" containerID="3c35e697797bd1df17486d5be94d52801860e78141b96f630cec0390dc28b926" Nov 22 11:26:09 crc kubenswrapper[4772]: E1122 11:26:09.050620 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c35e697797bd1df17486d5be94d52801860e78141b96f630cec0390dc28b926\": container with ID starting with 3c35e697797bd1df17486d5be94d52801860e78141b96f630cec0390dc28b926 not found: ID does not exist" containerID="3c35e697797bd1df17486d5be94d52801860e78141b96f630cec0390dc28b926" Nov 22 11:26:09 crc kubenswrapper[4772]: I1122 11:26:09.050661 
4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c35e697797bd1df17486d5be94d52801860e78141b96f630cec0390dc28b926"} err="failed to get container status \"3c35e697797bd1df17486d5be94d52801860e78141b96f630cec0390dc28b926\": rpc error: code = NotFound desc = could not find container \"3c35e697797bd1df17486d5be94d52801860e78141b96f630cec0390dc28b926\": container with ID starting with 3c35e697797bd1df17486d5be94d52801860e78141b96f630cec0390dc28b926 not found: ID does not exist" Nov 22 11:26:09 crc kubenswrapper[4772]: I1122 11:26:09.050690 4772 scope.go:117] "RemoveContainer" containerID="3e7641b4db6ecc2063332a1ea335960f07b4e2a2b22cda4430cb5823e99e6c78" Nov 22 11:26:09 crc kubenswrapper[4772]: E1122 11:26:09.051015 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e7641b4db6ecc2063332a1ea335960f07b4e2a2b22cda4430cb5823e99e6c78\": container with ID starting with 3e7641b4db6ecc2063332a1ea335960f07b4e2a2b22cda4430cb5823e99e6c78 not found: ID does not exist" containerID="3e7641b4db6ecc2063332a1ea335960f07b4e2a2b22cda4430cb5823e99e6c78" Nov 22 11:26:09 crc kubenswrapper[4772]: I1122 11:26:09.051068 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e7641b4db6ecc2063332a1ea335960f07b4e2a2b22cda4430cb5823e99e6c78"} err="failed to get container status \"3e7641b4db6ecc2063332a1ea335960f07b4e2a2b22cda4430cb5823e99e6c78\": rpc error: code = NotFound desc = could not find container \"3e7641b4db6ecc2063332a1ea335960f07b4e2a2b22cda4430cb5823e99e6c78\": container with ID starting with 3e7641b4db6ecc2063332a1ea335960f07b4e2a2b22cda4430cb5823e99e6c78 not found: ID does not exist" Nov 22 11:26:09 crc kubenswrapper[4772]: I1122 11:26:09.051089 4772 scope.go:117] "RemoveContainer" containerID="e1c980e9313890bc98ffcf02adee67227dd5869c7e48897a9fb53a9cb03d6c51" Nov 22 11:26:09 crc kubenswrapper[4772]: E1122 11:26:09.051602 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1c980e9313890bc98ffcf02adee67227dd5869c7e48897a9fb53a9cb03d6c51\": container with ID starting with e1c980e9313890bc98ffcf02adee67227dd5869c7e48897a9fb53a9cb03d6c51 not found: ID does not exist" containerID="e1c980e9313890bc98ffcf02adee67227dd5869c7e48897a9fb53a9cb03d6c51" Nov 22 11:26:09 crc kubenswrapper[4772]: I1122 11:26:09.051632 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1c980e9313890bc98ffcf02adee67227dd5869c7e48897a9fb53a9cb03d6c51"} err="failed to get container status \"e1c980e9313890bc98ffcf02adee67227dd5869c7e48897a9fb53a9cb03d6c51\": rpc error: code = NotFound desc = could not find container \"e1c980e9313890bc98ffcf02adee67227dd5869c7e48897a9fb53a9cb03d6c51\": container with ID starting with e1c980e9313890bc98ffcf02adee67227dd5869c7e48897a9fb53a9cb03d6c51 not found: ID does not exist" Nov 22 11:26:09 crc kubenswrapper[4772]: I1122 11:26:09.425372 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39961b6a-a8e8-41f4-b144-0c10ced9af89" path="/var/lib/kubelet/pods/39961b6a-a8e8-41f4-b144-0c10ced9af89/volumes" Nov 22 11:26:09 crc kubenswrapper[4772]: I1122 11:26:09.980084 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-klqb8"] Nov 22 11:26:09 crc kubenswrapper[4772]: I1122 11:26:09.981147 4772 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openshift-marketplace/redhat-marketplace-klqb8" podUID="8579d4ee-ef62-4b1d-ae92-9072ac803b63" containerName="registry-server" containerID="cri-o://4bc2d54992b2082dbf1a1793bd347f946cc828e192d9b588e968236fe6c1eeb4" gracePeriod=2 Nov 22 11:26:10 crc kubenswrapper[4772]: I1122 11:26:10.336681 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-klqb8" Nov 22 11:26:10 crc kubenswrapper[4772]: I1122 11:26:10.427725 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8579d4ee-ef62-4b1d-ae92-9072ac803b63-catalog-content\") pod \"8579d4ee-ef62-4b1d-ae92-9072ac803b63\" (UID: \"8579d4ee-ef62-4b1d-ae92-9072ac803b63\") " Nov 22 11:26:10 crc kubenswrapper[4772]: I1122 11:26:10.427827 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8579d4ee-ef62-4b1d-ae92-9072ac803b63-utilities\") pod \"8579d4ee-ef62-4b1d-ae92-9072ac803b63\" (UID: \"8579d4ee-ef62-4b1d-ae92-9072ac803b63\") " Nov 22 11:26:10 crc kubenswrapper[4772]: I1122 11:26:10.427864 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtt2x\" (UniqueName: \"kubernetes.io/projected/8579d4ee-ef62-4b1d-ae92-9072ac803b63-kube-api-access-wtt2x\") pod \"8579d4ee-ef62-4b1d-ae92-9072ac803b63\" (UID: \"8579d4ee-ef62-4b1d-ae92-9072ac803b63\") " Nov 22 11:26:10 crc kubenswrapper[4772]: I1122 11:26:10.428843 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8579d4ee-ef62-4b1d-ae92-9072ac803b63-utilities" (OuterVolumeSpecName: "utilities") pod "8579d4ee-ef62-4b1d-ae92-9072ac803b63" (UID: "8579d4ee-ef62-4b1d-ae92-9072ac803b63"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:26:10 crc kubenswrapper[4772]: I1122 11:26:10.433738 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8579d4ee-ef62-4b1d-ae92-9072ac803b63-kube-api-access-wtt2x" (OuterVolumeSpecName: "kube-api-access-wtt2x") pod "8579d4ee-ef62-4b1d-ae92-9072ac803b63" (UID: "8579d4ee-ef62-4b1d-ae92-9072ac803b63"). InnerVolumeSpecName "kube-api-access-wtt2x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:26:10 crc kubenswrapper[4772]: I1122 11:26:10.448082 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8579d4ee-ef62-4b1d-ae92-9072ac803b63-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8579d4ee-ef62-4b1d-ae92-9072ac803b63" (UID: "8579d4ee-ef62-4b1d-ae92-9072ac803b63"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:26:10 crc kubenswrapper[4772]: I1122 11:26:10.529521 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8579d4ee-ef62-4b1d-ae92-9072ac803b63-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 11:26:10 crc kubenswrapper[4772]: I1122 11:26:10.529555 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8579d4ee-ef62-4b1d-ae92-9072ac803b63-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 11:26:10 crc kubenswrapper[4772]: I1122 11:26:10.529565 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtt2x\" (UniqueName: \"kubernetes.io/projected/8579d4ee-ef62-4b1d-ae92-9072ac803b63-kube-api-access-wtt2x\") on node \"crc\" DevicePath \"\"" Nov 22 11:26:10 crc kubenswrapper[4772]: I1122 11:26:10.988698 4772 generic.go:334] "Generic (PLEG): container finished" podID="8579d4ee-ef62-4b1d-ae92-9072ac803b63" containerID="4bc2d54992b2082dbf1a1793bd347f946cc828e192d9b588e968236fe6c1eeb4" exitCode=0 Nov 22 11:26:10 crc kubenswrapper[4772]: I1122 11:26:10.988748 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-klqb8" event={"ID":"8579d4ee-ef62-4b1d-ae92-9072ac803b63","Type":"ContainerDied","Data":"4bc2d54992b2082dbf1a1793bd347f946cc828e192d9b588e968236fe6c1eeb4"} Nov 22 11:26:10 crc kubenswrapper[4772]: I1122 11:26:10.988780 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-klqb8" event={"ID":"8579d4ee-ef62-4b1d-ae92-9072ac803b63","Type":"ContainerDied","Data":"68e7eeb3b3e3427aee0e17f796f1631cfa1c997ee16afb3ca078cce48ccc4a9a"} Nov 22 11:26:10 crc kubenswrapper[4772]: I1122 11:26:10.988804 4772 scope.go:117] "RemoveContainer" containerID="4bc2d54992b2082dbf1a1793bd347f946cc828e192d9b588e968236fe6c1eeb4" Nov 22 11:26:10 crc kubenswrapper[4772]: I1122 11:26:10.988914 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-klqb8" Nov 22 11:26:11 crc kubenswrapper[4772]: I1122 11:26:11.012844 4772 scope.go:117] "RemoveContainer" containerID="851a97fa0936f17908fbdc316906c42a1252e70bf83ed222ce68ee7ce128f186" Nov 22 11:26:11 crc kubenswrapper[4772]: I1122 11:26:11.046692 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-klqb8"] Nov 22 11:26:11 crc kubenswrapper[4772]: I1122 11:26:11.049001 4772 scope.go:117] "RemoveContainer" containerID="2af1034c787faf70952cc103cc777dd1d421241a738b5f5a661237fc02012d13" Nov 22 11:26:11 crc kubenswrapper[4772]: I1122 11:26:11.054522 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-klqb8"] Nov 22 11:26:11 crc kubenswrapper[4772]: I1122 11:26:11.082010 4772 scope.go:117] "RemoveContainer" containerID="4bc2d54992b2082dbf1a1793bd347f946cc828e192d9b588e968236fe6c1eeb4" Nov 22 11:26:11 crc kubenswrapper[4772]: E1122 11:26:11.082594 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bc2d54992b2082dbf1a1793bd347f946cc828e192d9b588e968236fe6c1eeb4\": container with ID starting with 4bc2d54992b2082dbf1a1793bd347f946cc828e192d9b588e968236fe6c1eeb4 not found: ID does not exist" containerID="4bc2d54992b2082dbf1a1793bd347f946cc828e192d9b588e968236fe6c1eeb4" Nov 22 11:26:11 crc kubenswrapper[4772]: I1122 11:26:11.082651 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bc2d54992b2082dbf1a1793bd347f946cc828e192d9b588e968236fe6c1eeb4"} err="failed to get container status \"4bc2d54992b2082dbf1a1793bd347f946cc828e192d9b588e968236fe6c1eeb4\": rpc error: code = NotFound desc = could not find container \"4bc2d54992b2082dbf1a1793bd347f946cc828e192d9b588e968236fe6c1eeb4\": container with ID starting with 4bc2d54992b2082dbf1a1793bd347f946cc828e192d9b588e968236fe6c1eeb4 not found: ID does not exist" Nov 22 11:26:11 crc kubenswrapper[4772]: I1122 11:26:11.082692 4772 scope.go:117] "RemoveContainer" containerID="851a97fa0936f17908fbdc316906c42a1252e70bf83ed222ce68ee7ce128f186" Nov 22 11:26:11 crc kubenswrapper[4772]: E1122 11:26:11.083164 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"851a97fa0936f17908fbdc316906c42a1252e70bf83ed222ce68ee7ce128f186\": container with ID starting with 851a97fa0936f17908fbdc316906c42a1252e70bf83ed222ce68ee7ce128f186 not found: ID does not exist" containerID="851a97fa0936f17908fbdc316906c42a1252e70bf83ed222ce68ee7ce128f186" Nov 22 11:26:11 crc kubenswrapper[4772]: I1122 11:26:11.083191 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"851a97fa0936f17908fbdc316906c42a1252e70bf83ed222ce68ee7ce128f186"} err="failed to get container status \"851a97fa0936f17908fbdc316906c42a1252e70bf83ed222ce68ee7ce128f186\": rpc error: code = NotFound desc = could not find container \"851a97fa0936f17908fbdc316906c42a1252e70bf83ed222ce68ee7ce128f186\": container with ID starting with 851a97fa0936f17908fbdc316906c42a1252e70bf83ed222ce68ee7ce128f186 not found: ID does not exist" Nov 22 11:26:11 crc kubenswrapper[4772]: I1122 11:26:11.083209 4772 scope.go:117] "RemoveContainer" containerID="2af1034c787faf70952cc103cc777dd1d421241a738b5f5a661237fc02012d13" Nov 22 11:26:11 crc kubenswrapper[4772]: E1122 11:26:11.083589 4772 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"2af1034c787faf70952cc103cc777dd1d421241a738b5f5a661237fc02012d13\": container with ID starting with 2af1034c787faf70952cc103cc777dd1d421241a738b5f5a661237fc02012d13 not found: ID does not exist" containerID="2af1034c787faf70952cc103cc777dd1d421241a738b5f5a661237fc02012d13" Nov 22 11:26:11 crc kubenswrapper[4772]: I1122 11:26:11.083636 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2af1034c787faf70952cc103cc777dd1d421241a738b5f5a661237fc02012d13"} err="failed to get container status \"2af1034c787faf70952cc103cc777dd1d421241a738b5f5a661237fc02012d13\": rpc error: code = NotFound desc = could not find container \"2af1034c787faf70952cc103cc777dd1d421241a738b5f5a661237fc02012d13\": container with ID starting with 2af1034c787faf70952cc103cc777dd1d421241a738b5f5a661237fc02012d13 not found: ID does not exist" Nov 22 11:26:11 crc kubenswrapper[4772]: I1122 11:26:11.424355 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8579d4ee-ef62-4b1d-ae92-9072ac803b63" path="/var/lib/kubelet/pods/8579d4ee-ef62-4b1d-ae92-9072ac803b63/volumes" Nov 22 11:27:01 crc kubenswrapper[4772]: I1122 11:27:01.533377 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 11:27:01 crc kubenswrapper[4772]: I1122 11:27:01.534088 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 11:27:31 crc kubenswrapper[4772]: I1122 11:27:31.533502 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 11:27:31 crc kubenswrapper[4772]: I1122 11:27:31.534248 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 11:28:01 crc kubenswrapper[4772]: I1122 11:28:01.533240 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 11:28:01 crc kubenswrapper[4772]: I1122 11:28:01.535198 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 11:28:01 crc kubenswrapper[4772]: I1122 11:28:01.535316 4772 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" Nov 22 11:28:01 crc kubenswrapper[4772]: I1122 11:28:01.536440 4772 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"94c5b79944aa5d164e7f0bc3faf4242a108ea17bc3cdb7a2b82a3165e34756a4"} pod="openshift-machine-config-operator/machine-config-daemon-wwshd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 11:28:01 crc kubenswrapper[4772]: I1122 11:28:01.536537 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" containerID="cri-o://94c5b79944aa5d164e7f0bc3faf4242a108ea17bc3cdb7a2b82a3165e34756a4" gracePeriod=600 Nov 22 11:28:01 crc kubenswrapper[4772]: E1122 11:28:01.662956 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:28:01 crc kubenswrapper[4772]: I1122 11:28:01.910431 4772 generic.go:334] "Generic (PLEG): container finished" podID="2386c238-461f-4956-940f-ac3c26eb052e" containerID="94c5b79944aa5d164e7f0bc3faf4242a108ea17bc3cdb7a2b82a3165e34756a4" exitCode=0 Nov 22 11:28:01 crc kubenswrapper[4772]: I1122 11:28:01.910531 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" event={"ID":"2386c238-461f-4956-940f-ac3c26eb052e","Type":"ContainerDied","Data":"94c5b79944aa5d164e7f0bc3faf4242a108ea17bc3cdb7a2b82a3165e34756a4"} Nov 22 11:28:01 crc kubenswrapper[4772]: I1122 11:28:01.910632 4772 scope.go:117] "RemoveContainer" containerID="a8e794cde97f20f186c6f9ba6fb2f3875c615dce4e7b795bab621b0e98f21876" Nov 22 11:28:01 crc kubenswrapper[4772]: I1122 11:28:01.911472 4772 scope.go:117] "RemoveContainer" containerID="94c5b79944aa5d164e7f0bc3faf4242a108ea17bc3cdb7a2b82a3165e34756a4" Nov 22 11:28:01 crc kubenswrapper[4772]: E1122 11:28:01.911932 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:28:17 crc kubenswrapper[4772]: I1122 11:28:17.413775 4772 scope.go:117] "RemoveContainer" containerID="94c5b79944aa5d164e7f0bc3faf4242a108ea17bc3cdb7a2b82a3165e34756a4" Nov 22 11:28:17 crc kubenswrapper[4772]: E1122 11:28:17.414896 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:28:29 crc 
kubenswrapper[4772]: I1122 11:28:29.413936 4772 scope.go:117] "RemoveContainer" containerID="94c5b79944aa5d164e7f0bc3faf4242a108ea17bc3cdb7a2b82a3165e34756a4" Nov 22 11:28:29 crc kubenswrapper[4772]: E1122 11:28:29.414925 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:28:40 crc kubenswrapper[4772]: I1122 11:28:40.413882 4772 scope.go:117] "RemoveContainer" containerID="94c5b79944aa5d164e7f0bc3faf4242a108ea17bc3cdb7a2b82a3165e34756a4" Nov 22 11:28:40 crc kubenswrapper[4772]: E1122 11:28:40.415195 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:28:51 crc kubenswrapper[4772]: I1122 11:28:51.421286 4772 scope.go:117] "RemoveContainer" containerID="94c5b79944aa5d164e7f0bc3faf4242a108ea17bc3cdb7a2b82a3165e34756a4" Nov 22 11:28:51 crc kubenswrapper[4772]: E1122 11:28:51.422409 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:29:06 crc kubenswrapper[4772]: I1122 11:29:06.414589 4772 scope.go:117] "RemoveContainer" containerID="94c5b79944aa5d164e7f0bc3faf4242a108ea17bc3cdb7a2b82a3165e34756a4" Nov 22 11:29:06 crc kubenswrapper[4772]: E1122 11:29:06.416402 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:29:21 crc kubenswrapper[4772]: I1122 11:29:21.417955 4772 scope.go:117] "RemoveContainer" containerID="94c5b79944aa5d164e7f0bc3faf4242a108ea17bc3cdb7a2b82a3165e34756a4" Nov 22 11:29:21 crc kubenswrapper[4772]: E1122 11:29:21.418832 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:29:36 crc kubenswrapper[4772]: I1122 11:29:36.414999 4772 scope.go:117] "RemoveContainer" containerID="94c5b79944aa5d164e7f0bc3faf4242a108ea17bc3cdb7a2b82a3165e34756a4" Nov 22 11:29:36 crc 
kubenswrapper[4772]: E1122 11:29:36.416112 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:29:49 crc kubenswrapper[4772]: I1122 11:29:49.414254 4772 scope.go:117] "RemoveContainer" containerID="94c5b79944aa5d164e7f0bc3faf4242a108ea17bc3cdb7a2b82a3165e34756a4" Nov 22 11:29:49 crc kubenswrapper[4772]: E1122 11:29:49.415450 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:30:00 crc kubenswrapper[4772]: I1122 11:30:00.216919 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396850-gsq8n"] Nov 22 11:30:00 crc kubenswrapper[4772]: E1122 11:30:00.218263 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39961b6a-a8e8-41f4-b144-0c10ced9af89" containerName="registry-server" Nov 22 11:30:00 crc kubenswrapper[4772]: I1122 11:30:00.218284 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="39961b6a-a8e8-41f4-b144-0c10ced9af89" containerName="registry-server" Nov 22 11:30:00 crc kubenswrapper[4772]: E1122 11:30:00.218313 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39961b6a-a8e8-41f4-b144-0c10ced9af89" containerName="extract-utilities" Nov 22 11:30:00 crc kubenswrapper[4772]: I1122 11:30:00.218322 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="39961b6a-a8e8-41f4-b144-0c10ced9af89" containerName="extract-utilities" Nov 22 11:30:00 crc kubenswrapper[4772]: E1122 11:30:00.218342 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8579d4ee-ef62-4b1d-ae92-9072ac803b63" containerName="registry-server" Nov 22 11:30:00 crc kubenswrapper[4772]: I1122 11:30:00.218350 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="8579d4ee-ef62-4b1d-ae92-9072ac803b63" containerName="registry-server" Nov 22 11:30:00 crc kubenswrapper[4772]: E1122 11:30:00.218367 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8579d4ee-ef62-4b1d-ae92-9072ac803b63" containerName="extract-utilities" Nov 22 11:30:00 crc kubenswrapper[4772]: I1122 11:30:00.218375 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="8579d4ee-ef62-4b1d-ae92-9072ac803b63" containerName="extract-utilities" Nov 22 11:30:00 crc kubenswrapper[4772]: E1122 11:30:00.218390 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8579d4ee-ef62-4b1d-ae92-9072ac803b63" containerName="extract-content" Nov 22 11:30:00 crc kubenswrapper[4772]: I1122 11:30:00.218400 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="8579d4ee-ef62-4b1d-ae92-9072ac803b63" containerName="extract-content" Nov 22 11:30:00 crc kubenswrapper[4772]: E1122 11:30:00.218417 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39961b6a-a8e8-41f4-b144-0c10ced9af89" containerName="extract-content" Nov 22 11:30:00 crc 
kubenswrapper[4772]: I1122 11:30:00.218424 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="39961b6a-a8e8-41f4-b144-0c10ced9af89" containerName="extract-content" Nov 22 11:30:00 crc kubenswrapper[4772]: I1122 11:30:00.218611 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="8579d4ee-ef62-4b1d-ae92-9072ac803b63" containerName="registry-server" Nov 22 11:30:00 crc kubenswrapper[4772]: I1122 11:30:00.218635 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="39961b6a-a8e8-41f4-b144-0c10ced9af89" containerName="registry-server" Nov 22 11:30:00 crc kubenswrapper[4772]: I1122 11:30:00.219437 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396850-gsq8n" Nov 22 11:30:00 crc kubenswrapper[4772]: I1122 11:30:00.222235 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 22 11:30:00 crc kubenswrapper[4772]: I1122 11:30:00.230240 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 22 11:30:00 crc kubenswrapper[4772]: I1122 11:30:00.235343 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396850-gsq8n"] Nov 22 11:30:00 crc kubenswrapper[4772]: I1122 11:30:00.340742 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6ca8a424-737d-4d78-9817-77aa51dec7b6-secret-volume\") pod \"collect-profiles-29396850-gsq8n\" (UID: \"6ca8a424-737d-4d78-9817-77aa51dec7b6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396850-gsq8n" Nov 22 11:30:00 crc kubenswrapper[4772]: I1122 11:30:00.341383 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ca8a424-737d-4d78-9817-77aa51dec7b6-config-volume\") pod \"collect-profiles-29396850-gsq8n\" (UID: \"6ca8a424-737d-4d78-9817-77aa51dec7b6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396850-gsq8n" Nov 22 11:30:00 crc kubenswrapper[4772]: I1122 11:30:00.341473 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjsg7\" (UniqueName: \"kubernetes.io/projected/6ca8a424-737d-4d78-9817-77aa51dec7b6-kube-api-access-pjsg7\") pod \"collect-profiles-29396850-gsq8n\" (UID: \"6ca8a424-737d-4d78-9817-77aa51dec7b6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396850-gsq8n" Nov 22 11:30:00 crc kubenswrapper[4772]: I1122 11:30:00.442940 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6ca8a424-737d-4d78-9817-77aa51dec7b6-secret-volume\") pod \"collect-profiles-29396850-gsq8n\" (UID: \"6ca8a424-737d-4d78-9817-77aa51dec7b6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396850-gsq8n" Nov 22 11:30:00 crc kubenswrapper[4772]: I1122 11:30:00.443003 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ca8a424-737d-4d78-9817-77aa51dec7b6-config-volume\") pod \"collect-profiles-29396850-gsq8n\" (UID: \"6ca8a424-737d-4d78-9817-77aa51dec7b6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396850-gsq8n" 
Nov 22 11:30:00 crc kubenswrapper[4772]: I1122 11:30:00.443183 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjsg7\" (UniqueName: \"kubernetes.io/projected/6ca8a424-737d-4d78-9817-77aa51dec7b6-kube-api-access-pjsg7\") pod \"collect-profiles-29396850-gsq8n\" (UID: \"6ca8a424-737d-4d78-9817-77aa51dec7b6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396850-gsq8n" Nov 22 11:30:00 crc kubenswrapper[4772]: I1122 11:30:00.444297 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ca8a424-737d-4d78-9817-77aa51dec7b6-config-volume\") pod \"collect-profiles-29396850-gsq8n\" (UID: \"6ca8a424-737d-4d78-9817-77aa51dec7b6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396850-gsq8n" Nov 22 11:30:00 crc kubenswrapper[4772]: I1122 11:30:00.461919 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6ca8a424-737d-4d78-9817-77aa51dec7b6-secret-volume\") pod \"collect-profiles-29396850-gsq8n\" (UID: \"6ca8a424-737d-4d78-9817-77aa51dec7b6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396850-gsq8n" Nov 22 11:30:00 crc kubenswrapper[4772]: I1122 11:30:00.465287 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjsg7\" (UniqueName: \"kubernetes.io/projected/6ca8a424-737d-4d78-9817-77aa51dec7b6-kube-api-access-pjsg7\") pod \"collect-profiles-29396850-gsq8n\" (UID: \"6ca8a424-737d-4d78-9817-77aa51dec7b6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396850-gsq8n" Nov 22 11:30:00 crc kubenswrapper[4772]: I1122 11:30:00.550028 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396850-gsq8n" Nov 22 11:30:00 crc kubenswrapper[4772]: I1122 11:30:00.996165 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396850-gsq8n"] Nov 22 11:30:01 crc kubenswrapper[4772]: I1122 11:30:01.851040 4772 generic.go:334] "Generic (PLEG): container finished" podID="6ca8a424-737d-4d78-9817-77aa51dec7b6" containerID="334c847ea14d0fb5c58650d1a827520dc57d495fc14b24b4363ac7196cbc091f" exitCode=0 Nov 22 11:30:01 crc kubenswrapper[4772]: I1122 11:30:01.851170 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396850-gsq8n" event={"ID":"6ca8a424-737d-4d78-9817-77aa51dec7b6","Type":"ContainerDied","Data":"334c847ea14d0fb5c58650d1a827520dc57d495fc14b24b4363ac7196cbc091f"} Nov 22 11:30:01 crc kubenswrapper[4772]: I1122 11:30:01.851761 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396850-gsq8n" event={"ID":"6ca8a424-737d-4d78-9817-77aa51dec7b6","Type":"ContainerStarted","Data":"846e318a16822d3261e8af056a2544f78f56d329241f5bdf3f3bc731bc158ec2"} Nov 22 11:30:03 crc kubenswrapper[4772]: I1122 11:30:03.192336 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396850-gsq8n" Nov 22 11:30:03 crc kubenswrapper[4772]: I1122 11:30:03.292833 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6ca8a424-737d-4d78-9817-77aa51dec7b6-secret-volume\") pod \"6ca8a424-737d-4d78-9817-77aa51dec7b6\" (UID: \"6ca8a424-737d-4d78-9817-77aa51dec7b6\") " Nov 22 11:30:03 crc kubenswrapper[4772]: I1122 11:30:03.293176 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ca8a424-737d-4d78-9817-77aa51dec7b6-config-volume\") pod \"6ca8a424-737d-4d78-9817-77aa51dec7b6\" (UID: \"6ca8a424-737d-4d78-9817-77aa51dec7b6\") " Nov 22 11:30:03 crc kubenswrapper[4772]: I1122 11:30:03.293401 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjsg7\" (UniqueName: \"kubernetes.io/projected/6ca8a424-737d-4d78-9817-77aa51dec7b6-kube-api-access-pjsg7\") pod \"6ca8a424-737d-4d78-9817-77aa51dec7b6\" (UID: \"6ca8a424-737d-4d78-9817-77aa51dec7b6\") " Nov 22 11:30:03 crc kubenswrapper[4772]: I1122 11:30:03.294328 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ca8a424-737d-4d78-9817-77aa51dec7b6-config-volume" (OuterVolumeSpecName: "config-volume") pod "6ca8a424-737d-4d78-9817-77aa51dec7b6" (UID: "6ca8a424-737d-4d78-9817-77aa51dec7b6"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 11:30:03 crc kubenswrapper[4772]: I1122 11:30:03.301748 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ca8a424-737d-4d78-9817-77aa51dec7b6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6ca8a424-737d-4d78-9817-77aa51dec7b6" (UID: "6ca8a424-737d-4d78-9817-77aa51dec7b6"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:30:03 crc kubenswrapper[4772]: I1122 11:30:03.301776 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ca8a424-737d-4d78-9817-77aa51dec7b6-kube-api-access-pjsg7" (OuterVolumeSpecName: "kube-api-access-pjsg7") pod "6ca8a424-737d-4d78-9817-77aa51dec7b6" (UID: "6ca8a424-737d-4d78-9817-77aa51dec7b6"). InnerVolumeSpecName "kube-api-access-pjsg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:30:03 crc kubenswrapper[4772]: I1122 11:30:03.394882 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjsg7\" (UniqueName: \"kubernetes.io/projected/6ca8a424-737d-4d78-9817-77aa51dec7b6-kube-api-access-pjsg7\") on node \"crc\" DevicePath \"\"" Nov 22 11:30:03 crc kubenswrapper[4772]: I1122 11:30:03.394940 4772 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6ca8a424-737d-4d78-9817-77aa51dec7b6-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 22 11:30:03 crc kubenswrapper[4772]: I1122 11:30:03.394949 4772 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ca8a424-737d-4d78-9817-77aa51dec7b6-config-volume\") on node \"crc\" DevicePath \"\"" Nov 22 11:30:03 crc kubenswrapper[4772]: I1122 11:30:03.414212 4772 scope.go:117] "RemoveContainer" containerID="94c5b79944aa5d164e7f0bc3faf4242a108ea17bc3cdb7a2b82a3165e34756a4" Nov 22 11:30:03 crc kubenswrapper[4772]: E1122 11:30:03.414433 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:30:03 crc kubenswrapper[4772]: I1122 11:30:03.874259 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396850-gsq8n" event={"ID":"6ca8a424-737d-4d78-9817-77aa51dec7b6","Type":"ContainerDied","Data":"846e318a16822d3261e8af056a2544f78f56d329241f5bdf3f3bc731bc158ec2"} Nov 22 11:30:03 crc kubenswrapper[4772]: I1122 11:30:03.874313 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="846e318a16822d3261e8af056a2544f78f56d329241f5bdf3f3bc731bc158ec2" Nov 22 11:30:03 crc kubenswrapper[4772]: I1122 11:30:03.874358 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396850-gsq8n" Nov 22 11:30:04 crc kubenswrapper[4772]: I1122 11:30:04.288507 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396805-ltj2f"] Nov 22 11:30:04 crc kubenswrapper[4772]: I1122 11:30:04.296325 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396805-ltj2f"] Nov 22 11:30:05 crc kubenswrapper[4772]: I1122 11:30:05.426723 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d85e5a1e-edd8-451b-aacb-892f34171757" path="/var/lib/kubelet/pods/d85e5a1e-edd8-451b-aacb-892f34171757/volumes" Nov 22 11:30:09 crc kubenswrapper[4772]: I1122 11:30:09.785070 4772 scope.go:117] "RemoveContainer" containerID="838b7be4cc69cdf5bfeea00cbb953de82a2779858473158f652099c881ff802b" Nov 22 11:30:18 crc kubenswrapper[4772]: I1122 11:30:18.413783 4772 scope.go:117] "RemoveContainer" containerID="94c5b79944aa5d164e7f0bc3faf4242a108ea17bc3cdb7a2b82a3165e34756a4" Nov 22 11:30:18 crc kubenswrapper[4772]: E1122 11:30:18.414865 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:30:29 crc kubenswrapper[4772]: I1122 11:30:29.413723 4772 scope.go:117] "RemoveContainer" containerID="94c5b79944aa5d164e7f0bc3faf4242a108ea17bc3cdb7a2b82a3165e34756a4" Nov 22 11:30:29 crc kubenswrapper[4772]: E1122 11:30:29.414720 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:30:42 crc kubenswrapper[4772]: I1122 11:30:42.414640 4772 scope.go:117] "RemoveContainer" containerID="94c5b79944aa5d164e7f0bc3faf4242a108ea17bc3cdb7a2b82a3165e34756a4" Nov 22 11:30:42 crc kubenswrapper[4772]: E1122 11:30:42.415879 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:30:54 crc kubenswrapper[4772]: I1122 11:30:54.414171 4772 scope.go:117] "RemoveContainer" containerID="94c5b79944aa5d164e7f0bc3faf4242a108ea17bc3cdb7a2b82a3165e34756a4" Nov 22 11:30:54 crc kubenswrapper[4772]: E1122 11:30:54.415169 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:31:05 crc kubenswrapper[4772]: I1122 11:31:05.413887 4772 scope.go:117] "RemoveContainer" containerID="94c5b79944aa5d164e7f0bc3faf4242a108ea17bc3cdb7a2b82a3165e34756a4" Nov 22 11:31:05 crc kubenswrapper[4772]: E1122 11:31:05.414911 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:31:20 crc kubenswrapper[4772]: I1122 11:31:20.414267 4772 scope.go:117] "RemoveContainer" containerID="94c5b79944aa5d164e7f0bc3faf4242a108ea17bc3cdb7a2b82a3165e34756a4" Nov 22 11:31:20 crc kubenswrapper[4772]: E1122 11:31:20.419511 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:31:34 crc kubenswrapper[4772]: I1122 11:31:34.413977 4772 scope.go:117] "RemoveContainer" containerID="94c5b79944aa5d164e7f0bc3faf4242a108ea17bc3cdb7a2b82a3165e34756a4" Nov 22 11:31:34 crc kubenswrapper[4772]: E1122 11:31:34.415131 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:31:45 crc kubenswrapper[4772]: I1122 11:31:45.414311 4772 scope.go:117] "RemoveContainer" containerID="94c5b79944aa5d164e7f0bc3faf4242a108ea17bc3cdb7a2b82a3165e34756a4" Nov 22 11:31:45 crc kubenswrapper[4772]: E1122 11:31:45.420675 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:31:57 crc kubenswrapper[4772]: I1122 11:31:57.415445 4772 scope.go:117] "RemoveContainer" containerID="94c5b79944aa5d164e7f0bc3faf4242a108ea17bc3cdb7a2b82a3165e34756a4" Nov 22 11:31:57 crc kubenswrapper[4772]: E1122 11:31:57.416232 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:32:08 crc kubenswrapper[4772]: I1122 11:32:08.414566 4772 
scope.go:117] "RemoveContainer" containerID="94c5b79944aa5d164e7f0bc3faf4242a108ea17bc3cdb7a2b82a3165e34756a4" Nov 22 11:32:08 crc kubenswrapper[4772]: E1122 11:32:08.416020 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:32:21 crc kubenswrapper[4772]: I1122 11:32:21.413665 4772 scope.go:117] "RemoveContainer" containerID="94c5b79944aa5d164e7f0bc3faf4242a108ea17bc3cdb7a2b82a3165e34756a4" Nov 22 11:32:21 crc kubenswrapper[4772]: E1122 11:32:21.414464 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:32:36 crc kubenswrapper[4772]: I1122 11:32:36.414942 4772 scope.go:117] "RemoveContainer" containerID="94c5b79944aa5d164e7f0bc3faf4242a108ea17bc3cdb7a2b82a3165e34756a4" Nov 22 11:32:36 crc kubenswrapper[4772]: E1122 11:32:36.416254 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:32:47 crc kubenswrapper[4772]: I1122 11:32:47.413830 4772 scope.go:117] "RemoveContainer" containerID="94c5b79944aa5d164e7f0bc3faf4242a108ea17bc3cdb7a2b82a3165e34756a4" Nov 22 11:32:47 crc kubenswrapper[4772]: E1122 11:32:47.414760 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:33:01 crc kubenswrapper[4772]: I1122 11:33:01.444799 4772 scope.go:117] "RemoveContainer" containerID="94c5b79944aa5d164e7f0bc3faf4242a108ea17bc3cdb7a2b82a3165e34756a4" Nov 22 11:33:01 crc kubenswrapper[4772]: E1122 11:33:01.446317 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:33:12 crc kubenswrapper[4772]: I1122 11:33:12.413159 4772 scope.go:117] "RemoveContainer" containerID="94c5b79944aa5d164e7f0bc3faf4242a108ea17bc3cdb7a2b82a3165e34756a4" Nov 22 11:33:13 crc kubenswrapper[4772]: I1122 11:33:13.543160 4772 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" event={"ID":"2386c238-461f-4956-940f-ac3c26eb052e","Type":"ContainerStarted","Data":"8b7c4918a1446f6312c1c6221a9ac7617e0ea7dfcc0e1d020d06e75747a2e64d"} Nov 22 11:33:28 crc kubenswrapper[4772]: I1122 11:33:28.281850 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nrjt7"] Nov 22 11:33:28 crc kubenswrapper[4772]: E1122 11:33:28.283070 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ca8a424-737d-4d78-9817-77aa51dec7b6" containerName="collect-profiles" Nov 22 11:33:28 crc kubenswrapper[4772]: I1122 11:33:28.283091 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ca8a424-737d-4d78-9817-77aa51dec7b6" containerName="collect-profiles" Nov 22 11:33:28 crc kubenswrapper[4772]: I1122 11:33:28.283259 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ca8a424-737d-4d78-9817-77aa51dec7b6" containerName="collect-profiles" Nov 22 11:33:28 crc kubenswrapper[4772]: I1122 11:33:28.284302 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nrjt7" Nov 22 11:33:28 crc kubenswrapper[4772]: I1122 11:33:28.326169 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nrjt7"] Nov 22 11:33:28 crc kubenswrapper[4772]: I1122 11:33:28.471382 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1457bb37-0c16-41ef-97bc-a1bf03a45254-utilities\") pod \"community-operators-nrjt7\" (UID: \"1457bb37-0c16-41ef-97bc-a1bf03a45254\") " pod="openshift-marketplace/community-operators-nrjt7" Nov 22 11:33:28 crc kubenswrapper[4772]: I1122 11:33:28.471493 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1457bb37-0c16-41ef-97bc-a1bf03a45254-catalog-content\") pod \"community-operators-nrjt7\" (UID: \"1457bb37-0c16-41ef-97bc-a1bf03a45254\") " pod="openshift-marketplace/community-operators-nrjt7" Nov 22 11:33:28 crc kubenswrapper[4772]: I1122 11:33:28.471565 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dddq7\" (UniqueName: \"kubernetes.io/projected/1457bb37-0c16-41ef-97bc-a1bf03a45254-kube-api-access-dddq7\") pod \"community-operators-nrjt7\" (UID: \"1457bb37-0c16-41ef-97bc-a1bf03a45254\") " pod="openshift-marketplace/community-operators-nrjt7" Nov 22 11:33:28 crc kubenswrapper[4772]: I1122 11:33:28.574237 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1457bb37-0c16-41ef-97bc-a1bf03a45254-catalog-content\") pod \"community-operators-nrjt7\" (UID: \"1457bb37-0c16-41ef-97bc-a1bf03a45254\") " pod="openshift-marketplace/community-operators-nrjt7" Nov 22 11:33:28 crc kubenswrapper[4772]: I1122 11:33:28.574803 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dddq7\" (UniqueName: \"kubernetes.io/projected/1457bb37-0c16-41ef-97bc-a1bf03a45254-kube-api-access-dddq7\") pod \"community-operators-nrjt7\" (UID: \"1457bb37-0c16-41ef-97bc-a1bf03a45254\") " pod="openshift-marketplace/community-operators-nrjt7" Nov 22 11:33:28 crc kubenswrapper[4772]: I1122 11:33:28.574882 4772 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1457bb37-0c16-41ef-97bc-a1bf03a45254-utilities\") pod \"community-operators-nrjt7\" (UID: \"1457bb37-0c16-41ef-97bc-a1bf03a45254\") " pod="openshift-marketplace/community-operators-nrjt7" Nov 22 11:33:28 crc kubenswrapper[4772]: I1122 11:33:28.575388 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1457bb37-0c16-41ef-97bc-a1bf03a45254-utilities\") pod \"community-operators-nrjt7\" (UID: \"1457bb37-0c16-41ef-97bc-a1bf03a45254\") " pod="openshift-marketplace/community-operators-nrjt7" Nov 22 11:33:28 crc kubenswrapper[4772]: I1122 11:33:28.575705 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1457bb37-0c16-41ef-97bc-a1bf03a45254-catalog-content\") pod \"community-operators-nrjt7\" (UID: \"1457bb37-0c16-41ef-97bc-a1bf03a45254\") " pod="openshift-marketplace/community-operators-nrjt7" Nov 22 11:33:28 crc kubenswrapper[4772]: I1122 11:33:28.607555 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dddq7\" (UniqueName: \"kubernetes.io/projected/1457bb37-0c16-41ef-97bc-a1bf03a45254-kube-api-access-dddq7\") pod \"community-operators-nrjt7\" (UID: \"1457bb37-0c16-41ef-97bc-a1bf03a45254\") " pod="openshift-marketplace/community-operators-nrjt7" Nov 22 11:33:28 crc kubenswrapper[4772]: I1122 11:33:28.621336 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nrjt7" Nov 22 11:33:29 crc kubenswrapper[4772]: I1122 11:33:29.151314 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nrjt7"] Nov 22 11:33:29 crc kubenswrapper[4772]: W1122 11:33:29.153399 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1457bb37_0c16_41ef_97bc_a1bf03a45254.slice/crio-6b7453a529a459244e2233b2565fee3dfad3d187fbbaab5748948d2bb5e7b31a WatchSource:0}: Error finding container 6b7453a529a459244e2233b2565fee3dfad3d187fbbaab5748948d2bb5e7b31a: Status 404 returned error can't find the container with id 6b7453a529a459244e2233b2565fee3dfad3d187fbbaab5748948d2bb5e7b31a Nov 22 11:33:29 crc kubenswrapper[4772]: I1122 11:33:29.676792 4772 generic.go:334] "Generic (PLEG): container finished" podID="1457bb37-0c16-41ef-97bc-a1bf03a45254" containerID="89f9849e3bea9cf5d3724ba4ae6fe0f754878c7dbab824e39ab81f74799c2a35" exitCode=0 Nov 22 11:33:29 crc kubenswrapper[4772]: I1122 11:33:29.676893 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nrjt7" event={"ID":"1457bb37-0c16-41ef-97bc-a1bf03a45254","Type":"ContainerDied","Data":"89f9849e3bea9cf5d3724ba4ae6fe0f754878c7dbab824e39ab81f74799c2a35"} Nov 22 11:33:29 crc kubenswrapper[4772]: I1122 11:33:29.676996 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nrjt7" event={"ID":"1457bb37-0c16-41ef-97bc-a1bf03a45254","Type":"ContainerStarted","Data":"6b7453a529a459244e2233b2565fee3dfad3d187fbbaab5748948d2bb5e7b31a"} Nov 22 11:33:29 crc kubenswrapper[4772]: I1122 11:33:29.680654 4772 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 11:33:30 crc kubenswrapper[4772]: I1122 11:33:30.689443 4772 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/community-operators-nrjt7" event={"ID":"1457bb37-0c16-41ef-97bc-a1bf03a45254","Type":"ContainerStarted","Data":"0a1c730fd9d9c96a91a370ddf032bd92f265dead68044025ac6e21ee447b4d8a"} Nov 22 11:33:31 crc kubenswrapper[4772]: I1122 11:33:31.708116 4772 generic.go:334] "Generic (PLEG): container finished" podID="1457bb37-0c16-41ef-97bc-a1bf03a45254" containerID="0a1c730fd9d9c96a91a370ddf032bd92f265dead68044025ac6e21ee447b4d8a" exitCode=0 Nov 22 11:33:31 crc kubenswrapper[4772]: I1122 11:33:31.708200 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nrjt7" event={"ID":"1457bb37-0c16-41ef-97bc-a1bf03a45254","Type":"ContainerDied","Data":"0a1c730fd9d9c96a91a370ddf032bd92f265dead68044025ac6e21ee447b4d8a"} Nov 22 11:33:32 crc kubenswrapper[4772]: I1122 11:33:32.723759 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nrjt7" event={"ID":"1457bb37-0c16-41ef-97bc-a1bf03a45254","Type":"ContainerStarted","Data":"02ac6f1f41961aec7243436412acfc3c96631388448c7dd0e20d4f25ae67406c"} Nov 22 11:33:32 crc kubenswrapper[4772]: I1122 11:33:32.750111 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nrjt7" podStartSLOduration=2.305042509 podStartE2EDuration="4.750087699s" podCreationTimestamp="2025-11-22 11:33:28 +0000 UTC" firstStartedPulling="2025-11-22 11:33:29.680396904 +0000 UTC m=+3329.919841398" lastFinishedPulling="2025-11-22 11:33:32.125442084 +0000 UTC m=+3332.364886588" observedRunningTime="2025-11-22 11:33:32.749438712 +0000 UTC m=+3332.988883206" watchObservedRunningTime="2025-11-22 11:33:32.750087699 +0000 UTC m=+3332.989532193" Nov 22 11:33:38 crc kubenswrapper[4772]: I1122 11:33:38.622419 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nrjt7" Nov 22 11:33:38 crc kubenswrapper[4772]: I1122 11:33:38.623072 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nrjt7" Nov 22 11:33:38 crc kubenswrapper[4772]: I1122 11:33:38.705227 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nrjt7" Nov 22 11:33:38 crc kubenswrapper[4772]: I1122 11:33:38.834497 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nrjt7" Nov 22 11:33:38 crc kubenswrapper[4772]: I1122 11:33:38.952607 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nrjt7"] Nov 22 11:33:40 crc kubenswrapper[4772]: I1122 11:33:40.796442 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nrjt7" podUID="1457bb37-0c16-41ef-97bc-a1bf03a45254" containerName="registry-server" containerID="cri-o://02ac6f1f41961aec7243436412acfc3c96631388448c7dd0e20d4f25ae67406c" gracePeriod=2 Nov 22 11:33:41 crc kubenswrapper[4772]: I1122 11:33:41.245476 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nrjt7" Nov 22 11:33:41 crc kubenswrapper[4772]: I1122 11:33:41.310572 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1457bb37-0c16-41ef-97bc-a1bf03a45254-catalog-content\") pod \"1457bb37-0c16-41ef-97bc-a1bf03a45254\" (UID: \"1457bb37-0c16-41ef-97bc-a1bf03a45254\") " Nov 22 11:33:41 crc kubenswrapper[4772]: I1122 11:33:41.310650 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dddq7\" (UniqueName: \"kubernetes.io/projected/1457bb37-0c16-41ef-97bc-a1bf03a45254-kube-api-access-dddq7\") pod \"1457bb37-0c16-41ef-97bc-a1bf03a45254\" (UID: \"1457bb37-0c16-41ef-97bc-a1bf03a45254\") " Nov 22 11:33:41 crc kubenswrapper[4772]: I1122 11:33:41.310677 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1457bb37-0c16-41ef-97bc-a1bf03a45254-utilities\") pod \"1457bb37-0c16-41ef-97bc-a1bf03a45254\" (UID: \"1457bb37-0c16-41ef-97bc-a1bf03a45254\") " Nov 22 11:33:41 crc kubenswrapper[4772]: I1122 11:33:41.311774 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1457bb37-0c16-41ef-97bc-a1bf03a45254-utilities" (OuterVolumeSpecName: "utilities") pod "1457bb37-0c16-41ef-97bc-a1bf03a45254" (UID: "1457bb37-0c16-41ef-97bc-a1bf03a45254"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:33:41 crc kubenswrapper[4772]: I1122 11:33:41.320624 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1457bb37-0c16-41ef-97bc-a1bf03a45254-kube-api-access-dddq7" (OuterVolumeSpecName: "kube-api-access-dddq7") pod "1457bb37-0c16-41ef-97bc-a1bf03a45254" (UID: "1457bb37-0c16-41ef-97bc-a1bf03a45254"). InnerVolumeSpecName "kube-api-access-dddq7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:33:41 crc kubenswrapper[4772]: I1122 11:33:41.365157 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1457bb37-0c16-41ef-97bc-a1bf03a45254-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1457bb37-0c16-41ef-97bc-a1bf03a45254" (UID: "1457bb37-0c16-41ef-97bc-a1bf03a45254"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:33:41 crc kubenswrapper[4772]: I1122 11:33:41.412727 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1457bb37-0c16-41ef-97bc-a1bf03a45254-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 11:33:41 crc kubenswrapper[4772]: I1122 11:33:41.413213 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dddq7\" (UniqueName: \"kubernetes.io/projected/1457bb37-0c16-41ef-97bc-a1bf03a45254-kube-api-access-dddq7\") on node \"crc\" DevicePath \"\"" Nov 22 11:33:41 crc kubenswrapper[4772]: I1122 11:33:41.413331 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1457bb37-0c16-41ef-97bc-a1bf03a45254-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 11:33:41 crc kubenswrapper[4772]: I1122 11:33:41.806431 4772 generic.go:334] "Generic (PLEG): container finished" podID="1457bb37-0c16-41ef-97bc-a1bf03a45254" containerID="02ac6f1f41961aec7243436412acfc3c96631388448c7dd0e20d4f25ae67406c" exitCode=0 Nov 22 11:33:41 crc kubenswrapper[4772]: I1122 11:33:41.807208 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nrjt7" event={"ID":"1457bb37-0c16-41ef-97bc-a1bf03a45254","Type":"ContainerDied","Data":"02ac6f1f41961aec7243436412acfc3c96631388448c7dd0e20d4f25ae67406c"} Nov 22 11:33:41 crc kubenswrapper[4772]: I1122 11:33:41.807313 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nrjt7" event={"ID":"1457bb37-0c16-41ef-97bc-a1bf03a45254","Type":"ContainerDied","Data":"6b7453a529a459244e2233b2565fee3dfad3d187fbbaab5748948d2bb5e7b31a"} Nov 22 11:33:41 crc kubenswrapper[4772]: I1122 11:33:41.807337 4772 scope.go:117] "RemoveContainer" containerID="02ac6f1f41961aec7243436412acfc3c96631388448c7dd0e20d4f25ae67406c" Nov 22 11:33:41 crc kubenswrapper[4772]: I1122 11:33:41.807408 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nrjt7" Nov 22 11:33:41 crc kubenswrapper[4772]: I1122 11:33:41.835531 4772 scope.go:117] "RemoveContainer" containerID="0a1c730fd9d9c96a91a370ddf032bd92f265dead68044025ac6e21ee447b4d8a" Nov 22 11:33:41 crc kubenswrapper[4772]: I1122 11:33:41.838843 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nrjt7"] Nov 22 11:33:41 crc kubenswrapper[4772]: I1122 11:33:41.847029 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nrjt7"] Nov 22 11:33:41 crc kubenswrapper[4772]: I1122 11:33:41.865863 4772 scope.go:117] "RemoveContainer" containerID="89f9849e3bea9cf5d3724ba4ae6fe0f754878c7dbab824e39ab81f74799c2a35" Nov 22 11:33:41 crc kubenswrapper[4772]: I1122 11:33:41.886398 4772 scope.go:117] "RemoveContainer" containerID="02ac6f1f41961aec7243436412acfc3c96631388448c7dd0e20d4f25ae67406c" Nov 22 11:33:41 crc kubenswrapper[4772]: E1122 11:33:41.887552 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02ac6f1f41961aec7243436412acfc3c96631388448c7dd0e20d4f25ae67406c\": container with ID starting with 02ac6f1f41961aec7243436412acfc3c96631388448c7dd0e20d4f25ae67406c not found: ID does not exist" containerID="02ac6f1f41961aec7243436412acfc3c96631388448c7dd0e20d4f25ae67406c" Nov 22 11:33:41 crc kubenswrapper[4772]: I1122 11:33:41.888750 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02ac6f1f41961aec7243436412acfc3c96631388448c7dd0e20d4f25ae67406c"} err="failed to get container status \"02ac6f1f41961aec7243436412acfc3c96631388448c7dd0e20d4f25ae67406c\": rpc error: code = NotFound desc = could not find container \"02ac6f1f41961aec7243436412acfc3c96631388448c7dd0e20d4f25ae67406c\": container with ID starting with 02ac6f1f41961aec7243436412acfc3c96631388448c7dd0e20d4f25ae67406c not found: ID does not exist" Nov 22 11:33:41 crc kubenswrapper[4772]: I1122 11:33:41.888831 4772 scope.go:117] "RemoveContainer" containerID="0a1c730fd9d9c96a91a370ddf032bd92f265dead68044025ac6e21ee447b4d8a" Nov 22 11:33:41 crc kubenswrapper[4772]: E1122 11:33:41.889145 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a1c730fd9d9c96a91a370ddf032bd92f265dead68044025ac6e21ee447b4d8a\": container with ID starting with 0a1c730fd9d9c96a91a370ddf032bd92f265dead68044025ac6e21ee447b4d8a not found: ID does not exist" containerID="0a1c730fd9d9c96a91a370ddf032bd92f265dead68044025ac6e21ee447b4d8a" Nov 22 11:33:41 crc kubenswrapper[4772]: I1122 11:33:41.889258 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a1c730fd9d9c96a91a370ddf032bd92f265dead68044025ac6e21ee447b4d8a"} err="failed to get container status \"0a1c730fd9d9c96a91a370ddf032bd92f265dead68044025ac6e21ee447b4d8a\": rpc error: code = NotFound desc = could not find container \"0a1c730fd9d9c96a91a370ddf032bd92f265dead68044025ac6e21ee447b4d8a\": container with ID starting with 0a1c730fd9d9c96a91a370ddf032bd92f265dead68044025ac6e21ee447b4d8a not found: ID does not exist" Nov 22 11:33:41 crc kubenswrapper[4772]: I1122 11:33:41.889356 4772 scope.go:117] "RemoveContainer" containerID="89f9849e3bea9cf5d3724ba4ae6fe0f754878c7dbab824e39ab81f74799c2a35" Nov 22 11:33:41 crc kubenswrapper[4772]: E1122 11:33:41.890306 4772 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"89f9849e3bea9cf5d3724ba4ae6fe0f754878c7dbab824e39ab81f74799c2a35\": container with ID starting with 89f9849e3bea9cf5d3724ba4ae6fe0f754878c7dbab824e39ab81f74799c2a35 not found: ID does not exist" containerID="89f9849e3bea9cf5d3724ba4ae6fe0f754878c7dbab824e39ab81f74799c2a35" Nov 22 11:33:41 crc kubenswrapper[4772]: I1122 11:33:41.890362 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89f9849e3bea9cf5d3724ba4ae6fe0f754878c7dbab824e39ab81f74799c2a35"} err="failed to get container status \"89f9849e3bea9cf5d3724ba4ae6fe0f754878c7dbab824e39ab81f74799c2a35\": rpc error: code = NotFound desc = could not find container \"89f9849e3bea9cf5d3724ba4ae6fe0f754878c7dbab824e39ab81f74799c2a35\": container with ID starting with 89f9849e3bea9cf5d3724ba4ae6fe0f754878c7dbab824e39ab81f74799c2a35 not found: ID does not exist" Nov 22 11:33:43 crc kubenswrapper[4772]: I1122 11:33:43.425886 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1457bb37-0c16-41ef-97bc-a1bf03a45254" path="/var/lib/kubelet/pods/1457bb37-0c16-41ef-97bc-a1bf03a45254/volumes" Nov 22 11:34:19 crc kubenswrapper[4772]: I1122 11:34:19.523404 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ftrdv"] Nov 22 11:34:19 crc kubenswrapper[4772]: E1122 11:34:19.524328 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1457bb37-0c16-41ef-97bc-a1bf03a45254" containerName="registry-server" Nov 22 11:34:19 crc kubenswrapper[4772]: I1122 11:34:19.524346 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="1457bb37-0c16-41ef-97bc-a1bf03a45254" containerName="registry-server" Nov 22 11:34:19 crc kubenswrapper[4772]: E1122 11:34:19.524367 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1457bb37-0c16-41ef-97bc-a1bf03a45254" containerName="extract-content" Nov 22 11:34:19 crc kubenswrapper[4772]: I1122 11:34:19.524373 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="1457bb37-0c16-41ef-97bc-a1bf03a45254" containerName="extract-content" Nov 22 11:34:19 crc kubenswrapper[4772]: E1122 11:34:19.524385 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1457bb37-0c16-41ef-97bc-a1bf03a45254" containerName="extract-utilities" Nov 22 11:34:19 crc kubenswrapper[4772]: I1122 11:34:19.524391 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="1457bb37-0c16-41ef-97bc-a1bf03a45254" containerName="extract-utilities" Nov 22 11:34:19 crc kubenswrapper[4772]: I1122 11:34:19.524543 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="1457bb37-0c16-41ef-97bc-a1bf03a45254" containerName="registry-server" Nov 22 11:34:19 crc kubenswrapper[4772]: I1122 11:34:19.525548 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ftrdv" Nov 22 11:34:19 crc kubenswrapper[4772]: I1122 11:34:19.553101 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ftrdv"] Nov 22 11:34:19 crc kubenswrapper[4772]: I1122 11:34:19.716099 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-km46p\" (UniqueName: \"kubernetes.io/projected/0d4fe2d3-67a6-45d7-b465-85c75621faea-kube-api-access-km46p\") pod \"redhat-operators-ftrdv\" (UID: \"0d4fe2d3-67a6-45d7-b465-85c75621faea\") " pod="openshift-marketplace/redhat-operators-ftrdv" Nov 22 11:34:19 crc kubenswrapper[4772]: I1122 11:34:19.716180 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d4fe2d3-67a6-45d7-b465-85c75621faea-catalog-content\") pod \"redhat-operators-ftrdv\" (UID: \"0d4fe2d3-67a6-45d7-b465-85c75621faea\") " pod="openshift-marketplace/redhat-operators-ftrdv" Nov 22 11:34:19 crc kubenswrapper[4772]: I1122 11:34:19.716389 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d4fe2d3-67a6-45d7-b465-85c75621faea-utilities\") pod \"redhat-operators-ftrdv\" (UID: \"0d4fe2d3-67a6-45d7-b465-85c75621faea\") " pod="openshift-marketplace/redhat-operators-ftrdv" Nov 22 11:34:19 crc kubenswrapper[4772]: I1122 11:34:19.817462 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-km46p\" (UniqueName: \"kubernetes.io/projected/0d4fe2d3-67a6-45d7-b465-85c75621faea-kube-api-access-km46p\") pod \"redhat-operators-ftrdv\" (UID: \"0d4fe2d3-67a6-45d7-b465-85c75621faea\") " pod="openshift-marketplace/redhat-operators-ftrdv" Nov 22 11:34:19 crc kubenswrapper[4772]: I1122 11:34:19.817870 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d4fe2d3-67a6-45d7-b465-85c75621faea-catalog-content\") pod \"redhat-operators-ftrdv\" (UID: \"0d4fe2d3-67a6-45d7-b465-85c75621faea\") " pod="openshift-marketplace/redhat-operators-ftrdv" Nov 22 11:34:19 crc kubenswrapper[4772]: I1122 11:34:19.817926 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d4fe2d3-67a6-45d7-b465-85c75621faea-utilities\") pod \"redhat-operators-ftrdv\" (UID: \"0d4fe2d3-67a6-45d7-b465-85c75621faea\") " pod="openshift-marketplace/redhat-operators-ftrdv" Nov 22 11:34:19 crc kubenswrapper[4772]: I1122 11:34:19.818577 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d4fe2d3-67a6-45d7-b465-85c75621faea-utilities\") pod \"redhat-operators-ftrdv\" (UID: \"0d4fe2d3-67a6-45d7-b465-85c75621faea\") " pod="openshift-marketplace/redhat-operators-ftrdv" Nov 22 11:34:19 crc kubenswrapper[4772]: I1122 11:34:19.818783 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d4fe2d3-67a6-45d7-b465-85c75621faea-catalog-content\") pod \"redhat-operators-ftrdv\" (UID: \"0d4fe2d3-67a6-45d7-b465-85c75621faea\") " pod="openshift-marketplace/redhat-operators-ftrdv" Nov 22 11:34:19 crc kubenswrapper[4772]: I1122 11:34:19.847492 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-km46p\" (UniqueName: \"kubernetes.io/projected/0d4fe2d3-67a6-45d7-b465-85c75621faea-kube-api-access-km46p\") pod \"redhat-operators-ftrdv\" (UID: \"0d4fe2d3-67a6-45d7-b465-85c75621faea\") " pod="openshift-marketplace/redhat-operators-ftrdv" Nov 22 11:34:19 crc kubenswrapper[4772]: I1122 11:34:19.852223 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ftrdv" Nov 22 11:34:20 crc kubenswrapper[4772]: I1122 11:34:20.313418 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ftrdv"] Nov 22 11:34:20 crc kubenswrapper[4772]: W1122 11:34:20.330213 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d4fe2d3_67a6_45d7_b465_85c75621faea.slice/crio-208849dd1e776520c8a102ffbefa99d06619a1028de6f8542a4ed34e9cbfaf3a WatchSource:0}: Error finding container 208849dd1e776520c8a102ffbefa99d06619a1028de6f8542a4ed34e9cbfaf3a: Status 404 returned error can't find the container with id 208849dd1e776520c8a102ffbefa99d06619a1028de6f8542a4ed34e9cbfaf3a Nov 22 11:34:21 crc kubenswrapper[4772]: I1122 11:34:21.160620 4772 generic.go:334] "Generic (PLEG): container finished" podID="0d4fe2d3-67a6-45d7-b465-85c75621faea" containerID="ea493d0e5e181dda01d5145e3692066d9614b4b1ce218d3f8dd698f19fa40985" exitCode=0 Nov 22 11:34:21 crc kubenswrapper[4772]: I1122 11:34:21.160703 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ftrdv" event={"ID":"0d4fe2d3-67a6-45d7-b465-85c75621faea","Type":"ContainerDied","Data":"ea493d0e5e181dda01d5145e3692066d9614b4b1ce218d3f8dd698f19fa40985"} Nov 22 11:34:21 crc kubenswrapper[4772]: I1122 11:34:21.161191 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ftrdv" event={"ID":"0d4fe2d3-67a6-45d7-b465-85c75621faea","Type":"ContainerStarted","Data":"208849dd1e776520c8a102ffbefa99d06619a1028de6f8542a4ed34e9cbfaf3a"} Nov 22 11:34:22 crc kubenswrapper[4772]: I1122 11:34:22.188302 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ftrdv" event={"ID":"0d4fe2d3-67a6-45d7-b465-85c75621faea","Type":"ContainerStarted","Data":"e63887223ecd321ed9d54855dde499c2faf14e40b0277949d5db4ec0d8ba5d69"} Nov 22 11:34:23 crc kubenswrapper[4772]: I1122 11:34:23.202449 4772 generic.go:334] "Generic (PLEG): container finished" podID="0d4fe2d3-67a6-45d7-b465-85c75621faea" containerID="e63887223ecd321ed9d54855dde499c2faf14e40b0277949d5db4ec0d8ba5d69" exitCode=0 Nov 22 11:34:23 crc kubenswrapper[4772]: I1122 11:34:23.202526 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ftrdv" event={"ID":"0d4fe2d3-67a6-45d7-b465-85c75621faea","Type":"ContainerDied","Data":"e63887223ecd321ed9d54855dde499c2faf14e40b0277949d5db4ec0d8ba5d69"} Nov 22 11:34:24 crc kubenswrapper[4772]: I1122 11:34:24.218981 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ftrdv" event={"ID":"0d4fe2d3-67a6-45d7-b465-85c75621faea","Type":"ContainerStarted","Data":"7e90c0dc46b53cb8bdc7fe5e9acbe1e75b0c4cd9da457b32869a7669dc7d7a73"} Nov 22 11:34:24 crc kubenswrapper[4772]: I1122 11:34:24.243281 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ftrdv" podStartSLOduration=2.796846386 podStartE2EDuration="5.24325463s" podCreationTimestamp="2025-11-22 
11:34:19 +0000 UTC" firstStartedPulling="2025-11-22 11:34:21.164076178 +0000 UTC m=+3381.403520682" lastFinishedPulling="2025-11-22 11:34:23.610484432 +0000 UTC m=+3383.849928926" observedRunningTime="2025-11-22 11:34:24.238604553 +0000 UTC m=+3384.478049097" watchObservedRunningTime="2025-11-22 11:34:24.24325463 +0000 UTC m=+3384.482699134" Nov 22 11:34:29 crc kubenswrapper[4772]: I1122 11:34:29.853455 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ftrdv" Nov 22 11:34:29 crc kubenswrapper[4772]: I1122 11:34:29.853828 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ftrdv" Nov 22 11:34:30 crc kubenswrapper[4772]: I1122 11:34:30.900573 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ftrdv" podUID="0d4fe2d3-67a6-45d7-b465-85c75621faea" containerName="registry-server" probeResult="failure" output=< Nov 22 11:34:30 crc kubenswrapper[4772]: timeout: failed to connect service ":50051" within 1s Nov 22 11:34:30 crc kubenswrapper[4772]: > Nov 22 11:34:39 crc kubenswrapper[4772]: I1122 11:34:39.926169 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ftrdv" Nov 22 11:34:40 crc kubenswrapper[4772]: I1122 11:34:40.015951 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ftrdv" Nov 22 11:34:40 crc kubenswrapper[4772]: I1122 11:34:40.183405 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ftrdv"] Nov 22 11:34:41 crc kubenswrapper[4772]: I1122 11:34:41.394518 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ftrdv" podUID="0d4fe2d3-67a6-45d7-b465-85c75621faea" containerName="registry-server" containerID="cri-o://7e90c0dc46b53cb8bdc7fe5e9acbe1e75b0c4cd9da457b32869a7669dc7d7a73" gracePeriod=2 Nov 22 11:34:41 crc kubenswrapper[4772]: I1122 11:34:41.840216 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ftrdv" Nov 22 11:34:42 crc kubenswrapper[4772]: I1122 11:34:42.017612 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d4fe2d3-67a6-45d7-b465-85c75621faea-utilities\") pod \"0d4fe2d3-67a6-45d7-b465-85c75621faea\" (UID: \"0d4fe2d3-67a6-45d7-b465-85c75621faea\") " Nov 22 11:34:42 crc kubenswrapper[4772]: I1122 11:34:42.017751 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-km46p\" (UniqueName: \"kubernetes.io/projected/0d4fe2d3-67a6-45d7-b465-85c75621faea-kube-api-access-km46p\") pod \"0d4fe2d3-67a6-45d7-b465-85c75621faea\" (UID: \"0d4fe2d3-67a6-45d7-b465-85c75621faea\") " Nov 22 11:34:42 crc kubenswrapper[4772]: I1122 11:34:42.017838 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d4fe2d3-67a6-45d7-b465-85c75621faea-catalog-content\") pod \"0d4fe2d3-67a6-45d7-b465-85c75621faea\" (UID: \"0d4fe2d3-67a6-45d7-b465-85c75621faea\") " Nov 22 11:34:42 crc kubenswrapper[4772]: I1122 11:34:42.019547 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d4fe2d3-67a6-45d7-b465-85c75621faea-utilities" (OuterVolumeSpecName: "utilities") pod "0d4fe2d3-67a6-45d7-b465-85c75621faea" (UID: "0d4fe2d3-67a6-45d7-b465-85c75621faea"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:34:42 crc kubenswrapper[4772]: I1122 11:34:42.027379 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d4fe2d3-67a6-45d7-b465-85c75621faea-kube-api-access-km46p" (OuterVolumeSpecName: "kube-api-access-km46p") pod "0d4fe2d3-67a6-45d7-b465-85c75621faea" (UID: "0d4fe2d3-67a6-45d7-b465-85c75621faea"). InnerVolumeSpecName "kube-api-access-km46p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:34:42 crc kubenswrapper[4772]: I1122 11:34:42.120289 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d4fe2d3-67a6-45d7-b465-85c75621faea-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 11:34:42 crc kubenswrapper[4772]: I1122 11:34:42.120347 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-km46p\" (UniqueName: \"kubernetes.io/projected/0d4fe2d3-67a6-45d7-b465-85c75621faea-kube-api-access-km46p\") on node \"crc\" DevicePath \"\"" Nov 22 11:34:42 crc kubenswrapper[4772]: I1122 11:34:42.140805 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d4fe2d3-67a6-45d7-b465-85c75621faea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0d4fe2d3-67a6-45d7-b465-85c75621faea" (UID: "0d4fe2d3-67a6-45d7-b465-85c75621faea"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:34:42 crc kubenswrapper[4772]: I1122 11:34:42.222416 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d4fe2d3-67a6-45d7-b465-85c75621faea-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 11:34:42 crc kubenswrapper[4772]: I1122 11:34:42.407359 4772 generic.go:334] "Generic (PLEG): container finished" podID="0d4fe2d3-67a6-45d7-b465-85c75621faea" containerID="7e90c0dc46b53cb8bdc7fe5e9acbe1e75b0c4cd9da457b32869a7669dc7d7a73" exitCode=0 Nov 22 11:34:42 crc kubenswrapper[4772]: I1122 11:34:42.407439 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ftrdv" event={"ID":"0d4fe2d3-67a6-45d7-b465-85c75621faea","Type":"ContainerDied","Data":"7e90c0dc46b53cb8bdc7fe5e9acbe1e75b0c4cd9da457b32869a7669dc7d7a73"} Nov 22 11:34:42 crc kubenswrapper[4772]: I1122 11:34:42.408007 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ftrdv" event={"ID":"0d4fe2d3-67a6-45d7-b465-85c75621faea","Type":"ContainerDied","Data":"208849dd1e776520c8a102ffbefa99d06619a1028de6f8542a4ed34e9cbfaf3a"} Nov 22 11:34:42 crc kubenswrapper[4772]: I1122 11:34:42.408066 4772 scope.go:117] "RemoveContainer" containerID="7e90c0dc46b53cb8bdc7fe5e9acbe1e75b0c4cd9da457b32869a7669dc7d7a73" Nov 22 11:34:42 crc kubenswrapper[4772]: I1122 11:34:42.407479 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ftrdv" Nov 22 11:34:42 crc kubenswrapper[4772]: I1122 11:34:42.432412 4772 scope.go:117] "RemoveContainer" containerID="e63887223ecd321ed9d54855dde499c2faf14e40b0277949d5db4ec0d8ba5d69" Nov 22 11:34:42 crc kubenswrapper[4772]: I1122 11:34:42.458258 4772 scope.go:117] "RemoveContainer" containerID="ea493d0e5e181dda01d5145e3692066d9614b4b1ce218d3f8dd698f19fa40985" Nov 22 11:34:42 crc kubenswrapper[4772]: I1122 11:34:42.478179 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ftrdv"] Nov 22 11:34:42 crc kubenswrapper[4772]: I1122 11:34:42.502661 4772 scope.go:117] "RemoveContainer" containerID="7e90c0dc46b53cb8bdc7fe5e9acbe1e75b0c4cd9da457b32869a7669dc7d7a73" Nov 22 11:34:42 crc kubenswrapper[4772]: E1122 11:34:42.503393 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e90c0dc46b53cb8bdc7fe5e9acbe1e75b0c4cd9da457b32869a7669dc7d7a73\": container with ID starting with 7e90c0dc46b53cb8bdc7fe5e9acbe1e75b0c4cd9da457b32869a7669dc7d7a73 not found: ID does not exist" containerID="7e90c0dc46b53cb8bdc7fe5e9acbe1e75b0c4cd9da457b32869a7669dc7d7a73" Nov 22 11:34:42 crc kubenswrapper[4772]: I1122 11:34:42.503454 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e90c0dc46b53cb8bdc7fe5e9acbe1e75b0c4cd9da457b32869a7669dc7d7a73"} err="failed to get container status \"7e90c0dc46b53cb8bdc7fe5e9acbe1e75b0c4cd9da457b32869a7669dc7d7a73\": rpc error: code = NotFound desc = could not find container \"7e90c0dc46b53cb8bdc7fe5e9acbe1e75b0c4cd9da457b32869a7669dc7d7a73\": container with ID starting with 7e90c0dc46b53cb8bdc7fe5e9acbe1e75b0c4cd9da457b32869a7669dc7d7a73 not found: ID does not exist" Nov 22 11:34:42 crc kubenswrapper[4772]: I1122 11:34:42.503493 4772 scope.go:117] "RemoveContainer" containerID="e63887223ecd321ed9d54855dde499c2faf14e40b0277949d5db4ec0d8ba5d69" Nov 22 11:34:42 
crc kubenswrapper[4772]: E1122 11:34:42.503846 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e63887223ecd321ed9d54855dde499c2faf14e40b0277949d5db4ec0d8ba5d69\": container with ID starting with e63887223ecd321ed9d54855dde499c2faf14e40b0277949d5db4ec0d8ba5d69 not found: ID does not exist" containerID="e63887223ecd321ed9d54855dde499c2faf14e40b0277949d5db4ec0d8ba5d69" Nov 22 11:34:42 crc kubenswrapper[4772]: I1122 11:34:42.503886 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e63887223ecd321ed9d54855dde499c2faf14e40b0277949d5db4ec0d8ba5d69"} err="failed to get container status \"e63887223ecd321ed9d54855dde499c2faf14e40b0277949d5db4ec0d8ba5d69\": rpc error: code = NotFound desc = could not find container \"e63887223ecd321ed9d54855dde499c2faf14e40b0277949d5db4ec0d8ba5d69\": container with ID starting with e63887223ecd321ed9d54855dde499c2faf14e40b0277949d5db4ec0d8ba5d69 not found: ID does not exist" Nov 22 11:34:42 crc kubenswrapper[4772]: I1122 11:34:42.503917 4772 scope.go:117] "RemoveContainer" containerID="ea493d0e5e181dda01d5145e3692066d9614b4b1ce218d3f8dd698f19fa40985" Nov 22 11:34:42 crc kubenswrapper[4772]: E1122 11:34:42.504378 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea493d0e5e181dda01d5145e3692066d9614b4b1ce218d3f8dd698f19fa40985\": container with ID starting with ea493d0e5e181dda01d5145e3692066d9614b4b1ce218d3f8dd698f19fa40985 not found: ID does not exist" containerID="ea493d0e5e181dda01d5145e3692066d9614b4b1ce218d3f8dd698f19fa40985" Nov 22 11:34:42 crc kubenswrapper[4772]: I1122 11:34:42.504476 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea493d0e5e181dda01d5145e3692066d9614b4b1ce218d3f8dd698f19fa40985"} err="failed to get container status \"ea493d0e5e181dda01d5145e3692066d9614b4b1ce218d3f8dd698f19fa40985\": rpc error: code = NotFound desc = could not find container \"ea493d0e5e181dda01d5145e3692066d9614b4b1ce218d3f8dd698f19fa40985\": container with ID starting with ea493d0e5e181dda01d5145e3692066d9614b4b1ce218d3f8dd698f19fa40985 not found: ID does not exist" Nov 22 11:34:42 crc kubenswrapper[4772]: I1122 11:34:42.513187 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ftrdv"] Nov 22 11:34:43 crc kubenswrapper[4772]: I1122 11:34:43.428303 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d4fe2d3-67a6-45d7-b465-85c75621faea" path="/var/lib/kubelet/pods/0d4fe2d3-67a6-45d7-b465-85c75621faea/volumes" Nov 22 11:35:31 crc kubenswrapper[4772]: I1122 11:35:31.534143 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 11:35:31 crc kubenswrapper[4772]: I1122 11:35:31.535311 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 11:36:01 crc kubenswrapper[4772]: I1122 11:36:01.533646 4772 patch_prober.go:28] interesting 
pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 11:36:01 crc kubenswrapper[4772]: I1122 11:36:01.534864 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 11:36:03 crc kubenswrapper[4772]: I1122 11:36:03.759252 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-78n95"] Nov 22 11:36:03 crc kubenswrapper[4772]: E1122 11:36:03.760836 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d4fe2d3-67a6-45d7-b465-85c75621faea" containerName="extract-utilities" Nov 22 11:36:03 crc kubenswrapper[4772]: I1122 11:36:03.760862 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d4fe2d3-67a6-45d7-b465-85c75621faea" containerName="extract-utilities" Nov 22 11:36:03 crc kubenswrapper[4772]: E1122 11:36:03.760884 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d4fe2d3-67a6-45d7-b465-85c75621faea" containerName="registry-server" Nov 22 11:36:03 crc kubenswrapper[4772]: I1122 11:36:03.760894 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d4fe2d3-67a6-45d7-b465-85c75621faea" containerName="registry-server" Nov 22 11:36:03 crc kubenswrapper[4772]: E1122 11:36:03.760912 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d4fe2d3-67a6-45d7-b465-85c75621faea" containerName="extract-content" Nov 22 11:36:03 crc kubenswrapper[4772]: I1122 11:36:03.760922 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d4fe2d3-67a6-45d7-b465-85c75621faea" containerName="extract-content" Nov 22 11:36:03 crc kubenswrapper[4772]: I1122 11:36:03.762997 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d4fe2d3-67a6-45d7-b465-85c75621faea" containerName="registry-server" Nov 22 11:36:03 crc kubenswrapper[4772]: I1122 11:36:03.764356 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-78n95" Nov 22 11:36:03 crc kubenswrapper[4772]: I1122 11:36:03.767997 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-78n95"] Nov 22 11:36:03 crc kubenswrapper[4772]: I1122 11:36:03.919860 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a43a6e75-0d3d-4012-a9a8-864a39458d96-catalog-content\") pod \"certified-operators-78n95\" (UID: \"a43a6e75-0d3d-4012-a9a8-864a39458d96\") " pod="openshift-marketplace/certified-operators-78n95" Nov 22 11:36:03 crc kubenswrapper[4772]: I1122 11:36:03.919986 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sdjd\" (UniqueName: \"kubernetes.io/projected/a43a6e75-0d3d-4012-a9a8-864a39458d96-kube-api-access-4sdjd\") pod \"certified-operators-78n95\" (UID: \"a43a6e75-0d3d-4012-a9a8-864a39458d96\") " pod="openshift-marketplace/certified-operators-78n95" Nov 22 11:36:03 crc kubenswrapper[4772]: I1122 11:36:03.920020 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a43a6e75-0d3d-4012-a9a8-864a39458d96-utilities\") pod \"certified-operators-78n95\" (UID: \"a43a6e75-0d3d-4012-a9a8-864a39458d96\") " pod="openshift-marketplace/certified-operators-78n95" Nov 22 11:36:04 crc kubenswrapper[4772]: I1122 11:36:04.021276 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sdjd\" (UniqueName: \"kubernetes.io/projected/a43a6e75-0d3d-4012-a9a8-864a39458d96-kube-api-access-4sdjd\") pod \"certified-operators-78n95\" (UID: \"a43a6e75-0d3d-4012-a9a8-864a39458d96\") " pod="openshift-marketplace/certified-operators-78n95" Nov 22 11:36:04 crc kubenswrapper[4772]: I1122 11:36:04.021357 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a43a6e75-0d3d-4012-a9a8-864a39458d96-utilities\") pod \"certified-operators-78n95\" (UID: \"a43a6e75-0d3d-4012-a9a8-864a39458d96\") " pod="openshift-marketplace/certified-operators-78n95" Nov 22 11:36:04 crc kubenswrapper[4772]: I1122 11:36:04.021387 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a43a6e75-0d3d-4012-a9a8-864a39458d96-catalog-content\") pod \"certified-operators-78n95\" (UID: \"a43a6e75-0d3d-4012-a9a8-864a39458d96\") " pod="openshift-marketplace/certified-operators-78n95" Nov 22 11:36:04 crc kubenswrapper[4772]: I1122 11:36:04.021840 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a43a6e75-0d3d-4012-a9a8-864a39458d96-catalog-content\") pod \"certified-operators-78n95\" (UID: \"a43a6e75-0d3d-4012-a9a8-864a39458d96\") " pod="openshift-marketplace/certified-operators-78n95" Nov 22 11:36:04 crc kubenswrapper[4772]: I1122 11:36:04.022389 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a43a6e75-0d3d-4012-a9a8-864a39458d96-utilities\") pod \"certified-operators-78n95\" (UID: \"a43a6e75-0d3d-4012-a9a8-864a39458d96\") " pod="openshift-marketplace/certified-operators-78n95" Nov 22 11:36:04 crc kubenswrapper[4772]: I1122 11:36:04.049854 4772 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-4sdjd\" (UniqueName: \"kubernetes.io/projected/a43a6e75-0d3d-4012-a9a8-864a39458d96-kube-api-access-4sdjd\") pod \"certified-operators-78n95\" (UID: \"a43a6e75-0d3d-4012-a9a8-864a39458d96\") " pod="openshift-marketplace/certified-operators-78n95" Nov 22 11:36:04 crc kubenswrapper[4772]: I1122 11:36:04.112669 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-78n95" Nov 22 11:36:04 crc kubenswrapper[4772]: I1122 11:36:04.378822 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-78n95"] Nov 22 11:36:05 crc kubenswrapper[4772]: I1122 11:36:05.203247 4772 generic.go:334] "Generic (PLEG): container finished" podID="a43a6e75-0d3d-4012-a9a8-864a39458d96" containerID="f37417997b36a760ac4514cccb03a319576b5267bc1ea6553f800267d50c165c" exitCode=0 Nov 22 11:36:05 crc kubenswrapper[4772]: I1122 11:36:05.203338 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-78n95" event={"ID":"a43a6e75-0d3d-4012-a9a8-864a39458d96","Type":"ContainerDied","Data":"f37417997b36a760ac4514cccb03a319576b5267bc1ea6553f800267d50c165c"} Nov 22 11:36:05 crc kubenswrapper[4772]: I1122 11:36:05.203597 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-78n95" event={"ID":"a43a6e75-0d3d-4012-a9a8-864a39458d96","Type":"ContainerStarted","Data":"13beee5af9367dc4510c73e45e55b7fcfd8bb7d59e00933078025d8ed1a4dca0"} Nov 22 11:36:06 crc kubenswrapper[4772]: I1122 11:36:06.220442 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-78n95" event={"ID":"a43a6e75-0d3d-4012-a9a8-864a39458d96","Type":"ContainerStarted","Data":"2850f613d10b97ed45db88e9215a2eb85d7b7e8430381e57f8db4f906548de23"} Nov 22 11:36:07 crc kubenswrapper[4772]: I1122 11:36:07.238173 4772 generic.go:334] "Generic (PLEG): container finished" podID="a43a6e75-0d3d-4012-a9a8-864a39458d96" containerID="2850f613d10b97ed45db88e9215a2eb85d7b7e8430381e57f8db4f906548de23" exitCode=0 Nov 22 11:36:07 crc kubenswrapper[4772]: I1122 11:36:07.238239 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-78n95" event={"ID":"a43a6e75-0d3d-4012-a9a8-864a39458d96","Type":"ContainerDied","Data":"2850f613d10b97ed45db88e9215a2eb85d7b7e8430381e57f8db4f906548de23"} Nov 22 11:36:08 crc kubenswrapper[4772]: I1122 11:36:08.247959 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-78n95" event={"ID":"a43a6e75-0d3d-4012-a9a8-864a39458d96","Type":"ContainerStarted","Data":"fdaafc6e00ad1686460fa85d50d202d7d588ba01f45115521b8212161b764892"} Nov 22 11:36:08 crc kubenswrapper[4772]: I1122 11:36:08.301159 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-78n95" podStartSLOduration=2.811611213 podStartE2EDuration="5.30112603s" podCreationTimestamp="2025-11-22 11:36:03 +0000 UTC" firstStartedPulling="2025-11-22 11:36:05.205473104 +0000 UTC m=+3485.444917638" lastFinishedPulling="2025-11-22 11:36:07.694987961 +0000 UTC m=+3487.934432455" observedRunningTime="2025-11-22 11:36:08.278009589 +0000 UTC m=+3488.517454093" watchObservedRunningTime="2025-11-22 11:36:08.30112603 +0000 UTC m=+3488.540570534" Nov 22 11:36:14 crc kubenswrapper[4772]: I1122 11:36:14.117285 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-78n95" Nov 22 11:36:14 crc kubenswrapper[4772]: I1122 11:36:14.117728 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-78n95" Nov 22 11:36:14 crc kubenswrapper[4772]: I1122 11:36:14.165950 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-78n95" Nov 22 11:36:14 crc kubenswrapper[4772]: I1122 11:36:14.374124 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-78n95" Nov 22 11:36:14 crc kubenswrapper[4772]: I1122 11:36:14.427378 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-78n95"] Nov 22 11:36:16 crc kubenswrapper[4772]: I1122 11:36:16.320948 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-78n95" podUID="a43a6e75-0d3d-4012-a9a8-864a39458d96" containerName="registry-server" containerID="cri-o://fdaafc6e00ad1686460fa85d50d202d7d588ba01f45115521b8212161b764892" gracePeriod=2 Nov 22 11:36:16 crc kubenswrapper[4772]: I1122 11:36:16.811441 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-78n95" Nov 22 11:36:16 crc kubenswrapper[4772]: I1122 11:36:16.846978 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a43a6e75-0d3d-4012-a9a8-864a39458d96-catalog-content\") pod \"a43a6e75-0d3d-4012-a9a8-864a39458d96\" (UID: \"a43a6e75-0d3d-4012-a9a8-864a39458d96\") " Nov 22 11:36:16 crc kubenswrapper[4772]: I1122 11:36:16.847202 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4sdjd\" (UniqueName: \"kubernetes.io/projected/a43a6e75-0d3d-4012-a9a8-864a39458d96-kube-api-access-4sdjd\") pod \"a43a6e75-0d3d-4012-a9a8-864a39458d96\" (UID: \"a43a6e75-0d3d-4012-a9a8-864a39458d96\") " Nov 22 11:36:16 crc kubenswrapper[4772]: I1122 11:36:16.847330 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a43a6e75-0d3d-4012-a9a8-864a39458d96-utilities\") pod \"a43a6e75-0d3d-4012-a9a8-864a39458d96\" (UID: \"a43a6e75-0d3d-4012-a9a8-864a39458d96\") " Nov 22 11:36:16 crc kubenswrapper[4772]: I1122 11:36:16.850183 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a43a6e75-0d3d-4012-a9a8-864a39458d96-utilities" (OuterVolumeSpecName: "utilities") pod "a43a6e75-0d3d-4012-a9a8-864a39458d96" (UID: "a43a6e75-0d3d-4012-a9a8-864a39458d96"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:36:16 crc kubenswrapper[4772]: I1122 11:36:16.864719 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a43a6e75-0d3d-4012-a9a8-864a39458d96-kube-api-access-4sdjd" (OuterVolumeSpecName: "kube-api-access-4sdjd") pod "a43a6e75-0d3d-4012-a9a8-864a39458d96" (UID: "a43a6e75-0d3d-4012-a9a8-864a39458d96"). InnerVolumeSpecName "kube-api-access-4sdjd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:36:16 crc kubenswrapper[4772]: I1122 11:36:16.932042 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a43a6e75-0d3d-4012-a9a8-864a39458d96-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a43a6e75-0d3d-4012-a9a8-864a39458d96" (UID: "a43a6e75-0d3d-4012-a9a8-864a39458d96"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:36:16 crc kubenswrapper[4772]: I1122 11:36:16.950809 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4sdjd\" (UniqueName: \"kubernetes.io/projected/a43a6e75-0d3d-4012-a9a8-864a39458d96-kube-api-access-4sdjd\") on node \"crc\" DevicePath \"\"" Nov 22 11:36:16 crc kubenswrapper[4772]: I1122 11:36:16.950852 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a43a6e75-0d3d-4012-a9a8-864a39458d96-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 11:36:16 crc kubenswrapper[4772]: I1122 11:36:16.950866 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a43a6e75-0d3d-4012-a9a8-864a39458d96-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 11:36:17 crc kubenswrapper[4772]: I1122 11:36:17.333340 4772 generic.go:334] "Generic (PLEG): container finished" podID="a43a6e75-0d3d-4012-a9a8-864a39458d96" containerID="fdaafc6e00ad1686460fa85d50d202d7d588ba01f45115521b8212161b764892" exitCode=0 Nov 22 11:36:17 crc kubenswrapper[4772]: I1122 11:36:17.333394 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-78n95" event={"ID":"a43a6e75-0d3d-4012-a9a8-864a39458d96","Type":"ContainerDied","Data":"fdaafc6e00ad1686460fa85d50d202d7d588ba01f45115521b8212161b764892"} Nov 22 11:36:17 crc kubenswrapper[4772]: I1122 11:36:17.333439 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-78n95" event={"ID":"a43a6e75-0d3d-4012-a9a8-864a39458d96","Type":"ContainerDied","Data":"13beee5af9367dc4510c73e45e55b7fcfd8bb7d59e00933078025d8ed1a4dca0"} Nov 22 11:36:17 crc kubenswrapper[4772]: I1122 11:36:17.333503 4772 scope.go:117] "RemoveContainer" containerID="fdaafc6e00ad1686460fa85d50d202d7d588ba01f45115521b8212161b764892" Nov 22 11:36:17 crc kubenswrapper[4772]: I1122 11:36:17.333526 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-78n95" Nov 22 11:36:17 crc kubenswrapper[4772]: I1122 11:36:17.373064 4772 scope.go:117] "RemoveContainer" containerID="2850f613d10b97ed45db88e9215a2eb85d7b7e8430381e57f8db4f906548de23" Nov 22 11:36:17 crc kubenswrapper[4772]: I1122 11:36:17.385405 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-78n95"] Nov 22 11:36:17 crc kubenswrapper[4772]: I1122 11:36:17.390156 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-78n95"] Nov 22 11:36:17 crc kubenswrapper[4772]: I1122 11:36:17.408033 4772 scope.go:117] "RemoveContainer" containerID="f37417997b36a760ac4514cccb03a319576b5267bc1ea6553f800267d50c165c" Nov 22 11:36:17 crc kubenswrapper[4772]: I1122 11:36:17.425927 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a43a6e75-0d3d-4012-a9a8-864a39458d96" path="/var/lib/kubelet/pods/a43a6e75-0d3d-4012-a9a8-864a39458d96/volumes" Nov 22 11:36:17 crc kubenswrapper[4772]: I1122 11:36:17.445577 4772 scope.go:117] "RemoveContainer" containerID="fdaafc6e00ad1686460fa85d50d202d7d588ba01f45115521b8212161b764892" Nov 22 11:36:17 crc kubenswrapper[4772]: E1122 11:36:17.446992 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fdaafc6e00ad1686460fa85d50d202d7d588ba01f45115521b8212161b764892\": container with ID starting with fdaafc6e00ad1686460fa85d50d202d7d588ba01f45115521b8212161b764892 not found: ID does not exist" containerID="fdaafc6e00ad1686460fa85d50d202d7d588ba01f45115521b8212161b764892" Nov 22 11:36:17 crc kubenswrapper[4772]: I1122 11:36:17.448333 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdaafc6e00ad1686460fa85d50d202d7d588ba01f45115521b8212161b764892"} err="failed to get container status \"fdaafc6e00ad1686460fa85d50d202d7d588ba01f45115521b8212161b764892\": rpc error: code = NotFound desc = could not find container \"fdaafc6e00ad1686460fa85d50d202d7d588ba01f45115521b8212161b764892\": container with ID starting with fdaafc6e00ad1686460fa85d50d202d7d588ba01f45115521b8212161b764892 not found: ID does not exist" Nov 22 11:36:17 crc kubenswrapper[4772]: I1122 11:36:17.448386 4772 scope.go:117] "RemoveContainer" containerID="2850f613d10b97ed45db88e9215a2eb85d7b7e8430381e57f8db4f906548de23" Nov 22 11:36:17 crc kubenswrapper[4772]: E1122 11:36:17.449198 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2850f613d10b97ed45db88e9215a2eb85d7b7e8430381e57f8db4f906548de23\": container with ID starting with 2850f613d10b97ed45db88e9215a2eb85d7b7e8430381e57f8db4f906548de23 not found: ID does not exist" containerID="2850f613d10b97ed45db88e9215a2eb85d7b7e8430381e57f8db4f906548de23" Nov 22 11:36:17 crc kubenswrapper[4772]: I1122 11:36:17.449323 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2850f613d10b97ed45db88e9215a2eb85d7b7e8430381e57f8db4f906548de23"} err="failed to get container status \"2850f613d10b97ed45db88e9215a2eb85d7b7e8430381e57f8db4f906548de23\": rpc error: code = NotFound desc = could not find container \"2850f613d10b97ed45db88e9215a2eb85d7b7e8430381e57f8db4f906548de23\": container with ID starting with 2850f613d10b97ed45db88e9215a2eb85d7b7e8430381e57f8db4f906548de23 not found: ID does not exist" Nov 22 11:36:17 crc kubenswrapper[4772]: I1122 
11:36:17.449431 4772 scope.go:117] "RemoveContainer" containerID="f37417997b36a760ac4514cccb03a319576b5267bc1ea6553f800267d50c165c" Nov 22 11:36:17 crc kubenswrapper[4772]: E1122 11:36:17.449829 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f37417997b36a760ac4514cccb03a319576b5267bc1ea6553f800267d50c165c\": container with ID starting with f37417997b36a760ac4514cccb03a319576b5267bc1ea6553f800267d50c165c not found: ID does not exist" containerID="f37417997b36a760ac4514cccb03a319576b5267bc1ea6553f800267d50c165c" Nov 22 11:36:17 crc kubenswrapper[4772]: I1122 11:36:17.450016 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f37417997b36a760ac4514cccb03a319576b5267bc1ea6553f800267d50c165c"} err="failed to get container status \"f37417997b36a760ac4514cccb03a319576b5267bc1ea6553f800267d50c165c\": rpc error: code = NotFound desc = could not find container \"f37417997b36a760ac4514cccb03a319576b5267bc1ea6553f800267d50c165c\": container with ID starting with f37417997b36a760ac4514cccb03a319576b5267bc1ea6553f800267d50c165c not found: ID does not exist" Nov 22 11:36:31 crc kubenswrapper[4772]: I1122 11:36:31.533444 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 11:36:31 crc kubenswrapper[4772]: I1122 11:36:31.534228 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 11:36:31 crc kubenswrapper[4772]: I1122 11:36:31.534325 4772 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" Nov 22 11:36:31 crc kubenswrapper[4772]: I1122 11:36:31.535114 4772 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8b7c4918a1446f6312c1c6221a9ac7617e0ea7dfcc0e1d020d06e75747a2e64d"} pod="openshift-machine-config-operator/machine-config-daemon-wwshd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 11:36:31 crc kubenswrapper[4772]: I1122 11:36:31.535177 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" containerID="cri-o://8b7c4918a1446f6312c1c6221a9ac7617e0ea7dfcc0e1d020d06e75747a2e64d" gracePeriod=600 Nov 22 11:36:32 crc kubenswrapper[4772]: I1122 11:36:32.482568 4772 generic.go:334] "Generic (PLEG): container finished" podID="2386c238-461f-4956-940f-ac3c26eb052e" containerID="8b7c4918a1446f6312c1c6221a9ac7617e0ea7dfcc0e1d020d06e75747a2e64d" exitCode=0 Nov 22 11:36:32 crc kubenswrapper[4772]: I1122 11:36:32.483181 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" 
event={"ID":"2386c238-461f-4956-940f-ac3c26eb052e","Type":"ContainerDied","Data":"8b7c4918a1446f6312c1c6221a9ac7617e0ea7dfcc0e1d020d06e75747a2e64d"} Nov 22 11:36:32 crc kubenswrapper[4772]: I1122 11:36:32.483228 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" event={"ID":"2386c238-461f-4956-940f-ac3c26eb052e","Type":"ContainerStarted","Data":"840a4ee2da9d5e5bbe147cfd7570643a837738629ce60c03a58d3a5bf26b9e4c"} Nov 22 11:36:32 crc kubenswrapper[4772]: I1122 11:36:32.483254 4772 scope.go:117] "RemoveContainer" containerID="94c5b79944aa5d164e7f0bc3faf4242a108ea17bc3cdb7a2b82a3165e34756a4" Nov 22 11:38:31 crc kubenswrapper[4772]: I1122 11:38:31.533879 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 11:38:31 crc kubenswrapper[4772]: I1122 11:38:31.534756 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 11:38:32 crc kubenswrapper[4772]: I1122 11:38:32.474595 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-js8m4"] Nov 22 11:38:32 crc kubenswrapper[4772]: E1122 11:38:32.475104 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a43a6e75-0d3d-4012-a9a8-864a39458d96" containerName="extract-content" Nov 22 11:38:32 crc kubenswrapper[4772]: I1122 11:38:32.475133 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="a43a6e75-0d3d-4012-a9a8-864a39458d96" containerName="extract-content" Nov 22 11:38:32 crc kubenswrapper[4772]: E1122 11:38:32.475161 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a43a6e75-0d3d-4012-a9a8-864a39458d96" containerName="registry-server" Nov 22 11:38:32 crc kubenswrapper[4772]: I1122 11:38:32.475172 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="a43a6e75-0d3d-4012-a9a8-864a39458d96" containerName="registry-server" Nov 22 11:38:32 crc kubenswrapper[4772]: E1122 11:38:32.475198 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a43a6e75-0d3d-4012-a9a8-864a39458d96" containerName="extract-utilities" Nov 22 11:38:32 crc kubenswrapper[4772]: I1122 11:38:32.475207 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="a43a6e75-0d3d-4012-a9a8-864a39458d96" containerName="extract-utilities" Nov 22 11:38:32 crc kubenswrapper[4772]: I1122 11:38:32.475436 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="a43a6e75-0d3d-4012-a9a8-864a39458d96" containerName="registry-server" Nov 22 11:38:32 crc kubenswrapper[4772]: I1122 11:38:32.476949 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-js8m4" Nov 22 11:38:32 crc kubenswrapper[4772]: I1122 11:38:32.485150 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-js8m4"] Nov 22 11:38:32 crc kubenswrapper[4772]: I1122 11:38:32.618098 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41098796-75ca-4641-b273-8e35669f2ee7-utilities\") pod \"redhat-marketplace-js8m4\" (UID: \"41098796-75ca-4641-b273-8e35669f2ee7\") " pod="openshift-marketplace/redhat-marketplace-js8m4" Nov 22 11:38:32 crc kubenswrapper[4772]: I1122 11:38:32.618191 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbqwd\" (UniqueName: \"kubernetes.io/projected/41098796-75ca-4641-b273-8e35669f2ee7-kube-api-access-mbqwd\") pod \"redhat-marketplace-js8m4\" (UID: \"41098796-75ca-4641-b273-8e35669f2ee7\") " pod="openshift-marketplace/redhat-marketplace-js8m4" Nov 22 11:38:32 crc kubenswrapper[4772]: I1122 11:38:32.618469 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41098796-75ca-4641-b273-8e35669f2ee7-catalog-content\") pod \"redhat-marketplace-js8m4\" (UID: \"41098796-75ca-4641-b273-8e35669f2ee7\") " pod="openshift-marketplace/redhat-marketplace-js8m4" Nov 22 11:38:32 crc kubenswrapper[4772]: I1122 11:38:32.720037 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41098796-75ca-4641-b273-8e35669f2ee7-catalog-content\") pod \"redhat-marketplace-js8m4\" (UID: \"41098796-75ca-4641-b273-8e35669f2ee7\") " pod="openshift-marketplace/redhat-marketplace-js8m4" Nov 22 11:38:32 crc kubenswrapper[4772]: I1122 11:38:32.720216 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41098796-75ca-4641-b273-8e35669f2ee7-utilities\") pod \"redhat-marketplace-js8m4\" (UID: \"41098796-75ca-4641-b273-8e35669f2ee7\") " pod="openshift-marketplace/redhat-marketplace-js8m4" Nov 22 11:38:32 crc kubenswrapper[4772]: I1122 11:38:32.720287 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbqwd\" (UniqueName: \"kubernetes.io/projected/41098796-75ca-4641-b273-8e35669f2ee7-kube-api-access-mbqwd\") pod \"redhat-marketplace-js8m4\" (UID: \"41098796-75ca-4641-b273-8e35669f2ee7\") " pod="openshift-marketplace/redhat-marketplace-js8m4" Nov 22 11:38:32 crc kubenswrapper[4772]: I1122 11:38:32.720770 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41098796-75ca-4641-b273-8e35669f2ee7-catalog-content\") pod \"redhat-marketplace-js8m4\" (UID: \"41098796-75ca-4641-b273-8e35669f2ee7\") " pod="openshift-marketplace/redhat-marketplace-js8m4" Nov 22 11:38:32 crc kubenswrapper[4772]: I1122 11:38:32.720880 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41098796-75ca-4641-b273-8e35669f2ee7-utilities\") pod \"redhat-marketplace-js8m4\" (UID: \"41098796-75ca-4641-b273-8e35669f2ee7\") " pod="openshift-marketplace/redhat-marketplace-js8m4" Nov 22 11:38:32 crc kubenswrapper[4772]: I1122 11:38:32.743807 4772 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-mbqwd\" (UniqueName: \"kubernetes.io/projected/41098796-75ca-4641-b273-8e35669f2ee7-kube-api-access-mbqwd\") pod \"redhat-marketplace-js8m4\" (UID: \"41098796-75ca-4641-b273-8e35669f2ee7\") " pod="openshift-marketplace/redhat-marketplace-js8m4" Nov 22 11:38:32 crc kubenswrapper[4772]: I1122 11:38:32.805261 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-js8m4" Nov 22 11:38:33 crc kubenswrapper[4772]: I1122 11:38:33.154783 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-js8m4"] Nov 22 11:38:33 crc kubenswrapper[4772]: I1122 11:38:33.580989 4772 generic.go:334] "Generic (PLEG): container finished" podID="41098796-75ca-4641-b273-8e35669f2ee7" containerID="fc7178bd18a491c8e544eac11ce345d8c09f1f4d404c65640f44ea51c2f7acc7" exitCode=0 Nov 22 11:38:33 crc kubenswrapper[4772]: I1122 11:38:33.581594 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-js8m4" event={"ID":"41098796-75ca-4641-b273-8e35669f2ee7","Type":"ContainerDied","Data":"fc7178bd18a491c8e544eac11ce345d8c09f1f4d404c65640f44ea51c2f7acc7"} Nov 22 11:38:33 crc kubenswrapper[4772]: I1122 11:38:33.581637 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-js8m4" event={"ID":"41098796-75ca-4641-b273-8e35669f2ee7","Type":"ContainerStarted","Data":"9799dce5e580be09a545405d248d2ce878698be32bfb01e8a0312711ebfdd814"} Nov 22 11:38:33 crc kubenswrapper[4772]: I1122 11:38:33.583633 4772 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 11:38:34 crc kubenswrapper[4772]: I1122 11:38:34.594063 4772 generic.go:334] "Generic (PLEG): container finished" podID="41098796-75ca-4641-b273-8e35669f2ee7" containerID="578519e316e510ba97cc8f6b98d4b2aa3b2e447a1d9368b8f7f5d738388597fb" exitCode=0 Nov 22 11:38:34 crc kubenswrapper[4772]: I1122 11:38:34.594293 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-js8m4" event={"ID":"41098796-75ca-4641-b273-8e35669f2ee7","Type":"ContainerDied","Data":"578519e316e510ba97cc8f6b98d4b2aa3b2e447a1d9368b8f7f5d738388597fb"} Nov 22 11:38:35 crc kubenswrapper[4772]: I1122 11:38:35.608126 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-js8m4" event={"ID":"41098796-75ca-4641-b273-8e35669f2ee7","Type":"ContainerStarted","Data":"55304b4699a54d3b22b12e3dd9b74871d18ab195f5a97f2c0ef75214c1beb98c"} Nov 22 11:38:35 crc kubenswrapper[4772]: I1122 11:38:35.631370 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-js8m4" podStartSLOduration=2.199594313 podStartE2EDuration="3.631345608s" podCreationTimestamp="2025-11-22 11:38:32 +0000 UTC" firstStartedPulling="2025-11-22 11:38:33.583255238 +0000 UTC m=+3633.822699732" lastFinishedPulling="2025-11-22 11:38:35.015006533 +0000 UTC m=+3635.254451027" observedRunningTime="2025-11-22 11:38:35.625800819 +0000 UTC m=+3635.865245333" watchObservedRunningTime="2025-11-22 11:38:35.631345608 +0000 UTC m=+3635.870790112" Nov 22 11:38:42 crc kubenswrapper[4772]: I1122 11:38:42.805904 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-js8m4" Nov 22 11:38:42 crc kubenswrapper[4772]: I1122 11:38:42.806730 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-marketplace-js8m4" Nov 22 11:38:42 crc kubenswrapper[4772]: I1122 11:38:42.857872 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-js8m4" Nov 22 11:38:43 crc kubenswrapper[4772]: I1122 11:38:43.725191 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-js8m4" Nov 22 11:38:43 crc kubenswrapper[4772]: I1122 11:38:43.778756 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-js8m4"] Nov 22 11:38:45 crc kubenswrapper[4772]: I1122 11:38:45.700241 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-js8m4" podUID="41098796-75ca-4641-b273-8e35669f2ee7" containerName="registry-server" containerID="cri-o://55304b4699a54d3b22b12e3dd9b74871d18ab195f5a97f2c0ef75214c1beb98c" gracePeriod=2 Nov 22 11:38:46 crc kubenswrapper[4772]: I1122 11:38:46.149322 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-js8m4" Nov 22 11:38:46 crc kubenswrapper[4772]: I1122 11:38:46.256679 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41098796-75ca-4641-b273-8e35669f2ee7-utilities\") pod \"41098796-75ca-4641-b273-8e35669f2ee7\" (UID: \"41098796-75ca-4641-b273-8e35669f2ee7\") " Nov 22 11:38:46 crc kubenswrapper[4772]: I1122 11:38:46.256768 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41098796-75ca-4641-b273-8e35669f2ee7-catalog-content\") pod \"41098796-75ca-4641-b273-8e35669f2ee7\" (UID: \"41098796-75ca-4641-b273-8e35669f2ee7\") " Nov 22 11:38:46 crc kubenswrapper[4772]: I1122 11:38:46.256875 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbqwd\" (UniqueName: \"kubernetes.io/projected/41098796-75ca-4641-b273-8e35669f2ee7-kube-api-access-mbqwd\") pod \"41098796-75ca-4641-b273-8e35669f2ee7\" (UID: \"41098796-75ca-4641-b273-8e35669f2ee7\") " Nov 22 11:38:46 crc kubenswrapper[4772]: I1122 11:38:46.257927 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41098796-75ca-4641-b273-8e35669f2ee7-utilities" (OuterVolumeSpecName: "utilities") pod "41098796-75ca-4641-b273-8e35669f2ee7" (UID: "41098796-75ca-4641-b273-8e35669f2ee7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:38:46 crc kubenswrapper[4772]: I1122 11:38:46.264869 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41098796-75ca-4641-b273-8e35669f2ee7-kube-api-access-mbqwd" (OuterVolumeSpecName: "kube-api-access-mbqwd") pod "41098796-75ca-4641-b273-8e35669f2ee7" (UID: "41098796-75ca-4641-b273-8e35669f2ee7"). InnerVolumeSpecName "kube-api-access-mbqwd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:38:46 crc kubenswrapper[4772]: I1122 11:38:46.277221 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41098796-75ca-4641-b273-8e35669f2ee7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "41098796-75ca-4641-b273-8e35669f2ee7" (UID: "41098796-75ca-4641-b273-8e35669f2ee7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:38:46 crc kubenswrapper[4772]: I1122 11:38:46.358353 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41098796-75ca-4641-b273-8e35669f2ee7-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 11:38:46 crc kubenswrapper[4772]: I1122 11:38:46.358399 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41098796-75ca-4641-b273-8e35669f2ee7-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 11:38:46 crc kubenswrapper[4772]: I1122 11:38:46.358417 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mbqwd\" (UniqueName: \"kubernetes.io/projected/41098796-75ca-4641-b273-8e35669f2ee7-kube-api-access-mbqwd\") on node \"crc\" DevicePath \"\"" Nov 22 11:38:46 crc kubenswrapper[4772]: I1122 11:38:46.710963 4772 generic.go:334] "Generic (PLEG): container finished" podID="41098796-75ca-4641-b273-8e35669f2ee7" containerID="55304b4699a54d3b22b12e3dd9b74871d18ab195f5a97f2c0ef75214c1beb98c" exitCode=0 Nov 22 11:38:46 crc kubenswrapper[4772]: I1122 11:38:46.711067 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-js8m4" event={"ID":"41098796-75ca-4641-b273-8e35669f2ee7","Type":"ContainerDied","Data":"55304b4699a54d3b22b12e3dd9b74871d18ab195f5a97f2c0ef75214c1beb98c"} Nov 22 11:38:46 crc kubenswrapper[4772]: I1122 11:38:46.711112 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-js8m4" Nov 22 11:38:46 crc kubenswrapper[4772]: I1122 11:38:46.711138 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-js8m4" event={"ID":"41098796-75ca-4641-b273-8e35669f2ee7","Type":"ContainerDied","Data":"9799dce5e580be09a545405d248d2ce878698be32bfb01e8a0312711ebfdd814"} Nov 22 11:38:46 crc kubenswrapper[4772]: I1122 11:38:46.711163 4772 scope.go:117] "RemoveContainer" containerID="55304b4699a54d3b22b12e3dd9b74871d18ab195f5a97f2c0ef75214c1beb98c" Nov 22 11:38:46 crc kubenswrapper[4772]: I1122 11:38:46.740203 4772 scope.go:117] "RemoveContainer" containerID="578519e316e510ba97cc8f6b98d4b2aa3b2e447a1d9368b8f7f5d738388597fb" Nov 22 11:38:46 crc kubenswrapper[4772]: I1122 11:38:46.770154 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-js8m4"] Nov 22 11:38:46 crc kubenswrapper[4772]: I1122 11:38:46.775059 4772 scope.go:117] "RemoveContainer" containerID="fc7178bd18a491c8e544eac11ce345d8c09f1f4d404c65640f44ea51c2f7acc7" Nov 22 11:38:46 crc kubenswrapper[4772]: I1122 11:38:46.776110 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-js8m4"] Nov 22 11:38:46 crc kubenswrapper[4772]: I1122 11:38:46.810696 4772 scope.go:117] "RemoveContainer" containerID="55304b4699a54d3b22b12e3dd9b74871d18ab195f5a97f2c0ef75214c1beb98c" Nov 22 11:38:46 crc kubenswrapper[4772]: E1122 11:38:46.811285 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55304b4699a54d3b22b12e3dd9b74871d18ab195f5a97f2c0ef75214c1beb98c\": container with ID starting with 55304b4699a54d3b22b12e3dd9b74871d18ab195f5a97f2c0ef75214c1beb98c not found: ID does not exist" containerID="55304b4699a54d3b22b12e3dd9b74871d18ab195f5a97f2c0ef75214c1beb98c" Nov 22 11:38:46 crc kubenswrapper[4772]: I1122 11:38:46.811353 4772 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55304b4699a54d3b22b12e3dd9b74871d18ab195f5a97f2c0ef75214c1beb98c"} err="failed to get container status \"55304b4699a54d3b22b12e3dd9b74871d18ab195f5a97f2c0ef75214c1beb98c\": rpc error: code = NotFound desc = could not find container \"55304b4699a54d3b22b12e3dd9b74871d18ab195f5a97f2c0ef75214c1beb98c\": container with ID starting with 55304b4699a54d3b22b12e3dd9b74871d18ab195f5a97f2c0ef75214c1beb98c not found: ID does not exist" Nov 22 11:38:46 crc kubenswrapper[4772]: I1122 11:38:46.811405 4772 scope.go:117] "RemoveContainer" containerID="578519e316e510ba97cc8f6b98d4b2aa3b2e447a1d9368b8f7f5d738388597fb" Nov 22 11:38:46 crc kubenswrapper[4772]: E1122 11:38:46.812317 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"578519e316e510ba97cc8f6b98d4b2aa3b2e447a1d9368b8f7f5d738388597fb\": container with ID starting with 578519e316e510ba97cc8f6b98d4b2aa3b2e447a1d9368b8f7f5d738388597fb not found: ID does not exist" containerID="578519e316e510ba97cc8f6b98d4b2aa3b2e447a1d9368b8f7f5d738388597fb" Nov 22 11:38:46 crc kubenswrapper[4772]: I1122 11:38:46.812360 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"578519e316e510ba97cc8f6b98d4b2aa3b2e447a1d9368b8f7f5d738388597fb"} err="failed to get container status \"578519e316e510ba97cc8f6b98d4b2aa3b2e447a1d9368b8f7f5d738388597fb\": rpc error: code = NotFound desc = could not find container \"578519e316e510ba97cc8f6b98d4b2aa3b2e447a1d9368b8f7f5d738388597fb\": container with ID starting with 578519e316e510ba97cc8f6b98d4b2aa3b2e447a1d9368b8f7f5d738388597fb not found: ID does not exist" Nov 22 11:38:46 crc kubenswrapper[4772]: I1122 11:38:46.812388 4772 scope.go:117] "RemoveContainer" containerID="fc7178bd18a491c8e544eac11ce345d8c09f1f4d404c65640f44ea51c2f7acc7" Nov 22 11:38:46 crc kubenswrapper[4772]: E1122 11:38:46.813219 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc7178bd18a491c8e544eac11ce345d8c09f1f4d404c65640f44ea51c2f7acc7\": container with ID starting with fc7178bd18a491c8e544eac11ce345d8c09f1f4d404c65640f44ea51c2f7acc7 not found: ID does not exist" containerID="fc7178bd18a491c8e544eac11ce345d8c09f1f4d404c65640f44ea51c2f7acc7" Nov 22 11:38:46 crc kubenswrapper[4772]: I1122 11:38:46.813264 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc7178bd18a491c8e544eac11ce345d8c09f1f4d404c65640f44ea51c2f7acc7"} err="failed to get container status \"fc7178bd18a491c8e544eac11ce345d8c09f1f4d404c65640f44ea51c2f7acc7\": rpc error: code = NotFound desc = could not find container \"fc7178bd18a491c8e544eac11ce345d8c09f1f4d404c65640f44ea51c2f7acc7\": container with ID starting with fc7178bd18a491c8e544eac11ce345d8c09f1f4d404c65640f44ea51c2f7acc7 not found: ID does not exist" Nov 22 11:38:47 crc kubenswrapper[4772]: I1122 11:38:47.432022 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41098796-75ca-4641-b273-8e35669f2ee7" path="/var/lib/kubelet/pods/41098796-75ca-4641-b273-8e35669f2ee7/volumes" Nov 22 11:39:01 crc kubenswrapper[4772]: I1122 11:39:01.532973 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 11:39:01 crc kubenswrapper[4772]: I1122 11:39:01.534201 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 11:39:31 crc kubenswrapper[4772]: I1122 11:39:31.533653 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 11:39:31 crc kubenswrapper[4772]: I1122 11:39:31.535242 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 11:39:31 crc kubenswrapper[4772]: I1122 11:39:31.535314 4772 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" Nov 22 11:39:31 crc kubenswrapper[4772]: I1122 11:39:31.536095 4772 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"840a4ee2da9d5e5bbe147cfd7570643a837738629ce60c03a58d3a5bf26b9e4c"} pod="openshift-machine-config-operator/machine-config-daemon-wwshd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 11:39:31 crc kubenswrapper[4772]: I1122 11:39:31.536173 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" containerID="cri-o://840a4ee2da9d5e5bbe147cfd7570643a837738629ce60c03a58d3a5bf26b9e4c" gracePeriod=600 Nov 22 11:39:31 crc kubenswrapper[4772]: E1122 11:39:31.673549 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:39:32 crc kubenswrapper[4772]: I1122 11:39:32.148508 4772 generic.go:334] "Generic (PLEG): container finished" podID="2386c238-461f-4956-940f-ac3c26eb052e" containerID="840a4ee2da9d5e5bbe147cfd7570643a837738629ce60c03a58d3a5bf26b9e4c" exitCode=0 Nov 22 11:39:32 crc kubenswrapper[4772]: I1122 11:39:32.148564 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" event={"ID":"2386c238-461f-4956-940f-ac3c26eb052e","Type":"ContainerDied","Data":"840a4ee2da9d5e5bbe147cfd7570643a837738629ce60c03a58d3a5bf26b9e4c"} Nov 22 11:39:32 crc kubenswrapper[4772]: I1122 11:39:32.148610 4772 scope.go:117] "RemoveContainer" containerID="8b7c4918a1446f6312c1c6221a9ac7617e0ea7dfcc0e1d020d06e75747a2e64d" Nov 22 11:39:32 crc kubenswrapper[4772]: I1122 
11:39:32.149757 4772 scope.go:117] "RemoveContainer" containerID="840a4ee2da9d5e5bbe147cfd7570643a837738629ce60c03a58d3a5bf26b9e4c" Nov 22 11:39:32 crc kubenswrapper[4772]: E1122 11:39:32.150255 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:39:43 crc kubenswrapper[4772]: I1122 11:39:43.414261 4772 scope.go:117] "RemoveContainer" containerID="840a4ee2da9d5e5bbe147cfd7570643a837738629ce60c03a58d3a5bf26b9e4c" Nov 22 11:39:43 crc kubenswrapper[4772]: E1122 11:39:43.415395 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:39:54 crc kubenswrapper[4772]: I1122 11:39:54.414538 4772 scope.go:117] "RemoveContainer" containerID="840a4ee2da9d5e5bbe147cfd7570643a837738629ce60c03a58d3a5bf26b9e4c" Nov 22 11:39:54 crc kubenswrapper[4772]: E1122 11:39:54.416034 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:40:07 crc kubenswrapper[4772]: I1122 11:40:07.413735 4772 scope.go:117] "RemoveContainer" containerID="840a4ee2da9d5e5bbe147cfd7570643a837738629ce60c03a58d3a5bf26b9e4c" Nov 22 11:40:07 crc kubenswrapper[4772]: E1122 11:40:07.414702 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:40:22 crc kubenswrapper[4772]: I1122 11:40:22.414087 4772 scope.go:117] "RemoveContainer" containerID="840a4ee2da9d5e5bbe147cfd7570643a837738629ce60c03a58d3a5bf26b9e4c" Nov 22 11:40:22 crc kubenswrapper[4772]: E1122 11:40:22.415266 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:40:34 crc kubenswrapper[4772]: I1122 11:40:34.414034 4772 scope.go:117] "RemoveContainer" containerID="840a4ee2da9d5e5bbe147cfd7570643a837738629ce60c03a58d3a5bf26b9e4c" Nov 22 11:40:34 crc kubenswrapper[4772]: E1122 11:40:34.415538 
4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:40:47 crc kubenswrapper[4772]: I1122 11:40:47.414124 4772 scope.go:117] "RemoveContainer" containerID="840a4ee2da9d5e5bbe147cfd7570643a837738629ce60c03a58d3a5bf26b9e4c" Nov 22 11:40:47 crc kubenswrapper[4772]: E1122 11:40:47.415918 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:41:00 crc kubenswrapper[4772]: I1122 11:41:00.414416 4772 scope.go:117] "RemoveContainer" containerID="840a4ee2da9d5e5bbe147cfd7570643a837738629ce60c03a58d3a5bf26b9e4c" Nov 22 11:41:00 crc kubenswrapper[4772]: E1122 11:41:00.415765 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:41:11 crc kubenswrapper[4772]: I1122 11:41:11.434466 4772 scope.go:117] "RemoveContainer" containerID="840a4ee2da9d5e5bbe147cfd7570643a837738629ce60c03a58d3a5bf26b9e4c" Nov 22 11:41:11 crc kubenswrapper[4772]: E1122 11:41:11.435656 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:41:24 crc kubenswrapper[4772]: I1122 11:41:24.415268 4772 scope.go:117] "RemoveContainer" containerID="840a4ee2da9d5e5bbe147cfd7570643a837738629ce60c03a58d3a5bf26b9e4c" Nov 22 11:41:24 crc kubenswrapper[4772]: E1122 11:41:24.416466 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:41:39 crc kubenswrapper[4772]: I1122 11:41:39.413630 4772 scope.go:117] "RemoveContainer" containerID="840a4ee2da9d5e5bbe147cfd7570643a837738629ce60c03a58d3a5bf26b9e4c" Nov 22 11:41:39 crc kubenswrapper[4772]: E1122 11:41:39.414496 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:41:52 crc kubenswrapper[4772]: I1122 11:41:52.414209 4772 scope.go:117] "RemoveContainer" containerID="840a4ee2da9d5e5bbe147cfd7570643a837738629ce60c03a58d3a5bf26b9e4c" Nov 22 11:41:52 crc kubenswrapper[4772]: E1122 11:41:52.415559 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:42:04 crc kubenswrapper[4772]: I1122 11:42:04.413115 4772 scope.go:117] "RemoveContainer" containerID="840a4ee2da9d5e5bbe147cfd7570643a837738629ce60c03a58d3a5bf26b9e4c" Nov 22 11:42:04 crc kubenswrapper[4772]: E1122 11:42:04.413973 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:42:15 crc kubenswrapper[4772]: I1122 11:42:15.414564 4772 scope.go:117] "RemoveContainer" containerID="840a4ee2da9d5e5bbe147cfd7570643a837738629ce60c03a58d3a5bf26b9e4c" Nov 22 11:42:15 crc kubenswrapper[4772]: E1122 11:42:15.415716 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:42:29 crc kubenswrapper[4772]: I1122 11:42:29.413514 4772 scope.go:117] "RemoveContainer" containerID="840a4ee2da9d5e5bbe147cfd7570643a837738629ce60c03a58d3a5bf26b9e4c" Nov 22 11:42:29 crc kubenswrapper[4772]: E1122 11:42:29.414397 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:42:44 crc kubenswrapper[4772]: I1122 11:42:44.414168 4772 scope.go:117] "RemoveContainer" containerID="840a4ee2da9d5e5bbe147cfd7570643a837738629ce60c03a58d3a5bf26b9e4c" Nov 22 11:42:44 crc kubenswrapper[4772]: E1122 11:42:44.415529 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:42:58 crc kubenswrapper[4772]: I1122 11:42:58.414092 4772 scope.go:117] "RemoveContainer" containerID="840a4ee2da9d5e5bbe147cfd7570643a837738629ce60c03a58d3a5bf26b9e4c" Nov 22 11:42:58 crc kubenswrapper[4772]: E1122 11:42:58.414995 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:43:10 crc kubenswrapper[4772]: I1122 11:43:10.413939 4772 scope.go:117] "RemoveContainer" containerID="840a4ee2da9d5e5bbe147cfd7570643a837738629ce60c03a58d3a5bf26b9e4c" Nov 22 11:43:10 crc kubenswrapper[4772]: E1122 11:43:10.415353 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:43:21 crc kubenswrapper[4772]: I1122 11:43:21.423419 4772 scope.go:117] "RemoveContainer" containerID="840a4ee2da9d5e5bbe147cfd7570643a837738629ce60c03a58d3a5bf26b9e4c" Nov 22 11:43:21 crc kubenswrapper[4772]: E1122 11:43:21.425030 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:43:32 crc kubenswrapper[4772]: I1122 11:43:32.414820 4772 scope.go:117] "RemoveContainer" containerID="840a4ee2da9d5e5bbe147cfd7570643a837738629ce60c03a58d3a5bf26b9e4c" Nov 22 11:43:32 crc kubenswrapper[4772]: E1122 11:43:32.416429 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:43:44 crc kubenswrapper[4772]: I1122 11:43:44.413762 4772 scope.go:117] "RemoveContainer" containerID="840a4ee2da9d5e5bbe147cfd7570643a837738629ce60c03a58d3a5bf26b9e4c" Nov 22 11:43:44 crc kubenswrapper[4772]: E1122 11:43:44.414347 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:43:51 crc kubenswrapper[4772]: I1122 11:43:51.986991 4772 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-k9vwk"] Nov 22 11:43:51 crc kubenswrapper[4772]: E1122 11:43:51.988071 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41098796-75ca-4641-b273-8e35669f2ee7" containerName="registry-server" Nov 22 11:43:51 crc kubenswrapper[4772]: I1122 11:43:51.988091 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="41098796-75ca-4641-b273-8e35669f2ee7" containerName="registry-server" Nov 22 11:43:51 crc kubenswrapper[4772]: E1122 11:43:51.988116 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41098796-75ca-4641-b273-8e35669f2ee7" containerName="extract-content" Nov 22 11:43:51 crc kubenswrapper[4772]: I1122 11:43:51.988125 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="41098796-75ca-4641-b273-8e35669f2ee7" containerName="extract-content" Nov 22 11:43:51 crc kubenswrapper[4772]: E1122 11:43:51.988157 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41098796-75ca-4641-b273-8e35669f2ee7" containerName="extract-utilities" Nov 22 11:43:51 crc kubenswrapper[4772]: I1122 11:43:51.988167 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="41098796-75ca-4641-b273-8e35669f2ee7" containerName="extract-utilities" Nov 22 11:43:51 crc kubenswrapper[4772]: I1122 11:43:51.988373 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="41098796-75ca-4641-b273-8e35669f2ee7" containerName="registry-server" Nov 22 11:43:52 crc kubenswrapper[4772]: I1122 11:43:52.015246 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k9vwk" Nov 22 11:43:52 crc kubenswrapper[4772]: I1122 11:43:52.018442 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k9vwk"] Nov 22 11:43:52 crc kubenswrapper[4772]: I1122 11:43:52.129524 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87d15947-052f-4268-b002-a2d7afed5886-catalog-content\") pod \"community-operators-k9vwk\" (UID: \"87d15947-052f-4268-b002-a2d7afed5886\") " pod="openshift-marketplace/community-operators-k9vwk" Nov 22 11:43:52 crc kubenswrapper[4772]: I1122 11:43:52.129614 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87d15947-052f-4268-b002-a2d7afed5886-utilities\") pod \"community-operators-k9vwk\" (UID: \"87d15947-052f-4268-b002-a2d7afed5886\") " pod="openshift-marketplace/community-operators-k9vwk" Nov 22 11:43:52 crc kubenswrapper[4772]: I1122 11:43:52.129667 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xvqt\" (UniqueName: \"kubernetes.io/projected/87d15947-052f-4268-b002-a2d7afed5886-kube-api-access-5xvqt\") pod \"community-operators-k9vwk\" (UID: \"87d15947-052f-4268-b002-a2d7afed5886\") " pod="openshift-marketplace/community-operators-k9vwk" Nov 22 11:43:52 crc kubenswrapper[4772]: I1122 11:43:52.231265 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87d15947-052f-4268-b002-a2d7afed5886-catalog-content\") pod \"community-operators-k9vwk\" (UID: \"87d15947-052f-4268-b002-a2d7afed5886\") " pod="openshift-marketplace/community-operators-k9vwk" Nov 22 11:43:52 crc kubenswrapper[4772]: 
I1122 11:43:52.231327 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87d15947-052f-4268-b002-a2d7afed5886-utilities\") pod \"community-operators-k9vwk\" (UID: \"87d15947-052f-4268-b002-a2d7afed5886\") " pod="openshift-marketplace/community-operators-k9vwk" Nov 22 11:43:52 crc kubenswrapper[4772]: I1122 11:43:52.231363 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xvqt\" (UniqueName: \"kubernetes.io/projected/87d15947-052f-4268-b002-a2d7afed5886-kube-api-access-5xvqt\") pod \"community-operators-k9vwk\" (UID: \"87d15947-052f-4268-b002-a2d7afed5886\") " pod="openshift-marketplace/community-operators-k9vwk" Nov 22 11:43:52 crc kubenswrapper[4772]: I1122 11:43:52.232371 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87d15947-052f-4268-b002-a2d7afed5886-catalog-content\") pod \"community-operators-k9vwk\" (UID: \"87d15947-052f-4268-b002-a2d7afed5886\") " pod="openshift-marketplace/community-operators-k9vwk" Nov 22 11:43:52 crc kubenswrapper[4772]: I1122 11:43:52.232414 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87d15947-052f-4268-b002-a2d7afed5886-utilities\") pod \"community-operators-k9vwk\" (UID: \"87d15947-052f-4268-b002-a2d7afed5886\") " pod="openshift-marketplace/community-operators-k9vwk" Nov 22 11:43:52 crc kubenswrapper[4772]: I1122 11:43:52.254282 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xvqt\" (UniqueName: \"kubernetes.io/projected/87d15947-052f-4268-b002-a2d7afed5886-kube-api-access-5xvqt\") pod \"community-operators-k9vwk\" (UID: \"87d15947-052f-4268-b002-a2d7afed5886\") " pod="openshift-marketplace/community-operators-k9vwk" Nov 22 11:43:52 crc kubenswrapper[4772]: I1122 11:43:52.342698 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-k9vwk" Nov 22 11:43:52 crc kubenswrapper[4772]: I1122 11:43:52.653550 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k9vwk"] Nov 22 11:43:53 crc kubenswrapper[4772]: I1122 11:43:53.543521 4772 generic.go:334] "Generic (PLEG): container finished" podID="87d15947-052f-4268-b002-a2d7afed5886" containerID="67e26cae26618838dc59c975d70f2ef5557f71105c4611847262493dabac3352" exitCode=0 Nov 22 11:43:53 crc kubenswrapper[4772]: I1122 11:43:53.543593 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k9vwk" event={"ID":"87d15947-052f-4268-b002-a2d7afed5886","Type":"ContainerDied","Data":"67e26cae26618838dc59c975d70f2ef5557f71105c4611847262493dabac3352"} Nov 22 11:43:53 crc kubenswrapper[4772]: I1122 11:43:53.543866 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k9vwk" event={"ID":"87d15947-052f-4268-b002-a2d7afed5886","Type":"ContainerStarted","Data":"cd8be2679b9adf4f68ac9af733f046928b1b9a588cdd44f56606552b0207aeeb"} Nov 22 11:43:53 crc kubenswrapper[4772]: I1122 11:43:53.545432 4772 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 11:43:54 crc kubenswrapper[4772]: I1122 11:43:54.552391 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k9vwk" event={"ID":"87d15947-052f-4268-b002-a2d7afed5886","Type":"ContainerStarted","Data":"90bb72095865f65a2fa6b21a8f6c7f45f75b8b7b307c1db119f2cefb25db9485"} Nov 22 11:43:55 crc kubenswrapper[4772]: I1122 11:43:55.565966 4772 generic.go:334] "Generic (PLEG): container finished" podID="87d15947-052f-4268-b002-a2d7afed5886" containerID="90bb72095865f65a2fa6b21a8f6c7f45f75b8b7b307c1db119f2cefb25db9485" exitCode=0 Nov 22 11:43:55 crc kubenswrapper[4772]: I1122 11:43:55.566038 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k9vwk" event={"ID":"87d15947-052f-4268-b002-a2d7afed5886","Type":"ContainerDied","Data":"90bb72095865f65a2fa6b21a8f6c7f45f75b8b7b307c1db119f2cefb25db9485"} Nov 22 11:43:56 crc kubenswrapper[4772]: I1122 11:43:56.413423 4772 scope.go:117] "RemoveContainer" containerID="840a4ee2da9d5e5bbe147cfd7570643a837738629ce60c03a58d3a5bf26b9e4c" Nov 22 11:43:56 crc kubenswrapper[4772]: E1122 11:43:56.414181 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:43:56 crc kubenswrapper[4772]: I1122 11:43:56.577975 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k9vwk" event={"ID":"87d15947-052f-4268-b002-a2d7afed5886","Type":"ContainerStarted","Data":"89064129386c048ba5e1cb78008ea5ba683ff4b96432bbb8275932a2343ab800"} Nov 22 11:43:56 crc kubenswrapper[4772]: I1122 11:43:56.603823 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-k9vwk" podStartSLOduration=3.135266607 podStartE2EDuration="5.60378857s" podCreationTimestamp="2025-11-22 11:43:51 +0000 UTC" firstStartedPulling="2025-11-22 
11:43:53.545188044 +0000 UTC m=+3953.784632538" lastFinishedPulling="2025-11-22 11:43:56.013709987 +0000 UTC m=+3956.253154501" observedRunningTime="2025-11-22 11:43:56.602490658 +0000 UTC m=+3956.841935192" watchObservedRunningTime="2025-11-22 11:43:56.60378857 +0000 UTC m=+3956.843233094" Nov 22 11:44:02 crc kubenswrapper[4772]: I1122 11:44:02.343515 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-k9vwk" Nov 22 11:44:02 crc kubenswrapper[4772]: I1122 11:44:02.343896 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-k9vwk" Nov 22 11:44:02 crc kubenswrapper[4772]: I1122 11:44:02.412973 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-k9vwk" Nov 22 11:44:02 crc kubenswrapper[4772]: I1122 11:44:02.716201 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-k9vwk" Nov 22 11:44:02 crc kubenswrapper[4772]: I1122 11:44:02.790181 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k9vwk"] Nov 22 11:44:04 crc kubenswrapper[4772]: I1122 11:44:04.650644 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-k9vwk" podUID="87d15947-052f-4268-b002-a2d7afed5886" containerName="registry-server" containerID="cri-o://89064129386c048ba5e1cb78008ea5ba683ff4b96432bbb8275932a2343ab800" gracePeriod=2 Nov 22 11:44:05 crc kubenswrapper[4772]: I1122 11:44:05.109501 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k9vwk" Nov 22 11:44:05 crc kubenswrapper[4772]: I1122 11:44:05.178633 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xvqt\" (UniqueName: \"kubernetes.io/projected/87d15947-052f-4268-b002-a2d7afed5886-kube-api-access-5xvqt\") pod \"87d15947-052f-4268-b002-a2d7afed5886\" (UID: \"87d15947-052f-4268-b002-a2d7afed5886\") " Nov 22 11:44:05 crc kubenswrapper[4772]: I1122 11:44:05.179185 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87d15947-052f-4268-b002-a2d7afed5886-catalog-content\") pod \"87d15947-052f-4268-b002-a2d7afed5886\" (UID: \"87d15947-052f-4268-b002-a2d7afed5886\") " Nov 22 11:44:05 crc kubenswrapper[4772]: I1122 11:44:05.179334 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87d15947-052f-4268-b002-a2d7afed5886-utilities\") pod \"87d15947-052f-4268-b002-a2d7afed5886\" (UID: \"87d15947-052f-4268-b002-a2d7afed5886\") " Nov 22 11:44:05 crc kubenswrapper[4772]: I1122 11:44:05.180530 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87d15947-052f-4268-b002-a2d7afed5886-utilities" (OuterVolumeSpecName: "utilities") pod "87d15947-052f-4268-b002-a2d7afed5886" (UID: "87d15947-052f-4268-b002-a2d7afed5886"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:44:05 crc kubenswrapper[4772]: I1122 11:44:05.187989 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87d15947-052f-4268-b002-a2d7afed5886-kube-api-access-5xvqt" (OuterVolumeSpecName: "kube-api-access-5xvqt") pod "87d15947-052f-4268-b002-a2d7afed5886" (UID: "87d15947-052f-4268-b002-a2d7afed5886"). InnerVolumeSpecName "kube-api-access-5xvqt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:44:05 crc kubenswrapper[4772]: I1122 11:44:05.281641 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xvqt\" (UniqueName: \"kubernetes.io/projected/87d15947-052f-4268-b002-a2d7afed5886-kube-api-access-5xvqt\") on node \"crc\" DevicePath \"\"" Nov 22 11:44:05 crc kubenswrapper[4772]: I1122 11:44:05.281686 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87d15947-052f-4268-b002-a2d7afed5886-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 11:44:05 crc kubenswrapper[4772]: I1122 11:44:05.663419 4772 generic.go:334] "Generic (PLEG): container finished" podID="87d15947-052f-4268-b002-a2d7afed5886" containerID="89064129386c048ba5e1cb78008ea5ba683ff4b96432bbb8275932a2343ab800" exitCode=0 Nov 22 11:44:05 crc kubenswrapper[4772]: I1122 11:44:05.663509 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k9vwk" event={"ID":"87d15947-052f-4268-b002-a2d7afed5886","Type":"ContainerDied","Data":"89064129386c048ba5e1cb78008ea5ba683ff4b96432bbb8275932a2343ab800"} Nov 22 11:44:05 crc kubenswrapper[4772]: I1122 11:44:05.663555 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k9vwk" Nov 22 11:44:05 crc kubenswrapper[4772]: I1122 11:44:05.664353 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k9vwk" event={"ID":"87d15947-052f-4268-b002-a2d7afed5886","Type":"ContainerDied","Data":"cd8be2679b9adf4f68ac9af733f046928b1b9a588cdd44f56606552b0207aeeb"} Nov 22 11:44:05 crc kubenswrapper[4772]: I1122 11:44:05.664494 4772 scope.go:117] "RemoveContainer" containerID="89064129386c048ba5e1cb78008ea5ba683ff4b96432bbb8275932a2343ab800" Nov 22 11:44:05 crc kubenswrapper[4772]: I1122 11:44:05.688533 4772 scope.go:117] "RemoveContainer" containerID="90bb72095865f65a2fa6b21a8f6c7f45f75b8b7b307c1db119f2cefb25db9485" Nov 22 11:44:05 crc kubenswrapper[4772]: I1122 11:44:05.710318 4772 scope.go:117] "RemoveContainer" containerID="67e26cae26618838dc59c975d70f2ef5557f71105c4611847262493dabac3352" Nov 22 11:44:05 crc kubenswrapper[4772]: I1122 11:44:05.725062 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87d15947-052f-4268-b002-a2d7afed5886-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "87d15947-052f-4268-b002-a2d7afed5886" (UID: "87d15947-052f-4268-b002-a2d7afed5886"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:44:05 crc kubenswrapper[4772]: I1122 11:44:05.735353 4772 scope.go:117] "RemoveContainer" containerID="89064129386c048ba5e1cb78008ea5ba683ff4b96432bbb8275932a2343ab800" Nov 22 11:44:05 crc kubenswrapper[4772]: E1122 11:44:05.736101 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89064129386c048ba5e1cb78008ea5ba683ff4b96432bbb8275932a2343ab800\": container with ID starting with 89064129386c048ba5e1cb78008ea5ba683ff4b96432bbb8275932a2343ab800 not found: ID does not exist" containerID="89064129386c048ba5e1cb78008ea5ba683ff4b96432bbb8275932a2343ab800" Nov 22 11:44:05 crc kubenswrapper[4772]: I1122 11:44:05.736161 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89064129386c048ba5e1cb78008ea5ba683ff4b96432bbb8275932a2343ab800"} err="failed to get container status \"89064129386c048ba5e1cb78008ea5ba683ff4b96432bbb8275932a2343ab800\": rpc error: code = NotFound desc = could not find container \"89064129386c048ba5e1cb78008ea5ba683ff4b96432bbb8275932a2343ab800\": container with ID starting with 89064129386c048ba5e1cb78008ea5ba683ff4b96432bbb8275932a2343ab800 not found: ID does not exist" Nov 22 11:44:05 crc kubenswrapper[4772]: I1122 11:44:05.736204 4772 scope.go:117] "RemoveContainer" containerID="90bb72095865f65a2fa6b21a8f6c7f45f75b8b7b307c1db119f2cefb25db9485" Nov 22 11:44:05 crc kubenswrapper[4772]: E1122 11:44:05.737239 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90bb72095865f65a2fa6b21a8f6c7f45f75b8b7b307c1db119f2cefb25db9485\": container with ID starting with 90bb72095865f65a2fa6b21a8f6c7f45f75b8b7b307c1db119f2cefb25db9485 not found: ID does not exist" containerID="90bb72095865f65a2fa6b21a8f6c7f45f75b8b7b307c1db119f2cefb25db9485" Nov 22 11:44:05 crc kubenswrapper[4772]: I1122 11:44:05.737285 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90bb72095865f65a2fa6b21a8f6c7f45f75b8b7b307c1db119f2cefb25db9485"} err="failed to get container status \"90bb72095865f65a2fa6b21a8f6c7f45f75b8b7b307c1db119f2cefb25db9485\": rpc error: code = NotFound desc = could not find container \"90bb72095865f65a2fa6b21a8f6c7f45f75b8b7b307c1db119f2cefb25db9485\": container with ID starting with 90bb72095865f65a2fa6b21a8f6c7f45f75b8b7b307c1db119f2cefb25db9485 not found: ID does not exist" Nov 22 11:44:05 crc kubenswrapper[4772]: I1122 11:44:05.737319 4772 scope.go:117] "RemoveContainer" containerID="67e26cae26618838dc59c975d70f2ef5557f71105c4611847262493dabac3352" Nov 22 11:44:05 crc kubenswrapper[4772]: E1122 11:44:05.737644 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67e26cae26618838dc59c975d70f2ef5557f71105c4611847262493dabac3352\": container with ID starting with 67e26cae26618838dc59c975d70f2ef5557f71105c4611847262493dabac3352 not found: ID does not exist" containerID="67e26cae26618838dc59c975d70f2ef5557f71105c4611847262493dabac3352" Nov 22 11:44:05 crc kubenswrapper[4772]: I1122 11:44:05.737704 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67e26cae26618838dc59c975d70f2ef5557f71105c4611847262493dabac3352"} err="failed to get container status \"67e26cae26618838dc59c975d70f2ef5557f71105c4611847262493dabac3352\": rpc error: code = NotFound desc = could not 
find container \"67e26cae26618838dc59c975d70f2ef5557f71105c4611847262493dabac3352\": container with ID starting with 67e26cae26618838dc59c975d70f2ef5557f71105c4611847262493dabac3352 not found: ID does not exist" Nov 22 11:44:05 crc kubenswrapper[4772]: I1122 11:44:05.791313 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87d15947-052f-4268-b002-a2d7afed5886-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 11:44:06 crc kubenswrapper[4772]: I1122 11:44:06.022839 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k9vwk"] Nov 22 11:44:06 crc kubenswrapper[4772]: I1122 11:44:06.029385 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-k9vwk"] Nov 22 11:44:07 crc kubenswrapper[4772]: I1122 11:44:07.413838 4772 scope.go:117] "RemoveContainer" containerID="840a4ee2da9d5e5bbe147cfd7570643a837738629ce60c03a58d3a5bf26b9e4c" Nov 22 11:44:07 crc kubenswrapper[4772]: E1122 11:44:07.416137 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:44:07 crc kubenswrapper[4772]: I1122 11:44:07.423364 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87d15947-052f-4268-b002-a2d7afed5886" path="/var/lib/kubelet/pods/87d15947-052f-4268-b002-a2d7afed5886/volumes" Nov 22 11:44:22 crc kubenswrapper[4772]: I1122 11:44:22.413721 4772 scope.go:117] "RemoveContainer" containerID="840a4ee2da9d5e5bbe147cfd7570643a837738629ce60c03a58d3a5bf26b9e4c" Nov 22 11:44:22 crc kubenswrapper[4772]: E1122 11:44:22.415027 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:44:34 crc kubenswrapper[4772]: I1122 11:44:34.414580 4772 scope.go:117] "RemoveContainer" containerID="840a4ee2da9d5e5bbe147cfd7570643a837738629ce60c03a58d3a5bf26b9e4c" Nov 22 11:44:34 crc kubenswrapper[4772]: I1122 11:44:34.956644 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" event={"ID":"2386c238-461f-4956-940f-ac3c26eb052e","Type":"ContainerStarted","Data":"91a958295c5b64904032ec48e351507c4fa39af68645da937535bcafd2900839"} Nov 22 11:45:00 crc kubenswrapper[4772]: I1122 11:45:00.160694 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396865-8fmbw"] Nov 22 11:45:00 crc kubenswrapper[4772]: E1122 11:45:00.162105 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87d15947-052f-4268-b002-a2d7afed5886" containerName="registry-server" Nov 22 11:45:00 crc kubenswrapper[4772]: I1122 11:45:00.162126 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="87d15947-052f-4268-b002-a2d7afed5886" containerName="registry-server" Nov 22 11:45:00 crc 
kubenswrapper[4772]: E1122 11:45:00.162147 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87d15947-052f-4268-b002-a2d7afed5886" containerName="extract-content" Nov 22 11:45:00 crc kubenswrapper[4772]: I1122 11:45:00.162154 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="87d15947-052f-4268-b002-a2d7afed5886" containerName="extract-content" Nov 22 11:45:00 crc kubenswrapper[4772]: E1122 11:45:00.162174 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87d15947-052f-4268-b002-a2d7afed5886" containerName="extract-utilities" Nov 22 11:45:00 crc kubenswrapper[4772]: I1122 11:45:00.162183 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="87d15947-052f-4268-b002-a2d7afed5886" containerName="extract-utilities" Nov 22 11:45:00 crc kubenswrapper[4772]: I1122 11:45:00.162383 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="87d15947-052f-4268-b002-a2d7afed5886" containerName="registry-server" Nov 22 11:45:00 crc kubenswrapper[4772]: I1122 11:45:00.163422 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396865-8fmbw" Nov 22 11:45:00 crc kubenswrapper[4772]: I1122 11:45:00.166808 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 22 11:45:00 crc kubenswrapper[4772]: I1122 11:45:00.167504 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 22 11:45:00 crc kubenswrapper[4772]: I1122 11:45:00.169805 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396865-8fmbw"] Nov 22 11:45:00 crc kubenswrapper[4772]: I1122 11:45:00.316120 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pq7hs\" (UniqueName: \"kubernetes.io/projected/58350978-93aa-401c-8930-a3c1b2550917-kube-api-access-pq7hs\") pod \"collect-profiles-29396865-8fmbw\" (UID: \"58350978-93aa-401c-8930-a3c1b2550917\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396865-8fmbw" Nov 22 11:45:00 crc kubenswrapper[4772]: I1122 11:45:00.317152 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/58350978-93aa-401c-8930-a3c1b2550917-config-volume\") pod \"collect-profiles-29396865-8fmbw\" (UID: \"58350978-93aa-401c-8930-a3c1b2550917\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396865-8fmbw" Nov 22 11:45:00 crc kubenswrapper[4772]: I1122 11:45:00.317310 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/58350978-93aa-401c-8930-a3c1b2550917-secret-volume\") pod \"collect-profiles-29396865-8fmbw\" (UID: \"58350978-93aa-401c-8930-a3c1b2550917\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396865-8fmbw" Nov 22 11:45:00 crc kubenswrapper[4772]: I1122 11:45:00.419362 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pq7hs\" (UniqueName: \"kubernetes.io/projected/58350978-93aa-401c-8930-a3c1b2550917-kube-api-access-pq7hs\") pod \"collect-profiles-29396865-8fmbw\" (UID: \"58350978-93aa-401c-8930-a3c1b2550917\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29396865-8fmbw" Nov 22 11:45:00 crc kubenswrapper[4772]: I1122 11:45:00.419454 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/58350978-93aa-401c-8930-a3c1b2550917-config-volume\") pod \"collect-profiles-29396865-8fmbw\" (UID: \"58350978-93aa-401c-8930-a3c1b2550917\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396865-8fmbw" Nov 22 11:45:00 crc kubenswrapper[4772]: I1122 11:45:00.419494 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/58350978-93aa-401c-8930-a3c1b2550917-secret-volume\") pod \"collect-profiles-29396865-8fmbw\" (UID: \"58350978-93aa-401c-8930-a3c1b2550917\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396865-8fmbw" Nov 22 11:45:00 crc kubenswrapper[4772]: I1122 11:45:00.421368 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/58350978-93aa-401c-8930-a3c1b2550917-config-volume\") pod \"collect-profiles-29396865-8fmbw\" (UID: \"58350978-93aa-401c-8930-a3c1b2550917\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396865-8fmbw" Nov 22 11:45:00 crc kubenswrapper[4772]: I1122 11:45:00.428389 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/58350978-93aa-401c-8930-a3c1b2550917-secret-volume\") pod \"collect-profiles-29396865-8fmbw\" (UID: \"58350978-93aa-401c-8930-a3c1b2550917\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396865-8fmbw" Nov 22 11:45:00 crc kubenswrapper[4772]: I1122 11:45:00.441872 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pq7hs\" (UniqueName: \"kubernetes.io/projected/58350978-93aa-401c-8930-a3c1b2550917-kube-api-access-pq7hs\") pod \"collect-profiles-29396865-8fmbw\" (UID: \"58350978-93aa-401c-8930-a3c1b2550917\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396865-8fmbw" Nov 22 11:45:00 crc kubenswrapper[4772]: I1122 11:45:00.502270 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396865-8fmbw" Nov 22 11:45:00 crc kubenswrapper[4772]: I1122 11:45:00.956707 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396865-8fmbw"] Nov 22 11:45:01 crc kubenswrapper[4772]: I1122 11:45:01.208690 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396865-8fmbw" event={"ID":"58350978-93aa-401c-8930-a3c1b2550917","Type":"ContainerStarted","Data":"d574924aea20dae214b4ce48d79870e2b8fb87fe7bdea2b8b98300c05ede4d3d"} Nov 22 11:45:01 crc kubenswrapper[4772]: I1122 11:45:01.208761 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396865-8fmbw" event={"ID":"58350978-93aa-401c-8930-a3c1b2550917","Type":"ContainerStarted","Data":"02bfae5387ac6975640ededa0c84a2c7688b85283ace1aa8088e2a25f0d09f05"} Nov 22 11:45:01 crc kubenswrapper[4772]: I1122 11:45:01.557694 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29396865-8fmbw" podStartSLOduration=1.5576581059999999 podStartE2EDuration="1.557658106s" podCreationTimestamp="2025-11-22 11:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 11:45:01.227893677 +0000 UTC m=+4021.467338171" watchObservedRunningTime="2025-11-22 11:45:01.557658106 +0000 UTC m=+4021.797102600" Nov 22 11:45:01 crc kubenswrapper[4772]: I1122 11:45:01.559267 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pqnqb"] Nov 22 11:45:01 crc kubenswrapper[4772]: I1122 11:45:01.569834 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pqnqb" Nov 22 11:45:01 crc kubenswrapper[4772]: I1122 11:45:01.580320 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pqnqb"] Nov 22 11:45:01 crc kubenswrapper[4772]: I1122 11:45:01.742357 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgkgj\" (UniqueName: \"kubernetes.io/projected/9c51dd77-61fe-4f47-9c4f-4d29d6f9bd42-kube-api-access-xgkgj\") pod \"redhat-operators-pqnqb\" (UID: \"9c51dd77-61fe-4f47-9c4f-4d29d6f9bd42\") " pod="openshift-marketplace/redhat-operators-pqnqb" Nov 22 11:45:01 crc kubenswrapper[4772]: I1122 11:45:01.742587 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c51dd77-61fe-4f47-9c4f-4d29d6f9bd42-utilities\") pod \"redhat-operators-pqnqb\" (UID: \"9c51dd77-61fe-4f47-9c4f-4d29d6f9bd42\") " pod="openshift-marketplace/redhat-operators-pqnqb" Nov 22 11:45:01 crc kubenswrapper[4772]: I1122 11:45:01.742737 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c51dd77-61fe-4f47-9c4f-4d29d6f9bd42-catalog-content\") pod \"redhat-operators-pqnqb\" (UID: \"9c51dd77-61fe-4f47-9c4f-4d29d6f9bd42\") " pod="openshift-marketplace/redhat-operators-pqnqb" Nov 22 11:45:01 crc kubenswrapper[4772]: I1122 11:45:01.845198 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgkgj\" (UniqueName: \"kubernetes.io/projected/9c51dd77-61fe-4f47-9c4f-4d29d6f9bd42-kube-api-access-xgkgj\") pod \"redhat-operators-pqnqb\" (UID: \"9c51dd77-61fe-4f47-9c4f-4d29d6f9bd42\") " pod="openshift-marketplace/redhat-operators-pqnqb" Nov 22 11:45:01 crc kubenswrapper[4772]: I1122 11:45:01.845812 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c51dd77-61fe-4f47-9c4f-4d29d6f9bd42-utilities\") pod \"redhat-operators-pqnqb\" (UID: \"9c51dd77-61fe-4f47-9c4f-4d29d6f9bd42\") " pod="openshift-marketplace/redhat-operators-pqnqb" Nov 22 11:45:01 crc kubenswrapper[4772]: I1122 11:45:01.845874 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c51dd77-61fe-4f47-9c4f-4d29d6f9bd42-catalog-content\") pod \"redhat-operators-pqnqb\" (UID: \"9c51dd77-61fe-4f47-9c4f-4d29d6f9bd42\") " pod="openshift-marketplace/redhat-operators-pqnqb" Nov 22 11:45:01 crc kubenswrapper[4772]: I1122 11:45:01.846484 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c51dd77-61fe-4f47-9c4f-4d29d6f9bd42-utilities\") pod \"redhat-operators-pqnqb\" (UID: \"9c51dd77-61fe-4f47-9c4f-4d29d6f9bd42\") " pod="openshift-marketplace/redhat-operators-pqnqb" Nov 22 11:45:01 crc kubenswrapper[4772]: I1122 11:45:01.846465 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c51dd77-61fe-4f47-9c4f-4d29d6f9bd42-catalog-content\") pod \"redhat-operators-pqnqb\" (UID: \"9c51dd77-61fe-4f47-9c4f-4d29d6f9bd42\") " pod="openshift-marketplace/redhat-operators-pqnqb" Nov 22 11:45:01 crc kubenswrapper[4772]: I1122 11:45:01.869723 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-xgkgj\" (UniqueName: \"kubernetes.io/projected/9c51dd77-61fe-4f47-9c4f-4d29d6f9bd42-kube-api-access-xgkgj\") pod \"redhat-operators-pqnqb\" (UID: \"9c51dd77-61fe-4f47-9c4f-4d29d6f9bd42\") " pod="openshift-marketplace/redhat-operators-pqnqb" Nov 22 11:45:01 crc kubenswrapper[4772]: I1122 11:45:01.913702 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pqnqb" Nov 22 11:45:02 crc kubenswrapper[4772]: I1122 11:45:02.221360 4772 generic.go:334] "Generic (PLEG): container finished" podID="58350978-93aa-401c-8930-a3c1b2550917" containerID="d574924aea20dae214b4ce48d79870e2b8fb87fe7bdea2b8b98300c05ede4d3d" exitCode=0 Nov 22 11:45:02 crc kubenswrapper[4772]: I1122 11:45:02.221427 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396865-8fmbw" event={"ID":"58350978-93aa-401c-8930-a3c1b2550917","Type":"ContainerDied","Data":"d574924aea20dae214b4ce48d79870e2b8fb87fe7bdea2b8b98300c05ede4d3d"} Nov 22 11:45:02 crc kubenswrapper[4772]: I1122 11:45:02.388489 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pqnqb"] Nov 22 11:45:02 crc kubenswrapper[4772]: W1122 11:45:02.405078 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c51dd77_61fe_4f47_9c4f_4d29d6f9bd42.slice/crio-58b09cbd018831c0543cdb745063687448465a945ad645d66aaf04d409dec4af WatchSource:0}: Error finding container 58b09cbd018831c0543cdb745063687448465a945ad645d66aaf04d409dec4af: Status 404 returned error can't find the container with id 58b09cbd018831c0543cdb745063687448465a945ad645d66aaf04d409dec4af Nov 22 11:45:03 crc kubenswrapper[4772]: I1122 11:45:03.232026 4772 generic.go:334] "Generic (PLEG): container finished" podID="9c51dd77-61fe-4f47-9c4f-4d29d6f9bd42" containerID="7916d7788321a53e69c8cc535f15e4ef51d80431a2ca8d40b13c36ee76bc9985" exitCode=0 Nov 22 11:45:03 crc kubenswrapper[4772]: I1122 11:45:03.232198 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pqnqb" event={"ID":"9c51dd77-61fe-4f47-9c4f-4d29d6f9bd42","Type":"ContainerDied","Data":"7916d7788321a53e69c8cc535f15e4ef51d80431a2ca8d40b13c36ee76bc9985"} Nov 22 11:45:03 crc kubenswrapper[4772]: I1122 11:45:03.232245 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pqnqb" event={"ID":"9c51dd77-61fe-4f47-9c4f-4d29d6f9bd42","Type":"ContainerStarted","Data":"58b09cbd018831c0543cdb745063687448465a945ad645d66aaf04d409dec4af"} Nov 22 11:45:03 crc kubenswrapper[4772]: I1122 11:45:03.527012 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396865-8fmbw" Nov 22 11:45:03 crc kubenswrapper[4772]: I1122 11:45:03.678966 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/58350978-93aa-401c-8930-a3c1b2550917-config-volume\") pod \"58350978-93aa-401c-8930-a3c1b2550917\" (UID: \"58350978-93aa-401c-8930-a3c1b2550917\") " Nov 22 11:45:03 crc kubenswrapper[4772]: I1122 11:45:03.680137 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58350978-93aa-401c-8930-a3c1b2550917-config-volume" (OuterVolumeSpecName: "config-volume") pod "58350978-93aa-401c-8930-a3c1b2550917" (UID: "58350978-93aa-401c-8930-a3c1b2550917"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 11:45:03 crc kubenswrapper[4772]: I1122 11:45:03.680288 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pq7hs\" (UniqueName: \"kubernetes.io/projected/58350978-93aa-401c-8930-a3c1b2550917-kube-api-access-pq7hs\") pod \"58350978-93aa-401c-8930-a3c1b2550917\" (UID: \"58350978-93aa-401c-8930-a3c1b2550917\") " Nov 22 11:45:03 crc kubenswrapper[4772]: I1122 11:45:03.681326 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/58350978-93aa-401c-8930-a3c1b2550917-secret-volume\") pod \"58350978-93aa-401c-8930-a3c1b2550917\" (UID: \"58350978-93aa-401c-8930-a3c1b2550917\") " Nov 22 11:45:03 crc kubenswrapper[4772]: I1122 11:45:03.682168 4772 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/58350978-93aa-401c-8930-a3c1b2550917-config-volume\") on node \"crc\" DevicePath \"\"" Nov 22 11:45:03 crc kubenswrapper[4772]: I1122 11:45:03.689339 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58350978-93aa-401c-8930-a3c1b2550917-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "58350978-93aa-401c-8930-a3c1b2550917" (UID: "58350978-93aa-401c-8930-a3c1b2550917"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:45:03 crc kubenswrapper[4772]: I1122 11:45:03.689392 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58350978-93aa-401c-8930-a3c1b2550917-kube-api-access-pq7hs" (OuterVolumeSpecName: "kube-api-access-pq7hs") pod "58350978-93aa-401c-8930-a3c1b2550917" (UID: "58350978-93aa-401c-8930-a3c1b2550917"). InnerVolumeSpecName "kube-api-access-pq7hs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:45:03 crc kubenswrapper[4772]: I1122 11:45:03.782890 4772 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/58350978-93aa-401c-8930-a3c1b2550917-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 22 11:45:03 crc kubenswrapper[4772]: I1122 11:45:03.782951 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pq7hs\" (UniqueName: \"kubernetes.io/projected/58350978-93aa-401c-8930-a3c1b2550917-kube-api-access-pq7hs\") on node \"crc\" DevicePath \"\"" Nov 22 11:45:04 crc kubenswrapper[4772]: I1122 11:45:04.243817 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pqnqb" event={"ID":"9c51dd77-61fe-4f47-9c4f-4d29d6f9bd42","Type":"ContainerStarted","Data":"d2fca0191d6ab990a8ce7db21771a7ad692b7c9f4ab9c4a46f167d98dcf85cd1"} Nov 22 11:45:04 crc kubenswrapper[4772]: I1122 11:45:04.246384 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396865-8fmbw" event={"ID":"58350978-93aa-401c-8930-a3c1b2550917","Type":"ContainerDied","Data":"02bfae5387ac6975640ededa0c84a2c7688b85283ace1aa8088e2a25f0d09f05"} Nov 22 11:45:04 crc kubenswrapper[4772]: I1122 11:45:04.246453 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02bfae5387ac6975640ededa0c84a2c7688b85283ace1aa8088e2a25f0d09f05" Nov 22 11:45:04 crc kubenswrapper[4772]: I1122 11:45:04.246451 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396865-8fmbw" Nov 22 11:45:04 crc kubenswrapper[4772]: I1122 11:45:04.314815 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396820-g7jnf"] Nov 22 11:45:04 crc kubenswrapper[4772]: I1122 11:45:04.321140 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396820-g7jnf"] Nov 22 11:45:05 crc kubenswrapper[4772]: I1122 11:45:05.256363 4772 generic.go:334] "Generic (PLEG): container finished" podID="9c51dd77-61fe-4f47-9c4f-4d29d6f9bd42" containerID="d2fca0191d6ab990a8ce7db21771a7ad692b7c9f4ab9c4a46f167d98dcf85cd1" exitCode=0 Nov 22 11:45:05 crc kubenswrapper[4772]: I1122 11:45:05.256466 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pqnqb" event={"ID":"9c51dd77-61fe-4f47-9c4f-4d29d6f9bd42","Type":"ContainerDied","Data":"d2fca0191d6ab990a8ce7db21771a7ad692b7c9f4ab9c4a46f167d98dcf85cd1"} Nov 22 11:45:05 crc kubenswrapper[4772]: I1122 11:45:05.425357 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75820045-4178-4355-926b-6a7f0effd0fc" path="/var/lib/kubelet/pods/75820045-4178-4355-926b-6a7f0effd0fc/volumes" Nov 22 11:45:06 crc kubenswrapper[4772]: I1122 11:45:06.272041 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pqnqb" event={"ID":"9c51dd77-61fe-4f47-9c4f-4d29d6f9bd42","Type":"ContainerStarted","Data":"622e5a6a7310f2a3e4b56afc363df4912b137e075534b04766febdeedd0681d9"} Nov 22 11:45:06 crc kubenswrapper[4772]: I1122 11:45:06.305984 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pqnqb" podStartSLOduration=2.781400182 podStartE2EDuration="5.305949853s" podCreationTimestamp="2025-11-22 11:45:01 +0000 UTC" 
firstStartedPulling="2025-11-22 11:45:03.236121096 +0000 UTC m=+4023.475565600" lastFinishedPulling="2025-11-22 11:45:05.760670777 +0000 UTC m=+4026.000115271" observedRunningTime="2025-11-22 11:45:06.298524638 +0000 UTC m=+4026.537969192" watchObservedRunningTime="2025-11-22 11:45:06.305949853 +0000 UTC m=+4026.545394407" Nov 22 11:45:10 crc kubenswrapper[4772]: I1122 11:45:10.204847 4772 scope.go:117] "RemoveContainer" containerID="176b6886e9065cbcfbf1f7bb309324d66d3dd67e9eea3046a4ce98351804a87c" Nov 22 11:45:11 crc kubenswrapper[4772]: I1122 11:45:11.914946 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pqnqb" Nov 22 11:45:11 crc kubenswrapper[4772]: I1122 11:45:11.917976 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pqnqb" Nov 22 11:45:12 crc kubenswrapper[4772]: I1122 11:45:12.996348 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pqnqb" podUID="9c51dd77-61fe-4f47-9c4f-4d29d6f9bd42" containerName="registry-server" probeResult="failure" output=< Nov 22 11:45:12 crc kubenswrapper[4772]: timeout: failed to connect service ":50051" within 1s Nov 22 11:45:12 crc kubenswrapper[4772]: > Nov 22 11:45:22 crc kubenswrapper[4772]: I1122 11:45:22.000229 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pqnqb" Nov 22 11:45:22 crc kubenswrapper[4772]: I1122 11:45:22.067826 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pqnqb" Nov 22 11:45:22 crc kubenswrapper[4772]: I1122 11:45:22.251118 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pqnqb"] Nov 22 11:45:23 crc kubenswrapper[4772]: I1122 11:45:23.458411 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pqnqb" podUID="9c51dd77-61fe-4f47-9c4f-4d29d6f9bd42" containerName="registry-server" containerID="cri-o://622e5a6a7310f2a3e4b56afc363df4912b137e075534b04766febdeedd0681d9" gracePeriod=2 Nov 22 11:45:23 crc kubenswrapper[4772]: I1122 11:45:23.885438 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pqnqb" Nov 22 11:45:24 crc kubenswrapper[4772]: I1122 11:45:24.031771 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c51dd77-61fe-4f47-9c4f-4d29d6f9bd42-catalog-content\") pod \"9c51dd77-61fe-4f47-9c4f-4d29d6f9bd42\" (UID: \"9c51dd77-61fe-4f47-9c4f-4d29d6f9bd42\") " Nov 22 11:45:24 crc kubenswrapper[4772]: I1122 11:45:24.031923 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c51dd77-61fe-4f47-9c4f-4d29d6f9bd42-utilities\") pod \"9c51dd77-61fe-4f47-9c4f-4d29d6f9bd42\" (UID: \"9c51dd77-61fe-4f47-9c4f-4d29d6f9bd42\") " Nov 22 11:45:24 crc kubenswrapper[4772]: I1122 11:45:24.032190 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgkgj\" (UniqueName: \"kubernetes.io/projected/9c51dd77-61fe-4f47-9c4f-4d29d6f9bd42-kube-api-access-xgkgj\") pod \"9c51dd77-61fe-4f47-9c4f-4d29d6f9bd42\" (UID: \"9c51dd77-61fe-4f47-9c4f-4d29d6f9bd42\") " Nov 22 11:45:24 crc kubenswrapper[4772]: I1122 11:45:24.033488 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c51dd77-61fe-4f47-9c4f-4d29d6f9bd42-utilities" (OuterVolumeSpecName: "utilities") pod "9c51dd77-61fe-4f47-9c4f-4d29d6f9bd42" (UID: "9c51dd77-61fe-4f47-9c4f-4d29d6f9bd42"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:45:24 crc kubenswrapper[4772]: I1122 11:45:24.042106 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c51dd77-61fe-4f47-9c4f-4d29d6f9bd42-kube-api-access-xgkgj" (OuterVolumeSpecName: "kube-api-access-xgkgj") pod "9c51dd77-61fe-4f47-9c4f-4d29d6f9bd42" (UID: "9c51dd77-61fe-4f47-9c4f-4d29d6f9bd42"). InnerVolumeSpecName "kube-api-access-xgkgj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:45:24 crc kubenswrapper[4772]: I1122 11:45:24.133974 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgkgj\" (UniqueName: \"kubernetes.io/projected/9c51dd77-61fe-4f47-9c4f-4d29d6f9bd42-kube-api-access-xgkgj\") on node \"crc\" DevicePath \"\"" Nov 22 11:45:24 crc kubenswrapper[4772]: I1122 11:45:24.134015 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c51dd77-61fe-4f47-9c4f-4d29d6f9bd42-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 11:45:24 crc kubenswrapper[4772]: I1122 11:45:24.137031 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c51dd77-61fe-4f47-9c4f-4d29d6f9bd42-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9c51dd77-61fe-4f47-9c4f-4d29d6f9bd42" (UID: "9c51dd77-61fe-4f47-9c4f-4d29d6f9bd42"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:45:24 crc kubenswrapper[4772]: I1122 11:45:24.236456 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c51dd77-61fe-4f47-9c4f-4d29d6f9bd42-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 11:45:24 crc kubenswrapper[4772]: I1122 11:45:24.469096 4772 generic.go:334] "Generic (PLEG): container finished" podID="9c51dd77-61fe-4f47-9c4f-4d29d6f9bd42" containerID="622e5a6a7310f2a3e4b56afc363df4912b137e075534b04766febdeedd0681d9" exitCode=0 Nov 22 11:45:24 crc kubenswrapper[4772]: I1122 11:45:24.469156 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pqnqb" event={"ID":"9c51dd77-61fe-4f47-9c4f-4d29d6f9bd42","Type":"ContainerDied","Data":"622e5a6a7310f2a3e4b56afc363df4912b137e075534b04766febdeedd0681d9"} Nov 22 11:45:24 crc kubenswrapper[4772]: I1122 11:45:24.469188 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pqnqb" Nov 22 11:45:24 crc kubenswrapper[4772]: I1122 11:45:24.469212 4772 scope.go:117] "RemoveContainer" containerID="622e5a6a7310f2a3e4b56afc363df4912b137e075534b04766febdeedd0681d9" Nov 22 11:45:24 crc kubenswrapper[4772]: I1122 11:45:24.469195 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pqnqb" event={"ID":"9c51dd77-61fe-4f47-9c4f-4d29d6f9bd42","Type":"ContainerDied","Data":"58b09cbd018831c0543cdb745063687448465a945ad645d66aaf04d409dec4af"} Nov 22 11:45:24 crc kubenswrapper[4772]: I1122 11:45:24.502119 4772 scope.go:117] "RemoveContainer" containerID="d2fca0191d6ab990a8ce7db21771a7ad692b7c9f4ab9c4a46f167d98dcf85cd1" Nov 22 11:45:24 crc kubenswrapper[4772]: I1122 11:45:24.509567 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pqnqb"] Nov 22 11:45:24 crc kubenswrapper[4772]: I1122 11:45:24.517394 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pqnqb"] Nov 22 11:45:24 crc kubenswrapper[4772]: I1122 11:45:24.531839 4772 scope.go:117] "RemoveContainer" containerID="7916d7788321a53e69c8cc535f15e4ef51d80431a2ca8d40b13c36ee76bc9985" Nov 22 11:45:24 crc kubenswrapper[4772]: I1122 11:45:24.551407 4772 scope.go:117] "RemoveContainer" containerID="622e5a6a7310f2a3e4b56afc363df4912b137e075534b04766febdeedd0681d9" Nov 22 11:45:24 crc kubenswrapper[4772]: E1122 11:45:24.551921 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"622e5a6a7310f2a3e4b56afc363df4912b137e075534b04766febdeedd0681d9\": container with ID starting with 622e5a6a7310f2a3e4b56afc363df4912b137e075534b04766febdeedd0681d9 not found: ID does not exist" containerID="622e5a6a7310f2a3e4b56afc363df4912b137e075534b04766febdeedd0681d9" Nov 22 11:45:24 crc kubenswrapper[4772]: I1122 11:45:24.551961 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"622e5a6a7310f2a3e4b56afc363df4912b137e075534b04766febdeedd0681d9"} err="failed to get container status \"622e5a6a7310f2a3e4b56afc363df4912b137e075534b04766febdeedd0681d9\": rpc error: code = NotFound desc = could not find container \"622e5a6a7310f2a3e4b56afc363df4912b137e075534b04766febdeedd0681d9\": container with ID starting with 622e5a6a7310f2a3e4b56afc363df4912b137e075534b04766febdeedd0681d9 not found: ID does not exist" Nov 22 11:45:24 crc 
kubenswrapper[4772]: I1122 11:45:24.551994 4772 scope.go:117] "RemoveContainer" containerID="d2fca0191d6ab990a8ce7db21771a7ad692b7c9f4ab9c4a46f167d98dcf85cd1" Nov 22 11:45:24 crc kubenswrapper[4772]: E1122 11:45:24.552358 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2fca0191d6ab990a8ce7db21771a7ad692b7c9f4ab9c4a46f167d98dcf85cd1\": container with ID starting with d2fca0191d6ab990a8ce7db21771a7ad692b7c9f4ab9c4a46f167d98dcf85cd1 not found: ID does not exist" containerID="d2fca0191d6ab990a8ce7db21771a7ad692b7c9f4ab9c4a46f167d98dcf85cd1" Nov 22 11:45:24 crc kubenswrapper[4772]: I1122 11:45:24.552390 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2fca0191d6ab990a8ce7db21771a7ad692b7c9f4ab9c4a46f167d98dcf85cd1"} err="failed to get container status \"d2fca0191d6ab990a8ce7db21771a7ad692b7c9f4ab9c4a46f167d98dcf85cd1\": rpc error: code = NotFound desc = could not find container \"d2fca0191d6ab990a8ce7db21771a7ad692b7c9f4ab9c4a46f167d98dcf85cd1\": container with ID starting with d2fca0191d6ab990a8ce7db21771a7ad692b7c9f4ab9c4a46f167d98dcf85cd1 not found: ID does not exist" Nov 22 11:45:24 crc kubenswrapper[4772]: I1122 11:45:24.552405 4772 scope.go:117] "RemoveContainer" containerID="7916d7788321a53e69c8cc535f15e4ef51d80431a2ca8d40b13c36ee76bc9985" Nov 22 11:45:24 crc kubenswrapper[4772]: E1122 11:45:24.552769 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7916d7788321a53e69c8cc535f15e4ef51d80431a2ca8d40b13c36ee76bc9985\": container with ID starting with 7916d7788321a53e69c8cc535f15e4ef51d80431a2ca8d40b13c36ee76bc9985 not found: ID does not exist" containerID="7916d7788321a53e69c8cc535f15e4ef51d80431a2ca8d40b13c36ee76bc9985" Nov 22 11:45:24 crc kubenswrapper[4772]: I1122 11:45:24.552839 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7916d7788321a53e69c8cc535f15e4ef51d80431a2ca8d40b13c36ee76bc9985"} err="failed to get container status \"7916d7788321a53e69c8cc535f15e4ef51d80431a2ca8d40b13c36ee76bc9985\": rpc error: code = NotFound desc = could not find container \"7916d7788321a53e69c8cc535f15e4ef51d80431a2ca8d40b13c36ee76bc9985\": container with ID starting with 7916d7788321a53e69c8cc535f15e4ef51d80431a2ca8d40b13c36ee76bc9985 not found: ID does not exist" Nov 22 11:45:25 crc kubenswrapper[4772]: I1122 11:45:25.421804 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c51dd77-61fe-4f47-9c4f-4d29d6f9bd42" path="/var/lib/kubelet/pods/9c51dd77-61fe-4f47-9c4f-4d29d6f9bd42/volumes" Nov 22 11:46:18 crc kubenswrapper[4772]: I1122 11:46:18.329413 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5bv4x"] Nov 22 11:46:18 crc kubenswrapper[4772]: E1122 11:46:18.330458 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58350978-93aa-401c-8930-a3c1b2550917" containerName="collect-profiles" Nov 22 11:46:18 crc kubenswrapper[4772]: I1122 11:46:18.330475 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="58350978-93aa-401c-8930-a3c1b2550917" containerName="collect-profiles" Nov 22 11:46:18 crc kubenswrapper[4772]: E1122 11:46:18.330493 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c51dd77-61fe-4f47-9c4f-4d29d6f9bd42" containerName="registry-server" Nov 22 11:46:18 crc kubenswrapper[4772]: I1122 11:46:18.330499 4772 
state_mem.go:107] "Deleted CPUSet assignment" podUID="9c51dd77-61fe-4f47-9c4f-4d29d6f9bd42" containerName="registry-server" Nov 22 11:46:18 crc kubenswrapper[4772]: E1122 11:46:18.330513 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c51dd77-61fe-4f47-9c4f-4d29d6f9bd42" containerName="extract-utilities" Nov 22 11:46:18 crc kubenswrapper[4772]: I1122 11:46:18.330518 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c51dd77-61fe-4f47-9c4f-4d29d6f9bd42" containerName="extract-utilities" Nov 22 11:46:18 crc kubenswrapper[4772]: E1122 11:46:18.330539 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c51dd77-61fe-4f47-9c4f-4d29d6f9bd42" containerName="extract-content" Nov 22 11:46:18 crc kubenswrapper[4772]: I1122 11:46:18.330545 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c51dd77-61fe-4f47-9c4f-4d29d6f9bd42" containerName="extract-content" Nov 22 11:46:18 crc kubenswrapper[4772]: I1122 11:46:18.330671 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="58350978-93aa-401c-8930-a3c1b2550917" containerName="collect-profiles" Nov 22 11:46:18 crc kubenswrapper[4772]: I1122 11:46:18.330694 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c51dd77-61fe-4f47-9c4f-4d29d6f9bd42" containerName="registry-server" Nov 22 11:46:18 crc kubenswrapper[4772]: I1122 11:46:18.331748 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5bv4x" Nov 22 11:46:18 crc kubenswrapper[4772]: I1122 11:46:18.343477 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5bv4x"] Nov 22 11:46:18 crc kubenswrapper[4772]: I1122 11:46:18.347952 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c474fa4-a2b4-4ad9-9103-1c081efc7e2b-catalog-content\") pod \"certified-operators-5bv4x\" (UID: \"2c474fa4-a2b4-4ad9-9103-1c081efc7e2b\") " pod="openshift-marketplace/certified-operators-5bv4x" Nov 22 11:46:18 crc kubenswrapper[4772]: I1122 11:46:18.348033 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2knn\" (UniqueName: \"kubernetes.io/projected/2c474fa4-a2b4-4ad9-9103-1c081efc7e2b-kube-api-access-l2knn\") pod \"certified-operators-5bv4x\" (UID: \"2c474fa4-a2b4-4ad9-9103-1c081efc7e2b\") " pod="openshift-marketplace/certified-operators-5bv4x" Nov 22 11:46:18 crc kubenswrapper[4772]: I1122 11:46:18.348465 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c474fa4-a2b4-4ad9-9103-1c081efc7e2b-utilities\") pod \"certified-operators-5bv4x\" (UID: \"2c474fa4-a2b4-4ad9-9103-1c081efc7e2b\") " pod="openshift-marketplace/certified-operators-5bv4x" Nov 22 11:46:18 crc kubenswrapper[4772]: I1122 11:46:18.450389 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c474fa4-a2b4-4ad9-9103-1c081efc7e2b-catalog-content\") pod \"certified-operators-5bv4x\" (UID: \"2c474fa4-a2b4-4ad9-9103-1c081efc7e2b\") " pod="openshift-marketplace/certified-operators-5bv4x" Nov 22 11:46:18 crc kubenswrapper[4772]: I1122 11:46:18.450483 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2knn\" (UniqueName: 
\"kubernetes.io/projected/2c474fa4-a2b4-4ad9-9103-1c081efc7e2b-kube-api-access-l2knn\") pod \"certified-operators-5bv4x\" (UID: \"2c474fa4-a2b4-4ad9-9103-1c081efc7e2b\") " pod="openshift-marketplace/certified-operators-5bv4x" Nov 22 11:46:18 crc kubenswrapper[4772]: I1122 11:46:18.450605 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c474fa4-a2b4-4ad9-9103-1c081efc7e2b-utilities\") pod \"certified-operators-5bv4x\" (UID: \"2c474fa4-a2b4-4ad9-9103-1c081efc7e2b\") " pod="openshift-marketplace/certified-operators-5bv4x" Nov 22 11:46:18 crc kubenswrapper[4772]: I1122 11:46:18.451188 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c474fa4-a2b4-4ad9-9103-1c081efc7e2b-catalog-content\") pod \"certified-operators-5bv4x\" (UID: \"2c474fa4-a2b4-4ad9-9103-1c081efc7e2b\") " pod="openshift-marketplace/certified-operators-5bv4x" Nov 22 11:46:18 crc kubenswrapper[4772]: I1122 11:46:18.451448 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c474fa4-a2b4-4ad9-9103-1c081efc7e2b-utilities\") pod \"certified-operators-5bv4x\" (UID: \"2c474fa4-a2b4-4ad9-9103-1c081efc7e2b\") " pod="openshift-marketplace/certified-operators-5bv4x" Nov 22 11:46:18 crc kubenswrapper[4772]: I1122 11:46:18.476269 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2knn\" (UniqueName: \"kubernetes.io/projected/2c474fa4-a2b4-4ad9-9103-1c081efc7e2b-kube-api-access-l2knn\") pod \"certified-operators-5bv4x\" (UID: \"2c474fa4-a2b4-4ad9-9103-1c081efc7e2b\") " pod="openshift-marketplace/certified-operators-5bv4x" Nov 22 11:46:18 crc kubenswrapper[4772]: I1122 11:46:18.669662 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5bv4x" Nov 22 11:46:19 crc kubenswrapper[4772]: I1122 11:46:18.999833 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5bv4x"] Nov 22 11:46:20 crc kubenswrapper[4772]: I1122 11:46:20.017559 4772 generic.go:334] "Generic (PLEG): container finished" podID="2c474fa4-a2b4-4ad9-9103-1c081efc7e2b" containerID="5d41e3beaf797620484a0830d4f51c66b503963fbeb66acda5ff99c9666f2830" exitCode=0 Nov 22 11:46:20 crc kubenswrapper[4772]: I1122 11:46:20.017642 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5bv4x" event={"ID":"2c474fa4-a2b4-4ad9-9103-1c081efc7e2b","Type":"ContainerDied","Data":"5d41e3beaf797620484a0830d4f51c66b503963fbeb66acda5ff99c9666f2830"} Nov 22 11:46:20 crc kubenswrapper[4772]: I1122 11:46:20.017986 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5bv4x" event={"ID":"2c474fa4-a2b4-4ad9-9103-1c081efc7e2b","Type":"ContainerStarted","Data":"7050e0348f6d8e5d7776208db7bcb9d4716ac83b9191f4d8c5c1092c72bc1d5e"} Nov 22 11:46:21 crc kubenswrapper[4772]: I1122 11:46:21.030740 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5bv4x" event={"ID":"2c474fa4-a2b4-4ad9-9103-1c081efc7e2b","Type":"ContainerStarted","Data":"0c18194142b6f5c05eaa41f63cd4bb4f05efaebb0bc26e24fce4291b8628da6a"} Nov 22 11:46:22 crc kubenswrapper[4772]: I1122 11:46:22.044217 4772 generic.go:334] "Generic (PLEG): container finished" podID="2c474fa4-a2b4-4ad9-9103-1c081efc7e2b" containerID="0c18194142b6f5c05eaa41f63cd4bb4f05efaebb0bc26e24fce4291b8628da6a" exitCode=0 Nov 22 11:46:22 crc kubenswrapper[4772]: I1122 11:46:22.044276 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5bv4x" event={"ID":"2c474fa4-a2b4-4ad9-9103-1c081efc7e2b","Type":"ContainerDied","Data":"0c18194142b6f5c05eaa41f63cd4bb4f05efaebb0bc26e24fce4291b8628da6a"} Nov 22 11:46:23 crc kubenswrapper[4772]: I1122 11:46:23.055743 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5bv4x" event={"ID":"2c474fa4-a2b4-4ad9-9103-1c081efc7e2b","Type":"ContainerStarted","Data":"88a041af4c980cae4406b62d9fd0acfe2db16a36ba9a102bf3c4f401a1c65ad8"} Nov 22 11:46:23 crc kubenswrapper[4772]: I1122 11:46:23.082128 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5bv4x" podStartSLOduration=2.577659525 podStartE2EDuration="5.082097267s" podCreationTimestamp="2025-11-22 11:46:18 +0000 UTC" firstStartedPulling="2025-11-22 11:46:20.023456978 +0000 UTC m=+4100.262901492" lastFinishedPulling="2025-11-22 11:46:22.52789474 +0000 UTC m=+4102.767339234" observedRunningTime="2025-11-22 11:46:23.076088057 +0000 UTC m=+4103.315532571" watchObservedRunningTime="2025-11-22 11:46:23.082097267 +0000 UTC m=+4103.321541761" Nov 22 11:46:28 crc kubenswrapper[4772]: I1122 11:46:28.669746 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5bv4x" Nov 22 11:46:28 crc kubenswrapper[4772]: I1122 11:46:28.670614 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5bv4x" Nov 22 11:46:28 crc kubenswrapper[4772]: I1122 11:46:28.749305 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-5bv4x" Nov 22 11:46:29 crc kubenswrapper[4772]: I1122 11:46:29.167476 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5bv4x" Nov 22 11:46:29 crc kubenswrapper[4772]: I1122 11:46:29.221007 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5bv4x"] Nov 22 11:46:31 crc kubenswrapper[4772]: I1122 11:46:31.132373 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5bv4x" podUID="2c474fa4-a2b4-4ad9-9103-1c081efc7e2b" containerName="registry-server" containerID="cri-o://88a041af4c980cae4406b62d9fd0acfe2db16a36ba9a102bf3c4f401a1c65ad8" gracePeriod=2 Nov 22 11:46:31 crc kubenswrapper[4772]: I1122 11:46:31.594735 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5bv4x" Nov 22 11:46:31 crc kubenswrapper[4772]: I1122 11:46:31.797184 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c474fa4-a2b4-4ad9-9103-1c081efc7e2b-catalog-content\") pod \"2c474fa4-a2b4-4ad9-9103-1c081efc7e2b\" (UID: \"2c474fa4-a2b4-4ad9-9103-1c081efc7e2b\") " Nov 22 11:46:31 crc kubenswrapper[4772]: I1122 11:46:31.797351 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2knn\" (UniqueName: \"kubernetes.io/projected/2c474fa4-a2b4-4ad9-9103-1c081efc7e2b-kube-api-access-l2knn\") pod \"2c474fa4-a2b4-4ad9-9103-1c081efc7e2b\" (UID: \"2c474fa4-a2b4-4ad9-9103-1c081efc7e2b\") " Nov 22 11:46:31 crc kubenswrapper[4772]: I1122 11:46:31.797477 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c474fa4-a2b4-4ad9-9103-1c081efc7e2b-utilities\") pod \"2c474fa4-a2b4-4ad9-9103-1c081efc7e2b\" (UID: \"2c474fa4-a2b4-4ad9-9103-1c081efc7e2b\") " Nov 22 11:46:31 crc kubenswrapper[4772]: I1122 11:46:31.800571 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c474fa4-a2b4-4ad9-9103-1c081efc7e2b-utilities" (OuterVolumeSpecName: "utilities") pod "2c474fa4-a2b4-4ad9-9103-1c081efc7e2b" (UID: "2c474fa4-a2b4-4ad9-9103-1c081efc7e2b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:46:31 crc kubenswrapper[4772]: I1122 11:46:31.810635 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c474fa4-a2b4-4ad9-9103-1c081efc7e2b-kube-api-access-l2knn" (OuterVolumeSpecName: "kube-api-access-l2knn") pod "2c474fa4-a2b4-4ad9-9103-1c081efc7e2b" (UID: "2c474fa4-a2b4-4ad9-9103-1c081efc7e2b"). InnerVolumeSpecName "kube-api-access-l2knn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:46:31 crc kubenswrapper[4772]: I1122 11:46:31.868465 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c474fa4-a2b4-4ad9-9103-1c081efc7e2b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2c474fa4-a2b4-4ad9-9103-1c081efc7e2b" (UID: "2c474fa4-a2b4-4ad9-9103-1c081efc7e2b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:46:31 crc kubenswrapper[4772]: I1122 11:46:31.899954 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c474fa4-a2b4-4ad9-9103-1c081efc7e2b-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 11:46:31 crc kubenswrapper[4772]: I1122 11:46:31.899992 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c474fa4-a2b4-4ad9-9103-1c081efc7e2b-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 11:46:31 crc kubenswrapper[4772]: I1122 11:46:31.900010 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2knn\" (UniqueName: \"kubernetes.io/projected/2c474fa4-a2b4-4ad9-9103-1c081efc7e2b-kube-api-access-l2knn\") on node \"crc\" DevicePath \"\"" Nov 22 11:46:32 crc kubenswrapper[4772]: I1122 11:46:32.146586 4772 generic.go:334] "Generic (PLEG): container finished" podID="2c474fa4-a2b4-4ad9-9103-1c081efc7e2b" containerID="88a041af4c980cae4406b62d9fd0acfe2db16a36ba9a102bf3c4f401a1c65ad8" exitCode=0 Nov 22 11:46:32 crc kubenswrapper[4772]: I1122 11:46:32.146768 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5bv4x" Nov 22 11:46:32 crc kubenswrapper[4772]: I1122 11:46:32.148355 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5bv4x" event={"ID":"2c474fa4-a2b4-4ad9-9103-1c081efc7e2b","Type":"ContainerDied","Data":"88a041af4c980cae4406b62d9fd0acfe2db16a36ba9a102bf3c4f401a1c65ad8"} Nov 22 11:46:32 crc kubenswrapper[4772]: I1122 11:46:32.148524 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5bv4x" event={"ID":"2c474fa4-a2b4-4ad9-9103-1c081efc7e2b","Type":"ContainerDied","Data":"7050e0348f6d8e5d7776208db7bcb9d4716ac83b9191f4d8c5c1092c72bc1d5e"} Nov 22 11:46:32 crc kubenswrapper[4772]: I1122 11:46:32.148607 4772 scope.go:117] "RemoveContainer" containerID="88a041af4c980cae4406b62d9fd0acfe2db16a36ba9a102bf3c4f401a1c65ad8" Nov 22 11:46:32 crc kubenswrapper[4772]: I1122 11:46:32.182345 4772 scope.go:117] "RemoveContainer" containerID="0c18194142b6f5c05eaa41f63cd4bb4f05efaebb0bc26e24fce4291b8628da6a" Nov 22 11:46:32 crc kubenswrapper[4772]: I1122 11:46:32.203758 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5bv4x"] Nov 22 11:46:32 crc kubenswrapper[4772]: I1122 11:46:32.210697 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5bv4x"] Nov 22 11:46:32 crc kubenswrapper[4772]: I1122 11:46:32.222219 4772 scope.go:117] "RemoveContainer" containerID="5d41e3beaf797620484a0830d4f51c66b503963fbeb66acda5ff99c9666f2830" Nov 22 11:46:32 crc kubenswrapper[4772]: I1122 11:46:32.253006 4772 scope.go:117] "RemoveContainer" containerID="88a041af4c980cae4406b62d9fd0acfe2db16a36ba9a102bf3c4f401a1c65ad8" Nov 22 11:46:32 crc kubenswrapper[4772]: E1122 11:46:32.253528 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88a041af4c980cae4406b62d9fd0acfe2db16a36ba9a102bf3c4f401a1c65ad8\": container with ID starting with 88a041af4c980cae4406b62d9fd0acfe2db16a36ba9a102bf3c4f401a1c65ad8 not found: ID does not exist" containerID="88a041af4c980cae4406b62d9fd0acfe2db16a36ba9a102bf3c4f401a1c65ad8" Nov 22 11:46:32 crc kubenswrapper[4772]: I1122 11:46:32.253574 
4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88a041af4c980cae4406b62d9fd0acfe2db16a36ba9a102bf3c4f401a1c65ad8"} err="failed to get container status \"88a041af4c980cae4406b62d9fd0acfe2db16a36ba9a102bf3c4f401a1c65ad8\": rpc error: code = NotFound desc = could not find container \"88a041af4c980cae4406b62d9fd0acfe2db16a36ba9a102bf3c4f401a1c65ad8\": container with ID starting with 88a041af4c980cae4406b62d9fd0acfe2db16a36ba9a102bf3c4f401a1c65ad8 not found: ID does not exist" Nov 22 11:46:32 crc kubenswrapper[4772]: I1122 11:46:32.253605 4772 scope.go:117] "RemoveContainer" containerID="0c18194142b6f5c05eaa41f63cd4bb4f05efaebb0bc26e24fce4291b8628da6a" Nov 22 11:46:32 crc kubenswrapper[4772]: E1122 11:46:32.254275 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c18194142b6f5c05eaa41f63cd4bb4f05efaebb0bc26e24fce4291b8628da6a\": container with ID starting with 0c18194142b6f5c05eaa41f63cd4bb4f05efaebb0bc26e24fce4291b8628da6a not found: ID does not exist" containerID="0c18194142b6f5c05eaa41f63cd4bb4f05efaebb0bc26e24fce4291b8628da6a" Nov 22 11:46:32 crc kubenswrapper[4772]: I1122 11:46:32.254356 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c18194142b6f5c05eaa41f63cd4bb4f05efaebb0bc26e24fce4291b8628da6a"} err="failed to get container status \"0c18194142b6f5c05eaa41f63cd4bb4f05efaebb0bc26e24fce4291b8628da6a\": rpc error: code = NotFound desc = could not find container \"0c18194142b6f5c05eaa41f63cd4bb4f05efaebb0bc26e24fce4291b8628da6a\": container with ID starting with 0c18194142b6f5c05eaa41f63cd4bb4f05efaebb0bc26e24fce4291b8628da6a not found: ID does not exist" Nov 22 11:46:32 crc kubenswrapper[4772]: I1122 11:46:32.254414 4772 scope.go:117] "RemoveContainer" containerID="5d41e3beaf797620484a0830d4f51c66b503963fbeb66acda5ff99c9666f2830" Nov 22 11:46:32 crc kubenswrapper[4772]: E1122 11:46:32.254937 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d41e3beaf797620484a0830d4f51c66b503963fbeb66acda5ff99c9666f2830\": container with ID starting with 5d41e3beaf797620484a0830d4f51c66b503963fbeb66acda5ff99c9666f2830 not found: ID does not exist" containerID="5d41e3beaf797620484a0830d4f51c66b503963fbeb66acda5ff99c9666f2830" Nov 22 11:46:32 crc kubenswrapper[4772]: I1122 11:46:32.255037 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d41e3beaf797620484a0830d4f51c66b503963fbeb66acda5ff99c9666f2830"} err="failed to get container status \"5d41e3beaf797620484a0830d4f51c66b503963fbeb66acda5ff99c9666f2830\": rpc error: code = NotFound desc = could not find container \"5d41e3beaf797620484a0830d4f51c66b503963fbeb66acda5ff99c9666f2830\": container with ID starting with 5d41e3beaf797620484a0830d4f51c66b503963fbeb66acda5ff99c9666f2830 not found: ID does not exist" Nov 22 11:46:33 crc kubenswrapper[4772]: I1122 11:46:33.424235 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c474fa4-a2b4-4ad9-9103-1c081efc7e2b" path="/var/lib/kubelet/pods/2c474fa4-a2b4-4ad9-9103-1c081efc7e2b/volumes" Nov 22 11:47:01 crc kubenswrapper[4772]: I1122 11:47:01.534004 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 11:47:01 crc kubenswrapper[4772]: I1122 11:47:01.534853 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 11:47:31 crc kubenswrapper[4772]: I1122 11:47:31.533585 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 11:47:31 crc kubenswrapper[4772]: I1122 11:47:31.534257 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 11:48:01 crc kubenswrapper[4772]: I1122 11:48:01.533180 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 11:48:01 crc kubenswrapper[4772]: I1122 11:48:01.534269 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 11:48:01 crc kubenswrapper[4772]: I1122 11:48:01.534361 4772 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" Nov 22 11:48:01 crc kubenswrapper[4772]: I1122 11:48:01.535655 4772 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"91a958295c5b64904032ec48e351507c4fa39af68645da937535bcafd2900839"} pod="openshift-machine-config-operator/machine-config-daemon-wwshd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 11:48:01 crc kubenswrapper[4772]: I1122 11:48:01.535783 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" containerID="cri-o://91a958295c5b64904032ec48e351507c4fa39af68645da937535bcafd2900839" gracePeriod=600 Nov 22 11:48:02 crc kubenswrapper[4772]: I1122 11:48:02.022260 4772 generic.go:334] "Generic (PLEG): container finished" podID="2386c238-461f-4956-940f-ac3c26eb052e" containerID="91a958295c5b64904032ec48e351507c4fa39af68645da937535bcafd2900839" exitCode=0 Nov 22 11:48:02 crc kubenswrapper[4772]: I1122 11:48:02.022319 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" 
event={"ID":"2386c238-461f-4956-940f-ac3c26eb052e","Type":"ContainerDied","Data":"91a958295c5b64904032ec48e351507c4fa39af68645da937535bcafd2900839"} Nov 22 11:48:02 crc kubenswrapper[4772]: I1122 11:48:02.022910 4772 scope.go:117] "RemoveContainer" containerID="840a4ee2da9d5e5bbe147cfd7570643a837738629ce60c03a58d3a5bf26b9e4c" Nov 22 11:48:03 crc kubenswrapper[4772]: I1122 11:48:03.033622 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" event={"ID":"2386c238-461f-4956-940f-ac3c26eb052e","Type":"ContainerStarted","Data":"409b499b44a59e26c78fa7bc525e1be7b00da254f59f8cd9f76c1032d1e4742d"} Nov 22 11:50:31 crc kubenswrapper[4772]: I1122 11:50:31.533372 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 11:50:31 crc kubenswrapper[4772]: I1122 11:50:31.533934 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 11:51:01 crc kubenswrapper[4772]: I1122 11:51:01.533315 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 11:51:01 crc kubenswrapper[4772]: I1122 11:51:01.534268 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 11:51:31 crc kubenswrapper[4772]: I1122 11:51:31.532988 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 11:51:31 crc kubenswrapper[4772]: I1122 11:51:31.533874 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 11:51:31 crc kubenswrapper[4772]: I1122 11:51:31.533942 4772 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" Nov 22 11:51:31 crc kubenswrapper[4772]: I1122 11:51:31.534998 4772 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"409b499b44a59e26c78fa7bc525e1be7b00da254f59f8cd9f76c1032d1e4742d"} pod="openshift-machine-config-operator/machine-config-daemon-wwshd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 
11:51:31 crc kubenswrapper[4772]: I1122 11:51:31.535107 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" containerID="cri-o://409b499b44a59e26c78fa7bc525e1be7b00da254f59f8cd9f76c1032d1e4742d" gracePeriod=600 Nov 22 11:51:31 crc kubenswrapper[4772]: E1122 11:51:31.665210 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:51:32 crc kubenswrapper[4772]: I1122 11:51:32.023611 4772 generic.go:334] "Generic (PLEG): container finished" podID="2386c238-461f-4956-940f-ac3c26eb052e" containerID="409b499b44a59e26c78fa7bc525e1be7b00da254f59f8cd9f76c1032d1e4742d" exitCode=0 Nov 22 11:51:32 crc kubenswrapper[4772]: I1122 11:51:32.023695 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" event={"ID":"2386c238-461f-4956-940f-ac3c26eb052e","Type":"ContainerDied","Data":"409b499b44a59e26c78fa7bc525e1be7b00da254f59f8cd9f76c1032d1e4742d"} Nov 22 11:51:32 crc kubenswrapper[4772]: I1122 11:51:32.023813 4772 scope.go:117] "RemoveContainer" containerID="91a958295c5b64904032ec48e351507c4fa39af68645da937535bcafd2900839" Nov 22 11:51:32 crc kubenswrapper[4772]: I1122 11:51:32.024973 4772 scope.go:117] "RemoveContainer" containerID="409b499b44a59e26c78fa7bc525e1be7b00da254f59f8cd9f76c1032d1e4742d" Nov 22 11:51:32 crc kubenswrapper[4772]: E1122 11:51:32.025611 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:51:47 crc kubenswrapper[4772]: I1122 11:51:47.414209 4772 scope.go:117] "RemoveContainer" containerID="409b499b44a59e26c78fa7bc525e1be7b00da254f59f8cd9f76c1032d1e4742d" Nov 22 11:51:47 crc kubenswrapper[4772]: E1122 11:51:47.415701 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:52:01 crc kubenswrapper[4772]: I1122 11:52:01.421391 4772 scope.go:117] "RemoveContainer" containerID="409b499b44a59e26c78fa7bc525e1be7b00da254f59f8cd9f76c1032d1e4742d" Nov 22 11:52:01 crc kubenswrapper[4772]: E1122 11:52:01.422560 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:52:13 crc kubenswrapper[4772]: I1122 11:52:13.413522 4772 scope.go:117] "RemoveContainer" containerID="409b499b44a59e26c78fa7bc525e1be7b00da254f59f8cd9f76c1032d1e4742d" Nov 22 11:52:13 crc kubenswrapper[4772]: E1122 11:52:13.414276 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:52:26 crc kubenswrapper[4772]: I1122 11:52:26.414100 4772 scope.go:117] "RemoveContainer" containerID="409b499b44a59e26c78fa7bc525e1be7b00da254f59f8cd9f76c1032d1e4742d" Nov 22 11:52:26 crc kubenswrapper[4772]: E1122 11:52:26.416362 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:52:41 crc kubenswrapper[4772]: I1122 11:52:41.418934 4772 scope.go:117] "RemoveContainer" containerID="409b499b44a59e26c78fa7bc525e1be7b00da254f59f8cd9f76c1032d1e4742d" Nov 22 11:52:41 crc kubenswrapper[4772]: E1122 11:52:41.419704 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:52:56 crc kubenswrapper[4772]: I1122 11:52:56.413700 4772 scope.go:117] "RemoveContainer" containerID="409b499b44a59e26c78fa7bc525e1be7b00da254f59f8cd9f76c1032d1e4742d" Nov 22 11:52:56 crc kubenswrapper[4772]: E1122 11:52:56.414464 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:53:00 crc kubenswrapper[4772]: I1122 11:53:00.209037 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ww5kr"] Nov 22 11:53:00 crc kubenswrapper[4772]: E1122 11:53:00.209838 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c474fa4-a2b4-4ad9-9103-1c081efc7e2b" containerName="extract-utilities" Nov 22 11:53:00 crc kubenswrapper[4772]: I1122 11:53:00.209855 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c474fa4-a2b4-4ad9-9103-1c081efc7e2b" containerName="extract-utilities" Nov 22 11:53:00 crc kubenswrapper[4772]: E1122 11:53:00.209869 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c474fa4-a2b4-4ad9-9103-1c081efc7e2b" 
containerName="registry-server" Nov 22 11:53:00 crc kubenswrapper[4772]: I1122 11:53:00.209876 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c474fa4-a2b4-4ad9-9103-1c081efc7e2b" containerName="registry-server" Nov 22 11:53:00 crc kubenswrapper[4772]: E1122 11:53:00.209896 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c474fa4-a2b4-4ad9-9103-1c081efc7e2b" containerName="extract-content" Nov 22 11:53:00 crc kubenswrapper[4772]: I1122 11:53:00.209903 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c474fa4-a2b4-4ad9-9103-1c081efc7e2b" containerName="extract-content" Nov 22 11:53:00 crc kubenswrapper[4772]: I1122 11:53:00.210066 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c474fa4-a2b4-4ad9-9103-1c081efc7e2b" containerName="registry-server" Nov 22 11:53:00 crc kubenswrapper[4772]: I1122 11:53:00.211218 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ww5kr" Nov 22 11:53:00 crc kubenswrapper[4772]: I1122 11:53:00.269598 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ww5kr"] Nov 22 11:53:00 crc kubenswrapper[4772]: I1122 11:53:00.339919 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jphn\" (UniqueName: \"kubernetes.io/projected/45e0f526-cf03-435b-a8ce-af08fbf46c90-kube-api-access-7jphn\") pod \"redhat-marketplace-ww5kr\" (UID: \"45e0f526-cf03-435b-a8ce-af08fbf46c90\") " pod="openshift-marketplace/redhat-marketplace-ww5kr" Nov 22 11:53:00 crc kubenswrapper[4772]: I1122 11:53:00.339987 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45e0f526-cf03-435b-a8ce-af08fbf46c90-utilities\") pod \"redhat-marketplace-ww5kr\" (UID: \"45e0f526-cf03-435b-a8ce-af08fbf46c90\") " pod="openshift-marketplace/redhat-marketplace-ww5kr" Nov 22 11:53:00 crc kubenswrapper[4772]: I1122 11:53:00.340062 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45e0f526-cf03-435b-a8ce-af08fbf46c90-catalog-content\") pod \"redhat-marketplace-ww5kr\" (UID: \"45e0f526-cf03-435b-a8ce-af08fbf46c90\") " pod="openshift-marketplace/redhat-marketplace-ww5kr" Nov 22 11:53:00 crc kubenswrapper[4772]: I1122 11:53:00.441733 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45e0f526-cf03-435b-a8ce-af08fbf46c90-catalog-content\") pod \"redhat-marketplace-ww5kr\" (UID: \"45e0f526-cf03-435b-a8ce-af08fbf46c90\") " pod="openshift-marketplace/redhat-marketplace-ww5kr" Nov 22 11:53:00 crc kubenswrapper[4772]: I1122 11:53:00.441849 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jphn\" (UniqueName: \"kubernetes.io/projected/45e0f526-cf03-435b-a8ce-af08fbf46c90-kube-api-access-7jphn\") pod \"redhat-marketplace-ww5kr\" (UID: \"45e0f526-cf03-435b-a8ce-af08fbf46c90\") " pod="openshift-marketplace/redhat-marketplace-ww5kr" Nov 22 11:53:00 crc kubenswrapper[4772]: I1122 11:53:00.441907 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45e0f526-cf03-435b-a8ce-af08fbf46c90-utilities\") pod \"redhat-marketplace-ww5kr\" (UID: 
\"45e0f526-cf03-435b-a8ce-af08fbf46c90\") " pod="openshift-marketplace/redhat-marketplace-ww5kr" Nov 22 11:53:00 crc kubenswrapper[4772]: I1122 11:53:00.442537 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45e0f526-cf03-435b-a8ce-af08fbf46c90-catalog-content\") pod \"redhat-marketplace-ww5kr\" (UID: \"45e0f526-cf03-435b-a8ce-af08fbf46c90\") " pod="openshift-marketplace/redhat-marketplace-ww5kr" Nov 22 11:53:00 crc kubenswrapper[4772]: I1122 11:53:00.442684 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45e0f526-cf03-435b-a8ce-af08fbf46c90-utilities\") pod \"redhat-marketplace-ww5kr\" (UID: \"45e0f526-cf03-435b-a8ce-af08fbf46c90\") " pod="openshift-marketplace/redhat-marketplace-ww5kr" Nov 22 11:53:00 crc kubenswrapper[4772]: I1122 11:53:00.469466 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jphn\" (UniqueName: \"kubernetes.io/projected/45e0f526-cf03-435b-a8ce-af08fbf46c90-kube-api-access-7jphn\") pod \"redhat-marketplace-ww5kr\" (UID: \"45e0f526-cf03-435b-a8ce-af08fbf46c90\") " pod="openshift-marketplace/redhat-marketplace-ww5kr" Nov 22 11:53:00 crc kubenswrapper[4772]: I1122 11:53:00.532850 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ww5kr" Nov 22 11:53:00 crc kubenswrapper[4772]: I1122 11:53:00.960651 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ww5kr"] Nov 22 11:53:01 crc kubenswrapper[4772]: I1122 11:53:01.649617 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-swbhm"] Nov 22 11:53:01 crc kubenswrapper[4772]: I1122 11:53:01.657224 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-swbhm"] Nov 22 11:53:01 crc kubenswrapper[4772]: I1122 11:53:01.801463 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-pwdlg"] Nov 22 11:53:01 crc kubenswrapper[4772]: I1122 11:53:01.803216 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-pwdlg" Nov 22 11:53:01 crc kubenswrapper[4772]: I1122 11:53:01.808414 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Nov 22 11:53:01 crc kubenswrapper[4772]: I1122 11:53:01.808823 4772 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-nxxqs" Nov 22 11:53:01 crc kubenswrapper[4772]: I1122 11:53:01.809015 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Nov 22 11:53:01 crc kubenswrapper[4772]: I1122 11:53:01.811475 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Nov 22 11:53:01 crc kubenswrapper[4772]: I1122 11:53:01.815168 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-pwdlg"] Nov 22 11:53:01 crc kubenswrapper[4772]: I1122 11:53:01.877789 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57bn9\" (UniqueName: \"kubernetes.io/projected/ddfda2f8-75bc-4a32-a1e6-7e24c2484558-kube-api-access-57bn9\") pod \"crc-storage-crc-pwdlg\" (UID: \"ddfda2f8-75bc-4a32-a1e6-7e24c2484558\") " pod="crc-storage/crc-storage-crc-pwdlg" Nov 22 11:53:01 crc kubenswrapper[4772]: I1122 11:53:01.877884 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/ddfda2f8-75bc-4a32-a1e6-7e24c2484558-node-mnt\") pod \"crc-storage-crc-pwdlg\" (UID: \"ddfda2f8-75bc-4a32-a1e6-7e24c2484558\") " pod="crc-storage/crc-storage-crc-pwdlg" Nov 22 11:53:01 crc kubenswrapper[4772]: I1122 11:53:01.877936 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/ddfda2f8-75bc-4a32-a1e6-7e24c2484558-crc-storage\") pod \"crc-storage-crc-pwdlg\" (UID: \"ddfda2f8-75bc-4a32-a1e6-7e24c2484558\") " pod="crc-storage/crc-storage-crc-pwdlg" Nov 22 11:53:01 crc kubenswrapper[4772]: I1122 11:53:01.899721 4772 generic.go:334] "Generic (PLEG): container finished" podID="45e0f526-cf03-435b-a8ce-af08fbf46c90" containerID="60929ffb052c08bc75eb54651dc063f809cbeb59c78925f8309707fed5c84197" exitCode=0 Nov 22 11:53:01 crc kubenswrapper[4772]: I1122 11:53:01.899827 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ww5kr" event={"ID":"45e0f526-cf03-435b-a8ce-af08fbf46c90","Type":"ContainerDied","Data":"60929ffb052c08bc75eb54651dc063f809cbeb59c78925f8309707fed5c84197"} Nov 22 11:53:01 crc kubenswrapper[4772]: I1122 11:53:01.899938 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ww5kr" event={"ID":"45e0f526-cf03-435b-a8ce-af08fbf46c90","Type":"ContainerStarted","Data":"faff50d1dcc0efe3c0265347ff8e2c247948608f40d8c10ba3da600a0986703d"} Nov 22 11:53:01 crc kubenswrapper[4772]: I1122 11:53:01.903340 4772 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 11:53:01 crc kubenswrapper[4772]: I1122 11:53:01.980146 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57bn9\" (UniqueName: \"kubernetes.io/projected/ddfda2f8-75bc-4a32-a1e6-7e24c2484558-kube-api-access-57bn9\") pod \"crc-storage-crc-pwdlg\" (UID: \"ddfda2f8-75bc-4a32-a1e6-7e24c2484558\") " pod="crc-storage/crc-storage-crc-pwdlg" Nov 22 11:53:01 
crc kubenswrapper[4772]: I1122 11:53:01.980211 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/ddfda2f8-75bc-4a32-a1e6-7e24c2484558-node-mnt\") pod \"crc-storage-crc-pwdlg\" (UID: \"ddfda2f8-75bc-4a32-a1e6-7e24c2484558\") " pod="crc-storage/crc-storage-crc-pwdlg" Nov 22 11:53:01 crc kubenswrapper[4772]: I1122 11:53:01.980248 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/ddfda2f8-75bc-4a32-a1e6-7e24c2484558-crc-storage\") pod \"crc-storage-crc-pwdlg\" (UID: \"ddfda2f8-75bc-4a32-a1e6-7e24c2484558\") " pod="crc-storage/crc-storage-crc-pwdlg" Nov 22 11:53:01 crc kubenswrapper[4772]: I1122 11:53:01.980636 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/ddfda2f8-75bc-4a32-a1e6-7e24c2484558-node-mnt\") pod \"crc-storage-crc-pwdlg\" (UID: \"ddfda2f8-75bc-4a32-a1e6-7e24c2484558\") " pod="crc-storage/crc-storage-crc-pwdlg" Nov 22 11:53:01 crc kubenswrapper[4772]: I1122 11:53:01.981610 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/ddfda2f8-75bc-4a32-a1e6-7e24c2484558-crc-storage\") pod \"crc-storage-crc-pwdlg\" (UID: \"ddfda2f8-75bc-4a32-a1e6-7e24c2484558\") " pod="crc-storage/crc-storage-crc-pwdlg" Nov 22 11:53:02 crc kubenswrapper[4772]: I1122 11:53:02.004484 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57bn9\" (UniqueName: \"kubernetes.io/projected/ddfda2f8-75bc-4a32-a1e6-7e24c2484558-kube-api-access-57bn9\") pod \"crc-storage-crc-pwdlg\" (UID: \"ddfda2f8-75bc-4a32-a1e6-7e24c2484558\") " pod="crc-storage/crc-storage-crc-pwdlg" Nov 22 11:53:02 crc kubenswrapper[4772]: I1122 11:53:02.138904 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-pwdlg" Nov 22 11:53:02 crc kubenswrapper[4772]: I1122 11:53:02.523777 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-pwdlg"] Nov 22 11:53:02 crc kubenswrapper[4772]: W1122 11:53:02.533337 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podddfda2f8_75bc_4a32_a1e6_7e24c2484558.slice/crio-1220c2fe8a7f132430ad4cb86edd6881fe87a8df7af2f7a8a33e77193dee6385 WatchSource:0}: Error finding container 1220c2fe8a7f132430ad4cb86edd6881fe87a8df7af2f7a8a33e77193dee6385: Status 404 returned error can't find the container with id 1220c2fe8a7f132430ad4cb86edd6881fe87a8df7af2f7a8a33e77193dee6385 Nov 22 11:53:02 crc kubenswrapper[4772]: I1122 11:53:02.909031 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-pwdlg" event={"ID":"ddfda2f8-75bc-4a32-a1e6-7e24c2484558","Type":"ContainerStarted","Data":"1220c2fe8a7f132430ad4cb86edd6881fe87a8df7af2f7a8a33e77193dee6385"} Nov 22 11:53:02 crc kubenswrapper[4772]: I1122 11:53:02.912191 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ww5kr" event={"ID":"45e0f526-cf03-435b-a8ce-af08fbf46c90","Type":"ContainerStarted","Data":"0c24d5164bee2de311ad416ed4ffac28be96cc9534d9ef1192f03d97148cd7c7"} Nov 22 11:53:03 crc kubenswrapper[4772]: I1122 11:53:03.425900 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0424d70-121b-4e88-ac19-da0f816c6625" path="/var/lib/kubelet/pods/a0424d70-121b-4e88-ac19-da0f816c6625/volumes" Nov 22 11:53:03 crc kubenswrapper[4772]: I1122 11:53:03.925976 4772 generic.go:334] "Generic (PLEG): container finished" podID="45e0f526-cf03-435b-a8ce-af08fbf46c90" containerID="0c24d5164bee2de311ad416ed4ffac28be96cc9534d9ef1192f03d97148cd7c7" exitCode=0 Nov 22 11:53:03 crc kubenswrapper[4772]: I1122 11:53:03.926097 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ww5kr" event={"ID":"45e0f526-cf03-435b-a8ce-af08fbf46c90","Type":"ContainerDied","Data":"0c24d5164bee2de311ad416ed4ffac28be96cc9534d9ef1192f03d97148cd7c7"} Nov 22 11:53:03 crc kubenswrapper[4772]: I1122 11:53:03.929169 4772 generic.go:334] "Generic (PLEG): container finished" podID="ddfda2f8-75bc-4a32-a1e6-7e24c2484558" containerID="7c6bed596f1a51578cc7a885bbb88ddc69fea2097c265aca7d92017666c0f56a" exitCode=0 Nov 22 11:53:03 crc kubenswrapper[4772]: I1122 11:53:03.929217 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-pwdlg" event={"ID":"ddfda2f8-75bc-4a32-a1e6-7e24c2484558","Type":"ContainerDied","Data":"7c6bed596f1a51578cc7a885bbb88ddc69fea2097c265aca7d92017666c0f56a"} Nov 22 11:53:04 crc kubenswrapper[4772]: I1122 11:53:04.943627 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ww5kr" event={"ID":"45e0f526-cf03-435b-a8ce-af08fbf46c90","Type":"ContainerStarted","Data":"7bd818e8e62516cc097d76e9955c973c8f23ac3661f57e598c14973f922a8243"} Nov 22 11:53:04 crc kubenswrapper[4772]: I1122 11:53:04.976243 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ww5kr" podStartSLOduration=2.37261826 podStartE2EDuration="4.976219319s" podCreationTimestamp="2025-11-22 11:53:00 +0000 UTC" firstStartedPulling="2025-11-22 11:53:01.902882741 +0000 UTC m=+4502.142327275" lastFinishedPulling="2025-11-22 11:53:04.50648384 +0000 
UTC m=+4504.745928334" observedRunningTime="2025-11-22 11:53:04.974066216 +0000 UTC m=+4505.213510710" watchObservedRunningTime="2025-11-22 11:53:04.976219319 +0000 UTC m=+4505.215663813" Nov 22 11:53:05 crc kubenswrapper[4772]: I1122 11:53:05.372789 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-pwdlg" Nov 22 11:53:05 crc kubenswrapper[4772]: I1122 11:53:05.446273 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/ddfda2f8-75bc-4a32-a1e6-7e24c2484558-crc-storage\") pod \"ddfda2f8-75bc-4a32-a1e6-7e24c2484558\" (UID: \"ddfda2f8-75bc-4a32-a1e6-7e24c2484558\") " Nov 22 11:53:05 crc kubenswrapper[4772]: I1122 11:53:05.447334 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57bn9\" (UniqueName: \"kubernetes.io/projected/ddfda2f8-75bc-4a32-a1e6-7e24c2484558-kube-api-access-57bn9\") pod \"ddfda2f8-75bc-4a32-a1e6-7e24c2484558\" (UID: \"ddfda2f8-75bc-4a32-a1e6-7e24c2484558\") " Nov 22 11:53:05 crc kubenswrapper[4772]: I1122 11:53:05.447566 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/ddfda2f8-75bc-4a32-a1e6-7e24c2484558-node-mnt\") pod \"ddfda2f8-75bc-4a32-a1e6-7e24c2484558\" (UID: \"ddfda2f8-75bc-4a32-a1e6-7e24c2484558\") " Nov 22 11:53:05 crc kubenswrapper[4772]: I1122 11:53:05.448005 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ddfda2f8-75bc-4a32-a1e6-7e24c2484558-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "ddfda2f8-75bc-4a32-a1e6-7e24c2484558" (UID: "ddfda2f8-75bc-4a32-a1e6-7e24c2484558"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 11:53:05 crc kubenswrapper[4772]: I1122 11:53:05.457378 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddfda2f8-75bc-4a32-a1e6-7e24c2484558-kube-api-access-57bn9" (OuterVolumeSpecName: "kube-api-access-57bn9") pod "ddfda2f8-75bc-4a32-a1e6-7e24c2484558" (UID: "ddfda2f8-75bc-4a32-a1e6-7e24c2484558"). InnerVolumeSpecName "kube-api-access-57bn9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:53:05 crc kubenswrapper[4772]: I1122 11:53:05.486064 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ddfda2f8-75bc-4a32-a1e6-7e24c2484558-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "ddfda2f8-75bc-4a32-a1e6-7e24c2484558" (UID: "ddfda2f8-75bc-4a32-a1e6-7e24c2484558"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 11:53:05 crc kubenswrapper[4772]: I1122 11:53:05.549241 4772 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/ddfda2f8-75bc-4a32-a1e6-7e24c2484558-node-mnt\") on node \"crc\" DevicePath \"\"" Nov 22 11:53:05 crc kubenswrapper[4772]: I1122 11:53:05.549302 4772 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/ddfda2f8-75bc-4a32-a1e6-7e24c2484558-crc-storage\") on node \"crc\" DevicePath \"\"" Nov 22 11:53:05 crc kubenswrapper[4772]: I1122 11:53:05.549313 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57bn9\" (UniqueName: \"kubernetes.io/projected/ddfda2f8-75bc-4a32-a1e6-7e24c2484558-kube-api-access-57bn9\") on node \"crc\" DevicePath \"\"" Nov 22 11:53:05 crc kubenswrapper[4772]: I1122 11:53:05.955493 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-pwdlg" event={"ID":"ddfda2f8-75bc-4a32-a1e6-7e24c2484558","Type":"ContainerDied","Data":"1220c2fe8a7f132430ad4cb86edd6881fe87a8df7af2f7a8a33e77193dee6385"} Nov 22 11:53:05 crc kubenswrapper[4772]: I1122 11:53:05.956138 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1220c2fe8a7f132430ad4cb86edd6881fe87a8df7af2f7a8a33e77193dee6385" Nov 22 11:53:05 crc kubenswrapper[4772]: I1122 11:53:05.955572 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-pwdlg" Nov 22 11:53:07 crc kubenswrapper[4772]: I1122 11:53:07.414266 4772 scope.go:117] "RemoveContainer" containerID="409b499b44a59e26c78fa7bc525e1be7b00da254f59f8cd9f76c1032d1e4742d" Nov 22 11:53:07 crc kubenswrapper[4772]: E1122 11:53:07.415075 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:53:07 crc kubenswrapper[4772]: I1122 11:53:07.632098 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-pwdlg"] Nov 22 11:53:07 crc kubenswrapper[4772]: I1122 11:53:07.638808 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-pwdlg"] Nov 22 11:53:07 crc kubenswrapper[4772]: I1122 11:53:07.783638 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-bw2wx"] Nov 22 11:53:07 crc kubenswrapper[4772]: E1122 11:53:07.784118 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddfda2f8-75bc-4a32-a1e6-7e24c2484558" containerName="storage" Nov 22 11:53:07 crc kubenswrapper[4772]: I1122 11:53:07.784140 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddfda2f8-75bc-4a32-a1e6-7e24c2484558" containerName="storage" Nov 22 11:53:07 crc kubenswrapper[4772]: I1122 11:53:07.784404 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddfda2f8-75bc-4a32-a1e6-7e24c2484558" containerName="storage" Nov 22 11:53:07 crc kubenswrapper[4772]: I1122 11:53:07.785034 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-bw2wx" Nov 22 11:53:07 crc kubenswrapper[4772]: I1122 11:53:07.788425 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Nov 22 11:53:07 crc kubenswrapper[4772]: I1122 11:53:07.789589 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Nov 22 11:53:07 crc kubenswrapper[4772]: I1122 11:53:07.789626 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Nov 22 11:53:07 crc kubenswrapper[4772]: I1122 11:53:07.789837 4772 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-nxxqs" Nov 22 11:53:07 crc kubenswrapper[4772]: I1122 11:53:07.798960 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-bw2wx"] Nov 22 11:53:07 crc kubenswrapper[4772]: I1122 11:53:07.886200 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/43ab1952-2f0d-47a1-8daa-83efcac38e42-crc-storage\") pod \"crc-storage-crc-bw2wx\" (UID: \"43ab1952-2f0d-47a1-8daa-83efcac38e42\") " pod="crc-storage/crc-storage-crc-bw2wx" Nov 22 11:53:07 crc kubenswrapper[4772]: I1122 11:53:07.886286 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/43ab1952-2f0d-47a1-8daa-83efcac38e42-node-mnt\") pod \"crc-storage-crc-bw2wx\" (UID: \"43ab1952-2f0d-47a1-8daa-83efcac38e42\") " pod="crc-storage/crc-storage-crc-bw2wx" Nov 22 11:53:07 crc kubenswrapper[4772]: I1122 11:53:07.886326 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdj44\" (UniqueName: \"kubernetes.io/projected/43ab1952-2f0d-47a1-8daa-83efcac38e42-kube-api-access-kdj44\") pod \"crc-storage-crc-bw2wx\" (UID: \"43ab1952-2f0d-47a1-8daa-83efcac38e42\") " pod="crc-storage/crc-storage-crc-bw2wx" Nov 22 11:53:07 crc kubenswrapper[4772]: I1122 11:53:07.988142 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/43ab1952-2f0d-47a1-8daa-83efcac38e42-crc-storage\") pod \"crc-storage-crc-bw2wx\" (UID: \"43ab1952-2f0d-47a1-8daa-83efcac38e42\") " pod="crc-storage/crc-storage-crc-bw2wx" Nov 22 11:53:07 crc kubenswrapper[4772]: I1122 11:53:07.988248 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/43ab1952-2f0d-47a1-8daa-83efcac38e42-node-mnt\") pod \"crc-storage-crc-bw2wx\" (UID: \"43ab1952-2f0d-47a1-8daa-83efcac38e42\") " pod="crc-storage/crc-storage-crc-bw2wx" Nov 22 11:53:07 crc kubenswrapper[4772]: I1122 11:53:07.988282 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdj44\" (UniqueName: \"kubernetes.io/projected/43ab1952-2f0d-47a1-8daa-83efcac38e42-kube-api-access-kdj44\") pod \"crc-storage-crc-bw2wx\" (UID: \"43ab1952-2f0d-47a1-8daa-83efcac38e42\") " pod="crc-storage/crc-storage-crc-bw2wx" Nov 22 11:53:07 crc kubenswrapper[4772]: I1122 11:53:07.988678 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/43ab1952-2f0d-47a1-8daa-83efcac38e42-node-mnt\") pod \"crc-storage-crc-bw2wx\" (UID: \"43ab1952-2f0d-47a1-8daa-83efcac38e42\") " 
pod="crc-storage/crc-storage-crc-bw2wx" Nov 22 11:53:07 crc kubenswrapper[4772]: I1122 11:53:07.989150 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/43ab1952-2f0d-47a1-8daa-83efcac38e42-crc-storage\") pod \"crc-storage-crc-bw2wx\" (UID: \"43ab1952-2f0d-47a1-8daa-83efcac38e42\") " pod="crc-storage/crc-storage-crc-bw2wx" Nov 22 11:53:08 crc kubenswrapper[4772]: I1122 11:53:08.009072 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdj44\" (UniqueName: \"kubernetes.io/projected/43ab1952-2f0d-47a1-8daa-83efcac38e42-kube-api-access-kdj44\") pod \"crc-storage-crc-bw2wx\" (UID: \"43ab1952-2f0d-47a1-8daa-83efcac38e42\") " pod="crc-storage/crc-storage-crc-bw2wx" Nov 22 11:53:08 crc kubenswrapper[4772]: I1122 11:53:08.115394 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-bw2wx" Nov 22 11:53:08 crc kubenswrapper[4772]: I1122 11:53:08.373776 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-bw2wx"] Nov 22 11:53:08 crc kubenswrapper[4772]: I1122 11:53:08.991019 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-bw2wx" event={"ID":"43ab1952-2f0d-47a1-8daa-83efcac38e42","Type":"ContainerStarted","Data":"788b6102e5f5c51f17462b6ce87d4ecf9c1a38baa0456857bce53ecb011eaaf4"} Nov 22 11:53:08 crc kubenswrapper[4772]: I1122 11:53:08.991731 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-bw2wx" event={"ID":"43ab1952-2f0d-47a1-8daa-83efcac38e42","Type":"ContainerStarted","Data":"668a5dc0398bf2bf2ed3781c1b637b578a4f2f6b1cb58ef82ab31440c5b925a5"} Nov 22 11:53:09 crc kubenswrapper[4772]: I1122 11:53:09.012519 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="crc-storage/crc-storage-crc-bw2wx" podStartSLOduration=1.584604991 podStartE2EDuration="2.012495263s" podCreationTimestamp="2025-11-22 11:53:07 +0000 UTC" firstStartedPulling="2025-11-22 11:53:08.380522398 +0000 UTC m=+4508.619966902" lastFinishedPulling="2025-11-22 11:53:08.80841268 +0000 UTC m=+4509.047857174" observedRunningTime="2025-11-22 11:53:09.006145057 +0000 UTC m=+4509.245589551" watchObservedRunningTime="2025-11-22 11:53:09.012495263 +0000 UTC m=+4509.251939757" Nov 22 11:53:09 crc kubenswrapper[4772]: I1122 11:53:09.426246 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddfda2f8-75bc-4a32-a1e6-7e24c2484558" path="/var/lib/kubelet/pods/ddfda2f8-75bc-4a32-a1e6-7e24c2484558/volumes" Nov 22 11:53:10 crc kubenswrapper[4772]: I1122 11:53:10.002765 4772 generic.go:334] "Generic (PLEG): container finished" podID="43ab1952-2f0d-47a1-8daa-83efcac38e42" containerID="788b6102e5f5c51f17462b6ce87d4ecf9c1a38baa0456857bce53ecb011eaaf4" exitCode=0 Nov 22 11:53:10 crc kubenswrapper[4772]: I1122 11:53:10.003178 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-bw2wx" event={"ID":"43ab1952-2f0d-47a1-8daa-83efcac38e42","Type":"ContainerDied","Data":"788b6102e5f5c51f17462b6ce87d4ecf9c1a38baa0456857bce53ecb011eaaf4"} Nov 22 11:53:10 crc kubenswrapper[4772]: I1122 11:53:10.402728 4772 scope.go:117] "RemoveContainer" containerID="a0aedd2e55e0259cc44970c6a4851de482dcf1e56fa55e36f617f075dff78870" Nov 22 11:53:10 crc kubenswrapper[4772]: I1122 11:53:10.533978 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ww5kr" Nov 22 
11:53:10 crc kubenswrapper[4772]: I1122 11:53:10.534068 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ww5kr" Nov 22 11:53:10 crc kubenswrapper[4772]: I1122 11:53:10.578225 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ww5kr" Nov 22 11:53:11 crc kubenswrapper[4772]: I1122 11:53:11.078159 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ww5kr" Nov 22 11:53:11 crc kubenswrapper[4772]: I1122 11:53:11.172826 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ww5kr"] Nov 22 11:53:11 crc kubenswrapper[4772]: I1122 11:53:11.388853 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-bw2wx" Nov 22 11:53:11 crc kubenswrapper[4772]: I1122 11:53:11.461695 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/43ab1952-2f0d-47a1-8daa-83efcac38e42-crc-storage\") pod \"43ab1952-2f0d-47a1-8daa-83efcac38e42\" (UID: \"43ab1952-2f0d-47a1-8daa-83efcac38e42\") " Nov 22 11:53:11 crc kubenswrapper[4772]: I1122 11:53:11.462450 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/43ab1952-2f0d-47a1-8daa-83efcac38e42-node-mnt\") pod \"43ab1952-2f0d-47a1-8daa-83efcac38e42\" (UID: \"43ab1952-2f0d-47a1-8daa-83efcac38e42\") " Nov 22 11:53:11 crc kubenswrapper[4772]: I1122 11:53:11.462520 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdj44\" (UniqueName: \"kubernetes.io/projected/43ab1952-2f0d-47a1-8daa-83efcac38e42-kube-api-access-kdj44\") pod \"43ab1952-2f0d-47a1-8daa-83efcac38e42\" (UID: \"43ab1952-2f0d-47a1-8daa-83efcac38e42\") " Nov 22 11:53:11 crc kubenswrapper[4772]: I1122 11:53:11.462609 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/43ab1952-2f0d-47a1-8daa-83efcac38e42-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "43ab1952-2f0d-47a1-8daa-83efcac38e42" (UID: "43ab1952-2f0d-47a1-8daa-83efcac38e42"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 11:53:11 crc kubenswrapper[4772]: I1122 11:53:11.463407 4772 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/43ab1952-2f0d-47a1-8daa-83efcac38e42-node-mnt\") on node \"crc\" DevicePath \"\"" Nov 22 11:53:11 crc kubenswrapper[4772]: I1122 11:53:11.470534 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43ab1952-2f0d-47a1-8daa-83efcac38e42-kube-api-access-kdj44" (OuterVolumeSpecName: "kube-api-access-kdj44") pod "43ab1952-2f0d-47a1-8daa-83efcac38e42" (UID: "43ab1952-2f0d-47a1-8daa-83efcac38e42"). InnerVolumeSpecName "kube-api-access-kdj44". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:53:11 crc kubenswrapper[4772]: I1122 11:53:11.509004 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43ab1952-2f0d-47a1-8daa-83efcac38e42-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "43ab1952-2f0d-47a1-8daa-83efcac38e42" (UID: "43ab1952-2f0d-47a1-8daa-83efcac38e42"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 11:53:11 crc kubenswrapper[4772]: I1122 11:53:11.565035 4772 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/43ab1952-2f0d-47a1-8daa-83efcac38e42-crc-storage\") on node \"crc\" DevicePath \"\"" Nov 22 11:53:11 crc kubenswrapper[4772]: I1122 11:53:11.565099 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdj44\" (UniqueName: \"kubernetes.io/projected/43ab1952-2f0d-47a1-8daa-83efcac38e42-kube-api-access-kdj44\") on node \"crc\" DevicePath \"\"" Nov 22 11:53:12 crc kubenswrapper[4772]: I1122 11:53:12.023974 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-bw2wx" event={"ID":"43ab1952-2f0d-47a1-8daa-83efcac38e42","Type":"ContainerDied","Data":"668a5dc0398bf2bf2ed3781c1b637b578a4f2f6b1cb58ef82ab31440c5b925a5"} Nov 22 11:53:12 crc kubenswrapper[4772]: I1122 11:53:12.024058 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="668a5dc0398bf2bf2ed3781c1b637b578a4f2f6b1cb58ef82ab31440c5b925a5" Nov 22 11:53:12 crc kubenswrapper[4772]: I1122 11:53:12.024304 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-bw2wx" Nov 22 11:53:13 crc kubenswrapper[4772]: I1122 11:53:13.030779 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ww5kr" podUID="45e0f526-cf03-435b-a8ce-af08fbf46c90" containerName="registry-server" containerID="cri-o://7bd818e8e62516cc097d76e9955c973c8f23ac3661f57e598c14973f922a8243" gracePeriod=2 Nov 22 11:53:13 crc kubenswrapper[4772]: I1122 11:53:13.483685 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ww5kr" Nov 22 11:53:13 crc kubenswrapper[4772]: I1122 11:53:13.605273 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45e0f526-cf03-435b-a8ce-af08fbf46c90-utilities\") pod \"45e0f526-cf03-435b-a8ce-af08fbf46c90\" (UID: \"45e0f526-cf03-435b-a8ce-af08fbf46c90\") " Nov 22 11:53:13 crc kubenswrapper[4772]: I1122 11:53:13.605354 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45e0f526-cf03-435b-a8ce-af08fbf46c90-catalog-content\") pod \"45e0f526-cf03-435b-a8ce-af08fbf46c90\" (UID: \"45e0f526-cf03-435b-a8ce-af08fbf46c90\") " Nov 22 11:53:13 crc kubenswrapper[4772]: I1122 11:53:13.605402 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jphn\" (UniqueName: \"kubernetes.io/projected/45e0f526-cf03-435b-a8ce-af08fbf46c90-kube-api-access-7jphn\") pod \"45e0f526-cf03-435b-a8ce-af08fbf46c90\" (UID: \"45e0f526-cf03-435b-a8ce-af08fbf46c90\") " Nov 22 11:53:13 crc kubenswrapper[4772]: I1122 11:53:13.606283 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45e0f526-cf03-435b-a8ce-af08fbf46c90-utilities" (OuterVolumeSpecName: "utilities") pod "45e0f526-cf03-435b-a8ce-af08fbf46c90" (UID: "45e0f526-cf03-435b-a8ce-af08fbf46c90"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:53:13 crc kubenswrapper[4772]: I1122 11:53:13.612598 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45e0f526-cf03-435b-a8ce-af08fbf46c90-kube-api-access-7jphn" (OuterVolumeSpecName: "kube-api-access-7jphn") pod "45e0f526-cf03-435b-a8ce-af08fbf46c90" (UID: "45e0f526-cf03-435b-a8ce-af08fbf46c90"). InnerVolumeSpecName "kube-api-access-7jphn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:53:13 crc kubenswrapper[4772]: I1122 11:53:13.624962 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45e0f526-cf03-435b-a8ce-af08fbf46c90-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "45e0f526-cf03-435b-a8ce-af08fbf46c90" (UID: "45e0f526-cf03-435b-a8ce-af08fbf46c90"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:53:13 crc kubenswrapper[4772]: I1122 11:53:13.707267 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45e0f526-cf03-435b-a8ce-af08fbf46c90-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 11:53:13 crc kubenswrapper[4772]: I1122 11:53:13.707319 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45e0f526-cf03-435b-a8ce-af08fbf46c90-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 11:53:13 crc kubenswrapper[4772]: I1122 11:53:13.707336 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jphn\" (UniqueName: \"kubernetes.io/projected/45e0f526-cf03-435b-a8ce-af08fbf46c90-kube-api-access-7jphn\") on node \"crc\" DevicePath \"\"" Nov 22 11:53:14 crc kubenswrapper[4772]: I1122 11:53:14.041489 4772 generic.go:334] "Generic (PLEG): container finished" podID="45e0f526-cf03-435b-a8ce-af08fbf46c90" containerID="7bd818e8e62516cc097d76e9955c973c8f23ac3661f57e598c14973f922a8243" exitCode=0 Nov 22 11:53:14 crc kubenswrapper[4772]: I1122 11:53:14.041544 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ww5kr" event={"ID":"45e0f526-cf03-435b-a8ce-af08fbf46c90","Type":"ContainerDied","Data":"7bd818e8e62516cc097d76e9955c973c8f23ac3661f57e598c14973f922a8243"} Nov 22 11:53:14 crc kubenswrapper[4772]: I1122 11:53:14.041586 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ww5kr" event={"ID":"45e0f526-cf03-435b-a8ce-af08fbf46c90","Type":"ContainerDied","Data":"faff50d1dcc0efe3c0265347ff8e2c247948608f40d8c10ba3da600a0986703d"} Nov 22 11:53:14 crc kubenswrapper[4772]: I1122 11:53:14.041591 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ww5kr" Nov 22 11:53:14 crc kubenswrapper[4772]: I1122 11:53:14.041605 4772 scope.go:117] "RemoveContainer" containerID="7bd818e8e62516cc097d76e9955c973c8f23ac3661f57e598c14973f922a8243" Nov 22 11:53:14 crc kubenswrapper[4772]: I1122 11:53:14.065920 4772 scope.go:117] "RemoveContainer" containerID="0c24d5164bee2de311ad416ed4ffac28be96cc9534d9ef1192f03d97148cd7c7" Nov 22 11:53:14 crc kubenswrapper[4772]: I1122 11:53:14.081702 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ww5kr"] Nov 22 11:53:14 crc kubenswrapper[4772]: I1122 11:53:14.089860 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ww5kr"] Nov 22 11:53:14 crc kubenswrapper[4772]: I1122 11:53:14.103769 4772 scope.go:117] "RemoveContainer" containerID="60929ffb052c08bc75eb54651dc063f809cbeb59c78925f8309707fed5c84197" Nov 22 11:53:14 crc kubenswrapper[4772]: I1122 11:53:14.126292 4772 scope.go:117] "RemoveContainer" containerID="7bd818e8e62516cc097d76e9955c973c8f23ac3661f57e598c14973f922a8243" Nov 22 11:53:14 crc kubenswrapper[4772]: E1122 11:53:14.127154 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bd818e8e62516cc097d76e9955c973c8f23ac3661f57e598c14973f922a8243\": container with ID starting with 7bd818e8e62516cc097d76e9955c973c8f23ac3661f57e598c14973f922a8243 not found: ID does not exist" containerID="7bd818e8e62516cc097d76e9955c973c8f23ac3661f57e598c14973f922a8243" Nov 22 11:53:14 crc kubenswrapper[4772]: I1122 11:53:14.127216 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bd818e8e62516cc097d76e9955c973c8f23ac3661f57e598c14973f922a8243"} err="failed to get container status \"7bd818e8e62516cc097d76e9955c973c8f23ac3661f57e598c14973f922a8243\": rpc error: code = NotFound desc = could not find container \"7bd818e8e62516cc097d76e9955c973c8f23ac3661f57e598c14973f922a8243\": container with ID starting with 7bd818e8e62516cc097d76e9955c973c8f23ac3661f57e598c14973f922a8243 not found: ID does not exist" Nov 22 11:53:14 crc kubenswrapper[4772]: I1122 11:53:14.127254 4772 scope.go:117] "RemoveContainer" containerID="0c24d5164bee2de311ad416ed4ffac28be96cc9534d9ef1192f03d97148cd7c7" Nov 22 11:53:14 crc kubenswrapper[4772]: E1122 11:53:14.127588 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c24d5164bee2de311ad416ed4ffac28be96cc9534d9ef1192f03d97148cd7c7\": container with ID starting with 0c24d5164bee2de311ad416ed4ffac28be96cc9534d9ef1192f03d97148cd7c7 not found: ID does not exist" containerID="0c24d5164bee2de311ad416ed4ffac28be96cc9534d9ef1192f03d97148cd7c7" Nov 22 11:53:14 crc kubenswrapper[4772]: I1122 11:53:14.127617 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c24d5164bee2de311ad416ed4ffac28be96cc9534d9ef1192f03d97148cd7c7"} err="failed to get container status \"0c24d5164bee2de311ad416ed4ffac28be96cc9534d9ef1192f03d97148cd7c7\": rpc error: code = NotFound desc = could not find container \"0c24d5164bee2de311ad416ed4ffac28be96cc9534d9ef1192f03d97148cd7c7\": container with ID starting with 0c24d5164bee2de311ad416ed4ffac28be96cc9534d9ef1192f03d97148cd7c7 not found: ID does not exist" Nov 22 11:53:14 crc kubenswrapper[4772]: I1122 11:53:14.127632 4772 scope.go:117] "RemoveContainer" 
containerID="60929ffb052c08bc75eb54651dc063f809cbeb59c78925f8309707fed5c84197" Nov 22 11:53:14 crc kubenswrapper[4772]: E1122 11:53:14.127900 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60929ffb052c08bc75eb54651dc063f809cbeb59c78925f8309707fed5c84197\": container with ID starting with 60929ffb052c08bc75eb54651dc063f809cbeb59c78925f8309707fed5c84197 not found: ID does not exist" containerID="60929ffb052c08bc75eb54651dc063f809cbeb59c78925f8309707fed5c84197" Nov 22 11:53:14 crc kubenswrapper[4772]: I1122 11:53:14.127924 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60929ffb052c08bc75eb54651dc063f809cbeb59c78925f8309707fed5c84197"} err="failed to get container status \"60929ffb052c08bc75eb54651dc063f809cbeb59c78925f8309707fed5c84197\": rpc error: code = NotFound desc = could not find container \"60929ffb052c08bc75eb54651dc063f809cbeb59c78925f8309707fed5c84197\": container with ID starting with 60929ffb052c08bc75eb54651dc063f809cbeb59c78925f8309707fed5c84197 not found: ID does not exist" Nov 22 11:53:15 crc kubenswrapper[4772]: I1122 11:53:15.424896 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45e0f526-cf03-435b-a8ce-af08fbf46c90" path="/var/lib/kubelet/pods/45e0f526-cf03-435b-a8ce-af08fbf46c90/volumes" Nov 22 11:53:22 crc kubenswrapper[4772]: I1122 11:53:22.414967 4772 scope.go:117] "RemoveContainer" containerID="409b499b44a59e26c78fa7bc525e1be7b00da254f59f8cd9f76c1032d1e4742d" Nov 22 11:53:22 crc kubenswrapper[4772]: E1122 11:53:22.416263 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:53:33 crc kubenswrapper[4772]: I1122 11:53:33.414413 4772 scope.go:117] "RemoveContainer" containerID="409b499b44a59e26c78fa7bc525e1be7b00da254f59f8cd9f76c1032d1e4742d" Nov 22 11:53:33 crc kubenswrapper[4772]: E1122 11:53:33.415335 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:53:46 crc kubenswrapper[4772]: I1122 11:53:46.414236 4772 scope.go:117] "RemoveContainer" containerID="409b499b44a59e26c78fa7bc525e1be7b00da254f59f8cd9f76c1032d1e4742d" Nov 22 11:53:46 crc kubenswrapper[4772]: E1122 11:53:46.415494 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:53:58 crc kubenswrapper[4772]: I1122 11:53:58.414811 4772 scope.go:117] "RemoveContainer" 
containerID="409b499b44a59e26c78fa7bc525e1be7b00da254f59f8cd9f76c1032d1e4742d" Nov 22 11:53:58 crc kubenswrapper[4772]: E1122 11:53:58.415948 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:54:09 crc kubenswrapper[4772]: I1122 11:54:09.414557 4772 scope.go:117] "RemoveContainer" containerID="409b499b44a59e26c78fa7bc525e1be7b00da254f59f8cd9f76c1032d1e4742d" Nov 22 11:54:09 crc kubenswrapper[4772]: E1122 11:54:09.415937 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:54:21 crc kubenswrapper[4772]: I1122 11:54:21.422663 4772 scope.go:117] "RemoveContainer" containerID="409b499b44a59e26c78fa7bc525e1be7b00da254f59f8cd9f76c1032d1e4742d" Nov 22 11:54:21 crc kubenswrapper[4772]: E1122 11:54:21.423773 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:54:33 crc kubenswrapper[4772]: I1122 11:54:33.414467 4772 scope.go:117] "RemoveContainer" containerID="409b499b44a59e26c78fa7bc525e1be7b00da254f59f8cd9f76c1032d1e4742d" Nov 22 11:54:33 crc kubenswrapper[4772]: E1122 11:54:33.415567 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:54:44 crc kubenswrapper[4772]: I1122 11:54:44.414696 4772 scope.go:117] "RemoveContainer" containerID="409b499b44a59e26c78fa7bc525e1be7b00da254f59f8cd9f76c1032d1e4742d" Nov 22 11:54:44 crc kubenswrapper[4772]: E1122 11:54:44.415628 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:54:55 crc kubenswrapper[4772]: I1122 11:54:55.414546 4772 scope.go:117] "RemoveContainer" containerID="409b499b44a59e26c78fa7bc525e1be7b00da254f59f8cd9f76c1032d1e4742d" Nov 22 11:54:55 crc kubenswrapper[4772]: E1122 11:54:55.415631 4772 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:55:08 crc kubenswrapper[4772]: I1122 11:55:08.413892 4772 scope.go:117] "RemoveContainer" containerID="409b499b44a59e26c78fa7bc525e1be7b00da254f59f8cd9f76c1032d1e4742d" Nov 22 11:55:08 crc kubenswrapper[4772]: E1122 11:55:08.416378 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:55:22 crc kubenswrapper[4772]: I1122 11:55:22.414205 4772 scope.go:117] "RemoveContainer" containerID="409b499b44a59e26c78fa7bc525e1be7b00da254f59f8cd9f76c1032d1e4742d" Nov 22 11:55:22 crc kubenswrapper[4772]: E1122 11:55:22.416415 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:55:35 crc kubenswrapper[4772]: I1122 11:55:35.413695 4772 scope.go:117] "RemoveContainer" containerID="409b499b44a59e26c78fa7bc525e1be7b00da254f59f8cd9f76c1032d1e4742d" Nov 22 11:55:35 crc kubenswrapper[4772]: E1122 11:55:35.414706 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:55:47 crc kubenswrapper[4772]: I1122 11:55:47.417312 4772 scope.go:117] "RemoveContainer" containerID="409b499b44a59e26c78fa7bc525e1be7b00da254f59f8cd9f76c1032d1e4742d" Nov 22 11:55:47 crc kubenswrapper[4772]: E1122 11:55:47.418289 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:55:59 crc kubenswrapper[4772]: I1122 11:55:59.415066 4772 scope.go:117] "RemoveContainer" containerID="409b499b44a59e26c78fa7bc525e1be7b00da254f59f8cd9f76c1032d1e4742d" Nov 22 11:55:59 crc kubenswrapper[4772]: E1122 11:55:59.416400 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:56:05 crc kubenswrapper[4772]: I1122 11:56:05.752821 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9srp7"] Nov 22 11:56:05 crc kubenswrapper[4772]: E1122 11:56:05.754736 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43ab1952-2f0d-47a1-8daa-83efcac38e42" containerName="storage" Nov 22 11:56:05 crc kubenswrapper[4772]: I1122 11:56:05.754790 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="43ab1952-2f0d-47a1-8daa-83efcac38e42" containerName="storage" Nov 22 11:56:05 crc kubenswrapper[4772]: E1122 11:56:05.754824 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45e0f526-cf03-435b-a8ce-af08fbf46c90" containerName="extract-utilities" Nov 22 11:56:05 crc kubenswrapper[4772]: I1122 11:56:05.754837 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="45e0f526-cf03-435b-a8ce-af08fbf46c90" containerName="extract-utilities" Nov 22 11:56:05 crc kubenswrapper[4772]: E1122 11:56:05.754866 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45e0f526-cf03-435b-a8ce-af08fbf46c90" containerName="extract-content" Nov 22 11:56:05 crc kubenswrapper[4772]: I1122 11:56:05.754894 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="45e0f526-cf03-435b-a8ce-af08fbf46c90" containerName="extract-content" Nov 22 11:56:05 crc kubenswrapper[4772]: E1122 11:56:05.754926 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45e0f526-cf03-435b-a8ce-af08fbf46c90" containerName="registry-server" Nov 22 11:56:05 crc kubenswrapper[4772]: I1122 11:56:05.754937 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="45e0f526-cf03-435b-a8ce-af08fbf46c90" containerName="registry-server" Nov 22 11:56:05 crc kubenswrapper[4772]: I1122 11:56:05.755272 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="45e0f526-cf03-435b-a8ce-af08fbf46c90" containerName="registry-server" Nov 22 11:56:05 crc kubenswrapper[4772]: I1122 11:56:05.755296 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="43ab1952-2f0d-47a1-8daa-83efcac38e42" containerName="storage" Nov 22 11:56:05 crc kubenswrapper[4772]: I1122 11:56:05.757202 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9srp7" Nov 22 11:56:05 crc kubenswrapper[4772]: I1122 11:56:05.762415 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9srp7"] Nov 22 11:56:05 crc kubenswrapper[4772]: I1122 11:56:05.889570 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cccefc9-d001-4d07-b743-1ed43cb3fcb6-catalog-content\") pod \"redhat-operators-9srp7\" (UID: \"9cccefc9-d001-4d07-b743-1ed43cb3fcb6\") " pod="openshift-marketplace/redhat-operators-9srp7" Nov 22 11:56:05 crc kubenswrapper[4772]: I1122 11:56:05.889648 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cccefc9-d001-4d07-b743-1ed43cb3fcb6-utilities\") pod \"redhat-operators-9srp7\" (UID: \"9cccefc9-d001-4d07-b743-1ed43cb3fcb6\") " pod="openshift-marketplace/redhat-operators-9srp7" Nov 22 11:56:05 crc kubenswrapper[4772]: I1122 11:56:05.889696 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hkh8\" (UniqueName: \"kubernetes.io/projected/9cccefc9-d001-4d07-b743-1ed43cb3fcb6-kube-api-access-9hkh8\") pod \"redhat-operators-9srp7\" (UID: \"9cccefc9-d001-4d07-b743-1ed43cb3fcb6\") " pod="openshift-marketplace/redhat-operators-9srp7" Nov 22 11:56:05 crc kubenswrapper[4772]: I1122 11:56:05.990726 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cccefc9-d001-4d07-b743-1ed43cb3fcb6-catalog-content\") pod \"redhat-operators-9srp7\" (UID: \"9cccefc9-d001-4d07-b743-1ed43cb3fcb6\") " pod="openshift-marketplace/redhat-operators-9srp7" Nov 22 11:56:05 crc kubenswrapper[4772]: I1122 11:56:05.990828 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cccefc9-d001-4d07-b743-1ed43cb3fcb6-utilities\") pod \"redhat-operators-9srp7\" (UID: \"9cccefc9-d001-4d07-b743-1ed43cb3fcb6\") " pod="openshift-marketplace/redhat-operators-9srp7" Nov 22 11:56:05 crc kubenswrapper[4772]: I1122 11:56:05.990867 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hkh8\" (UniqueName: \"kubernetes.io/projected/9cccefc9-d001-4d07-b743-1ed43cb3fcb6-kube-api-access-9hkh8\") pod \"redhat-operators-9srp7\" (UID: \"9cccefc9-d001-4d07-b743-1ed43cb3fcb6\") " pod="openshift-marketplace/redhat-operators-9srp7" Nov 22 11:56:05 crc kubenswrapper[4772]: I1122 11:56:05.991639 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cccefc9-d001-4d07-b743-1ed43cb3fcb6-utilities\") pod \"redhat-operators-9srp7\" (UID: \"9cccefc9-d001-4d07-b743-1ed43cb3fcb6\") " pod="openshift-marketplace/redhat-operators-9srp7" Nov 22 11:56:05 crc kubenswrapper[4772]: I1122 11:56:05.991764 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cccefc9-d001-4d07-b743-1ed43cb3fcb6-catalog-content\") pod \"redhat-operators-9srp7\" (UID: \"9cccefc9-d001-4d07-b743-1ed43cb3fcb6\") " pod="openshift-marketplace/redhat-operators-9srp7" Nov 22 11:56:06 crc kubenswrapper[4772]: I1122 11:56:06.020410 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-9hkh8\" (UniqueName: \"kubernetes.io/projected/9cccefc9-d001-4d07-b743-1ed43cb3fcb6-kube-api-access-9hkh8\") pod \"redhat-operators-9srp7\" (UID: \"9cccefc9-d001-4d07-b743-1ed43cb3fcb6\") " pod="openshift-marketplace/redhat-operators-9srp7" Nov 22 11:56:06 crc kubenswrapper[4772]: I1122 11:56:06.114827 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9srp7" Nov 22 11:56:06 crc kubenswrapper[4772]: I1122 11:56:06.579704 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9srp7"] Nov 22 11:56:06 crc kubenswrapper[4772]: I1122 11:56:06.701872 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9srp7" event={"ID":"9cccefc9-d001-4d07-b743-1ed43cb3fcb6","Type":"ContainerStarted","Data":"ce349fab48c674d905bf3ffdc3bf3b7e01f242275c5b9eb9ed42a9abf4c1b68c"} Nov 22 11:56:07 crc kubenswrapper[4772]: I1122 11:56:07.718907 4772 generic.go:334] "Generic (PLEG): container finished" podID="9cccefc9-d001-4d07-b743-1ed43cb3fcb6" containerID="72c8c2e9fbca8c5d4775c523329ac07adff500ab52cc33052dfb1a9c5594d3d7" exitCode=0 Nov 22 11:56:07 crc kubenswrapper[4772]: I1122 11:56:07.719348 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9srp7" event={"ID":"9cccefc9-d001-4d07-b743-1ed43cb3fcb6","Type":"ContainerDied","Data":"72c8c2e9fbca8c5d4775c523329ac07adff500ab52cc33052dfb1a9c5594d3d7"} Nov 22 11:56:09 crc kubenswrapper[4772]: I1122 11:56:09.741938 4772 generic.go:334] "Generic (PLEG): container finished" podID="9cccefc9-d001-4d07-b743-1ed43cb3fcb6" containerID="d9343364b48ae93be545ba3dc285698f105f8b5fb58db012a93423bbde71a4a4" exitCode=0 Nov 22 11:56:09 crc kubenswrapper[4772]: I1122 11:56:09.742000 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9srp7" event={"ID":"9cccefc9-d001-4d07-b743-1ed43cb3fcb6","Type":"ContainerDied","Data":"d9343364b48ae93be545ba3dc285698f105f8b5fb58db012a93423bbde71a4a4"} Nov 22 11:56:10 crc kubenswrapper[4772]: I1122 11:56:10.413967 4772 scope.go:117] "RemoveContainer" containerID="409b499b44a59e26c78fa7bc525e1be7b00da254f59f8cd9f76c1032d1e4742d" Nov 22 11:56:10 crc kubenswrapper[4772]: E1122 11:56:10.414422 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:56:11 crc kubenswrapper[4772]: I1122 11:56:11.769614 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9srp7" event={"ID":"9cccefc9-d001-4d07-b743-1ed43cb3fcb6","Type":"ContainerStarted","Data":"164e421ceee32d167c6c11bff66d5a3adfc513ded065c00f19db6fb00e77a4c7"} Nov 22 11:56:11 crc kubenswrapper[4772]: I1122 11:56:11.792307 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9srp7" podStartSLOduration=4.009584095 podStartE2EDuration="6.792287263s" podCreationTimestamp="2025-11-22 11:56:05 +0000 UTC" firstStartedPulling="2025-11-22 11:56:07.722585218 +0000 UTC m=+4687.962029732" lastFinishedPulling="2025-11-22 11:56:10.505288396 +0000 UTC m=+4690.744732900" 
observedRunningTime="2025-11-22 11:56:11.785449424 +0000 UTC m=+4692.024893938" watchObservedRunningTime="2025-11-22 11:56:11.792287263 +0000 UTC m=+4692.031731757" Nov 22 11:56:16 crc kubenswrapper[4772]: I1122 11:56:16.115321 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9srp7" Nov 22 11:56:16 crc kubenswrapper[4772]: I1122 11:56:16.115704 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9srp7" Nov 22 11:56:17 crc kubenswrapper[4772]: I1122 11:56:17.164599 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9srp7" podUID="9cccefc9-d001-4d07-b743-1ed43cb3fcb6" containerName="registry-server" probeResult="failure" output=< Nov 22 11:56:17 crc kubenswrapper[4772]: timeout: failed to connect service ":50051" within 1s Nov 22 11:56:17 crc kubenswrapper[4772]: > Nov 22 11:56:19 crc kubenswrapper[4772]: I1122 11:56:19.292874 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-mxp6s"] Nov 22 11:56:19 crc kubenswrapper[4772]: I1122 11:56:19.295925 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d7b5456f5-mxp6s" Nov 22 11:56:19 crc kubenswrapper[4772]: I1122 11:56:19.299174 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Nov 22 11:56:19 crc kubenswrapper[4772]: I1122 11:56:19.299903 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Nov 22 11:56:19 crc kubenswrapper[4772]: I1122 11:56:19.300383 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-427dz" Nov 22 11:56:19 crc kubenswrapper[4772]: I1122 11:56:19.314898 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Nov 22 11:56:19 crc kubenswrapper[4772]: I1122 11:56:19.315685 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Nov 22 11:56:19 crc kubenswrapper[4772]: I1122 11:56:19.322302 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-mxp6s"] Nov 22 11:56:19 crc kubenswrapper[4772]: I1122 11:56:19.322518 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f37f2c5d-74b6-4160-bdf7-997efb48d4d3-dns-svc\") pod \"dnsmasq-dns-5d7b5456f5-mxp6s\" (UID: \"f37f2c5d-74b6-4160-bdf7-997efb48d4d3\") " pod="openstack/dnsmasq-dns-5d7b5456f5-mxp6s" Nov 22 11:56:19 crc kubenswrapper[4772]: I1122 11:56:19.322595 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b78z\" (UniqueName: \"kubernetes.io/projected/f37f2c5d-74b6-4160-bdf7-997efb48d4d3-kube-api-access-8b78z\") pod \"dnsmasq-dns-5d7b5456f5-mxp6s\" (UID: \"f37f2c5d-74b6-4160-bdf7-997efb48d4d3\") " pod="openstack/dnsmasq-dns-5d7b5456f5-mxp6s" Nov 22 11:56:19 crc kubenswrapper[4772]: I1122 11:56:19.322636 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f37f2c5d-74b6-4160-bdf7-997efb48d4d3-config\") pod \"dnsmasq-dns-5d7b5456f5-mxp6s\" (UID: \"f37f2c5d-74b6-4160-bdf7-997efb48d4d3\") " pod="openstack/dnsmasq-dns-5d7b5456f5-mxp6s" Nov 22 11:56:19 crc kubenswrapper[4772]: I1122 11:56:19.423889 4772 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f37f2c5d-74b6-4160-bdf7-997efb48d4d3-config\") pod \"dnsmasq-dns-5d7b5456f5-mxp6s\" (UID: \"f37f2c5d-74b6-4160-bdf7-997efb48d4d3\") " pod="openstack/dnsmasq-dns-5d7b5456f5-mxp6s" Nov 22 11:56:19 crc kubenswrapper[4772]: I1122 11:56:19.423984 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f37f2c5d-74b6-4160-bdf7-997efb48d4d3-dns-svc\") pod \"dnsmasq-dns-5d7b5456f5-mxp6s\" (UID: \"f37f2c5d-74b6-4160-bdf7-997efb48d4d3\") " pod="openstack/dnsmasq-dns-5d7b5456f5-mxp6s" Nov 22 11:56:19 crc kubenswrapper[4772]: I1122 11:56:19.424103 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8b78z\" (UniqueName: \"kubernetes.io/projected/f37f2c5d-74b6-4160-bdf7-997efb48d4d3-kube-api-access-8b78z\") pod \"dnsmasq-dns-5d7b5456f5-mxp6s\" (UID: \"f37f2c5d-74b6-4160-bdf7-997efb48d4d3\") " pod="openstack/dnsmasq-dns-5d7b5456f5-mxp6s" Nov 22 11:56:19 crc kubenswrapper[4772]: I1122 11:56:19.425240 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f37f2c5d-74b6-4160-bdf7-997efb48d4d3-dns-svc\") pod \"dnsmasq-dns-5d7b5456f5-mxp6s\" (UID: \"f37f2c5d-74b6-4160-bdf7-997efb48d4d3\") " pod="openstack/dnsmasq-dns-5d7b5456f5-mxp6s" Nov 22 11:56:19 crc kubenswrapper[4772]: I1122 11:56:19.426234 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f37f2c5d-74b6-4160-bdf7-997efb48d4d3-config\") pod \"dnsmasq-dns-5d7b5456f5-mxp6s\" (UID: \"f37f2c5d-74b6-4160-bdf7-997efb48d4d3\") " pod="openstack/dnsmasq-dns-5d7b5456f5-mxp6s" Nov 22 11:56:19 crc kubenswrapper[4772]: I1122 11:56:19.491361 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8b78z\" (UniqueName: \"kubernetes.io/projected/f37f2c5d-74b6-4160-bdf7-997efb48d4d3-kube-api-access-8b78z\") pod \"dnsmasq-dns-5d7b5456f5-mxp6s\" (UID: \"f37f2c5d-74b6-4160-bdf7-997efb48d4d3\") " pod="openstack/dnsmasq-dns-5d7b5456f5-mxp6s" Nov 22 11:56:19 crc kubenswrapper[4772]: I1122 11:56:19.587842 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-cpjff"] Nov 22 11:56:19 crc kubenswrapper[4772]: I1122 11:56:19.589756 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-98ddfc8f-cpjff" Nov 22 11:56:19 crc kubenswrapper[4772]: I1122 11:56:19.600109 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-cpjff"] Nov 22 11:56:19 crc kubenswrapper[4772]: I1122 11:56:19.642357 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8edd7b5d-fc65-408e-bf01-556a8e863a13-config\") pod \"dnsmasq-dns-98ddfc8f-cpjff\" (UID: \"8edd7b5d-fc65-408e-bf01-556a8e863a13\") " pod="openstack/dnsmasq-dns-98ddfc8f-cpjff" Nov 22 11:56:19 crc kubenswrapper[4772]: I1122 11:56:19.642448 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jg85x\" (UniqueName: \"kubernetes.io/projected/8edd7b5d-fc65-408e-bf01-556a8e863a13-kube-api-access-jg85x\") pod \"dnsmasq-dns-98ddfc8f-cpjff\" (UID: \"8edd7b5d-fc65-408e-bf01-556a8e863a13\") " pod="openstack/dnsmasq-dns-98ddfc8f-cpjff" Nov 22 11:56:19 crc kubenswrapper[4772]: I1122 11:56:19.642495 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8edd7b5d-fc65-408e-bf01-556a8e863a13-dns-svc\") pod \"dnsmasq-dns-98ddfc8f-cpjff\" (UID: \"8edd7b5d-fc65-408e-bf01-556a8e863a13\") " pod="openstack/dnsmasq-dns-98ddfc8f-cpjff" Nov 22 11:56:19 crc kubenswrapper[4772]: I1122 11:56:19.643368 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d7b5456f5-mxp6s" Nov 22 11:56:19 crc kubenswrapper[4772]: I1122 11:56:19.745177 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8edd7b5d-fc65-408e-bf01-556a8e863a13-dns-svc\") pod \"dnsmasq-dns-98ddfc8f-cpjff\" (UID: \"8edd7b5d-fc65-408e-bf01-556a8e863a13\") " pod="openstack/dnsmasq-dns-98ddfc8f-cpjff" Nov 22 11:56:19 crc kubenswrapper[4772]: I1122 11:56:19.745779 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8edd7b5d-fc65-408e-bf01-556a8e863a13-config\") pod \"dnsmasq-dns-98ddfc8f-cpjff\" (UID: \"8edd7b5d-fc65-408e-bf01-556a8e863a13\") " pod="openstack/dnsmasq-dns-98ddfc8f-cpjff" Nov 22 11:56:19 crc kubenswrapper[4772]: I1122 11:56:19.745826 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jg85x\" (UniqueName: \"kubernetes.io/projected/8edd7b5d-fc65-408e-bf01-556a8e863a13-kube-api-access-jg85x\") pod \"dnsmasq-dns-98ddfc8f-cpjff\" (UID: \"8edd7b5d-fc65-408e-bf01-556a8e863a13\") " pod="openstack/dnsmasq-dns-98ddfc8f-cpjff" Nov 22 11:56:19 crc kubenswrapper[4772]: I1122 11:56:19.746906 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8edd7b5d-fc65-408e-bf01-556a8e863a13-dns-svc\") pod \"dnsmasq-dns-98ddfc8f-cpjff\" (UID: \"8edd7b5d-fc65-408e-bf01-556a8e863a13\") " pod="openstack/dnsmasq-dns-98ddfc8f-cpjff" Nov 22 11:56:19 crc kubenswrapper[4772]: I1122 11:56:19.747172 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8edd7b5d-fc65-408e-bf01-556a8e863a13-config\") pod \"dnsmasq-dns-98ddfc8f-cpjff\" (UID: \"8edd7b5d-fc65-408e-bf01-556a8e863a13\") " pod="openstack/dnsmasq-dns-98ddfc8f-cpjff" Nov 22 11:56:19 crc kubenswrapper[4772]: I1122 11:56:19.805171 4772 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jg85x\" (UniqueName: \"kubernetes.io/projected/8edd7b5d-fc65-408e-bf01-556a8e863a13-kube-api-access-jg85x\") pod \"dnsmasq-dns-98ddfc8f-cpjff\" (UID: \"8edd7b5d-fc65-408e-bf01-556a8e863a13\") " pod="openstack/dnsmasq-dns-98ddfc8f-cpjff" Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.020501 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-98ddfc8f-cpjff" Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.189961 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-mxp6s"] Nov 22 11:56:20 crc kubenswrapper[4772]: W1122 11:56:20.206231 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf37f2c5d_74b6_4160_bdf7_997efb48d4d3.slice/crio-f1f56bae2749e30969cf26218de9c23afb689c2de71e07d1880b866bbf7b02ee WatchSource:0}: Error finding container f1f56bae2749e30969cf26218de9c23afb689c2de71e07d1880b866bbf7b02ee: Status 404 returned error can't find the container with id f1f56bae2749e30969cf26218de9c23afb689c2de71e07d1880b866bbf7b02ee Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.403962 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.405905 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.408011 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.408650 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.409241 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.415130 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-qfbpk" Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.415318 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.417971 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.475462 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-cpjff"] Nov 22 11:56:20 crc kubenswrapper[4772]: W1122 11:56:20.477891 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8edd7b5d_fc65_408e_bf01_556a8e863a13.slice/crio-6fc014614bdf542c3fe5f048d610b24c27a560e04faa2abed0238a93644326aa WatchSource:0}: Error finding container 6fc014614bdf542c3fe5f048d610b24c27a560e04faa2abed0238a93644326aa: Status 404 returned error can't find the container with id 6fc014614bdf542c3fe5f048d610b24c27a560e04faa2abed0238a93644326aa Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.560840 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f25f971f-d811-4369-9c23-dbb1243592e9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: 
\"f25f971f-d811-4369-9c23-dbb1243592e9\") " pod="openstack/rabbitmq-server-0" Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.560886 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f25f971f-d811-4369-9c23-dbb1243592e9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f25f971f-d811-4369-9c23-dbb1243592e9\") " pod="openstack/rabbitmq-server-0" Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.560928 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hvjg\" (UniqueName: \"kubernetes.io/projected/f25f971f-d811-4369-9c23-dbb1243592e9-kube-api-access-7hvjg\") pod \"rabbitmq-server-0\" (UID: \"f25f971f-d811-4369-9c23-dbb1243592e9\") " pod="openstack/rabbitmq-server-0" Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.560950 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f25f971f-d811-4369-9c23-dbb1243592e9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f25f971f-d811-4369-9c23-dbb1243592e9\") " pod="openstack/rabbitmq-server-0" Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.560998 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b6635d4a-dbaa-438f-88ee-01cc40d29e9d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b6635d4a-dbaa-438f-88ee-01cc40d29e9d\") pod \"rabbitmq-server-0\" (UID: \"f25f971f-d811-4369-9c23-dbb1243592e9\") " pod="openstack/rabbitmq-server-0" Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.561057 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f25f971f-d811-4369-9c23-dbb1243592e9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f25f971f-d811-4369-9c23-dbb1243592e9\") " pod="openstack/rabbitmq-server-0" Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.561084 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f25f971f-d811-4369-9c23-dbb1243592e9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f25f971f-d811-4369-9c23-dbb1243592e9\") " pod="openstack/rabbitmq-server-0" Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.561118 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f25f971f-d811-4369-9c23-dbb1243592e9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f25f971f-d811-4369-9c23-dbb1243592e9\") " pod="openstack/rabbitmq-server-0" Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.561154 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f25f971f-d811-4369-9c23-dbb1243592e9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f25f971f-d811-4369-9c23-dbb1243592e9\") " pod="openstack/rabbitmq-server-0" Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.663092 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b6635d4a-dbaa-438f-88ee-01cc40d29e9d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b6635d4a-dbaa-438f-88ee-01cc40d29e9d\") pod \"rabbitmq-server-0\" 
(UID: \"f25f971f-d811-4369-9c23-dbb1243592e9\") " pod="openstack/rabbitmq-server-0" Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.663186 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f25f971f-d811-4369-9c23-dbb1243592e9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f25f971f-d811-4369-9c23-dbb1243592e9\") " pod="openstack/rabbitmq-server-0" Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.663221 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f25f971f-d811-4369-9c23-dbb1243592e9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f25f971f-d811-4369-9c23-dbb1243592e9\") " pod="openstack/rabbitmq-server-0" Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.663744 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f25f971f-d811-4369-9c23-dbb1243592e9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f25f971f-d811-4369-9c23-dbb1243592e9\") " pod="openstack/rabbitmq-server-0" Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.664510 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f25f971f-d811-4369-9c23-dbb1243592e9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f25f971f-d811-4369-9c23-dbb1243592e9\") " pod="openstack/rabbitmq-server-0" Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.663295 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f25f971f-d811-4369-9c23-dbb1243592e9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f25f971f-d811-4369-9c23-dbb1243592e9\") " pod="openstack/rabbitmq-server-0" Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.664586 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f25f971f-d811-4369-9c23-dbb1243592e9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f25f971f-d811-4369-9c23-dbb1243592e9\") " pod="openstack/rabbitmq-server-0" Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.664624 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f25f971f-d811-4369-9c23-dbb1243592e9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f25f971f-d811-4369-9c23-dbb1243592e9\") " pod="openstack/rabbitmq-server-0" Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.664640 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f25f971f-d811-4369-9c23-dbb1243592e9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f25f971f-d811-4369-9c23-dbb1243592e9\") " pod="openstack/rabbitmq-server-0" Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.664671 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f25f971f-d811-4369-9c23-dbb1243592e9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f25f971f-d811-4369-9c23-dbb1243592e9\") " pod="openstack/rabbitmq-server-0" Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.664678 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hvjg\" (UniqueName: 
\"kubernetes.io/projected/f25f971f-d811-4369-9c23-dbb1243592e9-kube-api-access-7hvjg\") pod \"rabbitmq-server-0\" (UID: \"f25f971f-d811-4369-9c23-dbb1243592e9\") " pod="openstack/rabbitmq-server-0" Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.664748 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f25f971f-d811-4369-9c23-dbb1243592e9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f25f971f-d811-4369-9c23-dbb1243592e9\") " pod="openstack/rabbitmq-server-0" Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.665631 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f25f971f-d811-4369-9c23-dbb1243592e9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f25f971f-d811-4369-9c23-dbb1243592e9\") " pod="openstack/rabbitmq-server-0" Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.669900 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f25f971f-d811-4369-9c23-dbb1243592e9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f25f971f-d811-4369-9c23-dbb1243592e9\") " pod="openstack/rabbitmq-server-0" Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.670843 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f25f971f-d811-4369-9c23-dbb1243592e9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f25f971f-d811-4369-9c23-dbb1243592e9\") " pod="openstack/rabbitmq-server-0" Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.671079 4772 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.671108 4772 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b6635d4a-dbaa-438f-88ee-01cc40d29e9d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b6635d4a-dbaa-438f-88ee-01cc40d29e9d\") pod \"rabbitmq-server-0\" (UID: \"f25f971f-d811-4369-9c23-dbb1243592e9\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/71ec0928dba49f1a5f0a48b1d05d2ea0a558615684f580940f9809492530205a/globalmount\"" pod="openstack/rabbitmq-server-0" Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.674709 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f25f971f-d811-4369-9c23-dbb1243592e9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f25f971f-d811-4369-9c23-dbb1243592e9\") " pod="openstack/rabbitmq-server-0" Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.683443 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hvjg\" (UniqueName: \"kubernetes.io/projected/f25f971f-d811-4369-9c23-dbb1243592e9-kube-api-access-7hvjg\") pod \"rabbitmq-server-0\" (UID: \"f25f971f-d811-4369-9c23-dbb1243592e9\") " pod="openstack/rabbitmq-server-0" Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.716977 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b6635d4a-dbaa-438f-88ee-01cc40d29e9d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b6635d4a-dbaa-438f-88ee-01cc40d29e9d\") pod \"rabbitmq-server-0\" (UID: \"f25f971f-d811-4369-9c23-dbb1243592e9\") " pod="openstack/rabbitmq-server-0" Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.725966 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.770825 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.772696 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.778141 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.801461 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.801841 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-75p7n" Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.802021 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.802113 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.802363 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.869111 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-14885645-cecf-47af-adfb-df57d18543c7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-14885645-cecf-47af-adfb-df57d18543c7\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c4b5659-ec0f-4746-91e1-9bb739a705f8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.869618 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8c4b5659-ec0f-4746-91e1-9bb739a705f8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c4b5659-ec0f-4746-91e1-9bb739a705f8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.869650 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn8s2\" (UniqueName: \"kubernetes.io/projected/8c4b5659-ec0f-4746-91e1-9bb739a705f8-kube-api-access-rn8s2\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c4b5659-ec0f-4746-91e1-9bb739a705f8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.869696 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8c4b5659-ec0f-4746-91e1-9bb739a705f8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c4b5659-ec0f-4746-91e1-9bb739a705f8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.869728 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8c4b5659-ec0f-4746-91e1-9bb739a705f8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c4b5659-ec0f-4746-91e1-9bb739a705f8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.869755 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8c4b5659-ec0f-4746-91e1-9bb739a705f8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c4b5659-ec0f-4746-91e1-9bb739a705f8\") " 
pod="openstack/rabbitmq-cell1-server-0" Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.869776 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8c4b5659-ec0f-4746-91e1-9bb739a705f8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c4b5659-ec0f-4746-91e1-9bb739a705f8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.869807 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8c4b5659-ec0f-4746-91e1-9bb739a705f8-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c4b5659-ec0f-4746-91e1-9bb739a705f8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.869838 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8c4b5659-ec0f-4746-91e1-9bb739a705f8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c4b5659-ec0f-4746-91e1-9bb739a705f8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.879115 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b5456f5-mxp6s" event={"ID":"f37f2c5d-74b6-4160-bdf7-997efb48d4d3","Type":"ContainerStarted","Data":"b116abb1eac92880788343c99ebbe1dfa2f3ec3cd5731df89d780fedab77cd8e"} Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.879183 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b5456f5-mxp6s" event={"ID":"f37f2c5d-74b6-4160-bdf7-997efb48d4d3","Type":"ContainerStarted","Data":"f1f56bae2749e30969cf26218de9c23afb689c2de71e07d1880b866bbf7b02ee"} Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.882518 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98ddfc8f-cpjff" event={"ID":"8edd7b5d-fc65-408e-bf01-556a8e863a13","Type":"ContainerStarted","Data":"6fc014614bdf542c3fe5f048d610b24c27a560e04faa2abed0238a93644326aa"} Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.972143 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8c4b5659-ec0f-4746-91e1-9bb739a705f8-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c4b5659-ec0f-4746-91e1-9bb739a705f8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.972229 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8c4b5659-ec0f-4746-91e1-9bb739a705f8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c4b5659-ec0f-4746-91e1-9bb739a705f8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.972289 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-14885645-cecf-47af-adfb-df57d18543c7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-14885645-cecf-47af-adfb-df57d18543c7\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c4b5659-ec0f-4746-91e1-9bb739a705f8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.972314 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/8c4b5659-ec0f-4746-91e1-9bb739a705f8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c4b5659-ec0f-4746-91e1-9bb739a705f8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.972337 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rn8s2\" (UniqueName: \"kubernetes.io/projected/8c4b5659-ec0f-4746-91e1-9bb739a705f8-kube-api-access-rn8s2\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c4b5659-ec0f-4746-91e1-9bb739a705f8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.972380 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8c4b5659-ec0f-4746-91e1-9bb739a705f8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c4b5659-ec0f-4746-91e1-9bb739a705f8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.972410 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8c4b5659-ec0f-4746-91e1-9bb739a705f8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c4b5659-ec0f-4746-91e1-9bb739a705f8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.972439 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8c4b5659-ec0f-4746-91e1-9bb739a705f8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c4b5659-ec0f-4746-91e1-9bb739a705f8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.972492 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8c4b5659-ec0f-4746-91e1-9bb739a705f8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c4b5659-ec0f-4746-91e1-9bb739a705f8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.973599 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8c4b5659-ec0f-4746-91e1-9bb739a705f8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c4b5659-ec0f-4746-91e1-9bb739a705f8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.974006 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8c4b5659-ec0f-4746-91e1-9bb739a705f8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c4b5659-ec0f-4746-91e1-9bb739a705f8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.974168 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8c4b5659-ec0f-4746-91e1-9bb739a705f8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c4b5659-ec0f-4746-91e1-9bb739a705f8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.975275 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8c4b5659-ec0f-4746-91e1-9bb739a705f8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"8c4b5659-ec0f-4746-91e1-9bb739a705f8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.976483 4772 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.976520 4772 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-14885645-cecf-47af-adfb-df57d18543c7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-14885645-cecf-47af-adfb-df57d18543c7\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c4b5659-ec0f-4746-91e1-9bb739a705f8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e3660f7b661c9b03f43e00ce628cf8cce098e54bfb4198c614f0c94c33743d21/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.978253 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8c4b5659-ec0f-4746-91e1-9bb739a705f8-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c4b5659-ec0f-4746-91e1-9bb739a705f8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.978676 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8c4b5659-ec0f-4746-91e1-9bb739a705f8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c4b5659-ec0f-4746-91e1-9bb739a705f8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.986097 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8c4b5659-ec0f-4746-91e1-9bb739a705f8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c4b5659-ec0f-4746-91e1-9bb739a705f8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 11:56:20 crc kubenswrapper[4772]: I1122 11:56:20.988671 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rn8s2\" (UniqueName: \"kubernetes.io/projected/8c4b5659-ec0f-4746-91e1-9bb739a705f8-kube-api-access-rn8s2\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c4b5659-ec0f-4746-91e1-9bb739a705f8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 11:56:21 crc kubenswrapper[4772]: I1122 11:56:21.005761 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-14885645-cecf-47af-adfb-df57d18543c7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-14885645-cecf-47af-adfb-df57d18543c7\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c4b5659-ec0f-4746-91e1-9bb739a705f8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 11:56:21 crc kubenswrapper[4772]: I1122 11:56:21.136728 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 22 11:56:21 crc kubenswrapper[4772]: I1122 11:56:21.188867 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 22 11:56:21 crc kubenswrapper[4772]: W1122 11:56:21.445364 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf25f971f_d811_4369_9c23_dbb1243592e9.slice/crio-6195bfb6683a239cd32fb564e1c7d5e5b11939633ff65ea0edfb9c26b907acde WatchSource:0}: Error finding container 6195bfb6683a239cd32fb564e1c7d5e5b11939633ff65ea0edfb9c26b907acde: Status 404 returned error can't find the container with id 6195bfb6683a239cd32fb564e1c7d5e5b11939633ff65ea0edfb9c26b907acde Nov 22 11:56:21 crc kubenswrapper[4772]: I1122 11:56:21.625895 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Nov 22 11:56:21 crc kubenswrapper[4772]: I1122 11:56:21.628482 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 22 11:56:21 crc kubenswrapper[4772]: I1122 11:56:21.632542 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 22 11:56:21 crc kubenswrapper[4772]: I1122 11:56:21.632888 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Nov 22 11:56:21 crc kubenswrapper[4772]: I1122 11:56:21.633365 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-qgrcp" Nov 22 11:56:21 crc kubenswrapper[4772]: I1122 11:56:21.633494 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Nov 22 11:56:21 crc kubenswrapper[4772]: I1122 11:56:21.634335 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Nov 22 11:56:21 crc kubenswrapper[4772]: I1122 11:56:21.641765 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Nov 22 11:56:21 crc kubenswrapper[4772]: I1122 11:56:21.657852 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 22 11:56:21 crc kubenswrapper[4772]: I1122 11:56:21.683449 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8b65f40a-7e2a-4139-abbb-a34fba58f467\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8b65f40a-7e2a-4139-abbb-a34fba58f467\") pod \"openstack-galera-0\" (UID: \"9011bc5e-6c7d-4bc3-a426-1f0b0305bf2f\") " pod="openstack/openstack-galera-0" Nov 22 11:56:21 crc kubenswrapper[4772]: I1122 11:56:21.683515 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9011bc5e-6c7d-4bc3-a426-1f0b0305bf2f-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"9011bc5e-6c7d-4bc3-a426-1f0b0305bf2f\") " pod="openstack/openstack-galera-0" Nov 22 11:56:21 crc kubenswrapper[4772]: I1122 11:56:21.683556 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9011bc5e-6c7d-4bc3-a426-1f0b0305bf2f-config-data-default\") pod \"openstack-galera-0\" (UID: \"9011bc5e-6c7d-4bc3-a426-1f0b0305bf2f\") " pod="openstack/openstack-galera-0" Nov 22 11:56:21 crc kubenswrapper[4772]: I1122 11:56:21.683582 4772 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/9011bc5e-6c7d-4bc3-a426-1f0b0305bf2f-secrets\") pod \"openstack-galera-0\" (UID: \"9011bc5e-6c7d-4bc3-a426-1f0b0305bf2f\") " pod="openstack/openstack-galera-0" Nov 22 11:56:21 crc kubenswrapper[4772]: I1122 11:56:21.683619 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9011bc5e-6c7d-4bc3-a426-1f0b0305bf2f-config-data-generated\") pod \"openstack-galera-0\" (UID: \"9011bc5e-6c7d-4bc3-a426-1f0b0305bf2f\") " pod="openstack/openstack-galera-0" Nov 22 11:56:21 crc kubenswrapper[4772]: I1122 11:56:21.683638 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rwfx\" (UniqueName: \"kubernetes.io/projected/9011bc5e-6c7d-4bc3-a426-1f0b0305bf2f-kube-api-access-6rwfx\") pod \"openstack-galera-0\" (UID: \"9011bc5e-6c7d-4bc3-a426-1f0b0305bf2f\") " pod="openstack/openstack-galera-0" Nov 22 11:56:21 crc kubenswrapper[4772]: I1122 11:56:21.683685 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9011bc5e-6c7d-4bc3-a426-1f0b0305bf2f-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"9011bc5e-6c7d-4bc3-a426-1f0b0305bf2f\") " pod="openstack/openstack-galera-0" Nov 22 11:56:21 crc kubenswrapper[4772]: I1122 11:56:21.683705 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9011bc5e-6c7d-4bc3-a426-1f0b0305bf2f-operator-scripts\") pod \"openstack-galera-0\" (UID: \"9011bc5e-6c7d-4bc3-a426-1f0b0305bf2f\") " pod="openstack/openstack-galera-0" Nov 22 11:56:21 crc kubenswrapper[4772]: I1122 11:56:21.683729 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9011bc5e-6c7d-4bc3-a426-1f0b0305bf2f-kolla-config\") pod \"openstack-galera-0\" (UID: \"9011bc5e-6c7d-4bc3-a426-1f0b0305bf2f\") " pod="openstack/openstack-galera-0" Nov 22 11:56:21 crc kubenswrapper[4772]: I1122 11:56:21.785461 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/9011bc5e-6c7d-4bc3-a426-1f0b0305bf2f-secrets\") pod \"openstack-galera-0\" (UID: \"9011bc5e-6c7d-4bc3-a426-1f0b0305bf2f\") " pod="openstack/openstack-galera-0" Nov 22 11:56:21 crc kubenswrapper[4772]: I1122 11:56:21.786003 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9011bc5e-6c7d-4bc3-a426-1f0b0305bf2f-config-data-generated\") pod \"openstack-galera-0\" (UID: \"9011bc5e-6c7d-4bc3-a426-1f0b0305bf2f\") " pod="openstack/openstack-galera-0" Nov 22 11:56:21 crc kubenswrapper[4772]: I1122 11:56:21.786034 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rwfx\" (UniqueName: \"kubernetes.io/projected/9011bc5e-6c7d-4bc3-a426-1f0b0305bf2f-kube-api-access-6rwfx\") pod \"openstack-galera-0\" (UID: \"9011bc5e-6c7d-4bc3-a426-1f0b0305bf2f\") " pod="openstack/openstack-galera-0" Nov 22 11:56:21 crc kubenswrapper[4772]: I1122 11:56:21.786117 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9011bc5e-6c7d-4bc3-a426-1f0b0305bf2f-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"9011bc5e-6c7d-4bc3-a426-1f0b0305bf2f\") " pod="openstack/openstack-galera-0" Nov 22 11:56:21 crc kubenswrapper[4772]: I1122 11:56:21.786152 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9011bc5e-6c7d-4bc3-a426-1f0b0305bf2f-operator-scripts\") pod \"openstack-galera-0\" (UID: \"9011bc5e-6c7d-4bc3-a426-1f0b0305bf2f\") " pod="openstack/openstack-galera-0" Nov 22 11:56:21 crc kubenswrapper[4772]: I1122 11:56:21.786180 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9011bc5e-6c7d-4bc3-a426-1f0b0305bf2f-kolla-config\") pod \"openstack-galera-0\" (UID: \"9011bc5e-6c7d-4bc3-a426-1f0b0305bf2f\") " pod="openstack/openstack-galera-0" Nov 22 11:56:21 crc kubenswrapper[4772]: I1122 11:56:21.786226 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8b65f40a-7e2a-4139-abbb-a34fba58f467\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8b65f40a-7e2a-4139-abbb-a34fba58f467\") pod \"openstack-galera-0\" (UID: \"9011bc5e-6c7d-4bc3-a426-1f0b0305bf2f\") " pod="openstack/openstack-galera-0" Nov 22 11:56:21 crc kubenswrapper[4772]: I1122 11:56:21.786272 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9011bc5e-6c7d-4bc3-a426-1f0b0305bf2f-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"9011bc5e-6c7d-4bc3-a426-1f0b0305bf2f\") " pod="openstack/openstack-galera-0" Nov 22 11:56:21 crc kubenswrapper[4772]: I1122 11:56:21.786322 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9011bc5e-6c7d-4bc3-a426-1f0b0305bf2f-config-data-default\") pod \"openstack-galera-0\" (UID: \"9011bc5e-6c7d-4bc3-a426-1f0b0305bf2f\") " pod="openstack/openstack-galera-0" Nov 22 11:56:21 crc kubenswrapper[4772]: I1122 11:56:21.786807 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9011bc5e-6c7d-4bc3-a426-1f0b0305bf2f-config-data-generated\") pod \"openstack-galera-0\" (UID: \"9011bc5e-6c7d-4bc3-a426-1f0b0305bf2f\") " pod="openstack/openstack-galera-0" Nov 22 11:56:21 crc kubenswrapper[4772]: I1122 11:56:21.787804 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9011bc5e-6c7d-4bc3-a426-1f0b0305bf2f-kolla-config\") pod \"openstack-galera-0\" (UID: \"9011bc5e-6c7d-4bc3-a426-1f0b0305bf2f\") " pod="openstack/openstack-galera-0" Nov 22 11:56:21 crc kubenswrapper[4772]: I1122 11:56:21.787901 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9011bc5e-6c7d-4bc3-a426-1f0b0305bf2f-config-data-default\") pod \"openstack-galera-0\" (UID: \"9011bc5e-6c7d-4bc3-a426-1f0b0305bf2f\") " pod="openstack/openstack-galera-0" Nov 22 11:56:21 crc kubenswrapper[4772]: I1122 11:56:21.789274 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9011bc5e-6c7d-4bc3-a426-1f0b0305bf2f-operator-scripts\") pod \"openstack-galera-0\" (UID: \"9011bc5e-6c7d-4bc3-a426-1f0b0305bf2f\") " 
pod="openstack/openstack-galera-0" Nov 22 11:56:21 crc kubenswrapper[4772]: I1122 11:56:21.790650 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9011bc5e-6c7d-4bc3-a426-1f0b0305bf2f-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"9011bc5e-6c7d-4bc3-a426-1f0b0305bf2f\") " pod="openstack/openstack-galera-0" Nov 22 11:56:21 crc kubenswrapper[4772]: I1122 11:56:21.791349 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/9011bc5e-6c7d-4bc3-a426-1f0b0305bf2f-secrets\") pod \"openstack-galera-0\" (UID: \"9011bc5e-6c7d-4bc3-a426-1f0b0305bf2f\") " pod="openstack/openstack-galera-0" Nov 22 11:56:21 crc kubenswrapper[4772]: I1122 11:56:21.791770 4772 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 22 11:56:21 crc kubenswrapper[4772]: I1122 11:56:21.791805 4772 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8b65f40a-7e2a-4139-abbb-a34fba58f467\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8b65f40a-7e2a-4139-abbb-a34fba58f467\") pod \"openstack-galera-0\" (UID: \"9011bc5e-6c7d-4bc3-a426-1f0b0305bf2f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ccac8600a6ff2e786b691c880d7d517503af29dcb5bb0045fb84c40efef6013a/globalmount\"" pod="openstack/openstack-galera-0" Nov 22 11:56:21 crc kubenswrapper[4772]: I1122 11:56:21.793216 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9011bc5e-6c7d-4bc3-a426-1f0b0305bf2f-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"9011bc5e-6c7d-4bc3-a426-1f0b0305bf2f\") " pod="openstack/openstack-galera-0" Nov 22 11:56:21 crc kubenswrapper[4772]: I1122 11:56:21.807096 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rwfx\" (UniqueName: \"kubernetes.io/projected/9011bc5e-6c7d-4bc3-a426-1f0b0305bf2f-kube-api-access-6rwfx\") pod \"openstack-galera-0\" (UID: \"9011bc5e-6c7d-4bc3-a426-1f0b0305bf2f\") " pod="openstack/openstack-galera-0" Nov 22 11:56:21 crc kubenswrapper[4772]: I1122 11:56:21.848532 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8b65f40a-7e2a-4139-abbb-a34fba58f467\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8b65f40a-7e2a-4139-abbb-a34fba58f467\") pod \"openstack-galera-0\" (UID: \"9011bc5e-6c7d-4bc3-a426-1f0b0305bf2f\") " pod="openstack/openstack-galera-0" Nov 22 11:56:21 crc kubenswrapper[4772]: I1122 11:56:21.896800 4772 generic.go:334] "Generic (PLEG): container finished" podID="f37f2c5d-74b6-4160-bdf7-997efb48d4d3" containerID="b116abb1eac92880788343c99ebbe1dfa2f3ec3cd5731df89d780fedab77cd8e" exitCode=0 Nov 22 11:56:21 crc kubenswrapper[4772]: I1122 11:56:21.896928 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b5456f5-mxp6s" event={"ID":"f37f2c5d-74b6-4160-bdf7-997efb48d4d3","Type":"ContainerDied","Data":"b116abb1eac92880788343c99ebbe1dfa2f3ec3cd5731df89d780fedab77cd8e"} Nov 22 11:56:21 crc kubenswrapper[4772]: I1122 11:56:21.908084 4772 generic.go:334] "Generic (PLEG): container finished" podID="8edd7b5d-fc65-408e-bf01-556a8e863a13" containerID="345db6406d568926246fae56059f8a39ccddfa6c09970fdb9b2a2d8015120390" exitCode=0 Nov 22 11:56:21 crc kubenswrapper[4772]: I1122 11:56:21.908220 4772 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98ddfc8f-cpjff" event={"ID":"8edd7b5d-fc65-408e-bf01-556a8e863a13","Type":"ContainerDied","Data":"345db6406d568926246fae56059f8a39ccddfa6c09970fdb9b2a2d8015120390"} Nov 22 11:56:21 crc kubenswrapper[4772]: I1122 11:56:21.916836 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f25f971f-d811-4369-9c23-dbb1243592e9","Type":"ContainerStarted","Data":"6195bfb6683a239cd32fb564e1c7d5e5b11939633ff65ea0edfb9c26b907acde"} Nov 22 11:56:21 crc kubenswrapper[4772]: I1122 11:56:21.952341 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 22 11:56:22 crc kubenswrapper[4772]: I1122 11:56:22.015863 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 22 11:56:22 crc kubenswrapper[4772]: I1122 11:56:22.172239 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Nov 22 11:56:22 crc kubenswrapper[4772]: I1122 11:56:22.174037 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 22 11:56:22 crc kubenswrapper[4772]: I1122 11:56:22.188547 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 22 11:56:22 crc kubenswrapper[4772]: I1122 11:56:22.191873 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Nov 22 11:56:22 crc kubenswrapper[4772]: I1122 11:56:22.192332 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-8zkdg" Nov 22 11:56:22 crc kubenswrapper[4772]: E1122 11:56:22.287856 4772 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Nov 22 11:56:22 crc kubenswrapper[4772]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/f37f2c5d-74b6-4160-bdf7-997efb48d4d3/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Nov 22 11:56:22 crc kubenswrapper[4772]: > podSandboxID="f1f56bae2749e30969cf26218de9c23afb689c2de71e07d1880b866bbf7b02ee" Nov 22 11:56:22 crc kubenswrapper[4772]: E1122 11:56:22.288349 4772 kuberuntime_manager.go:1274] "Unhandled Error" err=< Nov 22 11:56:22 crc kubenswrapper[4772]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv 
--log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n8chc6h5bh56fh546hb7hc8h67h5bchffh577h697h5b5h5bdh59bhf6hf4h558hb5h578h595h5cchfbh644h59ch7fh654h547h587h5cbh5d5h8fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8b78z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5d7b5456f5-mxp6s_openstack(f37f2c5d-74b6-4160-bdf7-997efb48d4d3): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/f37f2c5d-74b6-4160-bdf7-997efb48d4d3/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Nov 22 11:56:22 crc kubenswrapper[4772]: > logger="UnhandledError" Nov 22 11:56:22 crc kubenswrapper[4772]: E1122 11:56:22.295885 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/f37f2c5d-74b6-4160-bdf7-997efb48d4d3/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-5d7b5456f5-mxp6s" podUID="f37f2c5d-74b6-4160-bdf7-997efb48d4d3" Nov 22 11:56:22 crc kubenswrapper[4772]: I1122 11:56:22.304722 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/dbe0ec0d-a61b-4782-ad96-815ff03ed7de-kolla-config\") pod \"memcached-0\" (UID: \"dbe0ec0d-a61b-4782-ad96-815ff03ed7de\") " pod="openstack/memcached-0" Nov 22 11:56:22 crc kubenswrapper[4772]: I1122 11:56:22.304806 4772 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46h6g\" (UniqueName: \"kubernetes.io/projected/dbe0ec0d-a61b-4782-ad96-815ff03ed7de-kube-api-access-46h6g\") pod \"memcached-0\" (UID: \"dbe0ec0d-a61b-4782-ad96-815ff03ed7de\") " pod="openstack/memcached-0" Nov 22 11:56:22 crc kubenswrapper[4772]: I1122 11:56:22.304840 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dbe0ec0d-a61b-4782-ad96-815ff03ed7de-config-data\") pod \"memcached-0\" (UID: \"dbe0ec0d-a61b-4782-ad96-815ff03ed7de\") " pod="openstack/memcached-0" Nov 22 11:56:22 crc kubenswrapper[4772]: I1122 11:56:22.308574 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 22 11:56:22 crc kubenswrapper[4772]: W1122 11:56:22.318201 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9011bc5e_6c7d_4bc3_a426_1f0b0305bf2f.slice/crio-56910ddd0c19a42d83a535e757e982737a5880d0dc98340cff5b277de1a521ab WatchSource:0}: Error finding container 56910ddd0c19a42d83a535e757e982737a5880d0dc98340cff5b277de1a521ab: Status 404 returned error can't find the container with id 56910ddd0c19a42d83a535e757e982737a5880d0dc98340cff5b277de1a521ab Nov 22 11:56:22 crc kubenswrapper[4772]: I1122 11:56:22.406424 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/dbe0ec0d-a61b-4782-ad96-815ff03ed7de-kolla-config\") pod \"memcached-0\" (UID: \"dbe0ec0d-a61b-4782-ad96-815ff03ed7de\") " pod="openstack/memcached-0" Nov 22 11:56:22 crc kubenswrapper[4772]: I1122 11:56:22.406506 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46h6g\" (UniqueName: \"kubernetes.io/projected/dbe0ec0d-a61b-4782-ad96-815ff03ed7de-kube-api-access-46h6g\") pod \"memcached-0\" (UID: \"dbe0ec0d-a61b-4782-ad96-815ff03ed7de\") " pod="openstack/memcached-0" Nov 22 11:56:22 crc kubenswrapper[4772]: I1122 11:56:22.406549 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dbe0ec0d-a61b-4782-ad96-815ff03ed7de-config-data\") pod \"memcached-0\" (UID: \"dbe0ec0d-a61b-4782-ad96-815ff03ed7de\") " pod="openstack/memcached-0" Nov 22 11:56:22 crc kubenswrapper[4772]: I1122 11:56:22.439253 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dbe0ec0d-a61b-4782-ad96-815ff03ed7de-config-data\") pod \"memcached-0\" (UID: \"dbe0ec0d-a61b-4782-ad96-815ff03ed7de\") " pod="openstack/memcached-0" Nov 22 11:56:22 crc kubenswrapper[4772]: I1122 11:56:22.439472 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/dbe0ec0d-a61b-4782-ad96-815ff03ed7de-kolla-config\") pod \"memcached-0\" (UID: \"dbe0ec0d-a61b-4782-ad96-815ff03ed7de\") " pod="openstack/memcached-0" Nov 22 11:56:22 crc kubenswrapper[4772]: I1122 11:56:22.442571 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46h6g\" (UniqueName: \"kubernetes.io/projected/dbe0ec0d-a61b-4782-ad96-815ff03ed7de-kube-api-access-46h6g\") pod \"memcached-0\" (UID: \"dbe0ec0d-a61b-4782-ad96-815ff03ed7de\") " pod="openstack/memcached-0" Nov 22 11:56:22 crc kubenswrapper[4772]: I1122 11:56:22.495173 4772 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 22 11:56:22 crc kubenswrapper[4772]: I1122 11:56:22.930609 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98ddfc8f-cpjff" event={"ID":"8edd7b5d-fc65-408e-bf01-556a8e863a13","Type":"ContainerStarted","Data":"1ba0c50da0fadf5f9be815faa3819d1f4291d10302c9c0637718292bea23544b"} Nov 22 11:56:22 crc kubenswrapper[4772]: I1122 11:56:22.931630 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-98ddfc8f-cpjff" Nov 22 11:56:22 crc kubenswrapper[4772]: I1122 11:56:22.933019 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9011bc5e-6c7d-4bc3-a426-1f0b0305bf2f","Type":"ContainerStarted","Data":"6ddbe745970e75d1c3f7cc9e7a5fdb989a6bcf110c14396d573dc4a85bd31762"} Nov 22 11:56:22 crc kubenswrapper[4772]: I1122 11:56:22.933122 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9011bc5e-6c7d-4bc3-a426-1f0b0305bf2f","Type":"ContainerStarted","Data":"56910ddd0c19a42d83a535e757e982737a5880d0dc98340cff5b277de1a521ab"} Nov 22 11:56:22 crc kubenswrapper[4772]: I1122 11:56:22.936968 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8c4b5659-ec0f-4746-91e1-9bb739a705f8","Type":"ContainerStarted","Data":"2adaa123cb76646112178b4a9220585e871383452bc69de15f092e5c47204308"} Nov 22 11:56:22 crc kubenswrapper[4772]: I1122 11:56:22.962445 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 22 11:56:22 crc kubenswrapper[4772]: I1122 11:56:22.964694 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-98ddfc8f-cpjff" podStartSLOduration=3.964669009 podStartE2EDuration="3.964669009s" podCreationTimestamp="2025-11-22 11:56:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 11:56:22.950551082 +0000 UTC m=+4703.189995606" watchObservedRunningTime="2025-11-22 11:56:22.964669009 +0000 UTC m=+4703.204113503" Nov 22 11:56:23 crc kubenswrapper[4772]: I1122 11:56:23.340382 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 22 11:56:23 crc kubenswrapper[4772]: I1122 11:56:23.343110 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 22 11:56:23 crc kubenswrapper[4772]: I1122 11:56:23.347169 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-n6h46" Nov 22 11:56:23 crc kubenswrapper[4772]: I1122 11:56:23.348230 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Nov 22 11:56:23 crc kubenswrapper[4772]: I1122 11:56:23.348398 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Nov 22 11:56:23 crc kubenswrapper[4772]: I1122 11:56:23.349576 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Nov 22 11:56:23 crc kubenswrapper[4772]: I1122 11:56:23.375291 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 22 11:56:23 crc kubenswrapper[4772]: I1122 11:56:23.424717 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c115c1ee-d75c-4d15-9c61-e3a17dec5c3a-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"c115c1ee-d75c-4d15-9c61-e3a17dec5c3a\") " pod="openstack/openstack-cell1-galera-0" Nov 22 11:56:23 crc kubenswrapper[4772]: I1122 11:56:23.424778 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gngf2\" (UniqueName: \"kubernetes.io/projected/c115c1ee-d75c-4d15-9c61-e3a17dec5c3a-kube-api-access-gngf2\") pod \"openstack-cell1-galera-0\" (UID: \"c115c1ee-d75c-4d15-9c61-e3a17dec5c3a\") " pod="openstack/openstack-cell1-galera-0" Nov 22 11:56:23 crc kubenswrapper[4772]: I1122 11:56:23.424880 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c115c1ee-d75c-4d15-9c61-e3a17dec5c3a-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"c115c1ee-d75c-4d15-9c61-e3a17dec5c3a\") " pod="openstack/openstack-cell1-galera-0" Nov 22 11:56:23 crc kubenswrapper[4772]: I1122 11:56:23.424937 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c115c1ee-d75c-4d15-9c61-e3a17dec5c3a-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"c115c1ee-d75c-4d15-9c61-e3a17dec5c3a\") " pod="openstack/openstack-cell1-galera-0" Nov 22 11:56:23 crc kubenswrapper[4772]: I1122 11:56:23.424972 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c115c1ee-d75c-4d15-9c61-e3a17dec5c3a-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"c115c1ee-d75c-4d15-9c61-e3a17dec5c3a\") " pod="openstack/openstack-cell1-galera-0" Nov 22 11:56:23 crc kubenswrapper[4772]: I1122 11:56:23.425158 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/c115c1ee-d75c-4d15-9c61-e3a17dec5c3a-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"c115c1ee-d75c-4d15-9c61-e3a17dec5c3a\") " pod="openstack/openstack-cell1-galera-0" Nov 22 11:56:23 crc kubenswrapper[4772]: I1122 11:56:23.425441 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c115c1ee-d75c-4d15-9c61-e3a17dec5c3a-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"c115c1ee-d75c-4d15-9c61-e3a17dec5c3a\") " pod="openstack/openstack-cell1-galera-0" Nov 22 11:56:23 crc kubenswrapper[4772]: I1122 11:56:23.425609 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-279f5e55-5838-4a52-a22b-1b52cb48037c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-279f5e55-5838-4a52-a22b-1b52cb48037c\") pod \"openstack-cell1-galera-0\" (UID: \"c115c1ee-d75c-4d15-9c61-e3a17dec5c3a\") " pod="openstack/openstack-cell1-galera-0" Nov 22 11:56:23 crc kubenswrapper[4772]: I1122 11:56:23.425695 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c115c1ee-d75c-4d15-9c61-e3a17dec5c3a-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"c115c1ee-d75c-4d15-9c61-e3a17dec5c3a\") " pod="openstack/openstack-cell1-galera-0" Nov 22 11:56:23 crc kubenswrapper[4772]: I1122 11:56:23.527794 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c115c1ee-d75c-4d15-9c61-e3a17dec5c3a-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"c115c1ee-d75c-4d15-9c61-e3a17dec5c3a\") " pod="openstack/openstack-cell1-galera-0" Nov 22 11:56:23 crc kubenswrapper[4772]: I1122 11:56:23.527859 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-279f5e55-5838-4a52-a22b-1b52cb48037c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-279f5e55-5838-4a52-a22b-1b52cb48037c\") pod \"openstack-cell1-galera-0\" (UID: \"c115c1ee-d75c-4d15-9c61-e3a17dec5c3a\") " pod="openstack/openstack-cell1-galera-0" Nov 22 11:56:23 crc kubenswrapper[4772]: I1122 11:56:23.527886 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c115c1ee-d75c-4d15-9c61-e3a17dec5c3a-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"c115c1ee-d75c-4d15-9c61-e3a17dec5c3a\") " pod="openstack/openstack-cell1-galera-0" Nov 22 11:56:23 crc kubenswrapper[4772]: I1122 11:56:23.527960 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c115c1ee-d75c-4d15-9c61-e3a17dec5c3a-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"c115c1ee-d75c-4d15-9c61-e3a17dec5c3a\") " pod="openstack/openstack-cell1-galera-0" Nov 22 11:56:23 crc kubenswrapper[4772]: I1122 11:56:23.527996 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gngf2\" (UniqueName: \"kubernetes.io/projected/c115c1ee-d75c-4d15-9c61-e3a17dec5c3a-kube-api-access-gngf2\") pod \"openstack-cell1-galera-0\" (UID: \"c115c1ee-d75c-4d15-9c61-e3a17dec5c3a\") " pod="openstack/openstack-cell1-galera-0" Nov 22 11:56:23 crc kubenswrapper[4772]: I1122 11:56:23.528068 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c115c1ee-d75c-4d15-9c61-e3a17dec5c3a-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"c115c1ee-d75c-4d15-9c61-e3a17dec5c3a\") " pod="openstack/openstack-cell1-galera-0" Nov 22 11:56:23 crc kubenswrapper[4772]: I1122 11:56:23.528114 4772 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c115c1ee-d75c-4d15-9c61-e3a17dec5c3a-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"c115c1ee-d75c-4d15-9c61-e3a17dec5c3a\") " pod="openstack/openstack-cell1-galera-0" Nov 22 11:56:23 crc kubenswrapper[4772]: I1122 11:56:23.528144 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c115c1ee-d75c-4d15-9c61-e3a17dec5c3a-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"c115c1ee-d75c-4d15-9c61-e3a17dec5c3a\") " pod="openstack/openstack-cell1-galera-0" Nov 22 11:56:23 crc kubenswrapper[4772]: I1122 11:56:23.528172 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/c115c1ee-d75c-4d15-9c61-e3a17dec5c3a-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"c115c1ee-d75c-4d15-9c61-e3a17dec5c3a\") " pod="openstack/openstack-cell1-galera-0" Nov 22 11:56:23 crc kubenswrapper[4772]: I1122 11:56:23.528816 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c115c1ee-d75c-4d15-9c61-e3a17dec5c3a-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"c115c1ee-d75c-4d15-9c61-e3a17dec5c3a\") " pod="openstack/openstack-cell1-galera-0" Nov 22 11:56:23 crc kubenswrapper[4772]: I1122 11:56:23.529421 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c115c1ee-d75c-4d15-9c61-e3a17dec5c3a-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"c115c1ee-d75c-4d15-9c61-e3a17dec5c3a\") " pod="openstack/openstack-cell1-galera-0" Nov 22 11:56:23 crc kubenswrapper[4772]: I1122 11:56:23.529631 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c115c1ee-d75c-4d15-9c61-e3a17dec5c3a-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"c115c1ee-d75c-4d15-9c61-e3a17dec5c3a\") " pod="openstack/openstack-cell1-galera-0" Nov 22 11:56:23 crc kubenswrapper[4772]: I1122 11:56:23.529714 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c115c1ee-d75c-4d15-9c61-e3a17dec5c3a-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"c115c1ee-d75c-4d15-9c61-e3a17dec5c3a\") " pod="openstack/openstack-cell1-galera-0" Nov 22 11:56:23 crc kubenswrapper[4772]: I1122 11:56:23.531723 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/c115c1ee-d75c-4d15-9c61-e3a17dec5c3a-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"c115c1ee-d75c-4d15-9c61-e3a17dec5c3a\") " pod="openstack/openstack-cell1-galera-0" Nov 22 11:56:23 crc kubenswrapper[4772]: I1122 11:56:23.532705 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c115c1ee-d75c-4d15-9c61-e3a17dec5c3a-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"c115c1ee-d75c-4d15-9c61-e3a17dec5c3a\") " pod="openstack/openstack-cell1-galera-0" Nov 22 11:56:23 crc kubenswrapper[4772]: I1122 11:56:23.534114 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c115c1ee-d75c-4d15-9c61-e3a17dec5c3a-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: 
\"c115c1ee-d75c-4d15-9c61-e3a17dec5c3a\") " pod="openstack/openstack-cell1-galera-0" Nov 22 11:56:23 crc kubenswrapper[4772]: I1122 11:56:23.534346 4772 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 22 11:56:23 crc kubenswrapper[4772]: I1122 11:56:23.534378 4772 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-279f5e55-5838-4a52-a22b-1b52cb48037c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-279f5e55-5838-4a52-a22b-1b52cb48037c\") pod \"openstack-cell1-galera-0\" (UID: \"c115c1ee-d75c-4d15-9c61-e3a17dec5c3a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/4749894be12cb1639fa404630180a771b98fc13ee3ec37cfa133cf14204e03de/globalmount\"" pod="openstack/openstack-cell1-galera-0" Nov 22 11:56:23 crc kubenswrapper[4772]: W1122 11:56:23.544285 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddbe0ec0d_a61b_4782_ad96_815ff03ed7de.slice/crio-8ef17f9d20108e3f79920c3916f3883541b635753a5732aee97dff692606bd10 WatchSource:0}: Error finding container 8ef17f9d20108e3f79920c3916f3883541b635753a5732aee97dff692606bd10: Status 404 returned error can't find the container with id 8ef17f9d20108e3f79920c3916f3883541b635753a5732aee97dff692606bd10 Nov 22 11:56:23 crc kubenswrapper[4772]: I1122 11:56:23.560741 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gngf2\" (UniqueName: \"kubernetes.io/projected/c115c1ee-d75c-4d15-9c61-e3a17dec5c3a-kube-api-access-gngf2\") pod \"openstack-cell1-galera-0\" (UID: \"c115c1ee-d75c-4d15-9c61-e3a17dec5c3a\") " pod="openstack/openstack-cell1-galera-0" Nov 22 11:56:23 crc kubenswrapper[4772]: I1122 11:56:23.577697 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-279f5e55-5838-4a52-a22b-1b52cb48037c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-279f5e55-5838-4a52-a22b-1b52cb48037c\") pod \"openstack-cell1-galera-0\" (UID: \"c115c1ee-d75c-4d15-9c61-e3a17dec5c3a\") " pod="openstack/openstack-cell1-galera-0" Nov 22 11:56:23 crc kubenswrapper[4772]: I1122 11:56:23.681714 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 22 11:56:23 crc kubenswrapper[4772]: I1122 11:56:23.951993 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b5456f5-mxp6s" event={"ID":"f37f2c5d-74b6-4160-bdf7-997efb48d4d3","Type":"ContainerStarted","Data":"6cc98c2b156b3ad276d90730371d6288b0c4f6291e9c59af8a9ff5c81932322b"} Nov 22 11:56:23 crc kubenswrapper[4772]: I1122 11:56:23.952718 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d7b5456f5-mxp6s" Nov 22 11:56:23 crc kubenswrapper[4772]: I1122 11:56:23.954492 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f25f971f-d811-4369-9c23-dbb1243592e9","Type":"ContainerStarted","Data":"ed56f837c5573efe8021f3c96bb4e4bc5ef07fd10b3da429793e9ed466861c75"} Nov 22 11:56:23 crc kubenswrapper[4772]: I1122 11:56:23.956456 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"dbe0ec0d-a61b-4782-ad96-815ff03ed7de","Type":"ContainerStarted","Data":"0c30d0dc2ca21f10888ee5d53b976cdf5410f0a43c50257d63fbec84a976e014"} Nov 22 11:56:23 crc kubenswrapper[4772]: I1122 11:56:23.956481 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"dbe0ec0d-a61b-4782-ad96-815ff03ed7de","Type":"ContainerStarted","Data":"8ef17f9d20108e3f79920c3916f3883541b635753a5732aee97dff692606bd10"} Nov 22 11:56:23 crc kubenswrapper[4772]: I1122 11:56:23.956901 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Nov 22 11:56:23 crc kubenswrapper[4772]: I1122 11:56:23.974297 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d7b5456f5-mxp6s" podStartSLOduration=4.974271601 podStartE2EDuration="4.974271601s" podCreationTimestamp="2025-11-22 11:56:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 11:56:23.969942214 +0000 UTC m=+4704.209386788" watchObservedRunningTime="2025-11-22 11:56:23.974271601 +0000 UTC m=+4704.213716095" Nov 22 11:56:24 crc kubenswrapper[4772]: I1122 11:56:24.015983 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.015946925 podStartE2EDuration="2.015946925s" podCreationTimestamp="2025-11-22 11:56:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 11:56:23.992221812 +0000 UTC m=+4704.231666306" watchObservedRunningTime="2025-11-22 11:56:24.015946925 +0000 UTC m=+4704.255391419" Nov 22 11:56:24 crc kubenswrapper[4772]: I1122 11:56:24.301402 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 22 11:56:24 crc kubenswrapper[4772]: W1122 11:56:24.307991 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc115c1ee_d75c_4d15_9c61_e3a17dec5c3a.slice/crio-72022ba54d9930be04fef26a24489f643b3b0158b9a7d64e50a5203287d55554 WatchSource:0}: Error finding container 72022ba54d9930be04fef26a24489f643b3b0158b9a7d64e50a5203287d55554: Status 404 returned error can't find the container with id 72022ba54d9930be04fef26a24489f643b3b0158b9a7d64e50a5203287d55554 Nov 22 11:56:24 crc kubenswrapper[4772]: I1122 11:56:24.413384 4772 scope.go:117] "RemoveContainer" 
containerID="409b499b44a59e26c78fa7bc525e1be7b00da254f59f8cd9f76c1032d1e4742d" Nov 22 11:56:24 crc kubenswrapper[4772]: E1122 11:56:24.413683 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 11:56:24 crc kubenswrapper[4772]: I1122 11:56:24.973101 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c115c1ee-d75c-4d15-9c61-e3a17dec5c3a","Type":"ContainerStarted","Data":"cb848389a29b855566292fc0ad145a580031ab6faeff986753488499f5cc71e6"} Nov 22 11:56:24 crc kubenswrapper[4772]: I1122 11:56:24.973723 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c115c1ee-d75c-4d15-9c61-e3a17dec5c3a","Type":"ContainerStarted","Data":"72022ba54d9930be04fef26a24489f643b3b0158b9a7d64e50a5203287d55554"} Nov 22 11:56:24 crc kubenswrapper[4772]: I1122 11:56:24.976700 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8c4b5659-ec0f-4746-91e1-9bb739a705f8","Type":"ContainerStarted","Data":"74e104ae1b1709cb905ee6c27d3bba4e9aa3c6ee05e9468899c78bf2f6ad38f2"} Nov 22 11:56:26 crc kubenswrapper[4772]: I1122 11:56:26.184532 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9srp7" Nov 22 11:56:26 crc kubenswrapper[4772]: I1122 11:56:26.243268 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9srp7" Nov 22 11:56:26 crc kubenswrapper[4772]: I1122 11:56:26.438146 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9srp7"] Nov 22 11:56:27 crc kubenswrapper[4772]: I1122 11:56:27.010314 4772 generic.go:334] "Generic (PLEG): container finished" podID="9011bc5e-6c7d-4bc3-a426-1f0b0305bf2f" containerID="6ddbe745970e75d1c3f7cc9e7a5fdb989a6bcf110c14396d573dc4a85bd31762" exitCode=0 Nov 22 11:56:27 crc kubenswrapper[4772]: I1122 11:56:27.010545 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9011bc5e-6c7d-4bc3-a426-1f0b0305bf2f","Type":"ContainerDied","Data":"6ddbe745970e75d1c3f7cc9e7a5fdb989a6bcf110c14396d573dc4a85bd31762"} Nov 22 11:56:28 crc kubenswrapper[4772]: I1122 11:56:28.019754 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9011bc5e-6c7d-4bc3-a426-1f0b0305bf2f","Type":"ContainerStarted","Data":"1d310159eca4cf24e0d8f59eb764f96fb7431b426cc62b9b1402df02cc3e7b87"} Nov 22 11:56:28 crc kubenswrapper[4772]: I1122 11:56:28.022951 4772 generic.go:334] "Generic (PLEG): container finished" podID="c115c1ee-d75c-4d15-9c61-e3a17dec5c3a" containerID="cb848389a29b855566292fc0ad145a580031ab6faeff986753488499f5cc71e6" exitCode=0 Nov 22 11:56:28 crc kubenswrapper[4772]: I1122 11:56:28.022980 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c115c1ee-d75c-4d15-9c61-e3a17dec5c3a","Type":"ContainerDied","Data":"cb848389a29b855566292fc0ad145a580031ab6faeff986753488499f5cc71e6"} Nov 22 11:56:28 crc kubenswrapper[4772]: I1122 11:56:28.023405 4772 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9srp7" podUID="9cccefc9-d001-4d07-b743-1ed43cb3fcb6" containerName="registry-server" containerID="cri-o://164e421ceee32d167c6c11bff66d5a3adfc513ded065c00f19db6fb00e77a4c7" gracePeriod=2 Nov 22 11:56:28 crc kubenswrapper[4772]: I1122 11:56:28.060379 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=8.060292946 podStartE2EDuration="8.060292946s" podCreationTimestamp="2025-11-22 11:56:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 11:56:28.047701986 +0000 UTC m=+4708.287146490" watchObservedRunningTime="2025-11-22 11:56:28.060292946 +0000 UTC m=+4708.299737450" Nov 22 11:56:28 crc kubenswrapper[4772]: I1122 11:56:28.450464 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9srp7" Nov 22 11:56:28 crc kubenswrapper[4772]: I1122 11:56:28.536357 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cccefc9-d001-4d07-b743-1ed43cb3fcb6-utilities\") pod \"9cccefc9-d001-4d07-b743-1ed43cb3fcb6\" (UID: \"9cccefc9-d001-4d07-b743-1ed43cb3fcb6\") " Nov 22 11:56:28 crc kubenswrapper[4772]: I1122 11:56:28.536589 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hkh8\" (UniqueName: \"kubernetes.io/projected/9cccefc9-d001-4d07-b743-1ed43cb3fcb6-kube-api-access-9hkh8\") pod \"9cccefc9-d001-4d07-b743-1ed43cb3fcb6\" (UID: \"9cccefc9-d001-4d07-b743-1ed43cb3fcb6\") " Nov 22 11:56:28 crc kubenswrapper[4772]: I1122 11:56:28.536626 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cccefc9-d001-4d07-b743-1ed43cb3fcb6-catalog-content\") pod \"9cccefc9-d001-4d07-b743-1ed43cb3fcb6\" (UID: \"9cccefc9-d001-4d07-b743-1ed43cb3fcb6\") " Nov 22 11:56:28 crc kubenswrapper[4772]: I1122 11:56:28.539096 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9cccefc9-d001-4d07-b743-1ed43cb3fcb6-utilities" (OuterVolumeSpecName: "utilities") pod "9cccefc9-d001-4d07-b743-1ed43cb3fcb6" (UID: "9cccefc9-d001-4d07-b743-1ed43cb3fcb6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:56:28 crc kubenswrapper[4772]: I1122 11:56:28.544825 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cccefc9-d001-4d07-b743-1ed43cb3fcb6-kube-api-access-9hkh8" (OuterVolumeSpecName: "kube-api-access-9hkh8") pod "9cccefc9-d001-4d07-b743-1ed43cb3fcb6" (UID: "9cccefc9-d001-4d07-b743-1ed43cb3fcb6"). InnerVolumeSpecName "kube-api-access-9hkh8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:56:28 crc kubenswrapper[4772]: I1122 11:56:28.629872 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9cccefc9-d001-4d07-b743-1ed43cb3fcb6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9cccefc9-d001-4d07-b743-1ed43cb3fcb6" (UID: "9cccefc9-d001-4d07-b743-1ed43cb3fcb6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:56:28 crc kubenswrapper[4772]: I1122 11:56:28.638738 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hkh8\" (UniqueName: \"kubernetes.io/projected/9cccefc9-d001-4d07-b743-1ed43cb3fcb6-kube-api-access-9hkh8\") on node \"crc\" DevicePath \"\"" Nov 22 11:56:28 crc kubenswrapper[4772]: I1122 11:56:28.638788 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cccefc9-d001-4d07-b743-1ed43cb3fcb6-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 11:56:28 crc kubenswrapper[4772]: I1122 11:56:28.638809 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cccefc9-d001-4d07-b743-1ed43cb3fcb6-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 11:56:29 crc kubenswrapper[4772]: I1122 11:56:29.032599 4772 generic.go:334] "Generic (PLEG): container finished" podID="9cccefc9-d001-4d07-b743-1ed43cb3fcb6" containerID="164e421ceee32d167c6c11bff66d5a3adfc513ded065c00f19db6fb00e77a4c7" exitCode=0 Nov 22 11:56:29 crc kubenswrapper[4772]: I1122 11:56:29.033683 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9srp7" event={"ID":"9cccefc9-d001-4d07-b743-1ed43cb3fcb6","Type":"ContainerDied","Data":"164e421ceee32d167c6c11bff66d5a3adfc513ded065c00f19db6fb00e77a4c7"} Nov 22 11:56:29 crc kubenswrapper[4772]: I1122 11:56:29.033720 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9srp7" event={"ID":"9cccefc9-d001-4d07-b743-1ed43cb3fcb6","Type":"ContainerDied","Data":"ce349fab48c674d905bf3ffdc3bf3b7e01f242275c5b9eb9ed42a9abf4c1b68c"} Nov 22 11:56:29 crc kubenswrapper[4772]: I1122 11:56:29.033740 4772 scope.go:117] "RemoveContainer" containerID="164e421ceee32d167c6c11bff66d5a3adfc513ded065c00f19db6fb00e77a4c7" Nov 22 11:56:29 crc kubenswrapper[4772]: I1122 11:56:29.033892 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9srp7" Nov 22 11:56:29 crc kubenswrapper[4772]: I1122 11:56:29.046440 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c115c1ee-d75c-4d15-9c61-e3a17dec5c3a","Type":"ContainerStarted","Data":"e062c12928a595b1a984f50515f3c4b11d97d039ca0b5bd42383260ed58520f7"} Nov 22 11:56:29 crc kubenswrapper[4772]: I1122 11:56:29.071663 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=7.071643841 podStartE2EDuration="7.071643841s" podCreationTimestamp="2025-11-22 11:56:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 11:56:29.066132586 +0000 UTC m=+4709.305577100" watchObservedRunningTime="2025-11-22 11:56:29.071643841 +0000 UTC m=+4709.311088335" Nov 22 11:56:29 crc kubenswrapper[4772]: I1122 11:56:29.093508 4772 scope.go:117] "RemoveContainer" containerID="d9343364b48ae93be545ba3dc285698f105f8b5fb58db012a93423bbde71a4a4" Nov 22 11:56:29 crc kubenswrapper[4772]: I1122 11:56:29.093547 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9srp7"] Nov 22 11:56:29 crc kubenswrapper[4772]: I1122 11:56:29.100062 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9srp7"] Nov 22 11:56:29 crc kubenswrapper[4772]: I1122 11:56:29.119690 4772 scope.go:117] "RemoveContainer" containerID="72c8c2e9fbca8c5d4775c523329ac07adff500ab52cc33052dfb1a9c5594d3d7" Nov 22 11:56:29 crc kubenswrapper[4772]: I1122 11:56:29.168246 4772 scope.go:117] "RemoveContainer" containerID="164e421ceee32d167c6c11bff66d5a3adfc513ded065c00f19db6fb00e77a4c7" Nov 22 11:56:29 crc kubenswrapper[4772]: E1122 11:56:29.168930 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"164e421ceee32d167c6c11bff66d5a3adfc513ded065c00f19db6fb00e77a4c7\": container with ID starting with 164e421ceee32d167c6c11bff66d5a3adfc513ded065c00f19db6fb00e77a4c7 not found: ID does not exist" containerID="164e421ceee32d167c6c11bff66d5a3adfc513ded065c00f19db6fb00e77a4c7" Nov 22 11:56:29 crc kubenswrapper[4772]: I1122 11:56:29.168991 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"164e421ceee32d167c6c11bff66d5a3adfc513ded065c00f19db6fb00e77a4c7"} err="failed to get container status \"164e421ceee32d167c6c11bff66d5a3adfc513ded065c00f19db6fb00e77a4c7\": rpc error: code = NotFound desc = could not find container \"164e421ceee32d167c6c11bff66d5a3adfc513ded065c00f19db6fb00e77a4c7\": container with ID starting with 164e421ceee32d167c6c11bff66d5a3adfc513ded065c00f19db6fb00e77a4c7 not found: ID does not exist" Nov 22 11:56:29 crc kubenswrapper[4772]: I1122 11:56:29.169028 4772 scope.go:117] "RemoveContainer" containerID="d9343364b48ae93be545ba3dc285698f105f8b5fb58db012a93423bbde71a4a4" Nov 22 11:56:29 crc kubenswrapper[4772]: E1122 11:56:29.169736 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9343364b48ae93be545ba3dc285698f105f8b5fb58db012a93423bbde71a4a4\": container with ID starting with d9343364b48ae93be545ba3dc285698f105f8b5fb58db012a93423bbde71a4a4 not found: ID does not exist" containerID="d9343364b48ae93be545ba3dc285698f105f8b5fb58db012a93423bbde71a4a4" Nov 22 11:56:29 crc 
kubenswrapper[4772]: I1122 11:56:29.169765 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9343364b48ae93be545ba3dc285698f105f8b5fb58db012a93423bbde71a4a4"} err="failed to get container status \"d9343364b48ae93be545ba3dc285698f105f8b5fb58db012a93423bbde71a4a4\": rpc error: code = NotFound desc = could not find container \"d9343364b48ae93be545ba3dc285698f105f8b5fb58db012a93423bbde71a4a4\": container with ID starting with d9343364b48ae93be545ba3dc285698f105f8b5fb58db012a93423bbde71a4a4 not found: ID does not exist" Nov 22 11:56:29 crc kubenswrapper[4772]: I1122 11:56:29.169782 4772 scope.go:117] "RemoveContainer" containerID="72c8c2e9fbca8c5d4775c523329ac07adff500ab52cc33052dfb1a9c5594d3d7" Nov 22 11:56:29 crc kubenswrapper[4772]: E1122 11:56:29.170934 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72c8c2e9fbca8c5d4775c523329ac07adff500ab52cc33052dfb1a9c5594d3d7\": container with ID starting with 72c8c2e9fbca8c5d4775c523329ac07adff500ab52cc33052dfb1a9c5594d3d7 not found: ID does not exist" containerID="72c8c2e9fbca8c5d4775c523329ac07adff500ab52cc33052dfb1a9c5594d3d7" Nov 22 11:56:29 crc kubenswrapper[4772]: I1122 11:56:29.170981 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72c8c2e9fbca8c5d4775c523329ac07adff500ab52cc33052dfb1a9c5594d3d7"} err="failed to get container status \"72c8c2e9fbca8c5d4775c523329ac07adff500ab52cc33052dfb1a9c5594d3d7\": rpc error: code = NotFound desc = could not find container \"72c8c2e9fbca8c5d4775c523329ac07adff500ab52cc33052dfb1a9c5594d3d7\": container with ID starting with 72c8c2e9fbca8c5d4775c523329ac07adff500ab52cc33052dfb1a9c5594d3d7 not found: ID does not exist" Nov 22 11:56:29 crc kubenswrapper[4772]: I1122 11:56:29.426281 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cccefc9-d001-4d07-b743-1ed43cb3fcb6" path="/var/lib/kubelet/pods/9cccefc9-d001-4d07-b743-1ed43cb3fcb6/volumes" Nov 22 11:56:29 crc kubenswrapper[4772]: I1122 11:56:29.645364 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5d7b5456f5-mxp6s" Nov 22 11:56:30 crc kubenswrapper[4772]: I1122 11:56:30.022319 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-98ddfc8f-cpjff" Nov 22 11:56:30 crc kubenswrapper[4772]: I1122 11:56:30.098743 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-mxp6s"] Nov 22 11:56:30 crc kubenswrapper[4772]: I1122 11:56:30.099107 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5d7b5456f5-mxp6s" podUID="f37f2c5d-74b6-4160-bdf7-997efb48d4d3" containerName="dnsmasq-dns" containerID="cri-o://6cc98c2b156b3ad276d90730371d6288b0c4f6291e9c59af8a9ff5c81932322b" gracePeriod=10 Nov 22 11:56:30 crc kubenswrapper[4772]: I1122 11:56:30.579209 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d7b5456f5-mxp6s" Nov 22 11:56:30 crc kubenswrapper[4772]: I1122 11:56:30.679251 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f37f2c5d-74b6-4160-bdf7-997efb48d4d3-dns-svc\") pod \"f37f2c5d-74b6-4160-bdf7-997efb48d4d3\" (UID: \"f37f2c5d-74b6-4160-bdf7-997efb48d4d3\") " Nov 22 11:56:30 crc kubenswrapper[4772]: I1122 11:56:30.679357 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8b78z\" (UniqueName: \"kubernetes.io/projected/f37f2c5d-74b6-4160-bdf7-997efb48d4d3-kube-api-access-8b78z\") pod \"f37f2c5d-74b6-4160-bdf7-997efb48d4d3\" (UID: \"f37f2c5d-74b6-4160-bdf7-997efb48d4d3\") " Nov 22 11:56:30 crc kubenswrapper[4772]: I1122 11:56:30.679499 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f37f2c5d-74b6-4160-bdf7-997efb48d4d3-config\") pod \"f37f2c5d-74b6-4160-bdf7-997efb48d4d3\" (UID: \"f37f2c5d-74b6-4160-bdf7-997efb48d4d3\") " Nov 22 11:56:30 crc kubenswrapper[4772]: I1122 11:56:30.701280 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f37f2c5d-74b6-4160-bdf7-997efb48d4d3-kube-api-access-8b78z" (OuterVolumeSpecName: "kube-api-access-8b78z") pod "f37f2c5d-74b6-4160-bdf7-997efb48d4d3" (UID: "f37f2c5d-74b6-4160-bdf7-997efb48d4d3"). InnerVolumeSpecName "kube-api-access-8b78z". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:56:30 crc kubenswrapper[4772]: I1122 11:56:30.723435 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f37f2c5d-74b6-4160-bdf7-997efb48d4d3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f37f2c5d-74b6-4160-bdf7-997efb48d4d3" (UID: "f37f2c5d-74b6-4160-bdf7-997efb48d4d3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 11:56:30 crc kubenswrapper[4772]: I1122 11:56:30.734714 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f37f2c5d-74b6-4160-bdf7-997efb48d4d3-config" (OuterVolumeSpecName: "config") pod "f37f2c5d-74b6-4160-bdf7-997efb48d4d3" (UID: "f37f2c5d-74b6-4160-bdf7-997efb48d4d3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 11:56:30 crc kubenswrapper[4772]: I1122 11:56:30.781875 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f37f2c5d-74b6-4160-bdf7-997efb48d4d3-config\") on node \"crc\" DevicePath \"\"" Nov 22 11:56:30 crc kubenswrapper[4772]: I1122 11:56:30.782340 4772 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f37f2c5d-74b6-4160-bdf7-997efb48d4d3-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 11:56:30 crc kubenswrapper[4772]: I1122 11:56:30.782354 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8b78z\" (UniqueName: \"kubernetes.io/projected/f37f2c5d-74b6-4160-bdf7-997efb48d4d3-kube-api-access-8b78z\") on node \"crc\" DevicePath \"\"" Nov 22 11:56:31 crc kubenswrapper[4772]: I1122 11:56:31.069093 4772 generic.go:334] "Generic (PLEG): container finished" podID="f37f2c5d-74b6-4160-bdf7-997efb48d4d3" containerID="6cc98c2b156b3ad276d90730371d6288b0c4f6291e9c59af8a9ff5c81932322b" exitCode=0 Nov 22 11:56:31 crc kubenswrapper[4772]: I1122 11:56:31.069173 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b5456f5-mxp6s" event={"ID":"f37f2c5d-74b6-4160-bdf7-997efb48d4d3","Type":"ContainerDied","Data":"6cc98c2b156b3ad276d90730371d6288b0c4f6291e9c59af8a9ff5c81932322b"} Nov 22 11:56:31 crc kubenswrapper[4772]: I1122 11:56:31.069263 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d7b5456f5-mxp6s" Nov 22 11:56:31 crc kubenswrapper[4772]: I1122 11:56:31.069308 4772 scope.go:117] "RemoveContainer" containerID="6cc98c2b156b3ad276d90730371d6288b0c4f6291e9c59af8a9ff5c81932322b" Nov 22 11:56:31 crc kubenswrapper[4772]: I1122 11:56:31.069283 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b5456f5-mxp6s" event={"ID":"f37f2c5d-74b6-4160-bdf7-997efb48d4d3","Type":"ContainerDied","Data":"f1f56bae2749e30969cf26218de9c23afb689c2de71e07d1880b866bbf7b02ee"} Nov 22 11:56:31 crc kubenswrapper[4772]: I1122 11:56:31.102283 4772 scope.go:117] "RemoveContainer" containerID="b116abb1eac92880788343c99ebbe1dfa2f3ec3cd5731df89d780fedab77cd8e" Nov 22 11:56:31 crc kubenswrapper[4772]: I1122 11:56:31.117420 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-mxp6s"] Nov 22 11:56:31 crc kubenswrapper[4772]: I1122 11:56:31.125431 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-mxp6s"] Nov 22 11:56:31 crc kubenswrapper[4772]: I1122 11:56:31.134902 4772 scope.go:117] "RemoveContainer" containerID="6cc98c2b156b3ad276d90730371d6288b0c4f6291e9c59af8a9ff5c81932322b" Nov 22 11:56:31 crc kubenswrapper[4772]: E1122 11:56:31.135730 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6cc98c2b156b3ad276d90730371d6288b0c4f6291e9c59af8a9ff5c81932322b\": container with ID starting with 6cc98c2b156b3ad276d90730371d6288b0c4f6291e9c59af8a9ff5c81932322b not found: ID does not exist" containerID="6cc98c2b156b3ad276d90730371d6288b0c4f6291e9c59af8a9ff5c81932322b" Nov 22 11:56:31 crc kubenswrapper[4772]: I1122 11:56:31.135803 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cc98c2b156b3ad276d90730371d6288b0c4f6291e9c59af8a9ff5c81932322b"} err="failed to get container status 
\"6cc98c2b156b3ad276d90730371d6288b0c4f6291e9c59af8a9ff5c81932322b\": rpc error: code = NotFound desc = could not find container \"6cc98c2b156b3ad276d90730371d6288b0c4f6291e9c59af8a9ff5c81932322b\": container with ID starting with 6cc98c2b156b3ad276d90730371d6288b0c4f6291e9c59af8a9ff5c81932322b not found: ID does not exist" Nov 22 11:56:31 crc kubenswrapper[4772]: I1122 11:56:31.135853 4772 scope.go:117] "RemoveContainer" containerID="b116abb1eac92880788343c99ebbe1dfa2f3ec3cd5731df89d780fedab77cd8e" Nov 22 11:56:31 crc kubenswrapper[4772]: E1122 11:56:31.136355 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b116abb1eac92880788343c99ebbe1dfa2f3ec3cd5731df89d780fedab77cd8e\": container with ID starting with b116abb1eac92880788343c99ebbe1dfa2f3ec3cd5731df89d780fedab77cd8e not found: ID does not exist" containerID="b116abb1eac92880788343c99ebbe1dfa2f3ec3cd5731df89d780fedab77cd8e" Nov 22 11:56:31 crc kubenswrapper[4772]: I1122 11:56:31.136411 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b116abb1eac92880788343c99ebbe1dfa2f3ec3cd5731df89d780fedab77cd8e"} err="failed to get container status \"b116abb1eac92880788343c99ebbe1dfa2f3ec3cd5731df89d780fedab77cd8e\": rpc error: code = NotFound desc = could not find container \"b116abb1eac92880788343c99ebbe1dfa2f3ec3cd5731df89d780fedab77cd8e\": container with ID starting with b116abb1eac92880788343c99ebbe1dfa2f3ec3cd5731df89d780fedab77cd8e not found: ID does not exist" Nov 22 11:56:31 crc kubenswrapper[4772]: I1122 11:56:31.436491 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f37f2c5d-74b6-4160-bdf7-997efb48d4d3" path="/var/lib/kubelet/pods/f37f2c5d-74b6-4160-bdf7-997efb48d4d3/volumes" Nov 22 11:56:31 crc kubenswrapper[4772]: I1122 11:56:31.952999 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Nov 22 11:56:31 crc kubenswrapper[4772]: I1122 11:56:31.953073 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Nov 22 11:56:32 crc kubenswrapper[4772]: I1122 11:56:32.497446 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Nov 22 11:56:33 crc kubenswrapper[4772]: I1122 11:56:33.682327 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Nov 22 11:56:33 crc kubenswrapper[4772]: I1122 11:56:33.682591 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Nov 22 11:56:34 crc kubenswrapper[4772]: I1122 11:56:34.026067 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Nov 22 11:56:34 crc kubenswrapper[4772]: I1122 11:56:34.083238 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Nov 22 11:56:35 crc kubenswrapper[4772]: I1122 11:56:35.413714 4772 scope.go:117] "RemoveContainer" containerID="409b499b44a59e26c78fa7bc525e1be7b00da254f59f8cd9f76c1032d1e4742d" Nov 22 11:56:36 crc kubenswrapper[4772]: I1122 11:56:36.053647 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Nov 22 11:56:36 crc kubenswrapper[4772]: I1122 11:56:36.133642 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-wwshd" event={"ID":"2386c238-461f-4956-940f-ac3c26eb052e","Type":"ContainerStarted","Data":"68d813d9b094f770a392ffde3e0c1a4b8d11b83d190b2a08e521c804f76c3e77"} Nov 22 11:56:36 crc kubenswrapper[4772]: I1122 11:56:36.142530 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Nov 22 11:56:55 crc kubenswrapper[4772]: I1122 11:56:55.501316 4772 generic.go:334] "Generic (PLEG): container finished" podID="f25f971f-d811-4369-9c23-dbb1243592e9" containerID="ed56f837c5573efe8021f3c96bb4e4bc5ef07fd10b3da429793e9ed466861c75" exitCode=0 Nov 22 11:56:55 crc kubenswrapper[4772]: I1122 11:56:55.501405 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f25f971f-d811-4369-9c23-dbb1243592e9","Type":"ContainerDied","Data":"ed56f837c5573efe8021f3c96bb4e4bc5ef07fd10b3da429793e9ed466861c75"} Nov 22 11:56:56 crc kubenswrapper[4772]: I1122 11:56:56.515743 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f25f971f-d811-4369-9c23-dbb1243592e9","Type":"ContainerStarted","Data":"73778e5820bc477625b3e0b9f111cda1c7d577b7233878df54d17ca3f8394243"} Nov 22 11:56:56 crc kubenswrapper[4772]: I1122 11:56:56.516557 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 22 11:56:57 crc kubenswrapper[4772]: I1122 11:56:57.530163 4772 generic.go:334] "Generic (PLEG): container finished" podID="8c4b5659-ec0f-4746-91e1-9bb739a705f8" containerID="74e104ae1b1709cb905ee6c27d3bba4e9aa3c6ee05e9468899c78bf2f6ad38f2" exitCode=0 Nov 22 11:56:57 crc kubenswrapper[4772]: I1122 11:56:57.530318 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8c4b5659-ec0f-4746-91e1-9bb739a705f8","Type":"ContainerDied","Data":"74e104ae1b1709cb905ee6c27d3bba4e9aa3c6ee05e9468899c78bf2f6ad38f2"} Nov 22 11:56:57 crc kubenswrapper[4772]: I1122 11:56:57.569667 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.569643693 podStartE2EDuration="38.569643693s" podCreationTimestamp="2025-11-22 11:56:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 11:56:56.558299058 +0000 UTC m=+4736.797743562" watchObservedRunningTime="2025-11-22 11:56:57.569643693 +0000 UTC m=+4737.809088187" Nov 22 11:56:58 crc kubenswrapper[4772]: I1122 11:56:58.540349 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8c4b5659-ec0f-4746-91e1-9bb739a705f8","Type":"ContainerStarted","Data":"7bf4f3f3da35f6d91e034424806dbbed8b579cdf92ababc36cd76573598996ec"} Nov 22 11:56:58 crc kubenswrapper[4772]: I1122 11:56:58.541109 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 22 11:56:58 crc kubenswrapper[4772]: I1122 11:56:58.562654 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=39.562635636 podStartE2EDuration="39.562635636s" podCreationTimestamp="2025-11-22 11:56:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 11:56:58.562099073 +0000 UTC m=+4738.801543567" watchObservedRunningTime="2025-11-22 
11:56:58.562635636 +0000 UTC m=+4738.802080130" Nov 22 11:57:10 crc kubenswrapper[4772]: I1122 11:57:10.730239 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 22 11:57:11 crc kubenswrapper[4772]: I1122 11:57:11.140885 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 22 11:57:15 crc kubenswrapper[4772]: I1122 11:57:15.719660 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-n5jtk"] Nov 22 11:57:15 crc kubenswrapper[4772]: E1122 11:57:15.722170 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f37f2c5d-74b6-4160-bdf7-997efb48d4d3" containerName="init" Nov 22 11:57:15 crc kubenswrapper[4772]: I1122 11:57:15.722311 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="f37f2c5d-74b6-4160-bdf7-997efb48d4d3" containerName="init" Nov 22 11:57:15 crc kubenswrapper[4772]: E1122 11:57:15.722407 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f37f2c5d-74b6-4160-bdf7-997efb48d4d3" containerName="dnsmasq-dns" Nov 22 11:57:15 crc kubenswrapper[4772]: I1122 11:57:15.722491 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="f37f2c5d-74b6-4160-bdf7-997efb48d4d3" containerName="dnsmasq-dns" Nov 22 11:57:15 crc kubenswrapper[4772]: E1122 11:57:15.722565 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cccefc9-d001-4d07-b743-1ed43cb3fcb6" containerName="extract-content" Nov 22 11:57:15 crc kubenswrapper[4772]: I1122 11:57:15.722644 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cccefc9-d001-4d07-b743-1ed43cb3fcb6" containerName="extract-content" Nov 22 11:57:15 crc kubenswrapper[4772]: E1122 11:57:15.722730 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cccefc9-d001-4d07-b743-1ed43cb3fcb6" containerName="extract-utilities" Nov 22 11:57:15 crc kubenswrapper[4772]: I1122 11:57:15.722815 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cccefc9-d001-4d07-b743-1ed43cb3fcb6" containerName="extract-utilities" Nov 22 11:57:15 crc kubenswrapper[4772]: E1122 11:57:15.722904 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cccefc9-d001-4d07-b743-1ed43cb3fcb6" containerName="registry-server" Nov 22 11:57:15 crc kubenswrapper[4772]: I1122 11:57:15.722983 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cccefc9-d001-4d07-b743-1ed43cb3fcb6" containerName="registry-server" Nov 22 11:57:15 crc kubenswrapper[4772]: I1122 11:57:15.723316 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="f37f2c5d-74b6-4160-bdf7-997efb48d4d3" containerName="dnsmasq-dns" Nov 22 11:57:15 crc kubenswrapper[4772]: I1122 11:57:15.723424 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cccefc9-d001-4d07-b743-1ed43cb3fcb6" containerName="registry-server" Nov 22 11:57:15 crc kubenswrapper[4772]: I1122 11:57:15.724682 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b7946d7b9-n5jtk" Nov 22 11:57:15 crc kubenswrapper[4772]: I1122 11:57:15.733865 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-n5jtk"] Nov 22 11:57:15 crc kubenswrapper[4772]: I1122 11:57:15.877531 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c82f4e8a-aee4-4ab6-b5e1-9f3930393cfe-dns-svc\") pod \"dnsmasq-dns-5b7946d7b9-n5jtk\" (UID: \"c82f4e8a-aee4-4ab6-b5e1-9f3930393cfe\") " pod="openstack/dnsmasq-dns-5b7946d7b9-n5jtk" Nov 22 11:57:15 crc kubenswrapper[4772]: I1122 11:57:15.877658 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c82f4e8a-aee4-4ab6-b5e1-9f3930393cfe-config\") pod \"dnsmasq-dns-5b7946d7b9-n5jtk\" (UID: \"c82f4e8a-aee4-4ab6-b5e1-9f3930393cfe\") " pod="openstack/dnsmasq-dns-5b7946d7b9-n5jtk" Nov 22 11:57:15 crc kubenswrapper[4772]: I1122 11:57:15.877713 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzn6m\" (UniqueName: \"kubernetes.io/projected/c82f4e8a-aee4-4ab6-b5e1-9f3930393cfe-kube-api-access-tzn6m\") pod \"dnsmasq-dns-5b7946d7b9-n5jtk\" (UID: \"c82f4e8a-aee4-4ab6-b5e1-9f3930393cfe\") " pod="openstack/dnsmasq-dns-5b7946d7b9-n5jtk" Nov 22 11:57:15 crc kubenswrapper[4772]: I1122 11:57:15.979229 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c82f4e8a-aee4-4ab6-b5e1-9f3930393cfe-config\") pod \"dnsmasq-dns-5b7946d7b9-n5jtk\" (UID: \"c82f4e8a-aee4-4ab6-b5e1-9f3930393cfe\") " pod="openstack/dnsmasq-dns-5b7946d7b9-n5jtk" Nov 22 11:57:15 crc kubenswrapper[4772]: I1122 11:57:15.979310 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzn6m\" (UniqueName: \"kubernetes.io/projected/c82f4e8a-aee4-4ab6-b5e1-9f3930393cfe-kube-api-access-tzn6m\") pod \"dnsmasq-dns-5b7946d7b9-n5jtk\" (UID: \"c82f4e8a-aee4-4ab6-b5e1-9f3930393cfe\") " pod="openstack/dnsmasq-dns-5b7946d7b9-n5jtk" Nov 22 11:57:15 crc kubenswrapper[4772]: I1122 11:57:15.979374 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c82f4e8a-aee4-4ab6-b5e1-9f3930393cfe-dns-svc\") pod \"dnsmasq-dns-5b7946d7b9-n5jtk\" (UID: \"c82f4e8a-aee4-4ab6-b5e1-9f3930393cfe\") " pod="openstack/dnsmasq-dns-5b7946d7b9-n5jtk" Nov 22 11:57:15 crc kubenswrapper[4772]: I1122 11:57:15.980624 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c82f4e8a-aee4-4ab6-b5e1-9f3930393cfe-config\") pod \"dnsmasq-dns-5b7946d7b9-n5jtk\" (UID: \"c82f4e8a-aee4-4ab6-b5e1-9f3930393cfe\") " pod="openstack/dnsmasq-dns-5b7946d7b9-n5jtk" Nov 22 11:57:15 crc kubenswrapper[4772]: I1122 11:57:15.980691 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c82f4e8a-aee4-4ab6-b5e1-9f3930393cfe-dns-svc\") pod \"dnsmasq-dns-5b7946d7b9-n5jtk\" (UID: \"c82f4e8a-aee4-4ab6-b5e1-9f3930393cfe\") " pod="openstack/dnsmasq-dns-5b7946d7b9-n5jtk" Nov 22 11:57:16 crc kubenswrapper[4772]: I1122 11:57:16.003640 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzn6m\" (UniqueName: 
\"kubernetes.io/projected/c82f4e8a-aee4-4ab6-b5e1-9f3930393cfe-kube-api-access-tzn6m\") pod \"dnsmasq-dns-5b7946d7b9-n5jtk\" (UID: \"c82f4e8a-aee4-4ab6-b5e1-9f3930393cfe\") " pod="openstack/dnsmasq-dns-5b7946d7b9-n5jtk" Nov 22 11:57:16 crc kubenswrapper[4772]: I1122 11:57:16.047615 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b7946d7b9-n5jtk" Nov 22 11:57:16 crc kubenswrapper[4772]: I1122 11:57:16.291173 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-n5jtk"] Nov 22 11:57:16 crc kubenswrapper[4772]: W1122 11:57:16.299591 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc82f4e8a_aee4_4ab6_b5e1_9f3930393cfe.slice/crio-7408adec87f22cf8bb4f9e76ce0aa809606cf71105d326d90910c7cf4fadfe6e WatchSource:0}: Error finding container 7408adec87f22cf8bb4f9e76ce0aa809606cf71105d326d90910c7cf4fadfe6e: Status 404 returned error can't find the container with id 7408adec87f22cf8bb4f9e76ce0aa809606cf71105d326d90910c7cf4fadfe6e Nov 22 11:57:16 crc kubenswrapper[4772]: I1122 11:57:16.648199 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 22 11:57:16 crc kubenswrapper[4772]: I1122 11:57:16.714545 4772 generic.go:334] "Generic (PLEG): container finished" podID="c82f4e8a-aee4-4ab6-b5e1-9f3930393cfe" containerID="5312876a6688755a3a0baf861b9d9522b30c9a2dc6061a9d92de84b33370389a" exitCode=0 Nov 22 11:57:16 crc kubenswrapper[4772]: I1122 11:57:16.714601 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7946d7b9-n5jtk" event={"ID":"c82f4e8a-aee4-4ab6-b5e1-9f3930393cfe","Type":"ContainerDied","Data":"5312876a6688755a3a0baf861b9d9522b30c9a2dc6061a9d92de84b33370389a"} Nov 22 11:57:16 crc kubenswrapper[4772]: I1122 11:57:16.714634 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7946d7b9-n5jtk" event={"ID":"c82f4e8a-aee4-4ab6-b5e1-9f3930393cfe","Type":"ContainerStarted","Data":"7408adec87f22cf8bb4f9e76ce0aa809606cf71105d326d90910c7cf4fadfe6e"} Nov 22 11:57:17 crc kubenswrapper[4772]: I1122 11:57:17.600393 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 22 11:57:17 crc kubenswrapper[4772]: I1122 11:57:17.723468 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7946d7b9-n5jtk" event={"ID":"c82f4e8a-aee4-4ab6-b5e1-9f3930393cfe","Type":"ContainerStarted","Data":"380fb89ad5dffb56956686f16e51021d837fbcc81a91edb1671f3b67a9667403"} Nov 22 11:57:17 crc kubenswrapper[4772]: I1122 11:57:17.724136 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b7946d7b9-n5jtk" Nov 22 11:57:17 crc kubenswrapper[4772]: I1122 11:57:17.747271 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b7946d7b9-n5jtk" podStartSLOduration=2.747247868 podStartE2EDuration="2.747247868s" podCreationTimestamp="2025-11-22 11:57:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 11:57:17.745402713 +0000 UTC m=+4757.984847217" watchObservedRunningTime="2025-11-22 11:57:17.747247868 +0000 UTC m=+4757.986692362" Nov 22 11:57:18 crc kubenswrapper[4772]: I1122 11:57:18.483970 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" 
podUID="f25f971f-d811-4369-9c23-dbb1243592e9" containerName="rabbitmq" containerID="cri-o://73778e5820bc477625b3e0b9f111cda1c7d577b7233878df54d17ca3f8394243" gracePeriod=604799 Nov 22 11:57:19 crc kubenswrapper[4772]: I1122 11:57:19.435522 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="8c4b5659-ec0f-4746-91e1-9bb739a705f8" containerName="rabbitmq" containerID="cri-o://7bf4f3f3da35f6d91e034424806dbbed8b579cdf92ababc36cd76573598996ec" gracePeriod=604799 Nov 22 11:57:20 crc kubenswrapper[4772]: I1122 11:57:20.728517 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="f25f971f-d811-4369-9c23-dbb1243592e9" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.241:5672: connect: connection refused" Nov 22 11:57:21 crc kubenswrapper[4772]: I1122 11:57:21.050258 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b7946d7b9-n5jtk" Nov 22 11:57:21 crc kubenswrapper[4772]: I1122 11:57:21.137845 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="8c4b5659-ec0f-4746-91e1-9bb739a705f8" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.242:5672: connect: connection refused" Nov 22 11:57:21 crc kubenswrapper[4772]: I1122 11:57:21.151750 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-cpjff"] Nov 22 11:57:21 crc kubenswrapper[4772]: I1122 11:57:21.152826 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-98ddfc8f-cpjff" podUID="8edd7b5d-fc65-408e-bf01-556a8e863a13" containerName="dnsmasq-dns" containerID="cri-o://1ba0c50da0fadf5f9be815faa3819d1f4291d10302c9c0637718292bea23544b" gracePeriod=10 Nov 22 11:57:21 crc kubenswrapper[4772]: I1122 11:57:21.581887 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-98ddfc8f-cpjff" Nov 22 11:57:21 crc kubenswrapper[4772]: I1122 11:57:21.685933 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8edd7b5d-fc65-408e-bf01-556a8e863a13-dns-svc\") pod \"8edd7b5d-fc65-408e-bf01-556a8e863a13\" (UID: \"8edd7b5d-fc65-408e-bf01-556a8e863a13\") " Nov 22 11:57:21 crc kubenswrapper[4772]: I1122 11:57:21.686020 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jg85x\" (UniqueName: \"kubernetes.io/projected/8edd7b5d-fc65-408e-bf01-556a8e863a13-kube-api-access-jg85x\") pod \"8edd7b5d-fc65-408e-bf01-556a8e863a13\" (UID: \"8edd7b5d-fc65-408e-bf01-556a8e863a13\") " Nov 22 11:57:21 crc kubenswrapper[4772]: I1122 11:57:21.686102 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8edd7b5d-fc65-408e-bf01-556a8e863a13-config\") pod \"8edd7b5d-fc65-408e-bf01-556a8e863a13\" (UID: \"8edd7b5d-fc65-408e-bf01-556a8e863a13\") " Nov 22 11:57:21 crc kubenswrapper[4772]: I1122 11:57:21.705591 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8edd7b5d-fc65-408e-bf01-556a8e863a13-kube-api-access-jg85x" (OuterVolumeSpecName: "kube-api-access-jg85x") pod "8edd7b5d-fc65-408e-bf01-556a8e863a13" (UID: "8edd7b5d-fc65-408e-bf01-556a8e863a13"). InnerVolumeSpecName "kube-api-access-jg85x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:57:21 crc kubenswrapper[4772]: I1122 11:57:21.740995 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8edd7b5d-fc65-408e-bf01-556a8e863a13-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8edd7b5d-fc65-408e-bf01-556a8e863a13" (UID: "8edd7b5d-fc65-408e-bf01-556a8e863a13"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 11:57:21 crc kubenswrapper[4772]: I1122 11:57:21.776112 4772 generic.go:334] "Generic (PLEG): container finished" podID="8edd7b5d-fc65-408e-bf01-556a8e863a13" containerID="1ba0c50da0fadf5f9be815faa3819d1f4291d10302c9c0637718292bea23544b" exitCode=0 Nov 22 11:57:21 crc kubenswrapper[4772]: I1122 11:57:21.776178 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98ddfc8f-cpjff" event={"ID":"8edd7b5d-fc65-408e-bf01-556a8e863a13","Type":"ContainerDied","Data":"1ba0c50da0fadf5f9be815faa3819d1f4291d10302c9c0637718292bea23544b"} Nov 22 11:57:21 crc kubenswrapper[4772]: I1122 11:57:21.776216 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98ddfc8f-cpjff" event={"ID":"8edd7b5d-fc65-408e-bf01-556a8e863a13","Type":"ContainerDied","Data":"6fc014614bdf542c3fe5f048d610b24c27a560e04faa2abed0238a93644326aa"} Nov 22 11:57:21 crc kubenswrapper[4772]: I1122 11:57:21.776245 4772 scope.go:117] "RemoveContainer" containerID="1ba0c50da0fadf5f9be815faa3819d1f4291d10302c9c0637718292bea23544b" Nov 22 11:57:21 crc kubenswrapper[4772]: I1122 11:57:21.776443 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-98ddfc8f-cpjff" Nov 22 11:57:21 crc kubenswrapper[4772]: I1122 11:57:21.788280 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jg85x\" (UniqueName: \"kubernetes.io/projected/8edd7b5d-fc65-408e-bf01-556a8e863a13-kube-api-access-jg85x\") on node \"crc\" DevicePath \"\"" Nov 22 11:57:21 crc kubenswrapper[4772]: I1122 11:57:21.788323 4772 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8edd7b5d-fc65-408e-bf01-556a8e863a13-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 11:57:21 crc kubenswrapper[4772]: I1122 11:57:21.793862 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8edd7b5d-fc65-408e-bf01-556a8e863a13-config" (OuterVolumeSpecName: "config") pod "8edd7b5d-fc65-408e-bf01-556a8e863a13" (UID: "8edd7b5d-fc65-408e-bf01-556a8e863a13"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 11:57:21 crc kubenswrapper[4772]: I1122 11:57:21.814281 4772 scope.go:117] "RemoveContainer" containerID="345db6406d568926246fae56059f8a39ccddfa6c09970fdb9b2a2d8015120390" Nov 22 11:57:21 crc kubenswrapper[4772]: I1122 11:57:21.847269 4772 scope.go:117] "RemoveContainer" containerID="1ba0c50da0fadf5f9be815faa3819d1f4291d10302c9c0637718292bea23544b" Nov 22 11:57:21 crc kubenswrapper[4772]: E1122 11:57:21.851203 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ba0c50da0fadf5f9be815faa3819d1f4291d10302c9c0637718292bea23544b\": container with ID starting with 1ba0c50da0fadf5f9be815faa3819d1f4291d10302c9c0637718292bea23544b not found: ID does not exist" containerID="1ba0c50da0fadf5f9be815faa3819d1f4291d10302c9c0637718292bea23544b" Nov 22 11:57:21 crc kubenswrapper[4772]: I1122 11:57:21.851258 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ba0c50da0fadf5f9be815faa3819d1f4291d10302c9c0637718292bea23544b"} err="failed to get container status \"1ba0c50da0fadf5f9be815faa3819d1f4291d10302c9c0637718292bea23544b\": rpc error: code = NotFound desc = could not find container \"1ba0c50da0fadf5f9be815faa3819d1f4291d10302c9c0637718292bea23544b\": container with ID starting with 1ba0c50da0fadf5f9be815faa3819d1f4291d10302c9c0637718292bea23544b not found: ID does not exist" Nov 22 11:57:21 crc kubenswrapper[4772]: I1122 11:57:21.851298 4772 scope.go:117] "RemoveContainer" containerID="345db6406d568926246fae56059f8a39ccddfa6c09970fdb9b2a2d8015120390" Nov 22 11:57:21 crc kubenswrapper[4772]: E1122 11:57:21.851689 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"345db6406d568926246fae56059f8a39ccddfa6c09970fdb9b2a2d8015120390\": container with ID starting with 345db6406d568926246fae56059f8a39ccddfa6c09970fdb9b2a2d8015120390 not found: ID does not exist" containerID="345db6406d568926246fae56059f8a39ccddfa6c09970fdb9b2a2d8015120390" Nov 22 11:57:21 crc kubenswrapper[4772]: I1122 11:57:21.851739 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"345db6406d568926246fae56059f8a39ccddfa6c09970fdb9b2a2d8015120390"} err="failed to get container status \"345db6406d568926246fae56059f8a39ccddfa6c09970fdb9b2a2d8015120390\": rpc error: code = NotFound desc = could not find container \"345db6406d568926246fae56059f8a39ccddfa6c09970fdb9b2a2d8015120390\": container with ID starting with 345db6406d568926246fae56059f8a39ccddfa6c09970fdb9b2a2d8015120390 not found: ID does not exist" Nov 22 11:57:21 crc kubenswrapper[4772]: I1122 11:57:21.889464 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8edd7b5d-fc65-408e-bf01-556a8e863a13-config\") on node \"crc\" DevicePath \"\"" Nov 22 11:57:22 crc kubenswrapper[4772]: I1122 11:57:22.106425 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-cpjff"] Nov 22 11:57:22 crc kubenswrapper[4772]: I1122 11:57:22.112439 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-cpjff"] Nov 22 11:57:23 crc kubenswrapper[4772]: I1122 11:57:23.429437 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8edd7b5d-fc65-408e-bf01-556a8e863a13" path="/var/lib/kubelet/pods/8edd7b5d-fc65-408e-bf01-556a8e863a13/volumes" Nov 22 11:57:24 crc 
kubenswrapper[4772]: I1122 11:57:24.821120 4772 generic.go:334] "Generic (PLEG): container finished" podID="f25f971f-d811-4369-9c23-dbb1243592e9" containerID="73778e5820bc477625b3e0b9f111cda1c7d577b7233878df54d17ca3f8394243" exitCode=0 Nov 22 11:57:24 crc kubenswrapper[4772]: I1122 11:57:24.821255 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f25f971f-d811-4369-9c23-dbb1243592e9","Type":"ContainerDied","Data":"73778e5820bc477625b3e0b9f111cda1c7d577b7233878df54d17ca3f8394243"} Nov 22 11:57:25 crc kubenswrapper[4772]: I1122 11:57:25.103253 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 22 11:57:25 crc kubenswrapper[4772]: I1122 11:57:25.149612 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f25f971f-d811-4369-9c23-dbb1243592e9-pod-info\") pod \"f25f971f-d811-4369-9c23-dbb1243592e9\" (UID: \"f25f971f-d811-4369-9c23-dbb1243592e9\") " Nov 22 11:57:25 crc kubenswrapper[4772]: I1122 11:57:25.149714 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f25f971f-d811-4369-9c23-dbb1243592e9-rabbitmq-erlang-cookie\") pod \"f25f971f-d811-4369-9c23-dbb1243592e9\" (UID: \"f25f971f-d811-4369-9c23-dbb1243592e9\") " Nov 22 11:57:25 crc kubenswrapper[4772]: I1122 11:57:25.149755 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f25f971f-d811-4369-9c23-dbb1243592e9-rabbitmq-plugins\") pod \"f25f971f-d811-4369-9c23-dbb1243592e9\" (UID: \"f25f971f-d811-4369-9c23-dbb1243592e9\") " Nov 22 11:57:25 crc kubenswrapper[4772]: I1122 11:57:25.149778 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f25f971f-d811-4369-9c23-dbb1243592e9-rabbitmq-confd\") pod \"f25f971f-d811-4369-9c23-dbb1243592e9\" (UID: \"f25f971f-d811-4369-9c23-dbb1243592e9\") " Nov 22 11:57:25 crc kubenswrapper[4772]: I1122 11:57:25.149810 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hvjg\" (UniqueName: \"kubernetes.io/projected/f25f971f-d811-4369-9c23-dbb1243592e9-kube-api-access-7hvjg\") pod \"f25f971f-d811-4369-9c23-dbb1243592e9\" (UID: \"f25f971f-d811-4369-9c23-dbb1243592e9\") " Nov 22 11:57:25 crc kubenswrapper[4772]: I1122 11:57:25.149848 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f25f971f-d811-4369-9c23-dbb1243592e9-server-conf\") pod \"f25f971f-d811-4369-9c23-dbb1243592e9\" (UID: \"f25f971f-d811-4369-9c23-dbb1243592e9\") " Nov 22 11:57:25 crc kubenswrapper[4772]: I1122 11:57:25.149893 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f25f971f-d811-4369-9c23-dbb1243592e9-plugins-conf\") pod \"f25f971f-d811-4369-9c23-dbb1243592e9\" (UID: \"f25f971f-d811-4369-9c23-dbb1243592e9\") " Nov 22 11:57:25 crc kubenswrapper[4772]: I1122 11:57:25.149950 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f25f971f-d811-4369-9c23-dbb1243592e9-erlang-cookie-secret\") pod \"f25f971f-d811-4369-9c23-dbb1243592e9\" (UID: 
\"f25f971f-d811-4369-9c23-dbb1243592e9\") " Nov 22 11:57:25 crc kubenswrapper[4772]: I1122 11:57:25.150098 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b6635d4a-dbaa-438f-88ee-01cc40d29e9d\") pod \"f25f971f-d811-4369-9c23-dbb1243592e9\" (UID: \"f25f971f-d811-4369-9c23-dbb1243592e9\") " Nov 22 11:57:25 crc kubenswrapper[4772]: I1122 11:57:25.151616 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f25f971f-d811-4369-9c23-dbb1243592e9-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "f25f971f-d811-4369-9c23-dbb1243592e9" (UID: "f25f971f-d811-4369-9c23-dbb1243592e9"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:57:25 crc kubenswrapper[4772]: I1122 11:57:25.151891 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f25f971f-d811-4369-9c23-dbb1243592e9-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "f25f971f-d811-4369-9c23-dbb1243592e9" (UID: "f25f971f-d811-4369-9c23-dbb1243592e9"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:57:25 crc kubenswrapper[4772]: I1122 11:57:25.152496 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f25f971f-d811-4369-9c23-dbb1243592e9-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "f25f971f-d811-4369-9c23-dbb1243592e9" (UID: "f25f971f-d811-4369-9c23-dbb1243592e9"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 11:57:25 crc kubenswrapper[4772]: I1122 11:57:25.167709 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f25f971f-d811-4369-9c23-dbb1243592e9-kube-api-access-7hvjg" (OuterVolumeSpecName: "kube-api-access-7hvjg") pod "f25f971f-d811-4369-9c23-dbb1243592e9" (UID: "f25f971f-d811-4369-9c23-dbb1243592e9"). InnerVolumeSpecName "kube-api-access-7hvjg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:57:25 crc kubenswrapper[4772]: I1122 11:57:25.175272 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f25f971f-d811-4369-9c23-dbb1243592e9-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "f25f971f-d811-4369-9c23-dbb1243592e9" (UID: "f25f971f-d811-4369-9c23-dbb1243592e9"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:57:25 crc kubenswrapper[4772]: I1122 11:57:25.181212 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/f25f971f-d811-4369-9c23-dbb1243592e9-pod-info" (OuterVolumeSpecName: "pod-info") pod "f25f971f-d811-4369-9c23-dbb1243592e9" (UID: "f25f971f-d811-4369-9c23-dbb1243592e9"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 22 11:57:25 crc kubenswrapper[4772]: I1122 11:57:25.191620 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b6635d4a-dbaa-438f-88ee-01cc40d29e9d" (OuterVolumeSpecName: "persistence") pod "f25f971f-d811-4369-9c23-dbb1243592e9" (UID: "f25f971f-d811-4369-9c23-dbb1243592e9"). InnerVolumeSpecName "pvc-b6635d4a-dbaa-438f-88ee-01cc40d29e9d". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 22 11:57:25 crc kubenswrapper[4772]: I1122 11:57:25.193539 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f25f971f-d811-4369-9c23-dbb1243592e9-server-conf" (OuterVolumeSpecName: "server-conf") pod "f25f971f-d811-4369-9c23-dbb1243592e9" (UID: "f25f971f-d811-4369-9c23-dbb1243592e9"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 11:57:25 crc kubenswrapper[4772]: I1122 11:57:25.253831 4772 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f25f971f-d811-4369-9c23-dbb1243592e9-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 22 11:57:25 crc kubenswrapper[4772]: I1122 11:57:25.253869 4772 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f25f971f-d811-4369-9c23-dbb1243592e9-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 22 11:57:25 crc kubenswrapper[4772]: I1122 11:57:25.253879 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hvjg\" (UniqueName: \"kubernetes.io/projected/f25f971f-d811-4369-9c23-dbb1243592e9-kube-api-access-7hvjg\") on node \"crc\" DevicePath \"\"" Nov 22 11:57:25 crc kubenswrapper[4772]: I1122 11:57:25.253890 4772 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f25f971f-d811-4369-9c23-dbb1243592e9-server-conf\") on node \"crc\" DevicePath \"\"" Nov 22 11:57:25 crc kubenswrapper[4772]: I1122 11:57:25.253901 4772 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f25f971f-d811-4369-9c23-dbb1243592e9-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 22 11:57:25 crc kubenswrapper[4772]: I1122 11:57:25.253911 4772 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f25f971f-d811-4369-9c23-dbb1243592e9-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 22 11:57:25 crc kubenswrapper[4772]: I1122 11:57:25.253958 4772 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-b6635d4a-dbaa-438f-88ee-01cc40d29e9d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b6635d4a-dbaa-438f-88ee-01cc40d29e9d\") on node \"crc\" " Nov 22 11:57:25 crc kubenswrapper[4772]: I1122 11:57:25.253972 4772 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f25f971f-d811-4369-9c23-dbb1243592e9-pod-info\") on node \"crc\" DevicePath \"\"" Nov 22 11:57:25 crc kubenswrapper[4772]: I1122 11:57:25.258729 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f25f971f-d811-4369-9c23-dbb1243592e9-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "f25f971f-d811-4369-9c23-dbb1243592e9" (UID: "f25f971f-d811-4369-9c23-dbb1243592e9"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:57:25 crc kubenswrapper[4772]: I1122 11:57:25.277023 4772 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Nov 22 11:57:25 crc kubenswrapper[4772]: I1122 11:57:25.277274 4772 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-b6635d4a-dbaa-438f-88ee-01cc40d29e9d" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b6635d4a-dbaa-438f-88ee-01cc40d29e9d") on node "crc" Nov 22 11:57:25 crc kubenswrapper[4772]: I1122 11:57:25.355473 4772 reconciler_common.go:293] "Volume detached for volume \"pvc-b6635d4a-dbaa-438f-88ee-01cc40d29e9d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b6635d4a-dbaa-438f-88ee-01cc40d29e9d\") on node \"crc\" DevicePath \"\"" Nov 22 11:57:25 crc kubenswrapper[4772]: I1122 11:57:25.355520 4772 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f25f971f-d811-4369-9c23-dbb1243592e9-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 22 11:57:25 crc kubenswrapper[4772]: I1122 11:57:25.829767 4772 generic.go:334] "Generic (PLEG): container finished" podID="8c4b5659-ec0f-4746-91e1-9bb739a705f8" containerID="7bf4f3f3da35f6d91e034424806dbbed8b579cdf92ababc36cd76573598996ec" exitCode=0 Nov 22 11:57:25 crc kubenswrapper[4772]: I1122 11:57:25.830250 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8c4b5659-ec0f-4746-91e1-9bb739a705f8","Type":"ContainerDied","Data":"7bf4f3f3da35f6d91e034424806dbbed8b579cdf92ababc36cd76573598996ec"} Nov 22 11:57:25 crc kubenswrapper[4772]: I1122 11:57:25.832415 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f25f971f-d811-4369-9c23-dbb1243592e9","Type":"ContainerDied","Data":"6195bfb6683a239cd32fb564e1c7d5e5b11939633ff65ea0edfb9c26b907acde"} Nov 22 11:57:25 crc kubenswrapper[4772]: I1122 11:57:25.832451 4772 scope.go:117] "RemoveContainer" containerID="73778e5820bc477625b3e0b9f111cda1c7d577b7233878df54d17ca3f8394243" Nov 22 11:57:25 crc kubenswrapper[4772]: I1122 11:57:25.832584 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 22 11:57:25 crc kubenswrapper[4772]: I1122 11:57:25.864808 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 22 11:57:25 crc kubenswrapper[4772]: I1122 11:57:25.867109 4772 scope.go:117] "RemoveContainer" containerID="ed56f837c5573efe8021f3c96bb4e4bc5ef07fd10b3da429793e9ed466861c75" Nov 22 11:57:25 crc kubenswrapper[4772]: I1122 11:57:25.872790 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 22 11:57:25 crc kubenswrapper[4772]: I1122 11:57:25.912101 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 22 11:57:25 crc kubenswrapper[4772]: E1122 11:57:25.913266 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8edd7b5d-fc65-408e-bf01-556a8e863a13" containerName="init" Nov 22 11:57:25 crc kubenswrapper[4772]: I1122 11:57:25.913291 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="8edd7b5d-fc65-408e-bf01-556a8e863a13" containerName="init" Nov 22 11:57:25 crc kubenswrapper[4772]: E1122 11:57:25.913312 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f25f971f-d811-4369-9c23-dbb1243592e9" containerName="rabbitmq" Nov 22 11:57:25 crc kubenswrapper[4772]: I1122 11:57:25.913321 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="f25f971f-d811-4369-9c23-dbb1243592e9" containerName="rabbitmq" Nov 22 11:57:25 crc kubenswrapper[4772]: E1122 11:57:25.913350 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f25f971f-d811-4369-9c23-dbb1243592e9" containerName="setup-container" Nov 22 11:57:25 crc kubenswrapper[4772]: I1122 11:57:25.913359 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="f25f971f-d811-4369-9c23-dbb1243592e9" containerName="setup-container" Nov 22 11:57:25 crc kubenswrapper[4772]: E1122 11:57:25.913380 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8edd7b5d-fc65-408e-bf01-556a8e863a13" containerName="dnsmasq-dns" Nov 22 11:57:25 crc kubenswrapper[4772]: I1122 11:57:25.913387 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="8edd7b5d-fc65-408e-bf01-556a8e863a13" containerName="dnsmasq-dns" Nov 22 11:57:25 crc kubenswrapper[4772]: I1122 11:57:25.913666 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="8edd7b5d-fc65-408e-bf01-556a8e863a13" containerName="dnsmasq-dns" Nov 22 11:57:25 crc kubenswrapper[4772]: I1122 11:57:25.913680 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="f25f971f-d811-4369-9c23-dbb1243592e9" containerName="rabbitmq" Nov 22 11:57:25 crc kubenswrapper[4772]: I1122 11:57:25.921858 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 22 11:57:25 crc kubenswrapper[4772]: I1122 11:57:25.927855 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 22 11:57:25 crc kubenswrapper[4772]: I1122 11:57:25.930199 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-qfbpk" Nov 22 11:57:25 crc kubenswrapper[4772]: I1122 11:57:25.931271 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 22 11:57:25 crc kubenswrapper[4772]: I1122 11:57:25.931470 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 22 11:57:25 crc kubenswrapper[4772]: I1122 11:57:25.931652 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 22 11:57:25 crc kubenswrapper[4772]: I1122 11:57:25.931721 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.068626 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pswcp\" (UniqueName: \"kubernetes.io/projected/7f5b0814-3abb-4b82-a919-305b358a05d0-kube-api-access-pswcp\") pod \"rabbitmq-server-0\" (UID: \"7f5b0814-3abb-4b82-a919-305b358a05d0\") " pod="openstack/rabbitmq-server-0" Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.068675 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7f5b0814-3abb-4b82-a919-305b358a05d0-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7f5b0814-3abb-4b82-a919-305b358a05d0\") " pod="openstack/rabbitmq-server-0" Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.068766 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7f5b0814-3abb-4b82-a919-305b358a05d0-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7f5b0814-3abb-4b82-a919-305b358a05d0\") " pod="openstack/rabbitmq-server-0" Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.068789 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7f5b0814-3abb-4b82-a919-305b358a05d0-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7f5b0814-3abb-4b82-a919-305b358a05d0\") " pod="openstack/rabbitmq-server-0" Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.068970 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7f5b0814-3abb-4b82-a919-305b358a05d0-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7f5b0814-3abb-4b82-a919-305b358a05d0\") " pod="openstack/rabbitmq-server-0" Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.069023 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7f5b0814-3abb-4b82-a919-305b358a05d0-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7f5b0814-3abb-4b82-a919-305b358a05d0\") " pod="openstack/rabbitmq-server-0" Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.069066 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7f5b0814-3abb-4b82-a919-305b358a05d0-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7f5b0814-3abb-4b82-a919-305b358a05d0\") " pod="openstack/rabbitmq-server-0" Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.069128 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7f5b0814-3abb-4b82-a919-305b358a05d0-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7f5b0814-3abb-4b82-a919-305b358a05d0\") " pod="openstack/rabbitmq-server-0" Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.069525 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b6635d4a-dbaa-438f-88ee-01cc40d29e9d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b6635d4a-dbaa-438f-88ee-01cc40d29e9d\") pod \"rabbitmq-server-0\" (UID: \"7f5b0814-3abb-4b82-a919-305b358a05d0\") " pod="openstack/rabbitmq-server-0" Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.109118 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.171244 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8c4b5659-ec0f-4746-91e1-9bb739a705f8-rabbitmq-plugins\") pod \"8c4b5659-ec0f-4746-91e1-9bb739a705f8\" (UID: \"8c4b5659-ec0f-4746-91e1-9bb739a705f8\") " Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.171308 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8c4b5659-ec0f-4746-91e1-9bb739a705f8-plugins-conf\") pod \"8c4b5659-ec0f-4746-91e1-9bb739a705f8\" (UID: \"8c4b5659-ec0f-4746-91e1-9bb739a705f8\") " Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.171399 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8c4b5659-ec0f-4746-91e1-9bb739a705f8-rabbitmq-confd\") pod \"8c4b5659-ec0f-4746-91e1-9bb739a705f8\" (UID: \"8c4b5659-ec0f-4746-91e1-9bb739a705f8\") " Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.171428 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8c4b5659-ec0f-4746-91e1-9bb739a705f8-erlang-cookie-secret\") pod \"8c4b5659-ec0f-4746-91e1-9bb739a705f8\" (UID: \"8c4b5659-ec0f-4746-91e1-9bb739a705f8\") " Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.171621 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-14885645-cecf-47af-adfb-df57d18543c7\") pod \"8c4b5659-ec0f-4746-91e1-9bb739a705f8\" (UID: \"8c4b5659-ec0f-4746-91e1-9bb739a705f8\") " Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.171722 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8c4b5659-ec0f-4746-91e1-9bb739a705f8-rabbitmq-erlang-cookie\") pod \"8c4b5659-ec0f-4746-91e1-9bb739a705f8\" (UID: \"8c4b5659-ec0f-4746-91e1-9bb739a705f8\") " Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.171766 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" 
(UniqueName: \"kubernetes.io/configmap/8c4b5659-ec0f-4746-91e1-9bb739a705f8-server-conf\") pod \"8c4b5659-ec0f-4746-91e1-9bb739a705f8\" (UID: \"8c4b5659-ec0f-4746-91e1-9bb739a705f8\") " Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.171801 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8c4b5659-ec0f-4746-91e1-9bb739a705f8-pod-info\") pod \"8c4b5659-ec0f-4746-91e1-9bb739a705f8\" (UID: \"8c4b5659-ec0f-4746-91e1-9bb739a705f8\") " Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.171810 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c4b5659-ec0f-4746-91e1-9bb739a705f8-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "8c4b5659-ec0f-4746-91e1-9bb739a705f8" (UID: "8c4b5659-ec0f-4746-91e1-9bb739a705f8"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.171843 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rn8s2\" (UniqueName: \"kubernetes.io/projected/8c4b5659-ec0f-4746-91e1-9bb739a705f8-kube-api-access-rn8s2\") pod \"8c4b5659-ec0f-4746-91e1-9bb739a705f8\" (UID: \"8c4b5659-ec0f-4746-91e1-9bb739a705f8\") " Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.172079 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7f5b0814-3abb-4b82-a919-305b358a05d0-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7f5b0814-3abb-4b82-a919-305b358a05d0\") " pod="openstack/rabbitmq-server-0" Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.171949 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c4b5659-ec0f-4746-91e1-9bb739a705f8-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "8c4b5659-ec0f-4746-91e1-9bb739a705f8" (UID: "8c4b5659-ec0f-4746-91e1-9bb739a705f8"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.172130 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b6635d4a-dbaa-438f-88ee-01cc40d29e9d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b6635d4a-dbaa-438f-88ee-01cc40d29e9d\") pod \"rabbitmq-server-0\" (UID: \"7f5b0814-3abb-4b82-a919-305b358a05d0\") " pod="openstack/rabbitmq-server-0" Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.172170 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pswcp\" (UniqueName: \"kubernetes.io/projected/7f5b0814-3abb-4b82-a919-305b358a05d0-kube-api-access-pswcp\") pod \"rabbitmq-server-0\" (UID: \"7f5b0814-3abb-4b82-a919-305b358a05d0\") " pod="openstack/rabbitmq-server-0" Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.188363 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c4b5659-ec0f-4746-91e1-9bb739a705f8-kube-api-access-rn8s2" (OuterVolumeSpecName: "kube-api-access-rn8s2") pod "8c4b5659-ec0f-4746-91e1-9bb739a705f8" (UID: "8c4b5659-ec0f-4746-91e1-9bb739a705f8"). InnerVolumeSpecName "kube-api-access-rn8s2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.189607 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/8c4b5659-ec0f-4746-91e1-9bb739a705f8-pod-info" (OuterVolumeSpecName: "pod-info") pod "8c4b5659-ec0f-4746-91e1-9bb739a705f8" (UID: "8c4b5659-ec0f-4746-91e1-9bb739a705f8"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.190504 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c4b5659-ec0f-4746-91e1-9bb739a705f8-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "8c4b5659-ec0f-4746-91e1-9bb739a705f8" (UID: "8c4b5659-ec0f-4746-91e1-9bb739a705f8"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.190958 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c4b5659-ec0f-4746-91e1-9bb739a705f8-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "8c4b5659-ec0f-4746-91e1-9bb739a705f8" (UID: "8c4b5659-ec0f-4746-91e1-9bb739a705f8"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.191134 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7f5b0814-3abb-4b82-a919-305b358a05d0-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7f5b0814-3abb-4b82-a919-305b358a05d0\") " pod="openstack/rabbitmq-server-0" Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.191943 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7f5b0814-3abb-4b82-a919-305b358a05d0-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7f5b0814-3abb-4b82-a919-305b358a05d0\") " pod="openstack/rabbitmq-server-0" Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.192005 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7f5b0814-3abb-4b82-a919-305b358a05d0-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7f5b0814-3abb-4b82-a919-305b358a05d0\") " pod="openstack/rabbitmq-server-0" Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.192213 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7f5b0814-3abb-4b82-a919-305b358a05d0-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7f5b0814-3abb-4b82-a919-305b358a05d0\") " pod="openstack/rabbitmq-server-0" Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.192250 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7f5b0814-3abb-4b82-a919-305b358a05d0-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7f5b0814-3abb-4b82-a919-305b358a05d0\") " pod="openstack/rabbitmq-server-0" Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.192341 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7f5b0814-3abb-4b82-a919-305b358a05d0-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: 
\"7f5b0814-3abb-4b82-a919-305b358a05d0\") " pod="openstack/rabbitmq-server-0" Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.192540 4772 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8c4b5659-ec0f-4746-91e1-9bb739a705f8-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.192571 4772 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8c4b5659-ec0f-4746-91e1-9bb739a705f8-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.192587 4772 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8c4b5659-ec0f-4746-91e1-9bb739a705f8-pod-info\") on node \"crc\" DevicePath \"\"" Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.192600 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rn8s2\" (UniqueName: \"kubernetes.io/projected/8c4b5659-ec0f-4746-91e1-9bb739a705f8-kube-api-access-rn8s2\") on node \"crc\" DevicePath \"\"" Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.192612 4772 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8c4b5659-ec0f-4746-91e1-9bb739a705f8-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.192634 4772 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8c4b5659-ec0f-4746-91e1-9bb739a705f8-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.195267 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7f5b0814-3abb-4b82-a919-305b358a05d0-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7f5b0814-3abb-4b82-a919-305b358a05d0\") " pod="openstack/rabbitmq-server-0" Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.195700 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7f5b0814-3abb-4b82-a919-305b358a05d0-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7f5b0814-3abb-4b82-a919-305b358a05d0\") " pod="openstack/rabbitmq-server-0" Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.199783 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7f5b0814-3abb-4b82-a919-305b358a05d0-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7f5b0814-3abb-4b82-a919-305b358a05d0\") " pod="openstack/rabbitmq-server-0" Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.207655 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-14885645-cecf-47af-adfb-df57d18543c7" (OuterVolumeSpecName: "persistence") pod "8c4b5659-ec0f-4746-91e1-9bb739a705f8" (UID: "8c4b5659-ec0f-4746-91e1-9bb739a705f8"). InnerVolumeSpecName "pvc-14885645-cecf-47af-adfb-df57d18543c7". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.207700 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7f5b0814-3abb-4b82-a919-305b358a05d0-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7f5b0814-3abb-4b82-a919-305b358a05d0\") " pod="openstack/rabbitmq-server-0" Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.208390 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7f5b0814-3abb-4b82-a919-305b358a05d0-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7f5b0814-3abb-4b82-a919-305b358a05d0\") " pod="openstack/rabbitmq-server-0" Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.208408 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7f5b0814-3abb-4b82-a919-305b358a05d0-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7f5b0814-3abb-4b82-a919-305b358a05d0\") " pod="openstack/rabbitmq-server-0" Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.209963 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7f5b0814-3abb-4b82-a919-305b358a05d0-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7f5b0814-3abb-4b82-a919-305b358a05d0\") " pod="openstack/rabbitmq-server-0" Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.211386 4772 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.211419 4772 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b6635d4a-dbaa-438f-88ee-01cc40d29e9d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b6635d4a-dbaa-438f-88ee-01cc40d29e9d\") pod \"rabbitmq-server-0\" (UID: \"7f5b0814-3abb-4b82-a919-305b358a05d0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/71ec0928dba49f1a5f0a48b1d05d2ea0a558615684f580940f9809492530205a/globalmount\"" pod="openstack/rabbitmq-server-0" Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.231344 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pswcp\" (UniqueName: \"kubernetes.io/projected/7f5b0814-3abb-4b82-a919-305b358a05d0-kube-api-access-pswcp\") pod \"rabbitmq-server-0\" (UID: \"7f5b0814-3abb-4b82-a919-305b358a05d0\") " pod="openstack/rabbitmq-server-0" Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.237743 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c4b5659-ec0f-4746-91e1-9bb739a705f8-server-conf" (OuterVolumeSpecName: "server-conf") pod "8c4b5659-ec0f-4746-91e1-9bb739a705f8" (UID: "8c4b5659-ec0f-4746-91e1-9bb739a705f8"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.260363 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b6635d4a-dbaa-438f-88ee-01cc40d29e9d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b6635d4a-dbaa-438f-88ee-01cc40d29e9d\") pod \"rabbitmq-server-0\" (UID: \"7f5b0814-3abb-4b82-a919-305b358a05d0\") " pod="openstack/rabbitmq-server-0" Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.267925 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c4b5659-ec0f-4746-91e1-9bb739a705f8-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "8c4b5659-ec0f-4746-91e1-9bb739a705f8" (UID: "8c4b5659-ec0f-4746-91e1-9bb739a705f8"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.294604 4772 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8c4b5659-ec0f-4746-91e1-9bb739a705f8-server-conf\") on node \"crc\" DevicePath \"\"" Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.294659 4772 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8c4b5659-ec0f-4746-91e1-9bb739a705f8-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.294749 4772 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-14885645-cecf-47af-adfb-df57d18543c7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-14885645-cecf-47af-adfb-df57d18543c7\") on node \"crc\" " Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.318458 4772 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.318652 4772 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-14885645-cecf-47af-adfb-df57d18543c7" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-14885645-cecf-47af-adfb-df57d18543c7") on node "crc" Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.396184 4772 reconciler_common.go:293] "Volume detached for volume \"pvc-14885645-cecf-47af-adfb-df57d18543c7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-14885645-cecf-47af-adfb-df57d18543c7\") on node \"crc\" DevicePath \"\"" Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.561672 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.770529 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.844950 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8c4b5659-ec0f-4746-91e1-9bb739a705f8","Type":"ContainerDied","Data":"2adaa123cb76646112178b4a9220585e871383452bc69de15f092e5c47204308"} Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.845003 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.845020 4772 scope.go:117] "RemoveContainer" containerID="7bf4f3f3da35f6d91e034424806dbbed8b579cdf92ababc36cd76573598996ec" Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.853063 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7f5b0814-3abb-4b82-a919-305b358a05d0","Type":"ContainerStarted","Data":"4a47535a0e06f704844a0025fa6abc67a5e94c4aeccfc253e5a3966edbb7fdc9"} Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.871574 4772 scope.go:117] "RemoveContainer" containerID="74e104ae1b1709cb905ee6c27d3bba4e9aa3c6ee05e9468899c78bf2f6ad38f2" Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.914797 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.925589 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.937301 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 22 11:57:26 crc kubenswrapper[4772]: E1122 11:57:26.937962 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c4b5659-ec0f-4746-91e1-9bb739a705f8" containerName="setup-container" Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.938476 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c4b5659-ec0f-4746-91e1-9bb739a705f8" containerName="setup-container" Nov 22 11:57:26 crc kubenswrapper[4772]: E1122 11:57:26.938613 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c4b5659-ec0f-4746-91e1-9bb739a705f8" containerName="rabbitmq" Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.938705 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c4b5659-ec0f-4746-91e1-9bb739a705f8" containerName="rabbitmq" Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.939129 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c4b5659-ec0f-4746-91e1-9bb739a705f8" containerName="rabbitmq" Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.940323 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.944661 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.944912 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.945104 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.945340 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-75p7n" Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.945483 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 22 11:57:26 crc kubenswrapper[4772]: I1122 11:57:26.954716 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 22 11:57:27 crc kubenswrapper[4772]: I1122 11:57:27.007530 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-14885645-cecf-47af-adfb-df57d18543c7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-14885645-cecf-47af-adfb-df57d18543c7\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa9a9bc0-2f06-4490-bd06-eae00af9c7d0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 11:57:27 crc kubenswrapper[4772]: I1122 11:57:27.007583 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fa9a9bc0-2f06-4490-bd06-eae00af9c7d0-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa9a9bc0-2f06-4490-bd06-eae00af9c7d0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 11:57:27 crc kubenswrapper[4772]: I1122 11:57:27.007639 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fa9a9bc0-2f06-4490-bd06-eae00af9c7d0-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa9a9bc0-2f06-4490-bd06-eae00af9c7d0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 11:57:27 crc kubenswrapper[4772]: I1122 11:57:27.007668 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fa9a9bc0-2f06-4490-bd06-eae00af9c7d0-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa9a9bc0-2f06-4490-bd06-eae00af9c7d0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 11:57:27 crc kubenswrapper[4772]: I1122 11:57:27.007698 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fa9a9bc0-2f06-4490-bd06-eae00af9c7d0-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa9a9bc0-2f06-4490-bd06-eae00af9c7d0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 11:57:27 crc kubenswrapper[4772]: I1122 11:57:27.007729 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzfrz\" (UniqueName: \"kubernetes.io/projected/fa9a9bc0-2f06-4490-bd06-eae00af9c7d0-kube-api-access-bzfrz\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa9a9bc0-2f06-4490-bd06-eae00af9c7d0\") " 
pod="openstack/rabbitmq-cell1-server-0" Nov 22 11:57:27 crc kubenswrapper[4772]: I1122 11:57:27.007746 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fa9a9bc0-2f06-4490-bd06-eae00af9c7d0-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa9a9bc0-2f06-4490-bd06-eae00af9c7d0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 11:57:27 crc kubenswrapper[4772]: I1122 11:57:27.007769 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fa9a9bc0-2f06-4490-bd06-eae00af9c7d0-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa9a9bc0-2f06-4490-bd06-eae00af9c7d0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 11:57:27 crc kubenswrapper[4772]: I1122 11:57:27.007803 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fa9a9bc0-2f06-4490-bd06-eae00af9c7d0-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa9a9bc0-2f06-4490-bd06-eae00af9c7d0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 11:57:27 crc kubenswrapper[4772]: I1122 11:57:27.109538 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fa9a9bc0-2f06-4490-bd06-eae00af9c7d0-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa9a9bc0-2f06-4490-bd06-eae00af9c7d0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 11:57:27 crc kubenswrapper[4772]: I1122 11:57:27.109603 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fa9a9bc0-2f06-4490-bd06-eae00af9c7d0-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa9a9bc0-2f06-4490-bd06-eae00af9c7d0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 11:57:27 crc kubenswrapper[4772]: I1122 11:57:27.109648 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fa9a9bc0-2f06-4490-bd06-eae00af9c7d0-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa9a9bc0-2f06-4490-bd06-eae00af9c7d0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 11:57:27 crc kubenswrapper[4772]: I1122 11:57:27.109688 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzfrz\" (UniqueName: \"kubernetes.io/projected/fa9a9bc0-2f06-4490-bd06-eae00af9c7d0-kube-api-access-bzfrz\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa9a9bc0-2f06-4490-bd06-eae00af9c7d0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 11:57:27 crc kubenswrapper[4772]: I1122 11:57:27.109713 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fa9a9bc0-2f06-4490-bd06-eae00af9c7d0-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa9a9bc0-2f06-4490-bd06-eae00af9c7d0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 11:57:27 crc kubenswrapper[4772]: I1122 11:57:27.109736 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fa9a9bc0-2f06-4490-bd06-eae00af9c7d0-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa9a9bc0-2f06-4490-bd06-eae00af9c7d0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 11:57:27 crc 
kubenswrapper[4772]: I1122 11:57:27.109771 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fa9a9bc0-2f06-4490-bd06-eae00af9c7d0-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa9a9bc0-2f06-4490-bd06-eae00af9c7d0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 11:57:27 crc kubenswrapper[4772]: I1122 11:57:27.109843 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-14885645-cecf-47af-adfb-df57d18543c7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-14885645-cecf-47af-adfb-df57d18543c7\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa9a9bc0-2f06-4490-bd06-eae00af9c7d0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 11:57:27 crc kubenswrapper[4772]: I1122 11:57:27.109871 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fa9a9bc0-2f06-4490-bd06-eae00af9c7d0-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa9a9bc0-2f06-4490-bd06-eae00af9c7d0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 11:57:27 crc kubenswrapper[4772]: I1122 11:57:27.110293 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fa9a9bc0-2f06-4490-bd06-eae00af9c7d0-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa9a9bc0-2f06-4490-bd06-eae00af9c7d0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 11:57:27 crc kubenswrapper[4772]: I1122 11:57:27.111064 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fa9a9bc0-2f06-4490-bd06-eae00af9c7d0-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa9a9bc0-2f06-4490-bd06-eae00af9c7d0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 11:57:27 crc kubenswrapper[4772]: I1122 11:57:27.111224 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fa9a9bc0-2f06-4490-bd06-eae00af9c7d0-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa9a9bc0-2f06-4490-bd06-eae00af9c7d0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 11:57:27 crc kubenswrapper[4772]: I1122 11:57:27.111625 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fa9a9bc0-2f06-4490-bd06-eae00af9c7d0-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa9a9bc0-2f06-4490-bd06-eae00af9c7d0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 11:57:27 crc kubenswrapper[4772]: I1122 11:57:27.114875 4772 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 22 11:57:27 crc kubenswrapper[4772]: I1122 11:57:27.114911 4772 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-14885645-cecf-47af-adfb-df57d18543c7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-14885645-cecf-47af-adfb-df57d18543c7\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa9a9bc0-2f06-4490-bd06-eae00af9c7d0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e3660f7b661c9b03f43e00ce628cf8cce098e54bfb4198c614f0c94c33743d21/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Nov 22 11:57:27 crc kubenswrapper[4772]: I1122 11:57:27.115024 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fa9a9bc0-2f06-4490-bd06-eae00af9c7d0-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa9a9bc0-2f06-4490-bd06-eae00af9c7d0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 11:57:27 crc kubenswrapper[4772]: I1122 11:57:27.118317 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fa9a9bc0-2f06-4490-bd06-eae00af9c7d0-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa9a9bc0-2f06-4490-bd06-eae00af9c7d0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 11:57:27 crc kubenswrapper[4772]: I1122 11:57:27.119683 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fa9a9bc0-2f06-4490-bd06-eae00af9c7d0-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa9a9bc0-2f06-4490-bd06-eae00af9c7d0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 11:57:27 crc kubenswrapper[4772]: I1122 11:57:27.127474 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzfrz\" (UniqueName: \"kubernetes.io/projected/fa9a9bc0-2f06-4490-bd06-eae00af9c7d0-kube-api-access-bzfrz\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa9a9bc0-2f06-4490-bd06-eae00af9c7d0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 11:57:27 crc kubenswrapper[4772]: I1122 11:57:27.154164 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-14885645-cecf-47af-adfb-df57d18543c7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-14885645-cecf-47af-adfb-df57d18543c7\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa9a9bc0-2f06-4490-bd06-eae00af9c7d0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 11:57:27 crc kubenswrapper[4772]: I1122 11:57:27.268755 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 22 11:57:27 crc kubenswrapper[4772]: I1122 11:57:27.428978 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c4b5659-ec0f-4746-91e1-9bb739a705f8" path="/var/lib/kubelet/pods/8c4b5659-ec0f-4746-91e1-9bb739a705f8/volumes" Nov 22 11:57:27 crc kubenswrapper[4772]: I1122 11:57:27.430312 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f25f971f-d811-4369-9c23-dbb1243592e9" path="/var/lib/kubelet/pods/f25f971f-d811-4369-9c23-dbb1243592e9/volumes" Nov 22 11:57:27 crc kubenswrapper[4772]: I1122 11:57:27.713175 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 22 11:57:27 crc kubenswrapper[4772]: W1122 11:57:27.742583 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa9a9bc0_2f06_4490_bd06_eae00af9c7d0.slice/crio-6be761635d8930253e23ae5e3f5c71407cbbade3e2067ad58a616ceeefb5b449 WatchSource:0}: Error finding container 6be761635d8930253e23ae5e3f5c71407cbbade3e2067ad58a616ceeefb5b449: Status 404 returned error can't find the container with id 6be761635d8930253e23ae5e3f5c71407cbbade3e2067ad58a616ceeefb5b449 Nov 22 11:57:27 crc kubenswrapper[4772]: I1122 11:57:27.878106 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fa9a9bc0-2f06-4490-bd06-eae00af9c7d0","Type":"ContainerStarted","Data":"6be761635d8930253e23ae5e3f5c71407cbbade3e2067ad58a616ceeefb5b449"} Nov 22 11:57:28 crc kubenswrapper[4772]: I1122 11:57:28.891173 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7f5b0814-3abb-4b82-a919-305b358a05d0","Type":"ContainerStarted","Data":"68e5c945ffd3a7cec619fd60cb4b635df3b07b3502b24d98f27d9c19a103e14d"} Nov 22 11:57:29 crc kubenswrapper[4772]: I1122 11:57:29.903883 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fa9a9bc0-2f06-4490-bd06-eae00af9c7d0","Type":"ContainerStarted","Data":"e53947c6adbcb7826681da9fb1007db44971b2ebdba3bd6ddd2c751cdeb61f57"} Nov 22 11:58:01 crc kubenswrapper[4772]: I1122 11:58:01.196152 4772 generic.go:334] "Generic (PLEG): container finished" podID="7f5b0814-3abb-4b82-a919-305b358a05d0" containerID="68e5c945ffd3a7cec619fd60cb4b635df3b07b3502b24d98f27d9c19a103e14d" exitCode=0 Nov 22 11:58:01 crc kubenswrapper[4772]: I1122 11:58:01.196294 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7f5b0814-3abb-4b82-a919-305b358a05d0","Type":"ContainerDied","Data":"68e5c945ffd3a7cec619fd60cb4b635df3b07b3502b24d98f27d9c19a103e14d"} Nov 22 11:58:02 crc kubenswrapper[4772]: I1122 11:58:02.217399 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7f5b0814-3abb-4b82-a919-305b358a05d0","Type":"ContainerStarted","Data":"bcf389fed0e8606fc592ec3817b5761ee3598638e440e47eb8b9fbb3b4928a95"} Nov 22 11:58:02 crc kubenswrapper[4772]: I1122 11:58:02.221333 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 22 11:58:02 crc kubenswrapper[4772]: I1122 11:58:02.225441 4772 generic.go:334] "Generic (PLEG): container finished" podID="fa9a9bc0-2f06-4490-bd06-eae00af9c7d0" containerID="e53947c6adbcb7826681da9fb1007db44971b2ebdba3bd6ddd2c751cdeb61f57" exitCode=0 Nov 22 11:58:02 crc kubenswrapper[4772]: I1122 11:58:02.225539 4772 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fa9a9bc0-2f06-4490-bd06-eae00af9c7d0","Type":"ContainerDied","Data":"e53947c6adbcb7826681da9fb1007db44971b2ebdba3bd6ddd2c751cdeb61f57"} Nov 22 11:58:02 crc kubenswrapper[4772]: I1122 11:58:02.268953 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.268929915 podStartE2EDuration="37.268929915s" podCreationTimestamp="2025-11-22 11:57:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 11:58:02.258367735 +0000 UTC m=+4802.497812239" watchObservedRunningTime="2025-11-22 11:58:02.268929915 +0000 UTC m=+4802.508374409" Nov 22 11:58:03 crc kubenswrapper[4772]: I1122 11:58:03.236278 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fa9a9bc0-2f06-4490-bd06-eae00af9c7d0","Type":"ContainerStarted","Data":"7d3c9ffeea521af2e045d2ae8d016c4b5423da868e6d410b852d4d2f16fe9198"} Nov 22 11:58:03 crc kubenswrapper[4772]: I1122 11:58:03.236454 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 22 11:58:03 crc kubenswrapper[4772]: I1122 11:58:03.262404 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.262378509 podStartE2EDuration="37.262378509s" podCreationTimestamp="2025-11-22 11:57:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 11:58:03.257299195 +0000 UTC m=+4803.496743689" watchObservedRunningTime="2025-11-22 11:58:03.262378509 +0000 UTC m=+4803.501823003" Nov 22 11:58:16 crc kubenswrapper[4772]: I1122 11:58:16.565682 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 22 11:58:17 crc kubenswrapper[4772]: I1122 11:58:17.272396 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 22 11:58:25 crc kubenswrapper[4772]: I1122 11:58:25.246667 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-1-default"] Nov 22 11:58:25 crc kubenswrapper[4772]: I1122 11:58:25.248262 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-1-default" Nov 22 11:58:25 crc kubenswrapper[4772]: I1122 11:58:25.250249 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-8x6hp" Nov 22 11:58:25 crc kubenswrapper[4772]: I1122 11:58:25.258711 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1-default"] Nov 22 11:58:25 crc kubenswrapper[4772]: I1122 11:58:25.347856 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmvp7\" (UniqueName: \"kubernetes.io/projected/227b8156-8150-4202-96d2-d6028cc2747c-kube-api-access-rmvp7\") pod \"mariadb-client-1-default\" (UID: \"227b8156-8150-4202-96d2-d6028cc2747c\") " pod="openstack/mariadb-client-1-default" Nov 22 11:58:25 crc kubenswrapper[4772]: I1122 11:58:25.449194 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmvp7\" (UniqueName: \"kubernetes.io/projected/227b8156-8150-4202-96d2-d6028cc2747c-kube-api-access-rmvp7\") pod \"mariadb-client-1-default\" (UID: \"227b8156-8150-4202-96d2-d6028cc2747c\") " pod="openstack/mariadb-client-1-default" Nov 22 11:58:25 crc kubenswrapper[4772]: I1122 11:58:25.469976 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmvp7\" (UniqueName: \"kubernetes.io/projected/227b8156-8150-4202-96d2-d6028cc2747c-kube-api-access-rmvp7\") pod \"mariadb-client-1-default\" (UID: \"227b8156-8150-4202-96d2-d6028cc2747c\") " pod="openstack/mariadb-client-1-default" Nov 22 11:58:25 crc kubenswrapper[4772]: I1122 11:58:25.575201 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1-default" Nov 22 11:58:26 crc kubenswrapper[4772]: I1122 11:58:26.071330 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1-default"] Nov 22 11:58:26 crc kubenswrapper[4772]: I1122 11:58:26.450091 4772 generic.go:334] "Generic (PLEG): container finished" podID="227b8156-8150-4202-96d2-d6028cc2747c" containerID="8c9e4b52463696717e66a5609483be6a6703f7c6cbca99b848045f5fcf7735dc" exitCode=0 Nov 22 11:58:26 crc kubenswrapper[4772]: I1122 11:58:26.450140 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1-default" event={"ID":"227b8156-8150-4202-96d2-d6028cc2747c","Type":"ContainerDied","Data":"8c9e4b52463696717e66a5609483be6a6703f7c6cbca99b848045f5fcf7735dc"} Nov 22 11:58:26 crc kubenswrapper[4772]: I1122 11:58:26.450173 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1-default" event={"ID":"227b8156-8150-4202-96d2-d6028cc2747c","Type":"ContainerStarted","Data":"258ed977987509bd8bedb143152cd74cb8b8b824d1656712efbd02ba494dc22d"} Nov 22 11:58:27 crc kubenswrapper[4772]: I1122 11:58:27.866738 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-1-default" Nov 22 11:58:27 crc kubenswrapper[4772]: I1122 11:58:27.897278 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-1-default_227b8156-8150-4202-96d2-d6028cc2747c/mariadb-client-1-default/0.log" Nov 22 11:58:27 crc kubenswrapper[4772]: I1122 11:58:27.924165 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-1-default"] Nov 22 11:58:27 crc kubenswrapper[4772]: I1122 11:58:27.930456 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-1-default"] Nov 22 11:58:27 crc kubenswrapper[4772]: I1122 11:58:27.986071 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmvp7\" (UniqueName: \"kubernetes.io/projected/227b8156-8150-4202-96d2-d6028cc2747c-kube-api-access-rmvp7\") pod \"227b8156-8150-4202-96d2-d6028cc2747c\" (UID: \"227b8156-8150-4202-96d2-d6028cc2747c\") " Nov 22 11:58:27 crc kubenswrapper[4772]: I1122 11:58:27.995270 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/227b8156-8150-4202-96d2-d6028cc2747c-kube-api-access-rmvp7" (OuterVolumeSpecName: "kube-api-access-rmvp7") pod "227b8156-8150-4202-96d2-d6028cc2747c" (UID: "227b8156-8150-4202-96d2-d6028cc2747c"). InnerVolumeSpecName "kube-api-access-rmvp7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:58:28 crc kubenswrapper[4772]: I1122 11:58:28.087524 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmvp7\" (UniqueName: \"kubernetes.io/projected/227b8156-8150-4202-96d2-d6028cc2747c-kube-api-access-rmvp7\") on node \"crc\" DevicePath \"\"" Nov 22 11:58:28 crc kubenswrapper[4772]: I1122 11:58:28.370387 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-2-default"] Nov 22 11:58:28 crc kubenswrapper[4772]: E1122 11:58:28.370851 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="227b8156-8150-4202-96d2-d6028cc2747c" containerName="mariadb-client-1-default" Nov 22 11:58:28 crc kubenswrapper[4772]: I1122 11:58:28.370875 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="227b8156-8150-4202-96d2-d6028cc2747c" containerName="mariadb-client-1-default" Nov 22 11:58:28 crc kubenswrapper[4772]: I1122 11:58:28.371103 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="227b8156-8150-4202-96d2-d6028cc2747c" containerName="mariadb-client-1-default" Nov 22 11:58:28 crc kubenswrapper[4772]: I1122 11:58:28.371831 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2-default" Nov 22 11:58:28 crc kubenswrapper[4772]: I1122 11:58:28.381800 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2-default"] Nov 22 11:58:28 crc kubenswrapper[4772]: I1122 11:58:28.465299 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="258ed977987509bd8bedb143152cd74cb8b8b824d1656712efbd02ba494dc22d" Nov 22 11:58:28 crc kubenswrapper[4772]: I1122 11:58:28.465360 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-1-default" Nov 22 11:58:28 crc kubenswrapper[4772]: I1122 11:58:28.493576 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4dld\" (UniqueName: \"kubernetes.io/projected/648cab8f-8627-442d-b9db-4dbd5cf65cfa-kube-api-access-h4dld\") pod \"mariadb-client-2-default\" (UID: \"648cab8f-8627-442d-b9db-4dbd5cf65cfa\") " pod="openstack/mariadb-client-2-default" Nov 22 11:58:28 crc kubenswrapper[4772]: I1122 11:58:28.594703 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4dld\" (UniqueName: \"kubernetes.io/projected/648cab8f-8627-442d-b9db-4dbd5cf65cfa-kube-api-access-h4dld\") pod \"mariadb-client-2-default\" (UID: \"648cab8f-8627-442d-b9db-4dbd5cf65cfa\") " pod="openstack/mariadb-client-2-default" Nov 22 11:58:28 crc kubenswrapper[4772]: I1122 11:58:28.621308 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4dld\" (UniqueName: \"kubernetes.io/projected/648cab8f-8627-442d-b9db-4dbd5cf65cfa-kube-api-access-h4dld\") pod \"mariadb-client-2-default\" (UID: \"648cab8f-8627-442d-b9db-4dbd5cf65cfa\") " pod="openstack/mariadb-client-2-default" Nov 22 11:58:28 crc kubenswrapper[4772]: I1122 11:58:28.690192 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2-default" Nov 22 11:58:29 crc kubenswrapper[4772]: I1122 11:58:29.110633 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2-default"] Nov 22 11:58:29 crc kubenswrapper[4772]: W1122 11:58:29.115503 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod648cab8f_8627_442d_b9db_4dbd5cf65cfa.slice/crio-722738f9ce8dbe9ddf5c6aaf36e0d5c7d869a42953b4595c92cc982324b68f78 WatchSource:0}: Error finding container 722738f9ce8dbe9ddf5c6aaf36e0d5c7d869a42953b4595c92cc982324b68f78: Status 404 returned error can't find the container with id 722738f9ce8dbe9ddf5c6aaf36e0d5c7d869a42953b4595c92cc982324b68f78 Nov 22 11:58:29 crc kubenswrapper[4772]: I1122 11:58:29.423659 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="227b8156-8150-4202-96d2-d6028cc2747c" path="/var/lib/kubelet/pods/227b8156-8150-4202-96d2-d6028cc2747c/volumes" Nov 22 11:58:29 crc kubenswrapper[4772]: I1122 11:58:29.473861 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2-default" event={"ID":"648cab8f-8627-442d-b9db-4dbd5cf65cfa","Type":"ContainerStarted","Data":"dc2efb42d7ab7681aba3fe33932da0a94bc10e6dfb84a4ee4210f83d99652036"} Nov 22 11:58:29 crc kubenswrapper[4772]: I1122 11:58:29.473922 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2-default" event={"ID":"648cab8f-8627-442d-b9db-4dbd5cf65cfa","Type":"ContainerStarted","Data":"722738f9ce8dbe9ddf5c6aaf36e0d5c7d869a42953b4595c92cc982324b68f78"} Nov 22 11:58:29 crc kubenswrapper[4772]: I1122 11:58:29.485927 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-client-2-default" podStartSLOduration=1.485855414 podStartE2EDuration="1.485855414s" podCreationTimestamp="2025-11-22 11:58:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 11:58:29.485427064 +0000 UTC m=+4829.724871578" watchObservedRunningTime="2025-11-22 11:58:29.485855414 
+0000 UTC m=+4829.725299908" Nov 22 11:58:30 crc kubenswrapper[4772]: I1122 11:58:30.487230 4772 generic.go:334] "Generic (PLEG): container finished" podID="648cab8f-8627-442d-b9db-4dbd5cf65cfa" containerID="dc2efb42d7ab7681aba3fe33932da0a94bc10e6dfb84a4ee4210f83d99652036" exitCode=1 Nov 22 11:58:30 crc kubenswrapper[4772]: I1122 11:58:30.487349 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2-default" event={"ID":"648cab8f-8627-442d-b9db-4dbd5cf65cfa","Type":"ContainerDied","Data":"dc2efb42d7ab7681aba3fe33932da0a94bc10e6dfb84a4ee4210f83d99652036"} Nov 22 11:58:32 crc kubenswrapper[4772]: I1122 11:58:32.110409 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2-default" Nov 22 11:58:32 crc kubenswrapper[4772]: I1122 11:58:32.147636 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-2-default"] Nov 22 11:58:32 crc kubenswrapper[4772]: I1122 11:58:32.154998 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-2-default"] Nov 22 11:58:32 crc kubenswrapper[4772]: I1122 11:58:32.249732 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4dld\" (UniqueName: \"kubernetes.io/projected/648cab8f-8627-442d-b9db-4dbd5cf65cfa-kube-api-access-h4dld\") pod \"648cab8f-8627-442d-b9db-4dbd5cf65cfa\" (UID: \"648cab8f-8627-442d-b9db-4dbd5cf65cfa\") " Nov 22 11:58:32 crc kubenswrapper[4772]: I1122 11:58:32.256511 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/648cab8f-8627-442d-b9db-4dbd5cf65cfa-kube-api-access-h4dld" (OuterVolumeSpecName: "kube-api-access-h4dld") pod "648cab8f-8627-442d-b9db-4dbd5cf65cfa" (UID: "648cab8f-8627-442d-b9db-4dbd5cf65cfa"). InnerVolumeSpecName "kube-api-access-h4dld". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:58:32 crc kubenswrapper[4772]: I1122 11:58:32.352102 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4dld\" (UniqueName: \"kubernetes.io/projected/648cab8f-8627-442d-b9db-4dbd5cf65cfa-kube-api-access-h4dld\") on node \"crc\" DevicePath \"\"" Nov 22 11:58:32 crc kubenswrapper[4772]: I1122 11:58:32.506878 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="722738f9ce8dbe9ddf5c6aaf36e0d5c7d869a42953b4595c92cc982324b68f78" Nov 22 11:58:32 crc kubenswrapper[4772]: I1122 11:58:32.506957 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2-default" Nov 22 11:58:32 crc kubenswrapper[4772]: I1122 11:58:32.695988 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-1"] Nov 22 11:58:32 crc kubenswrapper[4772]: E1122 11:58:32.696390 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="648cab8f-8627-442d-b9db-4dbd5cf65cfa" containerName="mariadb-client-2-default" Nov 22 11:58:32 crc kubenswrapper[4772]: I1122 11:58:32.696413 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="648cab8f-8627-442d-b9db-4dbd5cf65cfa" containerName="mariadb-client-2-default" Nov 22 11:58:32 crc kubenswrapper[4772]: I1122 11:58:32.696617 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="648cab8f-8627-442d-b9db-4dbd5cf65cfa" containerName="mariadb-client-2-default" Nov 22 11:58:32 crc kubenswrapper[4772]: I1122 11:58:32.697240 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-1" Nov 22 11:58:32 crc kubenswrapper[4772]: I1122 11:58:32.703066 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-8x6hp" Nov 22 11:58:32 crc kubenswrapper[4772]: I1122 11:58:32.711268 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1"] Nov 22 11:58:32 crc kubenswrapper[4772]: I1122 11:58:32.860407 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xkps\" (UniqueName: \"kubernetes.io/projected/7a93dba7-0f96-44a8-856f-24b565baf4e1-kube-api-access-6xkps\") pod \"mariadb-client-1\" (UID: \"7a93dba7-0f96-44a8-856f-24b565baf4e1\") " pod="openstack/mariadb-client-1" Nov 22 11:58:32 crc kubenswrapper[4772]: I1122 11:58:32.962379 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xkps\" (UniqueName: \"kubernetes.io/projected/7a93dba7-0f96-44a8-856f-24b565baf4e1-kube-api-access-6xkps\") pod \"mariadb-client-1\" (UID: \"7a93dba7-0f96-44a8-856f-24b565baf4e1\") " pod="openstack/mariadb-client-1" Nov 22 11:58:32 crc kubenswrapper[4772]: I1122 11:58:32.982034 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xkps\" (UniqueName: \"kubernetes.io/projected/7a93dba7-0f96-44a8-856f-24b565baf4e1-kube-api-access-6xkps\") pod \"mariadb-client-1\" (UID: \"7a93dba7-0f96-44a8-856f-24b565baf4e1\") " pod="openstack/mariadb-client-1" Nov 22 11:58:33 crc kubenswrapper[4772]: I1122 11:58:33.015108 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1" Nov 22 11:58:33 crc kubenswrapper[4772]: I1122 11:58:33.424563 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="648cab8f-8627-442d-b9db-4dbd5cf65cfa" path="/var/lib/kubelet/pods/648cab8f-8627-442d-b9db-4dbd5cf65cfa/volumes" Nov 22 11:58:33 crc kubenswrapper[4772]: I1122 11:58:33.532107 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1"] Nov 22 11:58:33 crc kubenswrapper[4772]: W1122 11:58:33.544938 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a93dba7_0f96_44a8_856f_24b565baf4e1.slice/crio-dba9b1204b64c236ab6846916d90ceb517f0ca7d84d8f270343f29cc1f30f96e WatchSource:0}: Error finding container dba9b1204b64c236ab6846916d90ceb517f0ca7d84d8f270343f29cc1f30f96e: Status 404 returned error can't find the container with id dba9b1204b64c236ab6846916d90ceb517f0ca7d84d8f270343f29cc1f30f96e Nov 22 11:58:34 crc kubenswrapper[4772]: I1122 11:58:34.531732 4772 generic.go:334] "Generic (PLEG): container finished" podID="7a93dba7-0f96-44a8-856f-24b565baf4e1" containerID="c2ab57c690078e98e08f619738ab821d658a520e461a345900964104e83ad7ac" exitCode=0 Nov 22 11:58:34 crc kubenswrapper[4772]: I1122 11:58:34.532116 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1" event={"ID":"7a93dba7-0f96-44a8-856f-24b565baf4e1","Type":"ContainerDied","Data":"c2ab57c690078e98e08f619738ab821d658a520e461a345900964104e83ad7ac"} Nov 22 11:58:34 crc kubenswrapper[4772]: I1122 11:58:34.532154 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1" event={"ID":"7a93dba7-0f96-44a8-856f-24b565baf4e1","Type":"ContainerStarted","Data":"dba9b1204b64c236ab6846916d90ceb517f0ca7d84d8f270343f29cc1f30f96e"} Nov 22 11:58:35 crc kubenswrapper[4772]: I1122 
11:58:35.885431 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1" Nov 22 11:58:35 crc kubenswrapper[4772]: I1122 11:58:35.903109 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-1_7a93dba7-0f96-44a8-856f-24b565baf4e1/mariadb-client-1/0.log" Nov 22 11:58:35 crc kubenswrapper[4772]: I1122 11:58:35.931513 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-1"] Nov 22 11:58:35 crc kubenswrapper[4772]: I1122 11:58:35.939154 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-1"] Nov 22 11:58:36 crc kubenswrapper[4772]: I1122 11:58:36.014931 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xkps\" (UniqueName: \"kubernetes.io/projected/7a93dba7-0f96-44a8-856f-24b565baf4e1-kube-api-access-6xkps\") pod \"7a93dba7-0f96-44a8-856f-24b565baf4e1\" (UID: \"7a93dba7-0f96-44a8-856f-24b565baf4e1\") " Nov 22 11:58:36 crc kubenswrapper[4772]: I1122 11:58:36.020809 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a93dba7-0f96-44a8-856f-24b565baf4e1-kube-api-access-6xkps" (OuterVolumeSpecName: "kube-api-access-6xkps") pod "7a93dba7-0f96-44a8-856f-24b565baf4e1" (UID: "7a93dba7-0f96-44a8-856f-24b565baf4e1"). InnerVolumeSpecName "kube-api-access-6xkps". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:58:36 crc kubenswrapper[4772]: I1122 11:58:36.116721 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xkps\" (UniqueName: \"kubernetes.io/projected/7a93dba7-0f96-44a8-856f-24b565baf4e1-kube-api-access-6xkps\") on node \"crc\" DevicePath \"\"" Nov 22 11:58:36 crc kubenswrapper[4772]: I1122 11:58:36.336479 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-4-default"] Nov 22 11:58:36 crc kubenswrapper[4772]: E1122 11:58:36.336917 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a93dba7-0f96-44a8-856f-24b565baf4e1" containerName="mariadb-client-1" Nov 22 11:58:36 crc kubenswrapper[4772]: I1122 11:58:36.336947 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a93dba7-0f96-44a8-856f-24b565baf4e1" containerName="mariadb-client-1" Nov 22 11:58:36 crc kubenswrapper[4772]: I1122 11:58:36.337186 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a93dba7-0f96-44a8-856f-24b565baf4e1" containerName="mariadb-client-1" Nov 22 11:58:36 crc kubenswrapper[4772]: I1122 11:58:36.337844 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-4-default" Nov 22 11:58:36 crc kubenswrapper[4772]: I1122 11:58:36.346819 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-4-default"] Nov 22 11:58:36 crc kubenswrapper[4772]: I1122 11:58:36.420932 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wd948\" (UniqueName: \"kubernetes.io/projected/dade112e-921f-4049-ab87-8ee8b12c1386-kube-api-access-wd948\") pod \"mariadb-client-4-default\" (UID: \"dade112e-921f-4049-ab87-8ee8b12c1386\") " pod="openstack/mariadb-client-4-default" Nov 22 11:58:36 crc kubenswrapper[4772]: I1122 11:58:36.522640 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wd948\" (UniqueName: \"kubernetes.io/projected/dade112e-921f-4049-ab87-8ee8b12c1386-kube-api-access-wd948\") pod \"mariadb-client-4-default\" (UID: \"dade112e-921f-4049-ab87-8ee8b12c1386\") " pod="openstack/mariadb-client-4-default" Nov 22 11:58:36 crc kubenswrapper[4772]: I1122 11:58:36.545885 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wd948\" (UniqueName: \"kubernetes.io/projected/dade112e-921f-4049-ab87-8ee8b12c1386-kube-api-access-wd948\") pod \"mariadb-client-4-default\" (UID: \"dade112e-921f-4049-ab87-8ee8b12c1386\") " pod="openstack/mariadb-client-4-default" Nov 22 11:58:36 crc kubenswrapper[4772]: I1122 11:58:36.551802 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dba9b1204b64c236ab6846916d90ceb517f0ca7d84d8f270343f29cc1f30f96e" Nov 22 11:58:36 crc kubenswrapper[4772]: I1122 11:58:36.551875 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1" Nov 22 11:58:36 crc kubenswrapper[4772]: I1122 11:58:36.664809 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-4-default" Nov 22 11:58:37 crc kubenswrapper[4772]: I1122 11:58:37.195462 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-4-default"] Nov 22 11:58:37 crc kubenswrapper[4772]: W1122 11:58:37.202619 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddade112e_921f_4049_ab87_8ee8b12c1386.slice/crio-b82f4cb572fd8e5d86ff6ad9f4a845f7edacd881ce8d5d944e053479f62471d6 WatchSource:0}: Error finding container b82f4cb572fd8e5d86ff6ad9f4a845f7edacd881ce8d5d944e053479f62471d6: Status 404 returned error can't find the container with id b82f4cb572fd8e5d86ff6ad9f4a845f7edacd881ce8d5d944e053479f62471d6 Nov 22 11:58:37 crc kubenswrapper[4772]: I1122 11:58:37.422935 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a93dba7-0f96-44a8-856f-24b565baf4e1" path="/var/lib/kubelet/pods/7a93dba7-0f96-44a8-856f-24b565baf4e1/volumes" Nov 22 11:58:37 crc kubenswrapper[4772]: I1122 11:58:37.562625 4772 generic.go:334] "Generic (PLEG): container finished" podID="dade112e-921f-4049-ab87-8ee8b12c1386" containerID="30b20f6e8fd08c27232845345710d500cfc7d15b375922956d10e878d79f85a2" exitCode=0 Nov 22 11:58:37 crc kubenswrapper[4772]: I1122 11:58:37.562682 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-4-default" event={"ID":"dade112e-921f-4049-ab87-8ee8b12c1386","Type":"ContainerDied","Data":"30b20f6e8fd08c27232845345710d500cfc7d15b375922956d10e878d79f85a2"} Nov 22 11:58:37 crc kubenswrapper[4772]: I1122 11:58:37.562712 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-4-default" event={"ID":"dade112e-921f-4049-ab87-8ee8b12c1386","Type":"ContainerStarted","Data":"b82f4cb572fd8e5d86ff6ad9f4a845f7edacd881ce8d5d944e053479f62471d6"} Nov 22 11:58:38 crc kubenswrapper[4772]: I1122 11:58:38.984317 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-4-default" Nov 22 11:58:39 crc kubenswrapper[4772]: I1122 11:58:39.005942 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-4-default_dade112e-921f-4049-ab87-8ee8b12c1386/mariadb-client-4-default/0.log" Nov 22 11:58:39 crc kubenswrapper[4772]: I1122 11:58:39.031841 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-4-default"] Nov 22 11:58:39 crc kubenswrapper[4772]: I1122 11:58:39.037008 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-4-default"] Nov 22 11:58:39 crc kubenswrapper[4772]: I1122 11:58:39.075270 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wd948\" (UniqueName: \"kubernetes.io/projected/dade112e-921f-4049-ab87-8ee8b12c1386-kube-api-access-wd948\") pod \"dade112e-921f-4049-ab87-8ee8b12c1386\" (UID: \"dade112e-921f-4049-ab87-8ee8b12c1386\") " Nov 22 11:58:39 crc kubenswrapper[4772]: I1122 11:58:39.080642 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dade112e-921f-4049-ab87-8ee8b12c1386-kube-api-access-wd948" (OuterVolumeSpecName: "kube-api-access-wd948") pod "dade112e-921f-4049-ab87-8ee8b12c1386" (UID: "dade112e-921f-4049-ab87-8ee8b12c1386"). InnerVolumeSpecName "kube-api-access-wd948". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:58:39 crc kubenswrapper[4772]: I1122 11:58:39.177751 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wd948\" (UniqueName: \"kubernetes.io/projected/dade112e-921f-4049-ab87-8ee8b12c1386-kube-api-access-wd948\") on node \"crc\" DevicePath \"\"" Nov 22 11:58:39 crc kubenswrapper[4772]: I1122 11:58:39.424221 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dade112e-921f-4049-ab87-8ee8b12c1386" path="/var/lib/kubelet/pods/dade112e-921f-4049-ab87-8ee8b12c1386/volumes" Nov 22 11:58:39 crc kubenswrapper[4772]: I1122 11:58:39.584829 4772 scope.go:117] "RemoveContainer" containerID="30b20f6e8fd08c27232845345710d500cfc7d15b375922956d10e878d79f85a2" Nov 22 11:58:39 crc kubenswrapper[4772]: I1122 11:58:39.585174 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-4-default" Nov 22 11:58:43 crc kubenswrapper[4772]: I1122 11:58:43.476291 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-5-default"] Nov 22 11:58:43 crc kubenswrapper[4772]: E1122 11:58:43.477288 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dade112e-921f-4049-ab87-8ee8b12c1386" containerName="mariadb-client-4-default" Nov 22 11:58:43 crc kubenswrapper[4772]: I1122 11:58:43.477339 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="dade112e-921f-4049-ab87-8ee8b12c1386" containerName="mariadb-client-4-default" Nov 22 11:58:43 crc kubenswrapper[4772]: I1122 11:58:43.477595 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="dade112e-921f-4049-ab87-8ee8b12c1386" containerName="mariadb-client-4-default" Nov 22 11:58:43 crc kubenswrapper[4772]: I1122 11:58:43.479631 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-5-default" Nov 22 11:58:43 crc kubenswrapper[4772]: I1122 11:58:43.486207 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-8x6hp" Nov 22 11:58:43 crc kubenswrapper[4772]: I1122 11:58:43.488536 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-5-default"] Nov 22 11:58:43 crc kubenswrapper[4772]: I1122 11:58:43.552524 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqs9z\" (UniqueName: \"kubernetes.io/projected/fa74ec38-befd-4f73-8056-b522c5a0c50c-kube-api-access-tqs9z\") pod \"mariadb-client-5-default\" (UID: \"fa74ec38-befd-4f73-8056-b522c5a0c50c\") " pod="openstack/mariadb-client-5-default" Nov 22 11:58:43 crc kubenswrapper[4772]: I1122 11:58:43.655206 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqs9z\" (UniqueName: \"kubernetes.io/projected/fa74ec38-befd-4f73-8056-b522c5a0c50c-kube-api-access-tqs9z\") pod \"mariadb-client-5-default\" (UID: \"fa74ec38-befd-4f73-8056-b522c5a0c50c\") " pod="openstack/mariadb-client-5-default" Nov 22 11:58:43 crc kubenswrapper[4772]: I1122 11:58:43.741204 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqs9z\" (UniqueName: \"kubernetes.io/projected/fa74ec38-befd-4f73-8056-b522c5a0c50c-kube-api-access-tqs9z\") pod \"mariadb-client-5-default\" (UID: \"fa74ec38-befd-4f73-8056-b522c5a0c50c\") " pod="openstack/mariadb-client-5-default" Nov 22 11:58:43 crc kubenswrapper[4772]: I1122 11:58:43.816645 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-5-default" Nov 22 11:58:44 crc kubenswrapper[4772]: I1122 11:58:44.429178 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-5-default"] Nov 22 11:58:44 crc kubenswrapper[4772]: I1122 11:58:44.629549 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-5-default" event={"ID":"fa74ec38-befd-4f73-8056-b522c5a0c50c","Type":"ContainerStarted","Data":"7072e92852063f0f451d32ca336957c89800a63fb7f5f0aae9694f6e4424d0e0"} Nov 22 11:58:45 crc kubenswrapper[4772]: I1122 11:58:45.654106 4772 generic.go:334] "Generic (PLEG): container finished" podID="fa74ec38-befd-4f73-8056-b522c5a0c50c" containerID="637e70c78c6188e193cea2145d639274dcd2b75eb476bcb18f3008484652213e" exitCode=0 Nov 22 11:58:45 crc kubenswrapper[4772]: I1122 11:58:45.654411 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-5-default" event={"ID":"fa74ec38-befd-4f73-8056-b522c5a0c50c","Type":"ContainerDied","Data":"637e70c78c6188e193cea2145d639274dcd2b75eb476bcb18f3008484652213e"} Nov 22 11:58:47 crc kubenswrapper[4772]: I1122 11:58:47.059716 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-5-default" Nov 22 11:58:47 crc kubenswrapper[4772]: I1122 11:58:47.088270 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-5-default_fa74ec38-befd-4f73-8056-b522c5a0c50c/mariadb-client-5-default/0.log" Nov 22 11:58:47 crc kubenswrapper[4772]: I1122 11:58:47.108967 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqs9z\" (UniqueName: \"kubernetes.io/projected/fa74ec38-befd-4f73-8056-b522c5a0c50c-kube-api-access-tqs9z\") pod \"fa74ec38-befd-4f73-8056-b522c5a0c50c\" (UID: \"fa74ec38-befd-4f73-8056-b522c5a0c50c\") " Nov 22 11:58:47 crc kubenswrapper[4772]: I1122 11:58:47.118370 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-5-default"] Nov 22 11:58:47 crc kubenswrapper[4772]: I1122 11:58:47.124534 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa74ec38-befd-4f73-8056-b522c5a0c50c-kube-api-access-tqs9z" (OuterVolumeSpecName: "kube-api-access-tqs9z") pod "fa74ec38-befd-4f73-8056-b522c5a0c50c" (UID: "fa74ec38-befd-4f73-8056-b522c5a0c50c"). InnerVolumeSpecName "kube-api-access-tqs9z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:58:47 crc kubenswrapper[4772]: I1122 11:58:47.127871 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-5-default"] Nov 22 11:58:47 crc kubenswrapper[4772]: I1122 11:58:47.210757 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqs9z\" (UniqueName: \"kubernetes.io/projected/fa74ec38-befd-4f73-8056-b522c5a0c50c-kube-api-access-tqs9z\") on node \"crc\" DevicePath \"\"" Nov 22 11:58:47 crc kubenswrapper[4772]: I1122 11:58:47.269061 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-6-default"] Nov 22 11:58:47 crc kubenswrapper[4772]: E1122 11:58:47.269537 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa74ec38-befd-4f73-8056-b522c5a0c50c" containerName="mariadb-client-5-default" Nov 22 11:58:47 crc kubenswrapper[4772]: I1122 11:58:47.269560 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa74ec38-befd-4f73-8056-b522c5a0c50c" containerName="mariadb-client-5-default" Nov 22 11:58:47 crc kubenswrapper[4772]: I1122 11:58:47.269724 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa74ec38-befd-4f73-8056-b522c5a0c50c" containerName="mariadb-client-5-default" Nov 22 11:58:47 crc kubenswrapper[4772]: I1122 11:58:47.270373 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-6-default" Nov 22 11:58:47 crc kubenswrapper[4772]: I1122 11:58:47.283151 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-6-default"] Nov 22 11:58:47 crc kubenswrapper[4772]: I1122 11:58:47.414905 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9827r\" (UniqueName: \"kubernetes.io/projected/15890935-a6dd-4d13-9ffe-6d068573f340-kube-api-access-9827r\") pod \"mariadb-client-6-default\" (UID: \"15890935-a6dd-4d13-9ffe-6d068573f340\") " pod="openstack/mariadb-client-6-default" Nov 22 11:58:47 crc kubenswrapper[4772]: I1122 11:58:47.426937 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa74ec38-befd-4f73-8056-b522c5a0c50c" path="/var/lib/kubelet/pods/fa74ec38-befd-4f73-8056-b522c5a0c50c/volumes" Nov 22 11:58:47 crc kubenswrapper[4772]: I1122 11:58:47.517311 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9827r\" (UniqueName: \"kubernetes.io/projected/15890935-a6dd-4d13-9ffe-6d068573f340-kube-api-access-9827r\") pod \"mariadb-client-6-default\" (UID: \"15890935-a6dd-4d13-9ffe-6d068573f340\") " pod="openstack/mariadb-client-6-default" Nov 22 11:58:47 crc kubenswrapper[4772]: I1122 11:58:47.566606 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9827r\" (UniqueName: \"kubernetes.io/projected/15890935-a6dd-4d13-9ffe-6d068573f340-kube-api-access-9827r\") pod \"mariadb-client-6-default\" (UID: \"15890935-a6dd-4d13-9ffe-6d068573f340\") " pod="openstack/mariadb-client-6-default" Nov 22 11:58:47 crc kubenswrapper[4772]: I1122 11:58:47.607978 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-6-default" Nov 22 11:58:47 crc kubenswrapper[4772]: I1122 11:58:47.673819 4772 scope.go:117] "RemoveContainer" containerID="637e70c78c6188e193cea2145d639274dcd2b75eb476bcb18f3008484652213e" Nov 22 11:58:47 crc kubenswrapper[4772]: I1122 11:58:47.673901 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-5-default" Nov 22 11:58:48 crc kubenswrapper[4772]: I1122 11:58:48.134566 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-6-default"] Nov 22 11:58:48 crc kubenswrapper[4772]: I1122 11:58:48.690872 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-6-default" event={"ID":"15890935-a6dd-4d13-9ffe-6d068573f340","Type":"ContainerStarted","Data":"cb9ca13114d1be3da482818952e419f1e0b5f3d22f462126e1dfa6f39ed6890d"} Nov 22 11:58:48 crc kubenswrapper[4772]: I1122 11:58:48.690964 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-6-default" event={"ID":"15890935-a6dd-4d13-9ffe-6d068573f340","Type":"ContainerStarted","Data":"6f98e6ebc00d64dfeb77569fd0e4efe1b13f58679910f9c435da0bd81bae68d2"} Nov 22 11:58:48 crc kubenswrapper[4772]: I1122 11:58:48.716321 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-client-6-default" podStartSLOduration=1.716286292 podStartE2EDuration="1.716286292s" podCreationTimestamp="2025-11-22 11:58:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 11:58:48.707031505 +0000 UTC m=+4848.946475999" watchObservedRunningTime="2025-11-22 11:58:48.716286292 +0000 UTC m=+4848.955730826" Nov 22 11:58:49 crc kubenswrapper[4772]: I1122 11:58:49.701304 4772 generic.go:334] "Generic (PLEG): container finished" podID="15890935-a6dd-4d13-9ffe-6d068573f340" containerID="cb9ca13114d1be3da482818952e419f1e0b5f3d22f462126e1dfa6f39ed6890d" exitCode=1 Nov 22 11:58:49 crc kubenswrapper[4772]: I1122 11:58:49.701430 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-6-default" event={"ID":"15890935-a6dd-4d13-9ffe-6d068573f340","Type":"ContainerDied","Data":"cb9ca13114d1be3da482818952e419f1e0b5f3d22f462126e1dfa6f39ed6890d"} Nov 22 11:58:51 crc kubenswrapper[4772]: I1122 11:58:51.150920 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-6-default" Nov 22 11:58:51 crc kubenswrapper[4772]: I1122 11:58:51.194676 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-6-default"] Nov 22 11:58:51 crc kubenswrapper[4772]: I1122 11:58:51.202693 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-6-default"] Nov 22 11:58:51 crc kubenswrapper[4772]: I1122 11:58:51.291665 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9827r\" (UniqueName: \"kubernetes.io/projected/15890935-a6dd-4d13-9ffe-6d068573f340-kube-api-access-9827r\") pod \"15890935-a6dd-4d13-9ffe-6d068573f340\" (UID: \"15890935-a6dd-4d13-9ffe-6d068573f340\") " Nov 22 11:58:51 crc kubenswrapper[4772]: I1122 11:58:51.302281 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15890935-a6dd-4d13-9ffe-6d068573f340-kube-api-access-9827r" (OuterVolumeSpecName: "kube-api-access-9827r") pod "15890935-a6dd-4d13-9ffe-6d068573f340" (UID: "15890935-a6dd-4d13-9ffe-6d068573f340"). InnerVolumeSpecName "kube-api-access-9827r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:58:51 crc kubenswrapper[4772]: I1122 11:58:51.375892 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-7-default"] Nov 22 11:58:51 crc kubenswrapper[4772]: E1122 11:58:51.376357 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15890935-a6dd-4d13-9ffe-6d068573f340" containerName="mariadb-client-6-default" Nov 22 11:58:51 crc kubenswrapper[4772]: I1122 11:58:51.376377 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="15890935-a6dd-4d13-9ffe-6d068573f340" containerName="mariadb-client-6-default" Nov 22 11:58:51 crc kubenswrapper[4772]: I1122 11:58:51.376569 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="15890935-a6dd-4d13-9ffe-6d068573f340" containerName="mariadb-client-6-default" Nov 22 11:58:51 crc kubenswrapper[4772]: I1122 11:58:51.377199 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-7-default" Nov 22 11:58:51 crc kubenswrapper[4772]: I1122 11:58:51.388126 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-7-default"] Nov 22 11:58:51 crc kubenswrapper[4772]: I1122 11:58:51.396329 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9827r\" (UniqueName: \"kubernetes.io/projected/15890935-a6dd-4d13-9ffe-6d068573f340-kube-api-access-9827r\") on node \"crc\" DevicePath \"\"" Nov 22 11:58:51 crc kubenswrapper[4772]: I1122 11:58:51.427605 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15890935-a6dd-4d13-9ffe-6d068573f340" path="/var/lib/kubelet/pods/15890935-a6dd-4d13-9ffe-6d068573f340/volumes" Nov 22 11:58:51 crc kubenswrapper[4772]: I1122 11:58:51.497530 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ng949\" (UniqueName: \"kubernetes.io/projected/7835bcda-d6d3-4c3f-ba97-7e1a9c54c12a-kube-api-access-ng949\") pod \"mariadb-client-7-default\" (UID: \"7835bcda-d6d3-4c3f-ba97-7e1a9c54c12a\") " pod="openstack/mariadb-client-7-default" Nov 22 11:58:51 crc kubenswrapper[4772]: I1122 11:58:51.600036 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ng949\" (UniqueName: \"kubernetes.io/projected/7835bcda-d6d3-4c3f-ba97-7e1a9c54c12a-kube-api-access-ng949\") pod \"mariadb-client-7-default\" (UID: \"7835bcda-d6d3-4c3f-ba97-7e1a9c54c12a\") " pod="openstack/mariadb-client-7-default" Nov 22 11:58:51 crc kubenswrapper[4772]: I1122 11:58:51.619976 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ng949\" (UniqueName: \"kubernetes.io/projected/7835bcda-d6d3-4c3f-ba97-7e1a9c54c12a-kube-api-access-ng949\") pod \"mariadb-client-7-default\" (UID: \"7835bcda-d6d3-4c3f-ba97-7e1a9c54c12a\") " pod="openstack/mariadb-client-7-default" Nov 22 11:58:51 crc kubenswrapper[4772]: I1122 11:58:51.712453 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-7-default" Nov 22 11:58:51 crc kubenswrapper[4772]: I1122 11:58:51.734501 4772 scope.go:117] "RemoveContainer" containerID="cb9ca13114d1be3da482818952e419f1e0b5f3d22f462126e1dfa6f39ed6890d" Nov 22 11:58:51 crc kubenswrapper[4772]: I1122 11:58:51.734633 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-6-default" Nov 22 11:58:52 crc kubenswrapper[4772]: I1122 11:58:52.272556 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-7-default"] Nov 22 11:58:52 crc kubenswrapper[4772]: I1122 11:58:52.747413 4772 generic.go:334] "Generic (PLEG): container finished" podID="7835bcda-d6d3-4c3f-ba97-7e1a9c54c12a" containerID="382f5dc50358e4c4c2f7c13ad7bf635d236e3d34fffab7c5642d0c52189fdb7f" exitCode=0 Nov 22 11:58:52 crc kubenswrapper[4772]: I1122 11:58:52.747524 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-7-default" event={"ID":"7835bcda-d6d3-4c3f-ba97-7e1a9c54c12a","Type":"ContainerDied","Data":"382f5dc50358e4c4c2f7c13ad7bf635d236e3d34fffab7c5642d0c52189fdb7f"} Nov 22 11:58:52 crc kubenswrapper[4772]: I1122 11:58:52.747568 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-7-default" event={"ID":"7835bcda-d6d3-4c3f-ba97-7e1a9c54c12a","Type":"ContainerStarted","Data":"eb5e45e3cd44fed11d90b4b2e6fa9ce3177edf9306599436e9cc5b9a36533134"} Nov 22 11:58:54 crc kubenswrapper[4772]: I1122 11:58:54.332276 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-7-default" Nov 22 11:58:54 crc kubenswrapper[4772]: I1122 11:58:54.363106 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-7-default_7835bcda-d6d3-4c3f-ba97-7e1a9c54c12a/mariadb-client-7-default/0.log" Nov 22 11:58:54 crc kubenswrapper[4772]: I1122 11:58:54.395719 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-7-default"] Nov 22 11:58:54 crc kubenswrapper[4772]: I1122 11:58:54.406458 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-7-default"] Nov 22 11:58:54 crc kubenswrapper[4772]: I1122 11:58:54.461537 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ng949\" (UniqueName: \"kubernetes.io/projected/7835bcda-d6d3-4c3f-ba97-7e1a9c54c12a-kube-api-access-ng949\") pod \"7835bcda-d6d3-4c3f-ba97-7e1a9c54c12a\" (UID: \"7835bcda-d6d3-4c3f-ba97-7e1a9c54c12a\") " Nov 22 11:58:54 crc kubenswrapper[4772]: I1122 11:58:54.469324 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7835bcda-d6d3-4c3f-ba97-7e1a9c54c12a-kube-api-access-ng949" (OuterVolumeSpecName: "kube-api-access-ng949") pod "7835bcda-d6d3-4c3f-ba97-7e1a9c54c12a" (UID: "7835bcda-d6d3-4c3f-ba97-7e1a9c54c12a"). InnerVolumeSpecName "kube-api-access-ng949". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:58:54 crc kubenswrapper[4772]: I1122 11:58:54.537178 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-2"] Nov 22 11:58:54 crc kubenswrapper[4772]: E1122 11:58:54.537620 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7835bcda-d6d3-4c3f-ba97-7e1a9c54c12a" containerName="mariadb-client-7-default" Nov 22 11:58:54 crc kubenswrapper[4772]: I1122 11:58:54.537643 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="7835bcda-d6d3-4c3f-ba97-7e1a9c54c12a" containerName="mariadb-client-7-default" Nov 22 11:58:54 crc kubenswrapper[4772]: I1122 11:58:54.537837 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="7835bcda-d6d3-4c3f-ba97-7e1a9c54c12a" containerName="mariadb-client-7-default" Nov 22 11:58:54 crc kubenswrapper[4772]: I1122 11:58:54.538519 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2" Nov 22 11:58:54 crc kubenswrapper[4772]: I1122 11:58:54.550754 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2"] Nov 22 11:58:54 crc kubenswrapper[4772]: I1122 11:58:54.563649 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ng949\" (UniqueName: \"kubernetes.io/projected/7835bcda-d6d3-4c3f-ba97-7e1a9c54c12a-kube-api-access-ng949\") on node \"crc\" DevicePath \"\"" Nov 22 11:58:54 crc kubenswrapper[4772]: I1122 11:58:54.665377 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-598dd\" (UniqueName: \"kubernetes.io/projected/f3274dae-0342-41a6-9bee-37646cbbdf6b-kube-api-access-598dd\") pod \"mariadb-client-2\" (UID: \"f3274dae-0342-41a6-9bee-37646cbbdf6b\") " pod="openstack/mariadb-client-2" Nov 22 11:58:54 crc kubenswrapper[4772]: I1122 11:58:54.766696 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-598dd\" (UniqueName: \"kubernetes.io/projected/f3274dae-0342-41a6-9bee-37646cbbdf6b-kube-api-access-598dd\") pod \"mariadb-client-2\" (UID: \"f3274dae-0342-41a6-9bee-37646cbbdf6b\") " pod="openstack/mariadb-client-2" Nov 22 11:58:54 crc kubenswrapper[4772]: I1122 11:58:54.768738 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb5e45e3cd44fed11d90b4b2e6fa9ce3177edf9306599436e9cc5b9a36533134" Nov 22 11:58:54 crc kubenswrapper[4772]: I1122 11:58:54.768840 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-7-default" Nov 22 11:58:54 crc kubenswrapper[4772]: I1122 11:58:54.802178 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-598dd\" (UniqueName: \"kubernetes.io/projected/f3274dae-0342-41a6-9bee-37646cbbdf6b-kube-api-access-598dd\") pod \"mariadb-client-2\" (UID: \"f3274dae-0342-41a6-9bee-37646cbbdf6b\") " pod="openstack/mariadb-client-2" Nov 22 11:58:54 crc kubenswrapper[4772]: I1122 11:58:54.863553 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-2" Nov 22 11:58:55 crc kubenswrapper[4772]: W1122 11:58:55.423703 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf3274dae_0342_41a6_9bee_37646cbbdf6b.slice/crio-7c3982ae8771a00bd3fdaa14a31535d0f93b09beeeea138ce7482fbdb4f5cb45 WatchSource:0}: Error finding container 7c3982ae8771a00bd3fdaa14a31535d0f93b09beeeea138ce7482fbdb4f5cb45: Status 404 returned error can't find the container with id 7c3982ae8771a00bd3fdaa14a31535d0f93b09beeeea138ce7482fbdb4f5cb45 Nov 22 11:58:55 crc kubenswrapper[4772]: I1122 11:58:55.431194 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7835bcda-d6d3-4c3f-ba97-7e1a9c54c12a" path="/var/lib/kubelet/pods/7835bcda-d6d3-4c3f-ba97-7e1a9c54c12a/volumes" Nov 22 11:58:55 crc kubenswrapper[4772]: I1122 11:58:55.432378 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2"] Nov 22 11:58:55 crc kubenswrapper[4772]: I1122 11:58:55.779829 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2" event={"ID":"f3274dae-0342-41a6-9bee-37646cbbdf6b","Type":"ContainerStarted","Data":"7c3982ae8771a00bd3fdaa14a31535d0f93b09beeeea138ce7482fbdb4f5cb45"} Nov 22 11:58:56 crc kubenswrapper[4772]: I1122 11:58:56.794095 4772 generic.go:334] "Generic (PLEG): container finished" podID="f3274dae-0342-41a6-9bee-37646cbbdf6b" containerID="40dcc86ebd8ece0a166a125b109c4027f3aea2e6137ecca0ae1e285c0a3d0a67" exitCode=0 Nov 22 11:58:56 crc kubenswrapper[4772]: I1122 11:58:56.794093 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2" event={"ID":"f3274dae-0342-41a6-9bee-37646cbbdf6b","Type":"ContainerDied","Data":"40dcc86ebd8ece0a166a125b109c4027f3aea2e6137ecca0ae1e285c0a3d0a67"} Nov 22 11:58:58 crc kubenswrapper[4772]: I1122 11:58:58.251084 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2" Nov 22 11:58:58 crc kubenswrapper[4772]: I1122 11:58:58.274187 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-2_f3274dae-0342-41a6-9bee-37646cbbdf6b/mariadb-client-2/0.log" Nov 22 11:58:58 crc kubenswrapper[4772]: I1122 11:58:58.307461 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-2"] Nov 22 11:58:58 crc kubenswrapper[4772]: I1122 11:58:58.314604 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-2"] Nov 22 11:58:58 crc kubenswrapper[4772]: I1122 11:58:58.332153 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-598dd\" (UniqueName: \"kubernetes.io/projected/f3274dae-0342-41a6-9bee-37646cbbdf6b-kube-api-access-598dd\") pod \"f3274dae-0342-41a6-9bee-37646cbbdf6b\" (UID: \"f3274dae-0342-41a6-9bee-37646cbbdf6b\") " Nov 22 11:58:58 crc kubenswrapper[4772]: I1122 11:58:58.344031 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3274dae-0342-41a6-9bee-37646cbbdf6b-kube-api-access-598dd" (OuterVolumeSpecName: "kube-api-access-598dd") pod "f3274dae-0342-41a6-9bee-37646cbbdf6b" (UID: "f3274dae-0342-41a6-9bee-37646cbbdf6b"). InnerVolumeSpecName "kube-api-access-598dd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 11:58:58 crc kubenswrapper[4772]: I1122 11:58:58.434118 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-598dd\" (UniqueName: \"kubernetes.io/projected/f3274dae-0342-41a6-9bee-37646cbbdf6b-kube-api-access-598dd\") on node \"crc\" DevicePath \"\"" Nov 22 11:58:58 crc kubenswrapper[4772]: I1122 11:58:58.815527 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c3982ae8771a00bd3fdaa14a31535d0f93b09beeeea138ce7482fbdb4f5cb45" Nov 22 11:58:58 crc kubenswrapper[4772]: I1122 11:58:58.815686 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2" Nov 22 11:58:59 crc kubenswrapper[4772]: I1122 11:58:59.428178 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3274dae-0342-41a6-9bee-37646cbbdf6b" path="/var/lib/kubelet/pods/f3274dae-0342-41a6-9bee-37646cbbdf6b/volumes" Nov 22 11:59:01 crc kubenswrapper[4772]: I1122 11:59:01.533287 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 11:59:01 crc kubenswrapper[4772]: I1122 11:59:01.533772 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 11:59:10 crc kubenswrapper[4772]: I1122 11:59:10.663096 4772 scope.go:117] "RemoveContainer" containerID="7c6bed596f1a51578cc7a885bbb88ddc69fea2097c265aca7d92017666c0f56a" Nov 22 11:59:31 crc kubenswrapper[4772]: I1122 11:59:31.533099 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 11:59:31 crc kubenswrapper[4772]: I1122 11:59:31.534002 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 12:00:00 crc kubenswrapper[4772]: I1122 12:00:00.165475 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396880-h269b"] Nov 22 12:00:00 crc kubenswrapper[4772]: E1122 12:00:00.166832 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3274dae-0342-41a6-9bee-37646cbbdf6b" containerName="mariadb-client-2" Nov 22 12:00:00 crc kubenswrapper[4772]: I1122 12:00:00.166856 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3274dae-0342-41a6-9bee-37646cbbdf6b" containerName="mariadb-client-2" Nov 22 12:00:00 crc kubenswrapper[4772]: I1122 12:00:00.167098 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3274dae-0342-41a6-9bee-37646cbbdf6b" containerName="mariadb-client-2" Nov 22 12:00:00 crc kubenswrapper[4772]: I1122 12:00:00.168019 4772 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396880-h269b" Nov 22 12:00:00 crc kubenswrapper[4772]: I1122 12:00:00.170350 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 22 12:00:00 crc kubenswrapper[4772]: I1122 12:00:00.171273 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 22 12:00:00 crc kubenswrapper[4772]: I1122 12:00:00.188807 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396880-h269b"] Nov 22 12:00:00 crc kubenswrapper[4772]: I1122 12:00:00.205780 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13937ca4-2579-4c88-bfac-8cf50aeb2ffe-config-volume\") pod \"collect-profiles-29396880-h269b\" (UID: \"13937ca4-2579-4c88-bfac-8cf50aeb2ffe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396880-h269b" Nov 22 12:00:00 crc kubenswrapper[4772]: I1122 12:00:00.205833 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/13937ca4-2579-4c88-bfac-8cf50aeb2ffe-secret-volume\") pod \"collect-profiles-29396880-h269b\" (UID: \"13937ca4-2579-4c88-bfac-8cf50aeb2ffe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396880-h269b" Nov 22 12:00:00 crc kubenswrapper[4772]: I1122 12:00:00.205959 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcbql\" (UniqueName: \"kubernetes.io/projected/13937ca4-2579-4c88-bfac-8cf50aeb2ffe-kube-api-access-vcbql\") pod \"collect-profiles-29396880-h269b\" (UID: \"13937ca4-2579-4c88-bfac-8cf50aeb2ffe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396880-h269b" Nov 22 12:00:00 crc kubenswrapper[4772]: I1122 12:00:00.307954 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13937ca4-2579-4c88-bfac-8cf50aeb2ffe-config-volume\") pod \"collect-profiles-29396880-h269b\" (UID: \"13937ca4-2579-4c88-bfac-8cf50aeb2ffe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396880-h269b" Nov 22 12:00:00 crc kubenswrapper[4772]: I1122 12:00:00.308012 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/13937ca4-2579-4c88-bfac-8cf50aeb2ffe-secret-volume\") pod \"collect-profiles-29396880-h269b\" (UID: \"13937ca4-2579-4c88-bfac-8cf50aeb2ffe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396880-h269b" Nov 22 12:00:00 crc kubenswrapper[4772]: I1122 12:00:00.308112 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcbql\" (UniqueName: \"kubernetes.io/projected/13937ca4-2579-4c88-bfac-8cf50aeb2ffe-kube-api-access-vcbql\") pod \"collect-profiles-29396880-h269b\" (UID: \"13937ca4-2579-4c88-bfac-8cf50aeb2ffe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396880-h269b" Nov 22 12:00:00 crc kubenswrapper[4772]: I1122 12:00:00.309619 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/13937ca4-2579-4c88-bfac-8cf50aeb2ffe-config-volume\") pod \"collect-profiles-29396880-h269b\" (UID: \"13937ca4-2579-4c88-bfac-8cf50aeb2ffe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396880-h269b" Nov 22 12:00:00 crc kubenswrapper[4772]: I1122 12:00:00.316852 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/13937ca4-2579-4c88-bfac-8cf50aeb2ffe-secret-volume\") pod \"collect-profiles-29396880-h269b\" (UID: \"13937ca4-2579-4c88-bfac-8cf50aeb2ffe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396880-h269b" Nov 22 12:00:00 crc kubenswrapper[4772]: I1122 12:00:00.326699 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcbql\" (UniqueName: \"kubernetes.io/projected/13937ca4-2579-4c88-bfac-8cf50aeb2ffe-kube-api-access-vcbql\") pod \"collect-profiles-29396880-h269b\" (UID: \"13937ca4-2579-4c88-bfac-8cf50aeb2ffe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396880-h269b" Nov 22 12:00:00 crc kubenswrapper[4772]: I1122 12:00:00.500357 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396880-h269b" Nov 22 12:00:00 crc kubenswrapper[4772]: I1122 12:00:00.980261 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396880-h269b"] Nov 22 12:00:01 crc kubenswrapper[4772]: I1122 12:00:01.413799 4772 generic.go:334] "Generic (PLEG): container finished" podID="13937ca4-2579-4c88-bfac-8cf50aeb2ffe" containerID="9fd02037dd8db2e11cc384d47dba4c6171992aeec48e9c9752cc14cec2dce0c1" exitCode=0 Nov 22 12:00:01 crc kubenswrapper[4772]: I1122 12:00:01.424429 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396880-h269b" event={"ID":"13937ca4-2579-4c88-bfac-8cf50aeb2ffe","Type":"ContainerDied","Data":"9fd02037dd8db2e11cc384d47dba4c6171992aeec48e9c9752cc14cec2dce0c1"} Nov 22 12:00:01 crc kubenswrapper[4772]: I1122 12:00:01.424478 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396880-h269b" event={"ID":"13937ca4-2579-4c88-bfac-8cf50aeb2ffe","Type":"ContainerStarted","Data":"e99c651c96212b2ffaf489a215f1459767cdf8e1f68f5cac2f49af37a4b3324e"} Nov 22 12:00:01 crc kubenswrapper[4772]: I1122 12:00:01.532647 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 12:00:01 crc kubenswrapper[4772]: I1122 12:00:01.532706 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 12:00:01 crc kubenswrapper[4772]: I1122 12:00:01.532749 4772 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" Nov 22 12:00:01 crc kubenswrapper[4772]: I1122 12:00:01.533191 4772 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"68d813d9b094f770a392ffde3e0c1a4b8d11b83d190b2a08e521c804f76c3e77"} pod="openshift-machine-config-operator/machine-config-daemon-wwshd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 12:00:01 crc kubenswrapper[4772]: I1122 12:00:01.533252 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" containerID="cri-o://68d813d9b094f770a392ffde3e0c1a4b8d11b83d190b2a08e521c804f76c3e77" gracePeriod=600 Nov 22 12:00:02 crc kubenswrapper[4772]: I1122 12:00:02.425006 4772 generic.go:334] "Generic (PLEG): container finished" podID="2386c238-461f-4956-940f-ac3c26eb052e" containerID="68d813d9b094f770a392ffde3e0c1a4b8d11b83d190b2a08e521c804f76c3e77" exitCode=0 Nov 22 12:00:02 crc kubenswrapper[4772]: I1122 12:00:02.425128 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" event={"ID":"2386c238-461f-4956-940f-ac3c26eb052e","Type":"ContainerDied","Data":"68d813d9b094f770a392ffde3e0c1a4b8d11b83d190b2a08e521c804f76c3e77"} Nov 22 12:00:02 crc kubenswrapper[4772]: I1122 12:00:02.425714 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" event={"ID":"2386c238-461f-4956-940f-ac3c26eb052e","Type":"ContainerStarted","Data":"b7c017a2ad8566061573012df3338326de3180226814eea67d7d515c52483472"} Nov 22 12:00:02 crc kubenswrapper[4772]: I1122 12:00:02.425759 4772 scope.go:117] "RemoveContainer" containerID="409b499b44a59e26c78fa7bc525e1be7b00da254f59f8cd9f76c1032d1e4742d" Nov 22 12:00:02 crc kubenswrapper[4772]: I1122 12:00:02.714128 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396880-h269b" Nov 22 12:00:02 crc kubenswrapper[4772]: I1122 12:00:02.745251 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13937ca4-2579-4c88-bfac-8cf50aeb2ffe-config-volume\") pod \"13937ca4-2579-4c88-bfac-8cf50aeb2ffe\" (UID: \"13937ca4-2579-4c88-bfac-8cf50aeb2ffe\") " Nov 22 12:00:02 crc kubenswrapper[4772]: I1122 12:00:02.745416 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/13937ca4-2579-4c88-bfac-8cf50aeb2ffe-secret-volume\") pod \"13937ca4-2579-4c88-bfac-8cf50aeb2ffe\" (UID: \"13937ca4-2579-4c88-bfac-8cf50aeb2ffe\") " Nov 22 12:00:02 crc kubenswrapper[4772]: I1122 12:00:02.745469 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcbql\" (UniqueName: \"kubernetes.io/projected/13937ca4-2579-4c88-bfac-8cf50aeb2ffe-kube-api-access-vcbql\") pod \"13937ca4-2579-4c88-bfac-8cf50aeb2ffe\" (UID: \"13937ca4-2579-4c88-bfac-8cf50aeb2ffe\") " Nov 22 12:00:02 crc kubenswrapper[4772]: I1122 12:00:02.746812 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13937ca4-2579-4c88-bfac-8cf50aeb2ffe-config-volume" (OuterVolumeSpecName: "config-volume") pod "13937ca4-2579-4c88-bfac-8cf50aeb2ffe" (UID: "13937ca4-2579-4c88-bfac-8cf50aeb2ffe"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 12:00:02 crc kubenswrapper[4772]: I1122 12:00:02.752018 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13937ca4-2579-4c88-bfac-8cf50aeb2ffe-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "13937ca4-2579-4c88-bfac-8cf50aeb2ffe" (UID: "13937ca4-2579-4c88-bfac-8cf50aeb2ffe"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:00:02 crc kubenswrapper[4772]: I1122 12:00:02.755866 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13937ca4-2579-4c88-bfac-8cf50aeb2ffe-kube-api-access-vcbql" (OuterVolumeSpecName: "kube-api-access-vcbql") pod "13937ca4-2579-4c88-bfac-8cf50aeb2ffe" (UID: "13937ca4-2579-4c88-bfac-8cf50aeb2ffe"). InnerVolumeSpecName "kube-api-access-vcbql". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:00:02 crc kubenswrapper[4772]: I1122 12:00:02.847366 4772 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13937ca4-2579-4c88-bfac-8cf50aeb2ffe-config-volume\") on node \"crc\" DevicePath \"\"" Nov 22 12:00:02 crc kubenswrapper[4772]: I1122 12:00:02.847403 4772 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/13937ca4-2579-4c88-bfac-8cf50aeb2ffe-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 22 12:00:02 crc kubenswrapper[4772]: I1122 12:00:02.847415 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vcbql\" (UniqueName: \"kubernetes.io/projected/13937ca4-2579-4c88-bfac-8cf50aeb2ffe-kube-api-access-vcbql\") on node \"crc\" DevicePath \"\"" Nov 22 12:00:03 crc kubenswrapper[4772]: I1122 12:00:03.440316 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396880-h269b" event={"ID":"13937ca4-2579-4c88-bfac-8cf50aeb2ffe","Type":"ContainerDied","Data":"e99c651c96212b2ffaf489a215f1459767cdf8e1f68f5cac2f49af37a4b3324e"} Nov 22 12:00:03 crc kubenswrapper[4772]: I1122 12:00:03.440730 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e99c651c96212b2ffaf489a215f1459767cdf8e1f68f5cac2f49af37a4b3324e" Nov 22 12:00:03 crc kubenswrapper[4772]: I1122 12:00:03.440511 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396880-h269b" Nov 22 12:00:03 crc kubenswrapper[4772]: I1122 12:00:03.803789 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396835-n9b47"] Nov 22 12:00:03 crc kubenswrapper[4772]: I1122 12:00:03.809536 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396835-n9b47"] Nov 22 12:00:05 crc kubenswrapper[4772]: I1122 12:00:05.425983 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac351ae8-8ede-44d3-9390-1ebadf4b42f7" path="/var/lib/kubelet/pods/ac351ae8-8ede-44d3-9390-1ebadf4b42f7/volumes" Nov 22 12:00:10 crc kubenswrapper[4772]: I1122 12:00:10.780088 4772 scope.go:117] "RemoveContainer" containerID="b48d820e88511b4cd4f0f93f619a2497fcda6e5e1eca24fe089a20cd441c80f8" Nov 22 12:02:01 crc kubenswrapper[4772]: I1122 12:02:01.533783 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 12:02:01 crc kubenswrapper[4772]: I1122 12:02:01.534838 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 12:02:31 crc kubenswrapper[4772]: I1122 12:02:31.533958 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 12:02:31 crc kubenswrapper[4772]: I1122 12:02:31.534928 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 12:02:40 crc kubenswrapper[4772]: I1122 12:02:40.228230 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-l8tqm"] Nov 22 12:02:40 crc kubenswrapper[4772]: E1122 12:02:40.229325 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13937ca4-2579-4c88-bfac-8cf50aeb2ffe" containerName="collect-profiles" Nov 22 12:02:40 crc kubenswrapper[4772]: I1122 12:02:40.229347 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="13937ca4-2579-4c88-bfac-8cf50aeb2ffe" containerName="collect-profiles" Nov 22 12:02:40 crc kubenswrapper[4772]: I1122 12:02:40.229597 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="13937ca4-2579-4c88-bfac-8cf50aeb2ffe" containerName="collect-profiles" Nov 22 12:02:40 crc kubenswrapper[4772]: I1122 12:02:40.232700 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-l8tqm" Nov 22 12:02:40 crc kubenswrapper[4772]: I1122 12:02:40.246584 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l8tqm"] Nov 22 12:02:40 crc kubenswrapper[4772]: I1122 12:02:40.348876 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbgvp\" (UniqueName: \"kubernetes.io/projected/d874a95e-a38a-4d16-a0f1-d5a30de2e436-kube-api-access-kbgvp\") pod \"certified-operators-l8tqm\" (UID: \"d874a95e-a38a-4d16-a0f1-d5a30de2e436\") " pod="openshift-marketplace/certified-operators-l8tqm" Nov 22 12:02:40 crc kubenswrapper[4772]: I1122 12:02:40.349008 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d874a95e-a38a-4d16-a0f1-d5a30de2e436-catalog-content\") pod \"certified-operators-l8tqm\" (UID: \"d874a95e-a38a-4d16-a0f1-d5a30de2e436\") " pod="openshift-marketplace/certified-operators-l8tqm" Nov 22 12:02:40 crc kubenswrapper[4772]: I1122 12:02:40.349143 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d874a95e-a38a-4d16-a0f1-d5a30de2e436-utilities\") pod \"certified-operators-l8tqm\" (UID: \"d874a95e-a38a-4d16-a0f1-d5a30de2e436\") " pod="openshift-marketplace/certified-operators-l8tqm" Nov 22 12:02:40 crc kubenswrapper[4772]: I1122 12:02:40.450012 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbgvp\" (UniqueName: \"kubernetes.io/projected/d874a95e-a38a-4d16-a0f1-d5a30de2e436-kube-api-access-kbgvp\") pod \"certified-operators-l8tqm\" (UID: \"d874a95e-a38a-4d16-a0f1-d5a30de2e436\") " pod="openshift-marketplace/certified-operators-l8tqm" Nov 22 12:02:40 crc kubenswrapper[4772]: I1122 12:02:40.450109 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d874a95e-a38a-4d16-a0f1-d5a30de2e436-catalog-content\") pod \"certified-operators-l8tqm\" (UID: \"d874a95e-a38a-4d16-a0f1-d5a30de2e436\") " pod="openshift-marketplace/certified-operators-l8tqm" Nov 22 12:02:40 crc kubenswrapper[4772]: I1122 12:02:40.450160 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d874a95e-a38a-4d16-a0f1-d5a30de2e436-utilities\") pod \"certified-operators-l8tqm\" (UID: \"d874a95e-a38a-4d16-a0f1-d5a30de2e436\") " pod="openshift-marketplace/certified-operators-l8tqm" Nov 22 12:02:40 crc kubenswrapper[4772]: I1122 12:02:40.450702 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d874a95e-a38a-4d16-a0f1-d5a30de2e436-utilities\") pod \"certified-operators-l8tqm\" (UID: \"d874a95e-a38a-4d16-a0f1-d5a30de2e436\") " pod="openshift-marketplace/certified-operators-l8tqm" Nov 22 12:02:40 crc kubenswrapper[4772]: I1122 12:02:40.450949 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d874a95e-a38a-4d16-a0f1-d5a30de2e436-catalog-content\") pod \"certified-operators-l8tqm\" (UID: \"d874a95e-a38a-4d16-a0f1-d5a30de2e436\") " pod="openshift-marketplace/certified-operators-l8tqm" Nov 22 12:02:40 crc kubenswrapper[4772]: I1122 12:02:40.475076 4772 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-kbgvp\" (UniqueName: \"kubernetes.io/projected/d874a95e-a38a-4d16-a0f1-d5a30de2e436-kube-api-access-kbgvp\") pod \"certified-operators-l8tqm\" (UID: \"d874a95e-a38a-4d16-a0f1-d5a30de2e436\") " pod="openshift-marketplace/certified-operators-l8tqm" Nov 22 12:02:40 crc kubenswrapper[4772]: I1122 12:02:40.567381 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l8tqm" Nov 22 12:02:41 crc kubenswrapper[4772]: I1122 12:02:41.049649 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l8tqm"] Nov 22 12:02:41 crc kubenswrapper[4772]: I1122 12:02:41.086035 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l8tqm" event={"ID":"d874a95e-a38a-4d16-a0f1-d5a30de2e436","Type":"ContainerStarted","Data":"9bfbc448e64dc3221a9223f3489c684b616c3d5503fe9d4718599fe26236e08b"} Nov 22 12:02:42 crc kubenswrapper[4772]: I1122 12:02:42.100012 4772 generic.go:334] "Generic (PLEG): container finished" podID="d874a95e-a38a-4d16-a0f1-d5a30de2e436" containerID="78020730db27de2f79b4a760b9a73930e186ef71fbb4c58d0a5d5e1814646ad9" exitCode=0 Nov 22 12:02:42 crc kubenswrapper[4772]: I1122 12:02:42.100135 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l8tqm" event={"ID":"d874a95e-a38a-4d16-a0f1-d5a30de2e436","Type":"ContainerDied","Data":"78020730db27de2f79b4a760b9a73930e186ef71fbb4c58d0a5d5e1814646ad9"} Nov 22 12:02:42 crc kubenswrapper[4772]: I1122 12:02:42.103238 4772 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 12:02:42 crc kubenswrapper[4772]: I1122 12:02:42.634968 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pxsbt"] Nov 22 12:02:42 crc kubenswrapper[4772]: I1122 12:02:42.640841 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pxsbt" Nov 22 12:02:42 crc kubenswrapper[4772]: I1122 12:02:42.662846 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pxsbt"] Nov 22 12:02:42 crc kubenswrapper[4772]: I1122 12:02:42.789562 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c686beda-4866-45dc-8e14-f2fc825daa1e-utilities\") pod \"community-operators-pxsbt\" (UID: \"c686beda-4866-45dc-8e14-f2fc825daa1e\") " pod="openshift-marketplace/community-operators-pxsbt" Nov 22 12:02:42 crc kubenswrapper[4772]: I1122 12:02:42.789740 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8962\" (UniqueName: \"kubernetes.io/projected/c686beda-4866-45dc-8e14-f2fc825daa1e-kube-api-access-c8962\") pod \"community-operators-pxsbt\" (UID: \"c686beda-4866-45dc-8e14-f2fc825daa1e\") " pod="openshift-marketplace/community-operators-pxsbt" Nov 22 12:02:42 crc kubenswrapper[4772]: I1122 12:02:42.789806 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c686beda-4866-45dc-8e14-f2fc825daa1e-catalog-content\") pod \"community-operators-pxsbt\" (UID: \"c686beda-4866-45dc-8e14-f2fc825daa1e\") " pod="openshift-marketplace/community-operators-pxsbt" Nov 22 12:02:42 crc kubenswrapper[4772]: I1122 12:02:42.892005 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8962\" (UniqueName: \"kubernetes.io/projected/c686beda-4866-45dc-8e14-f2fc825daa1e-kube-api-access-c8962\") pod \"community-operators-pxsbt\" (UID: \"c686beda-4866-45dc-8e14-f2fc825daa1e\") " pod="openshift-marketplace/community-operators-pxsbt" Nov 22 12:02:42 crc kubenswrapper[4772]: I1122 12:02:42.892130 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c686beda-4866-45dc-8e14-f2fc825daa1e-catalog-content\") pod \"community-operators-pxsbt\" (UID: \"c686beda-4866-45dc-8e14-f2fc825daa1e\") " pod="openshift-marketplace/community-operators-pxsbt" Nov 22 12:02:42 crc kubenswrapper[4772]: I1122 12:02:42.892211 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c686beda-4866-45dc-8e14-f2fc825daa1e-utilities\") pod \"community-operators-pxsbt\" (UID: \"c686beda-4866-45dc-8e14-f2fc825daa1e\") " pod="openshift-marketplace/community-operators-pxsbt" Nov 22 12:02:42 crc kubenswrapper[4772]: I1122 12:02:42.892831 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c686beda-4866-45dc-8e14-f2fc825daa1e-utilities\") pod \"community-operators-pxsbt\" (UID: \"c686beda-4866-45dc-8e14-f2fc825daa1e\") " pod="openshift-marketplace/community-operators-pxsbt" Nov 22 12:02:42 crc kubenswrapper[4772]: I1122 12:02:42.893008 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c686beda-4866-45dc-8e14-f2fc825daa1e-catalog-content\") pod \"community-operators-pxsbt\" (UID: \"c686beda-4866-45dc-8e14-f2fc825daa1e\") " pod="openshift-marketplace/community-operators-pxsbt" Nov 22 12:02:42 crc kubenswrapper[4772]: I1122 12:02:42.916236 4772 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-c8962\" (UniqueName: \"kubernetes.io/projected/c686beda-4866-45dc-8e14-f2fc825daa1e-kube-api-access-c8962\") pod \"community-operators-pxsbt\" (UID: \"c686beda-4866-45dc-8e14-f2fc825daa1e\") " pod="openshift-marketplace/community-operators-pxsbt" Nov 22 12:02:42 crc kubenswrapper[4772]: I1122 12:02:42.997490 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pxsbt" Nov 22 12:02:43 crc kubenswrapper[4772]: I1122 12:02:43.346379 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pxsbt"] Nov 22 12:02:44 crc kubenswrapper[4772]: I1122 12:02:44.120886 4772 generic.go:334] "Generic (PLEG): container finished" podID="c686beda-4866-45dc-8e14-f2fc825daa1e" containerID="1c1bc1ad1131f888a9c9e5dd9b342803950c3de8fecc49420c04a0758d15f687" exitCode=0 Nov 22 12:02:44 crc kubenswrapper[4772]: I1122 12:02:44.121252 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pxsbt" event={"ID":"c686beda-4866-45dc-8e14-f2fc825daa1e","Type":"ContainerDied","Data":"1c1bc1ad1131f888a9c9e5dd9b342803950c3de8fecc49420c04a0758d15f687"} Nov 22 12:02:44 crc kubenswrapper[4772]: I1122 12:02:44.121290 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pxsbt" event={"ID":"c686beda-4866-45dc-8e14-f2fc825daa1e","Type":"ContainerStarted","Data":"f2adc552d5c3f02f43bce4f365092baec66df27c81232248daf859247981adaa"} Nov 22 12:02:44 crc kubenswrapper[4772]: I1122 12:02:44.125909 4772 generic.go:334] "Generic (PLEG): container finished" podID="d874a95e-a38a-4d16-a0f1-d5a30de2e436" containerID="c2f883445ed4e8610544fdacc81397b6c998f8dd058094532953baeab39e751d" exitCode=0 Nov 22 12:02:44 crc kubenswrapper[4772]: I1122 12:02:44.125952 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l8tqm" event={"ID":"d874a95e-a38a-4d16-a0f1-d5a30de2e436","Type":"ContainerDied","Data":"c2f883445ed4e8610544fdacc81397b6c998f8dd058094532953baeab39e751d"} Nov 22 12:02:45 crc kubenswrapper[4772]: I1122 12:02:45.134722 4772 generic.go:334] "Generic (PLEG): container finished" podID="c686beda-4866-45dc-8e14-f2fc825daa1e" containerID="6164057f501470d044536b06289f46f18685d90d5c296697fe9647887a13ac40" exitCode=0 Nov 22 12:02:45 crc kubenswrapper[4772]: I1122 12:02:45.134789 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pxsbt" event={"ID":"c686beda-4866-45dc-8e14-f2fc825daa1e","Type":"ContainerDied","Data":"6164057f501470d044536b06289f46f18685d90d5c296697fe9647887a13ac40"} Nov 22 12:02:45 crc kubenswrapper[4772]: I1122 12:02:45.139824 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l8tqm" event={"ID":"d874a95e-a38a-4d16-a0f1-d5a30de2e436","Type":"ContainerStarted","Data":"2dca7cd1bf83602b8523cd3a43150ae49b95cbd2285f8ef2b8c9739730f56640"} Nov 22 12:02:45 crc kubenswrapper[4772]: I1122 12:02:45.179249 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-l8tqm" podStartSLOduration=2.69794388 podStartE2EDuration="5.179200354s" podCreationTimestamp="2025-11-22 12:02:40 +0000 UTC" firstStartedPulling="2025-11-22 12:02:42.102753009 +0000 UTC m=+5082.342197533" lastFinishedPulling="2025-11-22 12:02:44.584009503 +0000 UTC m=+5084.823454007" observedRunningTime="2025-11-22 
12:02:45.176870187 +0000 UTC m=+5085.416314681" watchObservedRunningTime="2025-11-22 12:02:45.179200354 +0000 UTC m=+5085.418644848" Nov 22 12:02:46 crc kubenswrapper[4772]: I1122 12:02:46.152865 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pxsbt" event={"ID":"c686beda-4866-45dc-8e14-f2fc825daa1e","Type":"ContainerStarted","Data":"4399d3a7fa52bb6d57e98ffc49c2b3cac4a1373b2076a659bad2889cc160a6a2"} Nov 22 12:02:46 crc kubenswrapper[4772]: I1122 12:02:46.184366 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pxsbt" podStartSLOduration=2.809475951 podStartE2EDuration="4.184345786s" podCreationTimestamp="2025-11-22 12:02:42 +0000 UTC" firstStartedPulling="2025-11-22 12:02:44.123067619 +0000 UTC m=+5084.362512113" lastFinishedPulling="2025-11-22 12:02:45.497937454 +0000 UTC m=+5085.737381948" observedRunningTime="2025-11-22 12:02:46.182297976 +0000 UTC m=+5086.421742480" watchObservedRunningTime="2025-11-22 12:02:46.184345786 +0000 UTC m=+5086.423790270" Nov 22 12:02:50 crc kubenswrapper[4772]: I1122 12:02:50.567556 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-l8tqm" Nov 22 12:02:50 crc kubenswrapper[4772]: I1122 12:02:50.567665 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-l8tqm" Nov 22 12:02:50 crc kubenswrapper[4772]: I1122 12:02:50.619166 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-l8tqm" Nov 22 12:02:51 crc kubenswrapper[4772]: I1122 12:02:51.270192 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-l8tqm" Nov 22 12:02:51 crc kubenswrapper[4772]: I1122 12:02:51.336751 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l8tqm"] Nov 22 12:02:52 crc kubenswrapper[4772]: I1122 12:02:52.998331 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pxsbt" Nov 22 12:02:53 crc kubenswrapper[4772]: I1122 12:02:53.000038 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pxsbt" Nov 22 12:02:53 crc kubenswrapper[4772]: I1122 12:02:53.064503 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pxsbt" Nov 22 12:02:53 crc kubenswrapper[4772]: I1122 12:02:53.228834 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-l8tqm" podUID="d874a95e-a38a-4d16-a0f1-d5a30de2e436" containerName="registry-server" containerID="cri-o://2dca7cd1bf83602b8523cd3a43150ae49b95cbd2285f8ef2b8c9739730f56640" gracePeriod=2 Nov 22 12:02:53 crc kubenswrapper[4772]: I1122 12:02:53.303665 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pxsbt" Nov 22 12:02:53 crc kubenswrapper[4772]: I1122 12:02:53.767561 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-l8tqm" Nov 22 12:02:53 crc kubenswrapper[4772]: I1122 12:02:53.943636 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d874a95e-a38a-4d16-a0f1-d5a30de2e436-utilities\") pod \"d874a95e-a38a-4d16-a0f1-d5a30de2e436\" (UID: \"d874a95e-a38a-4d16-a0f1-d5a30de2e436\") " Nov 22 12:02:53 crc kubenswrapper[4772]: I1122 12:02:53.943746 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbgvp\" (UniqueName: \"kubernetes.io/projected/d874a95e-a38a-4d16-a0f1-d5a30de2e436-kube-api-access-kbgvp\") pod \"d874a95e-a38a-4d16-a0f1-d5a30de2e436\" (UID: \"d874a95e-a38a-4d16-a0f1-d5a30de2e436\") " Nov 22 12:02:53 crc kubenswrapper[4772]: I1122 12:02:53.943861 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d874a95e-a38a-4d16-a0f1-d5a30de2e436-catalog-content\") pod \"d874a95e-a38a-4d16-a0f1-d5a30de2e436\" (UID: \"d874a95e-a38a-4d16-a0f1-d5a30de2e436\") " Nov 22 12:02:53 crc kubenswrapper[4772]: I1122 12:02:53.945126 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d874a95e-a38a-4d16-a0f1-d5a30de2e436-utilities" (OuterVolumeSpecName: "utilities") pod "d874a95e-a38a-4d16-a0f1-d5a30de2e436" (UID: "d874a95e-a38a-4d16-a0f1-d5a30de2e436"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 12:02:53 crc kubenswrapper[4772]: I1122 12:02:53.952944 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d874a95e-a38a-4d16-a0f1-d5a30de2e436-kube-api-access-kbgvp" (OuterVolumeSpecName: "kube-api-access-kbgvp") pod "d874a95e-a38a-4d16-a0f1-d5a30de2e436" (UID: "d874a95e-a38a-4d16-a0f1-d5a30de2e436"). InnerVolumeSpecName "kube-api-access-kbgvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:02:54 crc kubenswrapper[4772]: I1122 12:02:54.045173 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbgvp\" (UniqueName: \"kubernetes.io/projected/d874a95e-a38a-4d16-a0f1-d5a30de2e436-kube-api-access-kbgvp\") on node \"crc\" DevicePath \"\"" Nov 22 12:02:54 crc kubenswrapper[4772]: I1122 12:02:54.045213 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d874a95e-a38a-4d16-a0f1-d5a30de2e436-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 12:02:54 crc kubenswrapper[4772]: I1122 12:02:54.239860 4772 generic.go:334] "Generic (PLEG): container finished" podID="d874a95e-a38a-4d16-a0f1-d5a30de2e436" containerID="2dca7cd1bf83602b8523cd3a43150ae49b95cbd2285f8ef2b8c9739730f56640" exitCode=0 Nov 22 12:02:54 crc kubenswrapper[4772]: I1122 12:02:54.239971 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-l8tqm" Nov 22 12:02:54 crc kubenswrapper[4772]: I1122 12:02:54.239989 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l8tqm" event={"ID":"d874a95e-a38a-4d16-a0f1-d5a30de2e436","Type":"ContainerDied","Data":"2dca7cd1bf83602b8523cd3a43150ae49b95cbd2285f8ef2b8c9739730f56640"} Nov 22 12:02:54 crc kubenswrapper[4772]: I1122 12:02:54.240061 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l8tqm" event={"ID":"d874a95e-a38a-4d16-a0f1-d5a30de2e436","Type":"ContainerDied","Data":"9bfbc448e64dc3221a9223f3489c684b616c3d5503fe9d4718599fe26236e08b"} Nov 22 12:02:54 crc kubenswrapper[4772]: I1122 12:02:54.240075 4772 scope.go:117] "RemoveContainer" containerID="2dca7cd1bf83602b8523cd3a43150ae49b95cbd2285f8ef2b8c9739730f56640" Nov 22 12:02:54 crc kubenswrapper[4772]: I1122 12:02:54.271548 4772 scope.go:117] "RemoveContainer" containerID="c2f883445ed4e8610544fdacc81397b6c998f8dd058094532953baeab39e751d" Nov 22 12:02:54 crc kubenswrapper[4772]: I1122 12:02:54.311952 4772 scope.go:117] "RemoveContainer" containerID="78020730db27de2f79b4a760b9a73930e186ef71fbb4c58d0a5d5e1814646ad9" Nov 22 12:02:54 crc kubenswrapper[4772]: I1122 12:02:54.344440 4772 scope.go:117] "RemoveContainer" containerID="2dca7cd1bf83602b8523cd3a43150ae49b95cbd2285f8ef2b8c9739730f56640" Nov 22 12:02:54 crc kubenswrapper[4772]: E1122 12:02:54.345342 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2dca7cd1bf83602b8523cd3a43150ae49b95cbd2285f8ef2b8c9739730f56640\": container with ID starting with 2dca7cd1bf83602b8523cd3a43150ae49b95cbd2285f8ef2b8c9739730f56640 not found: ID does not exist" containerID="2dca7cd1bf83602b8523cd3a43150ae49b95cbd2285f8ef2b8c9739730f56640" Nov 22 12:02:54 crc kubenswrapper[4772]: I1122 12:02:54.345416 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2dca7cd1bf83602b8523cd3a43150ae49b95cbd2285f8ef2b8c9739730f56640"} err="failed to get container status \"2dca7cd1bf83602b8523cd3a43150ae49b95cbd2285f8ef2b8c9739730f56640\": rpc error: code = NotFound desc = could not find container \"2dca7cd1bf83602b8523cd3a43150ae49b95cbd2285f8ef2b8c9739730f56640\": container with ID starting with 2dca7cd1bf83602b8523cd3a43150ae49b95cbd2285f8ef2b8c9739730f56640 not found: ID does not exist" Nov 22 12:02:54 crc kubenswrapper[4772]: I1122 12:02:54.345463 4772 scope.go:117] "RemoveContainer" containerID="c2f883445ed4e8610544fdacc81397b6c998f8dd058094532953baeab39e751d" Nov 22 12:02:54 crc kubenswrapper[4772]: E1122 12:02:54.346164 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2f883445ed4e8610544fdacc81397b6c998f8dd058094532953baeab39e751d\": container with ID starting with c2f883445ed4e8610544fdacc81397b6c998f8dd058094532953baeab39e751d not found: ID does not exist" containerID="c2f883445ed4e8610544fdacc81397b6c998f8dd058094532953baeab39e751d" Nov 22 12:02:54 crc kubenswrapper[4772]: I1122 12:02:54.346262 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2f883445ed4e8610544fdacc81397b6c998f8dd058094532953baeab39e751d"} err="failed to get container status \"c2f883445ed4e8610544fdacc81397b6c998f8dd058094532953baeab39e751d\": rpc error: code = NotFound desc = could not find container 
\"c2f883445ed4e8610544fdacc81397b6c998f8dd058094532953baeab39e751d\": container with ID starting with c2f883445ed4e8610544fdacc81397b6c998f8dd058094532953baeab39e751d not found: ID does not exist" Nov 22 12:02:54 crc kubenswrapper[4772]: I1122 12:02:54.346319 4772 scope.go:117] "RemoveContainer" containerID="78020730db27de2f79b4a760b9a73930e186ef71fbb4c58d0a5d5e1814646ad9" Nov 22 12:02:54 crc kubenswrapper[4772]: E1122 12:02:54.346786 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78020730db27de2f79b4a760b9a73930e186ef71fbb4c58d0a5d5e1814646ad9\": container with ID starting with 78020730db27de2f79b4a760b9a73930e186ef71fbb4c58d0a5d5e1814646ad9 not found: ID does not exist" containerID="78020730db27de2f79b4a760b9a73930e186ef71fbb4c58d0a5d5e1814646ad9" Nov 22 12:02:54 crc kubenswrapper[4772]: I1122 12:02:54.346845 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78020730db27de2f79b4a760b9a73930e186ef71fbb4c58d0a5d5e1814646ad9"} err="failed to get container status \"78020730db27de2f79b4a760b9a73930e186ef71fbb4c58d0a5d5e1814646ad9\": rpc error: code = NotFound desc = could not find container \"78020730db27de2f79b4a760b9a73930e186ef71fbb4c58d0a5d5e1814646ad9\": container with ID starting with 78020730db27de2f79b4a760b9a73930e186ef71fbb4c58d0a5d5e1814646ad9 not found: ID does not exist" Nov 22 12:02:54 crc kubenswrapper[4772]: I1122 12:02:54.567266 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d874a95e-a38a-4d16-a0f1-d5a30de2e436-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d874a95e-a38a-4d16-a0f1-d5a30de2e436" (UID: "d874a95e-a38a-4d16-a0f1-d5a30de2e436"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 12:02:54 crc kubenswrapper[4772]: I1122 12:02:54.655689 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d874a95e-a38a-4d16-a0f1-d5a30de2e436-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 12:02:54 crc kubenswrapper[4772]: I1122 12:02:54.655790 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pxsbt"] Nov 22 12:02:54 crc kubenswrapper[4772]: I1122 12:02:54.880565 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l8tqm"] Nov 22 12:02:54 crc kubenswrapper[4772]: I1122 12:02:54.887023 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-l8tqm"] Nov 22 12:02:55 crc kubenswrapper[4772]: I1122 12:02:55.252270 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pxsbt" podUID="c686beda-4866-45dc-8e14-f2fc825daa1e" containerName="registry-server" containerID="cri-o://4399d3a7fa52bb6d57e98ffc49c2b3cac4a1373b2076a659bad2889cc160a6a2" gracePeriod=2 Nov 22 12:02:55 crc kubenswrapper[4772]: I1122 12:02:55.431091 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d874a95e-a38a-4d16-a0f1-d5a30de2e436" path="/var/lib/kubelet/pods/d874a95e-a38a-4d16-a0f1-d5a30de2e436/volumes" Nov 22 12:02:56 crc kubenswrapper[4772]: I1122 12:02:56.270424 4772 generic.go:334] "Generic (PLEG): container finished" podID="c686beda-4866-45dc-8e14-f2fc825daa1e" containerID="4399d3a7fa52bb6d57e98ffc49c2b3cac4a1373b2076a659bad2889cc160a6a2" exitCode=0 Nov 22 12:02:56 crc kubenswrapper[4772]: I1122 12:02:56.270564 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pxsbt" event={"ID":"c686beda-4866-45dc-8e14-f2fc825daa1e","Type":"ContainerDied","Data":"4399d3a7fa52bb6d57e98ffc49c2b3cac4a1373b2076a659bad2889cc160a6a2"} Nov 22 12:02:56 crc kubenswrapper[4772]: I1122 12:02:56.270939 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pxsbt" event={"ID":"c686beda-4866-45dc-8e14-f2fc825daa1e","Type":"ContainerDied","Data":"f2adc552d5c3f02f43bce4f365092baec66df27c81232248daf859247981adaa"} Nov 22 12:02:56 crc kubenswrapper[4772]: I1122 12:02:56.270970 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2adc552d5c3f02f43bce4f365092baec66df27c81232248daf859247981adaa" Nov 22 12:02:56 crc kubenswrapper[4772]: I1122 12:02:56.344588 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pxsbt" Nov 22 12:02:56 crc kubenswrapper[4772]: I1122 12:02:56.486754 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c686beda-4866-45dc-8e14-f2fc825daa1e-catalog-content\") pod \"c686beda-4866-45dc-8e14-f2fc825daa1e\" (UID: \"c686beda-4866-45dc-8e14-f2fc825daa1e\") " Nov 22 12:02:56 crc kubenswrapper[4772]: I1122 12:02:56.486877 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c686beda-4866-45dc-8e14-f2fc825daa1e-utilities\") pod \"c686beda-4866-45dc-8e14-f2fc825daa1e\" (UID: \"c686beda-4866-45dc-8e14-f2fc825daa1e\") " Nov 22 12:02:56 crc kubenswrapper[4772]: I1122 12:02:56.486927 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8962\" (UniqueName: \"kubernetes.io/projected/c686beda-4866-45dc-8e14-f2fc825daa1e-kube-api-access-c8962\") pod \"c686beda-4866-45dc-8e14-f2fc825daa1e\" (UID: \"c686beda-4866-45dc-8e14-f2fc825daa1e\") " Nov 22 12:02:56 crc kubenswrapper[4772]: I1122 12:02:56.488748 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c686beda-4866-45dc-8e14-f2fc825daa1e-utilities" (OuterVolumeSpecName: "utilities") pod "c686beda-4866-45dc-8e14-f2fc825daa1e" (UID: "c686beda-4866-45dc-8e14-f2fc825daa1e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 12:02:56 crc kubenswrapper[4772]: I1122 12:02:56.539127 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c686beda-4866-45dc-8e14-f2fc825daa1e-kube-api-access-c8962" (OuterVolumeSpecName: "kube-api-access-c8962") pod "c686beda-4866-45dc-8e14-f2fc825daa1e" (UID: "c686beda-4866-45dc-8e14-f2fc825daa1e"). InnerVolumeSpecName "kube-api-access-c8962". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:02:56 crc kubenswrapper[4772]: I1122 12:02:56.542721 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c686beda-4866-45dc-8e14-f2fc825daa1e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c686beda-4866-45dc-8e14-f2fc825daa1e" (UID: "c686beda-4866-45dc-8e14-f2fc825daa1e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 12:02:56 crc kubenswrapper[4772]: I1122 12:02:56.590531 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c686beda-4866-45dc-8e14-f2fc825daa1e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 12:02:56 crc kubenswrapper[4772]: I1122 12:02:56.590602 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c686beda-4866-45dc-8e14-f2fc825daa1e-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 12:02:56 crc kubenswrapper[4772]: I1122 12:02:56.590625 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8962\" (UniqueName: \"kubernetes.io/projected/c686beda-4866-45dc-8e14-f2fc825daa1e-kube-api-access-c8962\") on node \"crc\" DevicePath \"\"" Nov 22 12:02:57 crc kubenswrapper[4772]: I1122 12:02:57.281026 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pxsbt" Nov 22 12:02:57 crc kubenswrapper[4772]: I1122 12:02:57.323860 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pxsbt"] Nov 22 12:02:57 crc kubenswrapper[4772]: I1122 12:02:57.330617 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pxsbt"] Nov 22 12:02:57 crc kubenswrapper[4772]: I1122 12:02:57.429137 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c686beda-4866-45dc-8e14-f2fc825daa1e" path="/var/lib/kubelet/pods/c686beda-4866-45dc-8e14-f2fc825daa1e/volumes" Nov 22 12:03:01 crc kubenswrapper[4772]: I1122 12:03:01.533214 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 12:03:01 crc kubenswrapper[4772]: I1122 12:03:01.534487 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 12:03:01 crc kubenswrapper[4772]: I1122 12:03:01.534580 4772 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" Nov 22 12:03:01 crc kubenswrapper[4772]: I1122 12:03:01.535946 4772 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b7c017a2ad8566061573012df3338326de3180226814eea67d7d515c52483472"} pod="openshift-machine-config-operator/machine-config-daemon-wwshd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 12:03:01 crc kubenswrapper[4772]: I1122 12:03:01.536080 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" containerID="cri-o://b7c017a2ad8566061573012df3338326de3180226814eea67d7d515c52483472" gracePeriod=600 Nov 22 12:03:01 crc kubenswrapper[4772]: E1122 12:03:01.687608 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:03:02 crc kubenswrapper[4772]: I1122 12:03:02.337482 4772 generic.go:334] "Generic (PLEG): container finished" podID="2386c238-461f-4956-940f-ac3c26eb052e" containerID="b7c017a2ad8566061573012df3338326de3180226814eea67d7d515c52483472" exitCode=0 Nov 22 12:03:02 crc kubenswrapper[4772]: I1122 12:03:02.337578 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" event={"ID":"2386c238-461f-4956-940f-ac3c26eb052e","Type":"ContainerDied","Data":"b7c017a2ad8566061573012df3338326de3180226814eea67d7d515c52483472"} Nov 22 
12:03:02 crc kubenswrapper[4772]: I1122 12:03:02.337898 4772 scope.go:117] "RemoveContainer" containerID="68d813d9b094f770a392ffde3e0c1a4b8d11b83d190b2a08e521c804f76c3e77" Nov 22 12:03:02 crc kubenswrapper[4772]: I1122 12:03:02.338615 4772 scope.go:117] "RemoveContainer" containerID="b7c017a2ad8566061573012df3338326de3180226814eea67d7d515c52483472" Nov 22 12:03:02 crc kubenswrapper[4772]: E1122 12:03:02.338865 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:03:02 crc kubenswrapper[4772]: I1122 12:03:02.427728 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tckdt"] Nov 22 12:03:02 crc kubenswrapper[4772]: E1122 12:03:02.428145 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c686beda-4866-45dc-8e14-f2fc825daa1e" containerName="registry-server" Nov 22 12:03:02 crc kubenswrapper[4772]: I1122 12:03:02.428165 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="c686beda-4866-45dc-8e14-f2fc825daa1e" containerName="registry-server" Nov 22 12:03:02 crc kubenswrapper[4772]: E1122 12:03:02.428183 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d874a95e-a38a-4d16-a0f1-d5a30de2e436" containerName="extract-utilities" Nov 22 12:03:02 crc kubenswrapper[4772]: I1122 12:03:02.428191 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="d874a95e-a38a-4d16-a0f1-d5a30de2e436" containerName="extract-utilities" Nov 22 12:03:02 crc kubenswrapper[4772]: E1122 12:03:02.428207 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d874a95e-a38a-4d16-a0f1-d5a30de2e436" containerName="registry-server" Nov 22 12:03:02 crc kubenswrapper[4772]: I1122 12:03:02.428214 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="d874a95e-a38a-4d16-a0f1-d5a30de2e436" containerName="registry-server" Nov 22 12:03:02 crc kubenswrapper[4772]: E1122 12:03:02.428242 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d874a95e-a38a-4d16-a0f1-d5a30de2e436" containerName="extract-content" Nov 22 12:03:02 crc kubenswrapper[4772]: I1122 12:03:02.428251 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="d874a95e-a38a-4d16-a0f1-d5a30de2e436" containerName="extract-content" Nov 22 12:03:02 crc kubenswrapper[4772]: E1122 12:03:02.428264 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c686beda-4866-45dc-8e14-f2fc825daa1e" containerName="extract-utilities" Nov 22 12:03:02 crc kubenswrapper[4772]: I1122 12:03:02.428272 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="c686beda-4866-45dc-8e14-f2fc825daa1e" containerName="extract-utilities" Nov 22 12:03:02 crc kubenswrapper[4772]: E1122 12:03:02.428285 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c686beda-4866-45dc-8e14-f2fc825daa1e" containerName="extract-content" Nov 22 12:03:02 crc kubenswrapper[4772]: I1122 12:03:02.428294 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="c686beda-4866-45dc-8e14-f2fc825daa1e" containerName="extract-content" Nov 22 12:03:02 crc kubenswrapper[4772]: I1122 12:03:02.428477 4772 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="c686beda-4866-45dc-8e14-f2fc825daa1e" containerName="registry-server" Nov 22 12:03:02 crc kubenswrapper[4772]: I1122 12:03:02.428499 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="d874a95e-a38a-4d16-a0f1-d5a30de2e436" containerName="registry-server" Nov 22 12:03:02 crc kubenswrapper[4772]: I1122 12:03:02.429872 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tckdt" Nov 22 12:03:02 crc kubenswrapper[4772]: I1122 12:03:02.452940 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tckdt"] Nov 22 12:03:02 crc kubenswrapper[4772]: I1122 12:03:02.597282 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97dca6e1-c78a-46db-addb-03a2ad6721ca-catalog-content\") pod \"redhat-marketplace-tckdt\" (UID: \"97dca6e1-c78a-46db-addb-03a2ad6721ca\") " pod="openshift-marketplace/redhat-marketplace-tckdt" Nov 22 12:03:02 crc kubenswrapper[4772]: I1122 12:03:02.597360 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2gdm\" (UniqueName: \"kubernetes.io/projected/97dca6e1-c78a-46db-addb-03a2ad6721ca-kube-api-access-m2gdm\") pod \"redhat-marketplace-tckdt\" (UID: \"97dca6e1-c78a-46db-addb-03a2ad6721ca\") " pod="openshift-marketplace/redhat-marketplace-tckdt" Nov 22 12:03:02 crc kubenswrapper[4772]: I1122 12:03:02.599476 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97dca6e1-c78a-46db-addb-03a2ad6721ca-utilities\") pod \"redhat-marketplace-tckdt\" (UID: \"97dca6e1-c78a-46db-addb-03a2ad6721ca\") " pod="openshift-marketplace/redhat-marketplace-tckdt" Nov 22 12:03:02 crc kubenswrapper[4772]: I1122 12:03:02.714267 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97dca6e1-c78a-46db-addb-03a2ad6721ca-catalog-content\") pod \"redhat-marketplace-tckdt\" (UID: \"97dca6e1-c78a-46db-addb-03a2ad6721ca\") " pod="openshift-marketplace/redhat-marketplace-tckdt" Nov 22 12:03:02 crc kubenswrapper[4772]: I1122 12:03:02.714329 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2gdm\" (UniqueName: \"kubernetes.io/projected/97dca6e1-c78a-46db-addb-03a2ad6721ca-kube-api-access-m2gdm\") pod \"redhat-marketplace-tckdt\" (UID: \"97dca6e1-c78a-46db-addb-03a2ad6721ca\") " pod="openshift-marketplace/redhat-marketplace-tckdt" Nov 22 12:03:02 crc kubenswrapper[4772]: I1122 12:03:02.714356 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97dca6e1-c78a-46db-addb-03a2ad6721ca-utilities\") pod \"redhat-marketplace-tckdt\" (UID: \"97dca6e1-c78a-46db-addb-03a2ad6721ca\") " pod="openshift-marketplace/redhat-marketplace-tckdt" Nov 22 12:03:02 crc kubenswrapper[4772]: I1122 12:03:02.714901 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97dca6e1-c78a-46db-addb-03a2ad6721ca-utilities\") pod \"redhat-marketplace-tckdt\" (UID: \"97dca6e1-c78a-46db-addb-03a2ad6721ca\") " pod="openshift-marketplace/redhat-marketplace-tckdt" Nov 22 12:03:02 crc kubenswrapper[4772]: I1122 12:03:02.714907 4772 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97dca6e1-c78a-46db-addb-03a2ad6721ca-catalog-content\") pod \"redhat-marketplace-tckdt\" (UID: \"97dca6e1-c78a-46db-addb-03a2ad6721ca\") " pod="openshift-marketplace/redhat-marketplace-tckdt" Nov 22 12:03:02 crc kubenswrapper[4772]: I1122 12:03:02.741556 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2gdm\" (UniqueName: \"kubernetes.io/projected/97dca6e1-c78a-46db-addb-03a2ad6721ca-kube-api-access-m2gdm\") pod \"redhat-marketplace-tckdt\" (UID: \"97dca6e1-c78a-46db-addb-03a2ad6721ca\") " pod="openshift-marketplace/redhat-marketplace-tckdt" Nov 22 12:03:02 crc kubenswrapper[4772]: I1122 12:03:02.794497 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tckdt" Nov 22 12:03:03 crc kubenswrapper[4772]: I1122 12:03:03.264163 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tckdt"] Nov 22 12:03:03 crc kubenswrapper[4772]: I1122 12:03:03.347545 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tckdt" event={"ID":"97dca6e1-c78a-46db-addb-03a2ad6721ca","Type":"ContainerStarted","Data":"7a58197efb7e8b514d860750d86909e1cbce92f89b8e7cb26e4b2d97764f3569"} Nov 22 12:03:04 crc kubenswrapper[4772]: I1122 12:03:04.359385 4772 generic.go:334] "Generic (PLEG): container finished" podID="97dca6e1-c78a-46db-addb-03a2ad6721ca" containerID="7e505cd14d11b3beedffaebbdceb1e8ea87612816415761c236ed3dfa9e36d83" exitCode=0 Nov 22 12:03:04 crc kubenswrapper[4772]: I1122 12:03:04.359434 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tckdt" event={"ID":"97dca6e1-c78a-46db-addb-03a2ad6721ca","Type":"ContainerDied","Data":"7e505cd14d11b3beedffaebbdceb1e8ea87612816415761c236ed3dfa9e36d83"} Nov 22 12:03:05 crc kubenswrapper[4772]: I1122 12:03:05.377919 4772 generic.go:334] "Generic (PLEG): container finished" podID="97dca6e1-c78a-46db-addb-03a2ad6721ca" containerID="033195c27290ea069e418462b06789e08711708dbe118b27b6e79173c70df212" exitCode=0 Nov 22 12:03:05 crc kubenswrapper[4772]: I1122 12:03:05.381733 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tckdt" event={"ID":"97dca6e1-c78a-46db-addb-03a2ad6721ca","Type":"ContainerDied","Data":"033195c27290ea069e418462b06789e08711708dbe118b27b6e79173c70df212"} Nov 22 12:03:06 crc kubenswrapper[4772]: I1122 12:03:06.393699 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tckdt" event={"ID":"97dca6e1-c78a-46db-addb-03a2ad6721ca","Type":"ContainerStarted","Data":"f7ce7400bad65451051634bb06dcc28d4bb4de1aff53413ecc3fbae881db66e6"} Nov 22 12:03:06 crc kubenswrapper[4772]: I1122 12:03:06.420810 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tckdt" podStartSLOduration=2.986026142 podStartE2EDuration="4.420784737s" podCreationTimestamp="2025-11-22 12:03:02 +0000 UTC" firstStartedPulling="2025-11-22 12:03:04.36232901 +0000 UTC m=+5104.601773514" lastFinishedPulling="2025-11-22 12:03:05.797087585 +0000 UTC m=+5106.036532109" observedRunningTime="2025-11-22 12:03:06.416530143 +0000 UTC m=+5106.655974647" watchObservedRunningTime="2025-11-22 12:03:06.420784737 +0000 UTC m=+5106.660229231" Nov 22 12:03:12 crc kubenswrapper[4772]: I1122 12:03:12.795346 4772 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tckdt" Nov 22 12:03:12 crc kubenswrapper[4772]: I1122 12:03:12.796123 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-tckdt" Nov 22 12:03:12 crc kubenswrapper[4772]: I1122 12:03:12.902367 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tckdt" Nov 22 12:03:13 crc kubenswrapper[4772]: I1122 12:03:13.421480 4772 scope.go:117] "RemoveContainer" containerID="b7c017a2ad8566061573012df3338326de3180226814eea67d7d515c52483472" Nov 22 12:03:13 crc kubenswrapper[4772]: E1122 12:03:13.421972 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:03:13 crc kubenswrapper[4772]: I1122 12:03:13.546036 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tckdt" Nov 22 12:03:13 crc kubenswrapper[4772]: I1122 12:03:13.837022 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tckdt"] Nov 22 12:03:15 crc kubenswrapper[4772]: I1122 12:03:15.488537 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-tckdt" podUID="97dca6e1-c78a-46db-addb-03a2ad6721ca" containerName="registry-server" containerID="cri-o://f7ce7400bad65451051634bb06dcc28d4bb4de1aff53413ecc3fbae881db66e6" gracePeriod=2 Nov 22 12:03:15 crc kubenswrapper[4772]: I1122 12:03:15.985270 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tckdt" Nov 22 12:03:16 crc kubenswrapper[4772]: I1122 12:03:16.072690 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97dca6e1-c78a-46db-addb-03a2ad6721ca-catalog-content\") pod \"97dca6e1-c78a-46db-addb-03a2ad6721ca\" (UID: \"97dca6e1-c78a-46db-addb-03a2ad6721ca\") " Nov 22 12:03:16 crc kubenswrapper[4772]: I1122 12:03:16.072853 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2gdm\" (UniqueName: \"kubernetes.io/projected/97dca6e1-c78a-46db-addb-03a2ad6721ca-kube-api-access-m2gdm\") pod \"97dca6e1-c78a-46db-addb-03a2ad6721ca\" (UID: \"97dca6e1-c78a-46db-addb-03a2ad6721ca\") " Nov 22 12:03:16 crc kubenswrapper[4772]: I1122 12:03:16.073023 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97dca6e1-c78a-46db-addb-03a2ad6721ca-utilities\") pod \"97dca6e1-c78a-46db-addb-03a2ad6721ca\" (UID: \"97dca6e1-c78a-46db-addb-03a2ad6721ca\") " Nov 22 12:03:16 crc kubenswrapper[4772]: I1122 12:03:16.074139 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97dca6e1-c78a-46db-addb-03a2ad6721ca-utilities" (OuterVolumeSpecName: "utilities") pod "97dca6e1-c78a-46db-addb-03a2ad6721ca" (UID: "97dca6e1-c78a-46db-addb-03a2ad6721ca"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 12:03:16 crc kubenswrapper[4772]: I1122 12:03:16.080308 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97dca6e1-c78a-46db-addb-03a2ad6721ca-kube-api-access-m2gdm" (OuterVolumeSpecName: "kube-api-access-m2gdm") pod "97dca6e1-c78a-46db-addb-03a2ad6721ca" (UID: "97dca6e1-c78a-46db-addb-03a2ad6721ca"). InnerVolumeSpecName "kube-api-access-m2gdm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:03:16 crc kubenswrapper[4772]: I1122 12:03:16.092229 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97dca6e1-c78a-46db-addb-03a2ad6721ca-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "97dca6e1-c78a-46db-addb-03a2ad6721ca" (UID: "97dca6e1-c78a-46db-addb-03a2ad6721ca"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 12:03:16 crc kubenswrapper[4772]: I1122 12:03:16.175993 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97dca6e1-c78a-46db-addb-03a2ad6721ca-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 12:03:16 crc kubenswrapper[4772]: I1122 12:03:16.176081 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97dca6e1-c78a-46db-addb-03a2ad6721ca-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 12:03:16 crc kubenswrapper[4772]: I1122 12:03:16.176109 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2gdm\" (UniqueName: \"kubernetes.io/projected/97dca6e1-c78a-46db-addb-03a2ad6721ca-kube-api-access-m2gdm\") on node \"crc\" DevicePath \"\"" Nov 22 12:03:16 crc kubenswrapper[4772]: I1122 12:03:16.497392 4772 generic.go:334] "Generic (PLEG): container finished" podID="97dca6e1-c78a-46db-addb-03a2ad6721ca" containerID="f7ce7400bad65451051634bb06dcc28d4bb4de1aff53413ecc3fbae881db66e6" exitCode=0 Nov 22 12:03:16 crc kubenswrapper[4772]: I1122 12:03:16.497453 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tckdt" event={"ID":"97dca6e1-c78a-46db-addb-03a2ad6721ca","Type":"ContainerDied","Data":"f7ce7400bad65451051634bb06dcc28d4bb4de1aff53413ecc3fbae881db66e6"} Nov 22 12:03:16 crc kubenswrapper[4772]: I1122 12:03:16.497467 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tckdt" Nov 22 12:03:16 crc kubenswrapper[4772]: I1122 12:03:16.497495 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tckdt" event={"ID":"97dca6e1-c78a-46db-addb-03a2ad6721ca","Type":"ContainerDied","Data":"7a58197efb7e8b514d860750d86909e1cbce92f89b8e7cb26e4b2d97764f3569"} Nov 22 12:03:16 crc kubenswrapper[4772]: I1122 12:03:16.497518 4772 scope.go:117] "RemoveContainer" containerID="f7ce7400bad65451051634bb06dcc28d4bb4de1aff53413ecc3fbae881db66e6" Nov 22 12:03:16 crc kubenswrapper[4772]: I1122 12:03:16.528283 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tckdt"] Nov 22 12:03:16 crc kubenswrapper[4772]: I1122 12:03:16.529699 4772 scope.go:117] "RemoveContainer" containerID="033195c27290ea069e418462b06789e08711708dbe118b27b6e79173c70df212" Nov 22 12:03:16 crc kubenswrapper[4772]: I1122 12:03:16.541075 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-tckdt"] Nov 22 12:03:16 crc kubenswrapper[4772]: I1122 12:03:16.561749 4772 scope.go:117] "RemoveContainer" containerID="7e505cd14d11b3beedffaebbdceb1e8ea87612816415761c236ed3dfa9e36d83" Nov 22 12:03:16 crc kubenswrapper[4772]: I1122 12:03:16.582871 4772 scope.go:117] "RemoveContainer" containerID="f7ce7400bad65451051634bb06dcc28d4bb4de1aff53413ecc3fbae881db66e6" Nov 22 12:03:16 crc kubenswrapper[4772]: E1122 12:03:16.583536 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7ce7400bad65451051634bb06dcc28d4bb4de1aff53413ecc3fbae881db66e6\": container with ID starting with f7ce7400bad65451051634bb06dcc28d4bb4de1aff53413ecc3fbae881db66e6 not found: ID does not exist" containerID="f7ce7400bad65451051634bb06dcc28d4bb4de1aff53413ecc3fbae881db66e6" Nov 22 12:03:16 crc kubenswrapper[4772]: I1122 12:03:16.583593 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7ce7400bad65451051634bb06dcc28d4bb4de1aff53413ecc3fbae881db66e6"} err="failed to get container status \"f7ce7400bad65451051634bb06dcc28d4bb4de1aff53413ecc3fbae881db66e6\": rpc error: code = NotFound desc = could not find container \"f7ce7400bad65451051634bb06dcc28d4bb4de1aff53413ecc3fbae881db66e6\": container with ID starting with f7ce7400bad65451051634bb06dcc28d4bb4de1aff53413ecc3fbae881db66e6 not found: ID does not exist" Nov 22 12:03:16 crc kubenswrapper[4772]: I1122 12:03:16.583632 4772 scope.go:117] "RemoveContainer" containerID="033195c27290ea069e418462b06789e08711708dbe118b27b6e79173c70df212" Nov 22 12:03:16 crc kubenswrapper[4772]: E1122 12:03:16.584033 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"033195c27290ea069e418462b06789e08711708dbe118b27b6e79173c70df212\": container with ID starting with 033195c27290ea069e418462b06789e08711708dbe118b27b6e79173c70df212 not found: ID does not exist" containerID="033195c27290ea069e418462b06789e08711708dbe118b27b6e79173c70df212" Nov 22 12:03:16 crc kubenswrapper[4772]: I1122 12:03:16.584095 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"033195c27290ea069e418462b06789e08711708dbe118b27b6e79173c70df212"} err="failed to get container status \"033195c27290ea069e418462b06789e08711708dbe118b27b6e79173c70df212\": rpc error: code = NotFound desc = could not find 
container \"033195c27290ea069e418462b06789e08711708dbe118b27b6e79173c70df212\": container with ID starting with 033195c27290ea069e418462b06789e08711708dbe118b27b6e79173c70df212 not found: ID does not exist" Nov 22 12:03:16 crc kubenswrapper[4772]: I1122 12:03:16.584135 4772 scope.go:117] "RemoveContainer" containerID="7e505cd14d11b3beedffaebbdceb1e8ea87612816415761c236ed3dfa9e36d83" Nov 22 12:03:16 crc kubenswrapper[4772]: E1122 12:03:16.584470 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e505cd14d11b3beedffaebbdceb1e8ea87612816415761c236ed3dfa9e36d83\": container with ID starting with 7e505cd14d11b3beedffaebbdceb1e8ea87612816415761c236ed3dfa9e36d83 not found: ID does not exist" containerID="7e505cd14d11b3beedffaebbdceb1e8ea87612816415761c236ed3dfa9e36d83" Nov 22 12:03:16 crc kubenswrapper[4772]: I1122 12:03:16.584520 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e505cd14d11b3beedffaebbdceb1e8ea87612816415761c236ed3dfa9e36d83"} err="failed to get container status \"7e505cd14d11b3beedffaebbdceb1e8ea87612816415761c236ed3dfa9e36d83\": rpc error: code = NotFound desc = could not find container \"7e505cd14d11b3beedffaebbdceb1e8ea87612816415761c236ed3dfa9e36d83\": container with ID starting with 7e505cd14d11b3beedffaebbdceb1e8ea87612816415761c236ed3dfa9e36d83 not found: ID does not exist" Nov 22 12:03:17 crc kubenswrapper[4772]: I1122 12:03:17.430216 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97dca6e1-c78a-46db-addb-03a2ad6721ca" path="/var/lib/kubelet/pods/97dca6e1-c78a-46db-addb-03a2ad6721ca/volumes" Nov 22 12:03:18 crc kubenswrapper[4772]: I1122 12:03:18.692306 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-copy-data"] Nov 22 12:03:18 crc kubenswrapper[4772]: E1122 12:03:18.692934 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97dca6e1-c78a-46db-addb-03a2ad6721ca" containerName="extract-utilities" Nov 22 12:03:18 crc kubenswrapper[4772]: I1122 12:03:18.692960 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="97dca6e1-c78a-46db-addb-03a2ad6721ca" containerName="extract-utilities" Nov 22 12:03:18 crc kubenswrapper[4772]: E1122 12:03:18.693001 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97dca6e1-c78a-46db-addb-03a2ad6721ca" containerName="extract-content" Nov 22 12:03:18 crc kubenswrapper[4772]: I1122 12:03:18.693013 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="97dca6e1-c78a-46db-addb-03a2ad6721ca" containerName="extract-content" Nov 22 12:03:18 crc kubenswrapper[4772]: E1122 12:03:18.693084 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97dca6e1-c78a-46db-addb-03a2ad6721ca" containerName="registry-server" Nov 22 12:03:18 crc kubenswrapper[4772]: I1122 12:03:18.693099 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="97dca6e1-c78a-46db-addb-03a2ad6721ca" containerName="registry-server" Nov 22 12:03:18 crc kubenswrapper[4772]: I1122 12:03:18.693385 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="97dca6e1-c78a-46db-addb-03a2ad6721ca" containerName="registry-server" Nov 22 12:03:18 crc kubenswrapper[4772]: I1122 12:03:18.694732 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-copy-data" Nov 22 12:03:18 crc kubenswrapper[4772]: I1122 12:03:18.698945 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-8x6hp" Nov 22 12:03:18 crc kubenswrapper[4772]: I1122 12:03:18.701806 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Nov 22 12:03:18 crc kubenswrapper[4772]: I1122 12:03:18.802982 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b893d131-1af1-4847-9f53-804ff92fd6c3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b893d131-1af1-4847-9f53-804ff92fd6c3\") pod \"mariadb-copy-data\" (UID: \"79ada2a2-307a-4c67-bd8d-c2e3b351e127\") " pod="openstack/mariadb-copy-data" Nov 22 12:03:18 crc kubenswrapper[4772]: I1122 12:03:18.803061 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgmw9\" (UniqueName: \"kubernetes.io/projected/79ada2a2-307a-4c67-bd8d-c2e3b351e127-kube-api-access-dgmw9\") pod \"mariadb-copy-data\" (UID: \"79ada2a2-307a-4c67-bd8d-c2e3b351e127\") " pod="openstack/mariadb-copy-data" Nov 22 12:03:18 crc kubenswrapper[4772]: I1122 12:03:18.904920 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b893d131-1af1-4847-9f53-804ff92fd6c3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b893d131-1af1-4847-9f53-804ff92fd6c3\") pod \"mariadb-copy-data\" (UID: \"79ada2a2-307a-4c67-bd8d-c2e3b351e127\") " pod="openstack/mariadb-copy-data" Nov 22 12:03:18 crc kubenswrapper[4772]: I1122 12:03:18.904985 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgmw9\" (UniqueName: \"kubernetes.io/projected/79ada2a2-307a-4c67-bd8d-c2e3b351e127-kube-api-access-dgmw9\") pod \"mariadb-copy-data\" (UID: \"79ada2a2-307a-4c67-bd8d-c2e3b351e127\") " pod="openstack/mariadb-copy-data" Nov 22 12:03:18 crc kubenswrapper[4772]: I1122 12:03:18.909535 4772 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 22 12:03:18 crc kubenswrapper[4772]: I1122 12:03:18.909581 4772 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b893d131-1af1-4847-9f53-804ff92fd6c3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b893d131-1af1-4847-9f53-804ff92fd6c3\") pod \"mariadb-copy-data\" (UID: \"79ada2a2-307a-4c67-bd8d-c2e3b351e127\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ab5a872ed146e18d36fda210da0a21332c740ca2610aa6dbd2415188048b67ff/globalmount\"" pod="openstack/mariadb-copy-data" Nov 22 12:03:18 crc kubenswrapper[4772]: I1122 12:03:18.932984 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgmw9\" (UniqueName: \"kubernetes.io/projected/79ada2a2-307a-4c67-bd8d-c2e3b351e127-kube-api-access-dgmw9\") pod \"mariadb-copy-data\" (UID: \"79ada2a2-307a-4c67-bd8d-c2e3b351e127\") " pod="openstack/mariadb-copy-data" Nov 22 12:03:18 crc kubenswrapper[4772]: I1122 12:03:18.961162 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b893d131-1af1-4847-9f53-804ff92fd6c3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b893d131-1af1-4847-9f53-804ff92fd6c3\") pod \"mariadb-copy-data\" (UID: \"79ada2a2-307a-4c67-bd8d-c2e3b351e127\") " pod="openstack/mariadb-copy-data" Nov 22 12:03:19 crc kubenswrapper[4772]: I1122 12:03:19.034947 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data" Nov 22 12:03:19 crc kubenswrapper[4772]: I1122 12:03:19.410640 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Nov 22 12:03:19 crc kubenswrapper[4772]: I1122 12:03:19.544509 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"79ada2a2-307a-4c67-bd8d-c2e3b351e127","Type":"ContainerStarted","Data":"3cb3cd13515f7dff7e2c4616e5351a9fb5aca84beffc74e34b26979feff0a720"} Nov 22 12:03:20 crc kubenswrapper[4772]: I1122 12:03:20.562121 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"79ada2a2-307a-4c67-bd8d-c2e3b351e127","Type":"ContainerStarted","Data":"debde157b34e2ff919d0269643d8bbfebe6b473bf2d90198e5e768f2ce7110c5"} Nov 22 12:03:20 crc kubenswrapper[4772]: I1122 12:03:20.590152 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-copy-data" podStartSLOduration=3.590118176 podStartE2EDuration="3.590118176s" podCreationTimestamp="2025-11-22 12:03:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:03:20.588557328 +0000 UTC m=+5120.828001862" watchObservedRunningTime="2025-11-22 12:03:20.590118176 +0000 UTC m=+5120.829562690" Nov 22 12:03:23 crc kubenswrapper[4772]: I1122 12:03:23.462795 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Nov 22 12:03:23 crc kubenswrapper[4772]: I1122 12:03:23.466162 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Nov 22 12:03:23 crc kubenswrapper[4772]: I1122 12:03:23.472727 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Nov 22 12:03:23 crc kubenswrapper[4772]: I1122 12:03:23.604761 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vn27\" (UniqueName: \"kubernetes.io/projected/37202c82-5492-4e2e-a92f-95e91d37de53-kube-api-access-6vn27\") pod \"mariadb-client\" (UID: \"37202c82-5492-4e2e-a92f-95e91d37de53\") " pod="openstack/mariadb-client" Nov 22 12:03:23 crc kubenswrapper[4772]: I1122 12:03:23.706861 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vn27\" (UniqueName: \"kubernetes.io/projected/37202c82-5492-4e2e-a92f-95e91d37de53-kube-api-access-6vn27\") pod \"mariadb-client\" (UID: \"37202c82-5492-4e2e-a92f-95e91d37de53\") " pod="openstack/mariadb-client" Nov 22 12:03:23 crc kubenswrapper[4772]: I1122 12:03:23.740611 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vn27\" (UniqueName: \"kubernetes.io/projected/37202c82-5492-4e2e-a92f-95e91d37de53-kube-api-access-6vn27\") pod \"mariadb-client\" (UID: \"37202c82-5492-4e2e-a92f-95e91d37de53\") " pod="openstack/mariadb-client" Nov 22 12:03:23 crc kubenswrapper[4772]: I1122 12:03:23.796250 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Nov 22 12:03:24 crc kubenswrapper[4772]: I1122 12:03:24.184383 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Nov 22 12:03:24 crc kubenswrapper[4772]: W1122 12:03:24.186473 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37202c82_5492_4e2e_a92f_95e91d37de53.slice/crio-63f069943987f02e71f34ae51267e2122da5f35bf06bf98a64d53c7d06fe0353 WatchSource:0}: Error finding container 63f069943987f02e71f34ae51267e2122da5f35bf06bf98a64d53c7d06fe0353: Status 404 returned error can't find the container with id 63f069943987f02e71f34ae51267e2122da5f35bf06bf98a64d53c7d06fe0353 Nov 22 12:03:24 crc kubenswrapper[4772]: I1122 12:03:24.606139 4772 generic.go:334] "Generic (PLEG): container finished" podID="37202c82-5492-4e2e-a92f-95e91d37de53" containerID="e504896c55d2bcff4765072365159e6f580a1676f747eb9ba40b74b46a50f898" exitCode=0 Nov 22 12:03:24 crc kubenswrapper[4772]: I1122 12:03:24.606339 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"37202c82-5492-4e2e-a92f-95e91d37de53","Type":"ContainerDied","Data":"e504896c55d2bcff4765072365159e6f580a1676f747eb9ba40b74b46a50f898"} Nov 22 12:03:24 crc kubenswrapper[4772]: I1122 12:03:24.606825 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"37202c82-5492-4e2e-a92f-95e91d37de53","Type":"ContainerStarted","Data":"63f069943987f02e71f34ae51267e2122da5f35bf06bf98a64d53c7d06fe0353"} Nov 22 12:03:26 crc kubenswrapper[4772]: I1122 12:03:26.044409 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Nov 22 12:03:26 crc kubenswrapper[4772]: I1122 12:03:26.072530 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_37202c82-5492-4e2e-a92f-95e91d37de53/mariadb-client/0.log" Nov 22 12:03:26 crc kubenswrapper[4772]: I1122 12:03:26.100905 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Nov 22 12:03:26 crc kubenswrapper[4772]: I1122 12:03:26.106129 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Nov 22 12:03:26 crc kubenswrapper[4772]: I1122 12:03:26.165630 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vn27\" (UniqueName: \"kubernetes.io/projected/37202c82-5492-4e2e-a92f-95e91d37de53-kube-api-access-6vn27\") pod \"37202c82-5492-4e2e-a92f-95e91d37de53\" (UID: \"37202c82-5492-4e2e-a92f-95e91d37de53\") " Nov 22 12:03:26 crc kubenswrapper[4772]: I1122 12:03:26.177650 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37202c82-5492-4e2e-a92f-95e91d37de53-kube-api-access-6vn27" (OuterVolumeSpecName: "kube-api-access-6vn27") pod "37202c82-5492-4e2e-a92f-95e91d37de53" (UID: "37202c82-5492-4e2e-a92f-95e91d37de53"). InnerVolumeSpecName "kube-api-access-6vn27". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:03:26 crc kubenswrapper[4772]: I1122 12:03:26.254400 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Nov 22 12:03:26 crc kubenswrapper[4772]: E1122 12:03:26.255193 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37202c82-5492-4e2e-a92f-95e91d37de53" containerName="mariadb-client" Nov 22 12:03:26 crc kubenswrapper[4772]: I1122 12:03:26.255242 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="37202c82-5492-4e2e-a92f-95e91d37de53" containerName="mariadb-client" Nov 22 12:03:26 crc kubenswrapper[4772]: I1122 12:03:26.255660 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="37202c82-5492-4e2e-a92f-95e91d37de53" containerName="mariadb-client" Nov 22 12:03:26 crc kubenswrapper[4772]: I1122 12:03:26.256760 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Nov 22 12:03:26 crc kubenswrapper[4772]: I1122 12:03:26.263345 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Nov 22 12:03:26 crc kubenswrapper[4772]: I1122 12:03:26.268584 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vn27\" (UniqueName: \"kubernetes.io/projected/37202c82-5492-4e2e-a92f-95e91d37de53-kube-api-access-6vn27\") on node \"crc\" DevicePath \"\"" Nov 22 12:03:26 crc kubenswrapper[4772]: I1122 12:03:26.372915 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qm6lx\" (UniqueName: \"kubernetes.io/projected/ecf99794-3d40-40ee-be52-3dde0c03e8fc-kube-api-access-qm6lx\") pod \"mariadb-client\" (UID: \"ecf99794-3d40-40ee-be52-3dde0c03e8fc\") " pod="openstack/mariadb-client" Nov 22 12:03:26 crc kubenswrapper[4772]: I1122 12:03:26.475801 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qm6lx\" (UniqueName: \"kubernetes.io/projected/ecf99794-3d40-40ee-be52-3dde0c03e8fc-kube-api-access-qm6lx\") pod \"mariadb-client\" (UID: \"ecf99794-3d40-40ee-be52-3dde0c03e8fc\") " pod="openstack/mariadb-client" Nov 22 12:03:26 crc kubenswrapper[4772]: I1122 12:03:26.499844 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qm6lx\" (UniqueName: \"kubernetes.io/projected/ecf99794-3d40-40ee-be52-3dde0c03e8fc-kube-api-access-qm6lx\") pod \"mariadb-client\" (UID: \"ecf99794-3d40-40ee-be52-3dde0c03e8fc\") " pod="openstack/mariadb-client" Nov 22 12:03:26 crc kubenswrapper[4772]: I1122 12:03:26.601460 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Nov 22 12:03:26 crc kubenswrapper[4772]: I1122 12:03:26.634861 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63f069943987f02e71f34ae51267e2122da5f35bf06bf98a64d53c7d06fe0353" Nov 22 12:03:26 crc kubenswrapper[4772]: I1122 12:03:26.634985 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Nov 22 12:03:26 crc kubenswrapper[4772]: I1122 12:03:26.684674 4772 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/mariadb-client" oldPodUID="37202c82-5492-4e2e-a92f-95e91d37de53" podUID="ecf99794-3d40-40ee-be52-3dde0c03e8fc" Nov 22 12:03:27 crc kubenswrapper[4772]: I1122 12:03:27.111009 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Nov 22 12:03:27 crc kubenswrapper[4772]: I1122 12:03:27.414630 4772 scope.go:117] "RemoveContainer" containerID="b7c017a2ad8566061573012df3338326de3180226814eea67d7d515c52483472" Nov 22 12:03:27 crc kubenswrapper[4772]: E1122 12:03:27.415489 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:03:27 crc kubenswrapper[4772]: I1122 12:03:27.429865 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37202c82-5492-4e2e-a92f-95e91d37de53" path="/var/lib/kubelet/pods/37202c82-5492-4e2e-a92f-95e91d37de53/volumes" Nov 22 12:03:27 crc kubenswrapper[4772]: I1122 12:03:27.646846 4772 generic.go:334] "Generic (PLEG): container finished" podID="ecf99794-3d40-40ee-be52-3dde0c03e8fc" containerID="c562022b653e47d2901944d7d6ae1add4a28d57c8e193423f6ead2195a995896" exitCode=0 Nov 22 12:03:27 crc kubenswrapper[4772]: I1122 12:03:27.646892 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"ecf99794-3d40-40ee-be52-3dde0c03e8fc","Type":"ContainerDied","Data":"c562022b653e47d2901944d7d6ae1add4a28d57c8e193423f6ead2195a995896"} Nov 22 12:03:27 crc kubenswrapper[4772]: I1122 12:03:27.646933 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"ecf99794-3d40-40ee-be52-3dde0c03e8fc","Type":"ContainerStarted","Data":"666d9b979a0b9317461da5fe68b6839ee129416ecb2a05d75b4f34b2c5e0d753"} Nov 22 12:03:29 crc kubenswrapper[4772]: I1122 12:03:29.121579 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Nov 22 12:03:29 crc kubenswrapper[4772]: I1122 12:03:29.149357 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_ecf99794-3d40-40ee-be52-3dde0c03e8fc/mariadb-client/0.log" Nov 22 12:03:29 crc kubenswrapper[4772]: I1122 12:03:29.177680 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Nov 22 12:03:29 crc kubenswrapper[4772]: I1122 12:03:29.185357 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Nov 22 12:03:29 crc kubenswrapper[4772]: I1122 12:03:29.227083 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qm6lx\" (UniqueName: \"kubernetes.io/projected/ecf99794-3d40-40ee-be52-3dde0c03e8fc-kube-api-access-qm6lx\") pod \"ecf99794-3d40-40ee-be52-3dde0c03e8fc\" (UID: \"ecf99794-3d40-40ee-be52-3dde0c03e8fc\") " Nov 22 12:03:29 crc kubenswrapper[4772]: I1122 12:03:29.234957 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecf99794-3d40-40ee-be52-3dde0c03e8fc-kube-api-access-qm6lx" (OuterVolumeSpecName: "kube-api-access-qm6lx") pod "ecf99794-3d40-40ee-be52-3dde0c03e8fc" (UID: "ecf99794-3d40-40ee-be52-3dde0c03e8fc"). InnerVolumeSpecName "kube-api-access-qm6lx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:03:29 crc kubenswrapper[4772]: I1122 12:03:29.330319 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qm6lx\" (UniqueName: \"kubernetes.io/projected/ecf99794-3d40-40ee-be52-3dde0c03e8fc-kube-api-access-qm6lx\") on node \"crc\" DevicePath \"\"" Nov 22 12:03:29 crc kubenswrapper[4772]: I1122 12:03:29.427624 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecf99794-3d40-40ee-be52-3dde0c03e8fc" path="/var/lib/kubelet/pods/ecf99794-3d40-40ee-be52-3dde0c03e8fc/volumes" Nov 22 12:03:29 crc kubenswrapper[4772]: I1122 12:03:29.672532 4772 scope.go:117] "RemoveContainer" containerID="c562022b653e47d2901944d7d6ae1add4a28d57c8e193423f6ead2195a995896" Nov 22 12:03:29 crc kubenswrapper[4772]: I1122 12:03:29.672768 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Nov 22 12:03:40 crc kubenswrapper[4772]: I1122 12:03:40.413636 4772 scope.go:117] "RemoveContainer" containerID="b7c017a2ad8566061573012df3338326de3180226814eea67d7d515c52483472" Nov 22 12:03:40 crc kubenswrapper[4772]: E1122 12:03:40.414402 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:03:55 crc kubenswrapper[4772]: I1122 12:03:55.414730 4772 scope.go:117] "RemoveContainer" containerID="b7c017a2ad8566061573012df3338326de3180226814eea67d7d515c52483472" Nov 22 12:03:55 crc kubenswrapper[4772]: E1122 12:03:55.416171 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:04:06 crc kubenswrapper[4772]: I1122 12:04:06.775107 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 22 12:04:06 crc kubenswrapper[4772]: E1122 12:04:06.776161 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecf99794-3d40-40ee-be52-3dde0c03e8fc" containerName="mariadb-client" Nov 22 12:04:06 crc kubenswrapper[4772]: I1122 12:04:06.776177 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecf99794-3d40-40ee-be52-3dde0c03e8fc" containerName="mariadb-client" Nov 22 12:04:06 crc kubenswrapper[4772]: I1122 12:04:06.776368 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecf99794-3d40-40ee-be52-3dde0c03e8fc" containerName="mariadb-client" Nov 22 12:04:06 crc kubenswrapper[4772]: I1122 12:04:06.777292 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 22 12:04:06 crc kubenswrapper[4772]: I1122 12:04:06.780236 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-m2lxw" Nov 22 12:04:06 crc kubenswrapper[4772]: I1122 12:04:06.780246 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Nov 22 12:04:06 crc kubenswrapper[4772]: I1122 12:04:06.780261 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Nov 22 12:04:06 crc kubenswrapper[4772]: I1122 12:04:06.784481 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-2"] Nov 22 12:04:06 crc kubenswrapper[4772]: I1122 12:04:06.786245 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2" Nov 22 12:04:06 crc kubenswrapper[4772]: I1122 12:04:06.790980 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-1"] Nov 22 12:04:06 crc kubenswrapper[4772]: I1122 12:04:06.793090 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-1" Nov 22 12:04:06 crc kubenswrapper[4772]: I1122 12:04:06.816562 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Nov 22 12:04:06 crc kubenswrapper[4772]: I1122 12:04:06.824332 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 22 12:04:06 crc kubenswrapper[4772]: I1122 12:04:06.831501 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Nov 22 12:04:06 crc kubenswrapper[4772]: I1122 12:04:06.872688 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/458e152d-811c-44c7-913f-054116c0523d-config\") pod \"ovsdbserver-nb-0\" (UID: \"458e152d-811c-44c7-913f-054116c0523d\") " pod="openstack/ovsdbserver-nb-0" Nov 22 12:04:06 crc kubenswrapper[4772]: I1122 12:04:06.872752 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/458e152d-811c-44c7-913f-054116c0523d-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"458e152d-811c-44c7-913f-054116c0523d\") " pod="openstack/ovsdbserver-nb-0" Nov 22 12:04:06 crc kubenswrapper[4772]: I1122 12:04:06.872786 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06ce39ce-1767-417d-92bc-77f78e1df6a8-config\") pod \"ovsdbserver-nb-1\" (UID: \"06ce39ce-1767-417d-92bc-77f78e1df6a8\") " pod="openstack/ovsdbserver-nb-1" Nov 22 12:04:06 crc kubenswrapper[4772]: I1122 12:04:06.872806 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0b2e69ae-0817-4c5e-9b7d-93196d354047-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"0b2e69ae-0817-4c5e-9b7d-93196d354047\") " pod="openstack/ovsdbserver-nb-2" Nov 22 12:04:06 crc kubenswrapper[4772]: I1122 12:04:06.872829 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b2e69ae-0817-4c5e-9b7d-93196d354047-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"0b2e69ae-0817-4c5e-9b7d-93196d354047\") " pod="openstack/ovsdbserver-nb-2" Nov 22 12:04:06 crc kubenswrapper[4772]: I1122 12:04:06.872851 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b2e69ae-0817-4c5e-9b7d-93196d354047-config\") pod \"ovsdbserver-nb-2\" (UID: \"0b2e69ae-0817-4c5e-9b7d-93196d354047\") " pod="openstack/ovsdbserver-nb-2" Nov 22 12:04:06 crc kubenswrapper[4772]: I1122 12:04:06.872875 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sv9lq\" (UniqueName: \"kubernetes.io/projected/458e152d-811c-44c7-913f-054116c0523d-kube-api-access-sv9lq\") pod \"ovsdbserver-nb-0\" (UID: \"458e152d-811c-44c7-913f-054116c0523d\") " pod="openstack/ovsdbserver-nb-0" Nov 22 12:04:06 crc kubenswrapper[4772]: I1122 12:04:06.873024 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06ce39ce-1767-417d-92bc-77f78e1df6a8-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"06ce39ce-1767-417d-92bc-77f78e1df6a8\") " pod="openstack/ovsdbserver-nb-1" 
Nov 22 12:04:06 crc kubenswrapper[4772]: I1122 12:04:06.873100 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/458e152d-811c-44c7-913f-054116c0523d-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"458e152d-811c-44c7-913f-054116c0523d\") " pod="openstack/ovsdbserver-nb-0" Nov 22 12:04:06 crc kubenswrapper[4772]: I1122 12:04:06.873125 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/06ce39ce-1767-417d-92bc-77f78e1df6a8-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"06ce39ce-1767-417d-92bc-77f78e1df6a8\") " pod="openstack/ovsdbserver-nb-1" Nov 22 12:04:06 crc kubenswrapper[4772]: I1122 12:04:06.873209 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4ed3fb9f-9497-41ba-b981-5224000d0e1e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4ed3fb9f-9497-41ba-b981-5224000d0e1e\") pod \"ovsdbserver-nb-2\" (UID: \"0b2e69ae-0817-4c5e-9b7d-93196d354047\") " pod="openstack/ovsdbserver-nb-2" Nov 22 12:04:06 crc kubenswrapper[4772]: I1122 12:04:06.873245 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/06ce39ce-1767-417d-92bc-77f78e1df6a8-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"06ce39ce-1767-417d-92bc-77f78e1df6a8\") " pod="openstack/ovsdbserver-nb-1" Nov 22 12:04:06 crc kubenswrapper[4772]: I1122 12:04:06.873343 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-73939e10-605e-496b-b7ba-44bb58a9e622\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-73939e10-605e-496b-b7ba-44bb58a9e622\") pod \"ovsdbserver-nb-1\" (UID: \"06ce39ce-1767-417d-92bc-77f78e1df6a8\") " pod="openstack/ovsdbserver-nb-1" Nov 22 12:04:06 crc kubenswrapper[4772]: I1122 12:04:06.873493 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0b2e69ae-0817-4c5e-9b7d-93196d354047-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"0b2e69ae-0817-4c5e-9b7d-93196d354047\") " pod="openstack/ovsdbserver-nb-2" Nov 22 12:04:06 crc kubenswrapper[4772]: I1122 12:04:06.873525 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-37d02855-2e18-4ac2-bb3c-1b92d0cb23ea\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-37d02855-2e18-4ac2-bb3c-1b92d0cb23ea\") pod \"ovsdbserver-nb-0\" (UID: \"458e152d-811c-44c7-913f-054116c0523d\") " pod="openstack/ovsdbserver-nb-0" Nov 22 12:04:06 crc kubenswrapper[4772]: I1122 12:04:06.873594 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/458e152d-811c-44c7-913f-054116c0523d-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"458e152d-811c-44c7-913f-054116c0523d\") " pod="openstack/ovsdbserver-nb-0" Nov 22 12:04:06 crc kubenswrapper[4772]: I1122 12:04:06.873626 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb4n5\" (UniqueName: \"kubernetes.io/projected/0b2e69ae-0817-4c5e-9b7d-93196d354047-kube-api-access-wb4n5\") pod \"ovsdbserver-nb-2\" (UID: \"0b2e69ae-0817-4c5e-9b7d-93196d354047\") " pod="openstack/ovsdbserver-nb-2" 
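
The kube-api-access-* volumes being attached and mounted in these entries (kube-api-access-sv9lq, -wb4n5, -tgpz4) are the projected service-account token volumes injected into each pod. A sketch of roughly what such a volume looks like, built with the k8s.io/api/core/v1 types; the exact sources and the 3607s token lifetime are assumptions based on the usual injected defaults, not taken from this cluster's manifests:

```go
// Illustrative sketch of a kube-api-access-* projected volume: a bound
// service-account token, the cluster CA bundle, and the pod namespace.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func kubeAPIAccessVolume(name string) corev1.Volume {
	expiration := int64(3607) // commonly used bound-token lifetime (assumed)
	return corev1.Volume{
		Name: name, // e.g. "kube-api-access-tgpz4" from the log
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
						ExpirationSeconds: &expiration,
						Path:              "token",
					}},
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
						Items:                []corev1.KeyToPath{{Key: "ca.crt", Path: "ca.crt"}},
					}},
					{DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "namespace",
							FieldRef: &corev1.ObjectFieldSelector{APIVersion: "v1", FieldPath: "metadata.namespace"},
						}},
					}},
				},
			},
		},
	}
}

func main() {
	v := kubeAPIAccessVolume("kube-api-access-tgpz4")
	fmt.Println(v.Name, len(v.VolumeSource.Projected.Sources), "projected sources")
}
```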
Nov 22 12:04:06 crc kubenswrapper[4772]: I1122 12:04:06.873653 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgpz4\" (UniqueName: \"kubernetes.io/projected/06ce39ce-1767-417d-92bc-77f78e1df6a8-kube-api-access-tgpz4\") pod \"ovsdbserver-nb-1\" (UID: \"06ce39ce-1767-417d-92bc-77f78e1df6a8\") " pod="openstack/ovsdbserver-nb-1" Nov 22 12:04:06 crc kubenswrapper[4772]: I1122 12:04:06.971252 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 22 12:04:06 crc kubenswrapper[4772]: I1122 12:04:06.972614 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 22 12:04:06 crc kubenswrapper[4772]: I1122 12:04:06.975237 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06ce39ce-1767-417d-92bc-77f78e1df6a8-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"06ce39ce-1767-417d-92bc-77f78e1df6a8\") " pod="openstack/ovsdbserver-nb-1" Nov 22 12:04:06 crc kubenswrapper[4772]: I1122 12:04:06.975314 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/458e152d-811c-44c7-913f-054116c0523d-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"458e152d-811c-44c7-913f-054116c0523d\") " pod="openstack/ovsdbserver-nb-0" Nov 22 12:04:06 crc kubenswrapper[4772]: I1122 12:04:06.975348 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/06ce39ce-1767-417d-92bc-77f78e1df6a8-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"06ce39ce-1767-417d-92bc-77f78e1df6a8\") " pod="openstack/ovsdbserver-nb-1" Nov 22 12:04:06 crc kubenswrapper[4772]: I1122 12:04:06.975404 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4ed3fb9f-9497-41ba-b981-5224000d0e1e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4ed3fb9f-9497-41ba-b981-5224000d0e1e\") pod \"ovsdbserver-nb-2\" (UID: \"0b2e69ae-0817-4c5e-9b7d-93196d354047\") " pod="openstack/ovsdbserver-nb-2" Nov 22 12:04:06 crc kubenswrapper[4772]: I1122 12:04:06.975432 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/06ce39ce-1767-417d-92bc-77f78e1df6a8-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"06ce39ce-1767-417d-92bc-77f78e1df6a8\") " pod="openstack/ovsdbserver-nb-1" Nov 22 12:04:06 crc kubenswrapper[4772]: I1122 12:04:06.975470 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-73939e10-605e-496b-b7ba-44bb58a9e622\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-73939e10-605e-496b-b7ba-44bb58a9e622\") pod \"ovsdbserver-nb-1\" (UID: \"06ce39ce-1767-417d-92bc-77f78e1df6a8\") " pod="openstack/ovsdbserver-nb-1" Nov 22 12:04:06 crc kubenswrapper[4772]: I1122 12:04:06.975541 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0b2e69ae-0817-4c5e-9b7d-93196d354047-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"0b2e69ae-0817-4c5e-9b7d-93196d354047\") " pod="openstack/ovsdbserver-nb-2" Nov 22 12:04:06 crc kubenswrapper[4772]: I1122 12:04:06.975580 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-37d02855-2e18-4ac2-bb3c-1b92d0cb23ea\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-37d02855-2e18-4ac2-bb3c-1b92d0cb23ea\") pod \"ovsdbserver-nb-0\" (UID: \"458e152d-811c-44c7-913f-054116c0523d\") " pod="openstack/ovsdbserver-nb-0" Nov 22 12:04:06 crc kubenswrapper[4772]: I1122 12:04:06.975617 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/458e152d-811c-44c7-913f-054116c0523d-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"458e152d-811c-44c7-913f-054116c0523d\") " pod="openstack/ovsdbserver-nb-0" Nov 22 12:04:06 crc kubenswrapper[4772]: I1122 12:04:06.975660 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wb4n5\" (UniqueName: \"kubernetes.io/projected/0b2e69ae-0817-4c5e-9b7d-93196d354047-kube-api-access-wb4n5\") pod \"ovsdbserver-nb-2\" (UID: \"0b2e69ae-0817-4c5e-9b7d-93196d354047\") " pod="openstack/ovsdbserver-nb-2" Nov 22 12:04:06 crc kubenswrapper[4772]: I1122 12:04:06.975702 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgpz4\" (UniqueName: \"kubernetes.io/projected/06ce39ce-1767-417d-92bc-77f78e1df6a8-kube-api-access-tgpz4\") pod \"ovsdbserver-nb-1\" (UID: \"06ce39ce-1767-417d-92bc-77f78e1df6a8\") " pod="openstack/ovsdbserver-nb-1" Nov 22 12:04:06 crc kubenswrapper[4772]: I1122 12:04:06.975736 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/458e152d-811c-44c7-913f-054116c0523d-config\") pod \"ovsdbserver-nb-0\" (UID: \"458e152d-811c-44c7-913f-054116c0523d\") " pod="openstack/ovsdbserver-nb-0" Nov 22 12:04:06 crc kubenswrapper[4772]: I1122 12:04:06.975774 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/458e152d-811c-44c7-913f-054116c0523d-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"458e152d-811c-44c7-913f-054116c0523d\") " pod="openstack/ovsdbserver-nb-0" Nov 22 12:04:06 crc kubenswrapper[4772]: I1122 12:04:06.975812 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06ce39ce-1767-417d-92bc-77f78e1df6a8-config\") pod \"ovsdbserver-nb-1\" (UID: \"06ce39ce-1767-417d-92bc-77f78e1df6a8\") " pod="openstack/ovsdbserver-nb-1" Nov 22 12:04:06 crc kubenswrapper[4772]: I1122 12:04:06.975839 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0b2e69ae-0817-4c5e-9b7d-93196d354047-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"0b2e69ae-0817-4c5e-9b7d-93196d354047\") " pod="openstack/ovsdbserver-nb-2" Nov 22 12:04:06 crc kubenswrapper[4772]: I1122 12:04:06.975869 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b2e69ae-0817-4c5e-9b7d-93196d354047-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"0b2e69ae-0817-4c5e-9b7d-93196d354047\") " pod="openstack/ovsdbserver-nb-2" Nov 22 12:04:06 crc kubenswrapper[4772]: I1122 12:04:06.975901 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b2e69ae-0817-4c5e-9b7d-93196d354047-config\") pod \"ovsdbserver-nb-2\" (UID: \"0b2e69ae-0817-4c5e-9b7d-93196d354047\") " pod="openstack/ovsdbserver-nb-2" Nov 22 12:04:06 crc kubenswrapper[4772]: I1122 12:04:06.975950 4772 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sv9lq\" (UniqueName: \"kubernetes.io/projected/458e152d-811c-44c7-913f-054116c0523d-kube-api-access-sv9lq\") pod \"ovsdbserver-nb-0\" (UID: \"458e152d-811c-44c7-913f-054116c0523d\") " pod="openstack/ovsdbserver-nb-0" Nov 22 12:04:06 crc kubenswrapper[4772]: I1122 12:04:06.977241 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Nov 22 12:04:06 crc kubenswrapper[4772]: I1122 12:04:06.977605 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Nov 22 12:04:06 crc kubenswrapper[4772]: I1122 12:04:06.977759 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-flktp" Nov 22 12:04:06 crc kubenswrapper[4772]: I1122 12:04:06.981816 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/06ce39ce-1767-417d-92bc-77f78e1df6a8-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"06ce39ce-1767-417d-92bc-77f78e1df6a8\") " pod="openstack/ovsdbserver-nb-1" Nov 22 12:04:06 crc kubenswrapper[4772]: I1122 12:04:06.982374 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/458e152d-811c-44c7-913f-054116c0523d-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"458e152d-811c-44c7-913f-054116c0523d\") " pod="openstack/ovsdbserver-nb-0" Nov 22 12:04:06 crc kubenswrapper[4772]: I1122 12:04:06.982744 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/458e152d-811c-44c7-913f-054116c0523d-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"458e152d-811c-44c7-913f-054116c0523d\") " pod="openstack/ovsdbserver-nb-0" Nov 22 12:04:06 crc kubenswrapper[4772]: I1122 12:04:06.983435 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/458e152d-811c-44c7-913f-054116c0523d-config\") pod \"ovsdbserver-nb-0\" (UID: \"458e152d-811c-44c7-913f-054116c0523d\") " pod="openstack/ovsdbserver-nb-0" Nov 22 12:04:06 crc kubenswrapper[4772]: I1122 12:04:06.983830 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0b2e69ae-0817-4c5e-9b7d-93196d354047-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"0b2e69ae-0817-4c5e-9b7d-93196d354047\") " pod="openstack/ovsdbserver-nb-2" Nov 22 12:04:06 crc kubenswrapper[4772]: I1122 12:04:06.984572 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06ce39ce-1767-417d-92bc-77f78e1df6a8-config\") pod \"ovsdbserver-nb-1\" (UID: \"06ce39ce-1767-417d-92bc-77f78e1df6a8\") " pod="openstack/ovsdbserver-nb-1" Nov 22 12:04:06 crc kubenswrapper[4772]: I1122 12:04:06.984825 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0b2e69ae-0817-4c5e-9b7d-93196d354047-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"0b2e69ae-0817-4c5e-9b7d-93196d354047\") " pod="openstack/ovsdbserver-nb-2" Nov 22 12:04:06 crc kubenswrapper[4772]: I1122 12:04:06.985920 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/06ce39ce-1767-417d-92bc-77f78e1df6a8-scripts\") pod \"ovsdbserver-nb-1\" (UID: 
\"06ce39ce-1767-417d-92bc-77f78e1df6a8\") " pod="openstack/ovsdbserver-nb-1" Nov 22 12:04:06 crc kubenswrapper[4772]: I1122 12:04:06.987010 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-1"] Nov 22 12:04:06 crc kubenswrapper[4772]: I1122 12:04:06.987952 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b2e69ae-0817-4c5e-9b7d-93196d354047-config\") pod \"ovsdbserver-nb-2\" (UID: \"0b2e69ae-0817-4c5e-9b7d-93196d354047\") " pod="openstack/ovsdbserver-nb-2" Nov 22 12:04:06 crc kubenswrapper[4772]: I1122 12:04:06.990478 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-1" Nov 22 12:04:06 crc kubenswrapper[4772]: I1122 12:04:06.995526 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-2"] Nov 22 12:04:06 crc kubenswrapper[4772]: I1122 12:04:06.996992 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b2e69ae-0817-4c5e-9b7d-93196d354047-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"0b2e69ae-0817-4c5e-9b7d-93196d354047\") " pod="openstack/ovsdbserver-nb-2" Nov 22 12:04:06 crc kubenswrapper[4772]: I1122 12:04:06.997122 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/458e152d-811c-44c7-913f-054116c0523d-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"458e152d-811c-44c7-913f-054116c0523d\") " pod="openstack/ovsdbserver-nb-0" Nov 22 12:04:06 crc kubenswrapper[4772]: I1122 12:04:06.997899 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06ce39ce-1767-417d-92bc-77f78e1df6a8-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"06ce39ce-1767-417d-92bc-77f78e1df6a8\") " pod="openstack/ovsdbserver-nb-1" Nov 22 12:04:06 crc kubenswrapper[4772]: I1122 12:04:06.997956 4772 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 22 12:04:06 crc kubenswrapper[4772]: I1122 12:04:06.998156 4772 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-37d02855-2e18-4ac2-bb3c-1b92d0cb23ea\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-37d02855-2e18-4ac2-bb3c-1b92d0cb23ea\") pod \"ovsdbserver-nb-0\" (UID: \"458e152d-811c-44c7-913f-054116c0523d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/8252dd8b7f2bdb1c35cc208aaf3b3cff9c2fbd4cef2f3cfbb2b9fdb01c9794a7/globalmount\"" pod="openstack/ovsdbserver-nb-0" Nov 22 12:04:06 crc kubenswrapper[4772]: I1122 12:04:06.997965 4772 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.047676 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sv9lq\" (UniqueName: \"kubernetes.io/projected/458e152d-811c-44c7-913f-054116c0523d-kube-api-access-sv9lq\") pod \"ovsdbserver-nb-0\" (UID: \"458e152d-811c-44c7-913f-054116c0523d\") " pod="openstack/ovsdbserver-nb-0" Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.051694 4772 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.051743 4772 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4ed3fb9f-9497-41ba-b981-5224000d0e1e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4ed3fb9f-9497-41ba-b981-5224000d0e1e\") pod \"ovsdbserver-nb-2\" (UID: \"0b2e69ae-0817-4c5e-9b7d-93196d354047\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/690945b2dc43df42d88f4d7dd815ff3558226e08f3d2e4cb674101e0cc30c842/globalmount\"" pod="openstack/ovsdbserver-nb-2" Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.052606 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wb4n5\" (UniqueName: \"kubernetes.io/projected/0b2e69ae-0817-4c5e-9b7d-93196d354047-kube-api-access-wb4n5\") pod \"ovsdbserver-nb-2\" (UID: \"0b2e69ae-0817-4c5e-9b7d-93196d354047\") " pod="openstack/ovsdbserver-nb-2" Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:06.998446 4772 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-73939e10-605e-496b-b7ba-44bb58a9e622\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-73939e10-605e-496b-b7ba-44bb58a9e622\") pod \"ovsdbserver-nb-1\" (UID: \"06ce39ce-1767-417d-92bc-77f78e1df6a8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/10dd733447f40a49672de5587d5ae7de9f0d32901a6e2ec34f4e03376bf2b5fe/globalmount\"" pod="openstack/ovsdbserver-nb-1" Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.053815 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2" Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.054748 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgpz4\" (UniqueName: \"kubernetes.io/projected/06ce39ce-1767-417d-92bc-77f78e1df6a8-kube-api-access-tgpz4\") pod \"ovsdbserver-nb-1\" (UID: \"06ce39ce-1767-417d-92bc-77f78e1df6a8\") " pod="openstack/ovsdbserver-nb-1" Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.094969 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.111308 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4ed3fb9f-9497-41ba-b981-5224000d0e1e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4ed3fb9f-9497-41ba-b981-5224000d0e1e\") pod \"ovsdbserver-nb-2\" (UID: \"0b2e69ae-0817-4c5e-9b7d-93196d354047\") " pod="openstack/ovsdbserver-nb-2" Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.112015 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-37d02855-2e18-4ac2-bb3c-1b92d0cb23ea\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-37d02855-2e18-4ac2-bb3c-1b92d0cb23ea\") pod \"ovsdbserver-nb-0\" (UID: \"458e152d-811c-44c7-913f-054116c0523d\") " pod="openstack/ovsdbserver-nb-0" Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.114286 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.119726 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.121814 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-2" Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.122976 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-73939e10-605e-496b-b7ba-44bb58a9e622\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-73939e10-605e-496b-b7ba-44bb58a9e622\") pod \"ovsdbserver-nb-1\" (UID: \"06ce39ce-1767-417d-92bc-77f78e1df6a8\") " pod="openstack/ovsdbserver-nb-1" Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.128453 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-1" Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.190526 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2b78c4c3-e121-4030-a346-b5dee57ef0b5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b78c4c3-e121-4030-a346-b5dee57ef0b5\") pod \"ovsdbserver-sb-0\" (UID: \"efafb1c9-8e1e-41ae-8d18-04588d896fc4\") " pod="openstack/ovsdbserver-sb-0" Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.191206 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/513b4b3b-e546-414e-ac74-f7730e8db4d1-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"513b4b3b-e546-414e-ac74-f7730e8db4d1\") " pod="openstack/ovsdbserver-sb-2" Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.191243 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ss2vc\" (UniqueName: \"kubernetes.io/projected/82717dcc-f9bd-40e8-8125-710e5a3f9374-kube-api-access-ss2vc\") pod \"ovsdbserver-sb-1\" (UID: \"82717dcc-f9bd-40e8-8125-710e5a3f9374\") " pod="openstack/ovsdbserver-sb-1" Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.191282 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efafb1c9-8e1e-41ae-8d18-04588d896fc4-config\") pod \"ovsdbserver-sb-0\" (UID: \"efafb1c9-8e1e-41ae-8d18-04588d896fc4\") " pod="openstack/ovsdbserver-sb-0" Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.191311 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6dba0471-8020-47ca-874a-81e6eae145b8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6dba0471-8020-47ca-874a-81e6eae145b8\") pod \"ovsdbserver-sb-2\" (UID: \"513b4b3b-e546-414e-ac74-f7730e8db4d1\") " pod="openstack/ovsdbserver-sb-2" Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.191397 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82717dcc-f9bd-40e8-8125-710e5a3f9374-config\") pod \"ovsdbserver-sb-1\" (UID: \"82717dcc-f9bd-40e8-8125-710e5a3f9374\") " pod="openstack/ovsdbserver-sb-1" Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.191436 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/82717dcc-f9bd-40e8-8125-710e5a3f9374-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"82717dcc-f9bd-40e8-8125-710e5a3f9374\") " pod="openstack/ovsdbserver-sb-1" Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.191464 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/efafb1c9-8e1e-41ae-8d18-04588d896fc4-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"efafb1c9-8e1e-41ae-8d18-04588d896fc4\") " pod="openstack/ovsdbserver-sb-0" Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.191492 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82717dcc-f9bd-40e8-8125-710e5a3f9374-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"82717dcc-f9bd-40e8-8125-710e5a3f9374\") " pod="openstack/ovsdbserver-sb-1" Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.191516 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ztbz\" (UniqueName: \"kubernetes.io/projected/efafb1c9-8e1e-41ae-8d18-04588d896fc4-kube-api-access-6ztbz\") pod \"ovsdbserver-sb-0\" (UID: \"efafb1c9-8e1e-41ae-8d18-04588d896fc4\") " pod="openstack/ovsdbserver-sb-0" Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.191543 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/efafb1c9-8e1e-41ae-8d18-04588d896fc4-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"efafb1c9-8e1e-41ae-8d18-04588d896fc4\") " pod="openstack/ovsdbserver-sb-0" Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.191590 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/513b4b3b-e546-414e-ac74-f7730e8db4d1-config\") pod \"ovsdbserver-sb-2\" (UID: \"513b4b3b-e546-414e-ac74-f7730e8db4d1\") " pod="openstack/ovsdbserver-sb-2" Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.191614 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/513b4b3b-e546-414e-ac74-f7730e8db4d1-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"513b4b3b-e546-414e-ac74-f7730e8db4d1\") " pod="openstack/ovsdbserver-sb-2" Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.191644 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f441cba2-6ce3-4cbf-9c56-06292af0a973\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f441cba2-6ce3-4cbf-9c56-06292af0a973\") pod \"ovsdbserver-sb-1\" (UID: \"82717dcc-f9bd-40e8-8125-710e5a3f9374\") " pod="openstack/ovsdbserver-sb-1" Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.191692 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9j87\" (UniqueName: \"kubernetes.io/projected/513b4b3b-e546-414e-ac74-f7730e8db4d1-kube-api-access-v9j87\") pod \"ovsdbserver-sb-2\" (UID: \"513b4b3b-e546-414e-ac74-f7730e8db4d1\") " pod="openstack/ovsdbserver-sb-2" Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.191730 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/82717dcc-f9bd-40e8-8125-710e5a3f9374-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"82717dcc-f9bd-40e8-8125-710e5a3f9374\") " pod="openstack/ovsdbserver-sb-1" Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.191760 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/513b4b3b-e546-414e-ac74-f7730e8db4d1-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"513b4b3b-e546-414e-ac74-f7730e8db4d1\") " pod="openstack/ovsdbserver-sb-2" Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.191791 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/efafb1c9-8e1e-41ae-8d18-04588d896fc4-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"efafb1c9-8e1e-41ae-8d18-04588d896fc4\") " pod="openstack/ovsdbserver-sb-0" Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.293473 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82717dcc-f9bd-40e8-8125-710e5a3f9374-config\") pod \"ovsdbserver-sb-1\" (UID: \"82717dcc-f9bd-40e8-8125-710e5a3f9374\") " pod="openstack/ovsdbserver-sb-1" Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.293523 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/82717dcc-f9bd-40e8-8125-710e5a3f9374-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"82717dcc-f9bd-40e8-8125-710e5a3f9374\") " pod="openstack/ovsdbserver-sb-1" Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.293541 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efafb1c9-8e1e-41ae-8d18-04588d896fc4-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"efafb1c9-8e1e-41ae-8d18-04588d896fc4\") " pod="openstack/ovsdbserver-sb-0" Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.293564 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82717dcc-f9bd-40e8-8125-710e5a3f9374-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"82717dcc-f9bd-40e8-8125-710e5a3f9374\") " pod="openstack/ovsdbserver-sb-1" Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.293604 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ztbz\" (UniqueName: \"kubernetes.io/projected/efafb1c9-8e1e-41ae-8d18-04588d896fc4-kube-api-access-6ztbz\") pod \"ovsdbserver-sb-0\" (UID: \"efafb1c9-8e1e-41ae-8d18-04588d896fc4\") " pod="openstack/ovsdbserver-sb-0" Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.293630 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/efafb1c9-8e1e-41ae-8d18-04588d896fc4-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"efafb1c9-8e1e-41ae-8d18-04588d896fc4\") " pod="openstack/ovsdbserver-sb-0" Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.293651 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/513b4b3b-e546-414e-ac74-f7730e8db4d1-config\") pod \"ovsdbserver-sb-2\" (UID: \"513b4b3b-e546-414e-ac74-f7730e8db4d1\") " pod="openstack/ovsdbserver-sb-2" Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.293670 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/513b4b3b-e546-414e-ac74-f7730e8db4d1-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"513b4b3b-e546-414e-ac74-f7730e8db4d1\") " pod="openstack/ovsdbserver-sb-2" Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.293692 4772 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-f441cba2-6ce3-4cbf-9c56-06292af0a973\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f441cba2-6ce3-4cbf-9c56-06292af0a973\") pod \"ovsdbserver-sb-1\" (UID: \"82717dcc-f9bd-40e8-8125-710e5a3f9374\") " pod="openstack/ovsdbserver-sb-1" Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.293739 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9j87\" (UniqueName: \"kubernetes.io/projected/513b4b3b-e546-414e-ac74-f7730e8db4d1-kube-api-access-v9j87\") pod \"ovsdbserver-sb-2\" (UID: \"513b4b3b-e546-414e-ac74-f7730e8db4d1\") " pod="openstack/ovsdbserver-sb-2" Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.293771 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/82717dcc-f9bd-40e8-8125-710e5a3f9374-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"82717dcc-f9bd-40e8-8125-710e5a3f9374\") " pod="openstack/ovsdbserver-sb-1" Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.293794 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/513b4b3b-e546-414e-ac74-f7730e8db4d1-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"513b4b3b-e546-414e-ac74-f7730e8db4d1\") " pod="openstack/ovsdbserver-sb-2" Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.293827 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/efafb1c9-8e1e-41ae-8d18-04588d896fc4-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"efafb1c9-8e1e-41ae-8d18-04588d896fc4\") " pod="openstack/ovsdbserver-sb-0" Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.293859 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2b78c4c3-e121-4030-a346-b5dee57ef0b5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b78c4c3-e121-4030-a346-b5dee57ef0b5\") pod \"ovsdbserver-sb-0\" (UID: \"efafb1c9-8e1e-41ae-8d18-04588d896fc4\") " pod="openstack/ovsdbserver-sb-0" Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.293936 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/513b4b3b-e546-414e-ac74-f7730e8db4d1-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"513b4b3b-e546-414e-ac74-f7730e8db4d1\") " pod="openstack/ovsdbserver-sb-2" Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.293969 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ss2vc\" (UniqueName: \"kubernetes.io/projected/82717dcc-f9bd-40e8-8125-710e5a3f9374-kube-api-access-ss2vc\") pod \"ovsdbserver-sb-1\" (UID: \"82717dcc-f9bd-40e8-8125-710e5a3f9374\") " pod="openstack/ovsdbserver-sb-1" Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.293994 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efafb1c9-8e1e-41ae-8d18-04588d896fc4-config\") pod \"ovsdbserver-sb-0\" (UID: \"efafb1c9-8e1e-41ae-8d18-04588d896fc4\") " pod="openstack/ovsdbserver-sb-0" Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.294024 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6dba0471-8020-47ca-874a-81e6eae145b8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6dba0471-8020-47ca-874a-81e6eae145b8\") pod 
\"ovsdbserver-sb-2\" (UID: \"513b4b3b-e546-414e-ac74-f7730e8db4d1\") " pod="openstack/ovsdbserver-sb-2" Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.294304 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/efafb1c9-8e1e-41ae-8d18-04588d896fc4-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"efafb1c9-8e1e-41ae-8d18-04588d896fc4\") " pod="openstack/ovsdbserver-sb-0" Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.295139 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/82717dcc-f9bd-40e8-8125-710e5a3f9374-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"82717dcc-f9bd-40e8-8125-710e5a3f9374\") " pod="openstack/ovsdbserver-sb-1" Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.295270 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/513b4b3b-e546-414e-ac74-f7730e8db4d1-config\") pod \"ovsdbserver-sb-2\" (UID: \"513b4b3b-e546-414e-ac74-f7730e8db4d1\") " pod="openstack/ovsdbserver-sb-2" Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.295651 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/efafb1c9-8e1e-41ae-8d18-04588d896fc4-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"efafb1c9-8e1e-41ae-8d18-04588d896fc4\") " pod="openstack/ovsdbserver-sb-0" Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.295937 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efafb1c9-8e1e-41ae-8d18-04588d896fc4-config\") pod \"ovsdbserver-sb-0\" (UID: \"efafb1c9-8e1e-41ae-8d18-04588d896fc4\") " pod="openstack/ovsdbserver-sb-0" Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.296038 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/513b4b3b-e546-414e-ac74-f7730e8db4d1-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"513b4b3b-e546-414e-ac74-f7730e8db4d1\") " pod="openstack/ovsdbserver-sb-2" Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.296195 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82717dcc-f9bd-40e8-8125-710e5a3f9374-config\") pod \"ovsdbserver-sb-1\" (UID: \"82717dcc-f9bd-40e8-8125-710e5a3f9374\") " pod="openstack/ovsdbserver-sb-1" Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.296211 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/82717dcc-f9bd-40e8-8125-710e5a3f9374-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"82717dcc-f9bd-40e8-8125-710e5a3f9374\") " pod="openstack/ovsdbserver-sb-1" Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.298279 4772 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.298473 4772 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2b78c4c3-e121-4030-a346-b5dee57ef0b5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b78c4c3-e121-4030-a346-b5dee57ef0b5\") pod \"ovsdbserver-sb-0\" (UID: \"efafb1c9-8e1e-41ae-8d18-04588d896fc4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0ea95a04175f4ff367531162e69aa1cd4eee99f15b299f9cbcdb888e9e77c67c/globalmount\"" pod="openstack/ovsdbserver-sb-0" Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.298594 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/513b4b3b-e546-414e-ac74-f7730e8db4d1-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"513b4b3b-e546-414e-ac74-f7730e8db4d1\") " pod="openstack/ovsdbserver-sb-2" Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.299471 4772 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.299531 4772 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f441cba2-6ce3-4cbf-9c56-06292af0a973\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f441cba2-6ce3-4cbf-9c56-06292af0a973\") pod \"ovsdbserver-sb-1\" (UID: \"82717dcc-f9bd-40e8-8125-710e5a3f9374\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5ff28d7e4b66ee9704a6fc418dda1669979bf0cef5e2fb1b92841d825feaeee0/globalmount\"" pod="openstack/ovsdbserver-sb-1" Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.313462 4772 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.313519 4772 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6dba0471-8020-47ca-874a-81e6eae145b8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6dba0471-8020-47ca-874a-81e6eae145b8\") pod \"ovsdbserver-sb-2\" (UID: \"513b4b3b-e546-414e-ac74-f7730e8db4d1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/74fbf5853a633bad1e5a47a86e452dd576489af79f7a119afa69b31c45df3d2b/globalmount\"" pod="openstack/ovsdbserver-sb-2" Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.315026 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/513b4b3b-e546-414e-ac74-f7730e8db4d1-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"513b4b3b-e546-414e-ac74-f7730e8db4d1\") " pod="openstack/ovsdbserver-sb-2" Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.316846 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efafb1c9-8e1e-41ae-8d18-04588d896fc4-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"efafb1c9-8e1e-41ae-8d18-04588d896fc4\") " pod="openstack/ovsdbserver-sb-0" Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.319271 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ztbz\" (UniqueName: \"kubernetes.io/projected/efafb1c9-8e1e-41ae-8d18-04588d896fc4-kube-api-access-6ztbz\") pod \"ovsdbserver-sb-0\" (UID: \"efafb1c9-8e1e-41ae-8d18-04588d896fc4\") " pod="openstack/ovsdbserver-sb-0" Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.320304 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82717dcc-f9bd-40e8-8125-710e5a3f9374-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"82717dcc-f9bd-40e8-8125-710e5a3f9374\") " pod="openstack/ovsdbserver-sb-1" Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.321411 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ss2vc\" (UniqueName: \"kubernetes.io/projected/82717dcc-f9bd-40e8-8125-710e5a3f9374-kube-api-access-ss2vc\") pod \"ovsdbserver-sb-1\" (UID: \"82717dcc-f9bd-40e8-8125-710e5a3f9374\") " pod="openstack/ovsdbserver-sb-1" Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.322766 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9j87\" (UniqueName: \"kubernetes.io/projected/513b4b3b-e546-414e-ac74-f7730e8db4d1-kube-api-access-v9j87\") pod \"ovsdbserver-sb-2\" (UID: \"513b4b3b-e546-414e-ac74-f7730e8db4d1\") " pod="openstack/ovsdbserver-sb-2" Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.356958 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2b78c4c3-e121-4030-a346-b5dee57ef0b5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b78c4c3-e121-4030-a346-b5dee57ef0b5\") pod \"ovsdbserver-sb-0\" (UID: \"efafb1c9-8e1e-41ae-8d18-04588d896fc4\") " pod="openstack/ovsdbserver-sb-0" Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.361784 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f441cba2-6ce3-4cbf-9c56-06292af0a973\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f441cba2-6ce3-4cbf-9c56-06292af0a973\") pod \"ovsdbserver-sb-1\" (UID: \"82717dcc-f9bd-40e8-8125-710e5a3f9374\") " 
pod="openstack/ovsdbserver-sb-1" Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.381460 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6dba0471-8020-47ca-874a-81e6eae145b8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6dba0471-8020-47ca-874a-81e6eae145b8\") pod \"ovsdbserver-sb-2\" (UID: \"513b4b3b-e546-414e-ac74-f7730e8db4d1\") " pod="openstack/ovsdbserver-sb-2" Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.403354 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.415297 4772 scope.go:117] "RemoveContainer" containerID="b7c017a2ad8566061573012df3338326de3180226814eea67d7d515c52483472" Nov 22 12:04:07 crc kubenswrapper[4772]: E1122 12:04:07.415551 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.443368 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.452114 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-1" Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.463195 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2" Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.746531 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.861477 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Nov 22 12:04:07 crc kubenswrapper[4772]: I1122 12:04:07.979273 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 22 12:04:08 crc kubenswrapper[4772]: W1122 12:04:08.000418 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod458e152d_811c_44c7_913f_054116c0523d.slice/crio-3d393e9dbcf23c020bad65bec2c6b48a6c3cedbf27a3be4d76f1e03d9786d04d WatchSource:0}: Error finding container 3d393e9dbcf23c020bad65bec2c6b48a6c3cedbf27a3be4d76f1e03d9786d04d: Status 404 returned error can't find the container with id 3d393e9dbcf23c020bad65bec2c6b48a6c3cedbf27a3be4d76f1e03d9786d04d Nov 22 12:04:08 crc kubenswrapper[4772]: I1122 12:04:08.130450 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"06ce39ce-1767-417d-92bc-77f78e1df6a8","Type":"ContainerStarted","Data":"6353706ea1f00cb43557b35519ffc488e662ec9d392a14a0597952f231e7e2e1"} Nov 22 12:04:08 crc kubenswrapper[4772]: I1122 12:04:08.130971 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"06ce39ce-1767-417d-92bc-77f78e1df6a8","Type":"ContainerStarted","Data":"5ed00f6e994af8c42c7e6e949c6f7f0a8d7ae542ca750808435b3dfbb72ad52c"} Nov 22 12:04:08 crc kubenswrapper[4772]: I1122 12:04:08.138068 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" 
event={"ID":"0b2e69ae-0817-4c5e-9b7d-93196d354047","Type":"ContainerStarted","Data":"a616b9419c36e826848002f9fada8c27c0923f67c433bef19cf11a9ec5a4d3b9"} Nov 22 12:04:08 crc kubenswrapper[4772]: I1122 12:04:08.138120 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"0b2e69ae-0817-4c5e-9b7d-93196d354047","Type":"ContainerStarted","Data":"cded005661426d4b17dd767004b2c5faac5afb2bbf61fd14ac21cb18b7beb3d6"} Nov 22 12:04:08 crc kubenswrapper[4772]: I1122 12:04:08.141569 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"458e152d-811c-44c7-913f-054116c0523d","Type":"ContainerStarted","Data":"3d393e9dbcf23c020bad65bec2c6b48a6c3cedbf27a3be4d76f1e03d9786d04d"} Nov 22 12:04:08 crc kubenswrapper[4772]: I1122 12:04:08.174267 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 22 12:04:08 crc kubenswrapper[4772]: W1122 12:04:08.188058 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podefafb1c9_8e1e_41ae_8d18_04588d896fc4.slice/crio-9f0155401cfbb7d368d1e219e97187a2de6621e5f9b673ba13637ed99e412990 WatchSource:0}: Error finding container 9f0155401cfbb7d368d1e219e97187a2de6621e5f9b673ba13637ed99e412990: Status 404 returned error can't find the container with id 9f0155401cfbb7d368d1e219e97187a2de6621e5f9b673ba13637ed99e412990 Nov 22 12:04:08 crc kubenswrapper[4772]: I1122 12:04:08.274006 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Nov 22 12:04:08 crc kubenswrapper[4772]: W1122 12:04:08.282586 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod82717dcc_f9bd_40e8_8125_710e5a3f9374.slice/crio-ff36139828937cb7991b4a89ce04be9d54cfb0ae2afce294d74d11f4891d1ee8 WatchSource:0}: Error finding container ff36139828937cb7991b4a89ce04be9d54cfb0ae2afce294d74d11f4891d1ee8: Status 404 returned error can't find the container with id ff36139828937cb7991b4a89ce04be9d54cfb0ae2afce294d74d11f4891d1ee8 Nov 22 12:04:09 crc kubenswrapper[4772]: I1122 12:04:09.154030 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"82717dcc-f9bd-40e8-8125-710e5a3f9374","Type":"ContainerStarted","Data":"7ce71a1391314589d1e50f1fad764ca05bfb0c3b54c204ca7027870aa9a88489"} Nov 22 12:04:09 crc kubenswrapper[4772]: I1122 12:04:09.154532 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"82717dcc-f9bd-40e8-8125-710e5a3f9374","Type":"ContainerStarted","Data":"70938a014674a272c8ab659a55cb6bcb850c73e5d1e67df67a466a35c5ee88bb"} Nov 22 12:04:09 crc kubenswrapper[4772]: I1122 12:04:09.154547 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"82717dcc-f9bd-40e8-8125-710e5a3f9374","Type":"ContainerStarted","Data":"ff36139828937cb7991b4a89ce04be9d54cfb0ae2afce294d74d11f4891d1ee8"} Nov 22 12:04:09 crc kubenswrapper[4772]: I1122 12:04:09.157407 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"0b2e69ae-0817-4c5e-9b7d-93196d354047","Type":"ContainerStarted","Data":"a34f8e5fa57c34e23e03954c95b2c8fe4c9a77b4134fdd480fad149ff69f41f9"} Nov 22 12:04:09 crc kubenswrapper[4772]: I1122 12:04:09.161386 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" 
event={"ID":"efafb1c9-8e1e-41ae-8d18-04588d896fc4","Type":"ContainerStarted","Data":"ae134b4f2fb40fe225a21bd6441dac0ff2aefe0cfd761a3ee9f867ce10d579a0"} Nov 22 12:04:09 crc kubenswrapper[4772]: I1122 12:04:09.161443 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"efafb1c9-8e1e-41ae-8d18-04588d896fc4","Type":"ContainerStarted","Data":"21e6aecc0408f38bf176217fd6ee709a809169fefea6ef4d107c079573fd9978"} Nov 22 12:04:09 crc kubenswrapper[4772]: I1122 12:04:09.161457 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"efafb1c9-8e1e-41ae-8d18-04588d896fc4","Type":"ContainerStarted","Data":"9f0155401cfbb7d368d1e219e97187a2de6621e5f9b673ba13637ed99e412990"} Nov 22 12:04:09 crc kubenswrapper[4772]: I1122 12:04:09.165363 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"458e152d-811c-44c7-913f-054116c0523d","Type":"ContainerStarted","Data":"3d7a143c992bbeb5665b434d35af615437f56ddeed3650a8519d1335f92c5eed"} Nov 22 12:04:09 crc kubenswrapper[4772]: I1122 12:04:09.165450 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"458e152d-811c-44c7-913f-054116c0523d","Type":"ContainerStarted","Data":"200a66b65d9a8e0c38bc39c2b97e12c53ab11a65d2b3a51501afe3cc67026e78"} Nov 22 12:04:09 crc kubenswrapper[4772]: I1122 12:04:09.169955 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"06ce39ce-1767-417d-92bc-77f78e1df6a8","Type":"ContainerStarted","Data":"746d987135341f1ee9b21ed5c7610e7b935728d5b56ee65b302ac33121ab0299"} Nov 22 12:04:09 crc kubenswrapper[4772]: I1122 12:04:09.179171 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-1" podStartSLOduration=4.179148685 podStartE2EDuration="4.179148685s" podCreationTimestamp="2025-11-22 12:04:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:04:09.173470326 +0000 UTC m=+5169.412914820" watchObservedRunningTime="2025-11-22 12:04:09.179148685 +0000 UTC m=+5169.418593179" Nov 22 12:04:09 crc kubenswrapper[4772]: I1122 12:04:09.193507 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-1" podStartSLOduration=4.193486737 podStartE2EDuration="4.193486737s" podCreationTimestamp="2025-11-22 12:04:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:04:09.192407231 +0000 UTC m=+5169.431851735" watchObservedRunningTime="2025-11-22 12:04:09.193486737 +0000 UTC m=+5169.432931231" Nov 22 12:04:09 crc kubenswrapper[4772]: I1122 12:04:09.212780 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Nov 22 12:04:09 crc kubenswrapper[4772]: I1122 12:04:09.272254 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-2" podStartSLOduration=4.272231222 podStartE2EDuration="4.272231222s" podCreationTimestamp="2025-11-22 12:04:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:04:09.227723038 +0000 UTC m=+5169.467167532" watchObservedRunningTime="2025-11-22 12:04:09.272231222 +0000 UTC m=+5169.511675716" Nov 22 12:04:09 crc kubenswrapper[4772]: I1122 
12:04:09.282992 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=4.282962395 podStartE2EDuration="4.282962395s" podCreationTimestamp="2025-11-22 12:04:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:04:09.260935834 +0000 UTC m=+5169.500380348" watchObservedRunningTime="2025-11-22 12:04:09.282962395 +0000 UTC m=+5169.522406889" Nov 22 12:04:09 crc kubenswrapper[4772]: I1122 12:04:09.301542 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=4.301520281 podStartE2EDuration="4.301520281s" podCreationTimestamp="2025-11-22 12:04:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:04:09.30065978 +0000 UTC m=+5169.540104274" watchObservedRunningTime="2025-11-22 12:04:09.301520281 +0000 UTC m=+5169.540964775" Nov 22 12:04:10 crc kubenswrapper[4772]: I1122 12:04:10.122265 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-2" Nov 22 12:04:10 crc kubenswrapper[4772]: I1122 12:04:10.129535 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-1" Nov 22 12:04:10 crc kubenswrapper[4772]: I1122 12:04:10.166298 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-1" Nov 22 12:04:10 crc kubenswrapper[4772]: I1122 12:04:10.189102 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"513b4b3b-e546-414e-ac74-f7730e8db4d1","Type":"ContainerStarted","Data":"cf1b4ee961cf05ef3b8174c2b8f2d7f5bc754800d482bd3bda7dcb98f74347b9"} Nov 22 12:04:10 crc kubenswrapper[4772]: I1122 12:04:10.189154 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"513b4b3b-e546-414e-ac74-f7730e8db4d1","Type":"ContainerStarted","Data":"9e1d2f78a1423fcd2609753b65ee64c2b303452ce039b78aba54f4a7370fcebe"} Nov 22 12:04:10 crc kubenswrapper[4772]: I1122 12:04:10.189165 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"513b4b3b-e546-414e-ac74-f7730e8db4d1","Type":"ContainerStarted","Data":"dc5c83b7014bc70c8716c1d7b765dcf3b3756380d6f7f8fade235d4d32aa5ef0"} Nov 22 12:04:10 crc kubenswrapper[4772]: I1122 12:04:10.190927 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-1" Nov 22 12:04:10 crc kubenswrapper[4772]: I1122 12:04:10.216193 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-2" podStartSLOduration=5.216158619 podStartE2EDuration="5.216158619s" podCreationTimestamp="2025-11-22 12:04:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:04:10.215455672 +0000 UTC m=+5170.454900176" watchObservedRunningTime="2025-11-22 12:04:10.216158619 +0000 UTC m=+5170.455603113" Nov 22 12:04:10 crc kubenswrapper[4772]: I1122 12:04:10.403944 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Nov 22 12:04:10 crc kubenswrapper[4772]: I1122 12:04:10.444281 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/ovsdbserver-sb-0" Nov 22 12:04:10 crc kubenswrapper[4772]: I1122 12:04:10.452946 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-1" Nov 22 12:04:10 crc kubenswrapper[4772]: I1122 12:04:10.464399 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-2" Nov 22 12:04:12 crc kubenswrapper[4772]: I1122 12:04:12.122784 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-2" Nov 22 12:04:12 crc kubenswrapper[4772]: I1122 12:04:12.190335 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-1" Nov 22 12:04:12 crc kubenswrapper[4772]: I1122 12:04:12.423231 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Nov 22 12:04:12 crc kubenswrapper[4772]: I1122 12:04:12.443808 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Nov 22 12:04:12 crc kubenswrapper[4772]: I1122 12:04:12.452751 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-1" Nov 22 12:04:12 crc kubenswrapper[4772]: I1122 12:04:12.464388 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-2" Nov 22 12:04:12 crc kubenswrapper[4772]: I1122 12:04:12.524209 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b94f8f99-4fxlp"] Nov 22 12:04:12 crc kubenswrapper[4772]: I1122 12:04:12.526027 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b94f8f99-4fxlp" Nov 22 12:04:12 crc kubenswrapper[4772]: I1122 12:04:12.529548 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Nov 22 12:04:12 crc kubenswrapper[4772]: I1122 12:04:12.542957 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b94f8f99-4fxlp"] Nov 22 12:04:12 crc kubenswrapper[4772]: I1122 12:04:12.594062 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70c63688-8dcf-4e8c-aff8-3b1b178db406-ovsdbserver-nb\") pod \"dnsmasq-dns-5b94f8f99-4fxlp\" (UID: \"70c63688-8dcf-4e8c-aff8-3b1b178db406\") " pod="openstack/dnsmasq-dns-5b94f8f99-4fxlp" Nov 22 12:04:12 crc kubenswrapper[4772]: I1122 12:04:12.594180 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70c63688-8dcf-4e8c-aff8-3b1b178db406-dns-svc\") pod \"dnsmasq-dns-5b94f8f99-4fxlp\" (UID: \"70c63688-8dcf-4e8c-aff8-3b1b178db406\") " pod="openstack/dnsmasq-dns-5b94f8f99-4fxlp" Nov 22 12:04:12 crc kubenswrapper[4772]: I1122 12:04:12.594275 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xc7w7\" (UniqueName: \"kubernetes.io/projected/70c63688-8dcf-4e8c-aff8-3b1b178db406-kube-api-access-xc7w7\") pod \"dnsmasq-dns-5b94f8f99-4fxlp\" (UID: \"70c63688-8dcf-4e8c-aff8-3b1b178db406\") " pod="openstack/dnsmasq-dns-5b94f8f99-4fxlp" Nov 22 12:04:12 crc kubenswrapper[4772]: I1122 12:04:12.594346 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70c63688-8dcf-4e8c-aff8-3b1b178db406-config\") pod \"dnsmasq-dns-5b94f8f99-4fxlp\" 
(UID: \"70c63688-8dcf-4e8c-aff8-3b1b178db406\") " pod="openstack/dnsmasq-dns-5b94f8f99-4fxlp" Nov 22 12:04:12 crc kubenswrapper[4772]: I1122 12:04:12.695852 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xc7w7\" (UniqueName: \"kubernetes.io/projected/70c63688-8dcf-4e8c-aff8-3b1b178db406-kube-api-access-xc7w7\") pod \"dnsmasq-dns-5b94f8f99-4fxlp\" (UID: \"70c63688-8dcf-4e8c-aff8-3b1b178db406\") " pod="openstack/dnsmasq-dns-5b94f8f99-4fxlp" Nov 22 12:04:12 crc kubenswrapper[4772]: I1122 12:04:12.695930 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70c63688-8dcf-4e8c-aff8-3b1b178db406-config\") pod \"dnsmasq-dns-5b94f8f99-4fxlp\" (UID: \"70c63688-8dcf-4e8c-aff8-3b1b178db406\") " pod="openstack/dnsmasq-dns-5b94f8f99-4fxlp" Nov 22 12:04:12 crc kubenswrapper[4772]: I1122 12:04:12.695991 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70c63688-8dcf-4e8c-aff8-3b1b178db406-ovsdbserver-nb\") pod \"dnsmasq-dns-5b94f8f99-4fxlp\" (UID: \"70c63688-8dcf-4e8c-aff8-3b1b178db406\") " pod="openstack/dnsmasq-dns-5b94f8f99-4fxlp" Nov 22 12:04:12 crc kubenswrapper[4772]: I1122 12:04:12.696030 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70c63688-8dcf-4e8c-aff8-3b1b178db406-dns-svc\") pod \"dnsmasq-dns-5b94f8f99-4fxlp\" (UID: \"70c63688-8dcf-4e8c-aff8-3b1b178db406\") " pod="openstack/dnsmasq-dns-5b94f8f99-4fxlp" Nov 22 12:04:12 crc kubenswrapper[4772]: I1122 12:04:12.697082 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70c63688-8dcf-4e8c-aff8-3b1b178db406-dns-svc\") pod \"dnsmasq-dns-5b94f8f99-4fxlp\" (UID: \"70c63688-8dcf-4e8c-aff8-3b1b178db406\") " pod="openstack/dnsmasq-dns-5b94f8f99-4fxlp" Nov 22 12:04:12 crc kubenswrapper[4772]: I1122 12:04:12.697486 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70c63688-8dcf-4e8c-aff8-3b1b178db406-config\") pod \"dnsmasq-dns-5b94f8f99-4fxlp\" (UID: \"70c63688-8dcf-4e8c-aff8-3b1b178db406\") " pod="openstack/dnsmasq-dns-5b94f8f99-4fxlp" Nov 22 12:04:12 crc kubenswrapper[4772]: I1122 12:04:12.697666 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70c63688-8dcf-4e8c-aff8-3b1b178db406-ovsdbserver-nb\") pod \"dnsmasq-dns-5b94f8f99-4fxlp\" (UID: \"70c63688-8dcf-4e8c-aff8-3b1b178db406\") " pod="openstack/dnsmasq-dns-5b94f8f99-4fxlp" Nov 22 12:04:12 crc kubenswrapper[4772]: I1122 12:04:12.723723 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xc7w7\" (UniqueName: \"kubernetes.io/projected/70c63688-8dcf-4e8c-aff8-3b1b178db406-kube-api-access-xc7w7\") pod \"dnsmasq-dns-5b94f8f99-4fxlp\" (UID: \"70c63688-8dcf-4e8c-aff8-3b1b178db406\") " pod="openstack/dnsmasq-dns-5b94f8f99-4fxlp" Nov 22 12:04:12 crc kubenswrapper[4772]: I1122 12:04:12.856302 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b94f8f99-4fxlp" Nov 22 12:04:13 crc kubenswrapper[4772]: I1122 12:04:13.140798 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b94f8f99-4fxlp"] Nov 22 12:04:13 crc kubenswrapper[4772]: I1122 12:04:13.197206 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-2" Nov 22 12:04:13 crc kubenswrapper[4772]: I1122 12:04:13.226698 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b94f8f99-4fxlp" event={"ID":"70c63688-8dcf-4e8c-aff8-3b1b178db406","Type":"ContainerStarted","Data":"7c38a579c9504df9869d13246ca687332de6a49be889a7cde0a5756d4004fe25"} Nov 22 12:04:13 crc kubenswrapper[4772]: I1122 12:04:13.284458 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-2" Nov 22 12:04:13 crc kubenswrapper[4772]: I1122 12:04:13.451033 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Nov 22 12:04:13 crc kubenswrapper[4772]: I1122 12:04:13.494563 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Nov 22 12:04:13 crc kubenswrapper[4772]: I1122 12:04:13.507583 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-1" Nov 22 12:04:13 crc kubenswrapper[4772]: I1122 12:04:13.508444 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Nov 22 12:04:13 crc kubenswrapper[4772]: I1122 12:04:13.527007 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-2" Nov 22 12:04:13 crc kubenswrapper[4772]: I1122 12:04:13.600798 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Nov 22 12:04:13 crc kubenswrapper[4772]: I1122 12:04:13.619943 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-1" Nov 22 12:04:14 crc kubenswrapper[4772]: I1122 12:04:14.095682 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b94f8f99-4fxlp"] Nov 22 12:04:14 crc kubenswrapper[4772]: I1122 12:04:14.123268 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75d748cc57-4mt57"] Nov 22 12:04:14 crc kubenswrapper[4772]: I1122 12:04:14.124810 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75d748cc57-4mt57" Nov 22 12:04:14 crc kubenswrapper[4772]: I1122 12:04:14.130790 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Nov 22 12:04:14 crc kubenswrapper[4772]: I1122 12:04:14.147164 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75d748cc57-4mt57"] Nov 22 12:04:14 crc kubenswrapper[4772]: I1122 12:04:14.233322 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kfrp\" (UniqueName: \"kubernetes.io/projected/24373a98-48ad-4a03-a1c0-7b0c366f5a0a-kube-api-access-8kfrp\") pod \"dnsmasq-dns-75d748cc57-4mt57\" (UID: \"24373a98-48ad-4a03-a1c0-7b0c366f5a0a\") " pod="openstack/dnsmasq-dns-75d748cc57-4mt57" Nov 22 12:04:14 crc kubenswrapper[4772]: I1122 12:04:14.233549 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24373a98-48ad-4a03-a1c0-7b0c366f5a0a-dns-svc\") pod \"dnsmasq-dns-75d748cc57-4mt57\" (UID: \"24373a98-48ad-4a03-a1c0-7b0c366f5a0a\") " pod="openstack/dnsmasq-dns-75d748cc57-4mt57" Nov 22 12:04:14 crc kubenswrapper[4772]: I1122 12:04:14.233702 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24373a98-48ad-4a03-a1c0-7b0c366f5a0a-config\") pod \"dnsmasq-dns-75d748cc57-4mt57\" (UID: \"24373a98-48ad-4a03-a1c0-7b0c366f5a0a\") " pod="openstack/dnsmasq-dns-75d748cc57-4mt57" Nov 22 12:04:14 crc kubenswrapper[4772]: I1122 12:04:14.233747 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/24373a98-48ad-4a03-a1c0-7b0c366f5a0a-ovsdbserver-nb\") pod \"dnsmasq-dns-75d748cc57-4mt57\" (UID: \"24373a98-48ad-4a03-a1c0-7b0c366f5a0a\") " pod="openstack/dnsmasq-dns-75d748cc57-4mt57" Nov 22 12:04:14 crc kubenswrapper[4772]: I1122 12:04:14.233784 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/24373a98-48ad-4a03-a1c0-7b0c366f5a0a-ovsdbserver-sb\") pod \"dnsmasq-dns-75d748cc57-4mt57\" (UID: \"24373a98-48ad-4a03-a1c0-7b0c366f5a0a\") " pod="openstack/dnsmasq-dns-75d748cc57-4mt57" Nov 22 12:04:14 crc kubenswrapper[4772]: I1122 12:04:14.237033 4772 generic.go:334] "Generic (PLEG): container finished" podID="70c63688-8dcf-4e8c-aff8-3b1b178db406" containerID="227f9547d5aea22958d27a2208274e3bc651b097d26ce19cb48eda88edf3af9e" exitCode=0 Nov 22 12:04:14 crc kubenswrapper[4772]: I1122 12:04:14.238154 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b94f8f99-4fxlp" event={"ID":"70c63688-8dcf-4e8c-aff8-3b1b178db406","Type":"ContainerDied","Data":"227f9547d5aea22958d27a2208274e3bc651b097d26ce19cb48eda88edf3af9e"} Nov 22 12:04:14 crc kubenswrapper[4772]: I1122 12:04:14.300980 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-2" Nov 22 12:04:14 crc kubenswrapper[4772]: I1122 12:04:14.335014 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24373a98-48ad-4a03-a1c0-7b0c366f5a0a-config\") pod \"dnsmasq-dns-75d748cc57-4mt57\" (UID: \"24373a98-48ad-4a03-a1c0-7b0c366f5a0a\") " pod="openstack/dnsmasq-dns-75d748cc57-4mt57" Nov 22 12:04:14 crc 
kubenswrapper[4772]: I1122 12:04:14.335125 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/24373a98-48ad-4a03-a1c0-7b0c366f5a0a-ovsdbserver-nb\") pod \"dnsmasq-dns-75d748cc57-4mt57\" (UID: \"24373a98-48ad-4a03-a1c0-7b0c366f5a0a\") " pod="openstack/dnsmasq-dns-75d748cc57-4mt57" Nov 22 12:04:14 crc kubenswrapper[4772]: I1122 12:04:14.335147 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/24373a98-48ad-4a03-a1c0-7b0c366f5a0a-ovsdbserver-sb\") pod \"dnsmasq-dns-75d748cc57-4mt57\" (UID: \"24373a98-48ad-4a03-a1c0-7b0c366f5a0a\") " pod="openstack/dnsmasq-dns-75d748cc57-4mt57" Nov 22 12:04:14 crc kubenswrapper[4772]: I1122 12:04:14.335201 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kfrp\" (UniqueName: \"kubernetes.io/projected/24373a98-48ad-4a03-a1c0-7b0c366f5a0a-kube-api-access-8kfrp\") pod \"dnsmasq-dns-75d748cc57-4mt57\" (UID: \"24373a98-48ad-4a03-a1c0-7b0c366f5a0a\") " pod="openstack/dnsmasq-dns-75d748cc57-4mt57" Nov 22 12:04:14 crc kubenswrapper[4772]: I1122 12:04:14.335257 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24373a98-48ad-4a03-a1c0-7b0c366f5a0a-dns-svc\") pod \"dnsmasq-dns-75d748cc57-4mt57\" (UID: \"24373a98-48ad-4a03-a1c0-7b0c366f5a0a\") " pod="openstack/dnsmasq-dns-75d748cc57-4mt57" Nov 22 12:04:14 crc kubenswrapper[4772]: I1122 12:04:14.352459 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24373a98-48ad-4a03-a1c0-7b0c366f5a0a-config\") pod \"dnsmasq-dns-75d748cc57-4mt57\" (UID: \"24373a98-48ad-4a03-a1c0-7b0c366f5a0a\") " pod="openstack/dnsmasq-dns-75d748cc57-4mt57" Nov 22 12:04:14 crc kubenswrapper[4772]: I1122 12:04:14.353865 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/24373a98-48ad-4a03-a1c0-7b0c366f5a0a-ovsdbserver-nb\") pod \"dnsmasq-dns-75d748cc57-4mt57\" (UID: \"24373a98-48ad-4a03-a1c0-7b0c366f5a0a\") " pod="openstack/dnsmasq-dns-75d748cc57-4mt57" Nov 22 12:04:14 crc kubenswrapper[4772]: I1122 12:04:14.353968 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/24373a98-48ad-4a03-a1c0-7b0c366f5a0a-ovsdbserver-sb\") pod \"dnsmasq-dns-75d748cc57-4mt57\" (UID: \"24373a98-48ad-4a03-a1c0-7b0c366f5a0a\") " pod="openstack/dnsmasq-dns-75d748cc57-4mt57" Nov 22 12:04:14 crc kubenswrapper[4772]: I1122 12:04:14.354101 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24373a98-48ad-4a03-a1c0-7b0c366f5a0a-dns-svc\") pod \"dnsmasq-dns-75d748cc57-4mt57\" (UID: \"24373a98-48ad-4a03-a1c0-7b0c366f5a0a\") " pod="openstack/dnsmasq-dns-75d748cc57-4mt57" Nov 22 12:04:14 crc kubenswrapper[4772]: I1122 12:04:14.360014 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kfrp\" (UniqueName: \"kubernetes.io/projected/24373a98-48ad-4a03-a1c0-7b0c366f5a0a-kube-api-access-8kfrp\") pod \"dnsmasq-dns-75d748cc57-4mt57\" (UID: \"24373a98-48ad-4a03-a1c0-7b0c366f5a0a\") " pod="openstack/dnsmasq-dns-75d748cc57-4mt57" Nov 22 12:04:14 crc kubenswrapper[4772]: I1122 12:04:14.453785 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75d748cc57-4mt57" Nov 22 12:04:14 crc kubenswrapper[4772]: I1122 12:04:14.933753 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75d748cc57-4mt57"] Nov 22 12:04:14 crc kubenswrapper[4772]: W1122 12:04:14.939222 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24373a98_48ad_4a03_a1c0_7b0c366f5a0a.slice/crio-e1bcc1e48309d0a664c754bb804865d75ac1adf28ce0d16c0199e3337cee3b99 WatchSource:0}: Error finding container e1bcc1e48309d0a664c754bb804865d75ac1adf28ce0d16c0199e3337cee3b99: Status 404 returned error can't find the container with id e1bcc1e48309d0a664c754bb804865d75ac1adf28ce0d16c0199e3337cee3b99 Nov 22 12:04:15 crc kubenswrapper[4772]: I1122 12:04:15.281362 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b94f8f99-4fxlp" event={"ID":"70c63688-8dcf-4e8c-aff8-3b1b178db406","Type":"ContainerStarted","Data":"65018d3dcbc0fac5c5e4348f99f8c493989f17714badfaf0618ed25c87ebf900"} Nov 22 12:04:15 crc kubenswrapper[4772]: I1122 12:04:15.281474 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b94f8f99-4fxlp" Nov 22 12:04:15 crc kubenswrapper[4772]: I1122 12:04:15.283113 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b94f8f99-4fxlp" podUID="70c63688-8dcf-4e8c-aff8-3b1b178db406" containerName="dnsmasq-dns" containerID="cri-o://65018d3dcbc0fac5c5e4348f99f8c493989f17714badfaf0618ed25c87ebf900" gracePeriod=10 Nov 22 12:04:15 crc kubenswrapper[4772]: I1122 12:04:15.284278 4772 generic.go:334] "Generic (PLEG): container finished" podID="24373a98-48ad-4a03-a1c0-7b0c366f5a0a" containerID="361cd88685d6838aeb310ac0b42d962d77b470a17c9c916be0cba6f3e164baec" exitCode=0 Nov 22 12:04:15 crc kubenswrapper[4772]: I1122 12:04:15.284312 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75d748cc57-4mt57" event={"ID":"24373a98-48ad-4a03-a1c0-7b0c366f5a0a","Type":"ContainerDied","Data":"361cd88685d6838aeb310ac0b42d962d77b470a17c9c916be0cba6f3e164baec"} Nov 22 12:04:15 crc kubenswrapper[4772]: I1122 12:04:15.284348 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75d748cc57-4mt57" event={"ID":"24373a98-48ad-4a03-a1c0-7b0c366f5a0a","Type":"ContainerStarted","Data":"e1bcc1e48309d0a664c754bb804865d75ac1adf28ce0d16c0199e3337cee3b99"} Nov 22 12:04:15 crc kubenswrapper[4772]: I1122 12:04:15.316352 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b94f8f99-4fxlp" podStartSLOduration=3.316331808 podStartE2EDuration="3.316331808s" podCreationTimestamp="2025-11-22 12:04:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:04:15.310519376 +0000 UTC m=+5175.549963880" watchObservedRunningTime="2025-11-22 12:04:15.316331808 +0000 UTC m=+5175.555776302" Nov 22 12:04:15 crc kubenswrapper[4772]: I1122 12:04:15.699913 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b94f8f99-4fxlp" Nov 22 12:04:15 crc kubenswrapper[4772]: I1122 12:04:15.877438 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xc7w7\" (UniqueName: \"kubernetes.io/projected/70c63688-8dcf-4e8c-aff8-3b1b178db406-kube-api-access-xc7w7\") pod \"70c63688-8dcf-4e8c-aff8-3b1b178db406\" (UID: \"70c63688-8dcf-4e8c-aff8-3b1b178db406\") " Nov 22 12:04:15 crc kubenswrapper[4772]: I1122 12:04:15.878461 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70c63688-8dcf-4e8c-aff8-3b1b178db406-config\") pod \"70c63688-8dcf-4e8c-aff8-3b1b178db406\" (UID: \"70c63688-8dcf-4e8c-aff8-3b1b178db406\") " Nov 22 12:04:15 crc kubenswrapper[4772]: I1122 12:04:15.878503 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70c63688-8dcf-4e8c-aff8-3b1b178db406-ovsdbserver-nb\") pod \"70c63688-8dcf-4e8c-aff8-3b1b178db406\" (UID: \"70c63688-8dcf-4e8c-aff8-3b1b178db406\") " Nov 22 12:04:15 crc kubenswrapper[4772]: I1122 12:04:15.878537 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70c63688-8dcf-4e8c-aff8-3b1b178db406-dns-svc\") pod \"70c63688-8dcf-4e8c-aff8-3b1b178db406\" (UID: \"70c63688-8dcf-4e8c-aff8-3b1b178db406\") " Nov 22 12:04:15 crc kubenswrapper[4772]: I1122 12:04:15.884882 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70c63688-8dcf-4e8c-aff8-3b1b178db406-kube-api-access-xc7w7" (OuterVolumeSpecName: "kube-api-access-xc7w7") pod "70c63688-8dcf-4e8c-aff8-3b1b178db406" (UID: "70c63688-8dcf-4e8c-aff8-3b1b178db406"). InnerVolumeSpecName "kube-api-access-xc7w7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:04:15 crc kubenswrapper[4772]: I1122 12:04:15.927537 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70c63688-8dcf-4e8c-aff8-3b1b178db406-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "70c63688-8dcf-4e8c-aff8-3b1b178db406" (UID: "70c63688-8dcf-4e8c-aff8-3b1b178db406"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 12:04:15 crc kubenswrapper[4772]: I1122 12:04:15.935587 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70c63688-8dcf-4e8c-aff8-3b1b178db406-config" (OuterVolumeSpecName: "config") pod "70c63688-8dcf-4e8c-aff8-3b1b178db406" (UID: "70c63688-8dcf-4e8c-aff8-3b1b178db406"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 12:04:15 crc kubenswrapper[4772]: I1122 12:04:15.948606 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70c63688-8dcf-4e8c-aff8-3b1b178db406-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "70c63688-8dcf-4e8c-aff8-3b1b178db406" (UID: "70c63688-8dcf-4e8c-aff8-3b1b178db406"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 12:04:15 crc kubenswrapper[4772]: I1122 12:04:15.981876 4772 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70c63688-8dcf-4e8c-aff8-3b1b178db406-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 12:04:15 crc kubenswrapper[4772]: I1122 12:04:15.981919 4772 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70c63688-8dcf-4e8c-aff8-3b1b178db406-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 12:04:15 crc kubenswrapper[4772]: I1122 12:04:15.981933 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xc7w7\" (UniqueName: \"kubernetes.io/projected/70c63688-8dcf-4e8c-aff8-3b1b178db406-kube-api-access-xc7w7\") on node \"crc\" DevicePath \"\"" Nov 22 12:04:15 crc kubenswrapper[4772]: I1122 12:04:15.981949 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70c63688-8dcf-4e8c-aff8-3b1b178db406-config\") on node \"crc\" DevicePath \"\"" Nov 22 12:04:16 crc kubenswrapper[4772]: I1122 12:04:16.298696 4772 generic.go:334] "Generic (PLEG): container finished" podID="70c63688-8dcf-4e8c-aff8-3b1b178db406" containerID="65018d3dcbc0fac5c5e4348f99f8c493989f17714badfaf0618ed25c87ebf900" exitCode=0 Nov 22 12:04:16 crc kubenswrapper[4772]: I1122 12:04:16.298800 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b94f8f99-4fxlp" event={"ID":"70c63688-8dcf-4e8c-aff8-3b1b178db406","Type":"ContainerDied","Data":"65018d3dcbc0fac5c5e4348f99f8c493989f17714badfaf0618ed25c87ebf900"} Nov 22 12:04:16 crc kubenswrapper[4772]: I1122 12:04:16.298841 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b94f8f99-4fxlp" event={"ID":"70c63688-8dcf-4e8c-aff8-3b1b178db406","Type":"ContainerDied","Data":"7c38a579c9504df9869d13246ca687332de6a49be889a7cde0a5756d4004fe25"} Nov 22 12:04:16 crc kubenswrapper[4772]: I1122 12:04:16.298897 4772 scope.go:117] "RemoveContainer" containerID="65018d3dcbc0fac5c5e4348f99f8c493989f17714badfaf0618ed25c87ebf900" Nov 22 12:04:16 crc kubenswrapper[4772]: I1122 12:04:16.299159 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b94f8f99-4fxlp" Nov 22 12:04:16 crc kubenswrapper[4772]: I1122 12:04:16.303529 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75d748cc57-4mt57" event={"ID":"24373a98-48ad-4a03-a1c0-7b0c366f5a0a","Type":"ContainerStarted","Data":"b4b8758e0d60fae224833126fa7861be41daeab8f3f7075378ebb7a58bd59b76"} Nov 22 12:04:16 crc kubenswrapper[4772]: I1122 12:04:16.303919 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-75d748cc57-4mt57" Nov 22 12:04:16 crc kubenswrapper[4772]: I1122 12:04:16.328699 4772 scope.go:117] "RemoveContainer" containerID="227f9547d5aea22958d27a2208274e3bc651b097d26ce19cb48eda88edf3af9e" Nov 22 12:04:16 crc kubenswrapper[4772]: I1122 12:04:16.358732 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-75d748cc57-4mt57" podStartSLOduration=2.358675995 podStartE2EDuration="2.358675995s" podCreationTimestamp="2025-11-22 12:04:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:04:16.34914581 +0000 UTC m=+5176.588590344" watchObservedRunningTime="2025-11-22 12:04:16.358675995 +0000 UTC m=+5176.598120579" Nov 22 12:04:16 crc kubenswrapper[4772]: I1122 12:04:16.370648 4772 scope.go:117] "RemoveContainer" containerID="65018d3dcbc0fac5c5e4348f99f8c493989f17714badfaf0618ed25c87ebf900" Nov 22 12:04:16 crc kubenswrapper[4772]: I1122 12:04:16.379963 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b94f8f99-4fxlp"] Nov 22 12:04:16 crc kubenswrapper[4772]: E1122 12:04:16.380431 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65018d3dcbc0fac5c5e4348f99f8c493989f17714badfaf0618ed25c87ebf900\": container with ID starting with 65018d3dcbc0fac5c5e4348f99f8c493989f17714badfaf0618ed25c87ebf900 not found: ID does not exist" containerID="65018d3dcbc0fac5c5e4348f99f8c493989f17714badfaf0618ed25c87ebf900" Nov 22 12:04:16 crc kubenswrapper[4772]: I1122 12:04:16.380521 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65018d3dcbc0fac5c5e4348f99f8c493989f17714badfaf0618ed25c87ebf900"} err="failed to get container status \"65018d3dcbc0fac5c5e4348f99f8c493989f17714badfaf0618ed25c87ebf900\": rpc error: code = NotFound desc = could not find container \"65018d3dcbc0fac5c5e4348f99f8c493989f17714badfaf0618ed25c87ebf900\": container with ID starting with 65018d3dcbc0fac5c5e4348f99f8c493989f17714badfaf0618ed25c87ebf900 not found: ID does not exist" Nov 22 12:04:16 crc kubenswrapper[4772]: I1122 12:04:16.380610 4772 scope.go:117] "RemoveContainer" containerID="227f9547d5aea22958d27a2208274e3bc651b097d26ce19cb48eda88edf3af9e" Nov 22 12:04:16 crc kubenswrapper[4772]: E1122 12:04:16.381456 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"227f9547d5aea22958d27a2208274e3bc651b097d26ce19cb48eda88edf3af9e\": container with ID starting with 227f9547d5aea22958d27a2208274e3bc651b097d26ce19cb48eda88edf3af9e not found: ID does not exist" containerID="227f9547d5aea22958d27a2208274e3bc651b097d26ce19cb48eda88edf3af9e" Nov 22 12:04:16 crc kubenswrapper[4772]: I1122 12:04:16.381565 4772 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"227f9547d5aea22958d27a2208274e3bc651b097d26ce19cb48eda88edf3af9e"} err="failed to get container status \"227f9547d5aea22958d27a2208274e3bc651b097d26ce19cb48eda88edf3af9e\": rpc error: code = NotFound desc = could not find container \"227f9547d5aea22958d27a2208274e3bc651b097d26ce19cb48eda88edf3af9e\": container with ID starting with 227f9547d5aea22958d27a2208274e3bc651b097d26ce19cb48eda88edf3af9e not found: ID does not exist" Nov 22 12:04:16 crc kubenswrapper[4772]: I1122 12:04:16.385935 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b94f8f99-4fxlp"] Nov 22 12:04:16 crc kubenswrapper[4772]: I1122 12:04:16.807456 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-copy-data"] Nov 22 12:04:16 crc kubenswrapper[4772]: E1122 12:04:16.808248 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70c63688-8dcf-4e8c-aff8-3b1b178db406" containerName="init" Nov 22 12:04:16 crc kubenswrapper[4772]: I1122 12:04:16.808350 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="70c63688-8dcf-4e8c-aff8-3b1b178db406" containerName="init" Nov 22 12:04:16 crc kubenswrapper[4772]: E1122 12:04:16.808449 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70c63688-8dcf-4e8c-aff8-3b1b178db406" containerName="dnsmasq-dns" Nov 22 12:04:16 crc kubenswrapper[4772]: I1122 12:04:16.808503 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="70c63688-8dcf-4e8c-aff8-3b1b178db406" containerName="dnsmasq-dns" Nov 22 12:04:16 crc kubenswrapper[4772]: I1122 12:04:16.809089 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="70c63688-8dcf-4e8c-aff8-3b1b178db406" containerName="dnsmasq-dns" Nov 22 12:04:16 crc kubenswrapper[4772]: I1122 12:04:16.809992 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-copy-data" Nov 22 12:04:16 crc kubenswrapper[4772]: I1122 12:04:16.821578 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Nov 22 12:04:16 crc kubenswrapper[4772]: I1122 12:04:16.833956 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovn-data-cert" Nov 22 12:04:17 crc kubenswrapper[4772]: I1122 12:04:17.003580 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/345dc3d0-626f-4aba-a03f-57583bea5a5e-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"345dc3d0-626f-4aba-a03f-57583bea5a5e\") " pod="openstack/ovn-copy-data" Nov 22 12:04:17 crc kubenswrapper[4772]: I1122 12:04:17.003689 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c758574d-a379-4c17-80b4-b2e392254542\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c758574d-a379-4c17-80b4-b2e392254542\") pod \"ovn-copy-data\" (UID: \"345dc3d0-626f-4aba-a03f-57583bea5a5e\") " pod="openstack/ovn-copy-data" Nov 22 12:04:17 crc kubenswrapper[4772]: I1122 12:04:17.003929 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-278vf\" (UniqueName: \"kubernetes.io/projected/345dc3d0-626f-4aba-a03f-57583bea5a5e-kube-api-access-278vf\") pod \"ovn-copy-data\" (UID: \"345dc3d0-626f-4aba-a03f-57583bea5a5e\") " pod="openstack/ovn-copy-data" Nov 22 12:04:17 crc kubenswrapper[4772]: I1122 12:04:17.106601 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-278vf\" (UniqueName: \"kubernetes.io/projected/345dc3d0-626f-4aba-a03f-57583bea5a5e-kube-api-access-278vf\") pod \"ovn-copy-data\" (UID: \"345dc3d0-626f-4aba-a03f-57583bea5a5e\") " pod="openstack/ovn-copy-data" Nov 22 12:04:17 crc kubenswrapper[4772]: I1122 12:04:17.106732 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/345dc3d0-626f-4aba-a03f-57583bea5a5e-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"345dc3d0-626f-4aba-a03f-57583bea5a5e\") " pod="openstack/ovn-copy-data" Nov 22 12:04:17 crc kubenswrapper[4772]: I1122 12:04:17.106787 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c758574d-a379-4c17-80b4-b2e392254542\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c758574d-a379-4c17-80b4-b2e392254542\") pod \"ovn-copy-data\" (UID: \"345dc3d0-626f-4aba-a03f-57583bea5a5e\") " pod="openstack/ovn-copy-data" Nov 22 12:04:17 crc kubenswrapper[4772]: I1122 12:04:17.111194 4772 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 22 12:04:17 crc kubenswrapper[4772]: I1122 12:04:17.111268 4772 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c758574d-a379-4c17-80b4-b2e392254542\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c758574d-a379-4c17-80b4-b2e392254542\") pod \"ovn-copy-data\" (UID: \"345dc3d0-626f-4aba-a03f-57583bea5a5e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/599509114defd9d65ec17e43dc10efd59d2574b42c21ff30fb55b3ed5ac6adb1/globalmount\"" pod="openstack/ovn-copy-data" Nov 22 12:04:17 crc kubenswrapper[4772]: I1122 12:04:17.115810 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/345dc3d0-626f-4aba-a03f-57583bea5a5e-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"345dc3d0-626f-4aba-a03f-57583bea5a5e\") " pod="openstack/ovn-copy-data" Nov 22 12:04:17 crc kubenswrapper[4772]: I1122 12:04:17.143289 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-278vf\" (UniqueName: \"kubernetes.io/projected/345dc3d0-626f-4aba-a03f-57583bea5a5e-kube-api-access-278vf\") pod \"ovn-copy-data\" (UID: \"345dc3d0-626f-4aba-a03f-57583bea5a5e\") " pod="openstack/ovn-copy-data" Nov 22 12:04:17 crc kubenswrapper[4772]: I1122 12:04:17.174683 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c758574d-a379-4c17-80b4-b2e392254542\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c758574d-a379-4c17-80b4-b2e392254542\") pod \"ovn-copy-data\" (UID: \"345dc3d0-626f-4aba-a03f-57583bea5a5e\") " pod="openstack/ovn-copy-data" Nov 22 12:04:17 crc kubenswrapper[4772]: I1122 12:04:17.423929 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70c63688-8dcf-4e8c-aff8-3b1b178db406" path="/var/lib/kubelet/pods/70c63688-8dcf-4e8c-aff8-3b1b178db406/volumes" Nov 22 12:04:17 crc kubenswrapper[4772]: I1122 12:04:17.458980 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-copy-data" Nov 22 12:04:18 crc kubenswrapper[4772]: I1122 12:04:18.163268 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Nov 22 12:04:18 crc kubenswrapper[4772]: I1122 12:04:18.340066 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"345dc3d0-626f-4aba-a03f-57583bea5a5e","Type":"ContainerStarted","Data":"0a9bd9fae09a8fe98ef90072a7fde4e7d21c8cc573a3ed2f0799c23e9097d7e2"} Nov 22 12:04:19 crc kubenswrapper[4772]: I1122 12:04:19.384661 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"345dc3d0-626f-4aba-a03f-57583bea5a5e","Type":"ContainerStarted","Data":"306d0e200e317f9e1270705f6d6517622748f34805e4336e4daff5926362e51b"} Nov 22 12:04:19 crc kubenswrapper[4772]: I1122 12:04:19.422044 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-copy-data" podStartSLOduration=4.422013747 podStartE2EDuration="4.422013747s" podCreationTimestamp="2025-11-22 12:04:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:04:19.407970552 +0000 UTC m=+5179.647415066" watchObservedRunningTime="2025-11-22 12:04:19.422013747 +0000 UTC m=+5179.661458261" Nov 22 12:04:19 crc kubenswrapper[4772]: I1122 12:04:19.427884 4772 scope.go:117] "RemoveContainer" containerID="b7c017a2ad8566061573012df3338326de3180226814eea67d7d515c52483472" Nov 22 12:04:19 crc kubenswrapper[4772]: E1122 12:04:19.429876 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:04:24 crc kubenswrapper[4772]: I1122 12:04:24.455323 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-75d748cc57-4mt57" Nov 22 12:04:24 crc kubenswrapper[4772]: I1122 12:04:24.555840 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-n5jtk"] Nov 22 12:04:24 crc kubenswrapper[4772]: I1122 12:04:24.556898 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b7946d7b9-n5jtk" podUID="c82f4e8a-aee4-4ab6-b5e1-9f3930393cfe" containerName="dnsmasq-dns" containerID="cri-o://380fb89ad5dffb56956686f16e51021d837fbcc81a91edb1671f3b67a9667403" gracePeriod=10 Nov 22 12:04:25 crc kubenswrapper[4772]: I1122 12:04:25.049736 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Nov 22 12:04:25 crc kubenswrapper[4772]: I1122 12:04:25.053905 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Nov 22 12:04:25 crc kubenswrapper[4772]: I1122 12:04:25.060502 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Nov 22 12:04:25 crc kubenswrapper[4772]: I1122 12:04:25.060844 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-hfxd5" Nov 22 12:04:25 crc kubenswrapper[4772]: I1122 12:04:25.061948 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Nov 22 12:04:25 crc kubenswrapper[4772]: I1122 12:04:25.065162 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 22 12:04:25 crc kubenswrapper[4772]: I1122 12:04:25.071603 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b7946d7b9-n5jtk" Nov 22 12:04:25 crc kubenswrapper[4772]: I1122 12:04:25.193419 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c82f4e8a-aee4-4ab6-b5e1-9f3930393cfe-config\") pod \"c82f4e8a-aee4-4ab6-b5e1-9f3930393cfe\" (UID: \"c82f4e8a-aee4-4ab6-b5e1-9f3930393cfe\") " Nov 22 12:04:25 crc kubenswrapper[4772]: I1122 12:04:25.193806 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzn6m\" (UniqueName: \"kubernetes.io/projected/c82f4e8a-aee4-4ab6-b5e1-9f3930393cfe-kube-api-access-tzn6m\") pod \"c82f4e8a-aee4-4ab6-b5e1-9f3930393cfe\" (UID: \"c82f4e8a-aee4-4ab6-b5e1-9f3930393cfe\") " Nov 22 12:04:25 crc kubenswrapper[4772]: I1122 12:04:25.193832 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c82f4e8a-aee4-4ab6-b5e1-9f3930393cfe-dns-svc\") pod \"c82f4e8a-aee4-4ab6-b5e1-9f3930393cfe\" (UID: \"c82f4e8a-aee4-4ab6-b5e1-9f3930393cfe\") " Nov 22 12:04:25 crc kubenswrapper[4772]: I1122 12:04:25.194324 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/312badf7-77eb-4792-9139-8f06dec2e2ff-config\") pod \"ovn-northd-0\" (UID: \"312badf7-77eb-4792-9139-8f06dec2e2ff\") " pod="openstack/ovn-northd-0" Nov 22 12:04:25 crc kubenswrapper[4772]: I1122 12:04:25.194360 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/312badf7-77eb-4792-9139-8f06dec2e2ff-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"312badf7-77eb-4792-9139-8f06dec2e2ff\") " pod="openstack/ovn-northd-0" Nov 22 12:04:25 crc kubenswrapper[4772]: I1122 12:04:25.194380 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/312badf7-77eb-4792-9139-8f06dec2e2ff-scripts\") pod \"ovn-northd-0\" (UID: \"312badf7-77eb-4792-9139-8f06dec2e2ff\") " pod="openstack/ovn-northd-0" Nov 22 12:04:25 crc kubenswrapper[4772]: I1122 12:04:25.194412 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psx9d\" (UniqueName: \"kubernetes.io/projected/312badf7-77eb-4792-9139-8f06dec2e2ff-kube-api-access-psx9d\") pod \"ovn-northd-0\" (UID: \"312badf7-77eb-4792-9139-8f06dec2e2ff\") " pod="openstack/ovn-northd-0" Nov 22 12:04:25 crc kubenswrapper[4772]: I1122 12:04:25.194443 4772 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/312badf7-77eb-4792-9139-8f06dec2e2ff-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"312badf7-77eb-4792-9139-8f06dec2e2ff\") " pod="openstack/ovn-northd-0" Nov 22 12:04:25 crc kubenswrapper[4772]: I1122 12:04:25.200829 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c82f4e8a-aee4-4ab6-b5e1-9f3930393cfe-kube-api-access-tzn6m" (OuterVolumeSpecName: "kube-api-access-tzn6m") pod "c82f4e8a-aee4-4ab6-b5e1-9f3930393cfe" (UID: "c82f4e8a-aee4-4ab6-b5e1-9f3930393cfe"). InnerVolumeSpecName "kube-api-access-tzn6m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:04:25 crc kubenswrapper[4772]: I1122 12:04:25.246434 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c82f4e8a-aee4-4ab6-b5e1-9f3930393cfe-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c82f4e8a-aee4-4ab6-b5e1-9f3930393cfe" (UID: "c82f4e8a-aee4-4ab6-b5e1-9f3930393cfe"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 12:04:25 crc kubenswrapper[4772]: I1122 12:04:25.246817 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c82f4e8a-aee4-4ab6-b5e1-9f3930393cfe-config" (OuterVolumeSpecName: "config") pod "c82f4e8a-aee4-4ab6-b5e1-9f3930393cfe" (UID: "c82f4e8a-aee4-4ab6-b5e1-9f3930393cfe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 12:04:25 crc kubenswrapper[4772]: I1122 12:04:25.296980 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/312badf7-77eb-4792-9139-8f06dec2e2ff-config\") pod \"ovn-northd-0\" (UID: \"312badf7-77eb-4792-9139-8f06dec2e2ff\") " pod="openstack/ovn-northd-0" Nov 22 12:04:25 crc kubenswrapper[4772]: I1122 12:04:25.297076 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/312badf7-77eb-4792-9139-8f06dec2e2ff-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"312badf7-77eb-4792-9139-8f06dec2e2ff\") " pod="openstack/ovn-northd-0" Nov 22 12:04:25 crc kubenswrapper[4772]: I1122 12:04:25.297096 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/312badf7-77eb-4792-9139-8f06dec2e2ff-scripts\") pod \"ovn-northd-0\" (UID: \"312badf7-77eb-4792-9139-8f06dec2e2ff\") " pod="openstack/ovn-northd-0" Nov 22 12:04:25 crc kubenswrapper[4772]: I1122 12:04:25.297135 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psx9d\" (UniqueName: \"kubernetes.io/projected/312badf7-77eb-4792-9139-8f06dec2e2ff-kube-api-access-psx9d\") pod \"ovn-northd-0\" (UID: \"312badf7-77eb-4792-9139-8f06dec2e2ff\") " pod="openstack/ovn-northd-0" Nov 22 12:04:25 crc kubenswrapper[4772]: I1122 12:04:25.297189 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/312badf7-77eb-4792-9139-8f06dec2e2ff-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"312badf7-77eb-4792-9139-8f06dec2e2ff\") " pod="openstack/ovn-northd-0" Nov 22 12:04:25 crc kubenswrapper[4772]: I1122 12:04:25.297303 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzn6m\" (UniqueName: 
\"kubernetes.io/projected/c82f4e8a-aee4-4ab6-b5e1-9f3930393cfe-kube-api-access-tzn6m\") on node \"crc\" DevicePath \"\"" Nov 22 12:04:25 crc kubenswrapper[4772]: I1122 12:04:25.297315 4772 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c82f4e8a-aee4-4ab6-b5e1-9f3930393cfe-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 12:04:25 crc kubenswrapper[4772]: I1122 12:04:25.297325 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c82f4e8a-aee4-4ab6-b5e1-9f3930393cfe-config\") on node \"crc\" DevicePath \"\"" Nov 22 12:04:25 crc kubenswrapper[4772]: I1122 12:04:25.297960 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/312badf7-77eb-4792-9139-8f06dec2e2ff-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"312badf7-77eb-4792-9139-8f06dec2e2ff\") " pod="openstack/ovn-northd-0" Nov 22 12:04:25 crc kubenswrapper[4772]: I1122 12:04:25.298660 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/312badf7-77eb-4792-9139-8f06dec2e2ff-scripts\") pod \"ovn-northd-0\" (UID: \"312badf7-77eb-4792-9139-8f06dec2e2ff\") " pod="openstack/ovn-northd-0" Nov 22 12:04:25 crc kubenswrapper[4772]: I1122 12:04:25.298861 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/312badf7-77eb-4792-9139-8f06dec2e2ff-config\") pod \"ovn-northd-0\" (UID: \"312badf7-77eb-4792-9139-8f06dec2e2ff\") " pod="openstack/ovn-northd-0" Nov 22 12:04:25 crc kubenswrapper[4772]: I1122 12:04:25.303180 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/312badf7-77eb-4792-9139-8f06dec2e2ff-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"312badf7-77eb-4792-9139-8f06dec2e2ff\") " pod="openstack/ovn-northd-0" Nov 22 12:04:25 crc kubenswrapper[4772]: I1122 12:04:25.321929 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psx9d\" (UniqueName: \"kubernetes.io/projected/312badf7-77eb-4792-9139-8f06dec2e2ff-kube-api-access-psx9d\") pod \"ovn-northd-0\" (UID: \"312badf7-77eb-4792-9139-8f06dec2e2ff\") " pod="openstack/ovn-northd-0" Nov 22 12:04:25 crc kubenswrapper[4772]: I1122 12:04:25.401130 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Nov 22 12:04:25 crc kubenswrapper[4772]: I1122 12:04:25.496577 4772 generic.go:334] "Generic (PLEG): container finished" podID="c82f4e8a-aee4-4ab6-b5e1-9f3930393cfe" containerID="380fb89ad5dffb56956686f16e51021d837fbcc81a91edb1671f3b67a9667403" exitCode=0 Nov 22 12:04:25 crc kubenswrapper[4772]: I1122 12:04:25.496750 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b7946d7b9-n5jtk" Nov 22 12:04:25 crc kubenswrapper[4772]: I1122 12:04:25.496783 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7946d7b9-n5jtk" event={"ID":"c82f4e8a-aee4-4ab6-b5e1-9f3930393cfe","Type":"ContainerDied","Data":"380fb89ad5dffb56956686f16e51021d837fbcc81a91edb1671f3b67a9667403"} Nov 22 12:04:25 crc kubenswrapper[4772]: I1122 12:04:25.497022 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7946d7b9-n5jtk" event={"ID":"c82f4e8a-aee4-4ab6-b5e1-9f3930393cfe","Type":"ContainerDied","Data":"7408adec87f22cf8bb4f9e76ce0aa809606cf71105d326d90910c7cf4fadfe6e"} Nov 22 12:04:25 crc kubenswrapper[4772]: I1122 12:04:25.497139 4772 scope.go:117] "RemoveContainer" containerID="380fb89ad5dffb56956686f16e51021d837fbcc81a91edb1671f3b67a9667403" Nov 22 12:04:25 crc kubenswrapper[4772]: I1122 12:04:25.537078 4772 scope.go:117] "RemoveContainer" containerID="5312876a6688755a3a0baf861b9d9522b30c9a2dc6061a9d92de84b33370389a" Nov 22 12:04:25 crc kubenswrapper[4772]: I1122 12:04:25.538691 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-n5jtk"] Nov 22 12:04:25 crc kubenswrapper[4772]: I1122 12:04:25.544981 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-n5jtk"] Nov 22 12:04:25 crc kubenswrapper[4772]: I1122 12:04:25.561117 4772 scope.go:117] "RemoveContainer" containerID="380fb89ad5dffb56956686f16e51021d837fbcc81a91edb1671f3b67a9667403" Nov 22 12:04:25 crc kubenswrapper[4772]: E1122 12:04:25.561737 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"380fb89ad5dffb56956686f16e51021d837fbcc81a91edb1671f3b67a9667403\": container with ID starting with 380fb89ad5dffb56956686f16e51021d837fbcc81a91edb1671f3b67a9667403 not found: ID does not exist" containerID="380fb89ad5dffb56956686f16e51021d837fbcc81a91edb1671f3b67a9667403" Nov 22 12:04:25 crc kubenswrapper[4772]: I1122 12:04:25.561872 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"380fb89ad5dffb56956686f16e51021d837fbcc81a91edb1671f3b67a9667403"} err="failed to get container status \"380fb89ad5dffb56956686f16e51021d837fbcc81a91edb1671f3b67a9667403\": rpc error: code = NotFound desc = could not find container \"380fb89ad5dffb56956686f16e51021d837fbcc81a91edb1671f3b67a9667403\": container with ID starting with 380fb89ad5dffb56956686f16e51021d837fbcc81a91edb1671f3b67a9667403 not found: ID does not exist" Nov 22 12:04:25 crc kubenswrapper[4772]: I1122 12:04:25.561984 4772 scope.go:117] "RemoveContainer" containerID="5312876a6688755a3a0baf861b9d9522b30c9a2dc6061a9d92de84b33370389a" Nov 22 12:04:25 crc kubenswrapper[4772]: E1122 12:04:25.562426 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5312876a6688755a3a0baf861b9d9522b30c9a2dc6061a9d92de84b33370389a\": container with ID starting with 5312876a6688755a3a0baf861b9d9522b30c9a2dc6061a9d92de84b33370389a not found: ID does not exist" containerID="5312876a6688755a3a0baf861b9d9522b30c9a2dc6061a9d92de84b33370389a" Nov 22 12:04:25 crc kubenswrapper[4772]: I1122 12:04:25.562468 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5312876a6688755a3a0baf861b9d9522b30c9a2dc6061a9d92de84b33370389a"} err="failed to get container status 
\"5312876a6688755a3a0baf861b9d9522b30c9a2dc6061a9d92de84b33370389a\": rpc error: code = NotFound desc = could not find container \"5312876a6688755a3a0baf861b9d9522b30c9a2dc6061a9d92de84b33370389a\": container with ID starting with 5312876a6688755a3a0baf861b9d9522b30c9a2dc6061a9d92de84b33370389a not found: ID does not exist" Nov 22 12:04:25 crc kubenswrapper[4772]: I1122 12:04:25.999282 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 22 12:04:26 crc kubenswrapper[4772]: W1122 12:04:26.009414 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod312badf7_77eb_4792_9139_8f06dec2e2ff.slice/crio-768607c1685ba2a42b8c7258df998335506848df938f2004373711a937bfeda6 WatchSource:0}: Error finding container 768607c1685ba2a42b8c7258df998335506848df938f2004373711a937bfeda6: Status 404 returned error can't find the container with id 768607c1685ba2a42b8c7258df998335506848df938f2004373711a937bfeda6 Nov 22 12:04:26 crc kubenswrapper[4772]: I1122 12:04:26.515321 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"312badf7-77eb-4792-9139-8f06dec2e2ff","Type":"ContainerStarted","Data":"ca23cdca662f9e3bc1ce370f9296f3ca6c77f3831479509775b7474172187926"} Nov 22 12:04:26 crc kubenswrapper[4772]: I1122 12:04:26.515907 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"312badf7-77eb-4792-9139-8f06dec2e2ff","Type":"ContainerStarted","Data":"287e493405a83a4ea0f563a067221a19f06e2ee810e3351ca71d7e1d44cc12a7"} Nov 22 12:04:26 crc kubenswrapper[4772]: I1122 12:04:26.515937 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Nov 22 12:04:26 crc kubenswrapper[4772]: I1122 12:04:26.515950 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"312badf7-77eb-4792-9139-8f06dec2e2ff","Type":"ContainerStarted","Data":"768607c1685ba2a42b8c7258df998335506848df938f2004373711a937bfeda6"} Nov 22 12:04:27 crc kubenswrapper[4772]: I1122 12:04:27.430440 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c82f4e8a-aee4-4ab6-b5e1-9f3930393cfe" path="/var/lib/kubelet/pods/c82f4e8a-aee4-4ab6-b5e1-9f3930393cfe/volumes" Nov 22 12:04:30 crc kubenswrapper[4772]: I1122 12:04:30.178311 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=5.178271891 podStartE2EDuration="5.178271891s" podCreationTimestamp="2025-11-22 12:04:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:04:26.544744842 +0000 UTC m=+5186.784189366" watchObservedRunningTime="2025-11-22 12:04:30.178271891 +0000 UTC m=+5190.417716385" Nov 22 12:04:30 crc kubenswrapper[4772]: I1122 12:04:30.185631 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-s96cr"] Nov 22 12:04:30 crc kubenswrapper[4772]: E1122 12:04:30.186129 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c82f4e8a-aee4-4ab6-b5e1-9f3930393cfe" containerName="init" Nov 22 12:04:30 crc kubenswrapper[4772]: I1122 12:04:30.186172 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="c82f4e8a-aee4-4ab6-b5e1-9f3930393cfe" containerName="init" Nov 22 12:04:30 crc kubenswrapper[4772]: E1122 12:04:30.186216 4772 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c82f4e8a-aee4-4ab6-b5e1-9f3930393cfe" containerName="dnsmasq-dns" Nov 22 12:04:30 crc kubenswrapper[4772]: I1122 12:04:30.186230 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="c82f4e8a-aee4-4ab6-b5e1-9f3930393cfe" containerName="dnsmasq-dns" Nov 22 12:04:30 crc kubenswrapper[4772]: I1122 12:04:30.189999 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="c82f4e8a-aee4-4ab6-b5e1-9f3930393cfe" containerName="dnsmasq-dns" Nov 22 12:04:30 crc kubenswrapper[4772]: I1122 12:04:30.190939 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-s96cr" Nov 22 12:04:30 crc kubenswrapper[4772]: I1122 12:04:30.198286 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-s96cr"] Nov 22 12:04:30 crc kubenswrapper[4772]: I1122 12:04:30.325560 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vthsq\" (UniqueName: \"kubernetes.io/projected/11249242-2e3b-451e-bc6b-a23827e5daf0-kube-api-access-vthsq\") pod \"keystone-db-create-s96cr\" (UID: \"11249242-2e3b-451e-bc6b-a23827e5daf0\") " pod="openstack/keystone-db-create-s96cr" Nov 22 12:04:30 crc kubenswrapper[4772]: I1122 12:04:30.427186 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vthsq\" (UniqueName: \"kubernetes.io/projected/11249242-2e3b-451e-bc6b-a23827e5daf0-kube-api-access-vthsq\") pod \"keystone-db-create-s96cr\" (UID: \"11249242-2e3b-451e-bc6b-a23827e5daf0\") " pod="openstack/keystone-db-create-s96cr" Nov 22 12:04:30 crc kubenswrapper[4772]: I1122 12:04:30.449503 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vthsq\" (UniqueName: \"kubernetes.io/projected/11249242-2e3b-451e-bc6b-a23827e5daf0-kube-api-access-vthsq\") pod \"keystone-db-create-s96cr\" (UID: \"11249242-2e3b-451e-bc6b-a23827e5daf0\") " pod="openstack/keystone-db-create-s96cr" Nov 22 12:04:30 crc kubenswrapper[4772]: I1122 12:04:30.522898 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-s96cr" Nov 22 12:04:30 crc kubenswrapper[4772]: I1122 12:04:30.834310 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-s96cr"] Nov 22 12:04:31 crc kubenswrapper[4772]: I1122 12:04:31.576443 4772 generic.go:334] "Generic (PLEG): container finished" podID="11249242-2e3b-451e-bc6b-a23827e5daf0" containerID="703c92b95c6dc77be51f836660e4fd53118b0e3e32d96042c5a2f6e36489339e" exitCode=0 Nov 22 12:04:31 crc kubenswrapper[4772]: I1122 12:04:31.576636 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-s96cr" event={"ID":"11249242-2e3b-451e-bc6b-a23827e5daf0","Type":"ContainerDied","Data":"703c92b95c6dc77be51f836660e4fd53118b0e3e32d96042c5a2f6e36489339e"} Nov 22 12:04:31 crc kubenswrapper[4772]: I1122 12:04:31.577153 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-s96cr" event={"ID":"11249242-2e3b-451e-bc6b-a23827e5daf0","Type":"ContainerStarted","Data":"c64dc38c2ab89789a7ede51d510967e1c9e22fffec6c25a0c9b2c5db2e011b13"} Nov 22 12:04:32 crc kubenswrapper[4772]: I1122 12:04:32.414086 4772 scope.go:117] "RemoveContainer" containerID="b7c017a2ad8566061573012df3338326de3180226814eea67d7d515c52483472" Nov 22 12:04:32 crc kubenswrapper[4772]: E1122 12:04:32.414687 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:04:33 crc kubenswrapper[4772]: I1122 12:04:33.040859 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-s96cr" Nov 22 12:04:33 crc kubenswrapper[4772]: I1122 12:04:33.191532 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vthsq\" (UniqueName: \"kubernetes.io/projected/11249242-2e3b-451e-bc6b-a23827e5daf0-kube-api-access-vthsq\") pod \"11249242-2e3b-451e-bc6b-a23827e5daf0\" (UID: \"11249242-2e3b-451e-bc6b-a23827e5daf0\") " Nov 22 12:04:33 crc kubenswrapper[4772]: I1122 12:04:33.204609 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11249242-2e3b-451e-bc6b-a23827e5daf0-kube-api-access-vthsq" (OuterVolumeSpecName: "kube-api-access-vthsq") pod "11249242-2e3b-451e-bc6b-a23827e5daf0" (UID: "11249242-2e3b-451e-bc6b-a23827e5daf0"). InnerVolumeSpecName "kube-api-access-vthsq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:04:33 crc kubenswrapper[4772]: I1122 12:04:33.294314 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vthsq\" (UniqueName: \"kubernetes.io/projected/11249242-2e3b-451e-bc6b-a23827e5daf0-kube-api-access-vthsq\") on node \"crc\" DevicePath \"\"" Nov 22 12:04:33 crc kubenswrapper[4772]: I1122 12:04:33.610259 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-s96cr" event={"ID":"11249242-2e3b-451e-bc6b-a23827e5daf0","Type":"ContainerDied","Data":"c64dc38c2ab89789a7ede51d510967e1c9e22fffec6c25a0c9b2c5db2e011b13"} Nov 22 12:04:33 crc kubenswrapper[4772]: I1122 12:04:33.610336 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c64dc38c2ab89789a7ede51d510967e1c9e22fffec6c25a0c9b2c5db2e011b13" Nov 22 12:04:33 crc kubenswrapper[4772]: I1122 12:04:33.610339 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-s96cr" Nov 22 12:04:40 crc kubenswrapper[4772]: I1122 12:04:40.196566 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-d6b6-account-create-nj5xv"] Nov 22 12:04:40 crc kubenswrapper[4772]: E1122 12:04:40.197292 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11249242-2e3b-451e-bc6b-a23827e5daf0" containerName="mariadb-database-create" Nov 22 12:04:40 crc kubenswrapper[4772]: I1122 12:04:40.197305 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="11249242-2e3b-451e-bc6b-a23827e5daf0" containerName="mariadb-database-create" Nov 22 12:04:40 crc kubenswrapper[4772]: I1122 12:04:40.197459 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="11249242-2e3b-451e-bc6b-a23827e5daf0" containerName="mariadb-database-create" Nov 22 12:04:40 crc kubenswrapper[4772]: I1122 12:04:40.204134 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-d6b6-account-create-nj5xv" Nov 22 12:04:40 crc kubenswrapper[4772]: I1122 12:04:40.211866 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Nov 22 12:04:40 crc kubenswrapper[4772]: I1122 12:04:40.212619 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-d6b6-account-create-nj5xv"] Nov 22 12:04:40 crc kubenswrapper[4772]: I1122 12:04:40.350679 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2gjp\" (UniqueName: \"kubernetes.io/projected/d0450b34-6387-427c-8fc6-51dd089d6da1-kube-api-access-g2gjp\") pod \"keystone-d6b6-account-create-nj5xv\" (UID: \"d0450b34-6387-427c-8fc6-51dd089d6da1\") " pod="openstack/keystone-d6b6-account-create-nj5xv" Nov 22 12:04:40 crc kubenswrapper[4772]: I1122 12:04:40.452570 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2gjp\" (UniqueName: \"kubernetes.io/projected/d0450b34-6387-427c-8fc6-51dd089d6da1-kube-api-access-g2gjp\") pod \"keystone-d6b6-account-create-nj5xv\" (UID: \"d0450b34-6387-427c-8fc6-51dd089d6da1\") " pod="openstack/keystone-d6b6-account-create-nj5xv" Nov 22 12:04:40 crc kubenswrapper[4772]: I1122 12:04:40.475550 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2gjp\" (UniqueName: \"kubernetes.io/projected/d0450b34-6387-427c-8fc6-51dd089d6da1-kube-api-access-g2gjp\") pod \"keystone-d6b6-account-create-nj5xv\" (UID: \"d0450b34-6387-427c-8fc6-51dd089d6da1\") " pod="openstack/keystone-d6b6-account-create-nj5xv" Nov 22 12:04:40 crc kubenswrapper[4772]: I1122 12:04:40.640252 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-d6b6-account-create-nj5xv" Nov 22 12:04:40 crc kubenswrapper[4772]: I1122 12:04:40.804713 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Nov 22 12:04:41 crc kubenswrapper[4772]: I1122 12:04:41.268756 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-d6b6-account-create-nj5xv"] Nov 22 12:04:41 crc kubenswrapper[4772]: I1122 12:04:41.837660 4772 generic.go:334] "Generic (PLEG): container finished" podID="d0450b34-6387-427c-8fc6-51dd089d6da1" containerID="798f75d6f54b9011bfa5eb56966f2e0e71e3b2a793c1524629353dba0924a77a" exitCode=0 Nov 22 12:04:41 crc kubenswrapper[4772]: I1122 12:04:41.837779 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-d6b6-account-create-nj5xv" event={"ID":"d0450b34-6387-427c-8fc6-51dd089d6da1","Type":"ContainerDied","Data":"798f75d6f54b9011bfa5eb56966f2e0e71e3b2a793c1524629353dba0924a77a"} Nov 22 12:04:41 crc kubenswrapper[4772]: I1122 12:04:41.838089 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-d6b6-account-create-nj5xv" event={"ID":"d0450b34-6387-427c-8fc6-51dd089d6da1","Type":"ContainerStarted","Data":"dbdbf23e08937e7f1bedae390480d88f53921431db0ac08f1c2ee86a354bf5ad"} Nov 22 12:04:43 crc kubenswrapper[4772]: I1122 12:04:43.278622 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-d6b6-account-create-nj5xv" Nov 22 12:04:43 crc kubenswrapper[4772]: I1122 12:04:43.399167 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2gjp\" (UniqueName: \"kubernetes.io/projected/d0450b34-6387-427c-8fc6-51dd089d6da1-kube-api-access-g2gjp\") pod \"d0450b34-6387-427c-8fc6-51dd089d6da1\" (UID: \"d0450b34-6387-427c-8fc6-51dd089d6da1\") " Nov 22 12:04:43 crc kubenswrapper[4772]: I1122 12:04:43.409114 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0450b34-6387-427c-8fc6-51dd089d6da1-kube-api-access-g2gjp" (OuterVolumeSpecName: "kube-api-access-g2gjp") pod "d0450b34-6387-427c-8fc6-51dd089d6da1" (UID: "d0450b34-6387-427c-8fc6-51dd089d6da1"). InnerVolumeSpecName "kube-api-access-g2gjp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:04:43 crc kubenswrapper[4772]: I1122 12:04:43.501208 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2gjp\" (UniqueName: \"kubernetes.io/projected/d0450b34-6387-427c-8fc6-51dd089d6da1-kube-api-access-g2gjp\") on node \"crc\" DevicePath \"\"" Nov 22 12:04:43 crc kubenswrapper[4772]: I1122 12:04:43.868256 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-d6b6-account-create-nj5xv" event={"ID":"d0450b34-6387-427c-8fc6-51dd089d6da1","Type":"ContainerDied","Data":"dbdbf23e08937e7f1bedae390480d88f53921431db0ac08f1c2ee86a354bf5ad"} Nov 22 12:04:43 crc kubenswrapper[4772]: I1122 12:04:43.868625 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dbdbf23e08937e7f1bedae390480d88f53921431db0ac08f1c2ee86a354bf5ad" Nov 22 12:04:43 crc kubenswrapper[4772]: I1122 12:04:43.868391 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-d6b6-account-create-nj5xv" Nov 22 12:04:45 crc kubenswrapper[4772]: I1122 12:04:45.668881 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-ntzc2"] Nov 22 12:04:45 crc kubenswrapper[4772]: E1122 12:04:45.669620 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0450b34-6387-427c-8fc6-51dd089d6da1" containerName="mariadb-account-create" Nov 22 12:04:45 crc kubenswrapper[4772]: I1122 12:04:45.669634 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0450b34-6387-427c-8fc6-51dd089d6da1" containerName="mariadb-account-create" Nov 22 12:04:45 crc kubenswrapper[4772]: I1122 12:04:45.669804 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0450b34-6387-427c-8fc6-51dd089d6da1" containerName="mariadb-account-create" Nov 22 12:04:45 crc kubenswrapper[4772]: I1122 12:04:45.670387 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-ntzc2" Nov 22 12:04:45 crc kubenswrapper[4772]: I1122 12:04:45.672600 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 22 12:04:45 crc kubenswrapper[4772]: I1122 12:04:45.673441 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 22 12:04:45 crc kubenswrapper[4772]: I1122 12:04:45.675320 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 22 12:04:45 crc kubenswrapper[4772]: I1122 12:04:45.675886 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-q2nx7" Nov 22 12:04:45 crc kubenswrapper[4772]: I1122 12:04:45.676440 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-ntzc2"] Nov 22 12:04:45 crc kubenswrapper[4772]: I1122 12:04:45.737677 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c080996a-73b3-4f39-a9d8-5912f7d73ab1-combined-ca-bundle\") pod \"keystone-db-sync-ntzc2\" (UID: \"c080996a-73b3-4f39-a9d8-5912f7d73ab1\") " pod="openstack/keystone-db-sync-ntzc2" Nov 22 12:04:45 crc kubenswrapper[4772]: I1122 12:04:45.738175 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c080996a-73b3-4f39-a9d8-5912f7d73ab1-config-data\") pod \"keystone-db-sync-ntzc2\" (UID: \"c080996a-73b3-4f39-a9d8-5912f7d73ab1\") " pod="openstack/keystone-db-sync-ntzc2" Nov 22 12:04:45 crc kubenswrapper[4772]: I1122 12:04:45.738421 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4p8g\" (UniqueName: \"kubernetes.io/projected/c080996a-73b3-4f39-a9d8-5912f7d73ab1-kube-api-access-w4p8g\") pod \"keystone-db-sync-ntzc2\" (UID: \"c080996a-73b3-4f39-a9d8-5912f7d73ab1\") " pod="openstack/keystone-db-sync-ntzc2" Nov 22 12:04:45 crc kubenswrapper[4772]: I1122 12:04:45.840039 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c080996a-73b3-4f39-a9d8-5912f7d73ab1-config-data\") pod \"keystone-db-sync-ntzc2\" (UID: \"c080996a-73b3-4f39-a9d8-5912f7d73ab1\") " pod="openstack/keystone-db-sync-ntzc2" Nov 22 12:04:45 crc kubenswrapper[4772]: I1122 12:04:45.840446 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4p8g\" (UniqueName: \"kubernetes.io/projected/c080996a-73b3-4f39-a9d8-5912f7d73ab1-kube-api-access-w4p8g\") pod \"keystone-db-sync-ntzc2\" (UID: \"c080996a-73b3-4f39-a9d8-5912f7d73ab1\") " pod="openstack/keystone-db-sync-ntzc2" Nov 22 12:04:45 crc kubenswrapper[4772]: I1122 12:04:45.840583 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c080996a-73b3-4f39-a9d8-5912f7d73ab1-combined-ca-bundle\") pod \"keystone-db-sync-ntzc2\" (UID: \"c080996a-73b3-4f39-a9d8-5912f7d73ab1\") " pod="openstack/keystone-db-sync-ntzc2" Nov 22 12:04:45 crc kubenswrapper[4772]: I1122 12:04:45.847118 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c080996a-73b3-4f39-a9d8-5912f7d73ab1-combined-ca-bundle\") pod \"keystone-db-sync-ntzc2\" (UID: \"c080996a-73b3-4f39-a9d8-5912f7d73ab1\") " 
pod="openstack/keystone-db-sync-ntzc2" Nov 22 12:04:45 crc kubenswrapper[4772]: I1122 12:04:45.847995 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c080996a-73b3-4f39-a9d8-5912f7d73ab1-config-data\") pod \"keystone-db-sync-ntzc2\" (UID: \"c080996a-73b3-4f39-a9d8-5912f7d73ab1\") " pod="openstack/keystone-db-sync-ntzc2" Nov 22 12:04:45 crc kubenswrapper[4772]: I1122 12:04:45.873038 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4p8g\" (UniqueName: \"kubernetes.io/projected/c080996a-73b3-4f39-a9d8-5912f7d73ab1-kube-api-access-w4p8g\") pod \"keystone-db-sync-ntzc2\" (UID: \"c080996a-73b3-4f39-a9d8-5912f7d73ab1\") " pod="openstack/keystone-db-sync-ntzc2" Nov 22 12:04:45 crc kubenswrapper[4772]: I1122 12:04:45.990634 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-ntzc2" Nov 22 12:04:46 crc kubenswrapper[4772]: I1122 12:04:46.414453 4772 scope.go:117] "RemoveContainer" containerID="b7c017a2ad8566061573012df3338326de3180226814eea67d7d515c52483472" Nov 22 12:04:46 crc kubenswrapper[4772]: E1122 12:04:46.415374 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:04:46 crc kubenswrapper[4772]: I1122 12:04:46.469963 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-ntzc2"] Nov 22 12:04:46 crc kubenswrapper[4772]: W1122 12:04:46.489507 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc080996a_73b3_4f39_a9d8_5912f7d73ab1.slice/crio-85632cff2694929a87e95459d9e54f197376985d875faa26372c142ba5f0b7b6 WatchSource:0}: Error finding container 85632cff2694929a87e95459d9e54f197376985d875faa26372c142ba5f0b7b6: Status 404 returned error can't find the container with id 85632cff2694929a87e95459d9e54f197376985d875faa26372c142ba5f0b7b6 Nov 22 12:04:46 crc kubenswrapper[4772]: I1122 12:04:46.958805 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-ntzc2" event={"ID":"c080996a-73b3-4f39-a9d8-5912f7d73ab1","Type":"ContainerStarted","Data":"c9837135adc76952a9076ea6373f35b26f1757d8533080c158eb817f367dc6cd"} Nov 22 12:04:46 crc kubenswrapper[4772]: I1122 12:04:46.959282 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-ntzc2" event={"ID":"c080996a-73b3-4f39-a9d8-5912f7d73ab1","Type":"ContainerStarted","Data":"85632cff2694929a87e95459d9e54f197376985d875faa26372c142ba5f0b7b6"} Nov 22 12:04:46 crc kubenswrapper[4772]: I1122 12:04:46.982342 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-ntzc2" podStartSLOduration=1.982308473 podStartE2EDuration="1.982308473s" podCreationTimestamp="2025-11-22 12:04:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:04:46.976863669 +0000 UTC m=+5207.216308163" watchObservedRunningTime="2025-11-22 12:04:46.982308473 +0000 UTC m=+5207.221752967" Nov 22 12:04:48 crc kubenswrapper[4772]: 
I1122 12:04:48.984944 4772 generic.go:334] "Generic (PLEG): container finished" podID="c080996a-73b3-4f39-a9d8-5912f7d73ab1" containerID="c9837135adc76952a9076ea6373f35b26f1757d8533080c158eb817f367dc6cd" exitCode=0 Nov 22 12:04:48 crc kubenswrapper[4772]: I1122 12:04:48.985043 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-ntzc2" event={"ID":"c080996a-73b3-4f39-a9d8-5912f7d73ab1","Type":"ContainerDied","Data":"c9837135adc76952a9076ea6373f35b26f1757d8533080c158eb817f367dc6cd"} Nov 22 12:04:50 crc kubenswrapper[4772]: I1122 12:04:50.330169 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-ntzc2" Nov 22 12:04:50 crc kubenswrapper[4772]: I1122 12:04:50.451636 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c080996a-73b3-4f39-a9d8-5912f7d73ab1-config-data\") pod \"c080996a-73b3-4f39-a9d8-5912f7d73ab1\" (UID: \"c080996a-73b3-4f39-a9d8-5912f7d73ab1\") " Nov 22 12:04:50 crc kubenswrapper[4772]: I1122 12:04:50.451860 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4p8g\" (UniqueName: \"kubernetes.io/projected/c080996a-73b3-4f39-a9d8-5912f7d73ab1-kube-api-access-w4p8g\") pod \"c080996a-73b3-4f39-a9d8-5912f7d73ab1\" (UID: \"c080996a-73b3-4f39-a9d8-5912f7d73ab1\") " Nov 22 12:04:50 crc kubenswrapper[4772]: I1122 12:04:50.451944 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c080996a-73b3-4f39-a9d8-5912f7d73ab1-combined-ca-bundle\") pod \"c080996a-73b3-4f39-a9d8-5912f7d73ab1\" (UID: \"c080996a-73b3-4f39-a9d8-5912f7d73ab1\") " Nov 22 12:04:50 crc kubenswrapper[4772]: I1122 12:04:50.465325 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c080996a-73b3-4f39-a9d8-5912f7d73ab1-kube-api-access-w4p8g" (OuterVolumeSpecName: "kube-api-access-w4p8g") pod "c080996a-73b3-4f39-a9d8-5912f7d73ab1" (UID: "c080996a-73b3-4f39-a9d8-5912f7d73ab1"). InnerVolumeSpecName "kube-api-access-w4p8g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:04:50 crc kubenswrapper[4772]: I1122 12:04:50.481257 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c080996a-73b3-4f39-a9d8-5912f7d73ab1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c080996a-73b3-4f39-a9d8-5912f7d73ab1" (UID: "c080996a-73b3-4f39-a9d8-5912f7d73ab1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:04:50 crc kubenswrapper[4772]: I1122 12:04:50.494783 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c080996a-73b3-4f39-a9d8-5912f7d73ab1-config-data" (OuterVolumeSpecName: "config-data") pod "c080996a-73b3-4f39-a9d8-5912f7d73ab1" (UID: "c080996a-73b3-4f39-a9d8-5912f7d73ab1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:04:50 crc kubenswrapper[4772]: I1122 12:04:50.554098 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4p8g\" (UniqueName: \"kubernetes.io/projected/c080996a-73b3-4f39-a9d8-5912f7d73ab1-kube-api-access-w4p8g\") on node \"crc\" DevicePath \"\"" Nov 22 12:04:50 crc kubenswrapper[4772]: I1122 12:04:50.554141 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c080996a-73b3-4f39-a9d8-5912f7d73ab1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 12:04:50 crc kubenswrapper[4772]: I1122 12:04:50.554155 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c080996a-73b3-4f39-a9d8-5912f7d73ab1-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 12:04:51 crc kubenswrapper[4772]: I1122 12:04:51.007960 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-ntzc2" event={"ID":"c080996a-73b3-4f39-a9d8-5912f7d73ab1","Type":"ContainerDied","Data":"85632cff2694929a87e95459d9e54f197376985d875faa26372c142ba5f0b7b6"} Nov 22 12:04:51 crc kubenswrapper[4772]: I1122 12:04:51.008033 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85632cff2694929a87e95459d9e54f197376985d875faa26372c142ba5f0b7b6" Nov 22 12:04:51 crc kubenswrapper[4772]: I1122 12:04:51.008064 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-ntzc2" Nov 22 12:04:51 crc kubenswrapper[4772]: I1122 12:04:51.270385 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78fd5f455c-cs7bm"] Nov 22 12:04:51 crc kubenswrapper[4772]: E1122 12:04:51.270846 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c080996a-73b3-4f39-a9d8-5912f7d73ab1" containerName="keystone-db-sync" Nov 22 12:04:51 crc kubenswrapper[4772]: I1122 12:04:51.270864 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="c080996a-73b3-4f39-a9d8-5912f7d73ab1" containerName="keystone-db-sync" Nov 22 12:04:51 crc kubenswrapper[4772]: I1122 12:04:51.271072 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="c080996a-73b3-4f39-a9d8-5912f7d73ab1" containerName="keystone-db-sync" Nov 22 12:04:51 crc kubenswrapper[4772]: I1122 12:04:51.273056 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78fd5f455c-cs7bm" Nov 22 12:04:51 crc kubenswrapper[4772]: I1122 12:04:51.285225 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78fd5f455c-cs7bm"] Nov 22 12:04:51 crc kubenswrapper[4772]: I1122 12:04:51.317249 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-b4bb2"] Nov 22 12:04:51 crc kubenswrapper[4772]: I1122 12:04:51.318660 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-b4bb2" Nov 22 12:04:51 crc kubenswrapper[4772]: I1122 12:04:51.322205 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-q2nx7" Nov 22 12:04:51 crc kubenswrapper[4772]: I1122 12:04:51.322542 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 22 12:04:51 crc kubenswrapper[4772]: I1122 12:04:51.322723 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 22 12:04:51 crc kubenswrapper[4772]: I1122 12:04:51.323537 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 22 12:04:51 crc kubenswrapper[4772]: I1122 12:04:51.330118 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-b4bb2"] Nov 22 12:04:51 crc kubenswrapper[4772]: I1122 12:04:51.472677 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe5814c6-aab6-41f9-8e03-750d687052e9-config-data\") pod \"keystone-bootstrap-b4bb2\" (UID: \"fe5814c6-aab6-41f9-8e03-750d687052e9\") " pod="openstack/keystone-bootstrap-b4bb2" Nov 22 12:04:51 crc kubenswrapper[4772]: I1122 12:04:51.472747 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fe5814c6-aab6-41f9-8e03-750d687052e9-credential-keys\") pod \"keystone-bootstrap-b4bb2\" (UID: \"fe5814c6-aab6-41f9-8e03-750d687052e9\") " pod="openstack/keystone-bootstrap-b4bb2" Nov 22 12:04:51 crc kubenswrapper[4772]: I1122 12:04:51.472777 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fe5814c6-aab6-41f9-8e03-750d687052e9-fernet-keys\") pod \"keystone-bootstrap-b4bb2\" (UID: \"fe5814c6-aab6-41f9-8e03-750d687052e9\") " pod="openstack/keystone-bootstrap-b4bb2" Nov 22 12:04:51 crc kubenswrapper[4772]: I1122 12:04:51.472850 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d506c4b-74c6-436b-a946-daff7ab0f9d6-config\") pod \"dnsmasq-dns-78fd5f455c-cs7bm\" (UID: \"0d506c4b-74c6-436b-a946-daff7ab0f9d6\") " pod="openstack/dnsmasq-dns-78fd5f455c-cs7bm" Nov 22 12:04:51 crc kubenswrapper[4772]: I1122 12:04:51.472893 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe5814c6-aab6-41f9-8e03-750d687052e9-scripts\") pod \"keystone-bootstrap-b4bb2\" (UID: \"fe5814c6-aab6-41f9-8e03-750d687052e9\") " pod="openstack/keystone-bootstrap-b4bb2" Nov 22 12:04:51 crc kubenswrapper[4772]: I1122 12:04:51.472921 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0d506c4b-74c6-436b-a946-daff7ab0f9d6-ovsdbserver-nb\") pod \"dnsmasq-dns-78fd5f455c-cs7bm\" (UID: \"0d506c4b-74c6-436b-a946-daff7ab0f9d6\") " pod="openstack/dnsmasq-dns-78fd5f455c-cs7bm" Nov 22 12:04:51 crc kubenswrapper[4772]: I1122 12:04:51.472999 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtqfm\" (UniqueName: \"kubernetes.io/projected/0d506c4b-74c6-436b-a946-daff7ab0f9d6-kube-api-access-jtqfm\") pod 
\"dnsmasq-dns-78fd5f455c-cs7bm\" (UID: \"0d506c4b-74c6-436b-a946-daff7ab0f9d6\") " pod="openstack/dnsmasq-dns-78fd5f455c-cs7bm" Nov 22 12:04:51 crc kubenswrapper[4772]: I1122 12:04:51.473175 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zstqs\" (UniqueName: \"kubernetes.io/projected/fe5814c6-aab6-41f9-8e03-750d687052e9-kube-api-access-zstqs\") pod \"keystone-bootstrap-b4bb2\" (UID: \"fe5814c6-aab6-41f9-8e03-750d687052e9\") " pod="openstack/keystone-bootstrap-b4bb2" Nov 22 12:04:51 crc kubenswrapper[4772]: I1122 12:04:51.473201 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d506c4b-74c6-436b-a946-daff7ab0f9d6-dns-svc\") pod \"dnsmasq-dns-78fd5f455c-cs7bm\" (UID: \"0d506c4b-74c6-436b-a946-daff7ab0f9d6\") " pod="openstack/dnsmasq-dns-78fd5f455c-cs7bm" Nov 22 12:04:51 crc kubenswrapper[4772]: I1122 12:04:51.473228 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0d506c4b-74c6-436b-a946-daff7ab0f9d6-ovsdbserver-sb\") pod \"dnsmasq-dns-78fd5f455c-cs7bm\" (UID: \"0d506c4b-74c6-436b-a946-daff7ab0f9d6\") " pod="openstack/dnsmasq-dns-78fd5f455c-cs7bm" Nov 22 12:04:51 crc kubenswrapper[4772]: I1122 12:04:51.473261 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe5814c6-aab6-41f9-8e03-750d687052e9-combined-ca-bundle\") pod \"keystone-bootstrap-b4bb2\" (UID: \"fe5814c6-aab6-41f9-8e03-750d687052e9\") " pod="openstack/keystone-bootstrap-b4bb2" Nov 22 12:04:51 crc kubenswrapper[4772]: I1122 12:04:51.575653 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zstqs\" (UniqueName: \"kubernetes.io/projected/fe5814c6-aab6-41f9-8e03-750d687052e9-kube-api-access-zstqs\") pod \"keystone-bootstrap-b4bb2\" (UID: \"fe5814c6-aab6-41f9-8e03-750d687052e9\") " pod="openstack/keystone-bootstrap-b4bb2" Nov 22 12:04:51 crc kubenswrapper[4772]: I1122 12:04:51.575757 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d506c4b-74c6-436b-a946-daff7ab0f9d6-dns-svc\") pod \"dnsmasq-dns-78fd5f455c-cs7bm\" (UID: \"0d506c4b-74c6-436b-a946-daff7ab0f9d6\") " pod="openstack/dnsmasq-dns-78fd5f455c-cs7bm" Nov 22 12:04:51 crc kubenswrapper[4772]: I1122 12:04:51.575812 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0d506c4b-74c6-436b-a946-daff7ab0f9d6-ovsdbserver-sb\") pod \"dnsmasq-dns-78fd5f455c-cs7bm\" (UID: \"0d506c4b-74c6-436b-a946-daff7ab0f9d6\") " pod="openstack/dnsmasq-dns-78fd5f455c-cs7bm" Nov 22 12:04:51 crc kubenswrapper[4772]: I1122 12:04:51.575855 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe5814c6-aab6-41f9-8e03-750d687052e9-combined-ca-bundle\") pod \"keystone-bootstrap-b4bb2\" (UID: \"fe5814c6-aab6-41f9-8e03-750d687052e9\") " pod="openstack/keystone-bootstrap-b4bb2" Nov 22 12:04:51 crc kubenswrapper[4772]: I1122 12:04:51.575913 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe5814c6-aab6-41f9-8e03-750d687052e9-config-data\") pod 
\"keystone-bootstrap-b4bb2\" (UID: \"fe5814c6-aab6-41f9-8e03-750d687052e9\") " pod="openstack/keystone-bootstrap-b4bb2" Nov 22 12:04:51 crc kubenswrapper[4772]: I1122 12:04:51.575947 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fe5814c6-aab6-41f9-8e03-750d687052e9-credential-keys\") pod \"keystone-bootstrap-b4bb2\" (UID: \"fe5814c6-aab6-41f9-8e03-750d687052e9\") " pod="openstack/keystone-bootstrap-b4bb2" Nov 22 12:04:51 crc kubenswrapper[4772]: I1122 12:04:51.575974 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fe5814c6-aab6-41f9-8e03-750d687052e9-fernet-keys\") pod \"keystone-bootstrap-b4bb2\" (UID: \"fe5814c6-aab6-41f9-8e03-750d687052e9\") " pod="openstack/keystone-bootstrap-b4bb2" Nov 22 12:04:51 crc kubenswrapper[4772]: I1122 12:04:51.576103 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d506c4b-74c6-436b-a946-daff7ab0f9d6-config\") pod \"dnsmasq-dns-78fd5f455c-cs7bm\" (UID: \"0d506c4b-74c6-436b-a946-daff7ab0f9d6\") " pod="openstack/dnsmasq-dns-78fd5f455c-cs7bm" Nov 22 12:04:51 crc kubenswrapper[4772]: I1122 12:04:51.576139 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe5814c6-aab6-41f9-8e03-750d687052e9-scripts\") pod \"keystone-bootstrap-b4bb2\" (UID: \"fe5814c6-aab6-41f9-8e03-750d687052e9\") " pod="openstack/keystone-bootstrap-b4bb2" Nov 22 12:04:51 crc kubenswrapper[4772]: I1122 12:04:51.576175 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0d506c4b-74c6-436b-a946-daff7ab0f9d6-ovsdbserver-nb\") pod \"dnsmasq-dns-78fd5f455c-cs7bm\" (UID: \"0d506c4b-74c6-436b-a946-daff7ab0f9d6\") " pod="openstack/dnsmasq-dns-78fd5f455c-cs7bm" Nov 22 12:04:51 crc kubenswrapper[4772]: I1122 12:04:51.576276 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtqfm\" (UniqueName: \"kubernetes.io/projected/0d506c4b-74c6-436b-a946-daff7ab0f9d6-kube-api-access-jtqfm\") pod \"dnsmasq-dns-78fd5f455c-cs7bm\" (UID: \"0d506c4b-74c6-436b-a946-daff7ab0f9d6\") " pod="openstack/dnsmasq-dns-78fd5f455c-cs7bm" Nov 22 12:04:51 crc kubenswrapper[4772]: I1122 12:04:51.577800 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d506c4b-74c6-436b-a946-daff7ab0f9d6-dns-svc\") pod \"dnsmasq-dns-78fd5f455c-cs7bm\" (UID: \"0d506c4b-74c6-436b-a946-daff7ab0f9d6\") " pod="openstack/dnsmasq-dns-78fd5f455c-cs7bm" Nov 22 12:04:51 crc kubenswrapper[4772]: I1122 12:04:51.577815 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0d506c4b-74c6-436b-a946-daff7ab0f9d6-ovsdbserver-sb\") pod \"dnsmasq-dns-78fd5f455c-cs7bm\" (UID: \"0d506c4b-74c6-436b-a946-daff7ab0f9d6\") " pod="openstack/dnsmasq-dns-78fd5f455c-cs7bm" Nov 22 12:04:51 crc kubenswrapper[4772]: I1122 12:04:51.578065 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0d506c4b-74c6-436b-a946-daff7ab0f9d6-ovsdbserver-nb\") pod \"dnsmasq-dns-78fd5f455c-cs7bm\" (UID: \"0d506c4b-74c6-436b-a946-daff7ab0f9d6\") " pod="openstack/dnsmasq-dns-78fd5f455c-cs7bm" Nov 22 12:04:51 crc 
kubenswrapper[4772]: I1122 12:04:51.578378 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d506c4b-74c6-436b-a946-daff7ab0f9d6-config\") pod \"dnsmasq-dns-78fd5f455c-cs7bm\" (UID: \"0d506c4b-74c6-436b-a946-daff7ab0f9d6\") " pod="openstack/dnsmasq-dns-78fd5f455c-cs7bm" Nov 22 12:04:51 crc kubenswrapper[4772]: I1122 12:04:51.583284 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe5814c6-aab6-41f9-8e03-750d687052e9-scripts\") pod \"keystone-bootstrap-b4bb2\" (UID: \"fe5814c6-aab6-41f9-8e03-750d687052e9\") " pod="openstack/keystone-bootstrap-b4bb2" Nov 22 12:04:51 crc kubenswrapper[4772]: I1122 12:04:51.583394 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fe5814c6-aab6-41f9-8e03-750d687052e9-fernet-keys\") pod \"keystone-bootstrap-b4bb2\" (UID: \"fe5814c6-aab6-41f9-8e03-750d687052e9\") " pod="openstack/keystone-bootstrap-b4bb2" Nov 22 12:04:51 crc kubenswrapper[4772]: I1122 12:04:51.583413 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe5814c6-aab6-41f9-8e03-750d687052e9-config-data\") pod \"keystone-bootstrap-b4bb2\" (UID: \"fe5814c6-aab6-41f9-8e03-750d687052e9\") " pod="openstack/keystone-bootstrap-b4bb2" Nov 22 12:04:51 crc kubenswrapper[4772]: I1122 12:04:51.583755 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe5814c6-aab6-41f9-8e03-750d687052e9-combined-ca-bundle\") pod \"keystone-bootstrap-b4bb2\" (UID: \"fe5814c6-aab6-41f9-8e03-750d687052e9\") " pod="openstack/keystone-bootstrap-b4bb2" Nov 22 12:04:51 crc kubenswrapper[4772]: I1122 12:04:51.587559 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fe5814c6-aab6-41f9-8e03-750d687052e9-credential-keys\") pod \"keystone-bootstrap-b4bb2\" (UID: \"fe5814c6-aab6-41f9-8e03-750d687052e9\") " pod="openstack/keystone-bootstrap-b4bb2" Nov 22 12:04:51 crc kubenswrapper[4772]: I1122 12:04:51.600994 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zstqs\" (UniqueName: \"kubernetes.io/projected/fe5814c6-aab6-41f9-8e03-750d687052e9-kube-api-access-zstqs\") pod \"keystone-bootstrap-b4bb2\" (UID: \"fe5814c6-aab6-41f9-8e03-750d687052e9\") " pod="openstack/keystone-bootstrap-b4bb2" Nov 22 12:04:51 crc kubenswrapper[4772]: I1122 12:04:51.601634 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtqfm\" (UniqueName: \"kubernetes.io/projected/0d506c4b-74c6-436b-a946-daff7ab0f9d6-kube-api-access-jtqfm\") pod \"dnsmasq-dns-78fd5f455c-cs7bm\" (UID: \"0d506c4b-74c6-436b-a946-daff7ab0f9d6\") " pod="openstack/dnsmasq-dns-78fd5f455c-cs7bm" Nov 22 12:04:51 crc kubenswrapper[4772]: I1122 12:04:51.605285 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78fd5f455c-cs7bm" Nov 22 12:04:51 crc kubenswrapper[4772]: I1122 12:04:51.637623 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-b4bb2" Nov 22 12:04:52 crc kubenswrapper[4772]: I1122 12:04:52.111164 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78fd5f455c-cs7bm"] Nov 22 12:04:52 crc kubenswrapper[4772]: I1122 12:04:52.168421 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-b4bb2"] Nov 22 12:04:53 crc kubenswrapper[4772]: I1122 12:04:53.036011 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-b4bb2" event={"ID":"fe5814c6-aab6-41f9-8e03-750d687052e9","Type":"ContainerStarted","Data":"2fc17e59d25f805c8a89ccc3dd0361a54be00af89cfae8b73bde753e350b467a"} Nov 22 12:04:53 crc kubenswrapper[4772]: I1122 12:04:53.036344 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-b4bb2" event={"ID":"fe5814c6-aab6-41f9-8e03-750d687052e9","Type":"ContainerStarted","Data":"43b7603cc31a76ce2759737f241dd7004cb944a958488c5e5e6bc7abb6453abe"} Nov 22 12:04:53 crc kubenswrapper[4772]: I1122 12:04:53.039700 4772 generic.go:334] "Generic (PLEG): container finished" podID="0d506c4b-74c6-436b-a946-daff7ab0f9d6" containerID="8438bcedf41aaa01622d40be0c790a688d78795fea580daed8fcf7785ccff3f9" exitCode=0 Nov 22 12:04:53 crc kubenswrapper[4772]: I1122 12:04:53.039750 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78fd5f455c-cs7bm" event={"ID":"0d506c4b-74c6-436b-a946-daff7ab0f9d6","Type":"ContainerDied","Data":"8438bcedf41aaa01622d40be0c790a688d78795fea580daed8fcf7785ccff3f9"} Nov 22 12:04:53 crc kubenswrapper[4772]: I1122 12:04:53.039790 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78fd5f455c-cs7bm" event={"ID":"0d506c4b-74c6-436b-a946-daff7ab0f9d6","Type":"ContainerStarted","Data":"744f9385bb53e0448fc615bfb6bb8e15388a5acf778c5b3bf682196bc7af5edc"} Nov 22 12:04:53 crc kubenswrapper[4772]: I1122 12:04:53.070335 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-b4bb2" podStartSLOduration=2.070308648 podStartE2EDuration="2.070308648s" podCreationTimestamp="2025-11-22 12:04:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:04:53.068154635 +0000 UTC m=+5213.307599129" watchObservedRunningTime="2025-11-22 12:04:53.070308648 +0000 UTC m=+5213.309753132" Nov 22 12:04:54 crc kubenswrapper[4772]: I1122 12:04:54.050923 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78fd5f455c-cs7bm" event={"ID":"0d506c4b-74c6-436b-a946-daff7ab0f9d6","Type":"ContainerStarted","Data":"bedb43f929604ee7f38dffb65aff73ab5dd64492f0fa0076d6be32dc0ffb9c99"} Nov 22 12:04:54 crc kubenswrapper[4772]: I1122 12:04:54.080903 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-78fd5f455c-cs7bm" podStartSLOduration=3.080880633 podStartE2EDuration="3.080880633s" podCreationTimestamp="2025-11-22 12:04:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:04:54.077245624 +0000 UTC m=+5214.316690138" watchObservedRunningTime="2025-11-22 12:04:54.080880633 +0000 UTC m=+5214.320325137" Nov 22 12:04:55 crc kubenswrapper[4772]: I1122 12:04:55.058709 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-78fd5f455c-cs7bm" Nov 22 12:04:56 crc 
kubenswrapper[4772]: I1122 12:04:56.068077 4772 generic.go:334] "Generic (PLEG): container finished" podID="fe5814c6-aab6-41f9-8e03-750d687052e9" containerID="2fc17e59d25f805c8a89ccc3dd0361a54be00af89cfae8b73bde753e350b467a" exitCode=0 Nov 22 12:04:56 crc kubenswrapper[4772]: I1122 12:04:56.068148 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-b4bb2" event={"ID":"fe5814c6-aab6-41f9-8e03-750d687052e9","Type":"ContainerDied","Data":"2fc17e59d25f805c8a89ccc3dd0361a54be00af89cfae8b73bde753e350b467a"} Nov 22 12:04:57 crc kubenswrapper[4772]: I1122 12:04:57.428959 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-b4bb2" Nov 22 12:04:57 crc kubenswrapper[4772]: I1122 12:04:57.593698 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zstqs\" (UniqueName: \"kubernetes.io/projected/fe5814c6-aab6-41f9-8e03-750d687052e9-kube-api-access-zstqs\") pod \"fe5814c6-aab6-41f9-8e03-750d687052e9\" (UID: \"fe5814c6-aab6-41f9-8e03-750d687052e9\") " Nov 22 12:04:57 crc kubenswrapper[4772]: I1122 12:04:57.593752 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe5814c6-aab6-41f9-8e03-750d687052e9-scripts\") pod \"fe5814c6-aab6-41f9-8e03-750d687052e9\" (UID: \"fe5814c6-aab6-41f9-8e03-750d687052e9\") " Nov 22 12:04:57 crc kubenswrapper[4772]: I1122 12:04:57.593826 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fe5814c6-aab6-41f9-8e03-750d687052e9-credential-keys\") pod \"fe5814c6-aab6-41f9-8e03-750d687052e9\" (UID: \"fe5814c6-aab6-41f9-8e03-750d687052e9\") " Nov 22 12:04:57 crc kubenswrapper[4772]: I1122 12:04:57.593853 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe5814c6-aab6-41f9-8e03-750d687052e9-combined-ca-bundle\") pod \"fe5814c6-aab6-41f9-8e03-750d687052e9\" (UID: \"fe5814c6-aab6-41f9-8e03-750d687052e9\") " Nov 22 12:04:57 crc kubenswrapper[4772]: I1122 12:04:57.593951 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe5814c6-aab6-41f9-8e03-750d687052e9-config-data\") pod \"fe5814c6-aab6-41f9-8e03-750d687052e9\" (UID: \"fe5814c6-aab6-41f9-8e03-750d687052e9\") " Nov 22 12:04:57 crc kubenswrapper[4772]: I1122 12:04:57.593995 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fe5814c6-aab6-41f9-8e03-750d687052e9-fernet-keys\") pod \"fe5814c6-aab6-41f9-8e03-750d687052e9\" (UID: \"fe5814c6-aab6-41f9-8e03-750d687052e9\") " Nov 22 12:04:57 crc kubenswrapper[4772]: I1122 12:04:57.600248 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe5814c6-aab6-41f9-8e03-750d687052e9-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "fe5814c6-aab6-41f9-8e03-750d687052e9" (UID: "fe5814c6-aab6-41f9-8e03-750d687052e9"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:04:57 crc kubenswrapper[4772]: I1122 12:04:57.600318 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe5814c6-aab6-41f9-8e03-750d687052e9-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "fe5814c6-aab6-41f9-8e03-750d687052e9" (UID: "fe5814c6-aab6-41f9-8e03-750d687052e9"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:04:57 crc kubenswrapper[4772]: I1122 12:04:57.600898 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe5814c6-aab6-41f9-8e03-750d687052e9-kube-api-access-zstqs" (OuterVolumeSpecName: "kube-api-access-zstqs") pod "fe5814c6-aab6-41f9-8e03-750d687052e9" (UID: "fe5814c6-aab6-41f9-8e03-750d687052e9"). InnerVolumeSpecName "kube-api-access-zstqs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:04:57 crc kubenswrapper[4772]: I1122 12:04:57.603192 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe5814c6-aab6-41f9-8e03-750d687052e9-scripts" (OuterVolumeSpecName: "scripts") pod "fe5814c6-aab6-41f9-8e03-750d687052e9" (UID: "fe5814c6-aab6-41f9-8e03-750d687052e9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:04:57 crc kubenswrapper[4772]: I1122 12:04:57.619976 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe5814c6-aab6-41f9-8e03-750d687052e9-config-data" (OuterVolumeSpecName: "config-data") pod "fe5814c6-aab6-41f9-8e03-750d687052e9" (UID: "fe5814c6-aab6-41f9-8e03-750d687052e9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:04:57 crc kubenswrapper[4772]: I1122 12:04:57.621436 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe5814c6-aab6-41f9-8e03-750d687052e9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fe5814c6-aab6-41f9-8e03-750d687052e9" (UID: "fe5814c6-aab6-41f9-8e03-750d687052e9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:04:57 crc kubenswrapper[4772]: I1122 12:04:57.695787 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe5814c6-aab6-41f9-8e03-750d687052e9-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 12:04:57 crc kubenswrapper[4772]: I1122 12:04:57.695833 4772 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fe5814c6-aab6-41f9-8e03-750d687052e9-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 22 12:04:57 crc kubenswrapper[4772]: I1122 12:04:57.695846 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zstqs\" (UniqueName: \"kubernetes.io/projected/fe5814c6-aab6-41f9-8e03-750d687052e9-kube-api-access-zstqs\") on node \"crc\" DevicePath \"\"" Nov 22 12:04:57 crc kubenswrapper[4772]: I1122 12:04:57.695857 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe5814c6-aab6-41f9-8e03-750d687052e9-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 12:04:57 crc kubenswrapper[4772]: I1122 12:04:57.695867 4772 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fe5814c6-aab6-41f9-8e03-750d687052e9-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 22 12:04:57 crc kubenswrapper[4772]: I1122 12:04:57.695877 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe5814c6-aab6-41f9-8e03-750d687052e9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 12:04:58 crc kubenswrapper[4772]: I1122 12:04:58.086284 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-b4bb2" event={"ID":"fe5814c6-aab6-41f9-8e03-750d687052e9","Type":"ContainerDied","Data":"43b7603cc31a76ce2759737f241dd7004cb944a958488c5e5e6bc7abb6453abe"} Nov 22 12:04:58 crc kubenswrapper[4772]: I1122 12:04:58.086325 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43b7603cc31a76ce2759737f241dd7004cb944a958488c5e5e6bc7abb6453abe" Nov 22 12:04:58 crc kubenswrapper[4772]: I1122 12:04:58.086364 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-b4bb2" Nov 22 12:04:58 crc kubenswrapper[4772]: I1122 12:04:58.177464 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-b4bb2"] Nov 22 12:04:58 crc kubenswrapper[4772]: I1122 12:04:58.184718 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-b4bb2"] Nov 22 12:04:58 crc kubenswrapper[4772]: I1122 12:04:58.267463 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-xlt45"] Nov 22 12:04:58 crc kubenswrapper[4772]: E1122 12:04:58.267903 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe5814c6-aab6-41f9-8e03-750d687052e9" containerName="keystone-bootstrap" Nov 22 12:04:58 crc kubenswrapper[4772]: I1122 12:04:58.267930 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe5814c6-aab6-41f9-8e03-750d687052e9" containerName="keystone-bootstrap" Nov 22 12:04:58 crc kubenswrapper[4772]: I1122 12:04:58.268185 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe5814c6-aab6-41f9-8e03-750d687052e9" containerName="keystone-bootstrap" Nov 22 12:04:58 crc kubenswrapper[4772]: I1122 12:04:58.268836 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xlt45" Nov 22 12:04:58 crc kubenswrapper[4772]: I1122 12:04:58.272044 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-q2nx7" Nov 22 12:04:58 crc kubenswrapper[4772]: I1122 12:04:58.272246 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 22 12:04:58 crc kubenswrapper[4772]: I1122 12:04:58.272418 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 22 12:04:58 crc kubenswrapper[4772]: I1122 12:04:58.272605 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 22 12:04:58 crc kubenswrapper[4772]: I1122 12:04:58.294726 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-xlt45"] Nov 22 12:04:58 crc kubenswrapper[4772]: I1122 12:04:58.406296 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e93f9210-92a8-41df-a182-062f47e6cc84-combined-ca-bundle\") pod \"keystone-bootstrap-xlt45\" (UID: \"e93f9210-92a8-41df-a182-062f47e6cc84\") " pod="openstack/keystone-bootstrap-xlt45" Nov 22 12:04:58 crc kubenswrapper[4772]: I1122 12:04:58.406362 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e93f9210-92a8-41df-a182-062f47e6cc84-scripts\") pod \"keystone-bootstrap-xlt45\" (UID: \"e93f9210-92a8-41df-a182-062f47e6cc84\") " pod="openstack/keystone-bootstrap-xlt45" Nov 22 12:04:58 crc kubenswrapper[4772]: I1122 12:04:58.406423 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e93f9210-92a8-41df-a182-062f47e6cc84-fernet-keys\") pod \"keystone-bootstrap-xlt45\" (UID: \"e93f9210-92a8-41df-a182-062f47e6cc84\") " pod="openstack/keystone-bootstrap-xlt45" Nov 22 12:04:58 crc kubenswrapper[4772]: I1122 12:04:58.406449 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/e93f9210-92a8-41df-a182-062f47e6cc84-credential-keys\") pod \"keystone-bootstrap-xlt45\" (UID: \"e93f9210-92a8-41df-a182-062f47e6cc84\") " pod="openstack/keystone-bootstrap-xlt45" Nov 22 12:04:58 crc kubenswrapper[4772]: I1122 12:04:58.406583 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtg8k\" (UniqueName: \"kubernetes.io/projected/e93f9210-92a8-41df-a182-062f47e6cc84-kube-api-access-jtg8k\") pod \"keystone-bootstrap-xlt45\" (UID: \"e93f9210-92a8-41df-a182-062f47e6cc84\") " pod="openstack/keystone-bootstrap-xlt45" Nov 22 12:04:58 crc kubenswrapper[4772]: I1122 12:04:58.406629 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e93f9210-92a8-41df-a182-062f47e6cc84-config-data\") pod \"keystone-bootstrap-xlt45\" (UID: \"e93f9210-92a8-41df-a182-062f47e6cc84\") " pod="openstack/keystone-bootstrap-xlt45" Nov 22 12:04:58 crc kubenswrapper[4772]: I1122 12:04:58.508365 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e93f9210-92a8-41df-a182-062f47e6cc84-combined-ca-bundle\") pod \"keystone-bootstrap-xlt45\" (UID: \"e93f9210-92a8-41df-a182-062f47e6cc84\") " pod="openstack/keystone-bootstrap-xlt45" Nov 22 12:04:58 crc kubenswrapper[4772]: I1122 12:04:58.509743 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e93f9210-92a8-41df-a182-062f47e6cc84-scripts\") pod \"keystone-bootstrap-xlt45\" (UID: \"e93f9210-92a8-41df-a182-062f47e6cc84\") " pod="openstack/keystone-bootstrap-xlt45" Nov 22 12:04:58 crc kubenswrapper[4772]: I1122 12:04:58.510033 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e93f9210-92a8-41df-a182-062f47e6cc84-fernet-keys\") pod \"keystone-bootstrap-xlt45\" (UID: \"e93f9210-92a8-41df-a182-062f47e6cc84\") " pod="openstack/keystone-bootstrap-xlt45" Nov 22 12:04:58 crc kubenswrapper[4772]: I1122 12:04:58.510302 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e93f9210-92a8-41df-a182-062f47e6cc84-credential-keys\") pod \"keystone-bootstrap-xlt45\" (UID: \"e93f9210-92a8-41df-a182-062f47e6cc84\") " pod="openstack/keystone-bootstrap-xlt45" Nov 22 12:04:58 crc kubenswrapper[4772]: I1122 12:04:58.510569 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtg8k\" (UniqueName: \"kubernetes.io/projected/e93f9210-92a8-41df-a182-062f47e6cc84-kube-api-access-jtg8k\") pod \"keystone-bootstrap-xlt45\" (UID: \"e93f9210-92a8-41df-a182-062f47e6cc84\") " pod="openstack/keystone-bootstrap-xlt45" Nov 22 12:04:58 crc kubenswrapper[4772]: I1122 12:04:58.510858 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e93f9210-92a8-41df-a182-062f47e6cc84-config-data\") pod \"keystone-bootstrap-xlt45\" (UID: \"e93f9210-92a8-41df-a182-062f47e6cc84\") " pod="openstack/keystone-bootstrap-xlt45" Nov 22 12:04:58 crc kubenswrapper[4772]: I1122 12:04:58.514179 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e93f9210-92a8-41df-a182-062f47e6cc84-fernet-keys\") pod \"keystone-bootstrap-xlt45\" 
(UID: \"e93f9210-92a8-41df-a182-062f47e6cc84\") " pod="openstack/keystone-bootstrap-xlt45" Nov 22 12:04:58 crc kubenswrapper[4772]: I1122 12:04:58.515848 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e93f9210-92a8-41df-a182-062f47e6cc84-combined-ca-bundle\") pod \"keystone-bootstrap-xlt45\" (UID: \"e93f9210-92a8-41df-a182-062f47e6cc84\") " pod="openstack/keystone-bootstrap-xlt45" Nov 22 12:04:58 crc kubenswrapper[4772]: I1122 12:04:58.516881 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e93f9210-92a8-41df-a182-062f47e6cc84-scripts\") pod \"keystone-bootstrap-xlt45\" (UID: \"e93f9210-92a8-41df-a182-062f47e6cc84\") " pod="openstack/keystone-bootstrap-xlt45" Nov 22 12:04:58 crc kubenswrapper[4772]: I1122 12:04:58.518160 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e93f9210-92a8-41df-a182-062f47e6cc84-config-data\") pod \"keystone-bootstrap-xlt45\" (UID: \"e93f9210-92a8-41df-a182-062f47e6cc84\") " pod="openstack/keystone-bootstrap-xlt45" Nov 22 12:04:58 crc kubenswrapper[4772]: I1122 12:04:58.524215 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e93f9210-92a8-41df-a182-062f47e6cc84-credential-keys\") pod \"keystone-bootstrap-xlt45\" (UID: \"e93f9210-92a8-41df-a182-062f47e6cc84\") " pod="openstack/keystone-bootstrap-xlt45" Nov 22 12:04:58 crc kubenswrapper[4772]: I1122 12:04:58.531402 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtg8k\" (UniqueName: \"kubernetes.io/projected/e93f9210-92a8-41df-a182-062f47e6cc84-kube-api-access-jtg8k\") pod \"keystone-bootstrap-xlt45\" (UID: \"e93f9210-92a8-41df-a182-062f47e6cc84\") " pod="openstack/keystone-bootstrap-xlt45" Nov 22 12:04:58 crc kubenswrapper[4772]: I1122 12:04:58.593814 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-xlt45" Nov 22 12:04:59 crc kubenswrapper[4772]: I1122 12:04:59.033018 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-xlt45"] Nov 22 12:04:59 crc kubenswrapper[4772]: I1122 12:04:59.105225 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xlt45" event={"ID":"e93f9210-92a8-41df-a182-062f47e6cc84","Type":"ContainerStarted","Data":"25517b24f68b874258460b0d76c35cdcaabd99ec55c0defc66175ed56fdd0fd6"} Nov 22 12:04:59 crc kubenswrapper[4772]: I1122 12:04:59.413553 4772 scope.go:117] "RemoveContainer" containerID="b7c017a2ad8566061573012df3338326de3180226814eea67d7d515c52483472" Nov 22 12:04:59 crc kubenswrapper[4772]: E1122 12:04:59.413860 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:04:59 crc kubenswrapper[4772]: I1122 12:04:59.423723 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe5814c6-aab6-41f9-8e03-750d687052e9" path="/var/lib/kubelet/pods/fe5814c6-aab6-41f9-8e03-750d687052e9/volumes" Nov 22 12:05:00 crc kubenswrapper[4772]: I1122 12:05:00.132944 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xlt45" event={"ID":"e93f9210-92a8-41df-a182-062f47e6cc84","Type":"ContainerStarted","Data":"9a486af948b74a472a31bbc90e7f8140dd0056b001c63b20f5a3783a52bf1aa2"} Nov 22 12:05:00 crc kubenswrapper[4772]: I1122 12:05:00.168315 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-xlt45" podStartSLOduration=2.168288934 podStartE2EDuration="2.168288934s" podCreationTimestamp="2025-11-22 12:04:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:05:00.166544421 +0000 UTC m=+5220.405988935" watchObservedRunningTime="2025-11-22 12:05:00.168288934 +0000 UTC m=+5220.407733428" Nov 22 12:05:01 crc kubenswrapper[4772]: I1122 12:05:01.607252 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-78fd5f455c-cs7bm" Nov 22 12:05:01 crc kubenswrapper[4772]: I1122 12:05:01.676531 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75d748cc57-4mt57"] Nov 22 12:05:01 crc kubenswrapper[4772]: I1122 12:05:01.676778 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-75d748cc57-4mt57" podUID="24373a98-48ad-4a03-a1c0-7b0c366f5a0a" containerName="dnsmasq-dns" containerID="cri-o://b4b8758e0d60fae224833126fa7861be41daeab8f3f7075378ebb7a58bd59b76" gracePeriod=10 Nov 22 12:05:02 crc kubenswrapper[4772]: I1122 12:05:02.154611 4772 generic.go:334] "Generic (PLEG): container finished" podID="24373a98-48ad-4a03-a1c0-7b0c366f5a0a" containerID="b4b8758e0d60fae224833126fa7861be41daeab8f3f7075378ebb7a58bd59b76" exitCode=0 Nov 22 12:05:02 crc kubenswrapper[4772]: I1122 12:05:02.154653 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75d748cc57-4mt57" 
event={"ID":"24373a98-48ad-4a03-a1c0-7b0c366f5a0a","Type":"ContainerDied","Data":"b4b8758e0d60fae224833126fa7861be41daeab8f3f7075378ebb7a58bd59b76"} Nov 22 12:05:02 crc kubenswrapper[4772]: I1122 12:05:02.155012 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75d748cc57-4mt57" event={"ID":"24373a98-48ad-4a03-a1c0-7b0c366f5a0a","Type":"ContainerDied","Data":"e1bcc1e48309d0a664c754bb804865d75ac1adf28ce0d16c0199e3337cee3b99"} Nov 22 12:05:02 crc kubenswrapper[4772]: I1122 12:05:02.155029 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1bcc1e48309d0a664c754bb804865d75ac1adf28ce0d16c0199e3337cee3b99" Nov 22 12:05:02 crc kubenswrapper[4772]: I1122 12:05:02.214863 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75d748cc57-4mt57" Nov 22 12:05:02 crc kubenswrapper[4772]: I1122 12:05:02.336705 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/24373a98-48ad-4a03-a1c0-7b0c366f5a0a-ovsdbserver-sb\") pod \"24373a98-48ad-4a03-a1c0-7b0c366f5a0a\" (UID: \"24373a98-48ad-4a03-a1c0-7b0c366f5a0a\") " Nov 22 12:05:02 crc kubenswrapper[4772]: I1122 12:05:02.336815 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24373a98-48ad-4a03-a1c0-7b0c366f5a0a-dns-svc\") pod \"24373a98-48ad-4a03-a1c0-7b0c366f5a0a\" (UID: \"24373a98-48ad-4a03-a1c0-7b0c366f5a0a\") " Nov 22 12:05:02 crc kubenswrapper[4772]: I1122 12:05:02.336865 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24373a98-48ad-4a03-a1c0-7b0c366f5a0a-config\") pod \"24373a98-48ad-4a03-a1c0-7b0c366f5a0a\" (UID: \"24373a98-48ad-4a03-a1c0-7b0c366f5a0a\") " Nov 22 12:05:02 crc kubenswrapper[4772]: I1122 12:05:02.336912 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/24373a98-48ad-4a03-a1c0-7b0c366f5a0a-ovsdbserver-nb\") pod \"24373a98-48ad-4a03-a1c0-7b0c366f5a0a\" (UID: \"24373a98-48ad-4a03-a1c0-7b0c366f5a0a\") " Nov 22 12:05:02 crc kubenswrapper[4772]: I1122 12:05:02.337543 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kfrp\" (UniqueName: \"kubernetes.io/projected/24373a98-48ad-4a03-a1c0-7b0c366f5a0a-kube-api-access-8kfrp\") pod \"24373a98-48ad-4a03-a1c0-7b0c366f5a0a\" (UID: \"24373a98-48ad-4a03-a1c0-7b0c366f5a0a\") " Nov 22 12:05:02 crc kubenswrapper[4772]: I1122 12:05:02.344065 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24373a98-48ad-4a03-a1c0-7b0c366f5a0a-kube-api-access-8kfrp" (OuterVolumeSpecName: "kube-api-access-8kfrp") pod "24373a98-48ad-4a03-a1c0-7b0c366f5a0a" (UID: "24373a98-48ad-4a03-a1c0-7b0c366f5a0a"). InnerVolumeSpecName "kube-api-access-8kfrp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:05:02 crc kubenswrapper[4772]: I1122 12:05:02.383923 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24373a98-48ad-4a03-a1c0-7b0c366f5a0a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "24373a98-48ad-4a03-a1c0-7b0c366f5a0a" (UID: "24373a98-48ad-4a03-a1c0-7b0c366f5a0a"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 12:05:02 crc kubenswrapper[4772]: I1122 12:05:02.390281 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24373a98-48ad-4a03-a1c0-7b0c366f5a0a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "24373a98-48ad-4a03-a1c0-7b0c366f5a0a" (UID: "24373a98-48ad-4a03-a1c0-7b0c366f5a0a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 12:05:02 crc kubenswrapper[4772]: I1122 12:05:02.418788 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24373a98-48ad-4a03-a1c0-7b0c366f5a0a-config" (OuterVolumeSpecName: "config") pod "24373a98-48ad-4a03-a1c0-7b0c366f5a0a" (UID: "24373a98-48ad-4a03-a1c0-7b0c366f5a0a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 12:05:02 crc kubenswrapper[4772]: I1122 12:05:02.424465 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24373a98-48ad-4a03-a1c0-7b0c366f5a0a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "24373a98-48ad-4a03-a1c0-7b0c366f5a0a" (UID: "24373a98-48ad-4a03-a1c0-7b0c366f5a0a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 12:05:02 crc kubenswrapper[4772]: I1122 12:05:02.440194 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24373a98-48ad-4a03-a1c0-7b0c366f5a0a-config\") on node \"crc\" DevicePath \"\"" Nov 22 12:05:02 crc kubenswrapper[4772]: I1122 12:05:02.440243 4772 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/24373a98-48ad-4a03-a1c0-7b0c366f5a0a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 12:05:02 crc kubenswrapper[4772]: I1122 12:05:02.440260 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8kfrp\" (UniqueName: \"kubernetes.io/projected/24373a98-48ad-4a03-a1c0-7b0c366f5a0a-kube-api-access-8kfrp\") on node \"crc\" DevicePath \"\"" Nov 22 12:05:02 crc kubenswrapper[4772]: I1122 12:05:02.440272 4772 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/24373a98-48ad-4a03-a1c0-7b0c366f5a0a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 22 12:05:02 crc kubenswrapper[4772]: I1122 12:05:02.440284 4772 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24373a98-48ad-4a03-a1c0-7b0c366f5a0a-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 12:05:03 crc kubenswrapper[4772]: I1122 12:05:03.165290 4772 generic.go:334] "Generic (PLEG): container finished" podID="e93f9210-92a8-41df-a182-062f47e6cc84" containerID="9a486af948b74a472a31bbc90e7f8140dd0056b001c63b20f5a3783a52bf1aa2" exitCode=0 Nov 22 12:05:03 crc kubenswrapper[4772]: I1122 12:05:03.165363 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xlt45" event={"ID":"e93f9210-92a8-41df-a182-062f47e6cc84","Type":"ContainerDied","Data":"9a486af948b74a472a31bbc90e7f8140dd0056b001c63b20f5a3783a52bf1aa2"} Nov 22 12:05:03 crc kubenswrapper[4772]: I1122 12:05:03.165392 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75d748cc57-4mt57" Nov 22 12:05:03 crc kubenswrapper[4772]: I1122 12:05:03.210569 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75d748cc57-4mt57"] Nov 22 12:05:03 crc kubenswrapper[4772]: I1122 12:05:03.215840 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75d748cc57-4mt57"] Nov 22 12:05:03 crc kubenswrapper[4772]: I1122 12:05:03.423764 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24373a98-48ad-4a03-a1c0-7b0c366f5a0a" path="/var/lib/kubelet/pods/24373a98-48ad-4a03-a1c0-7b0c366f5a0a/volumes" Nov 22 12:05:04 crc kubenswrapper[4772]: I1122 12:05:04.579248 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xlt45" Nov 22 12:05:04 crc kubenswrapper[4772]: I1122 12:05:04.685473 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtg8k\" (UniqueName: \"kubernetes.io/projected/e93f9210-92a8-41df-a182-062f47e6cc84-kube-api-access-jtg8k\") pod \"e93f9210-92a8-41df-a182-062f47e6cc84\" (UID: \"e93f9210-92a8-41df-a182-062f47e6cc84\") " Nov 22 12:05:04 crc kubenswrapper[4772]: I1122 12:05:04.685552 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e93f9210-92a8-41df-a182-062f47e6cc84-fernet-keys\") pod \"e93f9210-92a8-41df-a182-062f47e6cc84\" (UID: \"e93f9210-92a8-41df-a182-062f47e6cc84\") " Nov 22 12:05:04 crc kubenswrapper[4772]: I1122 12:05:04.685688 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e93f9210-92a8-41df-a182-062f47e6cc84-scripts\") pod \"e93f9210-92a8-41df-a182-062f47e6cc84\" (UID: \"e93f9210-92a8-41df-a182-062f47e6cc84\") " Nov 22 12:05:04 crc kubenswrapper[4772]: I1122 12:05:04.685783 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e93f9210-92a8-41df-a182-062f47e6cc84-combined-ca-bundle\") pod \"e93f9210-92a8-41df-a182-062f47e6cc84\" (UID: \"e93f9210-92a8-41df-a182-062f47e6cc84\") " Nov 22 12:05:04 crc kubenswrapper[4772]: I1122 12:05:04.685816 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e93f9210-92a8-41df-a182-062f47e6cc84-config-data\") pod \"e93f9210-92a8-41df-a182-062f47e6cc84\" (UID: \"e93f9210-92a8-41df-a182-062f47e6cc84\") " Nov 22 12:05:04 crc kubenswrapper[4772]: I1122 12:05:04.685853 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e93f9210-92a8-41df-a182-062f47e6cc84-credential-keys\") pod \"e93f9210-92a8-41df-a182-062f47e6cc84\" (UID: \"e93f9210-92a8-41df-a182-062f47e6cc84\") " Nov 22 12:05:04 crc kubenswrapper[4772]: I1122 12:05:04.702218 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e93f9210-92a8-41df-a182-062f47e6cc84-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "e93f9210-92a8-41df-a182-062f47e6cc84" (UID: "e93f9210-92a8-41df-a182-062f47e6cc84"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:05:04 crc kubenswrapper[4772]: I1122 12:05:04.709749 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e93f9210-92a8-41df-a182-062f47e6cc84-kube-api-access-jtg8k" (OuterVolumeSpecName: "kube-api-access-jtg8k") pod "e93f9210-92a8-41df-a182-062f47e6cc84" (UID: "e93f9210-92a8-41df-a182-062f47e6cc84"). InnerVolumeSpecName "kube-api-access-jtg8k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:05:04 crc kubenswrapper[4772]: I1122 12:05:04.711219 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e93f9210-92a8-41df-a182-062f47e6cc84-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "e93f9210-92a8-41df-a182-062f47e6cc84" (UID: "e93f9210-92a8-41df-a182-062f47e6cc84"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:05:04 crc kubenswrapper[4772]: I1122 12:05:04.713224 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e93f9210-92a8-41df-a182-062f47e6cc84-scripts" (OuterVolumeSpecName: "scripts") pod "e93f9210-92a8-41df-a182-062f47e6cc84" (UID: "e93f9210-92a8-41df-a182-062f47e6cc84"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:05:04 crc kubenswrapper[4772]: I1122 12:05:04.750387 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e93f9210-92a8-41df-a182-062f47e6cc84-config-data" (OuterVolumeSpecName: "config-data") pod "e93f9210-92a8-41df-a182-062f47e6cc84" (UID: "e93f9210-92a8-41df-a182-062f47e6cc84"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:05:04 crc kubenswrapper[4772]: I1122 12:05:04.753193 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e93f9210-92a8-41df-a182-062f47e6cc84-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e93f9210-92a8-41df-a182-062f47e6cc84" (UID: "e93f9210-92a8-41df-a182-062f47e6cc84"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:05:04 crc kubenswrapper[4772]: I1122 12:05:04.791395 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e93f9210-92a8-41df-a182-062f47e6cc84-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 12:05:04 crc kubenswrapper[4772]: I1122 12:05:04.791469 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e93f9210-92a8-41df-a182-062f47e6cc84-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 12:05:04 crc kubenswrapper[4772]: I1122 12:05:04.791486 4772 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e93f9210-92a8-41df-a182-062f47e6cc84-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 22 12:05:04 crc kubenswrapper[4772]: I1122 12:05:04.791502 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtg8k\" (UniqueName: \"kubernetes.io/projected/e93f9210-92a8-41df-a182-062f47e6cc84-kube-api-access-jtg8k\") on node \"crc\" DevicePath \"\"" Nov 22 12:05:04 crc kubenswrapper[4772]: I1122 12:05:04.791518 4772 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e93f9210-92a8-41df-a182-062f47e6cc84-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 22 12:05:04 crc kubenswrapper[4772]: I1122 12:05:04.791580 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e93f9210-92a8-41df-a182-062f47e6cc84-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 12:05:05 crc kubenswrapper[4772]: I1122 12:05:05.185587 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xlt45" event={"ID":"e93f9210-92a8-41df-a182-062f47e6cc84","Type":"ContainerDied","Data":"25517b24f68b874258460b0d76c35cdcaabd99ec55c0defc66175ed56fdd0fd6"} Nov 22 12:05:05 crc kubenswrapper[4772]: I1122 12:05:05.185651 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-xlt45" Nov 22 12:05:05 crc kubenswrapper[4772]: I1122 12:05:05.185674 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25517b24f68b874258460b0d76c35cdcaabd99ec55c0defc66175ed56fdd0fd6" Nov 22 12:05:05 crc kubenswrapper[4772]: I1122 12:05:05.287284 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-856c54c998-qr6df"] Nov 22 12:05:05 crc kubenswrapper[4772]: E1122 12:05:05.287710 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24373a98-48ad-4a03-a1c0-7b0c366f5a0a" containerName="dnsmasq-dns" Nov 22 12:05:05 crc kubenswrapper[4772]: I1122 12:05:05.287739 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="24373a98-48ad-4a03-a1c0-7b0c366f5a0a" containerName="dnsmasq-dns" Nov 22 12:05:05 crc kubenswrapper[4772]: E1122 12:05:05.287785 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e93f9210-92a8-41df-a182-062f47e6cc84" containerName="keystone-bootstrap" Nov 22 12:05:05 crc kubenswrapper[4772]: I1122 12:05:05.287795 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="e93f9210-92a8-41df-a182-062f47e6cc84" containerName="keystone-bootstrap" Nov 22 12:05:05 crc kubenswrapper[4772]: E1122 12:05:05.287806 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24373a98-48ad-4a03-a1c0-7b0c366f5a0a" containerName="init" Nov 22 12:05:05 crc kubenswrapper[4772]: I1122 12:05:05.287814 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="24373a98-48ad-4a03-a1c0-7b0c366f5a0a" containerName="init" Nov 22 12:05:05 crc kubenswrapper[4772]: I1122 12:05:05.287999 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="e93f9210-92a8-41df-a182-062f47e6cc84" containerName="keystone-bootstrap" Nov 22 12:05:05 crc kubenswrapper[4772]: I1122 12:05:05.288040 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="24373a98-48ad-4a03-a1c0-7b0c366f5a0a" containerName="dnsmasq-dns" Nov 22 12:05:05 crc kubenswrapper[4772]: I1122 12:05:05.288837 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-856c54c998-qr6df" Nov 22 12:05:05 crc kubenswrapper[4772]: I1122 12:05:05.291206 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 22 12:05:05 crc kubenswrapper[4772]: I1122 12:05:05.291502 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-q2nx7" Nov 22 12:05:05 crc kubenswrapper[4772]: I1122 12:05:05.292793 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 22 12:05:05 crc kubenswrapper[4772]: I1122 12:05:05.297727 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 22 12:05:05 crc kubenswrapper[4772]: I1122 12:05:05.299797 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-856c54c998-qr6df"] Nov 22 12:05:05 crc kubenswrapper[4772]: I1122 12:05:05.399852 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/abe96e9e-94c7-43ba-b37d-bec6a589c004-fernet-keys\") pod \"keystone-856c54c998-qr6df\" (UID: \"abe96e9e-94c7-43ba-b37d-bec6a589c004\") " pod="openstack/keystone-856c54c998-qr6df" Nov 22 12:05:05 crc kubenswrapper[4772]: I1122 12:05:05.399920 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/abe96e9e-94c7-43ba-b37d-bec6a589c004-credential-keys\") pod \"keystone-856c54c998-qr6df\" (UID: \"abe96e9e-94c7-43ba-b37d-bec6a589c004\") " pod="openstack/keystone-856c54c998-qr6df" Nov 22 12:05:05 crc kubenswrapper[4772]: I1122 12:05:05.399965 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/abe96e9e-94c7-43ba-b37d-bec6a589c004-scripts\") pod \"keystone-856c54c998-qr6df\" (UID: \"abe96e9e-94c7-43ba-b37d-bec6a589c004\") " pod="openstack/keystone-856c54c998-qr6df" Nov 22 12:05:05 crc kubenswrapper[4772]: I1122 12:05:05.399983 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abe96e9e-94c7-43ba-b37d-bec6a589c004-combined-ca-bundle\") pod \"keystone-856c54c998-qr6df\" (UID: \"abe96e9e-94c7-43ba-b37d-bec6a589c004\") " pod="openstack/keystone-856c54c998-qr6df" Nov 22 12:05:05 crc kubenswrapper[4772]: I1122 12:05:05.400001 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nq7wg\" (UniqueName: \"kubernetes.io/projected/abe96e9e-94c7-43ba-b37d-bec6a589c004-kube-api-access-nq7wg\") pod \"keystone-856c54c998-qr6df\" (UID: \"abe96e9e-94c7-43ba-b37d-bec6a589c004\") " pod="openstack/keystone-856c54c998-qr6df" Nov 22 12:05:05 crc kubenswrapper[4772]: I1122 12:05:05.400028 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abe96e9e-94c7-43ba-b37d-bec6a589c004-config-data\") pod \"keystone-856c54c998-qr6df\" (UID: \"abe96e9e-94c7-43ba-b37d-bec6a589c004\") " pod="openstack/keystone-856c54c998-qr6df" Nov 22 12:05:05 crc kubenswrapper[4772]: I1122 12:05:05.501953 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/abe96e9e-94c7-43ba-b37d-bec6a589c004-credential-keys\") pod 
\"keystone-856c54c998-qr6df\" (UID: \"abe96e9e-94c7-43ba-b37d-bec6a589c004\") " pod="openstack/keystone-856c54c998-qr6df" Nov 22 12:05:05 crc kubenswrapper[4772]: I1122 12:05:05.502030 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/abe96e9e-94c7-43ba-b37d-bec6a589c004-scripts\") pod \"keystone-856c54c998-qr6df\" (UID: \"abe96e9e-94c7-43ba-b37d-bec6a589c004\") " pod="openstack/keystone-856c54c998-qr6df" Nov 22 12:05:05 crc kubenswrapper[4772]: I1122 12:05:05.502071 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abe96e9e-94c7-43ba-b37d-bec6a589c004-combined-ca-bundle\") pod \"keystone-856c54c998-qr6df\" (UID: \"abe96e9e-94c7-43ba-b37d-bec6a589c004\") " pod="openstack/keystone-856c54c998-qr6df" Nov 22 12:05:05 crc kubenswrapper[4772]: I1122 12:05:05.502097 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nq7wg\" (UniqueName: \"kubernetes.io/projected/abe96e9e-94c7-43ba-b37d-bec6a589c004-kube-api-access-nq7wg\") pod \"keystone-856c54c998-qr6df\" (UID: \"abe96e9e-94c7-43ba-b37d-bec6a589c004\") " pod="openstack/keystone-856c54c998-qr6df" Nov 22 12:05:05 crc kubenswrapper[4772]: I1122 12:05:05.502119 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abe96e9e-94c7-43ba-b37d-bec6a589c004-config-data\") pod \"keystone-856c54c998-qr6df\" (UID: \"abe96e9e-94c7-43ba-b37d-bec6a589c004\") " pod="openstack/keystone-856c54c998-qr6df" Nov 22 12:05:05 crc kubenswrapper[4772]: I1122 12:05:05.502181 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/abe96e9e-94c7-43ba-b37d-bec6a589c004-fernet-keys\") pod \"keystone-856c54c998-qr6df\" (UID: \"abe96e9e-94c7-43ba-b37d-bec6a589c004\") " pod="openstack/keystone-856c54c998-qr6df" Nov 22 12:05:05 crc kubenswrapper[4772]: I1122 12:05:05.506850 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/abe96e9e-94c7-43ba-b37d-bec6a589c004-credential-keys\") pod \"keystone-856c54c998-qr6df\" (UID: \"abe96e9e-94c7-43ba-b37d-bec6a589c004\") " pod="openstack/keystone-856c54c998-qr6df" Nov 22 12:05:05 crc kubenswrapper[4772]: I1122 12:05:05.506960 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/abe96e9e-94c7-43ba-b37d-bec6a589c004-scripts\") pod \"keystone-856c54c998-qr6df\" (UID: \"abe96e9e-94c7-43ba-b37d-bec6a589c004\") " pod="openstack/keystone-856c54c998-qr6df" Nov 22 12:05:05 crc kubenswrapper[4772]: I1122 12:05:05.507087 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abe96e9e-94c7-43ba-b37d-bec6a589c004-config-data\") pod \"keystone-856c54c998-qr6df\" (UID: \"abe96e9e-94c7-43ba-b37d-bec6a589c004\") " pod="openstack/keystone-856c54c998-qr6df" Nov 22 12:05:05 crc kubenswrapper[4772]: I1122 12:05:05.507840 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abe96e9e-94c7-43ba-b37d-bec6a589c004-combined-ca-bundle\") pod \"keystone-856c54c998-qr6df\" (UID: \"abe96e9e-94c7-43ba-b37d-bec6a589c004\") " pod="openstack/keystone-856c54c998-qr6df" Nov 22 12:05:05 crc kubenswrapper[4772]: I1122 
12:05:05.516149 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/abe96e9e-94c7-43ba-b37d-bec6a589c004-fernet-keys\") pod \"keystone-856c54c998-qr6df\" (UID: \"abe96e9e-94c7-43ba-b37d-bec6a589c004\") " pod="openstack/keystone-856c54c998-qr6df" Nov 22 12:05:05 crc kubenswrapper[4772]: I1122 12:05:05.525688 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nq7wg\" (UniqueName: \"kubernetes.io/projected/abe96e9e-94c7-43ba-b37d-bec6a589c004-kube-api-access-nq7wg\") pod \"keystone-856c54c998-qr6df\" (UID: \"abe96e9e-94c7-43ba-b37d-bec6a589c004\") " pod="openstack/keystone-856c54c998-qr6df" Nov 22 12:05:05 crc kubenswrapper[4772]: I1122 12:05:05.610914 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-856c54c998-qr6df" Nov 22 12:05:06 crc kubenswrapper[4772]: I1122 12:05:06.052226 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-856c54c998-qr6df"] Nov 22 12:05:06 crc kubenswrapper[4772]: I1122 12:05:06.194032 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-856c54c998-qr6df" event={"ID":"abe96e9e-94c7-43ba-b37d-bec6a589c004","Type":"ContainerStarted","Data":"07a97e7c49e486696448956e891fbfb5100b433283c435f78d0e5b351b7474b0"} Nov 22 12:05:07 crc kubenswrapper[4772]: I1122 12:05:07.219573 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-856c54c998-qr6df" event={"ID":"abe96e9e-94c7-43ba-b37d-bec6a589c004","Type":"ContainerStarted","Data":"9b7a6ef71cea424cbe3667c0b9b6bc82701bbed7f03aab29e9ec3471c548ed58"} Nov 22 12:05:07 crc kubenswrapper[4772]: I1122 12:05:07.221174 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-856c54c998-qr6df" Nov 22 12:05:07 crc kubenswrapper[4772]: I1122 12:05:07.242532 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-856c54c998-qr6df" podStartSLOduration=2.242513336 podStartE2EDuration="2.242513336s" podCreationTimestamp="2025-11-22 12:05:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:05:07.236249633 +0000 UTC m=+5227.475694147" watchObservedRunningTime="2025-11-22 12:05:07.242513336 +0000 UTC m=+5227.481957850" Nov 22 12:05:10 crc kubenswrapper[4772]: I1122 12:05:10.413811 4772 scope.go:117] "RemoveContainer" containerID="b7c017a2ad8566061573012df3338326de3180226814eea67d7d515c52483472" Nov 22 12:05:10 crc kubenswrapper[4772]: E1122 12:05:10.414563 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:05:10 crc kubenswrapper[4772]: I1122 12:05:10.978575 4772 scope.go:117] "RemoveContainer" containerID="c2ab57c690078e98e08f619738ab821d658a520e461a345900964104e83ad7ac" Nov 22 12:05:11 crc kubenswrapper[4772]: I1122 12:05:11.001491 4772 scope.go:117] "RemoveContainer" containerID="382f5dc50358e4c4c2f7c13ad7bf635d236e3d34fffab7c5642d0c52189fdb7f" Nov 22 12:05:11 crc kubenswrapper[4772]: I1122 12:05:11.035601 4772 scope.go:117] "RemoveContainer" 
containerID="40dcc86ebd8ece0a166a125b109c4027f3aea2e6137ecca0ae1e285c0a3d0a67" Nov 22 12:05:11 crc kubenswrapper[4772]: I1122 12:05:11.075668 4772 scope.go:117] "RemoveContainer" containerID="dc2efb42d7ab7681aba3fe33932da0a94bc10e6dfb84a4ee4210f83d99652036" Nov 22 12:05:11 crc kubenswrapper[4772]: I1122 12:05:11.104209 4772 scope.go:117] "RemoveContainer" containerID="8c9e4b52463696717e66a5609483be6a6703f7c6cbca99b848045f5fcf7735dc" Nov 22 12:05:24 crc kubenswrapper[4772]: I1122 12:05:24.414657 4772 scope.go:117] "RemoveContainer" containerID="b7c017a2ad8566061573012df3338326de3180226814eea67d7d515c52483472" Nov 22 12:05:24 crc kubenswrapper[4772]: E1122 12:05:24.415572 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:05:37 crc kubenswrapper[4772]: I1122 12:05:37.183481 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-856c54c998-qr6df" Nov 22 12:05:38 crc kubenswrapper[4772]: I1122 12:05:38.414152 4772 scope.go:117] "RemoveContainer" containerID="b7c017a2ad8566061573012df3338326de3180226814eea67d7d515c52483472" Nov 22 12:05:38 crc kubenswrapper[4772]: E1122 12:05:38.414667 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:05:40 crc kubenswrapper[4772]: I1122 12:05:40.231400 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 22 12:05:40 crc kubenswrapper[4772]: I1122 12:05:40.233466 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 22 12:05:40 crc kubenswrapper[4772]: I1122 12:05:40.236693 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Nov 22 12:05:40 crc kubenswrapper[4772]: I1122 12:05:40.238761 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Nov 22 12:05:40 crc kubenswrapper[4772]: I1122 12:05:40.239367 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-rml5m" Nov 22 12:05:40 crc kubenswrapper[4772]: I1122 12:05:40.248599 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 22 12:05:40 crc kubenswrapper[4772]: I1122 12:05:40.262349 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xl5b\" (UniqueName: \"kubernetes.io/projected/242432e7-1b75-4aeb-9ea7-c8c790f242a9-kube-api-access-8xl5b\") pod \"openstackclient\" (UID: \"242432e7-1b75-4aeb-9ea7-c8c790f242a9\") " pod="openstack/openstackclient" Nov 22 12:05:40 crc kubenswrapper[4772]: I1122 12:05:40.262430 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/242432e7-1b75-4aeb-9ea7-c8c790f242a9-openstack-config-secret\") pod \"openstackclient\" (UID: \"242432e7-1b75-4aeb-9ea7-c8c790f242a9\") " pod="openstack/openstackclient" Nov 22 12:05:40 crc kubenswrapper[4772]: I1122 12:05:40.262496 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/242432e7-1b75-4aeb-9ea7-c8c790f242a9-openstack-config\") pod \"openstackclient\" (UID: \"242432e7-1b75-4aeb-9ea7-c8c790f242a9\") " pod="openstack/openstackclient" Nov 22 12:05:40 crc kubenswrapper[4772]: I1122 12:05:40.364524 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/242432e7-1b75-4aeb-9ea7-c8c790f242a9-openstack-config-secret\") pod \"openstackclient\" (UID: \"242432e7-1b75-4aeb-9ea7-c8c790f242a9\") " pod="openstack/openstackclient" Nov 22 12:05:40 crc kubenswrapper[4772]: I1122 12:05:40.365423 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/242432e7-1b75-4aeb-9ea7-c8c790f242a9-openstack-config\") pod \"openstackclient\" (UID: \"242432e7-1b75-4aeb-9ea7-c8c790f242a9\") " pod="openstack/openstackclient" Nov 22 12:05:40 crc kubenswrapper[4772]: I1122 12:05:40.365884 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xl5b\" (UniqueName: \"kubernetes.io/projected/242432e7-1b75-4aeb-9ea7-c8c790f242a9-kube-api-access-8xl5b\") pod \"openstackclient\" (UID: \"242432e7-1b75-4aeb-9ea7-c8c790f242a9\") " pod="openstack/openstackclient" Nov 22 12:05:40 crc kubenswrapper[4772]: I1122 12:05:40.367577 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/242432e7-1b75-4aeb-9ea7-c8c790f242a9-openstack-config\") pod \"openstackclient\" (UID: \"242432e7-1b75-4aeb-9ea7-c8c790f242a9\") " pod="openstack/openstackclient" Nov 22 12:05:40 crc kubenswrapper[4772]: I1122 12:05:40.385837 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/242432e7-1b75-4aeb-9ea7-c8c790f242a9-openstack-config-secret\") pod \"openstackclient\" (UID: \"242432e7-1b75-4aeb-9ea7-c8c790f242a9\") " pod="openstack/openstackclient" Nov 22 12:05:40 crc kubenswrapper[4772]: I1122 12:05:40.389976 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xl5b\" (UniqueName: \"kubernetes.io/projected/242432e7-1b75-4aeb-9ea7-c8c790f242a9-kube-api-access-8xl5b\") pod \"openstackclient\" (UID: \"242432e7-1b75-4aeb-9ea7-c8c790f242a9\") " pod="openstack/openstackclient" Nov 22 12:05:40 crc kubenswrapper[4772]: I1122 12:05:40.569668 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 22 12:05:41 crc kubenswrapper[4772]: I1122 12:05:41.079002 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 22 12:05:41 crc kubenswrapper[4772]: I1122 12:05:41.538598 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"242432e7-1b75-4aeb-9ea7-c8c790f242a9","Type":"ContainerStarted","Data":"e59d13e4a70c95777103cbe263f159518f98532e57de53b8c01e007e1c65a7a0"} Nov 22 12:05:41 crc kubenswrapper[4772]: I1122 12:05:41.538682 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"242432e7-1b75-4aeb-9ea7-c8c790f242a9","Type":"ContainerStarted","Data":"d2a8b104477cfacdaabb492ccbe81ac96d12ce957e5f83b63ddaf21cd49a60b2"} Nov 22 12:05:41 crc kubenswrapper[4772]: I1122 12:05:41.562216 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=1.562190403 podStartE2EDuration="1.562190403s" podCreationTimestamp="2025-11-22 12:05:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:05:41.560404639 +0000 UTC m=+5261.799849143" watchObservedRunningTime="2025-11-22 12:05:41.562190403 +0000 UTC m=+5261.801634907" Nov 22 12:05:53 crc kubenswrapper[4772]: I1122 12:05:53.414452 4772 scope.go:117] "RemoveContainer" containerID="b7c017a2ad8566061573012df3338326de3180226814eea67d7d515c52483472" Nov 22 12:05:53 crc kubenswrapper[4772]: E1122 12:05:53.416263 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:06:05 crc kubenswrapper[4772]: I1122 12:06:05.517177 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-b7kdq"] Nov 22 12:06:05 crc kubenswrapper[4772]: I1122 12:06:05.519470 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-b7kdq" Nov 22 12:06:05 crc kubenswrapper[4772]: I1122 12:06:05.545148 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b7kdq"] Nov 22 12:06:05 crc kubenswrapper[4772]: I1122 12:06:05.589142 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca40b09c-6a21-4d64-afcb-3d510b26a60b-utilities\") pod \"redhat-operators-b7kdq\" (UID: \"ca40b09c-6a21-4d64-afcb-3d510b26a60b\") " pod="openshift-marketplace/redhat-operators-b7kdq" Nov 22 12:06:05 crc kubenswrapper[4772]: I1122 12:06:05.589197 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca40b09c-6a21-4d64-afcb-3d510b26a60b-catalog-content\") pod \"redhat-operators-b7kdq\" (UID: \"ca40b09c-6a21-4d64-afcb-3d510b26a60b\") " pod="openshift-marketplace/redhat-operators-b7kdq" Nov 22 12:06:05 crc kubenswrapper[4772]: I1122 12:06:05.589465 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnsjl\" (UniqueName: \"kubernetes.io/projected/ca40b09c-6a21-4d64-afcb-3d510b26a60b-kube-api-access-fnsjl\") pod \"redhat-operators-b7kdq\" (UID: \"ca40b09c-6a21-4d64-afcb-3d510b26a60b\") " pod="openshift-marketplace/redhat-operators-b7kdq" Nov 22 12:06:05 crc kubenswrapper[4772]: I1122 12:06:05.691899 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca40b09c-6a21-4d64-afcb-3d510b26a60b-utilities\") pod \"redhat-operators-b7kdq\" (UID: \"ca40b09c-6a21-4d64-afcb-3d510b26a60b\") " pod="openshift-marketplace/redhat-operators-b7kdq" Nov 22 12:06:05 crc kubenswrapper[4772]: I1122 12:06:05.692449 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca40b09c-6a21-4d64-afcb-3d510b26a60b-catalog-content\") pod \"redhat-operators-b7kdq\" (UID: \"ca40b09c-6a21-4d64-afcb-3d510b26a60b\") " pod="openshift-marketplace/redhat-operators-b7kdq" Nov 22 12:06:05 crc kubenswrapper[4772]: I1122 12:06:05.692534 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnsjl\" (UniqueName: \"kubernetes.io/projected/ca40b09c-6a21-4d64-afcb-3d510b26a60b-kube-api-access-fnsjl\") pod \"redhat-operators-b7kdq\" (UID: \"ca40b09c-6a21-4d64-afcb-3d510b26a60b\") " pod="openshift-marketplace/redhat-operators-b7kdq" Nov 22 12:06:05 crc kubenswrapper[4772]: I1122 12:06:05.693404 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca40b09c-6a21-4d64-afcb-3d510b26a60b-utilities\") pod \"redhat-operators-b7kdq\" (UID: \"ca40b09c-6a21-4d64-afcb-3d510b26a60b\") " pod="openshift-marketplace/redhat-operators-b7kdq" Nov 22 12:06:05 crc kubenswrapper[4772]: I1122 12:06:05.693678 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca40b09c-6a21-4d64-afcb-3d510b26a60b-catalog-content\") pod \"redhat-operators-b7kdq\" (UID: \"ca40b09c-6a21-4d64-afcb-3d510b26a60b\") " pod="openshift-marketplace/redhat-operators-b7kdq" Nov 22 12:06:05 crc kubenswrapper[4772]: I1122 12:06:05.713689 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-fnsjl\" (UniqueName: \"kubernetes.io/projected/ca40b09c-6a21-4d64-afcb-3d510b26a60b-kube-api-access-fnsjl\") pod \"redhat-operators-b7kdq\" (UID: \"ca40b09c-6a21-4d64-afcb-3d510b26a60b\") " pod="openshift-marketplace/redhat-operators-b7kdq" Nov 22 12:06:05 crc kubenswrapper[4772]: I1122 12:06:05.853707 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b7kdq" Nov 22 12:06:06 crc kubenswrapper[4772]: I1122 12:06:06.368461 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b7kdq"] Nov 22 12:06:06 crc kubenswrapper[4772]: I1122 12:06:06.811389 4772 generic.go:334] "Generic (PLEG): container finished" podID="ca40b09c-6a21-4d64-afcb-3d510b26a60b" containerID="4708cdc92b78b4ab08e9ba716d97f28b42d3aa1ae521caba9aa0d498cbb59369" exitCode=0 Nov 22 12:06:06 crc kubenswrapper[4772]: I1122 12:06:06.811489 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b7kdq" event={"ID":"ca40b09c-6a21-4d64-afcb-3d510b26a60b","Type":"ContainerDied","Data":"4708cdc92b78b4ab08e9ba716d97f28b42d3aa1ae521caba9aa0d498cbb59369"} Nov 22 12:06:06 crc kubenswrapper[4772]: I1122 12:06:06.811860 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b7kdq" event={"ID":"ca40b09c-6a21-4d64-afcb-3d510b26a60b","Type":"ContainerStarted","Data":"5e0f43b8ccc97d31ee0a57f8dc8859d1c504c1bd94a5d3ebbffbb83d27dfa546"} Nov 22 12:06:07 crc kubenswrapper[4772]: I1122 12:06:07.824974 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b7kdq" event={"ID":"ca40b09c-6a21-4d64-afcb-3d510b26a60b","Type":"ContainerStarted","Data":"fb81c863f0c41732169fd38bd6faa320173632fea2d9b7952388a0d57f03aac6"} Nov 22 12:06:08 crc kubenswrapper[4772]: I1122 12:06:08.414818 4772 scope.go:117] "RemoveContainer" containerID="b7c017a2ad8566061573012df3338326de3180226814eea67d7d515c52483472" Nov 22 12:06:08 crc kubenswrapper[4772]: E1122 12:06:08.415324 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:06:08 crc kubenswrapper[4772]: I1122 12:06:08.836078 4772 generic.go:334] "Generic (PLEG): container finished" podID="ca40b09c-6a21-4d64-afcb-3d510b26a60b" containerID="fb81c863f0c41732169fd38bd6faa320173632fea2d9b7952388a0d57f03aac6" exitCode=0 Nov 22 12:06:08 crc kubenswrapper[4772]: I1122 12:06:08.836180 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b7kdq" event={"ID":"ca40b09c-6a21-4d64-afcb-3d510b26a60b","Type":"ContainerDied","Data":"fb81c863f0c41732169fd38bd6faa320173632fea2d9b7952388a0d57f03aac6"} Nov 22 12:06:09 crc kubenswrapper[4772]: I1122 12:06:09.850688 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b7kdq" event={"ID":"ca40b09c-6a21-4d64-afcb-3d510b26a60b","Type":"ContainerStarted","Data":"1306d96c7a8ca8fcd8101b210ee867fe261b0451ed6d95da78d8f19e864ec377"} Nov 22 12:06:09 crc kubenswrapper[4772]: I1122 12:06:09.875402 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-b7kdq" podStartSLOduration=2.352062459 podStartE2EDuration="4.875381255s" podCreationTimestamp="2025-11-22 12:06:05 +0000 UTC" firstStartedPulling="2025-11-22 12:06:06.813933249 +0000 UTC m=+5287.053377743" lastFinishedPulling="2025-11-22 12:06:09.337252045 +0000 UTC m=+5289.576696539" observedRunningTime="2025-11-22 12:06:09.869918401 +0000 UTC m=+5290.109362895" watchObservedRunningTime="2025-11-22 12:06:09.875381255 +0000 UTC m=+5290.114825749" Nov 22 12:06:15 crc kubenswrapper[4772]: I1122 12:06:15.855126 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-b7kdq" Nov 22 12:06:15 crc kubenswrapper[4772]: I1122 12:06:15.857509 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-b7kdq" Nov 22 12:06:15 crc kubenswrapper[4772]: I1122 12:06:15.903084 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-b7kdq" Nov 22 12:06:15 crc kubenswrapper[4772]: I1122 12:06:15.955130 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-b7kdq" Nov 22 12:06:16 crc kubenswrapper[4772]: I1122 12:06:16.139571 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b7kdq"] Nov 22 12:06:17 crc kubenswrapper[4772]: I1122 12:06:17.924265 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-b7kdq" podUID="ca40b09c-6a21-4d64-afcb-3d510b26a60b" containerName="registry-server" containerID="cri-o://1306d96c7a8ca8fcd8101b210ee867fe261b0451ed6d95da78d8f19e864ec377" gracePeriod=2 Nov 22 12:06:18 crc kubenswrapper[4772]: I1122 12:06:18.439493 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b7kdq" Nov 22 12:06:18 crc kubenswrapper[4772]: I1122 12:06:18.515307 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca40b09c-6a21-4d64-afcb-3d510b26a60b-catalog-content\") pod \"ca40b09c-6a21-4d64-afcb-3d510b26a60b\" (UID: \"ca40b09c-6a21-4d64-afcb-3d510b26a60b\") " Nov 22 12:06:18 crc kubenswrapper[4772]: I1122 12:06:18.515442 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnsjl\" (UniqueName: \"kubernetes.io/projected/ca40b09c-6a21-4d64-afcb-3d510b26a60b-kube-api-access-fnsjl\") pod \"ca40b09c-6a21-4d64-afcb-3d510b26a60b\" (UID: \"ca40b09c-6a21-4d64-afcb-3d510b26a60b\") " Nov 22 12:06:18 crc kubenswrapper[4772]: I1122 12:06:18.515490 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca40b09c-6a21-4d64-afcb-3d510b26a60b-utilities\") pod \"ca40b09c-6a21-4d64-afcb-3d510b26a60b\" (UID: \"ca40b09c-6a21-4d64-afcb-3d510b26a60b\") " Nov 22 12:06:18 crc kubenswrapper[4772]: I1122 12:06:18.516591 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca40b09c-6a21-4d64-afcb-3d510b26a60b-utilities" (OuterVolumeSpecName: "utilities") pod "ca40b09c-6a21-4d64-afcb-3d510b26a60b" (UID: "ca40b09c-6a21-4d64-afcb-3d510b26a60b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 12:06:18 crc kubenswrapper[4772]: I1122 12:06:18.528253 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca40b09c-6a21-4d64-afcb-3d510b26a60b-kube-api-access-fnsjl" (OuterVolumeSpecName: "kube-api-access-fnsjl") pod "ca40b09c-6a21-4d64-afcb-3d510b26a60b" (UID: "ca40b09c-6a21-4d64-afcb-3d510b26a60b"). InnerVolumeSpecName "kube-api-access-fnsjl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:06:18 crc kubenswrapper[4772]: I1122 12:06:18.616732 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fnsjl\" (UniqueName: \"kubernetes.io/projected/ca40b09c-6a21-4d64-afcb-3d510b26a60b-kube-api-access-fnsjl\") on node \"crc\" DevicePath \"\"" Nov 22 12:06:18 crc kubenswrapper[4772]: I1122 12:06:18.616767 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca40b09c-6a21-4d64-afcb-3d510b26a60b-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 12:06:18 crc kubenswrapper[4772]: I1122 12:06:18.941155 4772 generic.go:334] "Generic (PLEG): container finished" podID="ca40b09c-6a21-4d64-afcb-3d510b26a60b" containerID="1306d96c7a8ca8fcd8101b210ee867fe261b0451ed6d95da78d8f19e864ec377" exitCode=0 Nov 22 12:06:18 crc kubenswrapper[4772]: I1122 12:06:18.941254 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b7kdq" Nov 22 12:06:18 crc kubenswrapper[4772]: I1122 12:06:18.941276 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b7kdq" event={"ID":"ca40b09c-6a21-4d64-afcb-3d510b26a60b","Type":"ContainerDied","Data":"1306d96c7a8ca8fcd8101b210ee867fe261b0451ed6d95da78d8f19e864ec377"} Nov 22 12:06:18 crc kubenswrapper[4772]: I1122 12:06:18.941347 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b7kdq" event={"ID":"ca40b09c-6a21-4d64-afcb-3d510b26a60b","Type":"ContainerDied","Data":"5e0f43b8ccc97d31ee0a57f8dc8859d1c504c1bd94a5d3ebbffbb83d27dfa546"} Nov 22 12:06:18 crc kubenswrapper[4772]: I1122 12:06:18.941395 4772 scope.go:117] "RemoveContainer" containerID="1306d96c7a8ca8fcd8101b210ee867fe261b0451ed6d95da78d8f19e864ec377" Nov 22 12:06:18 crc kubenswrapper[4772]: I1122 12:06:18.980751 4772 scope.go:117] "RemoveContainer" containerID="fb81c863f0c41732169fd38bd6faa320173632fea2d9b7952388a0d57f03aac6" Nov 22 12:06:19 crc kubenswrapper[4772]: I1122 12:06:19.016145 4772 scope.go:117] "RemoveContainer" containerID="4708cdc92b78b4ab08e9ba716d97f28b42d3aa1ae521caba9aa0d498cbb59369" Nov 22 12:06:19 crc kubenswrapper[4772]: I1122 12:06:19.088320 4772 scope.go:117] "RemoveContainer" containerID="1306d96c7a8ca8fcd8101b210ee867fe261b0451ed6d95da78d8f19e864ec377" Nov 22 12:06:19 crc kubenswrapper[4772]: E1122 12:06:19.090404 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1306d96c7a8ca8fcd8101b210ee867fe261b0451ed6d95da78d8f19e864ec377\": container with ID starting with 1306d96c7a8ca8fcd8101b210ee867fe261b0451ed6d95da78d8f19e864ec377 not found: ID does not exist" containerID="1306d96c7a8ca8fcd8101b210ee867fe261b0451ed6d95da78d8f19e864ec377" Nov 22 12:06:19 crc kubenswrapper[4772]: I1122 12:06:19.090469 4772 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1306d96c7a8ca8fcd8101b210ee867fe261b0451ed6d95da78d8f19e864ec377"} err="failed to get container status \"1306d96c7a8ca8fcd8101b210ee867fe261b0451ed6d95da78d8f19e864ec377\": rpc error: code = NotFound desc = could not find container \"1306d96c7a8ca8fcd8101b210ee867fe261b0451ed6d95da78d8f19e864ec377\": container with ID starting with 1306d96c7a8ca8fcd8101b210ee867fe261b0451ed6d95da78d8f19e864ec377 not found: ID does not exist" Nov 22 12:06:19 crc kubenswrapper[4772]: I1122 12:06:19.090506 4772 scope.go:117] "RemoveContainer" containerID="fb81c863f0c41732169fd38bd6faa320173632fea2d9b7952388a0d57f03aac6" Nov 22 12:06:19 crc kubenswrapper[4772]: E1122 12:06:19.091394 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb81c863f0c41732169fd38bd6faa320173632fea2d9b7952388a0d57f03aac6\": container with ID starting with fb81c863f0c41732169fd38bd6faa320173632fea2d9b7952388a0d57f03aac6 not found: ID does not exist" containerID="fb81c863f0c41732169fd38bd6faa320173632fea2d9b7952388a0d57f03aac6" Nov 22 12:06:19 crc kubenswrapper[4772]: I1122 12:06:19.091455 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb81c863f0c41732169fd38bd6faa320173632fea2d9b7952388a0d57f03aac6"} err="failed to get container status \"fb81c863f0c41732169fd38bd6faa320173632fea2d9b7952388a0d57f03aac6\": rpc error: code = NotFound desc = could not find container \"fb81c863f0c41732169fd38bd6faa320173632fea2d9b7952388a0d57f03aac6\": container with ID starting with fb81c863f0c41732169fd38bd6faa320173632fea2d9b7952388a0d57f03aac6 not found: ID does not exist" Nov 22 12:06:19 crc kubenswrapper[4772]: I1122 12:06:19.091482 4772 scope.go:117] "RemoveContainer" containerID="4708cdc92b78b4ab08e9ba716d97f28b42d3aa1ae521caba9aa0d498cbb59369" Nov 22 12:06:19 crc kubenswrapper[4772]: E1122 12:06:19.095643 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4708cdc92b78b4ab08e9ba716d97f28b42d3aa1ae521caba9aa0d498cbb59369\": container with ID starting with 4708cdc92b78b4ab08e9ba716d97f28b42d3aa1ae521caba9aa0d498cbb59369 not found: ID does not exist" containerID="4708cdc92b78b4ab08e9ba716d97f28b42d3aa1ae521caba9aa0d498cbb59369" Nov 22 12:06:19 crc kubenswrapper[4772]: I1122 12:06:19.095684 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4708cdc92b78b4ab08e9ba716d97f28b42d3aa1ae521caba9aa0d498cbb59369"} err="failed to get container status \"4708cdc92b78b4ab08e9ba716d97f28b42d3aa1ae521caba9aa0d498cbb59369\": rpc error: code = NotFound desc = could not find container \"4708cdc92b78b4ab08e9ba716d97f28b42d3aa1ae521caba9aa0d498cbb59369\": container with ID starting with 4708cdc92b78b4ab08e9ba716d97f28b42d3aa1ae521caba9aa0d498cbb59369 not found: ID does not exist" Nov 22 12:06:19 crc kubenswrapper[4772]: I1122 12:06:19.767716 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca40b09c-6a21-4d64-afcb-3d510b26a60b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ca40b09c-6a21-4d64-afcb-3d510b26a60b" (UID: "ca40b09c-6a21-4d64-afcb-3d510b26a60b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 12:06:19 crc kubenswrapper[4772]: I1122 12:06:19.843865 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca40b09c-6a21-4d64-afcb-3d510b26a60b-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 12:06:19 crc kubenswrapper[4772]: I1122 12:06:19.893921 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b7kdq"] Nov 22 12:06:19 crc kubenswrapper[4772]: I1122 12:06:19.907091 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-b7kdq"] Nov 22 12:06:21 crc kubenswrapper[4772]: I1122 12:06:21.418816 4772 scope.go:117] "RemoveContainer" containerID="b7c017a2ad8566061573012df3338326de3180226814eea67d7d515c52483472" Nov 22 12:06:21 crc kubenswrapper[4772]: E1122 12:06:21.419578 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:06:21 crc kubenswrapper[4772]: I1122 12:06:21.438435 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca40b09c-6a21-4d64-afcb-3d510b26a60b" path="/var/lib/kubelet/pods/ca40b09c-6a21-4d64-afcb-3d510b26a60b/volumes" Nov 22 12:06:28 crc kubenswrapper[4772]: E1122 12:06:28.497590 4772 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.150:44088->38.102.83.150:44525: write tcp 38.102.83.150:44088->38.102.83.150:44525: write: broken pipe Nov 22 12:06:34 crc kubenswrapper[4772]: I1122 12:06:34.414458 4772 scope.go:117] "RemoveContainer" containerID="b7c017a2ad8566061573012df3338326de3180226814eea67d7d515c52483472" Nov 22 12:06:34 crc kubenswrapper[4772]: E1122 12:06:34.415248 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:06:45 crc kubenswrapper[4772]: I1122 12:06:45.414559 4772 scope.go:117] "RemoveContainer" containerID="b7c017a2ad8566061573012df3338326de3180226814eea67d7d515c52483472" Nov 22 12:06:45 crc kubenswrapper[4772]: E1122 12:06:45.415461 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:06:56 crc kubenswrapper[4772]: I1122 12:06:56.414766 4772 scope.go:117] "RemoveContainer" containerID="b7c017a2ad8566061573012df3338326de3180226814eea67d7d515c52483472" Nov 22 12:06:56 crc kubenswrapper[4772]: E1122 12:06:56.416523 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:07:08 crc kubenswrapper[4772]: I1122 12:07:08.413925 4772 scope.go:117] "RemoveContainer" containerID="b7c017a2ad8566061573012df3338326de3180226814eea67d7d515c52483472" Nov 22 12:07:08 crc kubenswrapper[4772]: E1122 12:07:08.414780 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:07:19 crc kubenswrapper[4772]: I1122 12:07:19.417951 4772 scope.go:117] "RemoveContainer" containerID="b7c017a2ad8566061573012df3338326de3180226814eea67d7d515c52483472" Nov 22 12:07:19 crc kubenswrapper[4772]: E1122 12:07:19.419026 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:07:21 crc kubenswrapper[4772]: I1122 12:07:21.929229 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-7k2t5"] Nov 22 12:07:21 crc kubenswrapper[4772]: E1122 12:07:21.929988 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca40b09c-6a21-4d64-afcb-3d510b26a60b" containerName="extract-utilities" Nov 22 12:07:21 crc kubenswrapper[4772]: I1122 12:07:21.930008 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca40b09c-6a21-4d64-afcb-3d510b26a60b" containerName="extract-utilities" Nov 22 12:07:21 crc kubenswrapper[4772]: E1122 12:07:21.930033 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca40b09c-6a21-4d64-afcb-3d510b26a60b" containerName="extract-content" Nov 22 12:07:21 crc kubenswrapper[4772]: I1122 12:07:21.930061 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca40b09c-6a21-4d64-afcb-3d510b26a60b" containerName="extract-content" Nov 22 12:07:21 crc kubenswrapper[4772]: E1122 12:07:21.930090 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca40b09c-6a21-4d64-afcb-3d510b26a60b" containerName="registry-server" Nov 22 12:07:21 crc kubenswrapper[4772]: I1122 12:07:21.930102 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca40b09c-6a21-4d64-afcb-3d510b26a60b" containerName="registry-server" Nov 22 12:07:21 crc kubenswrapper[4772]: I1122 12:07:21.930307 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca40b09c-6a21-4d64-afcb-3d510b26a60b" containerName="registry-server" Nov 22 12:07:21 crc kubenswrapper[4772]: I1122 12:07:21.930995 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-7k2t5" Nov 22 12:07:21 crc kubenswrapper[4772]: I1122 12:07:21.936527 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-7k2t5"] Nov 22 12:07:22 crc kubenswrapper[4772]: I1122 12:07:22.025172 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv8wb\" (UniqueName: \"kubernetes.io/projected/0ba87c6f-733e-4710-8b04-8267444835a7-kube-api-access-fv8wb\") pod \"barbican-db-create-7k2t5\" (UID: \"0ba87c6f-733e-4710-8b04-8267444835a7\") " pod="openstack/barbican-db-create-7k2t5" Nov 22 12:07:22 crc kubenswrapper[4772]: I1122 12:07:22.129086 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv8wb\" (UniqueName: \"kubernetes.io/projected/0ba87c6f-733e-4710-8b04-8267444835a7-kube-api-access-fv8wb\") pod \"barbican-db-create-7k2t5\" (UID: \"0ba87c6f-733e-4710-8b04-8267444835a7\") " pod="openstack/barbican-db-create-7k2t5" Nov 22 12:07:22 crc kubenswrapper[4772]: I1122 12:07:22.161878 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fv8wb\" (UniqueName: \"kubernetes.io/projected/0ba87c6f-733e-4710-8b04-8267444835a7-kube-api-access-fv8wb\") pod \"barbican-db-create-7k2t5\" (UID: \"0ba87c6f-733e-4710-8b04-8267444835a7\") " pod="openstack/barbican-db-create-7k2t5" Nov 22 12:07:22 crc kubenswrapper[4772]: I1122 12:07:22.261178 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-7k2t5" Nov 22 12:07:22 crc kubenswrapper[4772]: I1122 12:07:22.765812 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-7k2t5"] Nov 22 12:07:23 crc kubenswrapper[4772]: I1122 12:07:23.556243 4772 generic.go:334] "Generic (PLEG): container finished" podID="0ba87c6f-733e-4710-8b04-8267444835a7" containerID="918f194511c5a999506f485098d5bb59055b93a4deb243d856bab5f44a1bba2c" exitCode=0 Nov 22 12:07:23 crc kubenswrapper[4772]: I1122 12:07:23.556324 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-7k2t5" event={"ID":"0ba87c6f-733e-4710-8b04-8267444835a7","Type":"ContainerDied","Data":"918f194511c5a999506f485098d5bb59055b93a4deb243d856bab5f44a1bba2c"} Nov 22 12:07:23 crc kubenswrapper[4772]: I1122 12:07:23.556383 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-7k2t5" event={"ID":"0ba87c6f-733e-4710-8b04-8267444835a7","Type":"ContainerStarted","Data":"1747b0b14d4b5b2e52ea6a2ca415754e7384d0abf2662309d2d6c988b52d95fc"} Nov 22 12:07:24 crc kubenswrapper[4772]: I1122 12:07:24.907289 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-7k2t5" Nov 22 12:07:25 crc kubenswrapper[4772]: I1122 12:07:25.072265 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fv8wb\" (UniqueName: \"kubernetes.io/projected/0ba87c6f-733e-4710-8b04-8267444835a7-kube-api-access-fv8wb\") pod \"0ba87c6f-733e-4710-8b04-8267444835a7\" (UID: \"0ba87c6f-733e-4710-8b04-8267444835a7\") " Nov 22 12:07:25 crc kubenswrapper[4772]: I1122 12:07:25.078977 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ba87c6f-733e-4710-8b04-8267444835a7-kube-api-access-fv8wb" (OuterVolumeSpecName: "kube-api-access-fv8wb") pod "0ba87c6f-733e-4710-8b04-8267444835a7" (UID: "0ba87c6f-733e-4710-8b04-8267444835a7"). 
InnerVolumeSpecName "kube-api-access-fv8wb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:07:25 crc kubenswrapper[4772]: I1122 12:07:25.174847 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fv8wb\" (UniqueName: \"kubernetes.io/projected/0ba87c6f-733e-4710-8b04-8267444835a7-kube-api-access-fv8wb\") on node \"crc\" DevicePath \"\"" Nov 22 12:07:25 crc kubenswrapper[4772]: I1122 12:07:25.574256 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-7k2t5" event={"ID":"0ba87c6f-733e-4710-8b04-8267444835a7","Type":"ContainerDied","Data":"1747b0b14d4b5b2e52ea6a2ca415754e7384d0abf2662309d2d6c988b52d95fc"} Nov 22 12:07:25 crc kubenswrapper[4772]: I1122 12:07:25.574828 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1747b0b14d4b5b2e52ea6a2ca415754e7384d0abf2662309d2d6c988b52d95fc" Nov 22 12:07:25 crc kubenswrapper[4772]: I1122 12:07:25.574329 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-7k2t5" Nov 22 12:07:32 crc kubenswrapper[4772]: I1122 12:07:32.057607 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-06a5-account-create-pvmft"] Nov 22 12:07:32 crc kubenswrapper[4772]: E1122 12:07:32.058544 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ba87c6f-733e-4710-8b04-8267444835a7" containerName="mariadb-database-create" Nov 22 12:07:32 crc kubenswrapper[4772]: I1122 12:07:32.058561 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ba87c6f-733e-4710-8b04-8267444835a7" containerName="mariadb-database-create" Nov 22 12:07:32 crc kubenswrapper[4772]: I1122 12:07:32.058768 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ba87c6f-733e-4710-8b04-8267444835a7" containerName="mariadb-database-create" Nov 22 12:07:32 crc kubenswrapper[4772]: I1122 12:07:32.059434 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-06a5-account-create-pvmft" Nov 22 12:07:32 crc kubenswrapper[4772]: I1122 12:07:32.062810 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Nov 22 12:07:32 crc kubenswrapper[4772]: I1122 12:07:32.084013 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-06a5-account-create-pvmft"] Nov 22 12:07:32 crc kubenswrapper[4772]: I1122 12:07:32.218514 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdspn\" (UniqueName: \"kubernetes.io/projected/e18b01d3-2ec1-46a0-b5ee-bc9addebed19-kube-api-access-gdspn\") pod \"barbican-06a5-account-create-pvmft\" (UID: \"e18b01d3-2ec1-46a0-b5ee-bc9addebed19\") " pod="openstack/barbican-06a5-account-create-pvmft" Nov 22 12:07:32 crc kubenswrapper[4772]: I1122 12:07:32.320744 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdspn\" (UniqueName: \"kubernetes.io/projected/e18b01d3-2ec1-46a0-b5ee-bc9addebed19-kube-api-access-gdspn\") pod \"barbican-06a5-account-create-pvmft\" (UID: \"e18b01d3-2ec1-46a0-b5ee-bc9addebed19\") " pod="openstack/barbican-06a5-account-create-pvmft" Nov 22 12:07:32 crc kubenswrapper[4772]: I1122 12:07:32.355781 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdspn\" (UniqueName: \"kubernetes.io/projected/e18b01d3-2ec1-46a0-b5ee-bc9addebed19-kube-api-access-gdspn\") pod \"barbican-06a5-account-create-pvmft\" (UID: \"e18b01d3-2ec1-46a0-b5ee-bc9addebed19\") " pod="openstack/barbican-06a5-account-create-pvmft" Nov 22 12:07:32 crc kubenswrapper[4772]: I1122 12:07:32.385944 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-06a5-account-create-pvmft" Nov 22 12:07:32 crc kubenswrapper[4772]: W1122 12:07:32.866673 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode18b01d3_2ec1_46a0_b5ee_bc9addebed19.slice/crio-5c379f1924bff3a2ca9187867400701ccc3796c855be99a3e02a588a9789c304 WatchSource:0}: Error finding container 5c379f1924bff3a2ca9187867400701ccc3796c855be99a3e02a588a9789c304: Status 404 returned error can't find the container with id 5c379f1924bff3a2ca9187867400701ccc3796c855be99a3e02a588a9789c304 Nov 22 12:07:32 crc kubenswrapper[4772]: I1122 12:07:32.868099 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-06a5-account-create-pvmft"] Nov 22 12:07:33 crc kubenswrapper[4772]: I1122 12:07:33.653385 4772 generic.go:334] "Generic (PLEG): container finished" podID="e18b01d3-2ec1-46a0-b5ee-bc9addebed19" containerID="81e4eec4c2d9c6f6b41b8631a4e80d7dc2b5afbdc4a39c0158d19dbffdeb9568" exitCode=0 Nov 22 12:07:33 crc kubenswrapper[4772]: I1122 12:07:33.653465 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-06a5-account-create-pvmft" event={"ID":"e18b01d3-2ec1-46a0-b5ee-bc9addebed19","Type":"ContainerDied","Data":"81e4eec4c2d9c6f6b41b8631a4e80d7dc2b5afbdc4a39c0158d19dbffdeb9568"} Nov 22 12:07:33 crc kubenswrapper[4772]: I1122 12:07:33.653693 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-06a5-account-create-pvmft" event={"ID":"e18b01d3-2ec1-46a0-b5ee-bc9addebed19","Type":"ContainerStarted","Data":"5c379f1924bff3a2ca9187867400701ccc3796c855be99a3e02a588a9789c304"} Nov 22 12:07:34 crc kubenswrapper[4772]: I1122 12:07:34.414447 4772 scope.go:117] "RemoveContainer" containerID="b7c017a2ad8566061573012df3338326de3180226814eea67d7d515c52483472" Nov 22 12:07:34 crc kubenswrapper[4772]: E1122 12:07:34.415082 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:07:35 crc kubenswrapper[4772]: I1122 12:07:35.022586 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-06a5-account-create-pvmft" Nov 22 12:07:35 crc kubenswrapper[4772]: I1122 12:07:35.174017 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdspn\" (UniqueName: \"kubernetes.io/projected/e18b01d3-2ec1-46a0-b5ee-bc9addebed19-kube-api-access-gdspn\") pod \"e18b01d3-2ec1-46a0-b5ee-bc9addebed19\" (UID: \"e18b01d3-2ec1-46a0-b5ee-bc9addebed19\") " Nov 22 12:07:35 crc kubenswrapper[4772]: I1122 12:07:35.180779 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e18b01d3-2ec1-46a0-b5ee-bc9addebed19-kube-api-access-gdspn" (OuterVolumeSpecName: "kube-api-access-gdspn") pod "e18b01d3-2ec1-46a0-b5ee-bc9addebed19" (UID: "e18b01d3-2ec1-46a0-b5ee-bc9addebed19"). InnerVolumeSpecName "kube-api-access-gdspn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:07:35 crc kubenswrapper[4772]: I1122 12:07:35.276017 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gdspn\" (UniqueName: \"kubernetes.io/projected/e18b01d3-2ec1-46a0-b5ee-bc9addebed19-kube-api-access-gdspn\") on node \"crc\" DevicePath \"\"" Nov 22 12:07:35 crc kubenswrapper[4772]: I1122 12:07:35.672742 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-06a5-account-create-pvmft" event={"ID":"e18b01d3-2ec1-46a0-b5ee-bc9addebed19","Type":"ContainerDied","Data":"5c379f1924bff3a2ca9187867400701ccc3796c855be99a3e02a588a9789c304"} Nov 22 12:07:35 crc kubenswrapper[4772]: I1122 12:07:35.672799 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c379f1924bff3a2ca9187867400701ccc3796c855be99a3e02a588a9789c304" Nov 22 12:07:35 crc kubenswrapper[4772]: I1122 12:07:35.672812 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-06a5-account-create-pvmft" Nov 22 12:07:37 crc kubenswrapper[4772]: I1122 12:07:37.267762 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-6znrw"] Nov 22 12:07:37 crc kubenswrapper[4772]: E1122 12:07:37.268610 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e18b01d3-2ec1-46a0-b5ee-bc9addebed19" containerName="mariadb-account-create" Nov 22 12:07:37 crc kubenswrapper[4772]: I1122 12:07:37.268627 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="e18b01d3-2ec1-46a0-b5ee-bc9addebed19" containerName="mariadb-account-create" Nov 22 12:07:37 crc kubenswrapper[4772]: I1122 12:07:37.268848 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="e18b01d3-2ec1-46a0-b5ee-bc9addebed19" containerName="mariadb-account-create" Nov 22 12:07:37 crc kubenswrapper[4772]: I1122 12:07:37.269592 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-6znrw" Nov 22 12:07:37 crc kubenswrapper[4772]: I1122 12:07:37.271642 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-q7j5x" Nov 22 12:07:37 crc kubenswrapper[4772]: I1122 12:07:37.271928 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 22 12:07:37 crc kubenswrapper[4772]: I1122 12:07:37.285258 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-6znrw"] Nov 22 12:07:37 crc kubenswrapper[4772]: I1122 12:07:37.422173 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22e758ed-9d0e-4359-bc0b-902316c12923-combined-ca-bundle\") pod \"barbican-db-sync-6znrw\" (UID: \"22e758ed-9d0e-4359-bc0b-902316c12923\") " pod="openstack/barbican-db-sync-6znrw" Nov 22 12:07:37 crc kubenswrapper[4772]: I1122 12:07:37.422353 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lj6hk\" (UniqueName: \"kubernetes.io/projected/22e758ed-9d0e-4359-bc0b-902316c12923-kube-api-access-lj6hk\") pod \"barbican-db-sync-6znrw\" (UID: \"22e758ed-9d0e-4359-bc0b-902316c12923\") " pod="openstack/barbican-db-sync-6znrw" Nov 22 12:07:37 crc kubenswrapper[4772]: I1122 12:07:37.422393 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/22e758ed-9d0e-4359-bc0b-902316c12923-db-sync-config-data\") pod \"barbican-db-sync-6znrw\" (UID: \"22e758ed-9d0e-4359-bc0b-902316c12923\") " pod="openstack/barbican-db-sync-6znrw" Nov 22 12:07:37 crc kubenswrapper[4772]: I1122 12:07:37.524284 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22e758ed-9d0e-4359-bc0b-902316c12923-combined-ca-bundle\") pod \"barbican-db-sync-6znrw\" (UID: \"22e758ed-9d0e-4359-bc0b-902316c12923\") " pod="openstack/barbican-db-sync-6znrw" Nov 22 12:07:37 crc kubenswrapper[4772]: I1122 12:07:37.524477 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lj6hk\" (UniqueName: \"kubernetes.io/projected/22e758ed-9d0e-4359-bc0b-902316c12923-kube-api-access-lj6hk\") pod \"barbican-db-sync-6znrw\" (UID: \"22e758ed-9d0e-4359-bc0b-902316c12923\") " pod="openstack/barbican-db-sync-6znrw" Nov 22 12:07:37 crc kubenswrapper[4772]: I1122 12:07:37.524499 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/22e758ed-9d0e-4359-bc0b-902316c12923-db-sync-config-data\") pod \"barbican-db-sync-6znrw\" (UID: \"22e758ed-9d0e-4359-bc0b-902316c12923\") " pod="openstack/barbican-db-sync-6znrw" Nov 22 12:07:37 crc kubenswrapper[4772]: I1122 12:07:37.529190 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/22e758ed-9d0e-4359-bc0b-902316c12923-db-sync-config-data\") pod \"barbican-db-sync-6znrw\" (UID: \"22e758ed-9d0e-4359-bc0b-902316c12923\") " pod="openstack/barbican-db-sync-6znrw" Nov 22 12:07:37 crc kubenswrapper[4772]: I1122 12:07:37.542652 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/22e758ed-9d0e-4359-bc0b-902316c12923-combined-ca-bundle\") pod \"barbican-db-sync-6znrw\" (UID: \"22e758ed-9d0e-4359-bc0b-902316c12923\") " pod="openstack/barbican-db-sync-6znrw" Nov 22 12:07:37 crc kubenswrapper[4772]: I1122 12:07:37.547240 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lj6hk\" (UniqueName: \"kubernetes.io/projected/22e758ed-9d0e-4359-bc0b-902316c12923-kube-api-access-lj6hk\") pod \"barbican-db-sync-6znrw\" (UID: \"22e758ed-9d0e-4359-bc0b-902316c12923\") " pod="openstack/barbican-db-sync-6znrw" Nov 22 12:07:37 crc kubenswrapper[4772]: I1122 12:07:37.588032 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-6znrw" Nov 22 12:07:38 crc kubenswrapper[4772]: I1122 12:07:38.016188 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-6znrw"] Nov 22 12:07:38 crc kubenswrapper[4772]: I1122 12:07:38.714918 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-6znrw" event={"ID":"22e758ed-9d0e-4359-bc0b-902316c12923","Type":"ContainerStarted","Data":"f98700b08e229bfd45b36fb192cbf1cde9e4c1e819782f90ea46f5cf72735b28"} Nov 22 12:07:38 crc kubenswrapper[4772]: I1122 12:07:38.715243 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-6znrw" event={"ID":"22e758ed-9d0e-4359-bc0b-902316c12923","Type":"ContainerStarted","Data":"e177e567dc5121acf196207f97881c7b846e2dd4f05d2e5899dd15332e4d32ce"} Nov 22 12:07:38 crc kubenswrapper[4772]: I1122 12:07:38.735040 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-6znrw" podStartSLOduration=1.735021495 podStartE2EDuration="1.735021495s" podCreationTimestamp="2025-11-22 12:07:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:07:38.728532843 +0000 UTC m=+5378.967977327" watchObservedRunningTime="2025-11-22 12:07:38.735021495 +0000 UTC m=+5378.974465989" Nov 22 12:07:39 crc kubenswrapper[4772]: I1122 12:07:39.724978 4772 generic.go:334] "Generic (PLEG): container finished" podID="22e758ed-9d0e-4359-bc0b-902316c12923" containerID="f98700b08e229bfd45b36fb192cbf1cde9e4c1e819782f90ea46f5cf72735b28" exitCode=0 Nov 22 12:07:39 crc kubenswrapper[4772]: I1122 12:07:39.725021 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-6znrw" event={"ID":"22e758ed-9d0e-4359-bc0b-902316c12923","Type":"ContainerDied","Data":"f98700b08e229bfd45b36fb192cbf1cde9e4c1e819782f90ea46f5cf72735b28"} Nov 22 12:07:41 crc kubenswrapper[4772]: I1122 12:07:41.166997 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-6znrw" Nov 22 12:07:41 crc kubenswrapper[4772]: I1122 12:07:41.189075 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22e758ed-9d0e-4359-bc0b-902316c12923-combined-ca-bundle\") pod \"22e758ed-9d0e-4359-bc0b-902316c12923\" (UID: \"22e758ed-9d0e-4359-bc0b-902316c12923\") " Nov 22 12:07:41 crc kubenswrapper[4772]: I1122 12:07:41.189371 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lj6hk\" (UniqueName: \"kubernetes.io/projected/22e758ed-9d0e-4359-bc0b-902316c12923-kube-api-access-lj6hk\") pod \"22e758ed-9d0e-4359-bc0b-902316c12923\" (UID: \"22e758ed-9d0e-4359-bc0b-902316c12923\") " Nov 22 12:07:41 crc kubenswrapper[4772]: I1122 12:07:41.189567 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/22e758ed-9d0e-4359-bc0b-902316c12923-db-sync-config-data\") pod \"22e758ed-9d0e-4359-bc0b-902316c12923\" (UID: \"22e758ed-9d0e-4359-bc0b-902316c12923\") " Nov 22 12:07:41 crc kubenswrapper[4772]: I1122 12:07:41.194838 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22e758ed-9d0e-4359-bc0b-902316c12923-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "22e758ed-9d0e-4359-bc0b-902316c12923" (UID: "22e758ed-9d0e-4359-bc0b-902316c12923"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:07:41 crc kubenswrapper[4772]: I1122 12:07:41.195597 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22e758ed-9d0e-4359-bc0b-902316c12923-kube-api-access-lj6hk" (OuterVolumeSpecName: "kube-api-access-lj6hk") pod "22e758ed-9d0e-4359-bc0b-902316c12923" (UID: "22e758ed-9d0e-4359-bc0b-902316c12923"). InnerVolumeSpecName "kube-api-access-lj6hk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:07:41 crc kubenswrapper[4772]: I1122 12:07:41.217475 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22e758ed-9d0e-4359-bc0b-902316c12923-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "22e758ed-9d0e-4359-bc0b-902316c12923" (UID: "22e758ed-9d0e-4359-bc0b-902316c12923"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:07:41 crc kubenswrapper[4772]: I1122 12:07:41.292411 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lj6hk\" (UniqueName: \"kubernetes.io/projected/22e758ed-9d0e-4359-bc0b-902316c12923-kube-api-access-lj6hk\") on node \"crc\" DevicePath \"\"" Nov 22 12:07:41 crc kubenswrapper[4772]: I1122 12:07:41.292450 4772 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/22e758ed-9d0e-4359-bc0b-902316c12923-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 12:07:41 crc kubenswrapper[4772]: I1122 12:07:41.292459 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22e758ed-9d0e-4359-bc0b-902316c12923-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 12:07:41 crc kubenswrapper[4772]: I1122 12:07:41.745481 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-6znrw" event={"ID":"22e758ed-9d0e-4359-bc0b-902316c12923","Type":"ContainerDied","Data":"e177e567dc5121acf196207f97881c7b846e2dd4f05d2e5899dd15332e4d32ce"} Nov 22 12:07:41 crc kubenswrapper[4772]: I1122 12:07:41.746247 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e177e567dc5121acf196207f97881c7b846e2dd4f05d2e5899dd15332e4d32ce" Nov 22 12:07:41 crc kubenswrapper[4772]: I1122 12:07:41.745805 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-6znrw" Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.002506 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-59dc6657f-22jc6"] Nov 22 12:07:42 crc kubenswrapper[4772]: E1122 12:07:42.003713 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22e758ed-9d0e-4359-bc0b-902316c12923" containerName="barbican-db-sync" Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.003813 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="22e758ed-9d0e-4359-bc0b-902316c12923" containerName="barbican-db-sync" Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.004162 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="22e758ed-9d0e-4359-bc0b-902316c12923" containerName="barbican-db-sync" Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.005425 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-59dc6657f-22jc6" Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.009945 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.010596 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.010789 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-q7j5x" Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.019203 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-59dc6657f-22jc6"] Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.090366 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6fbcf576c7-98xt4"] Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.091974 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6fbcf576c7-98xt4" Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.106465 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/edbde106-63a1-4a4b-91af-bb723902586e-config-data-custom\") pod \"barbican-worker-59dc6657f-22jc6\" (UID: \"edbde106-63a1-4a4b-91af-bb723902586e\") " pod="openstack/barbican-worker-59dc6657f-22jc6" Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.106993 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edbde106-63a1-4a4b-91af-bb723902586e-combined-ca-bundle\") pod \"barbican-worker-59dc6657f-22jc6\" (UID: \"edbde106-63a1-4a4b-91af-bb723902586e\") " pod="openstack/barbican-worker-59dc6657f-22jc6" Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.107187 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edbde106-63a1-4a4b-91af-bb723902586e-config-data\") pod \"barbican-worker-59dc6657f-22jc6\" (UID: \"edbde106-63a1-4a4b-91af-bb723902586e\") " pod="openstack/barbican-worker-59dc6657f-22jc6" Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.107349 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7csx\" (UniqueName: \"kubernetes.io/projected/edbde106-63a1-4a4b-91af-bb723902586e-kube-api-access-g7csx\") pod \"barbican-worker-59dc6657f-22jc6\" (UID: \"edbde106-63a1-4a4b-91af-bb723902586e\") " pod="openstack/barbican-worker-59dc6657f-22jc6" Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.107518 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edbde106-63a1-4a4b-91af-bb723902586e-logs\") pod \"barbican-worker-59dc6657f-22jc6\" (UID: \"edbde106-63a1-4a4b-91af-bb723902586e\") " pod="openstack/barbican-worker-59dc6657f-22jc6" Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.107942 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6fbcf576c7-98xt4"] Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.133251 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-78cbb88bb-tbns2"] Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.135252 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-78cbb88bb-tbns2" Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.141585 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.169166 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-78cbb88bb-tbns2"] Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.212025 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8h6c\" (UniqueName: \"kubernetes.io/projected/3f82a0c0-0894-4267-8031-cb302fee5976-kube-api-access-c8h6c\") pod \"dnsmasq-dns-6fbcf576c7-98xt4\" (UID: \"3f82a0c0-0894-4267-8031-cb302fee5976\") " pod="openstack/dnsmasq-dns-6fbcf576c7-98xt4" Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.212113 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/edbde106-63a1-4a4b-91af-bb723902586e-config-data-custom\") pod \"barbican-worker-59dc6657f-22jc6\" (UID: \"edbde106-63a1-4a4b-91af-bb723902586e\") " pod="openstack/barbican-worker-59dc6657f-22jc6" Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.212141 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f82a0c0-0894-4267-8031-cb302fee5976-dns-svc\") pod \"dnsmasq-dns-6fbcf576c7-98xt4\" (UID: \"3f82a0c0-0894-4267-8031-cb302fee5976\") " pod="openstack/dnsmasq-dns-6fbcf576c7-98xt4" Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.212159 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edbde106-63a1-4a4b-91af-bb723902586e-combined-ca-bundle\") pod \"barbican-worker-59dc6657f-22jc6\" (UID: \"edbde106-63a1-4a4b-91af-bb723902586e\") " pod="openstack/barbican-worker-59dc6657f-22jc6" Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.212190 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edbde106-63a1-4a4b-91af-bb723902586e-config-data\") pod \"barbican-worker-59dc6657f-22jc6\" (UID: \"edbde106-63a1-4a4b-91af-bb723902586e\") " pod="openstack/barbican-worker-59dc6657f-22jc6" Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.212227 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3f82a0c0-0894-4267-8031-cb302fee5976-ovsdbserver-nb\") pod \"dnsmasq-dns-6fbcf576c7-98xt4\" (UID: \"3f82a0c0-0894-4267-8031-cb302fee5976\") " pod="openstack/dnsmasq-dns-6fbcf576c7-98xt4" Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.212245 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f82a0c0-0894-4267-8031-cb302fee5976-config\") pod \"dnsmasq-dns-6fbcf576c7-98xt4\" (UID: \"3f82a0c0-0894-4267-8031-cb302fee5976\") " pod="openstack/dnsmasq-dns-6fbcf576c7-98xt4" Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.212280 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7csx\" (UniqueName: \"kubernetes.io/projected/edbde106-63a1-4a4b-91af-bb723902586e-kube-api-access-g7csx\") pod 
\"barbican-worker-59dc6657f-22jc6\" (UID: \"edbde106-63a1-4a4b-91af-bb723902586e\") " pod="openstack/barbican-worker-59dc6657f-22jc6" Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.212332 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edbde106-63a1-4a4b-91af-bb723902586e-logs\") pod \"barbican-worker-59dc6657f-22jc6\" (UID: \"edbde106-63a1-4a4b-91af-bb723902586e\") " pod="openstack/barbican-worker-59dc6657f-22jc6" Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.212351 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3f82a0c0-0894-4267-8031-cb302fee5976-ovsdbserver-sb\") pod \"dnsmasq-dns-6fbcf576c7-98xt4\" (UID: \"3f82a0c0-0894-4267-8031-cb302fee5976\") " pod="openstack/dnsmasq-dns-6fbcf576c7-98xt4" Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.214236 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edbde106-63a1-4a4b-91af-bb723902586e-logs\") pod \"barbican-worker-59dc6657f-22jc6\" (UID: \"edbde106-63a1-4a4b-91af-bb723902586e\") " pod="openstack/barbican-worker-59dc6657f-22jc6" Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.217740 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-58fcc7f846-vt29r"] Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.217863 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/edbde106-63a1-4a4b-91af-bb723902586e-config-data-custom\") pod \"barbican-worker-59dc6657f-22jc6\" (UID: \"edbde106-63a1-4a4b-91af-bb723902586e\") " pod="openstack/barbican-worker-59dc6657f-22jc6" Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.218997 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edbde106-63a1-4a4b-91af-bb723902586e-combined-ca-bundle\") pod \"barbican-worker-59dc6657f-22jc6\" (UID: \"edbde106-63a1-4a4b-91af-bb723902586e\") " pod="openstack/barbican-worker-59dc6657f-22jc6" Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.220005 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-58fcc7f846-vt29r" Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.228461 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.247158 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edbde106-63a1-4a4b-91af-bb723902586e-config-data\") pod \"barbican-worker-59dc6657f-22jc6\" (UID: \"edbde106-63a1-4a4b-91af-bb723902586e\") " pod="openstack/barbican-worker-59dc6657f-22jc6" Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.249730 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-58fcc7f846-vt29r"] Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.255575 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7csx\" (UniqueName: \"kubernetes.io/projected/edbde106-63a1-4a4b-91af-bb723902586e-kube-api-access-g7csx\") pod \"barbican-worker-59dc6657f-22jc6\" (UID: \"edbde106-63a1-4a4b-91af-bb723902586e\") " pod="openstack/barbican-worker-59dc6657f-22jc6" Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.313647 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5b466e0-1ece-43ad-8898-7d98ebc952e4-combined-ca-bundle\") pod \"barbican-keystone-listener-78cbb88bb-tbns2\" (UID: \"a5b466e0-1ece-43ad-8898-7d98ebc952e4\") " pod="openstack/barbican-keystone-listener-78cbb88bb-tbns2" Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.313698 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a5b466e0-1ece-43ad-8898-7d98ebc952e4-config-data-custom\") pod \"barbican-keystone-listener-78cbb88bb-tbns2\" (UID: \"a5b466e0-1ece-43ad-8898-7d98ebc952e4\") " pod="openstack/barbican-keystone-listener-78cbb88bb-tbns2" Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.313772 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3f82a0c0-0894-4267-8031-cb302fee5976-ovsdbserver-sb\") pod \"dnsmasq-dns-6fbcf576c7-98xt4\" (UID: \"3f82a0c0-0894-4267-8031-cb302fee5976\") " pod="openstack/dnsmasq-dns-6fbcf576c7-98xt4" Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.313806 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8h6c\" (UniqueName: \"kubernetes.io/projected/3f82a0c0-0894-4267-8031-cb302fee5976-kube-api-access-c8h6c\") pod \"dnsmasq-dns-6fbcf576c7-98xt4\" (UID: \"3f82a0c0-0894-4267-8031-cb302fee5976\") " pod="openstack/dnsmasq-dns-6fbcf576c7-98xt4" Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.313847 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkbv4\" (UniqueName: \"kubernetes.io/projected/a5b466e0-1ece-43ad-8898-7d98ebc952e4-kube-api-access-gkbv4\") pod \"barbican-keystone-listener-78cbb88bb-tbns2\" (UID: \"a5b466e0-1ece-43ad-8898-7d98ebc952e4\") " pod="openstack/barbican-keystone-listener-78cbb88bb-tbns2" Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.313874 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f82a0c0-0894-4267-8031-cb302fee5976-dns-svc\") pod 
\"dnsmasq-dns-6fbcf576c7-98xt4\" (UID: \"3f82a0c0-0894-4267-8031-cb302fee5976\") " pod="openstack/dnsmasq-dns-6fbcf576c7-98xt4" Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.313924 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3f82a0c0-0894-4267-8031-cb302fee5976-ovsdbserver-nb\") pod \"dnsmasq-dns-6fbcf576c7-98xt4\" (UID: \"3f82a0c0-0894-4267-8031-cb302fee5976\") " pod="openstack/dnsmasq-dns-6fbcf576c7-98xt4" Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.313982 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f82a0c0-0894-4267-8031-cb302fee5976-config\") pod \"dnsmasq-dns-6fbcf576c7-98xt4\" (UID: \"3f82a0c0-0894-4267-8031-cb302fee5976\") " pod="openstack/dnsmasq-dns-6fbcf576c7-98xt4" Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.314014 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5b466e0-1ece-43ad-8898-7d98ebc952e4-config-data\") pod \"barbican-keystone-listener-78cbb88bb-tbns2\" (UID: \"a5b466e0-1ece-43ad-8898-7d98ebc952e4\") " pod="openstack/barbican-keystone-listener-78cbb88bb-tbns2" Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.314040 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a5b466e0-1ece-43ad-8898-7d98ebc952e4-logs\") pod \"barbican-keystone-listener-78cbb88bb-tbns2\" (UID: \"a5b466e0-1ece-43ad-8898-7d98ebc952e4\") " pod="openstack/barbican-keystone-listener-78cbb88bb-tbns2" Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.315082 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3f82a0c0-0894-4267-8031-cb302fee5976-ovsdbserver-sb\") pod \"dnsmasq-dns-6fbcf576c7-98xt4\" (UID: \"3f82a0c0-0894-4267-8031-cb302fee5976\") " pod="openstack/dnsmasq-dns-6fbcf576c7-98xt4" Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.315160 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3f82a0c0-0894-4267-8031-cb302fee5976-ovsdbserver-nb\") pod \"dnsmasq-dns-6fbcf576c7-98xt4\" (UID: \"3f82a0c0-0894-4267-8031-cb302fee5976\") " pod="openstack/dnsmasq-dns-6fbcf576c7-98xt4" Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.315160 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f82a0c0-0894-4267-8031-cb302fee5976-config\") pod \"dnsmasq-dns-6fbcf576c7-98xt4\" (UID: \"3f82a0c0-0894-4267-8031-cb302fee5976\") " pod="openstack/dnsmasq-dns-6fbcf576c7-98xt4" Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.315324 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f82a0c0-0894-4267-8031-cb302fee5976-dns-svc\") pod \"dnsmasq-dns-6fbcf576c7-98xt4\" (UID: \"3f82a0c0-0894-4267-8031-cb302fee5976\") " pod="openstack/dnsmasq-dns-6fbcf576c7-98xt4" Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.332817 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8h6c\" (UniqueName: \"kubernetes.io/projected/3f82a0c0-0894-4267-8031-cb302fee5976-kube-api-access-c8h6c\") pod \"dnsmasq-dns-6fbcf576c7-98xt4\" (UID: 
\"3f82a0c0-0894-4267-8031-cb302fee5976\") " pod="openstack/dnsmasq-dns-6fbcf576c7-98xt4" Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.338676 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-59dc6657f-22jc6" Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.412758 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6fbcf576c7-98xt4" Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.415028 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5b466e0-1ece-43ad-8898-7d98ebc952e4-combined-ca-bundle\") pod \"barbican-keystone-listener-78cbb88bb-tbns2\" (UID: \"a5b466e0-1ece-43ad-8898-7d98ebc952e4\") " pod="openstack/barbican-keystone-listener-78cbb88bb-tbns2" Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.415087 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5d30a1bc-779c-4c29-aa6c-cc69243f7a32-config-data-custom\") pod \"barbican-api-58fcc7f846-vt29r\" (UID: \"5d30a1bc-779c-4c29-aa6c-cc69243f7a32\") " pod="openstack/barbican-api-58fcc7f846-vt29r" Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.415109 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a5b466e0-1ece-43ad-8898-7d98ebc952e4-config-data-custom\") pod \"barbican-keystone-listener-78cbb88bb-tbns2\" (UID: \"a5b466e0-1ece-43ad-8898-7d98ebc952e4\") " pod="openstack/barbican-keystone-listener-78cbb88bb-tbns2" Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.415144 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8xzg\" (UniqueName: \"kubernetes.io/projected/5d30a1bc-779c-4c29-aa6c-cc69243f7a32-kube-api-access-s8xzg\") pod \"barbican-api-58fcc7f846-vt29r\" (UID: \"5d30a1bc-779c-4c29-aa6c-cc69243f7a32\") " pod="openstack/barbican-api-58fcc7f846-vt29r" Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.415174 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d30a1bc-779c-4c29-aa6c-cc69243f7a32-combined-ca-bundle\") pod \"barbican-api-58fcc7f846-vt29r\" (UID: \"5d30a1bc-779c-4c29-aa6c-cc69243f7a32\") " pod="openstack/barbican-api-58fcc7f846-vt29r" Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.415336 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkbv4\" (UniqueName: \"kubernetes.io/projected/a5b466e0-1ece-43ad-8898-7d98ebc952e4-kube-api-access-gkbv4\") pod \"barbican-keystone-listener-78cbb88bb-tbns2\" (UID: \"a5b466e0-1ece-43ad-8898-7d98ebc952e4\") " pod="openstack/barbican-keystone-listener-78cbb88bb-tbns2" Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.415573 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d30a1bc-779c-4c29-aa6c-cc69243f7a32-config-data\") pod \"barbican-api-58fcc7f846-vt29r\" (UID: \"5d30a1bc-779c-4c29-aa6c-cc69243f7a32\") " pod="openstack/barbican-api-58fcc7f846-vt29r" Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.415659 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d30a1bc-779c-4c29-aa6c-cc69243f7a32-logs\") pod \"barbican-api-58fcc7f846-vt29r\" (UID: \"5d30a1bc-779c-4c29-aa6c-cc69243f7a32\") " pod="openstack/barbican-api-58fcc7f846-vt29r" Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.415812 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5b466e0-1ece-43ad-8898-7d98ebc952e4-config-data\") pod \"barbican-keystone-listener-78cbb88bb-tbns2\" (UID: \"a5b466e0-1ece-43ad-8898-7d98ebc952e4\") " pod="openstack/barbican-keystone-listener-78cbb88bb-tbns2" Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.415869 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a5b466e0-1ece-43ad-8898-7d98ebc952e4-logs\") pod \"barbican-keystone-listener-78cbb88bb-tbns2\" (UID: \"a5b466e0-1ece-43ad-8898-7d98ebc952e4\") " pod="openstack/barbican-keystone-listener-78cbb88bb-tbns2" Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.417143 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a5b466e0-1ece-43ad-8898-7d98ebc952e4-logs\") pod \"barbican-keystone-listener-78cbb88bb-tbns2\" (UID: \"a5b466e0-1ece-43ad-8898-7d98ebc952e4\") " pod="openstack/barbican-keystone-listener-78cbb88bb-tbns2" Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.420005 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a5b466e0-1ece-43ad-8898-7d98ebc952e4-config-data-custom\") pod \"barbican-keystone-listener-78cbb88bb-tbns2\" (UID: \"a5b466e0-1ece-43ad-8898-7d98ebc952e4\") " pod="openstack/barbican-keystone-listener-78cbb88bb-tbns2" Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.424831 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5b466e0-1ece-43ad-8898-7d98ebc952e4-combined-ca-bundle\") pod \"barbican-keystone-listener-78cbb88bb-tbns2\" (UID: \"a5b466e0-1ece-43ad-8898-7d98ebc952e4\") " pod="openstack/barbican-keystone-listener-78cbb88bb-tbns2" Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.432295 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5b466e0-1ece-43ad-8898-7d98ebc952e4-config-data\") pod \"barbican-keystone-listener-78cbb88bb-tbns2\" (UID: \"a5b466e0-1ece-43ad-8898-7d98ebc952e4\") " pod="openstack/barbican-keystone-listener-78cbb88bb-tbns2" Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.433825 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkbv4\" (UniqueName: \"kubernetes.io/projected/a5b466e0-1ece-43ad-8898-7d98ebc952e4-kube-api-access-gkbv4\") pod \"barbican-keystone-listener-78cbb88bb-tbns2\" (UID: \"a5b466e0-1ece-43ad-8898-7d98ebc952e4\") " pod="openstack/barbican-keystone-listener-78cbb88bb-tbns2" Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.465723 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-78cbb88bb-tbns2" Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.522643 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d30a1bc-779c-4c29-aa6c-cc69243f7a32-logs\") pod \"barbican-api-58fcc7f846-vt29r\" (UID: \"5d30a1bc-779c-4c29-aa6c-cc69243f7a32\") " pod="openstack/barbican-api-58fcc7f846-vt29r" Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.523246 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5d30a1bc-779c-4c29-aa6c-cc69243f7a32-config-data-custom\") pod \"barbican-api-58fcc7f846-vt29r\" (UID: \"5d30a1bc-779c-4c29-aa6c-cc69243f7a32\") " pod="openstack/barbican-api-58fcc7f846-vt29r" Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.523389 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8xzg\" (UniqueName: \"kubernetes.io/projected/5d30a1bc-779c-4c29-aa6c-cc69243f7a32-kube-api-access-s8xzg\") pod \"barbican-api-58fcc7f846-vt29r\" (UID: \"5d30a1bc-779c-4c29-aa6c-cc69243f7a32\") " pod="openstack/barbican-api-58fcc7f846-vt29r" Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.523441 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d30a1bc-779c-4c29-aa6c-cc69243f7a32-combined-ca-bundle\") pod \"barbican-api-58fcc7f846-vt29r\" (UID: \"5d30a1bc-779c-4c29-aa6c-cc69243f7a32\") " pod="openstack/barbican-api-58fcc7f846-vt29r" Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.523583 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d30a1bc-779c-4c29-aa6c-cc69243f7a32-config-data\") pod \"barbican-api-58fcc7f846-vt29r\" (UID: \"5d30a1bc-779c-4c29-aa6c-cc69243f7a32\") " pod="openstack/barbican-api-58fcc7f846-vt29r" Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.525689 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d30a1bc-779c-4c29-aa6c-cc69243f7a32-logs\") pod \"barbican-api-58fcc7f846-vt29r\" (UID: \"5d30a1bc-779c-4c29-aa6c-cc69243f7a32\") " pod="openstack/barbican-api-58fcc7f846-vt29r" Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.537473 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d30a1bc-779c-4c29-aa6c-cc69243f7a32-config-data\") pod \"barbican-api-58fcc7f846-vt29r\" (UID: \"5d30a1bc-779c-4c29-aa6c-cc69243f7a32\") " pod="openstack/barbican-api-58fcc7f846-vt29r" Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.540164 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d30a1bc-779c-4c29-aa6c-cc69243f7a32-combined-ca-bundle\") pod \"barbican-api-58fcc7f846-vt29r\" (UID: \"5d30a1bc-779c-4c29-aa6c-cc69243f7a32\") " pod="openstack/barbican-api-58fcc7f846-vt29r" Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.549282 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5d30a1bc-779c-4c29-aa6c-cc69243f7a32-config-data-custom\") pod \"barbican-api-58fcc7f846-vt29r\" (UID: \"5d30a1bc-779c-4c29-aa6c-cc69243f7a32\") " pod="openstack/barbican-api-58fcc7f846-vt29r" Nov 22 
12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.553401 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8xzg\" (UniqueName: \"kubernetes.io/projected/5d30a1bc-779c-4c29-aa6c-cc69243f7a32-kube-api-access-s8xzg\") pod \"barbican-api-58fcc7f846-vt29r\" (UID: \"5d30a1bc-779c-4c29-aa6c-cc69243f7a32\") " pod="openstack/barbican-api-58fcc7f846-vt29r" Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.605749 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-58fcc7f846-vt29r" Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.818201 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-59dc6657f-22jc6"] Nov 22 12:07:42 crc kubenswrapper[4772]: W1122 12:07:42.822332 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podedbde106_63a1_4a4b_91af_bb723902586e.slice/crio-1b2b2d42048380fc2a563f581f61375fc522efc0a13d4ec80f9278382aa90881 WatchSource:0}: Error finding container 1b2b2d42048380fc2a563f581f61375fc522efc0a13d4ec80f9278382aa90881: Status 404 returned error can't find the container with id 1b2b2d42048380fc2a563f581f61375fc522efc0a13d4ec80f9278382aa90881 Nov 22 12:07:42 crc kubenswrapper[4772]: I1122 12:07:42.937004 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6fbcf576c7-98xt4"] Nov 22 12:07:42 crc kubenswrapper[4772]: W1122 12:07:42.941854 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3f82a0c0_0894_4267_8031_cb302fee5976.slice/crio-5fa7c3f660c13bc9fbb6c5b59aeba3500b578c128fc3ed3e155b88535147f8a4 WatchSource:0}: Error finding container 5fa7c3f660c13bc9fbb6c5b59aeba3500b578c128fc3ed3e155b88535147f8a4: Status 404 returned error can't find the container with id 5fa7c3f660c13bc9fbb6c5b59aeba3500b578c128fc3ed3e155b88535147f8a4 Nov 22 12:07:43 crc kubenswrapper[4772]: I1122 12:07:43.059494 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-78cbb88bb-tbns2"] Nov 22 12:07:43 crc kubenswrapper[4772]: I1122 12:07:43.200406 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-58fcc7f846-vt29r"] Nov 22 12:07:43 crc kubenswrapper[4772]: W1122 12:07:43.230219 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d30a1bc_779c_4c29_aa6c_cc69243f7a32.slice/crio-db29a01d5cc621b891e2476a88d28d841d2b4cba6cc6f9bc067b6a5bfd7a27ec WatchSource:0}: Error finding container db29a01d5cc621b891e2476a88d28d841d2b4cba6cc6f9bc067b6a5bfd7a27ec: Status 404 returned error can't find the container with id db29a01d5cc621b891e2476a88d28d841d2b4cba6cc6f9bc067b6a5bfd7a27ec Nov 22 12:07:43 crc kubenswrapper[4772]: I1122 12:07:43.766811 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-59dc6657f-22jc6" event={"ID":"edbde106-63a1-4a4b-91af-bb723902586e","Type":"ContainerStarted","Data":"5f344be2df9d82ddde127ddfee178735f2e3c9bdf223d3c126dc954173ddd70b"} Nov 22 12:07:43 crc kubenswrapper[4772]: I1122 12:07:43.766859 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-59dc6657f-22jc6" event={"ID":"edbde106-63a1-4a4b-91af-bb723902586e","Type":"ContainerStarted","Data":"09268a4eb8bbcb9407c7879dea6575a7fe0ec816c0cfce5e7028c543c5c112e7"} Nov 22 12:07:43 crc kubenswrapper[4772]: I1122 
12:07:43.766870 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-59dc6657f-22jc6" event={"ID":"edbde106-63a1-4a4b-91af-bb723902586e","Type":"ContainerStarted","Data":"1b2b2d42048380fc2a563f581f61375fc522efc0a13d4ec80f9278382aa90881"} Nov 22 12:07:43 crc kubenswrapper[4772]: I1122 12:07:43.769396 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-78cbb88bb-tbns2" event={"ID":"a5b466e0-1ece-43ad-8898-7d98ebc952e4","Type":"ContainerStarted","Data":"08468fd9c718502ae524c619e3a425f843aca8df051c07b37c9acab730fb7753"} Nov 22 12:07:43 crc kubenswrapper[4772]: I1122 12:07:43.769468 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-78cbb88bb-tbns2" event={"ID":"a5b466e0-1ece-43ad-8898-7d98ebc952e4","Type":"ContainerStarted","Data":"1cd79185f834479cf0eb61a7a097c3a4d895daaa2e2751a7f29fbcc19a424cfc"} Nov 22 12:07:43 crc kubenswrapper[4772]: I1122 12:07:43.769482 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-78cbb88bb-tbns2" event={"ID":"a5b466e0-1ece-43ad-8898-7d98ebc952e4","Type":"ContainerStarted","Data":"7af3a10a4ef945726eac2fd7008355540d3dab966e9d3605f8c0109ac8d47cfa"} Nov 22 12:07:43 crc kubenswrapper[4772]: I1122 12:07:43.771608 4772 generic.go:334] "Generic (PLEG): container finished" podID="3f82a0c0-0894-4267-8031-cb302fee5976" containerID="9649c59dc869756da9516aacf3ca8721d2bfaf2dd798c8d8e6838fdbcf60aef2" exitCode=0 Nov 22 12:07:43 crc kubenswrapper[4772]: I1122 12:07:43.771673 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fbcf576c7-98xt4" event={"ID":"3f82a0c0-0894-4267-8031-cb302fee5976","Type":"ContainerDied","Data":"9649c59dc869756da9516aacf3ca8721d2bfaf2dd798c8d8e6838fdbcf60aef2"} Nov 22 12:07:43 crc kubenswrapper[4772]: I1122 12:07:43.771697 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fbcf576c7-98xt4" event={"ID":"3f82a0c0-0894-4267-8031-cb302fee5976","Type":"ContainerStarted","Data":"5fa7c3f660c13bc9fbb6c5b59aeba3500b578c128fc3ed3e155b88535147f8a4"} Nov 22 12:07:43 crc kubenswrapper[4772]: I1122 12:07:43.773888 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58fcc7f846-vt29r" event={"ID":"5d30a1bc-779c-4c29-aa6c-cc69243f7a32","Type":"ContainerStarted","Data":"6f00eafdddeaf39a0cdad7440f07bb262e29918238ae3b205f729e7ef80bff9a"} Nov 22 12:07:43 crc kubenswrapper[4772]: I1122 12:07:43.773937 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58fcc7f846-vt29r" event={"ID":"5d30a1bc-779c-4c29-aa6c-cc69243f7a32","Type":"ContainerStarted","Data":"aa544e3fadb8ce16dbb97fc5533fac9a8024b6734e7316903d4a6dcac3e1efcd"} Nov 22 12:07:43 crc kubenswrapper[4772]: I1122 12:07:43.773956 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58fcc7f846-vt29r" event={"ID":"5d30a1bc-779c-4c29-aa6c-cc69243f7a32","Type":"ContainerStarted","Data":"db29a01d5cc621b891e2476a88d28d841d2b4cba6cc6f9bc067b6a5bfd7a27ec"} Nov 22 12:07:43 crc kubenswrapper[4772]: I1122 12:07:43.774244 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-58fcc7f846-vt29r" Nov 22 12:07:43 crc kubenswrapper[4772]: I1122 12:07:43.774383 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-58fcc7f846-vt29r" Nov 22 12:07:43 crc kubenswrapper[4772]: I1122 12:07:43.791523 4772 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/barbican-worker-59dc6657f-22jc6" podStartSLOduration=2.791504581 podStartE2EDuration="2.791504581s" podCreationTimestamp="2025-11-22 12:07:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:07:43.783687836 +0000 UTC m=+5384.023132330" watchObservedRunningTime="2025-11-22 12:07:43.791504581 +0000 UTC m=+5384.030949075" Nov 22 12:07:43 crc kubenswrapper[4772]: I1122 12:07:43.811496 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-78cbb88bb-tbns2" podStartSLOduration=1.811476528 podStartE2EDuration="1.811476528s" podCreationTimestamp="2025-11-22 12:07:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:07:43.801785817 +0000 UTC m=+5384.041230311" watchObservedRunningTime="2025-11-22 12:07:43.811476528 +0000 UTC m=+5384.050921022" Nov 22 12:07:43 crc kubenswrapper[4772]: I1122 12:07:43.841685 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-58fcc7f846-vt29r" podStartSLOduration=1.84166512 podStartE2EDuration="1.84166512s" podCreationTimestamp="2025-11-22 12:07:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:07:43.83362993 +0000 UTC m=+5384.073074434" watchObservedRunningTime="2025-11-22 12:07:43.84166512 +0000 UTC m=+5384.081109614" Nov 22 12:07:44 crc kubenswrapper[4772]: I1122 12:07:44.783994 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fbcf576c7-98xt4" event={"ID":"3f82a0c0-0894-4267-8031-cb302fee5976","Type":"ContainerStarted","Data":"51042e107a74c5b9bccf1bbd42e8e4096185c36baa2b6e97c476594f4763b28f"} Nov 22 12:07:44 crc kubenswrapper[4772]: I1122 12:07:44.812665 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6fbcf576c7-98xt4" podStartSLOduration=2.812646445 podStartE2EDuration="2.812646445s" podCreationTimestamp="2025-11-22 12:07:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:07:44.807501076 +0000 UTC m=+5385.046945570" watchObservedRunningTime="2025-11-22 12:07:44.812646445 +0000 UTC m=+5385.052090939" Nov 22 12:07:45 crc kubenswrapper[4772]: I1122 12:07:45.793949 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6fbcf576c7-98xt4" Nov 22 12:07:46 crc kubenswrapper[4772]: I1122 12:07:46.413682 4772 scope.go:117] "RemoveContainer" containerID="b7c017a2ad8566061573012df3338326de3180226814eea67d7d515c52483472" Nov 22 12:07:46 crc kubenswrapper[4772]: E1122 12:07:46.414098 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:07:52 crc kubenswrapper[4772]: I1122 12:07:52.418404 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6fbcf576c7-98xt4" Nov 22 12:07:52 
crc kubenswrapper[4772]: I1122 12:07:52.500924 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78fd5f455c-cs7bm"] Nov 22 12:07:52 crc kubenswrapper[4772]: I1122 12:07:52.501368 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-78fd5f455c-cs7bm" podUID="0d506c4b-74c6-436b-a946-daff7ab0f9d6" containerName="dnsmasq-dns" containerID="cri-o://bedb43f929604ee7f38dffb65aff73ab5dd64492f0fa0076d6be32dc0ffb9c99" gracePeriod=10 Nov 22 12:07:52 crc kubenswrapper[4772]: I1122 12:07:52.863295 4772 generic.go:334] "Generic (PLEG): container finished" podID="0d506c4b-74c6-436b-a946-daff7ab0f9d6" containerID="bedb43f929604ee7f38dffb65aff73ab5dd64492f0fa0076d6be32dc0ffb9c99" exitCode=0 Nov 22 12:07:52 crc kubenswrapper[4772]: I1122 12:07:52.863631 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78fd5f455c-cs7bm" event={"ID":"0d506c4b-74c6-436b-a946-daff7ab0f9d6","Type":"ContainerDied","Data":"bedb43f929604ee7f38dffb65aff73ab5dd64492f0fa0076d6be32dc0ffb9c99"} Nov 22 12:07:53 crc kubenswrapper[4772]: I1122 12:07:53.050916 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78fd5f455c-cs7bm" Nov 22 12:07:53 crc kubenswrapper[4772]: I1122 12:07:53.133953 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d506c4b-74c6-436b-a946-daff7ab0f9d6-config\") pod \"0d506c4b-74c6-436b-a946-daff7ab0f9d6\" (UID: \"0d506c4b-74c6-436b-a946-daff7ab0f9d6\") " Nov 22 12:07:53 crc kubenswrapper[4772]: I1122 12:07:53.134029 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d506c4b-74c6-436b-a946-daff7ab0f9d6-dns-svc\") pod \"0d506c4b-74c6-436b-a946-daff7ab0f9d6\" (UID: \"0d506c4b-74c6-436b-a946-daff7ab0f9d6\") " Nov 22 12:07:53 crc kubenswrapper[4772]: I1122 12:07:53.134101 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0d506c4b-74c6-436b-a946-daff7ab0f9d6-ovsdbserver-nb\") pod \"0d506c4b-74c6-436b-a946-daff7ab0f9d6\" (UID: \"0d506c4b-74c6-436b-a946-daff7ab0f9d6\") " Nov 22 12:07:53 crc kubenswrapper[4772]: I1122 12:07:53.134150 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtqfm\" (UniqueName: \"kubernetes.io/projected/0d506c4b-74c6-436b-a946-daff7ab0f9d6-kube-api-access-jtqfm\") pod \"0d506c4b-74c6-436b-a946-daff7ab0f9d6\" (UID: \"0d506c4b-74c6-436b-a946-daff7ab0f9d6\") " Nov 22 12:07:53 crc kubenswrapper[4772]: I1122 12:07:53.134180 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0d506c4b-74c6-436b-a946-daff7ab0f9d6-ovsdbserver-sb\") pod \"0d506c4b-74c6-436b-a946-daff7ab0f9d6\" (UID: \"0d506c4b-74c6-436b-a946-daff7ab0f9d6\") " Nov 22 12:07:53 crc kubenswrapper[4772]: I1122 12:07:53.146341 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d506c4b-74c6-436b-a946-daff7ab0f9d6-kube-api-access-jtqfm" (OuterVolumeSpecName: "kube-api-access-jtqfm") pod "0d506c4b-74c6-436b-a946-daff7ab0f9d6" (UID: "0d506c4b-74c6-436b-a946-daff7ab0f9d6"). InnerVolumeSpecName "kube-api-access-jtqfm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:07:53 crc kubenswrapper[4772]: I1122 12:07:53.186855 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d506c4b-74c6-436b-a946-daff7ab0f9d6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0d506c4b-74c6-436b-a946-daff7ab0f9d6" (UID: "0d506c4b-74c6-436b-a946-daff7ab0f9d6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 12:07:53 crc kubenswrapper[4772]: I1122 12:07:53.196273 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d506c4b-74c6-436b-a946-daff7ab0f9d6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0d506c4b-74c6-436b-a946-daff7ab0f9d6" (UID: "0d506c4b-74c6-436b-a946-daff7ab0f9d6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 12:07:53 crc kubenswrapper[4772]: E1122 12:07:53.201660 4772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0d506c4b-74c6-436b-a946-daff7ab0f9d6-config podName:0d506c4b-74c6-436b-a946-daff7ab0f9d6 nodeName:}" failed. No retries permitted until 2025-11-22 12:07:53.701602702 +0000 UTC m=+5393.941047216 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "config" (UniqueName: "kubernetes.io/configmap/0d506c4b-74c6-436b-a946-daff7ab0f9d6-config") pod "0d506c4b-74c6-436b-a946-daff7ab0f9d6" (UID: "0d506c4b-74c6-436b-a946-daff7ab0f9d6") : error deleting /var/lib/kubelet/pods/0d506c4b-74c6-436b-a946-daff7ab0f9d6/volume-subpaths: remove /var/lib/kubelet/pods/0d506c4b-74c6-436b-a946-daff7ab0f9d6/volume-subpaths: no such file or directory Nov 22 12:07:53 crc kubenswrapper[4772]: I1122 12:07:53.202644 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d506c4b-74c6-436b-a946-daff7ab0f9d6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0d506c4b-74c6-436b-a946-daff7ab0f9d6" (UID: "0d506c4b-74c6-436b-a946-daff7ab0f9d6"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 12:07:53 crc kubenswrapper[4772]: I1122 12:07:53.236802 4772 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0d506c4b-74c6-436b-a946-daff7ab0f9d6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 12:07:53 crc kubenswrapper[4772]: I1122 12:07:53.236854 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtqfm\" (UniqueName: \"kubernetes.io/projected/0d506c4b-74c6-436b-a946-daff7ab0f9d6-kube-api-access-jtqfm\") on node \"crc\" DevicePath \"\"" Nov 22 12:07:53 crc kubenswrapper[4772]: I1122 12:07:53.236866 4772 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0d506c4b-74c6-436b-a946-daff7ab0f9d6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 22 12:07:53 crc kubenswrapper[4772]: I1122 12:07:53.236876 4772 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d506c4b-74c6-436b-a946-daff7ab0f9d6-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 12:07:53 crc kubenswrapper[4772]: I1122 12:07:53.744421 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d506c4b-74c6-436b-a946-daff7ab0f9d6-config\") pod \"0d506c4b-74c6-436b-a946-daff7ab0f9d6\" (UID: \"0d506c4b-74c6-436b-a946-daff7ab0f9d6\") " Nov 22 12:07:53 crc kubenswrapper[4772]: I1122 12:07:53.745202 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d506c4b-74c6-436b-a946-daff7ab0f9d6-config" (OuterVolumeSpecName: "config") pod "0d506c4b-74c6-436b-a946-daff7ab0f9d6" (UID: "0d506c4b-74c6-436b-a946-daff7ab0f9d6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 12:07:53 crc kubenswrapper[4772]: I1122 12:07:53.846310 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d506c4b-74c6-436b-a946-daff7ab0f9d6-config\") on node \"crc\" DevicePath \"\"" Nov 22 12:07:53 crc kubenswrapper[4772]: I1122 12:07:53.875155 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78fd5f455c-cs7bm" event={"ID":"0d506c4b-74c6-436b-a946-daff7ab0f9d6","Type":"ContainerDied","Data":"744f9385bb53e0448fc615bfb6bb8e15388a5acf778c5b3bf682196bc7af5edc"} Nov 22 12:07:53 crc kubenswrapper[4772]: I1122 12:07:53.875246 4772 scope.go:117] "RemoveContainer" containerID="bedb43f929604ee7f38dffb65aff73ab5dd64492f0fa0076d6be32dc0ffb9c99" Nov 22 12:07:53 crc kubenswrapper[4772]: I1122 12:07:53.875251 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78fd5f455c-cs7bm" Nov 22 12:07:53 crc kubenswrapper[4772]: I1122 12:07:53.914007 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78fd5f455c-cs7bm"] Nov 22 12:07:53 crc kubenswrapper[4772]: I1122 12:07:53.920043 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78fd5f455c-cs7bm"] Nov 22 12:07:53 crc kubenswrapper[4772]: I1122 12:07:53.966927 4772 scope.go:117] "RemoveContainer" containerID="8438bcedf41aaa01622d40be0c790a688d78795fea580daed8fcf7785ccff3f9" Nov 22 12:07:54 crc kubenswrapper[4772]: I1122 12:07:54.128819 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-58fcc7f846-vt29r" Nov 22 12:07:54 crc kubenswrapper[4772]: I1122 12:07:54.320109 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-58fcc7f846-vt29r" Nov 22 12:07:55 crc kubenswrapper[4772]: I1122 12:07:55.430856 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d506c4b-74c6-436b-a946-daff7ab0f9d6" path="/var/lib/kubelet/pods/0d506c4b-74c6-436b-a946-daff7ab0f9d6/volumes" Nov 22 12:08:00 crc kubenswrapper[4772]: I1122 12:08:00.413652 4772 scope.go:117] "RemoveContainer" containerID="b7c017a2ad8566061573012df3338326de3180226814eea67d7d515c52483472" Nov 22 12:08:00 crc kubenswrapper[4772]: E1122 12:08:00.414611 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:08:07 crc kubenswrapper[4772]: I1122 12:08:07.100181 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-hvhfm"] Nov 22 12:08:07 crc kubenswrapper[4772]: E1122 12:08:07.101039 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d506c4b-74c6-436b-a946-daff7ab0f9d6" containerName="init" Nov 22 12:08:07 crc kubenswrapper[4772]: I1122 12:08:07.101076 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d506c4b-74c6-436b-a946-daff7ab0f9d6" containerName="init" Nov 22 12:08:07 crc kubenswrapper[4772]: E1122 12:08:07.101091 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d506c4b-74c6-436b-a946-daff7ab0f9d6" containerName="dnsmasq-dns" Nov 22 12:08:07 crc kubenswrapper[4772]: I1122 12:08:07.101099 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d506c4b-74c6-436b-a946-daff7ab0f9d6" containerName="dnsmasq-dns" Nov 22 12:08:07 crc kubenswrapper[4772]: I1122 12:08:07.101307 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d506c4b-74c6-436b-a946-daff7ab0f9d6" containerName="dnsmasq-dns" Nov 22 12:08:07 crc kubenswrapper[4772]: I1122 12:08:07.101990 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-hvhfm" Nov 22 12:08:07 crc kubenswrapper[4772]: I1122 12:08:07.123583 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-hvhfm"] Nov 22 12:08:07 crc kubenswrapper[4772]: I1122 12:08:07.216311 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mc4tp\" (UniqueName: \"kubernetes.io/projected/43a4779b-3e60-4259-80b8-3d13029473f2-kube-api-access-mc4tp\") pod \"neutron-db-create-hvhfm\" (UID: \"43a4779b-3e60-4259-80b8-3d13029473f2\") " pod="openstack/neutron-db-create-hvhfm" Nov 22 12:08:07 crc kubenswrapper[4772]: I1122 12:08:07.318399 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mc4tp\" (UniqueName: \"kubernetes.io/projected/43a4779b-3e60-4259-80b8-3d13029473f2-kube-api-access-mc4tp\") pod \"neutron-db-create-hvhfm\" (UID: \"43a4779b-3e60-4259-80b8-3d13029473f2\") " pod="openstack/neutron-db-create-hvhfm" Nov 22 12:08:07 crc kubenswrapper[4772]: I1122 12:08:07.344979 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mc4tp\" (UniqueName: \"kubernetes.io/projected/43a4779b-3e60-4259-80b8-3d13029473f2-kube-api-access-mc4tp\") pod \"neutron-db-create-hvhfm\" (UID: \"43a4779b-3e60-4259-80b8-3d13029473f2\") " pod="openstack/neutron-db-create-hvhfm" Nov 22 12:08:07 crc kubenswrapper[4772]: I1122 12:08:07.428394 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-hvhfm" Nov 22 12:08:07 crc kubenswrapper[4772]: I1122 12:08:07.898944 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-hvhfm"] Nov 22 12:08:08 crc kubenswrapper[4772]: I1122 12:08:08.012927 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-hvhfm" event={"ID":"43a4779b-3e60-4259-80b8-3d13029473f2","Type":"ContainerStarted","Data":"6b5dd03539541a4049fa1eecf6b91430fc468343970954583d7e9b1a8f3c8819"} Nov 22 12:08:09 crc kubenswrapper[4772]: I1122 12:08:09.023857 4772 generic.go:334] "Generic (PLEG): container finished" podID="43a4779b-3e60-4259-80b8-3d13029473f2" containerID="320823e9ab6cfd09600f9e04b66562c8435ffdeabfb07d5431b829ac126c80bf" exitCode=0 Nov 22 12:08:09 crc kubenswrapper[4772]: I1122 12:08:09.024269 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-hvhfm" event={"ID":"43a4779b-3e60-4259-80b8-3d13029473f2","Type":"ContainerDied","Data":"320823e9ab6cfd09600f9e04b66562c8435ffdeabfb07d5431b829ac126c80bf"} Nov 22 12:08:10 crc kubenswrapper[4772]: I1122 12:08:10.332011 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-hvhfm" Nov 22 12:08:10 crc kubenswrapper[4772]: I1122 12:08:10.479000 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mc4tp\" (UniqueName: \"kubernetes.io/projected/43a4779b-3e60-4259-80b8-3d13029473f2-kube-api-access-mc4tp\") pod \"43a4779b-3e60-4259-80b8-3d13029473f2\" (UID: \"43a4779b-3e60-4259-80b8-3d13029473f2\") " Nov 22 12:08:10 crc kubenswrapper[4772]: I1122 12:08:10.484411 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43a4779b-3e60-4259-80b8-3d13029473f2-kube-api-access-mc4tp" (OuterVolumeSpecName: "kube-api-access-mc4tp") pod "43a4779b-3e60-4259-80b8-3d13029473f2" (UID: "43a4779b-3e60-4259-80b8-3d13029473f2"). 
InnerVolumeSpecName "kube-api-access-mc4tp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:08:10 crc kubenswrapper[4772]: I1122 12:08:10.580824 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mc4tp\" (UniqueName: \"kubernetes.io/projected/43a4779b-3e60-4259-80b8-3d13029473f2-kube-api-access-mc4tp\") on node \"crc\" DevicePath \"\"" Nov 22 12:08:11 crc kubenswrapper[4772]: I1122 12:08:11.045091 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-hvhfm" event={"ID":"43a4779b-3e60-4259-80b8-3d13029473f2","Type":"ContainerDied","Data":"6b5dd03539541a4049fa1eecf6b91430fc468343970954583d7e9b1a8f3c8819"} Nov 22 12:08:11 crc kubenswrapper[4772]: I1122 12:08:11.045128 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b5dd03539541a4049fa1eecf6b91430fc468343970954583d7e9b1a8f3c8819" Nov 22 12:08:11 crc kubenswrapper[4772]: I1122 12:08:11.045181 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-hvhfm" Nov 22 12:08:12 crc kubenswrapper[4772]: I1122 12:08:12.414880 4772 scope.go:117] "RemoveContainer" containerID="b7c017a2ad8566061573012df3338326de3180226814eea67d7d515c52483472" Nov 22 12:08:13 crc kubenswrapper[4772]: I1122 12:08:13.062484 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" event={"ID":"2386c238-461f-4956-940f-ac3c26eb052e","Type":"ContainerStarted","Data":"a0559f4649dac5095b0b01f40c57f009ee300eceb2af79f78144e4e5d01c3049"} Nov 22 12:08:17 crc kubenswrapper[4772]: I1122 12:08:17.158842 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-480f-account-create-96xkv"] Nov 22 12:08:17 crc kubenswrapper[4772]: E1122 12:08:17.159785 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43a4779b-3e60-4259-80b8-3d13029473f2" containerName="mariadb-database-create" Nov 22 12:08:17 crc kubenswrapper[4772]: I1122 12:08:17.159797 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="43a4779b-3e60-4259-80b8-3d13029473f2" containerName="mariadb-database-create" Nov 22 12:08:17 crc kubenswrapper[4772]: I1122 12:08:17.159961 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="43a4779b-3e60-4259-80b8-3d13029473f2" containerName="mariadb-database-create" Nov 22 12:08:17 crc kubenswrapper[4772]: I1122 12:08:17.160590 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-480f-account-create-96xkv" Nov 22 12:08:17 crc kubenswrapper[4772]: I1122 12:08:17.163097 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Nov 22 12:08:17 crc kubenswrapper[4772]: I1122 12:08:17.191226 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-480f-account-create-96xkv"] Nov 22 12:08:17 crc kubenswrapper[4772]: I1122 12:08:17.309182 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77v5w\" (UniqueName: \"kubernetes.io/projected/353fe9c0-a672-4e62-a955-097502ceedf2-kube-api-access-77v5w\") pod \"neutron-480f-account-create-96xkv\" (UID: \"353fe9c0-a672-4e62-a955-097502ceedf2\") " pod="openstack/neutron-480f-account-create-96xkv" Nov 22 12:08:17 crc kubenswrapper[4772]: I1122 12:08:17.411265 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77v5w\" (UniqueName: \"kubernetes.io/projected/353fe9c0-a672-4e62-a955-097502ceedf2-kube-api-access-77v5w\") pod \"neutron-480f-account-create-96xkv\" (UID: \"353fe9c0-a672-4e62-a955-097502ceedf2\") " pod="openstack/neutron-480f-account-create-96xkv" Nov 22 12:08:17 crc kubenswrapper[4772]: I1122 12:08:17.456923 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77v5w\" (UniqueName: \"kubernetes.io/projected/353fe9c0-a672-4e62-a955-097502ceedf2-kube-api-access-77v5w\") pod \"neutron-480f-account-create-96xkv\" (UID: \"353fe9c0-a672-4e62-a955-097502ceedf2\") " pod="openstack/neutron-480f-account-create-96xkv" Nov 22 12:08:17 crc kubenswrapper[4772]: I1122 12:08:17.497472 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-480f-account-create-96xkv" Nov 22 12:08:17 crc kubenswrapper[4772]: I1122 12:08:17.940005 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-480f-account-create-96xkv"] Nov 22 12:08:17 crc kubenswrapper[4772]: W1122 12:08:17.946181 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod353fe9c0_a672_4e62_a955_097502ceedf2.slice/crio-c2341ca0f161fe09a8f0d5c05f9fc8750e09f9dc733bcc4b0aa738bc2ee08133 WatchSource:0}: Error finding container c2341ca0f161fe09a8f0d5c05f9fc8750e09f9dc733bcc4b0aa738bc2ee08133: Status 404 returned error can't find the container with id c2341ca0f161fe09a8f0d5c05f9fc8750e09f9dc733bcc4b0aa738bc2ee08133 Nov 22 12:08:18 crc kubenswrapper[4772]: I1122 12:08:18.118071 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-480f-account-create-96xkv" event={"ID":"353fe9c0-a672-4e62-a955-097502ceedf2","Type":"ContainerStarted","Data":"c2341ca0f161fe09a8f0d5c05f9fc8750e09f9dc733bcc4b0aa738bc2ee08133"} Nov 22 12:08:18 crc kubenswrapper[4772]: E1122 12:08:18.402378 4772 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod353fe9c0_a672_4e62_a955_097502ceedf2.slice/crio-conmon-2b56640bf9a069465b087a3b974f2dd92ede2c9346e893a7218878240bc5a19a.scope\": RecentStats: unable to find data in memory cache]" Nov 22 12:08:19 crc kubenswrapper[4772]: I1122 12:08:19.131570 4772 generic.go:334] "Generic (PLEG): container finished" podID="353fe9c0-a672-4e62-a955-097502ceedf2" containerID="2b56640bf9a069465b087a3b974f2dd92ede2c9346e893a7218878240bc5a19a" exitCode=0 
Nov 22 12:08:19 crc kubenswrapper[4772]: I1122 12:08:19.131621 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-480f-account-create-96xkv" event={"ID":"353fe9c0-a672-4e62-a955-097502ceedf2","Type":"ContainerDied","Data":"2b56640bf9a069465b087a3b974f2dd92ede2c9346e893a7218878240bc5a19a"} Nov 22 12:08:20 crc kubenswrapper[4772]: I1122 12:08:20.508951 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-480f-account-create-96xkv" Nov 22 12:08:20 crc kubenswrapper[4772]: I1122 12:08:20.569851 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77v5w\" (UniqueName: \"kubernetes.io/projected/353fe9c0-a672-4e62-a955-097502ceedf2-kube-api-access-77v5w\") pod \"353fe9c0-a672-4e62-a955-097502ceedf2\" (UID: \"353fe9c0-a672-4e62-a955-097502ceedf2\") " Nov 22 12:08:20 crc kubenswrapper[4772]: I1122 12:08:20.580377 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/353fe9c0-a672-4e62-a955-097502ceedf2-kube-api-access-77v5w" (OuterVolumeSpecName: "kube-api-access-77v5w") pod "353fe9c0-a672-4e62-a955-097502ceedf2" (UID: "353fe9c0-a672-4e62-a955-097502ceedf2"). InnerVolumeSpecName "kube-api-access-77v5w". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:08:20 crc kubenswrapper[4772]: I1122 12:08:20.672519 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77v5w\" (UniqueName: \"kubernetes.io/projected/353fe9c0-a672-4e62-a955-097502ceedf2-kube-api-access-77v5w\") on node \"crc\" DevicePath \"\"" Nov 22 12:08:21 crc kubenswrapper[4772]: I1122 12:08:21.158648 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-480f-account-create-96xkv" event={"ID":"353fe9c0-a672-4e62-a955-097502ceedf2","Type":"ContainerDied","Data":"c2341ca0f161fe09a8f0d5c05f9fc8750e09f9dc733bcc4b0aa738bc2ee08133"} Nov 22 12:08:21 crc kubenswrapper[4772]: I1122 12:08:21.158709 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-480f-account-create-96xkv" Nov 22 12:08:21 crc kubenswrapper[4772]: I1122 12:08:21.158717 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2341ca0f161fe09a8f0d5c05f9fc8750e09f9dc733bcc4b0aa738bc2ee08133" Nov 22 12:08:22 crc kubenswrapper[4772]: I1122 12:08:22.406171 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-fpk6z"] Nov 22 12:08:22 crc kubenswrapper[4772]: E1122 12:08:22.406850 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="353fe9c0-a672-4e62-a955-097502ceedf2" containerName="mariadb-account-create" Nov 22 12:08:22 crc kubenswrapper[4772]: I1122 12:08:22.406864 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="353fe9c0-a672-4e62-a955-097502ceedf2" containerName="mariadb-account-create" Nov 22 12:08:22 crc kubenswrapper[4772]: I1122 12:08:22.407040 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="353fe9c0-a672-4e62-a955-097502ceedf2" containerName="mariadb-account-create" Nov 22 12:08:22 crc kubenswrapper[4772]: I1122 12:08:22.407645 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-fpk6z" Nov 22 12:08:22 crc kubenswrapper[4772]: I1122 12:08:22.411498 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-mpc5m" Nov 22 12:08:22 crc kubenswrapper[4772]: I1122 12:08:22.411601 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 22 12:08:22 crc kubenswrapper[4772]: I1122 12:08:22.412462 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 22 12:08:22 crc kubenswrapper[4772]: I1122 12:08:22.415503 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-fpk6z"] Nov 22 12:08:22 crc kubenswrapper[4772]: I1122 12:08:22.505149 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6t64l\" (UniqueName: \"kubernetes.io/projected/bf86649f-7929-4772-9aad-c40ab73ee61c-kube-api-access-6t64l\") pod \"neutron-db-sync-fpk6z\" (UID: \"bf86649f-7929-4772-9aad-c40ab73ee61c\") " pod="openstack/neutron-db-sync-fpk6z" Nov 22 12:08:22 crc kubenswrapper[4772]: I1122 12:08:22.505711 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bf86649f-7929-4772-9aad-c40ab73ee61c-config\") pod \"neutron-db-sync-fpk6z\" (UID: \"bf86649f-7929-4772-9aad-c40ab73ee61c\") " pod="openstack/neutron-db-sync-fpk6z" Nov 22 12:08:22 crc kubenswrapper[4772]: I1122 12:08:22.505818 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf86649f-7929-4772-9aad-c40ab73ee61c-combined-ca-bundle\") pod \"neutron-db-sync-fpk6z\" (UID: \"bf86649f-7929-4772-9aad-c40ab73ee61c\") " pod="openstack/neutron-db-sync-fpk6z" Nov 22 12:08:22 crc kubenswrapper[4772]: I1122 12:08:22.607676 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6t64l\" (UniqueName: \"kubernetes.io/projected/bf86649f-7929-4772-9aad-c40ab73ee61c-kube-api-access-6t64l\") pod \"neutron-db-sync-fpk6z\" (UID: \"bf86649f-7929-4772-9aad-c40ab73ee61c\") " pod="openstack/neutron-db-sync-fpk6z" Nov 22 12:08:22 crc kubenswrapper[4772]: I1122 12:08:22.607733 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bf86649f-7929-4772-9aad-c40ab73ee61c-config\") pod \"neutron-db-sync-fpk6z\" (UID: \"bf86649f-7929-4772-9aad-c40ab73ee61c\") " pod="openstack/neutron-db-sync-fpk6z" Nov 22 12:08:22 crc kubenswrapper[4772]: I1122 12:08:22.607773 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf86649f-7929-4772-9aad-c40ab73ee61c-combined-ca-bundle\") pod \"neutron-db-sync-fpk6z\" (UID: \"bf86649f-7929-4772-9aad-c40ab73ee61c\") " pod="openstack/neutron-db-sync-fpk6z" Nov 22 12:08:22 crc kubenswrapper[4772]: I1122 12:08:22.614202 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf86649f-7929-4772-9aad-c40ab73ee61c-combined-ca-bundle\") pod \"neutron-db-sync-fpk6z\" (UID: \"bf86649f-7929-4772-9aad-c40ab73ee61c\") " pod="openstack/neutron-db-sync-fpk6z" Nov 22 12:08:22 crc kubenswrapper[4772]: I1122 12:08:22.614259 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/secret/bf86649f-7929-4772-9aad-c40ab73ee61c-config\") pod \"neutron-db-sync-fpk6z\" (UID: \"bf86649f-7929-4772-9aad-c40ab73ee61c\") " pod="openstack/neutron-db-sync-fpk6z" Nov 22 12:08:22 crc kubenswrapper[4772]: I1122 12:08:22.630671 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6t64l\" (UniqueName: \"kubernetes.io/projected/bf86649f-7929-4772-9aad-c40ab73ee61c-kube-api-access-6t64l\") pod \"neutron-db-sync-fpk6z\" (UID: \"bf86649f-7929-4772-9aad-c40ab73ee61c\") " pod="openstack/neutron-db-sync-fpk6z" Nov 22 12:08:22 crc kubenswrapper[4772]: I1122 12:08:22.733871 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-fpk6z" Nov 22 12:08:23 crc kubenswrapper[4772]: I1122 12:08:23.209591 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-fpk6z"] Nov 22 12:08:24 crc kubenswrapper[4772]: I1122 12:08:24.199282 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-fpk6z" event={"ID":"bf86649f-7929-4772-9aad-c40ab73ee61c","Type":"ContainerStarted","Data":"d4e54b201244ffb67a1a758ca349d585e16c03b95b4b2eb8b159d201e8ec68f8"} Nov 22 12:08:24 crc kubenswrapper[4772]: I1122 12:08:24.203144 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-fpk6z" event={"ID":"bf86649f-7929-4772-9aad-c40ab73ee61c","Type":"ContainerStarted","Data":"0ef6ccf1db95ff554561d12f5dafa0bf472cf7b0b6a8065c4efe62a7175c0c44"} Nov 22 12:08:24 crc kubenswrapper[4772]: I1122 12:08:24.226679 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-fpk6z" podStartSLOduration=2.226649749 podStartE2EDuration="2.226649749s" podCreationTimestamp="2025-11-22 12:08:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:08:24.217557643 +0000 UTC m=+5424.457002157" watchObservedRunningTime="2025-11-22 12:08:24.226649749 +0000 UTC m=+5424.466094253" Nov 22 12:08:28 crc kubenswrapper[4772]: I1122 12:08:28.255949 4772 generic.go:334] "Generic (PLEG): container finished" podID="bf86649f-7929-4772-9aad-c40ab73ee61c" containerID="d4e54b201244ffb67a1a758ca349d585e16c03b95b4b2eb8b159d201e8ec68f8" exitCode=0 Nov 22 12:08:28 crc kubenswrapper[4772]: I1122 12:08:28.256355 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-fpk6z" event={"ID":"bf86649f-7929-4772-9aad-c40ab73ee61c","Type":"ContainerDied","Data":"d4e54b201244ffb67a1a758ca349d585e16c03b95b4b2eb8b159d201e8ec68f8"} Nov 22 12:08:29 crc kubenswrapper[4772]: I1122 12:08:29.655752 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-fpk6z" Nov 22 12:08:29 crc kubenswrapper[4772]: I1122 12:08:29.776856 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bf86649f-7929-4772-9aad-c40ab73ee61c-config\") pod \"bf86649f-7929-4772-9aad-c40ab73ee61c\" (UID: \"bf86649f-7929-4772-9aad-c40ab73ee61c\") " Nov 22 12:08:29 crc kubenswrapper[4772]: I1122 12:08:29.776994 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6t64l\" (UniqueName: \"kubernetes.io/projected/bf86649f-7929-4772-9aad-c40ab73ee61c-kube-api-access-6t64l\") pod \"bf86649f-7929-4772-9aad-c40ab73ee61c\" (UID: \"bf86649f-7929-4772-9aad-c40ab73ee61c\") " Nov 22 12:08:29 crc kubenswrapper[4772]: I1122 12:08:29.777021 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf86649f-7929-4772-9aad-c40ab73ee61c-combined-ca-bundle\") pod \"bf86649f-7929-4772-9aad-c40ab73ee61c\" (UID: \"bf86649f-7929-4772-9aad-c40ab73ee61c\") " Nov 22 12:08:29 crc kubenswrapper[4772]: I1122 12:08:29.786218 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf86649f-7929-4772-9aad-c40ab73ee61c-kube-api-access-6t64l" (OuterVolumeSpecName: "kube-api-access-6t64l") pod "bf86649f-7929-4772-9aad-c40ab73ee61c" (UID: "bf86649f-7929-4772-9aad-c40ab73ee61c"). InnerVolumeSpecName "kube-api-access-6t64l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:08:29 crc kubenswrapper[4772]: I1122 12:08:29.817162 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf86649f-7929-4772-9aad-c40ab73ee61c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bf86649f-7929-4772-9aad-c40ab73ee61c" (UID: "bf86649f-7929-4772-9aad-c40ab73ee61c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:08:29 crc kubenswrapper[4772]: I1122 12:08:29.821255 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf86649f-7929-4772-9aad-c40ab73ee61c-config" (OuterVolumeSpecName: "config") pod "bf86649f-7929-4772-9aad-c40ab73ee61c" (UID: "bf86649f-7929-4772-9aad-c40ab73ee61c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:08:29 crc kubenswrapper[4772]: I1122 12:08:29.880901 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/bf86649f-7929-4772-9aad-c40ab73ee61c-config\") on node \"crc\" DevicePath \"\"" Nov 22 12:08:29 crc kubenswrapper[4772]: I1122 12:08:29.880936 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6t64l\" (UniqueName: \"kubernetes.io/projected/bf86649f-7929-4772-9aad-c40ab73ee61c-kube-api-access-6t64l\") on node \"crc\" DevicePath \"\"" Nov 22 12:08:29 crc kubenswrapper[4772]: I1122 12:08:29.880955 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf86649f-7929-4772-9aad-c40ab73ee61c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 12:08:30 crc kubenswrapper[4772]: I1122 12:08:30.276749 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-fpk6z" event={"ID":"bf86649f-7929-4772-9aad-c40ab73ee61c","Type":"ContainerDied","Data":"0ef6ccf1db95ff554561d12f5dafa0bf472cf7b0b6a8065c4efe62a7175c0c44"} Nov 22 12:08:30 crc kubenswrapper[4772]: I1122 12:08:30.276800 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ef6ccf1db95ff554561d12f5dafa0bf472cf7b0b6a8065c4efe62a7175c0c44" Nov 22 12:08:30 crc kubenswrapper[4772]: I1122 12:08:30.276852 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-fpk6z" Nov 22 12:08:30 crc kubenswrapper[4772]: I1122 12:08:30.519620 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-547dcdf95c-t5q9x"] Nov 22 12:08:30 crc kubenswrapper[4772]: E1122 12:08:30.520746 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf86649f-7929-4772-9aad-c40ab73ee61c" containerName="neutron-db-sync" Nov 22 12:08:30 crc kubenswrapper[4772]: I1122 12:08:30.520778 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf86649f-7929-4772-9aad-c40ab73ee61c" containerName="neutron-db-sync" Nov 22 12:08:30 crc kubenswrapper[4772]: I1122 12:08:30.521031 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf86649f-7929-4772-9aad-c40ab73ee61c" containerName="neutron-db-sync" Nov 22 12:08:30 crc kubenswrapper[4772]: I1122 12:08:30.522309 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-547dcdf95c-t5q9x" Nov 22 12:08:30 crc kubenswrapper[4772]: I1122 12:08:30.554103 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-547dcdf95c-t5q9x"] Nov 22 12:08:30 crc kubenswrapper[4772]: I1122 12:08:30.606859 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6d6dc9d555-trfk6"] Nov 22 12:08:30 crc kubenswrapper[4772]: I1122 12:08:30.608704 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6d6dc9d555-trfk6" Nov 22 12:08:30 crc kubenswrapper[4772]: I1122 12:08:30.612263 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-mpc5m" Nov 22 12:08:30 crc kubenswrapper[4772]: I1122 12:08:30.612570 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 22 12:08:30 crc kubenswrapper[4772]: I1122 12:08:30.612736 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 22 12:08:30 crc kubenswrapper[4772]: I1122 12:08:30.630847 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6d6dc9d555-trfk6"] Nov 22 12:08:30 crc kubenswrapper[4772]: I1122 12:08:30.707881 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnh94\" (UniqueName: \"kubernetes.io/projected/48903cd6-6a18-4b98-979a-cab6160c1a98-kube-api-access-mnh94\") pod \"neutron-6d6dc9d555-trfk6\" (UID: \"48903cd6-6a18-4b98-979a-cab6160c1a98\") " pod="openstack/neutron-6d6dc9d555-trfk6" Nov 22 12:08:30 crc kubenswrapper[4772]: I1122 12:08:30.707940 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/48903cd6-6a18-4b98-979a-cab6160c1a98-config\") pod \"neutron-6d6dc9d555-trfk6\" (UID: \"48903cd6-6a18-4b98-979a-cab6160c1a98\") " pod="openstack/neutron-6d6dc9d555-trfk6" Nov 22 12:08:30 crc kubenswrapper[4772]: I1122 12:08:30.707984 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7b3a1cec-0da5-4443-b022-dbdc6fd0da13-ovsdbserver-sb\") pod \"dnsmasq-dns-547dcdf95c-t5q9x\" (UID: \"7b3a1cec-0da5-4443-b022-dbdc6fd0da13\") " pod="openstack/dnsmasq-dns-547dcdf95c-t5q9x" Nov 22 12:08:30 crc kubenswrapper[4772]: I1122 12:08:30.708007 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/48903cd6-6a18-4b98-979a-cab6160c1a98-httpd-config\") pod \"neutron-6d6dc9d555-trfk6\" (UID: \"48903cd6-6a18-4b98-979a-cab6160c1a98\") " pod="openstack/neutron-6d6dc9d555-trfk6" Nov 22 12:08:30 crc kubenswrapper[4772]: I1122 12:08:30.708266 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blmrf\" (UniqueName: \"kubernetes.io/projected/7b3a1cec-0da5-4443-b022-dbdc6fd0da13-kube-api-access-blmrf\") pod \"dnsmasq-dns-547dcdf95c-t5q9x\" (UID: \"7b3a1cec-0da5-4443-b022-dbdc6fd0da13\") " pod="openstack/dnsmasq-dns-547dcdf95c-t5q9x" Nov 22 12:08:30 crc kubenswrapper[4772]: I1122 12:08:30.708322 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7b3a1cec-0da5-4443-b022-dbdc6fd0da13-dns-svc\") pod \"dnsmasq-dns-547dcdf95c-t5q9x\" (UID: \"7b3a1cec-0da5-4443-b022-dbdc6fd0da13\") " pod="openstack/dnsmasq-dns-547dcdf95c-t5q9x" Nov 22 12:08:30 crc kubenswrapper[4772]: I1122 12:08:30.708349 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b3a1cec-0da5-4443-b022-dbdc6fd0da13-config\") pod \"dnsmasq-dns-547dcdf95c-t5q9x\" (UID: \"7b3a1cec-0da5-4443-b022-dbdc6fd0da13\") " pod="openstack/dnsmasq-dns-547dcdf95c-t5q9x" Nov 22 12:08:30 crc 
kubenswrapper[4772]: I1122 12:08:30.708618 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7b3a1cec-0da5-4443-b022-dbdc6fd0da13-ovsdbserver-nb\") pod \"dnsmasq-dns-547dcdf95c-t5q9x\" (UID: \"7b3a1cec-0da5-4443-b022-dbdc6fd0da13\") " pod="openstack/dnsmasq-dns-547dcdf95c-t5q9x" Nov 22 12:08:30 crc kubenswrapper[4772]: I1122 12:08:30.708847 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48903cd6-6a18-4b98-979a-cab6160c1a98-combined-ca-bundle\") pod \"neutron-6d6dc9d555-trfk6\" (UID: \"48903cd6-6a18-4b98-979a-cab6160c1a98\") " pod="openstack/neutron-6d6dc9d555-trfk6" Nov 22 12:08:30 crc kubenswrapper[4772]: I1122 12:08:30.809998 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7b3a1cec-0da5-4443-b022-dbdc6fd0da13-ovsdbserver-nb\") pod \"dnsmasq-dns-547dcdf95c-t5q9x\" (UID: \"7b3a1cec-0da5-4443-b022-dbdc6fd0da13\") " pod="openstack/dnsmasq-dns-547dcdf95c-t5q9x" Nov 22 12:08:30 crc kubenswrapper[4772]: I1122 12:08:30.810096 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48903cd6-6a18-4b98-979a-cab6160c1a98-combined-ca-bundle\") pod \"neutron-6d6dc9d555-trfk6\" (UID: \"48903cd6-6a18-4b98-979a-cab6160c1a98\") " pod="openstack/neutron-6d6dc9d555-trfk6" Nov 22 12:08:30 crc kubenswrapper[4772]: I1122 12:08:30.810123 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnh94\" (UniqueName: \"kubernetes.io/projected/48903cd6-6a18-4b98-979a-cab6160c1a98-kube-api-access-mnh94\") pod \"neutron-6d6dc9d555-trfk6\" (UID: \"48903cd6-6a18-4b98-979a-cab6160c1a98\") " pod="openstack/neutron-6d6dc9d555-trfk6" Nov 22 12:08:30 crc kubenswrapper[4772]: I1122 12:08:30.810140 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/48903cd6-6a18-4b98-979a-cab6160c1a98-config\") pod \"neutron-6d6dc9d555-trfk6\" (UID: \"48903cd6-6a18-4b98-979a-cab6160c1a98\") " pod="openstack/neutron-6d6dc9d555-trfk6" Nov 22 12:08:30 crc kubenswrapper[4772]: I1122 12:08:30.810176 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7b3a1cec-0da5-4443-b022-dbdc6fd0da13-ovsdbserver-sb\") pod \"dnsmasq-dns-547dcdf95c-t5q9x\" (UID: \"7b3a1cec-0da5-4443-b022-dbdc6fd0da13\") " pod="openstack/dnsmasq-dns-547dcdf95c-t5q9x" Nov 22 12:08:30 crc kubenswrapper[4772]: I1122 12:08:30.810201 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/48903cd6-6a18-4b98-979a-cab6160c1a98-httpd-config\") pod \"neutron-6d6dc9d555-trfk6\" (UID: \"48903cd6-6a18-4b98-979a-cab6160c1a98\") " pod="openstack/neutron-6d6dc9d555-trfk6" Nov 22 12:08:30 crc kubenswrapper[4772]: I1122 12:08:30.810266 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blmrf\" (UniqueName: \"kubernetes.io/projected/7b3a1cec-0da5-4443-b022-dbdc6fd0da13-kube-api-access-blmrf\") pod \"dnsmasq-dns-547dcdf95c-t5q9x\" (UID: \"7b3a1cec-0da5-4443-b022-dbdc6fd0da13\") " pod="openstack/dnsmasq-dns-547dcdf95c-t5q9x" Nov 22 12:08:30 crc kubenswrapper[4772]: I1122 12:08:30.810293 
4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7b3a1cec-0da5-4443-b022-dbdc6fd0da13-dns-svc\") pod \"dnsmasq-dns-547dcdf95c-t5q9x\" (UID: \"7b3a1cec-0da5-4443-b022-dbdc6fd0da13\") " pod="openstack/dnsmasq-dns-547dcdf95c-t5q9x" Nov 22 12:08:30 crc kubenswrapper[4772]: I1122 12:08:30.810318 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b3a1cec-0da5-4443-b022-dbdc6fd0da13-config\") pod \"dnsmasq-dns-547dcdf95c-t5q9x\" (UID: \"7b3a1cec-0da5-4443-b022-dbdc6fd0da13\") " pod="openstack/dnsmasq-dns-547dcdf95c-t5q9x" Nov 22 12:08:30 crc kubenswrapper[4772]: I1122 12:08:30.811255 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7b3a1cec-0da5-4443-b022-dbdc6fd0da13-ovsdbserver-nb\") pod \"dnsmasq-dns-547dcdf95c-t5q9x\" (UID: \"7b3a1cec-0da5-4443-b022-dbdc6fd0da13\") " pod="openstack/dnsmasq-dns-547dcdf95c-t5q9x" Nov 22 12:08:30 crc kubenswrapper[4772]: I1122 12:08:30.811286 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b3a1cec-0da5-4443-b022-dbdc6fd0da13-config\") pod \"dnsmasq-dns-547dcdf95c-t5q9x\" (UID: \"7b3a1cec-0da5-4443-b022-dbdc6fd0da13\") " pod="openstack/dnsmasq-dns-547dcdf95c-t5q9x" Nov 22 12:08:30 crc kubenswrapper[4772]: I1122 12:08:30.811310 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7b3a1cec-0da5-4443-b022-dbdc6fd0da13-dns-svc\") pod \"dnsmasq-dns-547dcdf95c-t5q9x\" (UID: \"7b3a1cec-0da5-4443-b022-dbdc6fd0da13\") " pod="openstack/dnsmasq-dns-547dcdf95c-t5q9x" Nov 22 12:08:30 crc kubenswrapper[4772]: I1122 12:08:30.811310 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7b3a1cec-0da5-4443-b022-dbdc6fd0da13-ovsdbserver-sb\") pod \"dnsmasq-dns-547dcdf95c-t5q9x\" (UID: \"7b3a1cec-0da5-4443-b022-dbdc6fd0da13\") " pod="openstack/dnsmasq-dns-547dcdf95c-t5q9x" Nov 22 12:08:30 crc kubenswrapper[4772]: I1122 12:08:30.816973 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/48903cd6-6a18-4b98-979a-cab6160c1a98-config\") pod \"neutron-6d6dc9d555-trfk6\" (UID: \"48903cd6-6a18-4b98-979a-cab6160c1a98\") " pod="openstack/neutron-6d6dc9d555-trfk6" Nov 22 12:08:30 crc kubenswrapper[4772]: I1122 12:08:30.817309 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48903cd6-6a18-4b98-979a-cab6160c1a98-combined-ca-bundle\") pod \"neutron-6d6dc9d555-trfk6\" (UID: \"48903cd6-6a18-4b98-979a-cab6160c1a98\") " pod="openstack/neutron-6d6dc9d555-trfk6" Nov 22 12:08:30 crc kubenswrapper[4772]: I1122 12:08:30.821761 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/48903cd6-6a18-4b98-979a-cab6160c1a98-httpd-config\") pod \"neutron-6d6dc9d555-trfk6\" (UID: \"48903cd6-6a18-4b98-979a-cab6160c1a98\") " pod="openstack/neutron-6d6dc9d555-trfk6" Nov 22 12:08:30 crc kubenswrapper[4772]: I1122 12:08:30.832567 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnh94\" (UniqueName: \"kubernetes.io/projected/48903cd6-6a18-4b98-979a-cab6160c1a98-kube-api-access-mnh94\") pod 
\"neutron-6d6dc9d555-trfk6\" (UID: \"48903cd6-6a18-4b98-979a-cab6160c1a98\") " pod="openstack/neutron-6d6dc9d555-trfk6" Nov 22 12:08:30 crc kubenswrapper[4772]: I1122 12:08:30.832891 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blmrf\" (UniqueName: \"kubernetes.io/projected/7b3a1cec-0da5-4443-b022-dbdc6fd0da13-kube-api-access-blmrf\") pod \"dnsmasq-dns-547dcdf95c-t5q9x\" (UID: \"7b3a1cec-0da5-4443-b022-dbdc6fd0da13\") " pod="openstack/dnsmasq-dns-547dcdf95c-t5q9x" Nov 22 12:08:30 crc kubenswrapper[4772]: I1122 12:08:30.844860 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-547dcdf95c-t5q9x" Nov 22 12:08:30 crc kubenswrapper[4772]: I1122 12:08:30.940419 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6d6dc9d555-trfk6" Nov 22 12:08:31 crc kubenswrapper[4772]: I1122 12:08:31.314319 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-547dcdf95c-t5q9x"] Nov 22 12:08:31 crc kubenswrapper[4772]: W1122 12:08:31.321220 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b3a1cec_0da5_4443_b022_dbdc6fd0da13.slice/crio-f30bed1eeab5a603a90747579b7259d335ee984197d2fb79f76d16784d7d44fd WatchSource:0}: Error finding container f30bed1eeab5a603a90747579b7259d335ee984197d2fb79f76d16784d7d44fd: Status 404 returned error can't find the container with id f30bed1eeab5a603a90747579b7259d335ee984197d2fb79f76d16784d7d44fd Nov 22 12:08:31 crc kubenswrapper[4772]: I1122 12:08:31.572214 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6d6dc9d555-trfk6"] Nov 22 12:08:31 crc kubenswrapper[4772]: W1122 12:08:31.631523 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48903cd6_6a18_4b98_979a_cab6160c1a98.slice/crio-0c53fd9e8e15c3dcd6ba6cc2ae9c0762a745ed902d662764c14a46d3f58dc9f5 WatchSource:0}: Error finding container 0c53fd9e8e15c3dcd6ba6cc2ae9c0762a745ed902d662764c14a46d3f58dc9f5: Status 404 returned error can't find the container with id 0c53fd9e8e15c3dcd6ba6cc2ae9c0762a745ed902d662764c14a46d3f58dc9f5 Nov 22 12:08:32 crc kubenswrapper[4772]: I1122 12:08:32.296837 4772 generic.go:334] "Generic (PLEG): container finished" podID="7b3a1cec-0da5-4443-b022-dbdc6fd0da13" containerID="f197d0068aadf6795513d21fae9c11e012aa205344cddf23433331c589f20ed0" exitCode=0 Nov 22 12:08:32 crc kubenswrapper[4772]: I1122 12:08:32.296960 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-547dcdf95c-t5q9x" event={"ID":"7b3a1cec-0da5-4443-b022-dbdc6fd0da13","Type":"ContainerDied","Data":"f197d0068aadf6795513d21fae9c11e012aa205344cddf23433331c589f20ed0"} Nov 22 12:08:32 crc kubenswrapper[4772]: I1122 12:08:32.297414 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-547dcdf95c-t5q9x" event={"ID":"7b3a1cec-0da5-4443-b022-dbdc6fd0da13","Type":"ContainerStarted","Data":"f30bed1eeab5a603a90747579b7259d335ee984197d2fb79f76d16784d7d44fd"} Nov 22 12:08:32 crc kubenswrapper[4772]: I1122 12:08:32.301582 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d6dc9d555-trfk6" event={"ID":"48903cd6-6a18-4b98-979a-cab6160c1a98","Type":"ContainerStarted","Data":"c246c8f5622c77799c82bd94c4e502c778049c3154f3b0ee11af9bba6c7fc552"} Nov 22 12:08:32 crc kubenswrapper[4772]: I1122 12:08:32.301638 4772 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d6dc9d555-trfk6" event={"ID":"48903cd6-6a18-4b98-979a-cab6160c1a98","Type":"ContainerStarted","Data":"52606cbd2151c8316e29ba3d4da166cbf72f897c3fff13b3a0d4885bca4f9b43"} Nov 22 12:08:32 crc kubenswrapper[4772]: I1122 12:08:32.301652 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d6dc9d555-trfk6" event={"ID":"48903cd6-6a18-4b98-979a-cab6160c1a98","Type":"ContainerStarted","Data":"0c53fd9e8e15c3dcd6ba6cc2ae9c0762a745ed902d662764c14a46d3f58dc9f5"} Nov 22 12:08:32 crc kubenswrapper[4772]: I1122 12:08:32.302535 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6d6dc9d555-trfk6" Nov 22 12:08:32 crc kubenswrapper[4772]: I1122 12:08:32.342338 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6d6dc9d555-trfk6" podStartSLOduration=2.342320293 podStartE2EDuration="2.342320293s" podCreationTimestamp="2025-11-22 12:08:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:08:32.337598375 +0000 UTC m=+5432.577042869" watchObservedRunningTime="2025-11-22 12:08:32.342320293 +0000 UTC m=+5432.581764787" Nov 22 12:08:33 crc kubenswrapper[4772]: I1122 12:08:33.313490 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-547dcdf95c-t5q9x" event={"ID":"7b3a1cec-0da5-4443-b022-dbdc6fd0da13","Type":"ContainerStarted","Data":"5d837af06c7071b82640ccdcd58ed1fa44d171b6f36c7bdd6b1c355354d3a2d0"} Nov 22 12:08:33 crc kubenswrapper[4772]: I1122 12:08:33.313923 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-547dcdf95c-t5q9x" Nov 22 12:08:33 crc kubenswrapper[4772]: I1122 12:08:33.340502 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-547dcdf95c-t5q9x" podStartSLOduration=3.340477175 podStartE2EDuration="3.340477175s" podCreationTimestamp="2025-11-22 12:08:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:08:33.338324681 +0000 UTC m=+5433.577769195" watchObservedRunningTime="2025-11-22 12:08:33.340477175 +0000 UTC m=+5433.579921679" Nov 22 12:08:40 crc kubenswrapper[4772]: I1122 12:08:40.848321 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-547dcdf95c-t5q9x" Nov 22 12:08:40 crc kubenswrapper[4772]: I1122 12:08:40.926250 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6fbcf576c7-98xt4"] Nov 22 12:08:40 crc kubenswrapper[4772]: I1122 12:08:40.929236 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6fbcf576c7-98xt4" podUID="3f82a0c0-0894-4267-8031-cb302fee5976" containerName="dnsmasq-dns" containerID="cri-o://51042e107a74c5b9bccf1bbd42e8e4096185c36baa2b6e97c476594f4763b28f" gracePeriod=10 Nov 22 12:08:41 crc kubenswrapper[4772]: I1122 12:08:41.429602 4772 generic.go:334] "Generic (PLEG): container finished" podID="3f82a0c0-0894-4267-8031-cb302fee5976" containerID="51042e107a74c5b9bccf1bbd42e8e4096185c36baa2b6e97c476594f4763b28f" exitCode=0 Nov 22 12:08:41 crc kubenswrapper[4772]: I1122 12:08:41.429669 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fbcf576c7-98xt4" 
event={"ID":"3f82a0c0-0894-4267-8031-cb302fee5976","Type":"ContainerDied","Data":"51042e107a74c5b9bccf1bbd42e8e4096185c36baa2b6e97c476594f4763b28f"} Nov 22 12:08:41 crc kubenswrapper[4772]: I1122 12:08:41.430920 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fbcf576c7-98xt4" event={"ID":"3f82a0c0-0894-4267-8031-cb302fee5976","Type":"ContainerDied","Data":"5fa7c3f660c13bc9fbb6c5b59aeba3500b578c128fc3ed3e155b88535147f8a4"} Nov 22 12:08:41 crc kubenswrapper[4772]: I1122 12:08:41.431030 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5fa7c3f660c13bc9fbb6c5b59aeba3500b578c128fc3ed3e155b88535147f8a4" Nov 22 12:08:41 crc kubenswrapper[4772]: I1122 12:08:41.514889 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6fbcf576c7-98xt4" Nov 22 12:08:41 crc kubenswrapper[4772]: I1122 12:08:41.681696 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3f82a0c0-0894-4267-8031-cb302fee5976-ovsdbserver-nb\") pod \"3f82a0c0-0894-4267-8031-cb302fee5976\" (UID: \"3f82a0c0-0894-4267-8031-cb302fee5976\") " Nov 22 12:08:41 crc kubenswrapper[4772]: I1122 12:08:41.681844 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f82a0c0-0894-4267-8031-cb302fee5976-dns-svc\") pod \"3f82a0c0-0894-4267-8031-cb302fee5976\" (UID: \"3f82a0c0-0894-4267-8031-cb302fee5976\") " Nov 22 12:08:41 crc kubenswrapper[4772]: I1122 12:08:41.681885 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f82a0c0-0894-4267-8031-cb302fee5976-config\") pod \"3f82a0c0-0894-4267-8031-cb302fee5976\" (UID: \"3f82a0c0-0894-4267-8031-cb302fee5976\") " Nov 22 12:08:41 crc kubenswrapper[4772]: I1122 12:08:41.681975 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8h6c\" (UniqueName: \"kubernetes.io/projected/3f82a0c0-0894-4267-8031-cb302fee5976-kube-api-access-c8h6c\") pod \"3f82a0c0-0894-4267-8031-cb302fee5976\" (UID: \"3f82a0c0-0894-4267-8031-cb302fee5976\") " Nov 22 12:08:41 crc kubenswrapper[4772]: I1122 12:08:41.683119 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3f82a0c0-0894-4267-8031-cb302fee5976-ovsdbserver-sb\") pod \"3f82a0c0-0894-4267-8031-cb302fee5976\" (UID: \"3f82a0c0-0894-4267-8031-cb302fee5976\") " Nov 22 12:08:41 crc kubenswrapper[4772]: I1122 12:08:41.700633 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f82a0c0-0894-4267-8031-cb302fee5976-kube-api-access-c8h6c" (OuterVolumeSpecName: "kube-api-access-c8h6c") pod "3f82a0c0-0894-4267-8031-cb302fee5976" (UID: "3f82a0c0-0894-4267-8031-cb302fee5976"). InnerVolumeSpecName "kube-api-access-c8h6c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:08:41 crc kubenswrapper[4772]: I1122 12:08:41.728538 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f82a0c0-0894-4267-8031-cb302fee5976-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3f82a0c0-0894-4267-8031-cb302fee5976" (UID: "3f82a0c0-0894-4267-8031-cb302fee5976"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 12:08:41 crc kubenswrapper[4772]: I1122 12:08:41.736825 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f82a0c0-0894-4267-8031-cb302fee5976-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3f82a0c0-0894-4267-8031-cb302fee5976" (UID: "3f82a0c0-0894-4267-8031-cb302fee5976"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 12:08:41 crc kubenswrapper[4772]: I1122 12:08:41.738854 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f82a0c0-0894-4267-8031-cb302fee5976-config" (OuterVolumeSpecName: "config") pod "3f82a0c0-0894-4267-8031-cb302fee5976" (UID: "3f82a0c0-0894-4267-8031-cb302fee5976"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 12:08:41 crc kubenswrapper[4772]: I1122 12:08:41.739915 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f82a0c0-0894-4267-8031-cb302fee5976-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3f82a0c0-0894-4267-8031-cb302fee5976" (UID: "3f82a0c0-0894-4267-8031-cb302fee5976"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 12:08:41 crc kubenswrapper[4772]: I1122 12:08:41.785735 4772 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3f82a0c0-0894-4267-8031-cb302fee5976-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 12:08:41 crc kubenswrapper[4772]: I1122 12:08:41.785785 4772 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f82a0c0-0894-4267-8031-cb302fee5976-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 12:08:41 crc kubenswrapper[4772]: I1122 12:08:41.785803 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f82a0c0-0894-4267-8031-cb302fee5976-config\") on node \"crc\" DevicePath \"\"" Nov 22 12:08:41 crc kubenswrapper[4772]: I1122 12:08:41.785820 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8h6c\" (UniqueName: \"kubernetes.io/projected/3f82a0c0-0894-4267-8031-cb302fee5976-kube-api-access-c8h6c\") on node \"crc\" DevicePath \"\"" Nov 22 12:08:41 crc kubenswrapper[4772]: I1122 12:08:41.785837 4772 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3f82a0c0-0894-4267-8031-cb302fee5976-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 22 12:08:42 crc kubenswrapper[4772]: I1122 12:08:42.438436 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6fbcf576c7-98xt4" Nov 22 12:08:42 crc kubenswrapper[4772]: I1122 12:08:42.480221 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6fbcf576c7-98xt4"] Nov 22 12:08:42 crc kubenswrapper[4772]: I1122 12:08:42.488280 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6fbcf576c7-98xt4"] Nov 22 12:08:43 crc kubenswrapper[4772]: I1122 12:08:43.436227 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f82a0c0-0894-4267-8031-cb302fee5976" path="/var/lib/kubelet/pods/3f82a0c0-0894-4267-8031-cb302fee5976/volumes" Nov 22 12:09:00 crc kubenswrapper[4772]: I1122 12:09:00.955125 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6d6dc9d555-trfk6" Nov 22 12:09:08 crc kubenswrapper[4772]: I1122 12:09:08.741690 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-j2wlg"] Nov 22 12:09:08 crc kubenswrapper[4772]: E1122 12:09:08.745945 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f82a0c0-0894-4267-8031-cb302fee5976" containerName="dnsmasq-dns" Nov 22 12:09:08 crc kubenswrapper[4772]: I1122 12:09:08.745970 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f82a0c0-0894-4267-8031-cb302fee5976" containerName="dnsmasq-dns" Nov 22 12:09:08 crc kubenswrapper[4772]: E1122 12:09:08.745998 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f82a0c0-0894-4267-8031-cb302fee5976" containerName="init" Nov 22 12:09:08 crc kubenswrapper[4772]: I1122 12:09:08.746008 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f82a0c0-0894-4267-8031-cb302fee5976" containerName="init" Nov 22 12:09:08 crc kubenswrapper[4772]: I1122 12:09:08.746236 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f82a0c0-0894-4267-8031-cb302fee5976" containerName="dnsmasq-dns" Nov 22 12:09:08 crc kubenswrapper[4772]: I1122 12:09:08.747247 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-j2wlg" Nov 22 12:09:08 crc kubenswrapper[4772]: I1122 12:09:08.758592 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-j2wlg"] Nov 22 12:09:08 crc kubenswrapper[4772]: I1122 12:09:08.865989 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktj7l\" (UniqueName: \"kubernetes.io/projected/c7a8bfda-da8c-42ec-b0f4-b88bff6a3cb4-kube-api-access-ktj7l\") pod \"glance-db-create-j2wlg\" (UID: \"c7a8bfda-da8c-42ec-b0f4-b88bff6a3cb4\") " pod="openstack/glance-db-create-j2wlg" Nov 22 12:09:08 crc kubenswrapper[4772]: I1122 12:09:08.968284 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktj7l\" (UniqueName: \"kubernetes.io/projected/c7a8bfda-da8c-42ec-b0f4-b88bff6a3cb4-kube-api-access-ktj7l\") pod \"glance-db-create-j2wlg\" (UID: \"c7a8bfda-da8c-42ec-b0f4-b88bff6a3cb4\") " pod="openstack/glance-db-create-j2wlg" Nov 22 12:09:09 crc kubenswrapper[4772]: I1122 12:09:09.003096 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktj7l\" (UniqueName: \"kubernetes.io/projected/c7a8bfda-da8c-42ec-b0f4-b88bff6a3cb4-kube-api-access-ktj7l\") pod \"glance-db-create-j2wlg\" (UID: \"c7a8bfda-da8c-42ec-b0f4-b88bff6a3cb4\") " pod="openstack/glance-db-create-j2wlg" Nov 22 12:09:09 crc kubenswrapper[4772]: I1122 12:09:09.109803 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-j2wlg" Nov 22 12:09:09 crc kubenswrapper[4772]: I1122 12:09:09.645770 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-j2wlg"] Nov 22 12:09:09 crc kubenswrapper[4772]: I1122 12:09:09.806101 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-j2wlg" event={"ID":"c7a8bfda-da8c-42ec-b0f4-b88bff6a3cb4","Type":"ContainerStarted","Data":"1d6b5a5f4f8926fb7bd75702ac2478435c76ce5f6f7524cedb05f2aca4acbbdc"} Nov 22 12:09:10 crc kubenswrapper[4772]: I1122 12:09:10.822363 4772 generic.go:334] "Generic (PLEG): container finished" podID="c7a8bfda-da8c-42ec-b0f4-b88bff6a3cb4" containerID="cfb4d65ba1f710d3c02ef992a8ddc9b0596ad90f539deb9a08e33bcb852265a0" exitCode=0 Nov 22 12:09:10 crc kubenswrapper[4772]: I1122 12:09:10.822742 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-j2wlg" event={"ID":"c7a8bfda-da8c-42ec-b0f4-b88bff6a3cb4","Type":"ContainerDied","Data":"cfb4d65ba1f710d3c02ef992a8ddc9b0596ad90f539deb9a08e33bcb852265a0"} Nov 22 12:09:11 crc kubenswrapper[4772]: I1122 12:09:11.316991 4772 scope.go:117] "RemoveContainer" containerID="1c1bc1ad1131f888a9c9e5dd9b342803950c3de8fecc49420c04a0758d15f687" Nov 22 12:09:11 crc kubenswrapper[4772]: I1122 12:09:11.342523 4772 scope.go:117] "RemoveContainer" containerID="4399d3a7fa52bb6d57e98ffc49c2b3cac4a1373b2076a659bad2889cc160a6a2" Nov 22 12:09:11 crc kubenswrapper[4772]: I1122 12:09:11.395256 4772 scope.go:117] "RemoveContainer" containerID="6164057f501470d044536b06289f46f18685d90d5c296697fe9647887a13ac40" Nov 22 12:09:12 crc kubenswrapper[4772]: I1122 12:09:12.221406 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-j2wlg" Nov 22 12:09:12 crc kubenswrapper[4772]: I1122 12:09:12.361005 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktj7l\" (UniqueName: \"kubernetes.io/projected/c7a8bfda-da8c-42ec-b0f4-b88bff6a3cb4-kube-api-access-ktj7l\") pod \"c7a8bfda-da8c-42ec-b0f4-b88bff6a3cb4\" (UID: \"c7a8bfda-da8c-42ec-b0f4-b88bff6a3cb4\") " Nov 22 12:09:12 crc kubenswrapper[4772]: I1122 12:09:12.368531 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7a8bfda-da8c-42ec-b0f4-b88bff6a3cb4-kube-api-access-ktj7l" (OuterVolumeSpecName: "kube-api-access-ktj7l") pod "c7a8bfda-da8c-42ec-b0f4-b88bff6a3cb4" (UID: "c7a8bfda-da8c-42ec-b0f4-b88bff6a3cb4"). InnerVolumeSpecName "kube-api-access-ktj7l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:09:12 crc kubenswrapper[4772]: I1122 12:09:12.464216 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktj7l\" (UniqueName: \"kubernetes.io/projected/c7a8bfda-da8c-42ec-b0f4-b88bff6a3cb4-kube-api-access-ktj7l\") on node \"crc\" DevicePath \"\"" Nov 22 12:09:12 crc kubenswrapper[4772]: I1122 12:09:12.851746 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-j2wlg" event={"ID":"c7a8bfda-da8c-42ec-b0f4-b88bff6a3cb4","Type":"ContainerDied","Data":"1d6b5a5f4f8926fb7bd75702ac2478435c76ce5f6f7524cedb05f2aca4acbbdc"} Nov 22 12:09:12 crc kubenswrapper[4772]: I1122 12:09:12.851808 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d6b5a5f4f8926fb7bd75702ac2478435c76ce5f6f7524cedb05f2aca4acbbdc" Nov 22 12:09:12 crc kubenswrapper[4772]: I1122 12:09:12.851894 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-j2wlg" Nov 22 12:09:18 crc kubenswrapper[4772]: I1122 12:09:18.795308 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-c044-account-create-m5bhq"] Nov 22 12:09:18 crc kubenswrapper[4772]: E1122 12:09:18.796953 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7a8bfda-da8c-42ec-b0f4-b88bff6a3cb4" containerName="mariadb-database-create" Nov 22 12:09:18 crc kubenswrapper[4772]: I1122 12:09:18.796976 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7a8bfda-da8c-42ec-b0f4-b88bff6a3cb4" containerName="mariadb-database-create" Nov 22 12:09:18 crc kubenswrapper[4772]: I1122 12:09:18.797318 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7a8bfda-da8c-42ec-b0f4-b88bff6a3cb4" containerName="mariadb-database-create" Nov 22 12:09:18 crc kubenswrapper[4772]: I1122 12:09:18.798441 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-c044-account-create-m5bhq" Nov 22 12:09:18 crc kubenswrapper[4772]: I1122 12:09:18.801171 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Nov 22 12:09:18 crc kubenswrapper[4772]: I1122 12:09:18.806500 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-c044-account-create-m5bhq"] Nov 22 12:09:18 crc kubenswrapper[4772]: I1122 12:09:18.902810 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjxcs\" (UniqueName: \"kubernetes.io/projected/f7c35d6d-c880-4170-95bf-426d5556c009-kube-api-access-wjxcs\") pod \"glance-c044-account-create-m5bhq\" (UID: \"f7c35d6d-c880-4170-95bf-426d5556c009\") " pod="openstack/glance-c044-account-create-m5bhq" Nov 22 12:09:19 crc kubenswrapper[4772]: I1122 12:09:19.005357 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjxcs\" (UniqueName: \"kubernetes.io/projected/f7c35d6d-c880-4170-95bf-426d5556c009-kube-api-access-wjxcs\") pod \"glance-c044-account-create-m5bhq\" (UID: \"f7c35d6d-c880-4170-95bf-426d5556c009\") " pod="openstack/glance-c044-account-create-m5bhq" Nov 22 12:09:19 crc kubenswrapper[4772]: I1122 12:09:19.040036 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjxcs\" (UniqueName: \"kubernetes.io/projected/f7c35d6d-c880-4170-95bf-426d5556c009-kube-api-access-wjxcs\") pod \"glance-c044-account-create-m5bhq\" (UID: \"f7c35d6d-c880-4170-95bf-426d5556c009\") " pod="openstack/glance-c044-account-create-m5bhq" Nov 22 12:09:19 crc kubenswrapper[4772]: I1122 12:09:19.135959 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-c044-account-create-m5bhq" Nov 22 12:09:19 crc kubenswrapper[4772]: I1122 12:09:19.443033 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-c044-account-create-m5bhq"] Nov 22 12:09:19 crc kubenswrapper[4772]: I1122 12:09:19.931959 4772 generic.go:334] "Generic (PLEG): container finished" podID="f7c35d6d-c880-4170-95bf-426d5556c009" containerID="b5ec57191c0978c3d507d4baff556d34681508a4103511d06ee0149ed36f3693" exitCode=0 Nov 22 12:09:19 crc kubenswrapper[4772]: I1122 12:09:19.932033 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-c044-account-create-m5bhq" event={"ID":"f7c35d6d-c880-4170-95bf-426d5556c009","Type":"ContainerDied","Data":"b5ec57191c0978c3d507d4baff556d34681508a4103511d06ee0149ed36f3693"} Nov 22 12:09:19 crc kubenswrapper[4772]: I1122 12:09:19.932216 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-c044-account-create-m5bhq" event={"ID":"f7c35d6d-c880-4170-95bf-426d5556c009","Type":"ContainerStarted","Data":"82b1508a6036ecc341ceff85f1c7820b610fa4f93cdaf50089cb5d4e369a5b9a"} Nov 22 12:09:21 crc kubenswrapper[4772]: I1122 12:09:21.389977 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-c044-account-create-m5bhq" Nov 22 12:09:21 crc kubenswrapper[4772]: I1122 12:09:21.455240 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjxcs\" (UniqueName: \"kubernetes.io/projected/f7c35d6d-c880-4170-95bf-426d5556c009-kube-api-access-wjxcs\") pod \"f7c35d6d-c880-4170-95bf-426d5556c009\" (UID: \"f7c35d6d-c880-4170-95bf-426d5556c009\") " Nov 22 12:09:21 crc kubenswrapper[4772]: I1122 12:09:21.467768 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7c35d6d-c880-4170-95bf-426d5556c009-kube-api-access-wjxcs" (OuterVolumeSpecName: "kube-api-access-wjxcs") pod "f7c35d6d-c880-4170-95bf-426d5556c009" (UID: "f7c35d6d-c880-4170-95bf-426d5556c009"). InnerVolumeSpecName "kube-api-access-wjxcs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:09:21 crc kubenswrapper[4772]: I1122 12:09:21.558475 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjxcs\" (UniqueName: \"kubernetes.io/projected/f7c35d6d-c880-4170-95bf-426d5556c009-kube-api-access-wjxcs\") on node \"crc\" DevicePath \"\"" Nov 22 12:09:21 crc kubenswrapper[4772]: I1122 12:09:21.996664 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-c044-account-create-m5bhq" event={"ID":"f7c35d6d-c880-4170-95bf-426d5556c009","Type":"ContainerDied","Data":"82b1508a6036ecc341ceff85f1c7820b610fa4f93cdaf50089cb5d4e369a5b9a"} Nov 22 12:09:21 crc kubenswrapper[4772]: I1122 12:09:21.996738 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82b1508a6036ecc341ceff85f1c7820b610fa4f93cdaf50089cb5d4e369a5b9a" Nov 22 12:09:21 crc kubenswrapper[4772]: I1122 12:09:21.996849 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-c044-account-create-m5bhq" Nov 22 12:09:23 crc kubenswrapper[4772]: I1122 12:09:23.930503 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-54vnx"] Nov 22 12:09:23 crc kubenswrapper[4772]: E1122 12:09:23.931381 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7c35d6d-c880-4170-95bf-426d5556c009" containerName="mariadb-account-create" Nov 22 12:09:23 crc kubenswrapper[4772]: I1122 12:09:23.931398 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7c35d6d-c880-4170-95bf-426d5556c009" containerName="mariadb-account-create" Nov 22 12:09:23 crc kubenswrapper[4772]: I1122 12:09:23.931658 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7c35d6d-c880-4170-95bf-426d5556c009" containerName="mariadb-account-create" Nov 22 12:09:23 crc kubenswrapper[4772]: I1122 12:09:23.932740 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-54vnx" Nov 22 12:09:23 crc kubenswrapper[4772]: I1122 12:09:23.936631 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-6zt2p" Nov 22 12:09:23 crc kubenswrapper[4772]: I1122 12:09:23.936809 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Nov 22 12:09:23 crc kubenswrapper[4772]: I1122 12:09:23.945130 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-54vnx"] Nov 22 12:09:24 crc kubenswrapper[4772]: I1122 12:09:24.037793 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/983e95e3-73ea-42da-8839-a3c85916c3c1-config-data\") pod \"glance-db-sync-54vnx\" (UID: \"983e95e3-73ea-42da-8839-a3c85916c3c1\") " pod="openstack/glance-db-sync-54vnx" Nov 22 12:09:24 crc kubenswrapper[4772]: I1122 12:09:24.037850 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6vgg\" (UniqueName: \"kubernetes.io/projected/983e95e3-73ea-42da-8839-a3c85916c3c1-kube-api-access-v6vgg\") pod \"glance-db-sync-54vnx\" (UID: \"983e95e3-73ea-42da-8839-a3c85916c3c1\") " pod="openstack/glance-db-sync-54vnx" Nov 22 12:09:24 crc kubenswrapper[4772]: I1122 12:09:24.038021 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/983e95e3-73ea-42da-8839-a3c85916c3c1-db-sync-config-data\") pod \"glance-db-sync-54vnx\" (UID: \"983e95e3-73ea-42da-8839-a3c85916c3c1\") " pod="openstack/glance-db-sync-54vnx" Nov 22 12:09:24 crc kubenswrapper[4772]: I1122 12:09:24.038251 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/983e95e3-73ea-42da-8839-a3c85916c3c1-combined-ca-bundle\") pod \"glance-db-sync-54vnx\" (UID: \"983e95e3-73ea-42da-8839-a3c85916c3c1\") " pod="openstack/glance-db-sync-54vnx" Nov 22 12:09:24 crc kubenswrapper[4772]: I1122 12:09:24.139621 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/983e95e3-73ea-42da-8839-a3c85916c3c1-config-data\") pod \"glance-db-sync-54vnx\" (UID: \"983e95e3-73ea-42da-8839-a3c85916c3c1\") " pod="openstack/glance-db-sync-54vnx" Nov 22 12:09:24 crc kubenswrapper[4772]: I1122 12:09:24.139698 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6vgg\" (UniqueName: \"kubernetes.io/projected/983e95e3-73ea-42da-8839-a3c85916c3c1-kube-api-access-v6vgg\") pod \"glance-db-sync-54vnx\" (UID: \"983e95e3-73ea-42da-8839-a3c85916c3c1\") " pod="openstack/glance-db-sync-54vnx" Nov 22 12:09:24 crc kubenswrapper[4772]: I1122 12:09:24.139753 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/983e95e3-73ea-42da-8839-a3c85916c3c1-db-sync-config-data\") pod \"glance-db-sync-54vnx\" (UID: \"983e95e3-73ea-42da-8839-a3c85916c3c1\") " pod="openstack/glance-db-sync-54vnx" Nov 22 12:09:24 crc kubenswrapper[4772]: I1122 12:09:24.139838 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/983e95e3-73ea-42da-8839-a3c85916c3c1-combined-ca-bundle\") pod 
\"glance-db-sync-54vnx\" (UID: \"983e95e3-73ea-42da-8839-a3c85916c3c1\") " pod="openstack/glance-db-sync-54vnx" Nov 22 12:09:24 crc kubenswrapper[4772]: I1122 12:09:24.146193 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/983e95e3-73ea-42da-8839-a3c85916c3c1-db-sync-config-data\") pod \"glance-db-sync-54vnx\" (UID: \"983e95e3-73ea-42da-8839-a3c85916c3c1\") " pod="openstack/glance-db-sync-54vnx" Nov 22 12:09:24 crc kubenswrapper[4772]: I1122 12:09:24.146567 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/983e95e3-73ea-42da-8839-a3c85916c3c1-config-data\") pod \"glance-db-sync-54vnx\" (UID: \"983e95e3-73ea-42da-8839-a3c85916c3c1\") " pod="openstack/glance-db-sync-54vnx" Nov 22 12:09:24 crc kubenswrapper[4772]: I1122 12:09:24.156774 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/983e95e3-73ea-42da-8839-a3c85916c3c1-combined-ca-bundle\") pod \"glance-db-sync-54vnx\" (UID: \"983e95e3-73ea-42da-8839-a3c85916c3c1\") " pod="openstack/glance-db-sync-54vnx" Nov 22 12:09:24 crc kubenswrapper[4772]: I1122 12:09:24.172035 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6vgg\" (UniqueName: \"kubernetes.io/projected/983e95e3-73ea-42da-8839-a3c85916c3c1-kube-api-access-v6vgg\") pod \"glance-db-sync-54vnx\" (UID: \"983e95e3-73ea-42da-8839-a3c85916c3c1\") " pod="openstack/glance-db-sync-54vnx" Nov 22 12:09:24 crc kubenswrapper[4772]: I1122 12:09:24.261709 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-54vnx" Nov 22 12:09:24 crc kubenswrapper[4772]: I1122 12:09:24.914473 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-54vnx"] Nov 22 12:09:25 crc kubenswrapper[4772]: I1122 12:09:25.056560 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-54vnx" event={"ID":"983e95e3-73ea-42da-8839-a3c85916c3c1","Type":"ContainerStarted","Data":"7bda04c4c04b857117e114b1faa83c7f25c6d714525b97d0e38cb229118aaa1c"} Nov 22 12:09:26 crc kubenswrapper[4772]: I1122 12:09:26.066631 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-54vnx" event={"ID":"983e95e3-73ea-42da-8839-a3c85916c3c1","Type":"ContainerStarted","Data":"7a372b7aa4f7b48ca0760c91e0e7cbb3f6158ba77a0a964058c1fcc4ef9177bd"} Nov 22 12:09:26 crc kubenswrapper[4772]: I1122 12:09:26.083792 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-54vnx" podStartSLOduration=3.083768829 podStartE2EDuration="3.083768829s" podCreationTimestamp="2025-11-22 12:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:09:26.08339977 +0000 UTC m=+5486.322844294" watchObservedRunningTime="2025-11-22 12:09:26.083768829 +0000 UTC m=+5486.323213323" Nov 22 12:09:29 crc kubenswrapper[4772]: I1122 12:09:29.104217 4772 generic.go:334] "Generic (PLEG): container finished" podID="983e95e3-73ea-42da-8839-a3c85916c3c1" containerID="7a372b7aa4f7b48ca0760c91e0e7cbb3f6158ba77a0a964058c1fcc4ef9177bd" exitCode=0 Nov 22 12:09:29 crc kubenswrapper[4772]: I1122 12:09:29.104356 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-54vnx" 
event={"ID":"983e95e3-73ea-42da-8839-a3c85916c3c1","Type":"ContainerDied","Data":"7a372b7aa4f7b48ca0760c91e0e7cbb3f6158ba77a0a964058c1fcc4ef9177bd"} Nov 22 12:09:30 crc kubenswrapper[4772]: I1122 12:09:30.623156 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-54vnx" Nov 22 12:09:30 crc kubenswrapper[4772]: I1122 12:09:30.689123 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/983e95e3-73ea-42da-8839-a3c85916c3c1-combined-ca-bundle\") pod \"983e95e3-73ea-42da-8839-a3c85916c3c1\" (UID: \"983e95e3-73ea-42da-8839-a3c85916c3c1\") " Nov 22 12:09:30 crc kubenswrapper[4772]: I1122 12:09:30.689227 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/983e95e3-73ea-42da-8839-a3c85916c3c1-db-sync-config-data\") pod \"983e95e3-73ea-42da-8839-a3c85916c3c1\" (UID: \"983e95e3-73ea-42da-8839-a3c85916c3c1\") " Nov 22 12:09:30 crc kubenswrapper[4772]: I1122 12:09:30.689313 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6vgg\" (UniqueName: \"kubernetes.io/projected/983e95e3-73ea-42da-8839-a3c85916c3c1-kube-api-access-v6vgg\") pod \"983e95e3-73ea-42da-8839-a3c85916c3c1\" (UID: \"983e95e3-73ea-42da-8839-a3c85916c3c1\") " Nov 22 12:09:30 crc kubenswrapper[4772]: I1122 12:09:30.689349 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/983e95e3-73ea-42da-8839-a3c85916c3c1-config-data\") pod \"983e95e3-73ea-42da-8839-a3c85916c3c1\" (UID: \"983e95e3-73ea-42da-8839-a3c85916c3c1\") " Nov 22 12:09:30 crc kubenswrapper[4772]: I1122 12:09:30.694858 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/983e95e3-73ea-42da-8839-a3c85916c3c1-kube-api-access-v6vgg" (OuterVolumeSpecName: "kube-api-access-v6vgg") pod "983e95e3-73ea-42da-8839-a3c85916c3c1" (UID: "983e95e3-73ea-42da-8839-a3c85916c3c1"). InnerVolumeSpecName "kube-api-access-v6vgg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:09:30 crc kubenswrapper[4772]: I1122 12:09:30.696327 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/983e95e3-73ea-42da-8839-a3c85916c3c1-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "983e95e3-73ea-42da-8839-a3c85916c3c1" (UID: "983e95e3-73ea-42da-8839-a3c85916c3c1"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:09:30 crc kubenswrapper[4772]: I1122 12:09:30.732102 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/983e95e3-73ea-42da-8839-a3c85916c3c1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "983e95e3-73ea-42da-8839-a3c85916c3c1" (UID: "983e95e3-73ea-42da-8839-a3c85916c3c1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:09:30 crc kubenswrapper[4772]: I1122 12:09:30.738096 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/983e95e3-73ea-42da-8839-a3c85916c3c1-config-data" (OuterVolumeSpecName: "config-data") pod "983e95e3-73ea-42da-8839-a3c85916c3c1" (UID: "983e95e3-73ea-42da-8839-a3c85916c3c1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:09:30 crc kubenswrapper[4772]: I1122 12:09:30.792592 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6vgg\" (UniqueName: \"kubernetes.io/projected/983e95e3-73ea-42da-8839-a3c85916c3c1-kube-api-access-v6vgg\") on node \"crc\" DevicePath \"\"" Nov 22 12:09:30 crc kubenswrapper[4772]: I1122 12:09:30.792654 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/983e95e3-73ea-42da-8839-a3c85916c3c1-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 12:09:30 crc kubenswrapper[4772]: I1122 12:09:30.792679 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/983e95e3-73ea-42da-8839-a3c85916c3c1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 12:09:30 crc kubenswrapper[4772]: I1122 12:09:30.792700 4772 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/983e95e3-73ea-42da-8839-a3c85916c3c1-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.136604 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-54vnx" event={"ID":"983e95e3-73ea-42da-8839-a3c85916c3c1","Type":"ContainerDied","Data":"7bda04c4c04b857117e114b1faa83c7f25c6d714525b97d0e38cb229118aaa1c"} Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.136671 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7bda04c4c04b857117e114b1faa83c7f25c6d714525b97d0e38cb229118aaa1c" Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.136784 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-54vnx" Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.511912 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 12:09:31 crc kubenswrapper[4772]: E1122 12:09:31.512308 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="983e95e3-73ea-42da-8839-a3c85916c3c1" containerName="glance-db-sync" Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.512323 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="983e95e3-73ea-42da-8839-a3c85916c3c1" containerName="glance-db-sync" Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.512511 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="983e95e3-73ea-42da-8839-a3c85916c3c1" containerName="glance-db-sync" Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.513405 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.516577 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.520333 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-6zt2p" Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.520587 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.520907 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.535401 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.546601 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-77cf987597-m5pcd"] Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.548039 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77cf987597-m5pcd" Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.554029 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77cf987597-m5pcd"] Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.610761 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42jnd\" (UniqueName: \"kubernetes.io/projected/a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1-kube-api-access-42jnd\") pod \"glance-default-external-api-0\" (UID: \"a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1\") " pod="openstack/glance-default-external-api-0" Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.610848 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmpcv\" (UniqueName: \"kubernetes.io/projected/920c406d-db29-4d20-b46b-9da06425cf90-kube-api-access-xmpcv\") pod \"dnsmasq-dns-77cf987597-m5pcd\" (UID: \"920c406d-db29-4d20-b46b-9da06425cf90\") " pod="openstack/dnsmasq-dns-77cf987597-m5pcd" Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.610893 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1-logs\") pod \"glance-default-external-api-0\" (UID: \"a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1\") " pod="openstack/glance-default-external-api-0" Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.610921 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1\") " pod="openstack/glance-default-external-api-0" Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.611039 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1\") " pod="openstack/glance-default-external-api-0" Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.611116 4772 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/920c406d-db29-4d20-b46b-9da06425cf90-config\") pod \"dnsmasq-dns-77cf987597-m5pcd\" (UID: \"920c406d-db29-4d20-b46b-9da06425cf90\") " pod="openstack/dnsmasq-dns-77cf987597-m5pcd" Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.611221 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/920c406d-db29-4d20-b46b-9da06425cf90-ovsdbserver-nb\") pod \"dnsmasq-dns-77cf987597-m5pcd\" (UID: \"920c406d-db29-4d20-b46b-9da06425cf90\") " pod="openstack/dnsmasq-dns-77cf987597-m5pcd" Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.611247 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/920c406d-db29-4d20-b46b-9da06425cf90-ovsdbserver-sb\") pod \"dnsmasq-dns-77cf987597-m5pcd\" (UID: \"920c406d-db29-4d20-b46b-9da06425cf90\") " pod="openstack/dnsmasq-dns-77cf987597-m5pcd" Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.611332 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1-ceph\") pod \"glance-default-external-api-0\" (UID: \"a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1\") " pod="openstack/glance-default-external-api-0" Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.611360 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1-config-data\") pod \"glance-default-external-api-0\" (UID: \"a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1\") " pod="openstack/glance-default-external-api-0" Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.611405 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1-scripts\") pod \"glance-default-external-api-0\" (UID: \"a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1\") " pod="openstack/glance-default-external-api-0" Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.611433 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/920c406d-db29-4d20-b46b-9da06425cf90-dns-svc\") pod \"dnsmasq-dns-77cf987597-m5pcd\" (UID: \"920c406d-db29-4d20-b46b-9da06425cf90\") " pod="openstack/dnsmasq-dns-77cf987597-m5pcd" Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.654034 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.655503 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.658690 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.668615 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.712528 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1-scripts\") pod \"glance-default-external-api-0\" (UID: \"a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1\") " pod="openstack/glance-default-external-api-0" Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.712595 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/920c406d-db29-4d20-b46b-9da06425cf90-dns-svc\") pod \"dnsmasq-dns-77cf987597-m5pcd\" (UID: \"920c406d-db29-4d20-b46b-9da06425cf90\") " pod="openstack/dnsmasq-dns-77cf987597-m5pcd" Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.712639 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42jnd\" (UniqueName: \"kubernetes.io/projected/a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1-kube-api-access-42jnd\") pod \"glance-default-external-api-0\" (UID: \"a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1\") " pod="openstack/glance-default-external-api-0" Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.712662 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmpcv\" (UniqueName: \"kubernetes.io/projected/920c406d-db29-4d20-b46b-9da06425cf90-kube-api-access-xmpcv\") pod \"dnsmasq-dns-77cf987597-m5pcd\" (UID: \"920c406d-db29-4d20-b46b-9da06425cf90\") " pod="openstack/dnsmasq-dns-77cf987597-m5pcd" Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.712684 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1-logs\") pod \"glance-default-external-api-0\" (UID: \"a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1\") " pod="openstack/glance-default-external-api-0" Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.712706 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1\") " pod="openstack/glance-default-external-api-0" Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.712772 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1\") " pod="openstack/glance-default-external-api-0" Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.712795 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/920c406d-db29-4d20-b46b-9da06425cf90-config\") pod \"dnsmasq-dns-77cf987597-m5pcd\" (UID: \"920c406d-db29-4d20-b46b-9da06425cf90\") " pod="openstack/dnsmasq-dns-77cf987597-m5pcd" Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 
12:09:31.712834 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/920c406d-db29-4d20-b46b-9da06425cf90-ovsdbserver-nb\") pod \"dnsmasq-dns-77cf987597-m5pcd\" (UID: \"920c406d-db29-4d20-b46b-9da06425cf90\") " pod="openstack/dnsmasq-dns-77cf987597-m5pcd" Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.712850 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/920c406d-db29-4d20-b46b-9da06425cf90-ovsdbserver-sb\") pod \"dnsmasq-dns-77cf987597-m5pcd\" (UID: \"920c406d-db29-4d20-b46b-9da06425cf90\") " pod="openstack/dnsmasq-dns-77cf987597-m5pcd" Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.712905 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1-ceph\") pod \"glance-default-external-api-0\" (UID: \"a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1\") " pod="openstack/glance-default-external-api-0" Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.712922 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1-config-data\") pod \"glance-default-external-api-0\" (UID: \"a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1\") " pod="openstack/glance-default-external-api-0" Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.713919 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1\") " pod="openstack/glance-default-external-api-0" Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.713997 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1-logs\") pod \"glance-default-external-api-0\" (UID: \"a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1\") " pod="openstack/glance-default-external-api-0" Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.714563 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/920c406d-db29-4d20-b46b-9da06425cf90-ovsdbserver-nb\") pod \"dnsmasq-dns-77cf987597-m5pcd\" (UID: \"920c406d-db29-4d20-b46b-9da06425cf90\") " pod="openstack/dnsmasq-dns-77cf987597-m5pcd" Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.714583 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/920c406d-db29-4d20-b46b-9da06425cf90-dns-svc\") pod \"dnsmasq-dns-77cf987597-m5pcd\" (UID: \"920c406d-db29-4d20-b46b-9da06425cf90\") " pod="openstack/dnsmasq-dns-77cf987597-m5pcd" Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.714615 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/920c406d-db29-4d20-b46b-9da06425cf90-ovsdbserver-sb\") pod \"dnsmasq-dns-77cf987597-m5pcd\" (UID: \"920c406d-db29-4d20-b46b-9da06425cf90\") " pod="openstack/dnsmasq-dns-77cf987597-m5pcd" Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.714663 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/920c406d-db29-4d20-b46b-9da06425cf90-config\") pod \"dnsmasq-dns-77cf987597-m5pcd\" (UID: \"920c406d-db29-4d20-b46b-9da06425cf90\") " pod="openstack/dnsmasq-dns-77cf987597-m5pcd" Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.721071 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1-config-data\") pod \"glance-default-external-api-0\" (UID: \"a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1\") " pod="openstack/glance-default-external-api-0" Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.722920 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1-scripts\") pod \"glance-default-external-api-0\" (UID: \"a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1\") " pod="openstack/glance-default-external-api-0" Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.723087 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1\") " pod="openstack/glance-default-external-api-0" Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.729468 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmpcv\" (UniqueName: \"kubernetes.io/projected/920c406d-db29-4d20-b46b-9da06425cf90-kube-api-access-xmpcv\") pod \"dnsmasq-dns-77cf987597-m5pcd\" (UID: \"920c406d-db29-4d20-b46b-9da06425cf90\") " pod="openstack/dnsmasq-dns-77cf987597-m5pcd" Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.730523 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1-ceph\") pod \"glance-default-external-api-0\" (UID: \"a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1\") " pod="openstack/glance-default-external-api-0" Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.733680 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42jnd\" (UniqueName: \"kubernetes.io/projected/a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1-kube-api-access-42jnd\") pod \"glance-default-external-api-0\" (UID: \"a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1\") " pod="openstack/glance-default-external-api-0" Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.814104 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/108c431a-43ae-438b-9156-0fffab405860-ceph\") pod \"glance-default-internal-api-0\" (UID: \"108c431a-43ae-438b-9156-0fffab405860\") " pod="openstack/glance-default-internal-api-0" Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.814164 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhh6m\" (UniqueName: \"kubernetes.io/projected/108c431a-43ae-438b-9156-0fffab405860-kube-api-access-dhh6m\") pod \"glance-default-internal-api-0\" (UID: \"108c431a-43ae-438b-9156-0fffab405860\") " pod="openstack/glance-default-internal-api-0" Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.814240 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/108c431a-43ae-438b-9156-0fffab405860-logs\") pod \"glance-default-internal-api-0\" (UID: \"108c431a-43ae-438b-9156-0fffab405860\") " pod="openstack/glance-default-internal-api-0" Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.814266 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/108c431a-43ae-438b-9156-0fffab405860-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"108c431a-43ae-438b-9156-0fffab405860\") " pod="openstack/glance-default-internal-api-0" Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.814289 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/108c431a-43ae-438b-9156-0fffab405860-config-data\") pod \"glance-default-internal-api-0\" (UID: \"108c431a-43ae-438b-9156-0fffab405860\") " pod="openstack/glance-default-internal-api-0" Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.814335 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/108c431a-43ae-438b-9156-0fffab405860-scripts\") pod \"glance-default-internal-api-0\" (UID: \"108c431a-43ae-438b-9156-0fffab405860\") " pod="openstack/glance-default-internal-api-0" Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.814354 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/108c431a-43ae-438b-9156-0fffab405860-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"108c431a-43ae-438b-9156-0fffab405860\") " pod="openstack/glance-default-internal-api-0" Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.839462 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.872041 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77cf987597-m5pcd" Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.915751 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhh6m\" (UniqueName: \"kubernetes.io/projected/108c431a-43ae-438b-9156-0fffab405860-kube-api-access-dhh6m\") pod \"glance-default-internal-api-0\" (UID: \"108c431a-43ae-438b-9156-0fffab405860\") " pod="openstack/glance-default-internal-api-0" Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.916289 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/108c431a-43ae-438b-9156-0fffab405860-logs\") pod \"glance-default-internal-api-0\" (UID: \"108c431a-43ae-438b-9156-0fffab405860\") " pod="openstack/glance-default-internal-api-0" Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.916341 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/108c431a-43ae-438b-9156-0fffab405860-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"108c431a-43ae-438b-9156-0fffab405860\") " pod="openstack/glance-default-internal-api-0" Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.916382 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/108c431a-43ae-438b-9156-0fffab405860-config-data\") pod \"glance-default-internal-api-0\" (UID: \"108c431a-43ae-438b-9156-0fffab405860\") " pod="openstack/glance-default-internal-api-0" Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.916503 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/108c431a-43ae-438b-9156-0fffab405860-scripts\") pod \"glance-default-internal-api-0\" (UID: \"108c431a-43ae-438b-9156-0fffab405860\") " pod="openstack/glance-default-internal-api-0" Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.916536 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/108c431a-43ae-438b-9156-0fffab405860-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"108c431a-43ae-438b-9156-0fffab405860\") " pod="openstack/glance-default-internal-api-0" Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.916563 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/108c431a-43ae-438b-9156-0fffab405860-ceph\") pod \"glance-default-internal-api-0\" (UID: \"108c431a-43ae-438b-9156-0fffab405860\") " pod="openstack/glance-default-internal-api-0" Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.916925 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/108c431a-43ae-438b-9156-0fffab405860-logs\") pod \"glance-default-internal-api-0\" (UID: \"108c431a-43ae-438b-9156-0fffab405860\") " pod="openstack/glance-default-internal-api-0" Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.917091 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/108c431a-43ae-438b-9156-0fffab405860-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"108c431a-43ae-438b-9156-0fffab405860\") " pod="openstack/glance-default-internal-api-0" Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.921647 4772 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/108c431a-43ae-438b-9156-0fffab405860-scripts\") pod \"glance-default-internal-api-0\" (UID: \"108c431a-43ae-438b-9156-0fffab405860\") " pod="openstack/glance-default-internal-api-0" Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.923724 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/108c431a-43ae-438b-9156-0fffab405860-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"108c431a-43ae-438b-9156-0fffab405860\") " pod="openstack/glance-default-internal-api-0" Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.923747 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/108c431a-43ae-438b-9156-0fffab405860-config-data\") pod \"glance-default-internal-api-0\" (UID: \"108c431a-43ae-438b-9156-0fffab405860\") " pod="openstack/glance-default-internal-api-0" Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.933537 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/108c431a-43ae-438b-9156-0fffab405860-ceph\") pod \"glance-default-internal-api-0\" (UID: \"108c431a-43ae-438b-9156-0fffab405860\") " pod="openstack/glance-default-internal-api-0" Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.934344 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhh6m\" (UniqueName: \"kubernetes.io/projected/108c431a-43ae-438b-9156-0fffab405860-kube-api-access-dhh6m\") pod \"glance-default-internal-api-0\" (UID: \"108c431a-43ae-438b-9156-0fffab405860\") " pod="openstack/glance-default-internal-api-0" Nov 22 12:09:31 crc kubenswrapper[4772]: I1122 12:09:31.972194 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 12:09:32 crc kubenswrapper[4772]: I1122 12:09:32.364636 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77cf987597-m5pcd"] Nov 22 12:09:32 crc kubenswrapper[4772]: W1122 12:09:32.367188 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod920c406d_db29_4d20_b46b_9da06425cf90.slice/crio-0ad624239e3e07e0a3766a3cf1c2e834df0463be7e9537f57b3da20ab6c6d059 WatchSource:0}: Error finding container 0ad624239e3e07e0a3766a3cf1c2e834df0463be7e9537f57b3da20ab6c6d059: Status 404 returned error can't find the container with id 0ad624239e3e07e0a3766a3cf1c2e834df0463be7e9537f57b3da20ab6c6d059 Nov 22 12:09:32 crc kubenswrapper[4772]: I1122 12:09:32.468249 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 12:09:32 crc kubenswrapper[4772]: I1122 12:09:32.562847 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 12:09:32 crc kubenswrapper[4772]: W1122 12:09:32.574163 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7ba73ff_bf2d_4dcd_bf53_45f09c2958f1.slice/crio-ec11b76919b7434c71cb4d3457b84edee088e85b9f45d1d7b00ab21c0d4f4ec6 WatchSource:0}: Error finding container ec11b76919b7434c71cb4d3457b84edee088e85b9f45d1d7b00ab21c0d4f4ec6: Status 404 returned error can't find the container with id ec11b76919b7434c71cb4d3457b84edee088e85b9f45d1d7b00ab21c0d4f4ec6 Nov 22 12:09:32 crc kubenswrapper[4772]: I1122 12:09:32.688019 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 12:09:32 crc kubenswrapper[4772]: W1122 12:09:32.713854 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod108c431a_43ae_438b_9156_0fffab405860.slice/crio-e2d441932d3664fed169852c5dbc949c38b98034a862db0b462c545a5951b906 WatchSource:0}: Error finding container e2d441932d3664fed169852c5dbc949c38b98034a862db0b462c545a5951b906: Status 404 returned error can't find the container with id e2d441932d3664fed169852c5dbc949c38b98034a862db0b462c545a5951b906 Nov 22 12:09:33 crc kubenswrapper[4772]: I1122 12:09:33.192413 4772 generic.go:334] "Generic (PLEG): container finished" podID="920c406d-db29-4d20-b46b-9da06425cf90" containerID="9f3f759b6bbf5538436b10990dd33e302095dcecaaeb247c5295b6d6b7b0000f" exitCode=0 Nov 22 12:09:33 crc kubenswrapper[4772]: I1122 12:09:33.192514 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77cf987597-m5pcd" event={"ID":"920c406d-db29-4d20-b46b-9da06425cf90","Type":"ContainerDied","Data":"9f3f759b6bbf5538436b10990dd33e302095dcecaaeb247c5295b6d6b7b0000f"} Nov 22 12:09:33 crc kubenswrapper[4772]: I1122 12:09:33.192567 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77cf987597-m5pcd" event={"ID":"920c406d-db29-4d20-b46b-9da06425cf90","Type":"ContainerStarted","Data":"0ad624239e3e07e0a3766a3cf1c2e834df0463be7e9537f57b3da20ab6c6d059"} Nov 22 12:09:33 crc kubenswrapper[4772]: I1122 12:09:33.228397 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"108c431a-43ae-438b-9156-0fffab405860","Type":"ContainerStarted","Data":"e2d441932d3664fed169852c5dbc949c38b98034a862db0b462c545a5951b906"} 
Nov 22 12:09:33 crc kubenswrapper[4772]: I1122 12:09:33.240699 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1","Type":"ContainerStarted","Data":"eab291ab38da6311f428b608dc1a51bd726e9429e2ff4c699449d3878f2b552b"} Nov 22 12:09:33 crc kubenswrapper[4772]: I1122 12:09:33.240747 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1","Type":"ContainerStarted","Data":"ec11b76919b7434c71cb4d3457b84edee088e85b9f45d1d7b00ab21c0d4f4ec6"} Nov 22 12:09:34 crc kubenswrapper[4772]: I1122 12:09:34.250412 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1" containerName="glance-log" containerID="cri-o://eab291ab38da6311f428b608dc1a51bd726e9429e2ff4c699449d3878f2b552b" gracePeriod=30 Nov 22 12:09:34 crc kubenswrapper[4772]: I1122 12:09:34.250450 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1" containerName="glance-httpd" containerID="cri-o://1eb085ca7a3073b2c49a807683bb3758dd1155cbbef6e6063b647c26ea58e111" gracePeriod=30 Nov 22 12:09:34 crc kubenswrapper[4772]: I1122 12:09:34.250299 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1","Type":"ContainerStarted","Data":"1eb085ca7a3073b2c49a807683bb3758dd1155cbbef6e6063b647c26ea58e111"} Nov 22 12:09:34 crc kubenswrapper[4772]: I1122 12:09:34.252119 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77cf987597-m5pcd" event={"ID":"920c406d-db29-4d20-b46b-9da06425cf90","Type":"ContainerStarted","Data":"6318a0b8d746cbe614e1f02a3b28bc144e4e756aa31d9e8d9dcbf0974e28288a"} Nov 22 12:09:34 crc kubenswrapper[4772]: I1122 12:09:34.253011 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-77cf987597-m5pcd" Nov 22 12:09:34 crc kubenswrapper[4772]: I1122 12:09:34.254686 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"108c431a-43ae-438b-9156-0fffab405860","Type":"ContainerStarted","Data":"90bdbd807fadc030ab936a5741e675701502214537a94f35bac259f0347ef2e5"} Nov 22 12:09:34 crc kubenswrapper[4772]: I1122 12:09:34.254744 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"108c431a-43ae-438b-9156-0fffab405860","Type":"ContainerStarted","Data":"2ed28dcabd6aaf448d0de19e002cc26479783e31cc536969e38970487457c0fa"} Nov 22 12:09:34 crc kubenswrapper[4772]: I1122 12:09:34.278306 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.278283245 podStartE2EDuration="3.278283245s" podCreationTimestamp="2025-11-22 12:09:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:09:34.26842343 +0000 UTC m=+5494.507867924" watchObservedRunningTime="2025-11-22 12:09:34.278283245 +0000 UTC m=+5494.517727739" Nov 22 12:09:34 crc kubenswrapper[4772]: I1122 12:09:34.300028 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-77cf987597-m5pcd" 
podStartSLOduration=3.299956315 podStartE2EDuration="3.299956315s" podCreationTimestamp="2025-11-22 12:09:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:09:34.298796336 +0000 UTC m=+5494.538240860" watchObservedRunningTime="2025-11-22 12:09:34.299956315 +0000 UTC m=+5494.539400809" Nov 22 12:09:34 crc kubenswrapper[4772]: I1122 12:09:34.323412 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.323383368 podStartE2EDuration="3.323383368s" podCreationTimestamp="2025-11-22 12:09:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:09:34.317410969 +0000 UTC m=+5494.556855483" watchObservedRunningTime="2025-11-22 12:09:34.323383368 +0000 UTC m=+5494.562827862" Nov 22 12:09:34 crc kubenswrapper[4772]: I1122 12:09:34.348363 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 12:09:34 crc kubenswrapper[4772]: I1122 12:09:34.874928 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 22 12:09:34 crc kubenswrapper[4772]: I1122 12:09:34.985211 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1-logs\") pod \"a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1\" (UID: \"a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1\") " Nov 22 12:09:34 crc kubenswrapper[4772]: I1122 12:09:34.985272 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1-scripts\") pod \"a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1\" (UID: \"a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1\") " Nov 22 12:09:34 crc kubenswrapper[4772]: I1122 12:09:34.985335 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42jnd\" (UniqueName: \"kubernetes.io/projected/a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1-kube-api-access-42jnd\") pod \"a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1\" (UID: \"a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1\") " Nov 22 12:09:34 crc kubenswrapper[4772]: I1122 12:09:34.985400 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1-combined-ca-bundle\") pod \"a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1\" (UID: \"a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1\") " Nov 22 12:09:34 crc kubenswrapper[4772]: I1122 12:09:34.985423 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1-httpd-run\") pod \"a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1\" (UID: \"a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1\") " Nov 22 12:09:34 crc kubenswrapper[4772]: I1122 12:09:34.985485 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1-ceph\") pod \"a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1\" (UID: \"a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1\") " Nov 22 12:09:34 crc kubenswrapper[4772]: I1122 12:09:34.985528 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1-config-data\") pod \"a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1\" (UID: \"a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1\") " Nov 22 12:09:34 crc kubenswrapper[4772]: I1122 12:09:34.985780 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1-logs" (OuterVolumeSpecName: "logs") pod "a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1" (UID: "a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 12:09:34 crc kubenswrapper[4772]: I1122 12:09:34.985808 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1" (UID: "a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 12:09:34 crc kubenswrapper[4772]: I1122 12:09:34.986178 4772 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1-logs\") on node \"crc\" DevicePath \"\"" Nov 22 12:09:34 crc kubenswrapper[4772]: I1122 12:09:34.986197 4772 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 22 12:09:34 crc kubenswrapper[4772]: I1122 12:09:34.992599 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1-scripts" (OuterVolumeSpecName: "scripts") pod "a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1" (UID: "a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:09:34 crc kubenswrapper[4772]: I1122 12:09:34.996373 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1-ceph" (OuterVolumeSpecName: "ceph") pod "a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1" (UID: "a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:09:34 crc kubenswrapper[4772]: I1122 12:09:34.997614 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1-kube-api-access-42jnd" (OuterVolumeSpecName: "kube-api-access-42jnd") pod "a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1" (UID: "a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1"). InnerVolumeSpecName "kube-api-access-42jnd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:09:35 crc kubenswrapper[4772]: I1122 12:09:35.031980 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1" (UID: "a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:09:35 crc kubenswrapper[4772]: I1122 12:09:35.033594 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1-config-data" (OuterVolumeSpecName: "config-data") pod "a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1" (UID: "a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:09:35 crc kubenswrapper[4772]: I1122 12:09:35.088276 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 12:09:35 crc kubenswrapper[4772]: I1122 12:09:35.088313 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42jnd\" (UniqueName: \"kubernetes.io/projected/a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1-kube-api-access-42jnd\") on node \"crc\" DevicePath \"\"" Nov 22 12:09:35 crc kubenswrapper[4772]: I1122 12:09:35.088329 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 12:09:35 crc kubenswrapper[4772]: I1122 12:09:35.088341 4772 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1-ceph\") on node \"crc\" DevicePath \"\"" Nov 22 12:09:35 crc kubenswrapper[4772]: I1122 12:09:35.088352 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 12:09:35 crc kubenswrapper[4772]: I1122 12:09:35.264403 4772 generic.go:334] "Generic (PLEG): container finished" podID="a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1" containerID="1eb085ca7a3073b2c49a807683bb3758dd1155cbbef6e6063b647c26ea58e111" exitCode=0 Nov 22 12:09:35 crc kubenswrapper[4772]: I1122 12:09:35.264437 4772 generic.go:334] "Generic (PLEG): container finished" podID="a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1" containerID="eab291ab38da6311f428b608dc1a51bd726e9429e2ff4c699449d3878f2b552b" exitCode=143 Nov 22 12:09:35 crc kubenswrapper[4772]: I1122 12:09:35.264474 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1","Type":"ContainerDied","Data":"1eb085ca7a3073b2c49a807683bb3758dd1155cbbef6e6063b647c26ea58e111"} Nov 22 12:09:35 crc kubenswrapper[4772]: I1122 12:09:35.264542 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1","Type":"ContainerDied","Data":"eab291ab38da6311f428b608dc1a51bd726e9429e2ff4c699449d3878f2b552b"} Nov 22 12:09:35 crc kubenswrapper[4772]: I1122 12:09:35.264589 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1","Type":"ContainerDied","Data":"ec11b76919b7434c71cb4d3457b84edee088e85b9f45d1d7b00ab21c0d4f4ec6"} Nov 22 12:09:35 crc kubenswrapper[4772]: I1122 12:09:35.264611 4772 scope.go:117] "RemoveContainer" containerID="1eb085ca7a3073b2c49a807683bb3758dd1155cbbef6e6063b647c26ea58e111" Nov 22 12:09:35 crc kubenswrapper[4772]: I1122 12:09:35.264495 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 22 12:09:35 crc kubenswrapper[4772]: I1122 12:09:35.298695 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 12:09:35 crc kubenswrapper[4772]: I1122 12:09:35.301678 4772 scope.go:117] "RemoveContainer" containerID="eab291ab38da6311f428b608dc1a51bd726e9429e2ff4c699449d3878f2b552b" Nov 22 12:09:35 crc kubenswrapper[4772]: I1122 12:09:35.304347 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 12:09:35 crc kubenswrapper[4772]: I1122 12:09:35.324412 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 12:09:35 crc kubenswrapper[4772]: E1122 12:09:35.324843 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1" containerName="glance-httpd" Nov 22 12:09:35 crc kubenswrapper[4772]: I1122 12:09:35.324866 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1" containerName="glance-httpd" Nov 22 12:09:35 crc kubenswrapper[4772]: E1122 12:09:35.324893 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1" containerName="glance-log" Nov 22 12:09:35 crc kubenswrapper[4772]: I1122 12:09:35.324900 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1" containerName="glance-log" Nov 22 12:09:35 crc kubenswrapper[4772]: I1122 12:09:35.325086 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1" containerName="glance-log" Nov 22 12:09:35 crc kubenswrapper[4772]: I1122 12:09:35.325096 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1" containerName="glance-httpd" Nov 22 12:09:35 crc kubenswrapper[4772]: I1122 12:09:35.334402 4772 scope.go:117] "RemoveContainer" containerID="1eb085ca7a3073b2c49a807683bb3758dd1155cbbef6e6063b647c26ea58e111" Nov 22 12:09:35 crc kubenswrapper[4772]: E1122 12:09:35.335015 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1eb085ca7a3073b2c49a807683bb3758dd1155cbbef6e6063b647c26ea58e111\": container with ID starting with 1eb085ca7a3073b2c49a807683bb3758dd1155cbbef6e6063b647c26ea58e111 not found: ID does not exist" containerID="1eb085ca7a3073b2c49a807683bb3758dd1155cbbef6e6063b647c26ea58e111" Nov 22 12:09:35 crc kubenswrapper[4772]: I1122 12:09:35.335172 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1eb085ca7a3073b2c49a807683bb3758dd1155cbbef6e6063b647c26ea58e111"} err="failed to get container status \"1eb085ca7a3073b2c49a807683bb3758dd1155cbbef6e6063b647c26ea58e111\": rpc error: code = NotFound desc = could not find container \"1eb085ca7a3073b2c49a807683bb3758dd1155cbbef6e6063b647c26ea58e111\": container with ID starting with 1eb085ca7a3073b2c49a807683bb3758dd1155cbbef6e6063b647c26ea58e111 not found: ID does not exist" Nov 22 12:09:35 crc kubenswrapper[4772]: I1122 12:09:35.335221 4772 scope.go:117] "RemoveContainer" containerID="eab291ab38da6311f428b608dc1a51bd726e9429e2ff4c699449d3878f2b552b" Nov 22 12:09:35 crc kubenswrapper[4772]: E1122 12:09:35.335611 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"eab291ab38da6311f428b608dc1a51bd726e9429e2ff4c699449d3878f2b552b\": container with ID starting with eab291ab38da6311f428b608dc1a51bd726e9429e2ff4c699449d3878f2b552b not found: ID does not exist" containerID="eab291ab38da6311f428b608dc1a51bd726e9429e2ff4c699449d3878f2b552b" Nov 22 12:09:35 crc kubenswrapper[4772]: I1122 12:09:35.335657 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eab291ab38da6311f428b608dc1a51bd726e9429e2ff4c699449d3878f2b552b"} err="failed to get container status \"eab291ab38da6311f428b608dc1a51bd726e9429e2ff4c699449d3878f2b552b\": rpc error: code = NotFound desc = could not find container \"eab291ab38da6311f428b608dc1a51bd726e9429e2ff4c699449d3878f2b552b\": container with ID starting with eab291ab38da6311f428b608dc1a51bd726e9429e2ff4c699449d3878f2b552b not found: ID does not exist" Nov 22 12:09:35 crc kubenswrapper[4772]: I1122 12:09:35.335690 4772 scope.go:117] "RemoveContainer" containerID="1eb085ca7a3073b2c49a807683bb3758dd1155cbbef6e6063b647c26ea58e111" Nov 22 12:09:35 crc kubenswrapper[4772]: I1122 12:09:35.338674 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1eb085ca7a3073b2c49a807683bb3758dd1155cbbef6e6063b647c26ea58e111"} err="failed to get container status \"1eb085ca7a3073b2c49a807683bb3758dd1155cbbef6e6063b647c26ea58e111\": rpc error: code = NotFound desc = could not find container \"1eb085ca7a3073b2c49a807683bb3758dd1155cbbef6e6063b647c26ea58e111\": container with ID starting with 1eb085ca7a3073b2c49a807683bb3758dd1155cbbef6e6063b647c26ea58e111 not found: ID does not exist" Nov 22 12:09:35 crc kubenswrapper[4772]: I1122 12:09:35.338706 4772 scope.go:117] "RemoveContainer" containerID="eab291ab38da6311f428b608dc1a51bd726e9429e2ff4c699449d3878f2b552b" Nov 22 12:09:35 crc kubenswrapper[4772]: I1122 12:09:35.338994 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eab291ab38da6311f428b608dc1a51bd726e9429e2ff4c699449d3878f2b552b"} err="failed to get container status \"eab291ab38da6311f428b608dc1a51bd726e9429e2ff4c699449d3878f2b552b\": rpc error: code = NotFound desc = could not find container \"eab291ab38da6311f428b608dc1a51bd726e9429e2ff4c699449d3878f2b552b\": container with ID starting with eab291ab38da6311f428b608dc1a51bd726e9429e2ff4c699449d3878f2b552b not found: ID does not exist" Nov 22 12:09:35 crc kubenswrapper[4772]: I1122 12:09:35.339569 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 22 12:09:35 crc kubenswrapper[4772]: I1122 12:09:35.349658 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 22 12:09:35 crc kubenswrapper[4772]: I1122 12:09:35.351462 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 12:09:35 crc kubenswrapper[4772]: I1122 12:09:35.396295 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2af13a2-6cfc-469f-a829-b2cecbfd7129-config-data\") pod \"glance-default-external-api-0\" (UID: \"d2af13a2-6cfc-469f-a829-b2cecbfd7129\") " pod="openstack/glance-default-external-api-0" Nov 22 12:09:35 crc kubenswrapper[4772]: I1122 12:09:35.396355 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2af13a2-6cfc-469f-a829-b2cecbfd7129-logs\") pod \"glance-default-external-api-0\" (UID: \"d2af13a2-6cfc-469f-a829-b2cecbfd7129\") " pod="openstack/glance-default-external-api-0" Nov 22 12:09:35 crc kubenswrapper[4772]: I1122 12:09:35.396454 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfq5r\" (UniqueName: \"kubernetes.io/projected/d2af13a2-6cfc-469f-a829-b2cecbfd7129-kube-api-access-qfq5r\") pod \"glance-default-external-api-0\" (UID: \"d2af13a2-6cfc-469f-a829-b2cecbfd7129\") " pod="openstack/glance-default-external-api-0" Nov 22 12:09:35 crc kubenswrapper[4772]: I1122 12:09:35.396515 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2af13a2-6cfc-469f-a829-b2cecbfd7129-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d2af13a2-6cfc-469f-a829-b2cecbfd7129\") " pod="openstack/glance-default-external-api-0" Nov 22 12:09:35 crc kubenswrapper[4772]: I1122 12:09:35.396549 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2af13a2-6cfc-469f-a829-b2cecbfd7129-scripts\") pod \"glance-default-external-api-0\" (UID: \"d2af13a2-6cfc-469f-a829-b2cecbfd7129\") " pod="openstack/glance-default-external-api-0" Nov 22 12:09:35 crc kubenswrapper[4772]: I1122 12:09:35.396572 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d2af13a2-6cfc-469f-a829-b2cecbfd7129-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d2af13a2-6cfc-469f-a829-b2cecbfd7129\") " pod="openstack/glance-default-external-api-0" Nov 22 12:09:35 crc kubenswrapper[4772]: I1122 12:09:35.396668 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d2af13a2-6cfc-469f-a829-b2cecbfd7129-ceph\") pod \"glance-default-external-api-0\" (UID: \"d2af13a2-6cfc-469f-a829-b2cecbfd7129\") " pod="openstack/glance-default-external-api-0" Nov 22 12:09:35 crc kubenswrapper[4772]: I1122 12:09:35.447317 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1" path="/var/lib/kubelet/pods/a7ba73ff-bf2d-4dcd-bf53-45f09c2958f1/volumes" Nov 22 12:09:35 crc kubenswrapper[4772]: I1122 12:09:35.500759 4772 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2af13a2-6cfc-469f-a829-b2cecbfd7129-config-data\") pod \"glance-default-external-api-0\" (UID: \"d2af13a2-6cfc-469f-a829-b2cecbfd7129\") " pod="openstack/glance-default-external-api-0" Nov 22 12:09:35 crc kubenswrapper[4772]: I1122 12:09:35.500801 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2af13a2-6cfc-469f-a829-b2cecbfd7129-logs\") pod \"glance-default-external-api-0\" (UID: \"d2af13a2-6cfc-469f-a829-b2cecbfd7129\") " pod="openstack/glance-default-external-api-0" Nov 22 12:09:35 crc kubenswrapper[4772]: I1122 12:09:35.500878 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfq5r\" (UniqueName: \"kubernetes.io/projected/d2af13a2-6cfc-469f-a829-b2cecbfd7129-kube-api-access-qfq5r\") pod \"glance-default-external-api-0\" (UID: \"d2af13a2-6cfc-469f-a829-b2cecbfd7129\") " pod="openstack/glance-default-external-api-0" Nov 22 12:09:35 crc kubenswrapper[4772]: I1122 12:09:35.500904 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2af13a2-6cfc-469f-a829-b2cecbfd7129-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d2af13a2-6cfc-469f-a829-b2cecbfd7129\") " pod="openstack/glance-default-external-api-0" Nov 22 12:09:35 crc kubenswrapper[4772]: I1122 12:09:35.500931 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2af13a2-6cfc-469f-a829-b2cecbfd7129-scripts\") pod \"glance-default-external-api-0\" (UID: \"d2af13a2-6cfc-469f-a829-b2cecbfd7129\") " pod="openstack/glance-default-external-api-0" Nov 22 12:09:35 crc kubenswrapper[4772]: I1122 12:09:35.500949 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d2af13a2-6cfc-469f-a829-b2cecbfd7129-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d2af13a2-6cfc-469f-a829-b2cecbfd7129\") " pod="openstack/glance-default-external-api-0" Nov 22 12:09:35 crc kubenswrapper[4772]: I1122 12:09:35.500992 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d2af13a2-6cfc-469f-a829-b2cecbfd7129-ceph\") pod \"glance-default-external-api-0\" (UID: \"d2af13a2-6cfc-469f-a829-b2cecbfd7129\") " pod="openstack/glance-default-external-api-0" Nov 22 12:09:35 crc kubenswrapper[4772]: I1122 12:09:35.502838 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2af13a2-6cfc-469f-a829-b2cecbfd7129-logs\") pod \"glance-default-external-api-0\" (UID: \"d2af13a2-6cfc-469f-a829-b2cecbfd7129\") " pod="openstack/glance-default-external-api-0" Nov 22 12:09:35 crc kubenswrapper[4772]: I1122 12:09:35.504341 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d2af13a2-6cfc-469f-a829-b2cecbfd7129-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d2af13a2-6cfc-469f-a829-b2cecbfd7129\") " pod="openstack/glance-default-external-api-0" Nov 22 12:09:35 crc kubenswrapper[4772]: I1122 12:09:35.505617 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: 
\"kubernetes.io/projected/d2af13a2-6cfc-469f-a829-b2cecbfd7129-ceph\") pod \"glance-default-external-api-0\" (UID: \"d2af13a2-6cfc-469f-a829-b2cecbfd7129\") " pod="openstack/glance-default-external-api-0" Nov 22 12:09:35 crc kubenswrapper[4772]: I1122 12:09:35.507177 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2af13a2-6cfc-469f-a829-b2cecbfd7129-scripts\") pod \"glance-default-external-api-0\" (UID: \"d2af13a2-6cfc-469f-a829-b2cecbfd7129\") " pod="openstack/glance-default-external-api-0" Nov 22 12:09:35 crc kubenswrapper[4772]: I1122 12:09:35.507808 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2af13a2-6cfc-469f-a829-b2cecbfd7129-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d2af13a2-6cfc-469f-a829-b2cecbfd7129\") " pod="openstack/glance-default-external-api-0" Nov 22 12:09:35 crc kubenswrapper[4772]: I1122 12:09:35.510121 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2af13a2-6cfc-469f-a829-b2cecbfd7129-config-data\") pod \"glance-default-external-api-0\" (UID: \"d2af13a2-6cfc-469f-a829-b2cecbfd7129\") " pod="openstack/glance-default-external-api-0" Nov 22 12:09:35 crc kubenswrapper[4772]: I1122 12:09:35.540909 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfq5r\" (UniqueName: \"kubernetes.io/projected/d2af13a2-6cfc-469f-a829-b2cecbfd7129-kube-api-access-qfq5r\") pod \"glance-default-external-api-0\" (UID: \"d2af13a2-6cfc-469f-a829-b2cecbfd7129\") " pod="openstack/glance-default-external-api-0" Nov 22 12:09:35 crc kubenswrapper[4772]: I1122 12:09:35.655137 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 22 12:09:36 crc kubenswrapper[4772]: W1122 12:09:36.210490 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2af13a2_6cfc_469f_a829_b2cecbfd7129.slice/crio-c1e1e8072f3ae7ed10e651d41cd71a2bd4807791936532ac034438ebbfbaf6e9 WatchSource:0}: Error finding container c1e1e8072f3ae7ed10e651d41cd71a2bd4807791936532ac034438ebbfbaf6e9: Status 404 returned error can't find the container with id c1e1e8072f3ae7ed10e651d41cd71a2bd4807791936532ac034438ebbfbaf6e9 Nov 22 12:09:36 crc kubenswrapper[4772]: I1122 12:09:36.211075 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 12:09:36 crc kubenswrapper[4772]: I1122 12:09:36.278424 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d2af13a2-6cfc-469f-a829-b2cecbfd7129","Type":"ContainerStarted","Data":"c1e1e8072f3ae7ed10e651d41cd71a2bd4807791936532ac034438ebbfbaf6e9"} Nov 22 12:09:36 crc kubenswrapper[4772]: I1122 12:09:36.279099 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="108c431a-43ae-438b-9156-0fffab405860" containerName="glance-log" containerID="cri-o://2ed28dcabd6aaf448d0de19e002cc26479783e31cc536969e38970487457c0fa" gracePeriod=30 Nov 22 12:09:36 crc kubenswrapper[4772]: I1122 12:09:36.279226 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="108c431a-43ae-438b-9156-0fffab405860" containerName="glance-httpd" containerID="cri-o://90bdbd807fadc030ab936a5741e675701502214537a94f35bac259f0347ef2e5" gracePeriod=30 Nov 22 12:09:36 crc kubenswrapper[4772]: I1122 12:09:36.967799 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 12:09:37 crc kubenswrapper[4772]: I1122 12:09:37.037070 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/108c431a-43ae-438b-9156-0fffab405860-httpd-run\") pod \"108c431a-43ae-438b-9156-0fffab405860\" (UID: \"108c431a-43ae-438b-9156-0fffab405860\") " Nov 22 12:09:37 crc kubenswrapper[4772]: I1122 12:09:37.037136 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/108c431a-43ae-438b-9156-0fffab405860-config-data\") pod \"108c431a-43ae-438b-9156-0fffab405860\" (UID: \"108c431a-43ae-438b-9156-0fffab405860\") " Nov 22 12:09:37 crc kubenswrapper[4772]: I1122 12:09:37.037165 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/108c431a-43ae-438b-9156-0fffab405860-logs\") pod \"108c431a-43ae-438b-9156-0fffab405860\" (UID: \"108c431a-43ae-438b-9156-0fffab405860\") " Nov 22 12:09:37 crc kubenswrapper[4772]: I1122 12:09:37.037206 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/108c431a-43ae-438b-9156-0fffab405860-ceph\") pod \"108c431a-43ae-438b-9156-0fffab405860\" (UID: \"108c431a-43ae-438b-9156-0fffab405860\") " Nov 22 12:09:37 crc kubenswrapper[4772]: I1122 12:09:37.037290 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/108c431a-43ae-438b-9156-0fffab405860-combined-ca-bundle\") pod \"108c431a-43ae-438b-9156-0fffab405860\" (UID: \"108c431a-43ae-438b-9156-0fffab405860\") " Nov 22 12:09:37 crc kubenswrapper[4772]: I1122 12:09:37.037359 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhh6m\" (UniqueName: \"kubernetes.io/projected/108c431a-43ae-438b-9156-0fffab405860-kube-api-access-dhh6m\") pod \"108c431a-43ae-438b-9156-0fffab405860\" (UID: \"108c431a-43ae-438b-9156-0fffab405860\") " Nov 22 12:09:37 crc kubenswrapper[4772]: I1122 12:09:37.037414 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/108c431a-43ae-438b-9156-0fffab405860-scripts\") pod \"108c431a-43ae-438b-9156-0fffab405860\" (UID: \"108c431a-43ae-438b-9156-0fffab405860\") " Nov 22 12:09:37 crc kubenswrapper[4772]: I1122 12:09:37.038286 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/108c431a-43ae-438b-9156-0fffab405860-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "108c431a-43ae-438b-9156-0fffab405860" (UID: "108c431a-43ae-438b-9156-0fffab405860"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 12:09:37 crc kubenswrapper[4772]: I1122 12:09:37.038587 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/108c431a-43ae-438b-9156-0fffab405860-logs" (OuterVolumeSpecName: "logs") pod "108c431a-43ae-438b-9156-0fffab405860" (UID: "108c431a-43ae-438b-9156-0fffab405860"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 12:09:37 crc kubenswrapper[4772]: I1122 12:09:37.042253 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/108c431a-43ae-438b-9156-0fffab405860-kube-api-access-dhh6m" (OuterVolumeSpecName: "kube-api-access-dhh6m") pod "108c431a-43ae-438b-9156-0fffab405860" (UID: "108c431a-43ae-438b-9156-0fffab405860"). InnerVolumeSpecName "kube-api-access-dhh6m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:09:37 crc kubenswrapper[4772]: I1122 12:09:37.042455 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/108c431a-43ae-438b-9156-0fffab405860-ceph" (OuterVolumeSpecName: "ceph") pod "108c431a-43ae-438b-9156-0fffab405860" (UID: "108c431a-43ae-438b-9156-0fffab405860"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:09:37 crc kubenswrapper[4772]: I1122 12:09:37.045421 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/108c431a-43ae-438b-9156-0fffab405860-scripts" (OuterVolumeSpecName: "scripts") pod "108c431a-43ae-438b-9156-0fffab405860" (UID: "108c431a-43ae-438b-9156-0fffab405860"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:09:37 crc kubenswrapper[4772]: I1122 12:09:37.065738 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/108c431a-43ae-438b-9156-0fffab405860-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "108c431a-43ae-438b-9156-0fffab405860" (UID: "108c431a-43ae-438b-9156-0fffab405860"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:09:37 crc kubenswrapper[4772]: I1122 12:09:37.089447 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/108c431a-43ae-438b-9156-0fffab405860-config-data" (OuterVolumeSpecName: "config-data") pod "108c431a-43ae-438b-9156-0fffab405860" (UID: "108c431a-43ae-438b-9156-0fffab405860"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:09:37 crc kubenswrapper[4772]: I1122 12:09:37.139337 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhh6m\" (UniqueName: \"kubernetes.io/projected/108c431a-43ae-438b-9156-0fffab405860-kube-api-access-dhh6m\") on node \"crc\" DevicePath \"\"" Nov 22 12:09:37 crc kubenswrapper[4772]: I1122 12:09:37.139364 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/108c431a-43ae-438b-9156-0fffab405860-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 12:09:37 crc kubenswrapper[4772]: I1122 12:09:37.139375 4772 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/108c431a-43ae-438b-9156-0fffab405860-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 22 12:09:37 crc kubenswrapper[4772]: I1122 12:09:37.139386 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/108c431a-43ae-438b-9156-0fffab405860-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 12:09:37 crc kubenswrapper[4772]: I1122 12:09:37.139395 4772 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/108c431a-43ae-438b-9156-0fffab405860-logs\") on node \"crc\" DevicePath \"\"" Nov 22 12:09:37 crc kubenswrapper[4772]: I1122 12:09:37.139403 4772 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/108c431a-43ae-438b-9156-0fffab405860-ceph\") on node \"crc\" DevicePath \"\"" Nov 22 12:09:37 crc kubenswrapper[4772]: I1122 12:09:37.139411 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/108c431a-43ae-438b-9156-0fffab405860-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 12:09:37 crc kubenswrapper[4772]: I1122 12:09:37.291398 4772 generic.go:334] "Generic (PLEG): container finished" podID="108c431a-43ae-438b-9156-0fffab405860" containerID="90bdbd807fadc030ab936a5741e675701502214537a94f35bac259f0347ef2e5" exitCode=0 Nov 22 12:09:37 crc kubenswrapper[4772]: I1122 12:09:37.291432 4772 generic.go:334] "Generic (PLEG): container finished" podID="108c431a-43ae-438b-9156-0fffab405860" containerID="2ed28dcabd6aaf448d0de19e002cc26479783e31cc536969e38970487457c0fa" exitCode=143 Nov 22 12:09:37 crc kubenswrapper[4772]: I1122 12:09:37.291492 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"108c431a-43ae-438b-9156-0fffab405860","Type":"ContainerDied","Data":"90bdbd807fadc030ab936a5741e675701502214537a94f35bac259f0347ef2e5"} Nov 22 12:09:37 crc kubenswrapper[4772]: I1122 12:09:37.291521 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"108c431a-43ae-438b-9156-0fffab405860","Type":"ContainerDied","Data":"2ed28dcabd6aaf448d0de19e002cc26479783e31cc536969e38970487457c0fa"} Nov 22 12:09:37 crc kubenswrapper[4772]: I1122 12:09:37.291531 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"108c431a-43ae-438b-9156-0fffab405860","Type":"ContainerDied","Data":"e2d441932d3664fed169852c5dbc949c38b98034a862db0b462c545a5951b906"} Nov 22 12:09:37 crc kubenswrapper[4772]: I1122 12:09:37.291547 4772 scope.go:117] "RemoveContainer" containerID="90bdbd807fadc030ab936a5741e675701502214537a94f35bac259f0347ef2e5" Nov 22 12:09:37 crc kubenswrapper[4772]: I1122 
12:09:37.291700 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 12:09:37 crc kubenswrapper[4772]: I1122 12:09:37.297695 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d2af13a2-6cfc-469f-a829-b2cecbfd7129","Type":"ContainerStarted","Data":"e3090aafb8ce947930a69231fc5c63e9272c4052f297fd0b9d3de9ccbf57f082"} Nov 22 12:09:37 crc kubenswrapper[4772]: I1122 12:09:37.325896 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 12:09:37 crc kubenswrapper[4772]: I1122 12:09:37.330137 4772 scope.go:117] "RemoveContainer" containerID="2ed28dcabd6aaf448d0de19e002cc26479783e31cc536969e38970487457c0fa" Nov 22 12:09:37 crc kubenswrapper[4772]: I1122 12:09:37.333710 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 12:09:37 crc kubenswrapper[4772]: I1122 12:09:37.360433 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 12:09:37 crc kubenswrapper[4772]: E1122 12:09:37.361018 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="108c431a-43ae-438b-9156-0fffab405860" containerName="glance-httpd" Nov 22 12:09:37 crc kubenswrapper[4772]: I1122 12:09:37.361054 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="108c431a-43ae-438b-9156-0fffab405860" containerName="glance-httpd" Nov 22 12:09:37 crc kubenswrapper[4772]: E1122 12:09:37.361092 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="108c431a-43ae-438b-9156-0fffab405860" containerName="glance-log" Nov 22 12:09:37 crc kubenswrapper[4772]: I1122 12:09:37.361101 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="108c431a-43ae-438b-9156-0fffab405860" containerName="glance-log" Nov 22 12:09:37 crc kubenswrapper[4772]: I1122 12:09:37.361451 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="108c431a-43ae-438b-9156-0fffab405860" containerName="glance-httpd" Nov 22 12:09:37 crc kubenswrapper[4772]: I1122 12:09:37.361483 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="108c431a-43ae-438b-9156-0fffab405860" containerName="glance-log" Nov 22 12:09:37 crc kubenswrapper[4772]: I1122 12:09:37.363549 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 12:09:37 crc kubenswrapper[4772]: I1122 12:09:37.369366 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 12:09:37 crc kubenswrapper[4772]: I1122 12:09:37.374292 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 22 12:09:37 crc kubenswrapper[4772]: I1122 12:09:37.399640 4772 scope.go:117] "RemoveContainer" containerID="90bdbd807fadc030ab936a5741e675701502214537a94f35bac259f0347ef2e5" Nov 22 12:09:37 crc kubenswrapper[4772]: E1122 12:09:37.405013 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90bdbd807fadc030ab936a5741e675701502214537a94f35bac259f0347ef2e5\": container with ID starting with 90bdbd807fadc030ab936a5741e675701502214537a94f35bac259f0347ef2e5 not found: ID does not exist" containerID="90bdbd807fadc030ab936a5741e675701502214537a94f35bac259f0347ef2e5" Nov 22 12:09:37 crc kubenswrapper[4772]: I1122 12:09:37.405153 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90bdbd807fadc030ab936a5741e675701502214537a94f35bac259f0347ef2e5"} err="failed to get container status \"90bdbd807fadc030ab936a5741e675701502214537a94f35bac259f0347ef2e5\": rpc error: code = NotFound desc = could not find container \"90bdbd807fadc030ab936a5741e675701502214537a94f35bac259f0347ef2e5\": container with ID starting with 90bdbd807fadc030ab936a5741e675701502214537a94f35bac259f0347ef2e5 not found: ID does not exist" Nov 22 12:09:37 crc kubenswrapper[4772]: I1122 12:09:37.405217 4772 scope.go:117] "RemoveContainer" containerID="2ed28dcabd6aaf448d0de19e002cc26479783e31cc536969e38970487457c0fa" Nov 22 12:09:37 crc kubenswrapper[4772]: E1122 12:09:37.405748 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ed28dcabd6aaf448d0de19e002cc26479783e31cc536969e38970487457c0fa\": container with ID starting with 2ed28dcabd6aaf448d0de19e002cc26479783e31cc536969e38970487457c0fa not found: ID does not exist" containerID="2ed28dcabd6aaf448d0de19e002cc26479783e31cc536969e38970487457c0fa" Nov 22 12:09:37 crc kubenswrapper[4772]: I1122 12:09:37.405776 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ed28dcabd6aaf448d0de19e002cc26479783e31cc536969e38970487457c0fa"} err="failed to get container status \"2ed28dcabd6aaf448d0de19e002cc26479783e31cc536969e38970487457c0fa\": rpc error: code = NotFound desc = could not find container \"2ed28dcabd6aaf448d0de19e002cc26479783e31cc536969e38970487457c0fa\": container with ID starting with 2ed28dcabd6aaf448d0de19e002cc26479783e31cc536969e38970487457c0fa not found: ID does not exist" Nov 22 12:09:37 crc kubenswrapper[4772]: I1122 12:09:37.405821 4772 scope.go:117] "RemoveContainer" containerID="90bdbd807fadc030ab936a5741e675701502214537a94f35bac259f0347ef2e5" Nov 22 12:09:37 crc kubenswrapper[4772]: I1122 12:09:37.406163 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90bdbd807fadc030ab936a5741e675701502214537a94f35bac259f0347ef2e5"} err="failed to get container status \"90bdbd807fadc030ab936a5741e675701502214537a94f35bac259f0347ef2e5\": rpc error: code = NotFound desc = could not find container \"90bdbd807fadc030ab936a5741e675701502214537a94f35bac259f0347ef2e5\": container with ID 
starting with 90bdbd807fadc030ab936a5741e675701502214537a94f35bac259f0347ef2e5 not found: ID does not exist" Nov 22 12:09:37 crc kubenswrapper[4772]: I1122 12:09:37.406191 4772 scope.go:117] "RemoveContainer" containerID="2ed28dcabd6aaf448d0de19e002cc26479783e31cc536969e38970487457c0fa" Nov 22 12:09:37 crc kubenswrapper[4772]: I1122 12:09:37.406460 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ed28dcabd6aaf448d0de19e002cc26479783e31cc536969e38970487457c0fa"} err="failed to get container status \"2ed28dcabd6aaf448d0de19e002cc26479783e31cc536969e38970487457c0fa\": rpc error: code = NotFound desc = could not find container \"2ed28dcabd6aaf448d0de19e002cc26479783e31cc536969e38970487457c0fa\": container with ID starting with 2ed28dcabd6aaf448d0de19e002cc26479783e31cc536969e38970487457c0fa not found: ID does not exist" Nov 22 12:09:37 crc kubenswrapper[4772]: I1122 12:09:37.427767 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="108c431a-43ae-438b-9156-0fffab405860" path="/var/lib/kubelet/pods/108c431a-43ae-438b-9156-0fffab405860/volumes" Nov 22 12:09:37 crc kubenswrapper[4772]: I1122 12:09:37.448432 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1c011b9-37e2-47b1-b7fe-ad03217939d3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a1c011b9-37e2-47b1-b7fe-ad03217939d3\") " pod="openstack/glance-default-internal-api-0" Nov 22 12:09:37 crc kubenswrapper[4772]: I1122 12:09:37.448496 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1c011b9-37e2-47b1-b7fe-ad03217939d3-logs\") pod \"glance-default-internal-api-0\" (UID: \"a1c011b9-37e2-47b1-b7fe-ad03217939d3\") " pod="openstack/glance-default-internal-api-0" Nov 22 12:09:37 crc kubenswrapper[4772]: I1122 12:09:37.448531 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1c011b9-37e2-47b1-b7fe-ad03217939d3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a1c011b9-37e2-47b1-b7fe-ad03217939d3\") " pod="openstack/glance-default-internal-api-0" Nov 22 12:09:37 crc kubenswrapper[4772]: I1122 12:09:37.448578 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79shj\" (UniqueName: \"kubernetes.io/projected/a1c011b9-37e2-47b1-b7fe-ad03217939d3-kube-api-access-79shj\") pod \"glance-default-internal-api-0\" (UID: \"a1c011b9-37e2-47b1-b7fe-ad03217939d3\") " pod="openstack/glance-default-internal-api-0" Nov 22 12:09:37 crc kubenswrapper[4772]: I1122 12:09:37.448602 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1c011b9-37e2-47b1-b7fe-ad03217939d3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a1c011b9-37e2-47b1-b7fe-ad03217939d3\") " pod="openstack/glance-default-internal-api-0" Nov 22 12:09:37 crc kubenswrapper[4772]: I1122 12:09:37.448820 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a1c011b9-37e2-47b1-b7fe-ad03217939d3-ceph\") pod \"glance-default-internal-api-0\" (UID: \"a1c011b9-37e2-47b1-b7fe-ad03217939d3\") " pod="openstack/glance-default-internal-api-0" Nov 22 
12:09:37 crc kubenswrapper[4772]: I1122 12:09:37.448863 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a1c011b9-37e2-47b1-b7fe-ad03217939d3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a1c011b9-37e2-47b1-b7fe-ad03217939d3\") " pod="openstack/glance-default-internal-api-0" Nov 22 12:09:37 crc kubenswrapper[4772]: I1122 12:09:37.550623 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1c011b9-37e2-47b1-b7fe-ad03217939d3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a1c011b9-37e2-47b1-b7fe-ad03217939d3\") " pod="openstack/glance-default-internal-api-0" Nov 22 12:09:37 crc kubenswrapper[4772]: I1122 12:09:37.550686 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1c011b9-37e2-47b1-b7fe-ad03217939d3-logs\") pod \"glance-default-internal-api-0\" (UID: \"a1c011b9-37e2-47b1-b7fe-ad03217939d3\") " pod="openstack/glance-default-internal-api-0" Nov 22 12:09:37 crc kubenswrapper[4772]: I1122 12:09:37.550725 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1c011b9-37e2-47b1-b7fe-ad03217939d3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a1c011b9-37e2-47b1-b7fe-ad03217939d3\") " pod="openstack/glance-default-internal-api-0" Nov 22 12:09:37 crc kubenswrapper[4772]: I1122 12:09:37.550795 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79shj\" (UniqueName: \"kubernetes.io/projected/a1c011b9-37e2-47b1-b7fe-ad03217939d3-kube-api-access-79shj\") pod \"glance-default-internal-api-0\" (UID: \"a1c011b9-37e2-47b1-b7fe-ad03217939d3\") " pod="openstack/glance-default-internal-api-0" Nov 22 12:09:37 crc kubenswrapper[4772]: I1122 12:09:37.550826 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1c011b9-37e2-47b1-b7fe-ad03217939d3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a1c011b9-37e2-47b1-b7fe-ad03217939d3\") " pod="openstack/glance-default-internal-api-0" Nov 22 12:09:37 crc kubenswrapper[4772]: I1122 12:09:37.550851 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a1c011b9-37e2-47b1-b7fe-ad03217939d3-ceph\") pod \"glance-default-internal-api-0\" (UID: \"a1c011b9-37e2-47b1-b7fe-ad03217939d3\") " pod="openstack/glance-default-internal-api-0" Nov 22 12:09:37 crc kubenswrapper[4772]: I1122 12:09:37.550907 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a1c011b9-37e2-47b1-b7fe-ad03217939d3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a1c011b9-37e2-47b1-b7fe-ad03217939d3\") " pod="openstack/glance-default-internal-api-0" Nov 22 12:09:37 crc kubenswrapper[4772]: I1122 12:09:37.551255 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1c011b9-37e2-47b1-b7fe-ad03217939d3-logs\") pod \"glance-default-internal-api-0\" (UID: \"a1c011b9-37e2-47b1-b7fe-ad03217939d3\") " pod="openstack/glance-default-internal-api-0" Nov 22 12:09:37 crc kubenswrapper[4772]: I1122 12:09:37.551350 4772 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a1c011b9-37e2-47b1-b7fe-ad03217939d3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a1c011b9-37e2-47b1-b7fe-ad03217939d3\") " pod="openstack/glance-default-internal-api-0" Nov 22 12:09:37 crc kubenswrapper[4772]: I1122 12:09:37.560280 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1c011b9-37e2-47b1-b7fe-ad03217939d3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a1c011b9-37e2-47b1-b7fe-ad03217939d3\") " pod="openstack/glance-default-internal-api-0" Nov 22 12:09:37 crc kubenswrapper[4772]: I1122 12:09:37.568864 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1c011b9-37e2-47b1-b7fe-ad03217939d3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a1c011b9-37e2-47b1-b7fe-ad03217939d3\") " pod="openstack/glance-default-internal-api-0" Nov 22 12:09:37 crc kubenswrapper[4772]: I1122 12:09:37.570318 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1c011b9-37e2-47b1-b7fe-ad03217939d3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a1c011b9-37e2-47b1-b7fe-ad03217939d3\") " pod="openstack/glance-default-internal-api-0" Nov 22 12:09:37 crc kubenswrapper[4772]: I1122 12:09:37.570620 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a1c011b9-37e2-47b1-b7fe-ad03217939d3-ceph\") pod \"glance-default-internal-api-0\" (UID: \"a1c011b9-37e2-47b1-b7fe-ad03217939d3\") " pod="openstack/glance-default-internal-api-0" Nov 22 12:09:37 crc kubenswrapper[4772]: I1122 12:09:37.573335 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79shj\" (UniqueName: \"kubernetes.io/projected/a1c011b9-37e2-47b1-b7fe-ad03217939d3-kube-api-access-79shj\") pod \"glance-default-internal-api-0\" (UID: \"a1c011b9-37e2-47b1-b7fe-ad03217939d3\") " pod="openstack/glance-default-internal-api-0" Nov 22 12:09:37 crc kubenswrapper[4772]: I1122 12:09:37.690129 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 12:09:38 crc kubenswrapper[4772]: I1122 12:09:38.285384 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 12:09:38 crc kubenswrapper[4772]: W1122 12:09:38.292485 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1c011b9_37e2_47b1_b7fe_ad03217939d3.slice/crio-678e34d7b6889f8aec2c2fb63b1efe51430767e40665f3a4fb3109b32f729ef8 WatchSource:0}: Error finding container 678e34d7b6889f8aec2c2fb63b1efe51430767e40665f3a4fb3109b32f729ef8: Status 404 returned error can't find the container with id 678e34d7b6889f8aec2c2fb63b1efe51430767e40665f3a4fb3109b32f729ef8 Nov 22 12:09:38 crc kubenswrapper[4772]: I1122 12:09:38.307785 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d2af13a2-6cfc-469f-a829-b2cecbfd7129","Type":"ContainerStarted","Data":"b57a5ef889b4949e6e2e2507385733a8946bc153b1f2aeeb78f80a9763aa4a59"} Nov 22 12:09:38 crc kubenswrapper[4772]: I1122 12:09:38.309741 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a1c011b9-37e2-47b1-b7fe-ad03217939d3","Type":"ContainerStarted","Data":"678e34d7b6889f8aec2c2fb63b1efe51430767e40665f3a4fb3109b32f729ef8"} Nov 22 12:09:38 crc kubenswrapper[4772]: I1122 12:09:38.339207 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.339182863 podStartE2EDuration="3.339182863s" podCreationTimestamp="2025-11-22 12:09:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:09:38.324621031 +0000 UTC m=+5498.564065525" watchObservedRunningTime="2025-11-22 12:09:38.339182863 +0000 UTC m=+5498.578627357" Nov 22 12:09:39 crc kubenswrapper[4772]: I1122 12:09:39.322468 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a1c011b9-37e2-47b1-b7fe-ad03217939d3","Type":"ContainerStarted","Data":"2bf23bb37c70f25de81fd2d4252ec45250ee3aa6798dda98db7d14bc6ba43d5d"} Nov 22 12:09:39 crc kubenswrapper[4772]: I1122 12:09:39.322829 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a1c011b9-37e2-47b1-b7fe-ad03217939d3","Type":"ContainerStarted","Data":"1439d7fdb0c31c5001033ba23fdd357d53fda61aa290d48a078dda6a90a89ea2"} Nov 22 12:09:39 crc kubenswrapper[4772]: I1122 12:09:39.343528 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=2.343510868 podStartE2EDuration="2.343510868s" podCreationTimestamp="2025-11-22 12:09:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:09:39.337459818 +0000 UTC m=+5499.576904322" watchObservedRunningTime="2025-11-22 12:09:39.343510868 +0000 UTC m=+5499.582955362" Nov 22 12:09:41 crc kubenswrapper[4772]: I1122 12:09:41.875272 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-77cf987597-m5pcd" Nov 22 12:09:41 crc kubenswrapper[4772]: I1122 12:09:41.986771 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-547dcdf95c-t5q9x"] Nov 22 12:09:41 crc 
kubenswrapper[4772]: I1122 12:09:41.987188 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-547dcdf95c-t5q9x" podUID="7b3a1cec-0da5-4443-b022-dbdc6fd0da13" containerName="dnsmasq-dns" containerID="cri-o://5d837af06c7071b82640ccdcd58ed1fa44d171b6f36c7bdd6b1c355354d3a2d0" gracePeriod=10 Nov 22 12:09:42 crc kubenswrapper[4772]: I1122 12:09:42.391676 4772 generic.go:334] "Generic (PLEG): container finished" podID="7b3a1cec-0da5-4443-b022-dbdc6fd0da13" containerID="5d837af06c7071b82640ccdcd58ed1fa44d171b6f36c7bdd6b1c355354d3a2d0" exitCode=0 Nov 22 12:09:42 crc kubenswrapper[4772]: I1122 12:09:42.392003 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-547dcdf95c-t5q9x" event={"ID":"7b3a1cec-0da5-4443-b022-dbdc6fd0da13","Type":"ContainerDied","Data":"5d837af06c7071b82640ccdcd58ed1fa44d171b6f36c7bdd6b1c355354d3a2d0"} Nov 22 12:09:42 crc kubenswrapper[4772]: I1122 12:09:42.521153 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-547dcdf95c-t5q9x" Nov 22 12:09:42 crc kubenswrapper[4772]: I1122 12:09:42.602285 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7b3a1cec-0da5-4443-b022-dbdc6fd0da13-dns-svc\") pod \"7b3a1cec-0da5-4443-b022-dbdc6fd0da13\" (UID: \"7b3a1cec-0da5-4443-b022-dbdc6fd0da13\") " Nov 22 12:09:42 crc kubenswrapper[4772]: I1122 12:09:42.602352 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7b3a1cec-0da5-4443-b022-dbdc6fd0da13-ovsdbserver-sb\") pod \"7b3a1cec-0da5-4443-b022-dbdc6fd0da13\" (UID: \"7b3a1cec-0da5-4443-b022-dbdc6fd0da13\") " Nov 22 12:09:42 crc kubenswrapper[4772]: I1122 12:09:42.602382 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b3a1cec-0da5-4443-b022-dbdc6fd0da13-config\") pod \"7b3a1cec-0da5-4443-b022-dbdc6fd0da13\" (UID: \"7b3a1cec-0da5-4443-b022-dbdc6fd0da13\") " Nov 22 12:09:42 crc kubenswrapper[4772]: I1122 12:09:42.602416 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7b3a1cec-0da5-4443-b022-dbdc6fd0da13-ovsdbserver-nb\") pod \"7b3a1cec-0da5-4443-b022-dbdc6fd0da13\" (UID: \"7b3a1cec-0da5-4443-b022-dbdc6fd0da13\") " Nov 22 12:09:42 crc kubenswrapper[4772]: I1122 12:09:42.602475 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-blmrf\" (UniqueName: \"kubernetes.io/projected/7b3a1cec-0da5-4443-b022-dbdc6fd0da13-kube-api-access-blmrf\") pod \"7b3a1cec-0da5-4443-b022-dbdc6fd0da13\" (UID: \"7b3a1cec-0da5-4443-b022-dbdc6fd0da13\") " Nov 22 12:09:42 crc kubenswrapper[4772]: I1122 12:09:42.608196 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b3a1cec-0da5-4443-b022-dbdc6fd0da13-kube-api-access-blmrf" (OuterVolumeSpecName: "kube-api-access-blmrf") pod "7b3a1cec-0da5-4443-b022-dbdc6fd0da13" (UID: "7b3a1cec-0da5-4443-b022-dbdc6fd0da13"). InnerVolumeSpecName "kube-api-access-blmrf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:09:42 crc kubenswrapper[4772]: I1122 12:09:42.650218 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b3a1cec-0da5-4443-b022-dbdc6fd0da13-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7b3a1cec-0da5-4443-b022-dbdc6fd0da13" (UID: "7b3a1cec-0da5-4443-b022-dbdc6fd0da13"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 12:09:42 crc kubenswrapper[4772]: I1122 12:09:42.650277 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b3a1cec-0da5-4443-b022-dbdc6fd0da13-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7b3a1cec-0da5-4443-b022-dbdc6fd0da13" (UID: "7b3a1cec-0da5-4443-b022-dbdc6fd0da13"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 12:09:42 crc kubenswrapper[4772]: I1122 12:09:42.650671 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b3a1cec-0da5-4443-b022-dbdc6fd0da13-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7b3a1cec-0da5-4443-b022-dbdc6fd0da13" (UID: "7b3a1cec-0da5-4443-b022-dbdc6fd0da13"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 12:09:42 crc kubenswrapper[4772]: I1122 12:09:42.655698 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b3a1cec-0da5-4443-b022-dbdc6fd0da13-config" (OuterVolumeSpecName: "config") pod "7b3a1cec-0da5-4443-b022-dbdc6fd0da13" (UID: "7b3a1cec-0da5-4443-b022-dbdc6fd0da13"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 12:09:42 crc kubenswrapper[4772]: I1122 12:09:42.703492 4772 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7b3a1cec-0da5-4443-b022-dbdc6fd0da13-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 12:09:42 crc kubenswrapper[4772]: I1122 12:09:42.703715 4772 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7b3a1cec-0da5-4443-b022-dbdc6fd0da13-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 22 12:09:42 crc kubenswrapper[4772]: I1122 12:09:42.703798 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b3a1cec-0da5-4443-b022-dbdc6fd0da13-config\") on node \"crc\" DevicePath \"\"" Nov 22 12:09:42 crc kubenswrapper[4772]: I1122 12:09:42.703867 4772 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7b3a1cec-0da5-4443-b022-dbdc6fd0da13-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 12:09:42 crc kubenswrapper[4772]: I1122 12:09:42.703926 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-blmrf\" (UniqueName: \"kubernetes.io/projected/7b3a1cec-0da5-4443-b022-dbdc6fd0da13-kube-api-access-blmrf\") on node \"crc\" DevicePath \"\"" Nov 22 12:09:43 crc kubenswrapper[4772]: I1122 12:09:43.405903 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-547dcdf95c-t5q9x" event={"ID":"7b3a1cec-0da5-4443-b022-dbdc6fd0da13","Type":"ContainerDied","Data":"f30bed1eeab5a603a90747579b7259d335ee984197d2fb79f76d16784d7d44fd"} Nov 22 12:09:43 crc kubenswrapper[4772]: I1122 12:09:43.406112 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-547dcdf95c-t5q9x" Nov 22 12:09:43 crc kubenswrapper[4772]: I1122 12:09:43.407714 4772 scope.go:117] "RemoveContainer" containerID="5d837af06c7071b82640ccdcd58ed1fa44d171b6f36c7bdd6b1c355354d3a2d0" Nov 22 12:09:43 crc kubenswrapper[4772]: I1122 12:09:43.450576 4772 scope.go:117] "RemoveContainer" containerID="f197d0068aadf6795513d21fae9c11e012aa205344cddf23433331c589f20ed0" Nov 22 12:09:43 crc kubenswrapper[4772]: I1122 12:09:43.465034 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-547dcdf95c-t5q9x"] Nov 22 12:09:43 crc kubenswrapper[4772]: I1122 12:09:43.470948 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-547dcdf95c-t5q9x"] Nov 22 12:09:45 crc kubenswrapper[4772]: I1122 12:09:45.425782 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b3a1cec-0da5-4443-b022-dbdc6fd0da13" path="/var/lib/kubelet/pods/7b3a1cec-0da5-4443-b022-dbdc6fd0da13/volumes" Nov 22 12:09:45 crc kubenswrapper[4772]: I1122 12:09:45.655584 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 22 12:09:45 crc kubenswrapper[4772]: I1122 12:09:45.655952 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 22 12:09:45 crc kubenswrapper[4772]: I1122 12:09:45.691624 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 22 12:09:45 crc kubenswrapper[4772]: I1122 12:09:45.704529 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 22 12:09:46 crc kubenswrapper[4772]: I1122 12:09:46.442602 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 22 12:09:46 crc kubenswrapper[4772]: I1122 12:09:46.442670 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 22 12:09:47 crc kubenswrapper[4772]: I1122 12:09:47.691134 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 22 12:09:47 crc kubenswrapper[4772]: I1122 12:09:47.691503 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 22 12:09:47 crc kubenswrapper[4772]: I1122 12:09:47.726905 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 22 12:09:47 crc kubenswrapper[4772]: I1122 12:09:47.730123 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 22 12:09:48 crc kubenswrapper[4772]: I1122 12:09:48.435060 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 22 12:09:48 crc kubenswrapper[4772]: I1122 12:09:48.439450 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 22 12:09:48 crc kubenswrapper[4772]: I1122 12:09:48.460993 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 22 12:09:48 crc kubenswrapper[4772]: I1122 12:09:48.461124 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/glance-default-internal-api-0" Nov 22 12:09:50 crc kubenswrapper[4772]: I1122 12:09:50.459818 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 22 12:09:50 crc kubenswrapper[4772]: I1122 12:09:50.483298 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 22 12:10:00 crc kubenswrapper[4772]: I1122 12:10:00.406104 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-wmfbw"] Nov 22 12:10:00 crc kubenswrapper[4772]: E1122 12:10:00.407024 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b3a1cec-0da5-4443-b022-dbdc6fd0da13" containerName="dnsmasq-dns" Nov 22 12:10:00 crc kubenswrapper[4772]: I1122 12:10:00.407041 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b3a1cec-0da5-4443-b022-dbdc6fd0da13" containerName="dnsmasq-dns" Nov 22 12:10:00 crc kubenswrapper[4772]: E1122 12:10:00.408793 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b3a1cec-0da5-4443-b022-dbdc6fd0da13" containerName="init" Nov 22 12:10:00 crc kubenswrapper[4772]: I1122 12:10:00.408804 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b3a1cec-0da5-4443-b022-dbdc6fd0da13" containerName="init" Nov 22 12:10:00 crc kubenswrapper[4772]: I1122 12:10:00.409124 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b3a1cec-0da5-4443-b022-dbdc6fd0da13" containerName="dnsmasq-dns" Nov 22 12:10:00 crc kubenswrapper[4772]: I1122 12:10:00.409859 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-wmfbw" Nov 22 12:10:00 crc kubenswrapper[4772]: I1122 12:10:00.427854 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sq8q\" (UniqueName: \"kubernetes.io/projected/01364cc2-bc24-4fb4-b15c-fc8c0f709dc6-kube-api-access-2sq8q\") pod \"placement-db-create-wmfbw\" (UID: \"01364cc2-bc24-4fb4-b15c-fc8c0f709dc6\") " pod="openstack/placement-db-create-wmfbw" Nov 22 12:10:00 crc kubenswrapper[4772]: I1122 12:10:00.436621 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-wmfbw"] Nov 22 12:10:00 crc kubenswrapper[4772]: I1122 12:10:00.528726 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sq8q\" (UniqueName: \"kubernetes.io/projected/01364cc2-bc24-4fb4-b15c-fc8c0f709dc6-kube-api-access-2sq8q\") pod \"placement-db-create-wmfbw\" (UID: \"01364cc2-bc24-4fb4-b15c-fc8c0f709dc6\") " pod="openstack/placement-db-create-wmfbw" Nov 22 12:10:00 crc kubenswrapper[4772]: I1122 12:10:00.559578 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sq8q\" (UniqueName: \"kubernetes.io/projected/01364cc2-bc24-4fb4-b15c-fc8c0f709dc6-kube-api-access-2sq8q\") pod \"placement-db-create-wmfbw\" (UID: \"01364cc2-bc24-4fb4-b15c-fc8c0f709dc6\") " pod="openstack/placement-db-create-wmfbw" Nov 22 12:10:00 crc kubenswrapper[4772]: I1122 12:10:00.733788 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-wmfbw" Nov 22 12:10:01 crc kubenswrapper[4772]: I1122 12:10:01.213773 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-wmfbw"] Nov 22 12:10:01 crc kubenswrapper[4772]: I1122 12:10:01.588203 4772 generic.go:334] "Generic (PLEG): container finished" podID="01364cc2-bc24-4fb4-b15c-fc8c0f709dc6" containerID="90d5a482854ca47383e3ed588f8d8cc1a5d6aef0ee906374e7f90400c2dc5543" exitCode=0 Nov 22 12:10:01 crc kubenswrapper[4772]: I1122 12:10:01.588253 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-wmfbw" event={"ID":"01364cc2-bc24-4fb4-b15c-fc8c0f709dc6","Type":"ContainerDied","Data":"90d5a482854ca47383e3ed588f8d8cc1a5d6aef0ee906374e7f90400c2dc5543"} Nov 22 12:10:01 crc kubenswrapper[4772]: I1122 12:10:01.588290 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-wmfbw" event={"ID":"01364cc2-bc24-4fb4-b15c-fc8c0f709dc6","Type":"ContainerStarted","Data":"b6fa3653d4482894a057fe5b97d32aa5d13dc1f29e39592096fb7f25c6d7dc04"} Nov 22 12:10:02 crc kubenswrapper[4772]: I1122 12:10:02.962327 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-wmfbw" Nov 22 12:10:03 crc kubenswrapper[4772]: I1122 12:10:03.087210 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2sq8q\" (UniqueName: \"kubernetes.io/projected/01364cc2-bc24-4fb4-b15c-fc8c0f709dc6-kube-api-access-2sq8q\") pod \"01364cc2-bc24-4fb4-b15c-fc8c0f709dc6\" (UID: \"01364cc2-bc24-4fb4-b15c-fc8c0f709dc6\") " Nov 22 12:10:03 crc kubenswrapper[4772]: I1122 12:10:03.094236 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01364cc2-bc24-4fb4-b15c-fc8c0f709dc6-kube-api-access-2sq8q" (OuterVolumeSpecName: "kube-api-access-2sq8q") pod "01364cc2-bc24-4fb4-b15c-fc8c0f709dc6" (UID: "01364cc2-bc24-4fb4-b15c-fc8c0f709dc6"). InnerVolumeSpecName "kube-api-access-2sq8q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:10:03 crc kubenswrapper[4772]: I1122 12:10:03.189671 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2sq8q\" (UniqueName: \"kubernetes.io/projected/01364cc2-bc24-4fb4-b15c-fc8c0f709dc6-kube-api-access-2sq8q\") on node \"crc\" DevicePath \"\"" Nov 22 12:10:03 crc kubenswrapper[4772]: I1122 12:10:03.609984 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-wmfbw" event={"ID":"01364cc2-bc24-4fb4-b15c-fc8c0f709dc6","Type":"ContainerDied","Data":"b6fa3653d4482894a057fe5b97d32aa5d13dc1f29e39592096fb7f25c6d7dc04"} Nov 22 12:10:03 crc kubenswrapper[4772]: I1122 12:10:03.610063 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6fa3653d4482894a057fe5b97d32aa5d13dc1f29e39592096fb7f25c6d7dc04" Nov 22 12:10:03 crc kubenswrapper[4772]: I1122 12:10:03.610129 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-wmfbw" Nov 22 12:10:10 crc kubenswrapper[4772]: I1122 12:10:10.533635 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-a5e8-account-create-dngf4"] Nov 22 12:10:10 crc kubenswrapper[4772]: E1122 12:10:10.534160 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01364cc2-bc24-4fb4-b15c-fc8c0f709dc6" containerName="mariadb-database-create" Nov 22 12:10:10 crc kubenswrapper[4772]: I1122 12:10:10.534174 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="01364cc2-bc24-4fb4-b15c-fc8c0f709dc6" containerName="mariadb-database-create" Nov 22 12:10:10 crc kubenswrapper[4772]: I1122 12:10:10.534366 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="01364cc2-bc24-4fb4-b15c-fc8c0f709dc6" containerName="mariadb-database-create" Nov 22 12:10:10 crc kubenswrapper[4772]: I1122 12:10:10.535083 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-a5e8-account-create-dngf4" Nov 22 12:10:10 crc kubenswrapper[4772]: I1122 12:10:10.538616 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Nov 22 12:10:10 crc kubenswrapper[4772]: I1122 12:10:10.568388 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-a5e8-account-create-dngf4"] Nov 22 12:10:10 crc kubenswrapper[4772]: I1122 12:10:10.640144 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckvcn\" (UniqueName: \"kubernetes.io/projected/9f8e6f03-ebe6-4b21-941e-5b12d32dc3fd-kube-api-access-ckvcn\") pod \"placement-a5e8-account-create-dngf4\" (UID: \"9f8e6f03-ebe6-4b21-941e-5b12d32dc3fd\") " pod="openstack/placement-a5e8-account-create-dngf4" Nov 22 12:10:10 crc kubenswrapper[4772]: I1122 12:10:10.742119 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckvcn\" (UniqueName: \"kubernetes.io/projected/9f8e6f03-ebe6-4b21-941e-5b12d32dc3fd-kube-api-access-ckvcn\") pod \"placement-a5e8-account-create-dngf4\" (UID: \"9f8e6f03-ebe6-4b21-941e-5b12d32dc3fd\") " pod="openstack/placement-a5e8-account-create-dngf4" Nov 22 12:10:10 crc kubenswrapper[4772]: I1122 12:10:10.766192 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckvcn\" (UniqueName: \"kubernetes.io/projected/9f8e6f03-ebe6-4b21-941e-5b12d32dc3fd-kube-api-access-ckvcn\") pod \"placement-a5e8-account-create-dngf4\" (UID: \"9f8e6f03-ebe6-4b21-941e-5b12d32dc3fd\") " pod="openstack/placement-a5e8-account-create-dngf4" Nov 22 12:10:10 crc kubenswrapper[4772]: I1122 12:10:10.873277 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-a5e8-account-create-dngf4" Nov 22 12:10:11 crc kubenswrapper[4772]: I1122 12:10:11.310120 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-a5e8-account-create-dngf4"] Nov 22 12:10:11 crc kubenswrapper[4772]: I1122 12:10:11.470329 4772 scope.go:117] "RemoveContainer" containerID="e504896c55d2bcff4765072365159e6f580a1676f747eb9ba40b74b46a50f898" Nov 22 12:10:11 crc kubenswrapper[4772]: I1122 12:10:11.688304 4772 generic.go:334] "Generic (PLEG): container finished" podID="9f8e6f03-ebe6-4b21-941e-5b12d32dc3fd" containerID="6f535f11a04f83e03a4384ff6d7e47cddfbf059392cf9d62cd6bec4554e0776c" exitCode=0 Nov 22 12:10:11 crc kubenswrapper[4772]: I1122 12:10:11.688358 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a5e8-account-create-dngf4" event={"ID":"9f8e6f03-ebe6-4b21-941e-5b12d32dc3fd","Type":"ContainerDied","Data":"6f535f11a04f83e03a4384ff6d7e47cddfbf059392cf9d62cd6bec4554e0776c"} Nov 22 12:10:11 crc kubenswrapper[4772]: I1122 12:10:11.688386 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a5e8-account-create-dngf4" event={"ID":"9f8e6f03-ebe6-4b21-941e-5b12d32dc3fd","Type":"ContainerStarted","Data":"3135b7274c619281043e00558b016dc9104bff21e0bf01ec07306571fd67cbef"} Nov 22 12:10:13 crc kubenswrapper[4772]: I1122 12:10:13.049182 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-a5e8-account-create-dngf4" Nov 22 12:10:13 crc kubenswrapper[4772]: I1122 12:10:13.222095 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ckvcn\" (UniqueName: \"kubernetes.io/projected/9f8e6f03-ebe6-4b21-941e-5b12d32dc3fd-kube-api-access-ckvcn\") pod \"9f8e6f03-ebe6-4b21-941e-5b12d32dc3fd\" (UID: \"9f8e6f03-ebe6-4b21-941e-5b12d32dc3fd\") " Nov 22 12:10:13 crc kubenswrapper[4772]: I1122 12:10:13.242077 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f8e6f03-ebe6-4b21-941e-5b12d32dc3fd-kube-api-access-ckvcn" (OuterVolumeSpecName: "kube-api-access-ckvcn") pod "9f8e6f03-ebe6-4b21-941e-5b12d32dc3fd" (UID: "9f8e6f03-ebe6-4b21-941e-5b12d32dc3fd"). InnerVolumeSpecName "kube-api-access-ckvcn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:10:13 crc kubenswrapper[4772]: I1122 12:10:13.325137 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ckvcn\" (UniqueName: \"kubernetes.io/projected/9f8e6f03-ebe6-4b21-941e-5b12d32dc3fd-kube-api-access-ckvcn\") on node \"crc\" DevicePath \"\"" Nov 22 12:10:13 crc kubenswrapper[4772]: I1122 12:10:13.706613 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a5e8-account-create-dngf4" event={"ID":"9f8e6f03-ebe6-4b21-941e-5b12d32dc3fd","Type":"ContainerDied","Data":"3135b7274c619281043e00558b016dc9104bff21e0bf01ec07306571fd67cbef"} Nov 22 12:10:13 crc kubenswrapper[4772]: I1122 12:10:13.706651 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3135b7274c619281043e00558b016dc9104bff21e0bf01ec07306571fd67cbef" Nov 22 12:10:13 crc kubenswrapper[4772]: I1122 12:10:13.706655 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-a5e8-account-create-dngf4" Nov 22 12:10:15 crc kubenswrapper[4772]: I1122 12:10:15.829262 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-z5g94"] Nov 22 12:10:15 crc kubenswrapper[4772]: E1122 12:10:15.830191 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f8e6f03-ebe6-4b21-941e-5b12d32dc3fd" containerName="mariadb-account-create" Nov 22 12:10:15 crc kubenswrapper[4772]: I1122 12:10:15.830215 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f8e6f03-ebe6-4b21-941e-5b12d32dc3fd" containerName="mariadb-account-create" Nov 22 12:10:15 crc kubenswrapper[4772]: I1122 12:10:15.830440 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f8e6f03-ebe6-4b21-941e-5b12d32dc3fd" containerName="mariadb-account-create" Nov 22 12:10:15 crc kubenswrapper[4772]: I1122 12:10:15.831315 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-z5g94" Nov 22 12:10:15 crc kubenswrapper[4772]: I1122 12:10:15.833999 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-lg4jw" Nov 22 12:10:15 crc kubenswrapper[4772]: I1122 12:10:15.834240 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 22 12:10:15 crc kubenswrapper[4772]: I1122 12:10:15.834414 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 22 12:10:15 crc kubenswrapper[4772]: I1122 12:10:15.840290 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-z5g94"] Nov 22 12:10:15 crc kubenswrapper[4772]: I1122 12:10:15.875253 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6dc96bcddc-hfvg4"] Nov 22 12:10:15 crc kubenswrapper[4772]: I1122 12:10:15.889688 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/003f5d26-27bc-427b-b2a9-10a4d1b6ffba-config-data\") pod \"placement-db-sync-z5g94\" (UID: \"003f5d26-27bc-427b-b2a9-10a4d1b6ffba\") " pod="openstack/placement-db-sync-z5g94" Nov 22 12:10:15 crc kubenswrapper[4772]: I1122 12:10:15.889917 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/003f5d26-27bc-427b-b2a9-10a4d1b6ffba-scripts\") pod \"placement-db-sync-z5g94\" (UID: \"003f5d26-27bc-427b-b2a9-10a4d1b6ffba\") " pod="openstack/placement-db-sync-z5g94" Nov 22 12:10:15 crc kubenswrapper[4772]: I1122 12:10:15.890012 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/003f5d26-27bc-427b-b2a9-10a4d1b6ffba-combined-ca-bundle\") pod \"placement-db-sync-z5g94\" (UID: \"003f5d26-27bc-427b-b2a9-10a4d1b6ffba\") " pod="openstack/placement-db-sync-z5g94" Nov 22 12:10:15 crc kubenswrapper[4772]: I1122 12:10:15.890159 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md2qv\" (UniqueName: \"kubernetes.io/projected/003f5d26-27bc-427b-b2a9-10a4d1b6ffba-kube-api-access-md2qv\") pod \"placement-db-sync-z5g94\" (UID: \"003f5d26-27bc-427b-b2a9-10a4d1b6ffba\") " pod="openstack/placement-db-sync-z5g94" Nov 22 12:10:15 crc kubenswrapper[4772]: I1122 12:10:15.890249 4772 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/003f5d26-27bc-427b-b2a9-10a4d1b6ffba-logs\") pod \"placement-db-sync-z5g94\" (UID: \"003f5d26-27bc-427b-b2a9-10a4d1b6ffba\") " pod="openstack/placement-db-sync-z5g94" Nov 22 12:10:15 crc kubenswrapper[4772]: I1122 12:10:15.891581 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6dc96bcddc-hfvg4"] Nov 22 12:10:15 crc kubenswrapper[4772]: I1122 12:10:15.891753 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6dc96bcddc-hfvg4" Nov 22 12:10:15 crc kubenswrapper[4772]: I1122 12:10:15.992017 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3f3d2fee-4291-428c-b664-888afddd412c-ovsdbserver-sb\") pod \"dnsmasq-dns-6dc96bcddc-hfvg4\" (UID: \"3f3d2fee-4291-428c-b664-888afddd412c\") " pod="openstack/dnsmasq-dns-6dc96bcddc-hfvg4" Nov 22 12:10:15 crc kubenswrapper[4772]: I1122 12:10:15.992085 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/003f5d26-27bc-427b-b2a9-10a4d1b6ffba-scripts\") pod \"placement-db-sync-z5g94\" (UID: \"003f5d26-27bc-427b-b2a9-10a4d1b6ffba\") " pod="openstack/placement-db-sync-z5g94" Nov 22 12:10:15 crc kubenswrapper[4772]: I1122 12:10:15.992124 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqr2d\" (UniqueName: \"kubernetes.io/projected/3f3d2fee-4291-428c-b664-888afddd412c-kube-api-access-hqr2d\") pod \"dnsmasq-dns-6dc96bcddc-hfvg4\" (UID: \"3f3d2fee-4291-428c-b664-888afddd412c\") " pod="openstack/dnsmasq-dns-6dc96bcddc-hfvg4" Nov 22 12:10:15 crc kubenswrapper[4772]: I1122 12:10:15.992141 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/003f5d26-27bc-427b-b2a9-10a4d1b6ffba-combined-ca-bundle\") pod \"placement-db-sync-z5g94\" (UID: \"003f5d26-27bc-427b-b2a9-10a4d1b6ffba\") " pod="openstack/placement-db-sync-z5g94" Nov 22 12:10:15 crc kubenswrapper[4772]: I1122 12:10:15.992176 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f3d2fee-4291-428c-b664-888afddd412c-dns-svc\") pod \"dnsmasq-dns-6dc96bcddc-hfvg4\" (UID: \"3f3d2fee-4291-428c-b664-888afddd412c\") " pod="openstack/dnsmasq-dns-6dc96bcddc-hfvg4" Nov 22 12:10:15 crc kubenswrapper[4772]: I1122 12:10:15.992215 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f3d2fee-4291-428c-b664-888afddd412c-config\") pod \"dnsmasq-dns-6dc96bcddc-hfvg4\" (UID: \"3f3d2fee-4291-428c-b664-888afddd412c\") " pod="openstack/dnsmasq-dns-6dc96bcddc-hfvg4" Nov 22 12:10:15 crc kubenswrapper[4772]: I1122 12:10:15.992276 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-md2qv\" (UniqueName: \"kubernetes.io/projected/003f5d26-27bc-427b-b2a9-10a4d1b6ffba-kube-api-access-md2qv\") pod \"placement-db-sync-z5g94\" (UID: \"003f5d26-27bc-427b-b2a9-10a4d1b6ffba\") " pod="openstack/placement-db-sync-z5g94" Nov 22 12:10:15 crc kubenswrapper[4772]: I1122 12:10:15.992304 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/003f5d26-27bc-427b-b2a9-10a4d1b6ffba-logs\") pod \"placement-db-sync-z5g94\" (UID: \"003f5d26-27bc-427b-b2a9-10a4d1b6ffba\") " pod="openstack/placement-db-sync-z5g94" Nov 22 12:10:15 crc kubenswrapper[4772]: I1122 12:10:15.992326 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3f3d2fee-4291-428c-b664-888afddd412c-ovsdbserver-nb\") pod \"dnsmasq-dns-6dc96bcddc-hfvg4\" (UID: \"3f3d2fee-4291-428c-b664-888afddd412c\") " pod="openstack/dnsmasq-dns-6dc96bcddc-hfvg4" Nov 22 12:10:15 crc kubenswrapper[4772]: I1122 12:10:15.992340 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/003f5d26-27bc-427b-b2a9-10a4d1b6ffba-config-data\") pod \"placement-db-sync-z5g94\" (UID: \"003f5d26-27bc-427b-b2a9-10a4d1b6ffba\") " pod="openstack/placement-db-sync-z5g94" Nov 22 12:10:15 crc kubenswrapper[4772]: I1122 12:10:15.997195 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/003f5d26-27bc-427b-b2a9-10a4d1b6ffba-logs\") pod \"placement-db-sync-z5g94\" (UID: \"003f5d26-27bc-427b-b2a9-10a4d1b6ffba\") " pod="openstack/placement-db-sync-z5g94" Nov 22 12:10:16 crc kubenswrapper[4772]: I1122 12:10:16.003004 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/003f5d26-27bc-427b-b2a9-10a4d1b6ffba-combined-ca-bundle\") pod \"placement-db-sync-z5g94\" (UID: \"003f5d26-27bc-427b-b2a9-10a4d1b6ffba\") " pod="openstack/placement-db-sync-z5g94" Nov 22 12:10:16 crc kubenswrapper[4772]: I1122 12:10:16.010392 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/003f5d26-27bc-427b-b2a9-10a4d1b6ffba-scripts\") pod \"placement-db-sync-z5g94\" (UID: \"003f5d26-27bc-427b-b2a9-10a4d1b6ffba\") " pod="openstack/placement-db-sync-z5g94" Nov 22 12:10:16 crc kubenswrapper[4772]: I1122 12:10:16.013879 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/003f5d26-27bc-427b-b2a9-10a4d1b6ffba-config-data\") pod \"placement-db-sync-z5g94\" (UID: \"003f5d26-27bc-427b-b2a9-10a4d1b6ffba\") " pod="openstack/placement-db-sync-z5g94" Nov 22 12:10:16 crc kubenswrapper[4772]: I1122 12:10:16.016351 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-md2qv\" (UniqueName: \"kubernetes.io/projected/003f5d26-27bc-427b-b2a9-10a4d1b6ffba-kube-api-access-md2qv\") pod \"placement-db-sync-z5g94\" (UID: \"003f5d26-27bc-427b-b2a9-10a4d1b6ffba\") " pod="openstack/placement-db-sync-z5g94" Nov 22 12:10:16 crc kubenswrapper[4772]: I1122 12:10:16.094431 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3f3d2fee-4291-428c-b664-888afddd412c-ovsdbserver-nb\") pod \"dnsmasq-dns-6dc96bcddc-hfvg4\" (UID: \"3f3d2fee-4291-428c-b664-888afddd412c\") " pod="openstack/dnsmasq-dns-6dc96bcddc-hfvg4" Nov 22 12:10:16 crc kubenswrapper[4772]: I1122 12:10:16.094537 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3f3d2fee-4291-428c-b664-888afddd412c-ovsdbserver-sb\") pod \"dnsmasq-dns-6dc96bcddc-hfvg4\" (UID: \"3f3d2fee-4291-428c-b664-888afddd412c\") " 
pod="openstack/dnsmasq-dns-6dc96bcddc-hfvg4" Nov 22 12:10:16 crc kubenswrapper[4772]: I1122 12:10:16.094580 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqr2d\" (UniqueName: \"kubernetes.io/projected/3f3d2fee-4291-428c-b664-888afddd412c-kube-api-access-hqr2d\") pod \"dnsmasq-dns-6dc96bcddc-hfvg4\" (UID: \"3f3d2fee-4291-428c-b664-888afddd412c\") " pod="openstack/dnsmasq-dns-6dc96bcddc-hfvg4" Nov 22 12:10:16 crc kubenswrapper[4772]: I1122 12:10:16.094611 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f3d2fee-4291-428c-b664-888afddd412c-dns-svc\") pod \"dnsmasq-dns-6dc96bcddc-hfvg4\" (UID: \"3f3d2fee-4291-428c-b664-888afddd412c\") " pod="openstack/dnsmasq-dns-6dc96bcddc-hfvg4" Nov 22 12:10:16 crc kubenswrapper[4772]: I1122 12:10:16.094635 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f3d2fee-4291-428c-b664-888afddd412c-config\") pod \"dnsmasq-dns-6dc96bcddc-hfvg4\" (UID: \"3f3d2fee-4291-428c-b664-888afddd412c\") " pod="openstack/dnsmasq-dns-6dc96bcddc-hfvg4" Nov 22 12:10:16 crc kubenswrapper[4772]: I1122 12:10:16.095520 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3f3d2fee-4291-428c-b664-888afddd412c-ovsdbserver-nb\") pod \"dnsmasq-dns-6dc96bcddc-hfvg4\" (UID: \"3f3d2fee-4291-428c-b664-888afddd412c\") " pod="openstack/dnsmasq-dns-6dc96bcddc-hfvg4" Nov 22 12:10:16 crc kubenswrapper[4772]: I1122 12:10:16.095553 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f3d2fee-4291-428c-b664-888afddd412c-config\") pod \"dnsmasq-dns-6dc96bcddc-hfvg4\" (UID: \"3f3d2fee-4291-428c-b664-888afddd412c\") " pod="openstack/dnsmasq-dns-6dc96bcddc-hfvg4" Nov 22 12:10:16 crc kubenswrapper[4772]: I1122 12:10:16.095936 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f3d2fee-4291-428c-b664-888afddd412c-dns-svc\") pod \"dnsmasq-dns-6dc96bcddc-hfvg4\" (UID: \"3f3d2fee-4291-428c-b664-888afddd412c\") " pod="openstack/dnsmasq-dns-6dc96bcddc-hfvg4" Nov 22 12:10:16 crc kubenswrapper[4772]: I1122 12:10:16.096195 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3f3d2fee-4291-428c-b664-888afddd412c-ovsdbserver-sb\") pod \"dnsmasq-dns-6dc96bcddc-hfvg4\" (UID: \"3f3d2fee-4291-428c-b664-888afddd412c\") " pod="openstack/dnsmasq-dns-6dc96bcddc-hfvg4" Nov 22 12:10:16 crc kubenswrapper[4772]: I1122 12:10:16.112365 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqr2d\" (UniqueName: \"kubernetes.io/projected/3f3d2fee-4291-428c-b664-888afddd412c-kube-api-access-hqr2d\") pod \"dnsmasq-dns-6dc96bcddc-hfvg4\" (UID: \"3f3d2fee-4291-428c-b664-888afddd412c\") " pod="openstack/dnsmasq-dns-6dc96bcddc-hfvg4" Nov 22 12:10:16 crc kubenswrapper[4772]: I1122 12:10:16.191271 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-z5g94" Nov 22 12:10:16 crc kubenswrapper[4772]: I1122 12:10:16.212428 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6dc96bcddc-hfvg4" Nov 22 12:10:16 crc kubenswrapper[4772]: I1122 12:10:16.642467 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-z5g94"] Nov 22 12:10:16 crc kubenswrapper[4772]: I1122 12:10:16.701786 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6dc96bcddc-hfvg4"] Nov 22 12:10:16 crc kubenswrapper[4772]: W1122 12:10:16.710105 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3f3d2fee_4291_428c_b664_888afddd412c.slice/crio-ceaff874c7dc2c1ce71b3f76adbe90dbf35099d51e27af44cdbed237ba148b7d WatchSource:0}: Error finding container ceaff874c7dc2c1ce71b3f76adbe90dbf35099d51e27af44cdbed237ba148b7d: Status 404 returned error can't find the container with id ceaff874c7dc2c1ce71b3f76adbe90dbf35099d51e27af44cdbed237ba148b7d Nov 22 12:10:16 crc kubenswrapper[4772]: I1122 12:10:16.733269 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-z5g94" event={"ID":"003f5d26-27bc-427b-b2a9-10a4d1b6ffba","Type":"ContainerStarted","Data":"1fb59b28da40b304a9046f0d89a575ccf196f056a93f0e8a4f35d58694ec1825"} Nov 22 12:10:16 crc kubenswrapper[4772]: I1122 12:10:16.736118 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6dc96bcddc-hfvg4" event={"ID":"3f3d2fee-4291-428c-b664-888afddd412c","Type":"ContainerStarted","Data":"ceaff874c7dc2c1ce71b3f76adbe90dbf35099d51e27af44cdbed237ba148b7d"} Nov 22 12:10:17 crc kubenswrapper[4772]: I1122 12:10:17.746521 4772 generic.go:334] "Generic (PLEG): container finished" podID="3f3d2fee-4291-428c-b664-888afddd412c" containerID="bcfde62addff43da38833137bb389679b0877760da0f70cba4aaf7948756eb01" exitCode=0 Nov 22 12:10:17 crc kubenswrapper[4772]: I1122 12:10:17.746731 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6dc96bcddc-hfvg4" event={"ID":"3f3d2fee-4291-428c-b664-888afddd412c","Type":"ContainerDied","Data":"bcfde62addff43da38833137bb389679b0877760da0f70cba4aaf7948756eb01"} Nov 22 12:10:17 crc kubenswrapper[4772]: I1122 12:10:17.756571 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-z5g94" event={"ID":"003f5d26-27bc-427b-b2a9-10a4d1b6ffba","Type":"ContainerStarted","Data":"da2c7526f15aa7a500e0d472faf4e9cc421c5444307180cded9811293a234f1c"} Nov 22 12:10:17 crc kubenswrapper[4772]: I1122 12:10:17.797219 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-z5g94" podStartSLOduration=2.797195582 podStartE2EDuration="2.797195582s" podCreationTimestamp="2025-11-22 12:10:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:10:17.793384587 +0000 UTC m=+5538.032829081" watchObservedRunningTime="2025-11-22 12:10:17.797195582 +0000 UTC m=+5538.036640076" Nov 22 12:10:18 crc kubenswrapper[4772]: I1122 12:10:18.771700 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6dc96bcddc-hfvg4" event={"ID":"3f3d2fee-4291-428c-b664-888afddd412c","Type":"ContainerStarted","Data":"59eda251079280ad251f66b766c259c940c976ce6d5b38f0e66e5368ccbb8288"} Nov 22 12:10:18 crc kubenswrapper[4772]: I1122 12:10:18.772260 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6dc96bcddc-hfvg4" Nov 22 12:10:18 crc kubenswrapper[4772]: I1122 
12:10:18.773504 4772 generic.go:334] "Generic (PLEG): container finished" podID="003f5d26-27bc-427b-b2a9-10a4d1b6ffba" containerID="da2c7526f15aa7a500e0d472faf4e9cc421c5444307180cded9811293a234f1c" exitCode=0 Nov 22 12:10:18 crc kubenswrapper[4772]: I1122 12:10:18.773560 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-z5g94" event={"ID":"003f5d26-27bc-427b-b2a9-10a4d1b6ffba","Type":"ContainerDied","Data":"da2c7526f15aa7a500e0d472faf4e9cc421c5444307180cded9811293a234f1c"} Nov 22 12:10:18 crc kubenswrapper[4772]: I1122 12:10:18.800879 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6dc96bcddc-hfvg4" podStartSLOduration=3.8008583 podStartE2EDuration="3.8008583s" podCreationTimestamp="2025-11-22 12:10:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:10:18.797946528 +0000 UTC m=+5539.037391042" watchObservedRunningTime="2025-11-22 12:10:18.8008583 +0000 UTC m=+5539.040302804" Nov 22 12:10:20 crc kubenswrapper[4772]: I1122 12:10:20.183414 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-z5g94" Nov 22 12:10:20 crc kubenswrapper[4772]: I1122 12:10:20.275895 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/003f5d26-27bc-427b-b2a9-10a4d1b6ffba-combined-ca-bundle\") pod \"003f5d26-27bc-427b-b2a9-10a4d1b6ffba\" (UID: \"003f5d26-27bc-427b-b2a9-10a4d1b6ffba\") " Nov 22 12:10:20 crc kubenswrapper[4772]: I1122 12:10:20.276127 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-md2qv\" (UniqueName: \"kubernetes.io/projected/003f5d26-27bc-427b-b2a9-10a4d1b6ffba-kube-api-access-md2qv\") pod \"003f5d26-27bc-427b-b2a9-10a4d1b6ffba\" (UID: \"003f5d26-27bc-427b-b2a9-10a4d1b6ffba\") " Nov 22 12:10:20 crc kubenswrapper[4772]: I1122 12:10:20.276177 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/003f5d26-27bc-427b-b2a9-10a4d1b6ffba-logs\") pod \"003f5d26-27bc-427b-b2a9-10a4d1b6ffba\" (UID: \"003f5d26-27bc-427b-b2a9-10a4d1b6ffba\") " Nov 22 12:10:20 crc kubenswrapper[4772]: I1122 12:10:20.276269 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/003f5d26-27bc-427b-b2a9-10a4d1b6ffba-scripts\") pod \"003f5d26-27bc-427b-b2a9-10a4d1b6ffba\" (UID: \"003f5d26-27bc-427b-b2a9-10a4d1b6ffba\") " Nov 22 12:10:20 crc kubenswrapper[4772]: I1122 12:10:20.276321 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/003f5d26-27bc-427b-b2a9-10a4d1b6ffba-config-data\") pod \"003f5d26-27bc-427b-b2a9-10a4d1b6ffba\" (UID: \"003f5d26-27bc-427b-b2a9-10a4d1b6ffba\") " Nov 22 12:10:20 crc kubenswrapper[4772]: I1122 12:10:20.276676 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/003f5d26-27bc-427b-b2a9-10a4d1b6ffba-logs" (OuterVolumeSpecName: "logs") pod "003f5d26-27bc-427b-b2a9-10a4d1b6ffba" (UID: "003f5d26-27bc-427b-b2a9-10a4d1b6ffba"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 12:10:20 crc kubenswrapper[4772]: I1122 12:10:20.276987 4772 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/003f5d26-27bc-427b-b2a9-10a4d1b6ffba-logs\") on node \"crc\" DevicePath \"\"" Nov 22 12:10:20 crc kubenswrapper[4772]: I1122 12:10:20.282113 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/003f5d26-27bc-427b-b2a9-10a4d1b6ffba-scripts" (OuterVolumeSpecName: "scripts") pod "003f5d26-27bc-427b-b2a9-10a4d1b6ffba" (UID: "003f5d26-27bc-427b-b2a9-10a4d1b6ffba"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:10:20 crc kubenswrapper[4772]: I1122 12:10:20.282169 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/003f5d26-27bc-427b-b2a9-10a4d1b6ffba-kube-api-access-md2qv" (OuterVolumeSpecName: "kube-api-access-md2qv") pod "003f5d26-27bc-427b-b2a9-10a4d1b6ffba" (UID: "003f5d26-27bc-427b-b2a9-10a4d1b6ffba"). InnerVolumeSpecName "kube-api-access-md2qv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:10:20 crc kubenswrapper[4772]: I1122 12:10:20.308542 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/003f5d26-27bc-427b-b2a9-10a4d1b6ffba-config-data" (OuterVolumeSpecName: "config-data") pod "003f5d26-27bc-427b-b2a9-10a4d1b6ffba" (UID: "003f5d26-27bc-427b-b2a9-10a4d1b6ffba"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:10:20 crc kubenswrapper[4772]: I1122 12:10:20.321679 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/003f5d26-27bc-427b-b2a9-10a4d1b6ffba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "003f5d26-27bc-427b-b2a9-10a4d1b6ffba" (UID: "003f5d26-27bc-427b-b2a9-10a4d1b6ffba"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:10:20 crc kubenswrapper[4772]: I1122 12:10:20.378900 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/003f5d26-27bc-427b-b2a9-10a4d1b6ffba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 12:10:20 crc kubenswrapper[4772]: I1122 12:10:20.379044 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-md2qv\" (UniqueName: \"kubernetes.io/projected/003f5d26-27bc-427b-b2a9-10a4d1b6ffba-kube-api-access-md2qv\") on node \"crc\" DevicePath \"\"" Nov 22 12:10:20 crc kubenswrapper[4772]: I1122 12:10:20.379184 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/003f5d26-27bc-427b-b2a9-10a4d1b6ffba-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 12:10:20 crc kubenswrapper[4772]: I1122 12:10:20.379246 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/003f5d26-27bc-427b-b2a9-10a4d1b6ffba-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 12:10:20 crc kubenswrapper[4772]: I1122 12:10:20.797371 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-z5g94" event={"ID":"003f5d26-27bc-427b-b2a9-10a4d1b6ffba","Type":"ContainerDied","Data":"1fb59b28da40b304a9046f0d89a575ccf196f056a93f0e8a4f35d58694ec1825"} Nov 22 12:10:20 crc kubenswrapper[4772]: I1122 12:10:20.797742 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fb59b28da40b304a9046f0d89a575ccf196f056a93f0e8a4f35d58694ec1825" Nov 22 12:10:20 crc kubenswrapper[4772]: I1122 12:10:20.797430 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-z5g94" Nov 22 12:10:20 crc kubenswrapper[4772]: I1122 12:10:20.898716 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5858b9d67b-vdgcb"] Nov 22 12:10:20 crc kubenswrapper[4772]: E1122 12:10:20.899290 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="003f5d26-27bc-427b-b2a9-10a4d1b6ffba" containerName="placement-db-sync" Nov 22 12:10:20 crc kubenswrapper[4772]: I1122 12:10:20.899316 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="003f5d26-27bc-427b-b2a9-10a4d1b6ffba" containerName="placement-db-sync" Nov 22 12:10:20 crc kubenswrapper[4772]: I1122 12:10:20.899573 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="003f5d26-27bc-427b-b2a9-10a4d1b6ffba" containerName="placement-db-sync" Nov 22 12:10:20 crc kubenswrapper[4772]: I1122 12:10:20.900777 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5858b9d67b-vdgcb" Nov 22 12:10:20 crc kubenswrapper[4772]: I1122 12:10:20.903370 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 22 12:10:20 crc kubenswrapper[4772]: I1122 12:10:20.907623 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 22 12:10:20 crc kubenswrapper[4772]: I1122 12:10:20.907968 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-lg4jw" Nov 22 12:10:20 crc kubenswrapper[4772]: I1122 12:10:20.922585 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5858b9d67b-vdgcb"] Nov 22 12:10:20 crc kubenswrapper[4772]: I1122 12:10:20.992305 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43446080-f192-4ecc-8261-3338c8da8d7c-scripts\") pod \"placement-5858b9d67b-vdgcb\" (UID: \"43446080-f192-4ecc-8261-3338c8da8d7c\") " pod="openstack/placement-5858b9d67b-vdgcb" Nov 22 12:10:20 crc kubenswrapper[4772]: I1122 12:10:20.992353 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43446080-f192-4ecc-8261-3338c8da8d7c-logs\") pod \"placement-5858b9d67b-vdgcb\" (UID: \"43446080-f192-4ecc-8261-3338c8da8d7c\") " pod="openstack/placement-5858b9d67b-vdgcb" Nov 22 12:10:20 crc kubenswrapper[4772]: I1122 12:10:20.992418 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h88sz\" (UniqueName: \"kubernetes.io/projected/43446080-f192-4ecc-8261-3338c8da8d7c-kube-api-access-h88sz\") pod \"placement-5858b9d67b-vdgcb\" (UID: \"43446080-f192-4ecc-8261-3338c8da8d7c\") " pod="openstack/placement-5858b9d67b-vdgcb" Nov 22 12:10:20 crc kubenswrapper[4772]: I1122 12:10:20.992612 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43446080-f192-4ecc-8261-3338c8da8d7c-config-data\") pod \"placement-5858b9d67b-vdgcb\" (UID: \"43446080-f192-4ecc-8261-3338c8da8d7c\") " pod="openstack/placement-5858b9d67b-vdgcb" Nov 22 12:10:20 crc kubenswrapper[4772]: I1122 12:10:20.992926 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43446080-f192-4ecc-8261-3338c8da8d7c-combined-ca-bundle\") pod \"placement-5858b9d67b-vdgcb\" (UID: \"43446080-f192-4ecc-8261-3338c8da8d7c\") " pod="openstack/placement-5858b9d67b-vdgcb" Nov 22 12:10:21 crc kubenswrapper[4772]: I1122 12:10:21.095015 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43446080-f192-4ecc-8261-3338c8da8d7c-config-data\") pod \"placement-5858b9d67b-vdgcb\" (UID: \"43446080-f192-4ecc-8261-3338c8da8d7c\") " pod="openstack/placement-5858b9d67b-vdgcb" Nov 22 12:10:21 crc kubenswrapper[4772]: I1122 12:10:21.095209 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43446080-f192-4ecc-8261-3338c8da8d7c-combined-ca-bundle\") pod \"placement-5858b9d67b-vdgcb\" (UID: \"43446080-f192-4ecc-8261-3338c8da8d7c\") " pod="openstack/placement-5858b9d67b-vdgcb" Nov 22 12:10:21 crc kubenswrapper[4772]: I1122 
12:10:21.095328 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43446080-f192-4ecc-8261-3338c8da8d7c-scripts\") pod \"placement-5858b9d67b-vdgcb\" (UID: \"43446080-f192-4ecc-8261-3338c8da8d7c\") " pod="openstack/placement-5858b9d67b-vdgcb" Nov 22 12:10:21 crc kubenswrapper[4772]: I1122 12:10:21.095398 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43446080-f192-4ecc-8261-3338c8da8d7c-logs\") pod \"placement-5858b9d67b-vdgcb\" (UID: \"43446080-f192-4ecc-8261-3338c8da8d7c\") " pod="openstack/placement-5858b9d67b-vdgcb" Nov 22 12:10:21 crc kubenswrapper[4772]: I1122 12:10:21.095539 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h88sz\" (UniqueName: \"kubernetes.io/projected/43446080-f192-4ecc-8261-3338c8da8d7c-kube-api-access-h88sz\") pod \"placement-5858b9d67b-vdgcb\" (UID: \"43446080-f192-4ecc-8261-3338c8da8d7c\") " pod="openstack/placement-5858b9d67b-vdgcb" Nov 22 12:10:21 crc kubenswrapper[4772]: I1122 12:10:21.095999 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43446080-f192-4ecc-8261-3338c8da8d7c-logs\") pod \"placement-5858b9d67b-vdgcb\" (UID: \"43446080-f192-4ecc-8261-3338c8da8d7c\") " pod="openstack/placement-5858b9d67b-vdgcb" Nov 22 12:10:21 crc kubenswrapper[4772]: I1122 12:10:21.100082 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43446080-f192-4ecc-8261-3338c8da8d7c-config-data\") pod \"placement-5858b9d67b-vdgcb\" (UID: \"43446080-f192-4ecc-8261-3338c8da8d7c\") " pod="openstack/placement-5858b9d67b-vdgcb" Nov 22 12:10:21 crc kubenswrapper[4772]: I1122 12:10:21.100994 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43446080-f192-4ecc-8261-3338c8da8d7c-scripts\") pod \"placement-5858b9d67b-vdgcb\" (UID: \"43446080-f192-4ecc-8261-3338c8da8d7c\") " pod="openstack/placement-5858b9d67b-vdgcb" Nov 22 12:10:21 crc kubenswrapper[4772]: I1122 12:10:21.101013 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43446080-f192-4ecc-8261-3338c8da8d7c-combined-ca-bundle\") pod \"placement-5858b9d67b-vdgcb\" (UID: \"43446080-f192-4ecc-8261-3338c8da8d7c\") " pod="openstack/placement-5858b9d67b-vdgcb" Nov 22 12:10:21 crc kubenswrapper[4772]: I1122 12:10:21.113676 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h88sz\" (UniqueName: \"kubernetes.io/projected/43446080-f192-4ecc-8261-3338c8da8d7c-kube-api-access-h88sz\") pod \"placement-5858b9d67b-vdgcb\" (UID: \"43446080-f192-4ecc-8261-3338c8da8d7c\") " pod="openstack/placement-5858b9d67b-vdgcb" Nov 22 12:10:21 crc kubenswrapper[4772]: I1122 12:10:21.224971 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5858b9d67b-vdgcb" Nov 22 12:10:21 crc kubenswrapper[4772]: I1122 12:10:21.721579 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5858b9d67b-vdgcb"] Nov 22 12:10:21 crc kubenswrapper[4772]: W1122 12:10:21.729219 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43446080_f192_4ecc_8261_3338c8da8d7c.slice/crio-c745f56bf5791e61093747dc353492f408566f83e3bf28f8d8bf1f6593c8fd48 WatchSource:0}: Error finding container c745f56bf5791e61093747dc353492f408566f83e3bf28f8d8bf1f6593c8fd48: Status 404 returned error can't find the container with id c745f56bf5791e61093747dc353492f408566f83e3bf28f8d8bf1f6593c8fd48 Nov 22 12:10:21 crc kubenswrapper[4772]: I1122 12:10:21.807510 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5858b9d67b-vdgcb" event={"ID":"43446080-f192-4ecc-8261-3338c8da8d7c","Type":"ContainerStarted","Data":"c745f56bf5791e61093747dc353492f408566f83e3bf28f8d8bf1f6593c8fd48"} Nov 22 12:10:22 crc kubenswrapper[4772]: I1122 12:10:22.817645 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5858b9d67b-vdgcb" event={"ID":"43446080-f192-4ecc-8261-3338c8da8d7c","Type":"ContainerStarted","Data":"8677dd6b49a365ae3ed35ae4edcafe7d07d3b30964d57516973e1f87a9d3b84a"} Nov 22 12:10:22 crc kubenswrapper[4772]: I1122 12:10:22.818263 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5858b9d67b-vdgcb" event={"ID":"43446080-f192-4ecc-8261-3338c8da8d7c","Type":"ContainerStarted","Data":"0825a2c989ce6ecb922ad1644b66ded8850a5287d9c0c39e7beb6fe71526c4fe"} Nov 22 12:10:22 crc kubenswrapper[4772]: I1122 12:10:22.818286 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5858b9d67b-vdgcb" Nov 22 12:10:22 crc kubenswrapper[4772]: I1122 12:10:22.818297 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5858b9d67b-vdgcb" Nov 22 12:10:22 crc kubenswrapper[4772]: I1122 12:10:22.842880 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-5858b9d67b-vdgcb" podStartSLOduration=2.842831488 podStartE2EDuration="2.842831488s" podCreationTimestamp="2025-11-22 12:10:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:10:22.834624313 +0000 UTC m=+5543.074068807" watchObservedRunningTime="2025-11-22 12:10:22.842831488 +0000 UTC m=+5543.082276002" Nov 22 12:10:26 crc kubenswrapper[4772]: I1122 12:10:26.214485 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6dc96bcddc-hfvg4" Nov 22 12:10:26 crc kubenswrapper[4772]: I1122 12:10:26.335631 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77cf987597-m5pcd"] Nov 22 12:10:26 crc kubenswrapper[4772]: I1122 12:10:26.335921 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-77cf987597-m5pcd" podUID="920c406d-db29-4d20-b46b-9da06425cf90" containerName="dnsmasq-dns" containerID="cri-o://6318a0b8d746cbe614e1f02a3b28bc144e4e756aa31d9e8d9dcbf0974e28288a" gracePeriod=10 Nov 22 12:10:26 crc kubenswrapper[4772]: I1122 12:10:26.836884 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77cf987597-m5pcd" Nov 22 12:10:26 crc kubenswrapper[4772]: I1122 12:10:26.865275 4772 generic.go:334] "Generic (PLEG): container finished" podID="920c406d-db29-4d20-b46b-9da06425cf90" containerID="6318a0b8d746cbe614e1f02a3b28bc144e4e756aa31d9e8d9dcbf0974e28288a" exitCode=0 Nov 22 12:10:26 crc kubenswrapper[4772]: I1122 12:10:26.865333 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77cf987597-m5pcd" event={"ID":"920c406d-db29-4d20-b46b-9da06425cf90","Type":"ContainerDied","Data":"6318a0b8d746cbe614e1f02a3b28bc144e4e756aa31d9e8d9dcbf0974e28288a"} Nov 22 12:10:26 crc kubenswrapper[4772]: I1122 12:10:26.865366 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77cf987597-m5pcd" Nov 22 12:10:26 crc kubenswrapper[4772]: I1122 12:10:26.865410 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77cf987597-m5pcd" event={"ID":"920c406d-db29-4d20-b46b-9da06425cf90","Type":"ContainerDied","Data":"0ad624239e3e07e0a3766a3cf1c2e834df0463be7e9537f57b3da20ab6c6d059"} Nov 22 12:10:26 crc kubenswrapper[4772]: I1122 12:10:26.865440 4772 scope.go:117] "RemoveContainer" containerID="6318a0b8d746cbe614e1f02a3b28bc144e4e756aa31d9e8d9dcbf0974e28288a" Nov 22 12:10:26 crc kubenswrapper[4772]: I1122 12:10:26.897803 4772 scope.go:117] "RemoveContainer" containerID="9f3f759b6bbf5538436b10990dd33e302095dcecaaeb247c5295b6d6b7b0000f" Nov 22 12:10:26 crc kubenswrapper[4772]: I1122 12:10:26.935945 4772 scope.go:117] "RemoveContainer" containerID="6318a0b8d746cbe614e1f02a3b28bc144e4e756aa31d9e8d9dcbf0974e28288a" Nov 22 12:10:26 crc kubenswrapper[4772]: I1122 12:10:26.936573 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/920c406d-db29-4d20-b46b-9da06425cf90-ovsdbserver-nb\") pod \"920c406d-db29-4d20-b46b-9da06425cf90\" (UID: \"920c406d-db29-4d20-b46b-9da06425cf90\") " Nov 22 12:10:26 crc kubenswrapper[4772]: E1122 12:10:26.936704 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6318a0b8d746cbe614e1f02a3b28bc144e4e756aa31d9e8d9dcbf0974e28288a\": container with ID starting with 6318a0b8d746cbe614e1f02a3b28bc144e4e756aa31d9e8d9dcbf0974e28288a not found: ID does not exist" containerID="6318a0b8d746cbe614e1f02a3b28bc144e4e756aa31d9e8d9dcbf0974e28288a" Nov 22 12:10:26 crc kubenswrapper[4772]: I1122 12:10:26.936776 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6318a0b8d746cbe614e1f02a3b28bc144e4e756aa31d9e8d9dcbf0974e28288a"} err="failed to get container status \"6318a0b8d746cbe614e1f02a3b28bc144e4e756aa31d9e8d9dcbf0974e28288a\": rpc error: code = NotFound desc = could not find container \"6318a0b8d746cbe614e1f02a3b28bc144e4e756aa31d9e8d9dcbf0974e28288a\": container with ID starting with 6318a0b8d746cbe614e1f02a3b28bc144e4e756aa31d9e8d9dcbf0974e28288a not found: ID does not exist" Nov 22 12:10:26 crc kubenswrapper[4772]: I1122 12:10:26.936819 4772 scope.go:117] "RemoveContainer" containerID="9f3f759b6bbf5538436b10990dd33e302095dcecaaeb247c5295b6d6b7b0000f" Nov 22 12:10:26 crc kubenswrapper[4772]: I1122 12:10:26.936838 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/920c406d-db29-4d20-b46b-9da06425cf90-config\") pod 
\"920c406d-db29-4d20-b46b-9da06425cf90\" (UID: \"920c406d-db29-4d20-b46b-9da06425cf90\") " Nov 22 12:10:26 crc kubenswrapper[4772]: I1122 12:10:26.936938 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/920c406d-db29-4d20-b46b-9da06425cf90-dns-svc\") pod \"920c406d-db29-4d20-b46b-9da06425cf90\" (UID: \"920c406d-db29-4d20-b46b-9da06425cf90\") " Nov 22 12:10:26 crc kubenswrapper[4772]: I1122 12:10:26.937081 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/920c406d-db29-4d20-b46b-9da06425cf90-ovsdbserver-sb\") pod \"920c406d-db29-4d20-b46b-9da06425cf90\" (UID: \"920c406d-db29-4d20-b46b-9da06425cf90\") " Nov 22 12:10:26 crc kubenswrapper[4772]: I1122 12:10:26.937152 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmpcv\" (UniqueName: \"kubernetes.io/projected/920c406d-db29-4d20-b46b-9da06425cf90-kube-api-access-xmpcv\") pod \"920c406d-db29-4d20-b46b-9da06425cf90\" (UID: \"920c406d-db29-4d20-b46b-9da06425cf90\") " Nov 22 12:10:26 crc kubenswrapper[4772]: E1122 12:10:26.938129 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f3f759b6bbf5538436b10990dd33e302095dcecaaeb247c5295b6d6b7b0000f\": container with ID starting with 9f3f759b6bbf5538436b10990dd33e302095dcecaaeb247c5295b6d6b7b0000f not found: ID does not exist" containerID="9f3f759b6bbf5538436b10990dd33e302095dcecaaeb247c5295b6d6b7b0000f" Nov 22 12:10:26 crc kubenswrapper[4772]: I1122 12:10:26.938201 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f3f759b6bbf5538436b10990dd33e302095dcecaaeb247c5295b6d6b7b0000f"} err="failed to get container status \"9f3f759b6bbf5538436b10990dd33e302095dcecaaeb247c5295b6d6b7b0000f\": rpc error: code = NotFound desc = could not find container \"9f3f759b6bbf5538436b10990dd33e302095dcecaaeb247c5295b6d6b7b0000f\": container with ID starting with 9f3f759b6bbf5538436b10990dd33e302095dcecaaeb247c5295b6d6b7b0000f not found: ID does not exist" Nov 22 12:10:26 crc kubenswrapper[4772]: I1122 12:10:26.943618 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/920c406d-db29-4d20-b46b-9da06425cf90-kube-api-access-xmpcv" (OuterVolumeSpecName: "kube-api-access-xmpcv") pod "920c406d-db29-4d20-b46b-9da06425cf90" (UID: "920c406d-db29-4d20-b46b-9da06425cf90"). InnerVolumeSpecName "kube-api-access-xmpcv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:10:26 crc kubenswrapper[4772]: I1122 12:10:26.992628 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/920c406d-db29-4d20-b46b-9da06425cf90-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "920c406d-db29-4d20-b46b-9da06425cf90" (UID: "920c406d-db29-4d20-b46b-9da06425cf90"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 12:10:26 crc kubenswrapper[4772]: I1122 12:10:26.993344 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/920c406d-db29-4d20-b46b-9da06425cf90-config" (OuterVolumeSpecName: "config") pod "920c406d-db29-4d20-b46b-9da06425cf90" (UID: "920c406d-db29-4d20-b46b-9da06425cf90"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 12:10:26 crc kubenswrapper[4772]: I1122 12:10:26.997783 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/920c406d-db29-4d20-b46b-9da06425cf90-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "920c406d-db29-4d20-b46b-9da06425cf90" (UID: "920c406d-db29-4d20-b46b-9da06425cf90"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 12:10:27 crc kubenswrapper[4772]: I1122 12:10:27.009599 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/920c406d-db29-4d20-b46b-9da06425cf90-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "920c406d-db29-4d20-b46b-9da06425cf90" (UID: "920c406d-db29-4d20-b46b-9da06425cf90"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 12:10:27 crc kubenswrapper[4772]: I1122 12:10:27.039339 4772 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/920c406d-db29-4d20-b46b-9da06425cf90-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 12:10:27 crc kubenswrapper[4772]: I1122 12:10:27.039385 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/920c406d-db29-4d20-b46b-9da06425cf90-config\") on node \"crc\" DevicePath \"\"" Nov 22 12:10:27 crc kubenswrapper[4772]: I1122 12:10:27.039397 4772 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/920c406d-db29-4d20-b46b-9da06425cf90-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 12:10:27 crc kubenswrapper[4772]: I1122 12:10:27.039410 4772 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/920c406d-db29-4d20-b46b-9da06425cf90-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 22 12:10:27 crc kubenswrapper[4772]: I1122 12:10:27.039423 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmpcv\" (UniqueName: \"kubernetes.io/projected/920c406d-db29-4d20-b46b-9da06425cf90-kube-api-access-xmpcv\") on node \"crc\" DevicePath \"\"" Nov 22 12:10:27 crc kubenswrapper[4772]: I1122 12:10:27.219173 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77cf987597-m5pcd"] Nov 22 12:10:27 crc kubenswrapper[4772]: I1122 12:10:27.226706 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-77cf987597-m5pcd"] Nov 22 12:10:27 crc kubenswrapper[4772]: I1122 12:10:27.426442 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="920c406d-db29-4d20-b46b-9da06425cf90" path="/var/lib/kubelet/pods/920c406d-db29-4d20-b46b-9da06425cf90/volumes" Nov 22 12:10:31 crc kubenswrapper[4772]: I1122 12:10:31.533414 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 12:10:31 crc kubenswrapper[4772]: I1122 12:10:31.534491 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Nov 22 12:10:52 crc kubenswrapper[4772]: I1122 12:10:52.271451 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5858b9d67b-vdgcb" Nov 22 12:10:52 crc kubenswrapper[4772]: I1122 12:10:52.274813 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5858b9d67b-vdgcb" Nov 22 12:11:01 crc kubenswrapper[4772]: I1122 12:11:01.533839 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 12:11:01 crc kubenswrapper[4772]: I1122 12:11:01.534929 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 12:11:11 crc kubenswrapper[4772]: I1122 12:11:11.556806 4772 scope.go:117] "RemoveContainer" containerID="b4b8758e0d60fae224833126fa7861be41daeab8f3f7075378ebb7a58bd59b76" Nov 22 12:11:11 crc kubenswrapper[4772]: I1122 12:11:11.614847 4772 scope.go:117] "RemoveContainer" containerID="361cd88685d6838aeb310ac0b42d962d77b470a17c9c916be0cba6f3e164baec" Nov 22 12:11:11 crc kubenswrapper[4772]: I1122 12:11:11.665410 4772 scope.go:117] "RemoveContainer" containerID="2fc17e59d25f805c8a89ccc3dd0361a54be00af89cfae8b73bde753e350b467a" Nov 22 12:11:14 crc kubenswrapper[4772]: I1122 12:11:14.978117 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-hlhl9"] Nov 22 12:11:14 crc kubenswrapper[4772]: E1122 12:11:14.978719 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="920c406d-db29-4d20-b46b-9da06425cf90" containerName="dnsmasq-dns" Nov 22 12:11:14 crc kubenswrapper[4772]: I1122 12:11:14.978734 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="920c406d-db29-4d20-b46b-9da06425cf90" containerName="dnsmasq-dns" Nov 22 12:11:14 crc kubenswrapper[4772]: E1122 12:11:14.978755 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="920c406d-db29-4d20-b46b-9da06425cf90" containerName="init" Nov 22 12:11:14 crc kubenswrapper[4772]: I1122 12:11:14.978762 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="920c406d-db29-4d20-b46b-9da06425cf90" containerName="init" Nov 22 12:11:14 crc kubenswrapper[4772]: I1122 12:11:14.978934 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="920c406d-db29-4d20-b46b-9da06425cf90" containerName="dnsmasq-dns" Nov 22 12:11:14 crc kubenswrapper[4772]: I1122 12:11:14.979600 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-hlhl9" Nov 22 12:11:14 crc kubenswrapper[4772]: I1122 12:11:14.999208 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-hlhl9"] Nov 22 12:11:15 crc kubenswrapper[4772]: I1122 12:11:15.067838 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-mfq9z"] Nov 22 12:11:15 crc kubenswrapper[4772]: I1122 12:11:15.069066 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-mfq9z" Nov 22 12:11:15 crc kubenswrapper[4772]: I1122 12:11:15.082611 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-mfq9z"] Nov 22 12:11:15 crc kubenswrapper[4772]: I1122 12:11:15.138921 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfwhk\" (UniqueName: \"kubernetes.io/projected/03077780-f463-4f53-b015-90a5f8f956d9-kube-api-access-hfwhk\") pod \"nova-cell0-db-create-mfq9z\" (UID: \"03077780-f463-4f53-b015-90a5f8f956d9\") " pod="openstack/nova-cell0-db-create-mfq9z" Nov 22 12:11:15 crc kubenswrapper[4772]: I1122 12:11:15.139039 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnfkg\" (UniqueName: \"kubernetes.io/projected/82313d39-be83-4193-b2ef-fc54a9f6ceae-kube-api-access-jnfkg\") pod \"nova-api-db-create-hlhl9\" (UID: \"82313d39-be83-4193-b2ef-fc54a9f6ceae\") " pod="openstack/nova-api-db-create-hlhl9" Nov 22 12:11:15 crc kubenswrapper[4772]: I1122 12:11:15.240900 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfwhk\" (UniqueName: \"kubernetes.io/projected/03077780-f463-4f53-b015-90a5f8f956d9-kube-api-access-hfwhk\") pod \"nova-cell0-db-create-mfq9z\" (UID: \"03077780-f463-4f53-b015-90a5f8f956d9\") " pod="openstack/nova-cell0-db-create-mfq9z" Nov 22 12:11:15 crc kubenswrapper[4772]: I1122 12:11:15.241014 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnfkg\" (UniqueName: \"kubernetes.io/projected/82313d39-be83-4193-b2ef-fc54a9f6ceae-kube-api-access-jnfkg\") pod \"nova-api-db-create-hlhl9\" (UID: \"82313d39-be83-4193-b2ef-fc54a9f6ceae\") " pod="openstack/nova-api-db-create-hlhl9" Nov 22 12:11:15 crc kubenswrapper[4772]: I1122 12:11:15.273440 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-xdm5h"] Nov 22 12:11:15 crc kubenswrapper[4772]: I1122 12:11:15.274992 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnfkg\" (UniqueName: \"kubernetes.io/projected/82313d39-be83-4193-b2ef-fc54a9f6ceae-kube-api-access-jnfkg\") pod \"nova-api-db-create-hlhl9\" (UID: \"82313d39-be83-4193-b2ef-fc54a9f6ceae\") " pod="openstack/nova-api-db-create-hlhl9" Nov 22 12:11:15 crc kubenswrapper[4772]: I1122 12:11:15.275540 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-xdm5h" Nov 22 12:11:15 crc kubenswrapper[4772]: I1122 12:11:15.277128 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfwhk\" (UniqueName: \"kubernetes.io/projected/03077780-f463-4f53-b015-90a5f8f956d9-kube-api-access-hfwhk\") pod \"nova-cell0-db-create-mfq9z\" (UID: \"03077780-f463-4f53-b015-90a5f8f956d9\") " pod="openstack/nova-cell0-db-create-mfq9z" Nov 22 12:11:15 crc kubenswrapper[4772]: I1122 12:11:15.301704 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-hlhl9" Nov 22 12:11:15 crc kubenswrapper[4772]: I1122 12:11:15.304302 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-xdm5h"] Nov 22 12:11:15 crc kubenswrapper[4772]: I1122 12:11:15.344123 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmcfk\" (UniqueName: \"kubernetes.io/projected/41815add-6907-4973-a580-efa29ae5d5c9-kube-api-access-wmcfk\") pod \"nova-cell1-db-create-xdm5h\" (UID: \"41815add-6907-4973-a580-efa29ae5d5c9\") " pod="openstack/nova-cell1-db-create-xdm5h" Nov 22 12:11:15 crc kubenswrapper[4772]: I1122 12:11:15.386577 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-mfq9z" Nov 22 12:11:15 crc kubenswrapper[4772]: I1122 12:11:15.448564 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmcfk\" (UniqueName: \"kubernetes.io/projected/41815add-6907-4973-a580-efa29ae5d5c9-kube-api-access-wmcfk\") pod \"nova-cell1-db-create-xdm5h\" (UID: \"41815add-6907-4973-a580-efa29ae5d5c9\") " pod="openstack/nova-cell1-db-create-xdm5h" Nov 22 12:11:15 crc kubenswrapper[4772]: I1122 12:11:15.471737 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmcfk\" (UniqueName: \"kubernetes.io/projected/41815add-6907-4973-a580-efa29ae5d5c9-kube-api-access-wmcfk\") pod \"nova-cell1-db-create-xdm5h\" (UID: \"41815add-6907-4973-a580-efa29ae5d5c9\") " pod="openstack/nova-cell1-db-create-xdm5h" Nov 22 12:11:15 crc kubenswrapper[4772]: I1122 12:11:15.771325 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-xdm5h" Nov 22 12:11:15 crc kubenswrapper[4772]: I1122 12:11:15.953933 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-hlhl9"] Nov 22 12:11:16 crc kubenswrapper[4772]: I1122 12:11:16.120125 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-mfq9z"] Nov 22 12:11:16 crc kubenswrapper[4772]: I1122 12:11:16.304207 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-xdm5h"] Nov 22 12:11:16 crc kubenswrapper[4772]: W1122 12:11:16.310161 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41815add_6907_4973_a580_efa29ae5d5c9.slice/crio-8ddab5038c61ed70b89b31aad2872088c953270f1a154f8ed200d987d20c1675 WatchSource:0}: Error finding container 8ddab5038c61ed70b89b31aad2872088c953270f1a154f8ed200d987d20c1675: Status 404 returned error can't find the container with id 8ddab5038c61ed70b89b31aad2872088c953270f1a154f8ed200d987d20c1675 Nov 22 12:11:16 crc kubenswrapper[4772]: I1122 12:11:16.428110 4772 generic.go:334] "Generic (PLEG): container finished" podID="03077780-f463-4f53-b015-90a5f8f956d9" containerID="f4e1a0c1ca9b92ac0ed967aa10f58c252b6b67a351469ed7ac2d0c9d35d10d56" exitCode=0 Nov 22 12:11:16 crc kubenswrapper[4772]: I1122 12:11:16.428472 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-mfq9z" event={"ID":"03077780-f463-4f53-b015-90a5f8f956d9","Type":"ContainerDied","Data":"f4e1a0c1ca9b92ac0ed967aa10f58c252b6b67a351469ed7ac2d0c9d35d10d56"} Nov 22 12:11:16 crc kubenswrapper[4772]: I1122 12:11:16.428564 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-mfq9z" 
event={"ID":"03077780-f463-4f53-b015-90a5f8f956d9","Type":"ContainerStarted","Data":"afde02eb0aae2194b58754f4da2088f3f479bdeef81d894a8fb76bfffbf413b0"} Nov 22 12:11:16 crc kubenswrapper[4772]: I1122 12:11:16.430429 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-xdm5h" event={"ID":"41815add-6907-4973-a580-efa29ae5d5c9","Type":"ContainerStarted","Data":"8ddab5038c61ed70b89b31aad2872088c953270f1a154f8ed200d987d20c1675"} Nov 22 12:11:16 crc kubenswrapper[4772]: I1122 12:11:16.433985 4772 generic.go:334] "Generic (PLEG): container finished" podID="82313d39-be83-4193-b2ef-fc54a9f6ceae" containerID="f1d5509ce400d11ad56b5e23986f1de49d1d1110bcd4949705fe7f5db456edf4" exitCode=0 Nov 22 12:11:16 crc kubenswrapper[4772]: I1122 12:11:16.434066 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-hlhl9" event={"ID":"82313d39-be83-4193-b2ef-fc54a9f6ceae","Type":"ContainerDied","Data":"f1d5509ce400d11ad56b5e23986f1de49d1d1110bcd4949705fe7f5db456edf4"} Nov 22 12:11:16 crc kubenswrapper[4772]: I1122 12:11:16.434104 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-hlhl9" event={"ID":"82313d39-be83-4193-b2ef-fc54a9f6ceae","Type":"ContainerStarted","Data":"1a2e93dac1d09c5c44bf2e23d39eb8eb43ed4a09696f277aab7fb768ee874a28"} Nov 22 12:11:17 crc kubenswrapper[4772]: I1122 12:11:17.445069 4772 generic.go:334] "Generic (PLEG): container finished" podID="41815add-6907-4973-a580-efa29ae5d5c9" containerID="587f321ffc74c7bea714d920af2c48f5b89382eb202c58b246a83ef042b21653" exitCode=0 Nov 22 12:11:17 crc kubenswrapper[4772]: I1122 12:11:17.445163 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-xdm5h" event={"ID":"41815add-6907-4973-a580-efa29ae5d5c9","Type":"ContainerDied","Data":"587f321ffc74c7bea714d920af2c48f5b89382eb202c58b246a83ef042b21653"} Nov 22 12:11:17 crc kubenswrapper[4772]: I1122 12:11:17.883607 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-hlhl9" Nov 22 12:11:17 crc kubenswrapper[4772]: I1122 12:11:17.890521 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-mfq9z" Nov 22 12:11:18 crc kubenswrapper[4772]: I1122 12:11:18.022864 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hfwhk\" (UniqueName: \"kubernetes.io/projected/03077780-f463-4f53-b015-90a5f8f956d9-kube-api-access-hfwhk\") pod \"03077780-f463-4f53-b015-90a5f8f956d9\" (UID: \"03077780-f463-4f53-b015-90a5f8f956d9\") " Nov 22 12:11:18 crc kubenswrapper[4772]: I1122 12:11:18.022929 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnfkg\" (UniqueName: \"kubernetes.io/projected/82313d39-be83-4193-b2ef-fc54a9f6ceae-kube-api-access-jnfkg\") pod \"82313d39-be83-4193-b2ef-fc54a9f6ceae\" (UID: \"82313d39-be83-4193-b2ef-fc54a9f6ceae\") " Nov 22 12:11:18 crc kubenswrapper[4772]: I1122 12:11:18.029770 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82313d39-be83-4193-b2ef-fc54a9f6ceae-kube-api-access-jnfkg" (OuterVolumeSpecName: "kube-api-access-jnfkg") pod "82313d39-be83-4193-b2ef-fc54a9f6ceae" (UID: "82313d39-be83-4193-b2ef-fc54a9f6ceae"). InnerVolumeSpecName "kube-api-access-jnfkg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:11:18 crc kubenswrapper[4772]: I1122 12:11:18.039423 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03077780-f463-4f53-b015-90a5f8f956d9-kube-api-access-hfwhk" (OuterVolumeSpecName: "kube-api-access-hfwhk") pod "03077780-f463-4f53-b015-90a5f8f956d9" (UID: "03077780-f463-4f53-b015-90a5f8f956d9"). InnerVolumeSpecName "kube-api-access-hfwhk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:11:18 crc kubenswrapper[4772]: I1122 12:11:18.125309 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnfkg\" (UniqueName: \"kubernetes.io/projected/82313d39-be83-4193-b2ef-fc54a9f6ceae-kube-api-access-jnfkg\") on node \"crc\" DevicePath \"\"" Nov 22 12:11:18 crc kubenswrapper[4772]: I1122 12:11:18.125361 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hfwhk\" (UniqueName: \"kubernetes.io/projected/03077780-f463-4f53-b015-90a5f8f956d9-kube-api-access-hfwhk\") on node \"crc\" DevicePath \"\"" Nov 22 12:11:18 crc kubenswrapper[4772]: I1122 12:11:18.460347 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-mfq9z" event={"ID":"03077780-f463-4f53-b015-90a5f8f956d9","Type":"ContainerDied","Data":"afde02eb0aae2194b58754f4da2088f3f479bdeef81d894a8fb76bfffbf413b0"} Nov 22 12:11:18 crc kubenswrapper[4772]: I1122 12:11:18.460398 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afde02eb0aae2194b58754f4da2088f3f479bdeef81d894a8fb76bfffbf413b0" Nov 22 12:11:18 crc kubenswrapper[4772]: I1122 12:11:18.460407 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-mfq9z" Nov 22 12:11:18 crc kubenswrapper[4772]: I1122 12:11:18.462920 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-hlhl9" Nov 22 12:11:18 crc kubenswrapper[4772]: I1122 12:11:18.462959 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-hlhl9" event={"ID":"82313d39-be83-4193-b2ef-fc54a9f6ceae","Type":"ContainerDied","Data":"1a2e93dac1d09c5c44bf2e23d39eb8eb43ed4a09696f277aab7fb768ee874a28"} Nov 22 12:11:18 crc kubenswrapper[4772]: I1122 12:11:18.463033 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a2e93dac1d09c5c44bf2e23d39eb8eb43ed4a09696f277aab7fb768ee874a28" Nov 22 12:11:18 crc kubenswrapper[4772]: I1122 12:11:18.824540 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-xdm5h" Nov 22 12:11:18 crc kubenswrapper[4772]: I1122 12:11:18.953132 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmcfk\" (UniqueName: \"kubernetes.io/projected/41815add-6907-4973-a580-efa29ae5d5c9-kube-api-access-wmcfk\") pod \"41815add-6907-4973-a580-efa29ae5d5c9\" (UID: \"41815add-6907-4973-a580-efa29ae5d5c9\") " Nov 22 12:11:18 crc kubenswrapper[4772]: I1122 12:11:18.963288 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41815add-6907-4973-a580-efa29ae5d5c9-kube-api-access-wmcfk" (OuterVolumeSpecName: "kube-api-access-wmcfk") pod "41815add-6907-4973-a580-efa29ae5d5c9" (UID: "41815add-6907-4973-a580-efa29ae5d5c9"). InnerVolumeSpecName "kube-api-access-wmcfk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:11:19 crc kubenswrapper[4772]: I1122 12:11:19.055683 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmcfk\" (UniqueName: \"kubernetes.io/projected/41815add-6907-4973-a580-efa29ae5d5c9-kube-api-access-wmcfk\") on node \"crc\" DevicePath \"\"" Nov 22 12:11:19 crc kubenswrapper[4772]: I1122 12:11:19.487239 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-xdm5h" event={"ID":"41815add-6907-4973-a580-efa29ae5d5c9","Type":"ContainerDied","Data":"8ddab5038c61ed70b89b31aad2872088c953270f1a154f8ed200d987d20c1675"} Nov 22 12:11:19 crc kubenswrapper[4772]: I1122 12:11:19.487340 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ddab5038c61ed70b89b31aad2872088c953270f1a154f8ed200d987d20c1675" Nov 22 12:11:19 crc kubenswrapper[4772]: I1122 12:11:19.487514 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-xdm5h" Nov 22 12:11:25 crc kubenswrapper[4772]: I1122 12:11:25.056235 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-7f7a-account-create-kcdng"] Nov 22 12:11:25 crc kubenswrapper[4772]: E1122 12:11:25.057312 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03077780-f463-4f53-b015-90a5f8f956d9" containerName="mariadb-database-create" Nov 22 12:11:25 crc kubenswrapper[4772]: I1122 12:11:25.057332 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="03077780-f463-4f53-b015-90a5f8f956d9" containerName="mariadb-database-create" Nov 22 12:11:25 crc kubenswrapper[4772]: E1122 12:11:25.057355 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41815add-6907-4973-a580-efa29ae5d5c9" containerName="mariadb-database-create" Nov 22 12:11:25 crc kubenswrapper[4772]: I1122 12:11:25.057368 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="41815add-6907-4973-a580-efa29ae5d5c9" containerName="mariadb-database-create" Nov 22 12:11:25 crc kubenswrapper[4772]: E1122 12:11:25.057423 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82313d39-be83-4193-b2ef-fc54a9f6ceae" containerName="mariadb-database-create" Nov 22 12:11:25 crc kubenswrapper[4772]: I1122 12:11:25.057433 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="82313d39-be83-4193-b2ef-fc54a9f6ceae" containerName="mariadb-database-create" Nov 22 12:11:25 crc kubenswrapper[4772]: I1122 12:11:25.057729 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="03077780-f463-4f53-b015-90a5f8f956d9" containerName="mariadb-database-create" Nov 22 12:11:25 crc kubenswrapper[4772]: I1122 12:11:25.057762 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="41815add-6907-4973-a580-efa29ae5d5c9" containerName="mariadb-database-create" Nov 22 12:11:25 crc kubenswrapper[4772]: I1122 12:11:25.057789 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="82313d39-be83-4193-b2ef-fc54a9f6ceae" containerName="mariadb-database-create" Nov 22 12:11:25 crc kubenswrapper[4772]: I1122 12:11:25.058833 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-7f7a-account-create-kcdng" Nov 22 12:11:25 crc kubenswrapper[4772]: I1122 12:11:25.060890 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Nov 22 12:11:25 crc kubenswrapper[4772]: I1122 12:11:25.063731 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-7f7a-account-create-kcdng"] Nov 22 12:11:25 crc kubenswrapper[4772]: I1122 12:11:25.169479 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hh4g\" (UniqueName: \"kubernetes.io/projected/f441febd-8436-4440-9e7b-9514e36d0c1d-kube-api-access-8hh4g\") pod \"nova-api-7f7a-account-create-kcdng\" (UID: \"f441febd-8436-4440-9e7b-9514e36d0c1d\") " pod="openstack/nova-api-7f7a-account-create-kcdng" Nov 22 12:11:25 crc kubenswrapper[4772]: I1122 12:11:25.207607 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-0005-account-create-js4rt"] Nov 22 12:11:25 crc kubenswrapper[4772]: I1122 12:11:25.208984 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0005-account-create-js4rt" Nov 22 12:11:25 crc kubenswrapper[4772]: I1122 12:11:25.217495 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-0005-account-create-js4rt"] Nov 22 12:11:25 crc kubenswrapper[4772]: I1122 12:11:25.252132 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Nov 22 12:11:25 crc kubenswrapper[4772]: I1122 12:11:25.273509 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hh4g\" (UniqueName: \"kubernetes.io/projected/f441febd-8436-4440-9e7b-9514e36d0c1d-kube-api-access-8hh4g\") pod \"nova-api-7f7a-account-create-kcdng\" (UID: \"f441febd-8436-4440-9e7b-9514e36d0c1d\") " pod="openstack/nova-api-7f7a-account-create-kcdng" Nov 22 12:11:25 crc kubenswrapper[4772]: I1122 12:11:25.301953 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hh4g\" (UniqueName: \"kubernetes.io/projected/f441febd-8436-4440-9e7b-9514e36d0c1d-kube-api-access-8hh4g\") pod \"nova-api-7f7a-account-create-kcdng\" (UID: \"f441febd-8436-4440-9e7b-9514e36d0c1d\") " pod="openstack/nova-api-7f7a-account-create-kcdng" Nov 22 12:11:25 crc kubenswrapper[4772]: I1122 12:11:25.374867 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qnjg\" (UniqueName: \"kubernetes.io/projected/22ccc1a5-522d-4cf2-94d0-7f1122d28a33-kube-api-access-5qnjg\") pod \"nova-cell0-0005-account-create-js4rt\" (UID: \"22ccc1a5-522d-4cf2-94d0-7f1122d28a33\") " pod="openstack/nova-cell0-0005-account-create-js4rt" Nov 22 12:11:25 crc kubenswrapper[4772]: I1122 12:11:25.385575 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-7f7a-account-create-kcdng" Nov 22 12:11:25 crc kubenswrapper[4772]: I1122 12:11:25.409377 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-b3a9-account-create-jk7cb"] Nov 22 12:11:25 crc kubenswrapper[4772]: I1122 12:11:25.410800 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-b3a9-account-create-jk7cb" Nov 22 12:11:25 crc kubenswrapper[4772]: I1122 12:11:25.413326 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Nov 22 12:11:25 crc kubenswrapper[4772]: I1122 12:11:25.441026 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-b3a9-account-create-jk7cb"] Nov 22 12:11:25 crc kubenswrapper[4772]: I1122 12:11:25.477307 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qnjg\" (UniqueName: \"kubernetes.io/projected/22ccc1a5-522d-4cf2-94d0-7f1122d28a33-kube-api-access-5qnjg\") pod \"nova-cell0-0005-account-create-js4rt\" (UID: \"22ccc1a5-522d-4cf2-94d0-7f1122d28a33\") " pod="openstack/nova-cell0-0005-account-create-js4rt" Nov 22 12:11:25 crc kubenswrapper[4772]: I1122 12:11:25.501926 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qnjg\" (UniqueName: \"kubernetes.io/projected/22ccc1a5-522d-4cf2-94d0-7f1122d28a33-kube-api-access-5qnjg\") pod \"nova-cell0-0005-account-create-js4rt\" (UID: \"22ccc1a5-522d-4cf2-94d0-7f1122d28a33\") " pod="openstack/nova-cell0-0005-account-create-js4rt" Nov 22 12:11:26 crc kubenswrapper[4772]: I1122 12:11:25.577908 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0005-account-create-js4rt" Nov 22 12:11:26 crc kubenswrapper[4772]: I1122 12:11:25.579526 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7w7f4\" (UniqueName: \"kubernetes.io/projected/568b1330-d837-411c-b6e3-2ace97a73e7b-kube-api-access-7w7f4\") pod \"nova-cell1-b3a9-account-create-jk7cb\" (UID: \"568b1330-d837-411c-b6e3-2ace97a73e7b\") " pod="openstack/nova-cell1-b3a9-account-create-jk7cb" Nov 22 12:11:26 crc kubenswrapper[4772]: I1122 12:11:25.680884 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7w7f4\" (UniqueName: \"kubernetes.io/projected/568b1330-d837-411c-b6e3-2ace97a73e7b-kube-api-access-7w7f4\") pod \"nova-cell1-b3a9-account-create-jk7cb\" (UID: \"568b1330-d837-411c-b6e3-2ace97a73e7b\") " pod="openstack/nova-cell1-b3a9-account-create-jk7cb" Nov 22 12:11:26 crc kubenswrapper[4772]: I1122 12:11:25.697380 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7w7f4\" (UniqueName: \"kubernetes.io/projected/568b1330-d837-411c-b6e3-2ace97a73e7b-kube-api-access-7w7f4\") pod \"nova-cell1-b3a9-account-create-jk7cb\" (UID: \"568b1330-d837-411c-b6e3-2ace97a73e7b\") " pod="openstack/nova-cell1-b3a9-account-create-jk7cb" Nov 22 12:11:26 crc kubenswrapper[4772]: I1122 12:11:25.852520 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-b3a9-account-create-jk7cb" Nov 22 12:11:26 crc kubenswrapper[4772]: I1122 12:11:26.563763 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-7f7a-account-create-kcdng"] Nov 22 12:11:26 crc kubenswrapper[4772]: W1122 12:11:26.570324 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf441febd_8436_4440_9e7b_9514e36d0c1d.slice/crio-dd2c84201c2f7eadcc38805490b8039823d52d4f24f250df3dc071f2bbb6143a WatchSource:0}: Error finding container dd2c84201c2f7eadcc38805490b8039823d52d4f24f250df3dc071f2bbb6143a: Status 404 returned error can't find the container with id dd2c84201c2f7eadcc38805490b8039823d52d4f24f250df3dc071f2bbb6143a Nov 22 12:11:26 crc kubenswrapper[4772]: I1122 12:11:26.664451 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-0005-account-create-js4rt"] Nov 22 12:11:26 crc kubenswrapper[4772]: I1122 12:11:26.672075 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-b3a9-account-create-jk7cb"] Nov 22 12:11:26 crc kubenswrapper[4772]: W1122 12:11:26.680198 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod22ccc1a5_522d_4cf2_94d0_7f1122d28a33.slice/crio-775a32b141ceed114979eb6de72496d6f02f79486ddf0fd4ed921930f7dda83a WatchSource:0}: Error finding container 775a32b141ceed114979eb6de72496d6f02f79486ddf0fd4ed921930f7dda83a: Status 404 returned error can't find the container with id 775a32b141ceed114979eb6de72496d6f02f79486ddf0fd4ed921930f7dda83a Nov 22 12:11:27 crc kubenswrapper[4772]: I1122 12:11:27.570902 4772 generic.go:334] "Generic (PLEG): container finished" podID="22ccc1a5-522d-4cf2-94d0-7f1122d28a33" containerID="4b75f4b7dd1e6ec7e5650b4d1ad71341fa508812a0b07728ce9609fa7fd56f82" exitCode=0 Nov 22 12:11:27 crc kubenswrapper[4772]: I1122 12:11:27.571477 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0005-account-create-js4rt" event={"ID":"22ccc1a5-522d-4cf2-94d0-7f1122d28a33","Type":"ContainerDied","Data":"4b75f4b7dd1e6ec7e5650b4d1ad71341fa508812a0b07728ce9609fa7fd56f82"} Nov 22 12:11:27 crc kubenswrapper[4772]: I1122 12:11:27.571885 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0005-account-create-js4rt" event={"ID":"22ccc1a5-522d-4cf2-94d0-7f1122d28a33","Type":"ContainerStarted","Data":"775a32b141ceed114979eb6de72496d6f02f79486ddf0fd4ed921930f7dda83a"} Nov 22 12:11:27 crc kubenswrapper[4772]: I1122 12:11:27.576185 4772 generic.go:334] "Generic (PLEG): container finished" podID="568b1330-d837-411c-b6e3-2ace97a73e7b" containerID="ae14805617ce3032fa2ca167cec6a5c87257f96d8d9aeb775ef4cf1ebdca6ce0" exitCode=0 Nov 22 12:11:27 crc kubenswrapper[4772]: I1122 12:11:27.576307 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-b3a9-account-create-jk7cb" event={"ID":"568b1330-d837-411c-b6e3-2ace97a73e7b","Type":"ContainerDied","Data":"ae14805617ce3032fa2ca167cec6a5c87257f96d8d9aeb775ef4cf1ebdca6ce0"} Nov 22 12:11:27 crc kubenswrapper[4772]: I1122 12:11:27.576343 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-b3a9-account-create-jk7cb" event={"ID":"568b1330-d837-411c-b6e3-2ace97a73e7b","Type":"ContainerStarted","Data":"c973cabbac2204acb23b99236c684a90bcfe051aafd1d1387e7676bd92856458"} Nov 22 12:11:27 crc kubenswrapper[4772]: I1122 12:11:27.579470 4772 
generic.go:334] "Generic (PLEG): container finished" podID="f441febd-8436-4440-9e7b-9514e36d0c1d" containerID="8810095c6d564f0eb04055f48e6a73cfd9045a307dd7d90253b237943519663f" exitCode=0 Nov 22 12:11:27 crc kubenswrapper[4772]: I1122 12:11:27.579599 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-7f7a-account-create-kcdng" event={"ID":"f441febd-8436-4440-9e7b-9514e36d0c1d","Type":"ContainerDied","Data":"8810095c6d564f0eb04055f48e6a73cfd9045a307dd7d90253b237943519663f"} Nov 22 12:11:27 crc kubenswrapper[4772]: I1122 12:11:27.579670 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-7f7a-account-create-kcdng" event={"ID":"f441febd-8436-4440-9e7b-9514e36d0c1d","Type":"ContainerStarted","Data":"dd2c84201c2f7eadcc38805490b8039823d52d4f24f250df3dc071f2bbb6143a"} Nov 22 12:11:29 crc kubenswrapper[4772]: I1122 12:11:29.032439 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-b3a9-account-create-jk7cb" Nov 22 12:11:29 crc kubenswrapper[4772]: I1122 12:11:29.039809 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0005-account-create-js4rt" Nov 22 12:11:29 crc kubenswrapper[4772]: I1122 12:11:29.050341 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-7f7a-account-create-kcdng" Nov 22 12:11:29 crc kubenswrapper[4772]: I1122 12:11:29.151650 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7w7f4\" (UniqueName: \"kubernetes.io/projected/568b1330-d837-411c-b6e3-2ace97a73e7b-kube-api-access-7w7f4\") pod \"568b1330-d837-411c-b6e3-2ace97a73e7b\" (UID: \"568b1330-d837-411c-b6e3-2ace97a73e7b\") " Nov 22 12:11:29 crc kubenswrapper[4772]: I1122 12:11:29.151778 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qnjg\" (UniqueName: \"kubernetes.io/projected/22ccc1a5-522d-4cf2-94d0-7f1122d28a33-kube-api-access-5qnjg\") pod \"22ccc1a5-522d-4cf2-94d0-7f1122d28a33\" (UID: \"22ccc1a5-522d-4cf2-94d0-7f1122d28a33\") " Nov 22 12:11:29 crc kubenswrapper[4772]: I1122 12:11:29.151947 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hh4g\" (UniqueName: \"kubernetes.io/projected/f441febd-8436-4440-9e7b-9514e36d0c1d-kube-api-access-8hh4g\") pod \"f441febd-8436-4440-9e7b-9514e36d0c1d\" (UID: \"f441febd-8436-4440-9e7b-9514e36d0c1d\") " Nov 22 12:11:29 crc kubenswrapper[4772]: I1122 12:11:29.172269 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f441febd-8436-4440-9e7b-9514e36d0c1d-kube-api-access-8hh4g" (OuterVolumeSpecName: "kube-api-access-8hh4g") pod "f441febd-8436-4440-9e7b-9514e36d0c1d" (UID: "f441febd-8436-4440-9e7b-9514e36d0c1d"). InnerVolumeSpecName "kube-api-access-8hh4g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:11:29 crc kubenswrapper[4772]: I1122 12:11:29.186879 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/568b1330-d837-411c-b6e3-2ace97a73e7b-kube-api-access-7w7f4" (OuterVolumeSpecName: "kube-api-access-7w7f4") pod "568b1330-d837-411c-b6e3-2ace97a73e7b" (UID: "568b1330-d837-411c-b6e3-2ace97a73e7b"). InnerVolumeSpecName "kube-api-access-7w7f4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:11:29 crc kubenswrapper[4772]: I1122 12:11:29.188796 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22ccc1a5-522d-4cf2-94d0-7f1122d28a33-kube-api-access-5qnjg" (OuterVolumeSpecName: "kube-api-access-5qnjg") pod "22ccc1a5-522d-4cf2-94d0-7f1122d28a33" (UID: "22ccc1a5-522d-4cf2-94d0-7f1122d28a33"). InnerVolumeSpecName "kube-api-access-5qnjg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:11:29 crc kubenswrapper[4772]: I1122 12:11:29.259220 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hh4g\" (UniqueName: \"kubernetes.io/projected/f441febd-8436-4440-9e7b-9514e36d0c1d-kube-api-access-8hh4g\") on node \"crc\" DevicePath \"\"" Nov 22 12:11:29 crc kubenswrapper[4772]: I1122 12:11:29.259265 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7w7f4\" (UniqueName: \"kubernetes.io/projected/568b1330-d837-411c-b6e3-2ace97a73e7b-kube-api-access-7w7f4\") on node \"crc\" DevicePath \"\"" Nov 22 12:11:29 crc kubenswrapper[4772]: I1122 12:11:29.259277 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qnjg\" (UniqueName: \"kubernetes.io/projected/22ccc1a5-522d-4cf2-94d0-7f1122d28a33-kube-api-access-5qnjg\") on node \"crc\" DevicePath \"\"" Nov 22 12:11:29 crc kubenswrapper[4772]: I1122 12:11:29.601852 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-b3a9-account-create-jk7cb" event={"ID":"568b1330-d837-411c-b6e3-2ace97a73e7b","Type":"ContainerDied","Data":"c973cabbac2204acb23b99236c684a90bcfe051aafd1d1387e7676bd92856458"} Nov 22 12:11:29 crc kubenswrapper[4772]: I1122 12:11:29.601901 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c973cabbac2204acb23b99236c684a90bcfe051aafd1d1387e7676bd92856458" Nov 22 12:11:29 crc kubenswrapper[4772]: I1122 12:11:29.601896 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-b3a9-account-create-jk7cb" Nov 22 12:11:29 crc kubenswrapper[4772]: I1122 12:11:29.603661 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-7f7a-account-create-kcdng" Nov 22 12:11:29 crc kubenswrapper[4772]: I1122 12:11:29.603754 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-7f7a-account-create-kcdng" event={"ID":"f441febd-8436-4440-9e7b-9514e36d0c1d","Type":"ContainerDied","Data":"dd2c84201c2f7eadcc38805490b8039823d52d4f24f250df3dc071f2bbb6143a"} Nov 22 12:11:29 crc kubenswrapper[4772]: I1122 12:11:29.604002 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd2c84201c2f7eadcc38805490b8039823d52d4f24f250df3dc071f2bbb6143a" Nov 22 12:11:29 crc kubenswrapper[4772]: I1122 12:11:29.606297 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0005-account-create-js4rt" event={"ID":"22ccc1a5-522d-4cf2-94d0-7f1122d28a33","Type":"ContainerDied","Data":"775a32b141ceed114979eb6de72496d6f02f79486ddf0fd4ed921930f7dda83a"} Nov 22 12:11:29 crc kubenswrapper[4772]: I1122 12:11:29.606319 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="775a32b141ceed114979eb6de72496d6f02f79486ddf0fd4ed921930f7dda83a" Nov 22 12:11:29 crc kubenswrapper[4772]: I1122 12:11:29.606353 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-0005-account-create-js4rt" Nov 22 12:11:30 crc kubenswrapper[4772]: I1122 12:11:30.433940 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-cmq55"] Nov 22 12:11:30 crc kubenswrapper[4772]: E1122 12:11:30.435377 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f441febd-8436-4440-9e7b-9514e36d0c1d" containerName="mariadb-account-create" Nov 22 12:11:30 crc kubenswrapper[4772]: I1122 12:11:30.435407 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="f441febd-8436-4440-9e7b-9514e36d0c1d" containerName="mariadb-account-create" Nov 22 12:11:30 crc kubenswrapper[4772]: E1122 12:11:30.435447 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22ccc1a5-522d-4cf2-94d0-7f1122d28a33" containerName="mariadb-account-create" Nov 22 12:11:30 crc kubenswrapper[4772]: I1122 12:11:30.435456 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="22ccc1a5-522d-4cf2-94d0-7f1122d28a33" containerName="mariadb-account-create" Nov 22 12:11:30 crc kubenswrapper[4772]: E1122 12:11:30.435486 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="568b1330-d837-411c-b6e3-2ace97a73e7b" containerName="mariadb-account-create" Nov 22 12:11:30 crc kubenswrapper[4772]: I1122 12:11:30.435499 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="568b1330-d837-411c-b6e3-2ace97a73e7b" containerName="mariadb-account-create" Nov 22 12:11:30 crc kubenswrapper[4772]: I1122 12:11:30.436096 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="568b1330-d837-411c-b6e3-2ace97a73e7b" containerName="mariadb-account-create" Nov 22 12:11:30 crc kubenswrapper[4772]: I1122 12:11:30.436162 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="f441febd-8436-4440-9e7b-9514e36d0c1d" containerName="mariadb-account-create" Nov 22 12:11:30 crc kubenswrapper[4772]: I1122 12:11:30.436215 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="22ccc1a5-522d-4cf2-94d0-7f1122d28a33" containerName="mariadb-account-create" Nov 22 12:11:30 crc kubenswrapper[4772]: I1122 12:11:30.437530 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-cmq55" Nov 22 12:11:30 crc kubenswrapper[4772]: I1122 12:11:30.447369 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-zsd9k" Nov 22 12:11:30 crc kubenswrapper[4772]: I1122 12:11:30.447463 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Nov 22 12:11:30 crc kubenswrapper[4772]: I1122 12:11:30.447933 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 22 12:11:30 crc kubenswrapper[4772]: I1122 12:11:30.486911 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d84fec3-5230-479b-8b1e-4ca40adc8928-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-cmq55\" (UID: \"8d84fec3-5230-479b-8b1e-4ca40adc8928\") " pod="openstack/nova-cell0-conductor-db-sync-cmq55" Nov 22 12:11:30 crc kubenswrapper[4772]: I1122 12:11:30.487467 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d84fec3-5230-479b-8b1e-4ca40adc8928-config-data\") pod \"nova-cell0-conductor-db-sync-cmq55\" (UID: \"8d84fec3-5230-479b-8b1e-4ca40adc8928\") " pod="openstack/nova-cell0-conductor-db-sync-cmq55" Nov 22 12:11:30 crc kubenswrapper[4772]: I1122 12:11:30.487499 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d84fec3-5230-479b-8b1e-4ca40adc8928-scripts\") pod \"nova-cell0-conductor-db-sync-cmq55\" (UID: \"8d84fec3-5230-479b-8b1e-4ca40adc8928\") " pod="openstack/nova-cell0-conductor-db-sync-cmq55" Nov 22 12:11:30 crc kubenswrapper[4772]: I1122 12:11:30.487517 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgkkp\" (UniqueName: \"kubernetes.io/projected/8d84fec3-5230-479b-8b1e-4ca40adc8928-kube-api-access-wgkkp\") pod \"nova-cell0-conductor-db-sync-cmq55\" (UID: \"8d84fec3-5230-479b-8b1e-4ca40adc8928\") " pod="openstack/nova-cell0-conductor-db-sync-cmq55" Nov 22 12:11:30 crc kubenswrapper[4772]: I1122 12:11:30.489673 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-cmq55"] Nov 22 12:11:30 crc kubenswrapper[4772]: I1122 12:11:30.589019 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d84fec3-5230-479b-8b1e-4ca40adc8928-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-cmq55\" (UID: \"8d84fec3-5230-479b-8b1e-4ca40adc8928\") " pod="openstack/nova-cell0-conductor-db-sync-cmq55" Nov 22 12:11:30 crc kubenswrapper[4772]: I1122 12:11:30.589136 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d84fec3-5230-479b-8b1e-4ca40adc8928-config-data\") pod \"nova-cell0-conductor-db-sync-cmq55\" (UID: \"8d84fec3-5230-479b-8b1e-4ca40adc8928\") " pod="openstack/nova-cell0-conductor-db-sync-cmq55" Nov 22 12:11:30 crc kubenswrapper[4772]: I1122 12:11:30.589165 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d84fec3-5230-479b-8b1e-4ca40adc8928-scripts\") pod \"nova-cell0-conductor-db-sync-cmq55\" (UID: 
\"8d84fec3-5230-479b-8b1e-4ca40adc8928\") " pod="openstack/nova-cell0-conductor-db-sync-cmq55" Nov 22 12:11:30 crc kubenswrapper[4772]: I1122 12:11:30.589198 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgkkp\" (UniqueName: \"kubernetes.io/projected/8d84fec3-5230-479b-8b1e-4ca40adc8928-kube-api-access-wgkkp\") pod \"nova-cell0-conductor-db-sync-cmq55\" (UID: \"8d84fec3-5230-479b-8b1e-4ca40adc8928\") " pod="openstack/nova-cell0-conductor-db-sync-cmq55" Nov 22 12:11:30 crc kubenswrapper[4772]: I1122 12:11:30.593170 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d84fec3-5230-479b-8b1e-4ca40adc8928-scripts\") pod \"nova-cell0-conductor-db-sync-cmq55\" (UID: \"8d84fec3-5230-479b-8b1e-4ca40adc8928\") " pod="openstack/nova-cell0-conductor-db-sync-cmq55" Nov 22 12:11:30 crc kubenswrapper[4772]: I1122 12:11:30.596242 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d84fec3-5230-479b-8b1e-4ca40adc8928-config-data\") pod \"nova-cell0-conductor-db-sync-cmq55\" (UID: \"8d84fec3-5230-479b-8b1e-4ca40adc8928\") " pod="openstack/nova-cell0-conductor-db-sync-cmq55" Nov 22 12:11:30 crc kubenswrapper[4772]: I1122 12:11:30.604178 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d84fec3-5230-479b-8b1e-4ca40adc8928-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-cmq55\" (UID: \"8d84fec3-5230-479b-8b1e-4ca40adc8928\") " pod="openstack/nova-cell0-conductor-db-sync-cmq55" Nov 22 12:11:30 crc kubenswrapper[4772]: I1122 12:11:30.609650 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgkkp\" (UniqueName: \"kubernetes.io/projected/8d84fec3-5230-479b-8b1e-4ca40adc8928-kube-api-access-wgkkp\") pod \"nova-cell0-conductor-db-sync-cmq55\" (UID: \"8d84fec3-5230-479b-8b1e-4ca40adc8928\") " pod="openstack/nova-cell0-conductor-db-sync-cmq55" Nov 22 12:11:30 crc kubenswrapper[4772]: I1122 12:11:30.787192 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-cmq55" Nov 22 12:11:31 crc kubenswrapper[4772]: I1122 12:11:31.252127 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-cmq55"] Nov 22 12:11:31 crc kubenswrapper[4772]: I1122 12:11:31.533308 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 12:11:31 crc kubenswrapper[4772]: I1122 12:11:31.533383 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 12:11:31 crc kubenswrapper[4772]: I1122 12:11:31.533445 4772 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" Nov 22 12:11:31 crc kubenswrapper[4772]: I1122 12:11:31.534409 4772 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a0559f4649dac5095b0b01f40c57f009ee300eceb2af79f78144e4e5d01c3049"} pod="openshift-machine-config-operator/machine-config-daemon-wwshd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 12:11:31 crc kubenswrapper[4772]: I1122 12:11:31.534476 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" containerID="cri-o://a0559f4649dac5095b0b01f40c57f009ee300eceb2af79f78144e4e5d01c3049" gracePeriod=600 Nov 22 12:11:31 crc kubenswrapper[4772]: I1122 12:11:31.629917 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-cmq55" event={"ID":"8d84fec3-5230-479b-8b1e-4ca40adc8928","Type":"ContainerStarted","Data":"975f8f00019cedb23c43c9153b469bed506b6112732c4558e3f1ae829c926dce"} Nov 22 12:11:31 crc kubenswrapper[4772]: I1122 12:11:31.630344 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-cmq55" event={"ID":"8d84fec3-5230-479b-8b1e-4ca40adc8928","Type":"ContainerStarted","Data":"eef47fbfeedfca6985417dfa3c49753c073f52c93bc6435c67dd7c48089f5768"} Nov 22 12:11:32 crc kubenswrapper[4772]: I1122 12:11:32.650242 4772 generic.go:334] "Generic (PLEG): container finished" podID="2386c238-461f-4956-940f-ac3c26eb052e" containerID="a0559f4649dac5095b0b01f40c57f009ee300eceb2af79f78144e4e5d01c3049" exitCode=0 Nov 22 12:11:32 crc kubenswrapper[4772]: I1122 12:11:32.650344 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" event={"ID":"2386c238-461f-4956-940f-ac3c26eb052e","Type":"ContainerDied","Data":"a0559f4649dac5095b0b01f40c57f009ee300eceb2af79f78144e4e5d01c3049"} Nov 22 12:11:32 crc kubenswrapper[4772]: I1122 12:11:32.650766 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" 
event={"ID":"2386c238-461f-4956-940f-ac3c26eb052e","Type":"ContainerStarted","Data":"0acc780374533c91fb6e94ed6fa4eb88f1ad8ecfc715db40059fc9377d70e080"} Nov 22 12:11:32 crc kubenswrapper[4772]: I1122 12:11:32.650905 4772 scope.go:117] "RemoveContainer" containerID="b7c017a2ad8566061573012df3338326de3180226814eea67d7d515c52483472" Nov 22 12:11:32 crc kubenswrapper[4772]: I1122 12:11:32.679836 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-cmq55" podStartSLOduration=2.67981114 podStartE2EDuration="2.67981114s" podCreationTimestamp="2025-11-22 12:11:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:11:31.655799063 +0000 UTC m=+5611.895243637" watchObservedRunningTime="2025-11-22 12:11:32.67981114 +0000 UTC m=+5612.919255644" Nov 22 12:11:37 crc kubenswrapper[4772]: I1122 12:11:37.706784 4772 generic.go:334] "Generic (PLEG): container finished" podID="8d84fec3-5230-479b-8b1e-4ca40adc8928" containerID="975f8f00019cedb23c43c9153b469bed506b6112732c4558e3f1ae829c926dce" exitCode=0 Nov 22 12:11:37 crc kubenswrapper[4772]: I1122 12:11:37.706906 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-cmq55" event={"ID":"8d84fec3-5230-479b-8b1e-4ca40adc8928","Type":"ContainerDied","Data":"975f8f00019cedb23c43c9153b469bed506b6112732c4558e3f1ae829c926dce"} Nov 22 12:11:39 crc kubenswrapper[4772]: I1122 12:11:39.043071 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-cmq55" Nov 22 12:11:39 crc kubenswrapper[4772]: I1122 12:11:39.138997 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d84fec3-5230-479b-8b1e-4ca40adc8928-combined-ca-bundle\") pod \"8d84fec3-5230-479b-8b1e-4ca40adc8928\" (UID: \"8d84fec3-5230-479b-8b1e-4ca40adc8928\") " Nov 22 12:11:39 crc kubenswrapper[4772]: I1122 12:11:39.139368 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d84fec3-5230-479b-8b1e-4ca40adc8928-config-data\") pod \"8d84fec3-5230-479b-8b1e-4ca40adc8928\" (UID: \"8d84fec3-5230-479b-8b1e-4ca40adc8928\") " Nov 22 12:11:39 crc kubenswrapper[4772]: I1122 12:11:39.139532 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d84fec3-5230-479b-8b1e-4ca40adc8928-scripts\") pod \"8d84fec3-5230-479b-8b1e-4ca40adc8928\" (UID: \"8d84fec3-5230-479b-8b1e-4ca40adc8928\") " Nov 22 12:11:39 crc kubenswrapper[4772]: I1122 12:11:39.139782 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wgkkp\" (UniqueName: \"kubernetes.io/projected/8d84fec3-5230-479b-8b1e-4ca40adc8928-kube-api-access-wgkkp\") pod \"8d84fec3-5230-479b-8b1e-4ca40adc8928\" (UID: \"8d84fec3-5230-479b-8b1e-4ca40adc8928\") " Nov 22 12:11:39 crc kubenswrapper[4772]: I1122 12:11:39.147255 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d84fec3-5230-479b-8b1e-4ca40adc8928-scripts" (OuterVolumeSpecName: "scripts") pod "8d84fec3-5230-479b-8b1e-4ca40adc8928" (UID: "8d84fec3-5230-479b-8b1e-4ca40adc8928"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:11:39 crc kubenswrapper[4772]: I1122 12:11:39.147331 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d84fec3-5230-479b-8b1e-4ca40adc8928-kube-api-access-wgkkp" (OuterVolumeSpecName: "kube-api-access-wgkkp") pod "8d84fec3-5230-479b-8b1e-4ca40adc8928" (UID: "8d84fec3-5230-479b-8b1e-4ca40adc8928"). InnerVolumeSpecName "kube-api-access-wgkkp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:11:39 crc kubenswrapper[4772]: I1122 12:11:39.168411 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d84fec3-5230-479b-8b1e-4ca40adc8928-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8d84fec3-5230-479b-8b1e-4ca40adc8928" (UID: "8d84fec3-5230-479b-8b1e-4ca40adc8928"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:11:39 crc kubenswrapper[4772]: I1122 12:11:39.168593 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d84fec3-5230-479b-8b1e-4ca40adc8928-config-data" (OuterVolumeSpecName: "config-data") pod "8d84fec3-5230-479b-8b1e-4ca40adc8928" (UID: "8d84fec3-5230-479b-8b1e-4ca40adc8928"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:11:39 crc kubenswrapper[4772]: I1122 12:11:39.243122 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d84fec3-5230-479b-8b1e-4ca40adc8928-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 12:11:39 crc kubenswrapper[4772]: I1122 12:11:39.243424 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d84fec3-5230-479b-8b1e-4ca40adc8928-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 12:11:39 crc kubenswrapper[4772]: I1122 12:11:39.243477 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d84fec3-5230-479b-8b1e-4ca40adc8928-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 12:11:39 crc kubenswrapper[4772]: I1122 12:11:39.243524 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wgkkp\" (UniqueName: \"kubernetes.io/projected/8d84fec3-5230-479b-8b1e-4ca40adc8928-kube-api-access-wgkkp\") on node \"crc\" DevicePath \"\"" Nov 22 12:11:39 crc kubenswrapper[4772]: I1122 12:11:39.725789 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-cmq55" event={"ID":"8d84fec3-5230-479b-8b1e-4ca40adc8928","Type":"ContainerDied","Data":"eef47fbfeedfca6985417dfa3c49753c073f52c93bc6435c67dd7c48089f5768"} Nov 22 12:11:39 crc kubenswrapper[4772]: I1122 12:11:39.725840 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-cmq55" Nov 22 12:11:39 crc kubenswrapper[4772]: I1122 12:11:39.725864 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eef47fbfeedfca6985417dfa3c49753c073f52c93bc6435c67dd7c48089f5768" Nov 22 12:11:39 crc kubenswrapper[4772]: I1122 12:11:39.828239 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 22 12:11:39 crc kubenswrapper[4772]: E1122 12:11:39.829671 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d84fec3-5230-479b-8b1e-4ca40adc8928" containerName="nova-cell0-conductor-db-sync" Nov 22 12:11:39 crc kubenswrapper[4772]: I1122 12:11:39.829703 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d84fec3-5230-479b-8b1e-4ca40adc8928" containerName="nova-cell0-conductor-db-sync" Nov 22 12:11:39 crc kubenswrapper[4772]: I1122 12:11:39.830114 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d84fec3-5230-479b-8b1e-4ca40adc8928" containerName="nova-cell0-conductor-db-sync" Nov 22 12:11:39 crc kubenswrapper[4772]: I1122 12:11:39.830975 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 22 12:11:39 crc kubenswrapper[4772]: I1122 12:11:39.837214 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-zsd9k" Nov 22 12:11:39 crc kubenswrapper[4772]: I1122 12:11:39.837374 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 22 12:11:39 crc kubenswrapper[4772]: I1122 12:11:39.845598 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 22 12:11:39 crc kubenswrapper[4772]: I1122 12:11:39.856672 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-224n2\" (UniqueName: \"kubernetes.io/projected/442f4ec5-4bf2-475d-85c0-fddd99d233d7-kube-api-access-224n2\") pod \"nova-cell0-conductor-0\" (UID: \"442f4ec5-4bf2-475d-85c0-fddd99d233d7\") " pod="openstack/nova-cell0-conductor-0" Nov 22 12:11:39 crc kubenswrapper[4772]: I1122 12:11:39.856733 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/442f4ec5-4bf2-475d-85c0-fddd99d233d7-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"442f4ec5-4bf2-475d-85c0-fddd99d233d7\") " pod="openstack/nova-cell0-conductor-0" Nov 22 12:11:39 crc kubenswrapper[4772]: I1122 12:11:39.856854 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/442f4ec5-4bf2-475d-85c0-fddd99d233d7-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"442f4ec5-4bf2-475d-85c0-fddd99d233d7\") " pod="openstack/nova-cell0-conductor-0" Nov 22 12:11:39 crc kubenswrapper[4772]: I1122 12:11:39.958628 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-224n2\" (UniqueName: \"kubernetes.io/projected/442f4ec5-4bf2-475d-85c0-fddd99d233d7-kube-api-access-224n2\") pod \"nova-cell0-conductor-0\" (UID: \"442f4ec5-4bf2-475d-85c0-fddd99d233d7\") " pod="openstack/nova-cell0-conductor-0" Nov 22 12:11:39 crc kubenswrapper[4772]: I1122 12:11:39.958921 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/442f4ec5-4bf2-475d-85c0-fddd99d233d7-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"442f4ec5-4bf2-475d-85c0-fddd99d233d7\") " pod="openstack/nova-cell0-conductor-0" Nov 22 12:11:39 crc kubenswrapper[4772]: I1122 12:11:39.959117 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/442f4ec5-4bf2-475d-85c0-fddd99d233d7-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"442f4ec5-4bf2-475d-85c0-fddd99d233d7\") " pod="openstack/nova-cell0-conductor-0" Nov 22 12:11:39 crc kubenswrapper[4772]: I1122 12:11:39.967139 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/442f4ec5-4bf2-475d-85c0-fddd99d233d7-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"442f4ec5-4bf2-475d-85c0-fddd99d233d7\") " pod="openstack/nova-cell0-conductor-0" Nov 22 12:11:39 crc kubenswrapper[4772]: I1122 12:11:39.975828 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/442f4ec5-4bf2-475d-85c0-fddd99d233d7-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"442f4ec5-4bf2-475d-85c0-fddd99d233d7\") " pod="openstack/nova-cell0-conductor-0" Nov 22 12:11:39 crc kubenswrapper[4772]: I1122 12:11:39.976040 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-224n2\" (UniqueName: \"kubernetes.io/projected/442f4ec5-4bf2-475d-85c0-fddd99d233d7-kube-api-access-224n2\") pod \"nova-cell0-conductor-0\" (UID: \"442f4ec5-4bf2-475d-85c0-fddd99d233d7\") " pod="openstack/nova-cell0-conductor-0" Nov 22 12:11:40 crc kubenswrapper[4772]: I1122 12:11:40.148594 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 22 12:11:40 crc kubenswrapper[4772]: I1122 12:11:40.707361 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 22 12:11:40 crc kubenswrapper[4772]: I1122 12:11:40.735551 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"442f4ec5-4bf2-475d-85c0-fddd99d233d7","Type":"ContainerStarted","Data":"ba9b57f6a6826f842bc331d5d90dccb3fcf24709fd3e6d4beef66e9d5a754106"} Nov 22 12:11:41 crc kubenswrapper[4772]: I1122 12:11:41.752554 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"442f4ec5-4bf2-475d-85c0-fddd99d233d7","Type":"ContainerStarted","Data":"6d0a8f6cd2301e34cd099092d2d30d82dec905a57947b14494d3fffac45b7a18"} Nov 22 12:11:41 crc kubenswrapper[4772]: I1122 12:11:41.753143 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Nov 22 12:11:41 crc kubenswrapper[4772]: I1122 12:11:41.773925 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.773869392 podStartE2EDuration="2.773869392s" podCreationTimestamp="2025-11-22 12:11:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:11:41.770810586 +0000 UTC m=+5622.010255090" watchObservedRunningTime="2025-11-22 12:11:41.773869392 +0000 UTC m=+5622.013313896" Nov 22 12:11:45 crc kubenswrapper[4772]: I1122 12:11:45.180089 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Nov 22 12:11:45 crc kubenswrapper[4772]: I1122 12:11:45.608604 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-djw46"] Nov 22 12:11:45 crc kubenswrapper[4772]: I1122 12:11:45.610081 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-djw46" Nov 22 12:11:45 crc kubenswrapper[4772]: I1122 12:11:45.612640 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Nov 22 12:11:45 crc kubenswrapper[4772]: I1122 12:11:45.612904 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Nov 22 12:11:45 crc kubenswrapper[4772]: I1122 12:11:45.620893 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-djw46"] Nov 22 12:11:45 crc kubenswrapper[4772]: I1122 12:11:45.701961 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/237dfef1-e073-42dc-8430-802b906015e7-scripts\") pod \"nova-cell0-cell-mapping-djw46\" (UID: \"237dfef1-e073-42dc-8430-802b906015e7\") " pod="openstack/nova-cell0-cell-mapping-djw46" Nov 22 12:11:45 crc kubenswrapper[4772]: I1122 12:11:45.702031 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/237dfef1-e073-42dc-8430-802b906015e7-config-data\") pod \"nova-cell0-cell-mapping-djw46\" (UID: \"237dfef1-e073-42dc-8430-802b906015e7\") " pod="openstack/nova-cell0-cell-mapping-djw46" Nov 22 12:11:45 crc kubenswrapper[4772]: I1122 12:11:45.702069 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/237dfef1-e073-42dc-8430-802b906015e7-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-djw46\" (UID: \"237dfef1-e073-42dc-8430-802b906015e7\") " pod="openstack/nova-cell0-cell-mapping-djw46" Nov 22 12:11:45 crc kubenswrapper[4772]: I1122 12:11:45.702094 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96qxh\" (UniqueName: \"kubernetes.io/projected/237dfef1-e073-42dc-8430-802b906015e7-kube-api-access-96qxh\") pod \"nova-cell0-cell-mapping-djw46\" (UID: \"237dfef1-e073-42dc-8430-802b906015e7\") " pod="openstack/nova-cell0-cell-mapping-djw46" Nov 22 12:11:45 crc kubenswrapper[4772]: I1122 12:11:45.780814 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 12:11:45 crc kubenswrapper[4772]: I1122 12:11:45.783003 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 22 12:11:45 crc kubenswrapper[4772]: I1122 12:11:45.794392 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 22 12:11:45 crc kubenswrapper[4772]: I1122 12:11:45.794675 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 22 12:11:45 crc kubenswrapper[4772]: I1122 12:11:45.803620 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 22 12:11:45 crc kubenswrapper[4772]: I1122 12:11:45.807712 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 22 12:11:45 crc kubenswrapper[4772]: I1122 12:11:45.811606 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 12:11:45 crc kubenswrapper[4772]: I1122 12:11:45.812296 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3902926-0afb-4b6b-bd40-2a63cea31961-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f3902926-0afb-4b6b-bd40-2a63cea31961\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 12:11:45 crc kubenswrapper[4772]: I1122 12:11:45.812458 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3902926-0afb-4b6b-bd40-2a63cea31961-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f3902926-0afb-4b6b-bd40-2a63cea31961\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 12:11:45 crc kubenswrapper[4772]: I1122 12:11:45.812520 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/237dfef1-e073-42dc-8430-802b906015e7-scripts\") pod \"nova-cell0-cell-mapping-djw46\" (UID: \"237dfef1-e073-42dc-8430-802b906015e7\") " pod="openstack/nova-cell0-cell-mapping-djw46" Nov 22 12:11:45 crc kubenswrapper[4772]: I1122 12:11:45.812623 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/237dfef1-e073-42dc-8430-802b906015e7-config-data\") pod \"nova-cell0-cell-mapping-djw46\" (UID: \"237dfef1-e073-42dc-8430-802b906015e7\") " pod="openstack/nova-cell0-cell-mapping-djw46" Nov 22 12:11:45 crc kubenswrapper[4772]: I1122 12:11:45.812670 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/237dfef1-e073-42dc-8430-802b906015e7-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-djw46\" (UID: \"237dfef1-e073-42dc-8430-802b906015e7\") " pod="openstack/nova-cell0-cell-mapping-djw46" Nov 22 12:11:45 crc kubenswrapper[4772]: I1122 12:11:45.812714 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96qxh\" (UniqueName: \"kubernetes.io/projected/237dfef1-e073-42dc-8430-802b906015e7-kube-api-access-96qxh\") pod \"nova-cell0-cell-mapping-djw46\" (UID: \"237dfef1-e073-42dc-8430-802b906015e7\") " pod="openstack/nova-cell0-cell-mapping-djw46" Nov 22 12:11:45 crc kubenswrapper[4772]: I1122 12:11:45.812845 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrv6x\" (UniqueName: \"kubernetes.io/projected/f3902926-0afb-4b6b-bd40-2a63cea31961-kube-api-access-vrv6x\") pod \"nova-cell1-novncproxy-0\" (UID: \"f3902926-0afb-4b6b-bd40-2a63cea31961\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 12:11:45 crc kubenswrapper[4772]: I1122 12:11:45.837286 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/237dfef1-e073-42dc-8430-802b906015e7-scripts\") pod \"nova-cell0-cell-mapping-djw46\" (UID: \"237dfef1-e073-42dc-8430-802b906015e7\") " pod="openstack/nova-cell0-cell-mapping-djw46" Nov 22 12:11:45 crc kubenswrapper[4772]: I1122 
12:11:45.841792 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/237dfef1-e073-42dc-8430-802b906015e7-config-data\") pod \"nova-cell0-cell-mapping-djw46\" (UID: \"237dfef1-e073-42dc-8430-802b906015e7\") " pod="openstack/nova-cell0-cell-mapping-djw46" Nov 22 12:11:45 crc kubenswrapper[4772]: I1122 12:11:45.845021 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/237dfef1-e073-42dc-8430-802b906015e7-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-djw46\" (UID: \"237dfef1-e073-42dc-8430-802b906015e7\") " pod="openstack/nova-cell0-cell-mapping-djw46" Nov 22 12:11:45 crc kubenswrapper[4772]: I1122 12:11:45.854758 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96qxh\" (UniqueName: \"kubernetes.io/projected/237dfef1-e073-42dc-8430-802b906015e7-kube-api-access-96qxh\") pod \"nova-cell0-cell-mapping-djw46\" (UID: \"237dfef1-e073-42dc-8430-802b906015e7\") " pod="openstack/nova-cell0-cell-mapping-djw46" Nov 22 12:11:45 crc kubenswrapper[4772]: I1122 12:11:45.910407 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 22 12:11:45 crc kubenswrapper[4772]: I1122 12:11:45.932111 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3902926-0afb-4b6b-bd40-2a63cea31961-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f3902926-0afb-4b6b-bd40-2a63cea31961\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 12:11:45 crc kubenswrapper[4772]: I1122 12:11:45.932283 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpqtr\" (UniqueName: \"kubernetes.io/projected/7680c6fc-de4b-4ea0-944d-718399acb580-kube-api-access-zpqtr\") pod \"nova-api-0\" (UID: \"7680c6fc-de4b-4ea0-944d-718399acb580\") " pod="openstack/nova-api-0" Nov 22 12:11:45 crc kubenswrapper[4772]: I1122 12:11:45.932339 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7680c6fc-de4b-4ea0-944d-718399acb580-logs\") pod \"nova-api-0\" (UID: \"7680c6fc-de4b-4ea0-944d-718399acb580\") " pod="openstack/nova-api-0" Nov 22 12:11:45 crc kubenswrapper[4772]: I1122 12:11:45.932405 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7680c6fc-de4b-4ea0-944d-718399acb580-config-data\") pod \"nova-api-0\" (UID: \"7680c6fc-de4b-4ea0-944d-718399acb580\") " pod="openstack/nova-api-0" Nov 22 12:11:45 crc kubenswrapper[4772]: I1122 12:11:45.932476 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrv6x\" (UniqueName: \"kubernetes.io/projected/f3902926-0afb-4b6b-bd40-2a63cea31961-kube-api-access-vrv6x\") pod \"nova-cell1-novncproxy-0\" (UID: \"f3902926-0afb-4b6b-bd40-2a63cea31961\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 12:11:45 crc kubenswrapper[4772]: I1122 12:11:45.932564 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7680c6fc-de4b-4ea0-944d-718399acb580-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7680c6fc-de4b-4ea0-944d-718399acb580\") " pod="openstack/nova-api-0" Nov 22 12:11:45 crc kubenswrapper[4772]: 
I1122 12:11:45.932629 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3902926-0afb-4b6b-bd40-2a63cea31961-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f3902926-0afb-4b6b-bd40-2a63cea31961\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 12:11:45 crc kubenswrapper[4772]: I1122 12:11:45.934489 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-djw46" Nov 22 12:11:45 crc kubenswrapper[4772]: I1122 12:11:45.956470 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3902926-0afb-4b6b-bd40-2a63cea31961-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f3902926-0afb-4b6b-bd40-2a63cea31961\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 12:11:45 crc kubenswrapper[4772]: I1122 12:11:45.962167 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3902926-0afb-4b6b-bd40-2a63cea31961-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f3902926-0afb-4b6b-bd40-2a63cea31961\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 12:11:45 crc kubenswrapper[4772]: I1122 12:11:45.987280 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrv6x\" (UniqueName: \"kubernetes.io/projected/f3902926-0afb-4b6b-bd40-2a63cea31961-kube-api-access-vrv6x\") pod \"nova-cell1-novncproxy-0\" (UID: \"f3902926-0afb-4b6b-bd40-2a63cea31961\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.006306 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.012924 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.028628 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.035557 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7680c6fc-de4b-4ea0-944d-718399acb580-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7680c6fc-de4b-4ea0-944d-718399acb580\") " pod="openstack/nova-api-0" Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.035713 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpqtr\" (UniqueName: \"kubernetes.io/projected/7680c6fc-de4b-4ea0-944d-718399acb580-kube-api-access-zpqtr\") pod \"nova-api-0\" (UID: \"7680c6fc-de4b-4ea0-944d-718399acb580\") " pod="openstack/nova-api-0" Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.035774 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7680c6fc-de4b-4ea0-944d-718399acb580-logs\") pod \"nova-api-0\" (UID: \"7680c6fc-de4b-4ea0-944d-718399acb580\") " pod="openstack/nova-api-0" Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.035826 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7680c6fc-de4b-4ea0-944d-718399acb580-config-data\") pod \"nova-api-0\" (UID: \"7680c6fc-de4b-4ea0-944d-718399acb580\") " pod="openstack/nova-api-0" Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.037243 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7680c6fc-de4b-4ea0-944d-718399acb580-logs\") pod \"nova-api-0\" (UID: \"7680c6fc-de4b-4ea0-944d-718399acb580\") " pod="openstack/nova-api-0" Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.055976 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7680c6fc-de4b-4ea0-944d-718399acb580-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7680c6fc-de4b-4ea0-944d-718399acb580\") " pod="openstack/nova-api-0" Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.056126 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.057119 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7680c6fc-de4b-4ea0-944d-718399acb580-config-data\") pod \"nova-api-0\" (UID: \"7680c6fc-de4b-4ea0-944d-718399acb580\") " pod="openstack/nova-api-0" Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.062328 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpqtr\" (UniqueName: \"kubernetes.io/projected/7680c6fc-de4b-4ea0-944d-718399acb580-kube-api-access-zpqtr\") pod \"nova-api-0\" (UID: \"7680c6fc-de4b-4ea0-944d-718399acb580\") " pod="openstack/nova-api-0" Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.070101 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.071870 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.074834 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.090921 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.106521 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.109164 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6645986957-d4fbm"] Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.110899 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6645986957-d4fbm" Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.118829 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6645986957-d4fbm"] Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.138103 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cd4fa62-b413-4a62-a3e0-1f7fc455c96e-config-data\") pod \"nova-scheduler-0\" (UID: \"3cd4fa62-b413-4a62-a3e0-1f7fc455c96e\") " pod="openstack/nova-scheduler-0" Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.138156 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b021c8c5-98b1-437a-86cf-fcf7f5cc83c8-logs\") pod \"nova-metadata-0\" (UID: \"b021c8c5-98b1-437a-86cf-fcf7f5cc83c8\") " pod="openstack/nova-metadata-0" Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.138179 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkmxl\" (UniqueName: \"kubernetes.io/projected/b021c8c5-98b1-437a-86cf-fcf7f5cc83c8-kube-api-access-hkmxl\") pod \"nova-metadata-0\" (UID: \"b021c8c5-98b1-437a-86cf-fcf7f5cc83c8\") " pod="openstack/nova-metadata-0" Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.138203 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b021c8c5-98b1-437a-86cf-fcf7f5cc83c8-config-data\") pod \"nova-metadata-0\" (UID: \"b021c8c5-98b1-437a-86cf-fcf7f5cc83c8\") " pod="openstack/nova-metadata-0" Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.138233 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4px2j\" (UniqueName: \"kubernetes.io/projected/3cd4fa62-b413-4a62-a3e0-1f7fc455c96e-kube-api-access-4px2j\") pod \"nova-scheduler-0\" (UID: \"3cd4fa62-b413-4a62-a3e0-1f7fc455c96e\") " pod="openstack/nova-scheduler-0" Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.138266 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cd4fa62-b413-4a62-a3e0-1f7fc455c96e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3cd4fa62-b413-4a62-a3e0-1f7fc455c96e\") " pod="openstack/nova-scheduler-0" Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.138318 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/b021c8c5-98b1-437a-86cf-fcf7f5cc83c8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b021c8c5-98b1-437a-86cf-fcf7f5cc83c8\") " pod="openstack/nova-metadata-0" Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.205468 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.240749 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cd4fa62-b413-4a62-a3e0-1f7fc455c96e-config-data\") pod \"nova-scheduler-0\" (UID: \"3cd4fa62-b413-4a62-a3e0-1f7fc455c96e\") " pod="openstack/nova-scheduler-0" Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.240993 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b021c8c5-98b1-437a-86cf-fcf7f5cc83c8-logs\") pod \"nova-metadata-0\" (UID: \"b021c8c5-98b1-437a-86cf-fcf7f5cc83c8\") " pod="openstack/nova-metadata-0" Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.241034 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkmxl\" (UniqueName: \"kubernetes.io/projected/b021c8c5-98b1-437a-86cf-fcf7f5cc83c8-kube-api-access-hkmxl\") pod \"nova-metadata-0\" (UID: \"b021c8c5-98b1-437a-86cf-fcf7f5cc83c8\") " pod="openstack/nova-metadata-0" Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.241686 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b021c8c5-98b1-437a-86cf-fcf7f5cc83c8-config-data\") pod \"nova-metadata-0\" (UID: \"b021c8c5-98b1-437a-86cf-fcf7f5cc83c8\") " pod="openstack/nova-metadata-0" Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.241715 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cd2377a6-3164-4bc6-baf9-ff1a45d72e2f-ovsdbserver-sb\") pod \"dnsmasq-dns-6645986957-d4fbm\" (UID: \"cd2377a6-3164-4bc6-baf9-ff1a45d72e2f\") " pod="openstack/dnsmasq-dns-6645986957-d4fbm" Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.241852 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd2377a6-3164-4bc6-baf9-ff1a45d72e2f-ovsdbserver-nb\") pod \"dnsmasq-dns-6645986957-d4fbm\" (UID: \"cd2377a6-3164-4bc6-baf9-ff1a45d72e2f\") " pod="openstack/dnsmasq-dns-6645986957-d4fbm" Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.241894 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lskwn\" (UniqueName: \"kubernetes.io/projected/cd2377a6-3164-4bc6-baf9-ff1a45d72e2f-kube-api-access-lskwn\") pod \"dnsmasq-dns-6645986957-d4fbm\" (UID: \"cd2377a6-3164-4bc6-baf9-ff1a45d72e2f\") " pod="openstack/dnsmasq-dns-6645986957-d4fbm" Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.241926 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4px2j\" (UniqueName: \"kubernetes.io/projected/3cd4fa62-b413-4a62-a3e0-1f7fc455c96e-kube-api-access-4px2j\") pod \"nova-scheduler-0\" (UID: \"3cd4fa62-b413-4a62-a3e0-1f7fc455c96e\") " pod="openstack/nova-scheduler-0" Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.242000 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cd4fa62-b413-4a62-a3e0-1f7fc455c96e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3cd4fa62-b413-4a62-a3e0-1f7fc455c96e\") " pod="openstack/nova-scheduler-0" Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.242103 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd2377a6-3164-4bc6-baf9-ff1a45d72e2f-config\") pod \"dnsmasq-dns-6645986957-d4fbm\" (UID: \"cd2377a6-3164-4bc6-baf9-ff1a45d72e2f\") " pod="openstack/dnsmasq-dns-6645986957-d4fbm" Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.242203 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b021c8c5-98b1-437a-86cf-fcf7f5cc83c8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b021c8c5-98b1-437a-86cf-fcf7f5cc83c8\") " pod="openstack/nova-metadata-0" Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.242257 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd2377a6-3164-4bc6-baf9-ff1a45d72e2f-dns-svc\") pod \"dnsmasq-dns-6645986957-d4fbm\" (UID: \"cd2377a6-3164-4bc6-baf9-ff1a45d72e2f\") " pod="openstack/dnsmasq-dns-6645986957-d4fbm" Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.242579 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b021c8c5-98b1-437a-86cf-fcf7f5cc83c8-logs\") pod \"nova-metadata-0\" (UID: \"b021c8c5-98b1-437a-86cf-fcf7f5cc83c8\") " pod="openstack/nova-metadata-0" Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.248359 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b021c8c5-98b1-437a-86cf-fcf7f5cc83c8-config-data\") pod \"nova-metadata-0\" (UID: \"b021c8c5-98b1-437a-86cf-fcf7f5cc83c8\") " pod="openstack/nova-metadata-0" Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.251258 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cd4fa62-b413-4a62-a3e0-1f7fc455c96e-config-data\") pod \"nova-scheduler-0\" (UID: \"3cd4fa62-b413-4a62-a3e0-1f7fc455c96e\") " pod="openstack/nova-scheduler-0" Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.251808 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b021c8c5-98b1-437a-86cf-fcf7f5cc83c8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b021c8c5-98b1-437a-86cf-fcf7f5cc83c8\") " pod="openstack/nova-metadata-0" Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.261987 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4px2j\" (UniqueName: \"kubernetes.io/projected/3cd4fa62-b413-4a62-a3e0-1f7fc455c96e-kube-api-access-4px2j\") pod \"nova-scheduler-0\" (UID: \"3cd4fa62-b413-4a62-a3e0-1f7fc455c96e\") " pod="openstack/nova-scheduler-0" Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.262669 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkmxl\" (UniqueName: \"kubernetes.io/projected/b021c8c5-98b1-437a-86cf-fcf7f5cc83c8-kube-api-access-hkmxl\") pod \"nova-metadata-0\" (UID: \"b021c8c5-98b1-437a-86cf-fcf7f5cc83c8\") " pod="openstack/nova-metadata-0" Nov 22 12:11:46 crc 
kubenswrapper[4772]: I1122 12:11:46.267734 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cd4fa62-b413-4a62-a3e0-1f7fc455c96e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3cd4fa62-b413-4a62-a3e0-1f7fc455c96e\") " pod="openstack/nova-scheduler-0" Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.345447 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd2377a6-3164-4bc6-baf9-ff1a45d72e2f-config\") pod \"dnsmasq-dns-6645986957-d4fbm\" (UID: \"cd2377a6-3164-4bc6-baf9-ff1a45d72e2f\") " pod="openstack/dnsmasq-dns-6645986957-d4fbm" Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.345544 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd2377a6-3164-4bc6-baf9-ff1a45d72e2f-dns-svc\") pod \"dnsmasq-dns-6645986957-d4fbm\" (UID: \"cd2377a6-3164-4bc6-baf9-ff1a45d72e2f\") " pod="openstack/dnsmasq-dns-6645986957-d4fbm" Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.345629 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cd2377a6-3164-4bc6-baf9-ff1a45d72e2f-ovsdbserver-sb\") pod \"dnsmasq-dns-6645986957-d4fbm\" (UID: \"cd2377a6-3164-4bc6-baf9-ff1a45d72e2f\") " pod="openstack/dnsmasq-dns-6645986957-d4fbm" Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.345663 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd2377a6-3164-4bc6-baf9-ff1a45d72e2f-ovsdbserver-nb\") pod \"dnsmasq-dns-6645986957-d4fbm\" (UID: \"cd2377a6-3164-4bc6-baf9-ff1a45d72e2f\") " pod="openstack/dnsmasq-dns-6645986957-d4fbm" Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.345688 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lskwn\" (UniqueName: \"kubernetes.io/projected/cd2377a6-3164-4bc6-baf9-ff1a45d72e2f-kube-api-access-lskwn\") pod \"dnsmasq-dns-6645986957-d4fbm\" (UID: \"cd2377a6-3164-4bc6-baf9-ff1a45d72e2f\") " pod="openstack/dnsmasq-dns-6645986957-d4fbm" Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.347523 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd2377a6-3164-4bc6-baf9-ff1a45d72e2f-config\") pod \"dnsmasq-dns-6645986957-d4fbm\" (UID: \"cd2377a6-3164-4bc6-baf9-ff1a45d72e2f\") " pod="openstack/dnsmasq-dns-6645986957-d4fbm" Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.349021 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cd2377a6-3164-4bc6-baf9-ff1a45d72e2f-ovsdbserver-sb\") pod \"dnsmasq-dns-6645986957-d4fbm\" (UID: \"cd2377a6-3164-4bc6-baf9-ff1a45d72e2f\") " pod="openstack/dnsmasq-dns-6645986957-d4fbm" Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.349553 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd2377a6-3164-4bc6-baf9-ff1a45d72e2f-dns-svc\") pod \"dnsmasq-dns-6645986957-d4fbm\" (UID: \"cd2377a6-3164-4bc6-baf9-ff1a45d72e2f\") " pod="openstack/dnsmasq-dns-6645986957-d4fbm" Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.349715 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/cd2377a6-3164-4bc6-baf9-ff1a45d72e2f-ovsdbserver-nb\") pod \"dnsmasq-dns-6645986957-d4fbm\" (UID: \"cd2377a6-3164-4bc6-baf9-ff1a45d72e2f\") " pod="openstack/dnsmasq-dns-6645986957-d4fbm" Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.366634 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lskwn\" (UniqueName: \"kubernetes.io/projected/cd2377a6-3164-4bc6-baf9-ff1a45d72e2f-kube-api-access-lskwn\") pod \"dnsmasq-dns-6645986957-d4fbm\" (UID: \"cd2377a6-3164-4bc6-baf9-ff1a45d72e2f\") " pod="openstack/dnsmasq-dns-6645986957-d4fbm" Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.376760 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.394739 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.438917 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6645986957-d4fbm" Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.481366 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-djw46"] Nov 22 12:11:46 crc kubenswrapper[4772]: W1122 12:11:46.500592 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod237dfef1_e073_42dc_8430_802b906015e7.slice/crio-eacfcfbed6363752d4a32a20ccdc47668ba3f5af05a495c56ac275e88a7659c1 WatchSource:0}: Error finding container eacfcfbed6363752d4a32a20ccdc47668ba3f5af05a495c56ac275e88a7659c1: Status 404 returned error can't find the container with id eacfcfbed6363752d4a32a20ccdc47668ba3f5af05a495c56ac275e88a7659c1 Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.549145 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-pr6j5"] Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.550407 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-pr6j5" Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.555273 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.555417 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.576310 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-pr6j5"] Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.639505 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.651639 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76dd9bf3-3e2b-4ff9-b1e2-598d8167622c-config-data\") pod \"nova-cell1-conductor-db-sync-pr6j5\" (UID: \"76dd9bf3-3e2b-4ff9-b1e2-598d8167622c\") " pod="openstack/nova-cell1-conductor-db-sync-pr6j5" Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.651754 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76dd9bf3-3e2b-4ff9-b1e2-598d8167622c-scripts\") pod \"nova-cell1-conductor-db-sync-pr6j5\" (UID: \"76dd9bf3-3e2b-4ff9-b1e2-598d8167622c\") " pod="openstack/nova-cell1-conductor-db-sync-pr6j5" Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.651995 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnbdw\" (UniqueName: \"kubernetes.io/projected/76dd9bf3-3e2b-4ff9-b1e2-598d8167622c-kube-api-access-lnbdw\") pod \"nova-cell1-conductor-db-sync-pr6j5\" (UID: \"76dd9bf3-3e2b-4ff9-b1e2-598d8167622c\") " pod="openstack/nova-cell1-conductor-db-sync-pr6j5" Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.652251 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76dd9bf3-3e2b-4ff9-b1e2-598d8167622c-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-pr6j5\" (UID: \"76dd9bf3-3e2b-4ff9-b1e2-598d8167622c\") " pod="openstack/nova-cell1-conductor-db-sync-pr6j5" Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.743129 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 22 12:11:46 crc kubenswrapper[4772]: W1122 12:11:46.748154 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7680c6fc_de4b_4ea0_944d_718399acb580.slice/crio-ad68df18e0b4f37c7187aa44d87a3c31921dee15866fecf988362cee4bb2c556 WatchSource:0}: Error finding container ad68df18e0b4f37c7187aa44d87a3c31921dee15866fecf988362cee4bb2c556: Status 404 returned error can't find the container with id ad68df18e0b4f37c7187aa44d87a3c31921dee15866fecf988362cee4bb2c556 Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.753868 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76dd9bf3-3e2b-4ff9-b1e2-598d8167622c-config-data\") pod \"nova-cell1-conductor-db-sync-pr6j5\" (UID: \"76dd9bf3-3e2b-4ff9-b1e2-598d8167622c\") " pod="openstack/nova-cell1-conductor-db-sync-pr6j5" Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 
12:11:46.754601 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76dd9bf3-3e2b-4ff9-b1e2-598d8167622c-scripts\") pod \"nova-cell1-conductor-db-sync-pr6j5\" (UID: \"76dd9bf3-3e2b-4ff9-b1e2-598d8167622c\") " pod="openstack/nova-cell1-conductor-db-sync-pr6j5" Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.754729 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnbdw\" (UniqueName: \"kubernetes.io/projected/76dd9bf3-3e2b-4ff9-b1e2-598d8167622c-kube-api-access-lnbdw\") pod \"nova-cell1-conductor-db-sync-pr6j5\" (UID: \"76dd9bf3-3e2b-4ff9-b1e2-598d8167622c\") " pod="openstack/nova-cell1-conductor-db-sync-pr6j5" Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.754818 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76dd9bf3-3e2b-4ff9-b1e2-598d8167622c-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-pr6j5\" (UID: \"76dd9bf3-3e2b-4ff9-b1e2-598d8167622c\") " pod="openstack/nova-cell1-conductor-db-sync-pr6j5" Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.758169 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76dd9bf3-3e2b-4ff9-b1e2-598d8167622c-config-data\") pod \"nova-cell1-conductor-db-sync-pr6j5\" (UID: \"76dd9bf3-3e2b-4ff9-b1e2-598d8167622c\") " pod="openstack/nova-cell1-conductor-db-sync-pr6j5" Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.759125 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76dd9bf3-3e2b-4ff9-b1e2-598d8167622c-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-pr6j5\" (UID: \"76dd9bf3-3e2b-4ff9-b1e2-598d8167622c\") " pod="openstack/nova-cell1-conductor-db-sync-pr6j5" Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.761511 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76dd9bf3-3e2b-4ff9-b1e2-598d8167622c-scripts\") pod \"nova-cell1-conductor-db-sync-pr6j5\" (UID: \"76dd9bf3-3e2b-4ff9-b1e2-598d8167622c\") " pod="openstack/nova-cell1-conductor-db-sync-pr6j5" Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.776618 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnbdw\" (UniqueName: \"kubernetes.io/projected/76dd9bf3-3e2b-4ff9-b1e2-598d8167622c-kube-api-access-lnbdw\") pod \"nova-cell1-conductor-db-sync-pr6j5\" (UID: \"76dd9bf3-3e2b-4ff9-b1e2-598d8167622c\") " pod="openstack/nova-cell1-conductor-db-sync-pr6j5" Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.816811 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f3902926-0afb-4b6b-bd40-2a63cea31961","Type":"ContainerStarted","Data":"c60fb6298072a0995ecc14b35a78062ee48f38cf8fee1114bf3dfd23b0b05997"} Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.818555 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-djw46" event={"ID":"237dfef1-e073-42dc-8430-802b906015e7","Type":"ContainerStarted","Data":"1d0435c73fc7da817dcfea45c654eaf0f5a9ebbfd50ce85c1abd07cd6c6168eb"} Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.818606 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-djw46" 
event={"ID":"237dfef1-e073-42dc-8430-802b906015e7","Type":"ContainerStarted","Data":"eacfcfbed6363752d4a32a20ccdc47668ba3f5af05a495c56ac275e88a7659c1"} Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.820126 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7680c6fc-de4b-4ea0-944d-718399acb580","Type":"ContainerStarted","Data":"ad68df18e0b4f37c7187aa44d87a3c31921dee15866fecf988362cee4bb2c556"} Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.847965 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-djw46" podStartSLOduration=1.847943726 podStartE2EDuration="1.847943726s" podCreationTimestamp="2025-11-22 12:11:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:11:46.83965091 +0000 UTC m=+5627.079095414" watchObservedRunningTime="2025-11-22 12:11:46.847943726 +0000 UTC m=+5627.087388220" Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.872328 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-pr6j5" Nov 22 12:11:46 crc kubenswrapper[4772]: I1122 12:11:46.908882 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 12:11:46 crc kubenswrapper[4772]: W1122 12:11:46.919564 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb021c8c5_98b1_437a_86cf_fcf7f5cc83c8.slice/crio-540bd0bbf5d0e3fab81b3222046a2f1faa65ecec9112ba39528d73f8acbc964b WatchSource:0}: Error finding container 540bd0bbf5d0e3fab81b3222046a2f1faa65ecec9112ba39528d73f8acbc964b: Status 404 returned error can't find the container with id 540bd0bbf5d0e3fab81b3222046a2f1faa65ecec9112ba39528d73f8acbc964b Nov 22 12:11:47 crc kubenswrapper[4772]: I1122 12:11:47.000168 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 12:11:47 crc kubenswrapper[4772]: I1122 12:11:47.014550 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6645986957-d4fbm"] Nov 22 12:11:47 crc kubenswrapper[4772]: W1122 12:11:47.016379 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3cd4fa62_b413_4a62_a3e0_1f7fc455c96e.slice/crio-98f854c8a67985c169dda0b863bc8faa79f30aa6b40f45750559faeda4dd1aee WatchSource:0}: Error finding container 98f854c8a67985c169dda0b863bc8faa79f30aa6b40f45750559faeda4dd1aee: Status 404 returned error can't find the container with id 98f854c8a67985c169dda0b863bc8faa79f30aa6b40f45750559faeda4dd1aee Nov 22 12:11:47 crc kubenswrapper[4772]: I1122 12:11:47.357021 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-pr6j5"] Nov 22 12:11:47 crc kubenswrapper[4772]: W1122 12:11:47.413063 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod76dd9bf3_3e2b_4ff9_b1e2_598d8167622c.slice/crio-ecb566145762b71d7dae7d2c0f56aa01f7e82675796c8ca8a2d27e79270d1519 WatchSource:0}: Error finding container ecb566145762b71d7dae7d2c0f56aa01f7e82675796c8ca8a2d27e79270d1519: Status 404 returned error can't find the container with id ecb566145762b71d7dae7d2c0f56aa01f7e82675796c8ca8a2d27e79270d1519 Nov 22 12:11:47 crc kubenswrapper[4772]: I1122 12:11:47.830233 4772 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3cd4fa62-b413-4a62-a3e0-1f7fc455c96e","Type":"ContainerStarted","Data":"8550460200cfba468c34776fe54b86b90452fc8f609132c3edff68cb0c86aee2"} Nov 22 12:11:47 crc kubenswrapper[4772]: I1122 12:11:47.830282 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3cd4fa62-b413-4a62-a3e0-1f7fc455c96e","Type":"ContainerStarted","Data":"98f854c8a67985c169dda0b863bc8faa79f30aa6b40f45750559faeda4dd1aee"} Nov 22 12:11:47 crc kubenswrapper[4772]: I1122 12:11:47.831626 4772 generic.go:334] "Generic (PLEG): container finished" podID="cd2377a6-3164-4bc6-baf9-ff1a45d72e2f" containerID="629ee177b629698367fc8581ca4ecc7a964168f04717f01e5e36129f91b5382c" exitCode=0 Nov 22 12:11:47 crc kubenswrapper[4772]: I1122 12:11:47.831721 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6645986957-d4fbm" event={"ID":"cd2377a6-3164-4bc6-baf9-ff1a45d72e2f","Type":"ContainerDied","Data":"629ee177b629698367fc8581ca4ecc7a964168f04717f01e5e36129f91b5382c"} Nov 22 12:11:47 crc kubenswrapper[4772]: I1122 12:11:47.832090 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6645986957-d4fbm" event={"ID":"cd2377a6-3164-4bc6-baf9-ff1a45d72e2f","Type":"ContainerStarted","Data":"b4fc8f6603e613ef49e5a361fbb00b87fa75c5b2d7ca84bcffd0dada841a3198"} Nov 22 12:11:47 crc kubenswrapper[4772]: I1122 12:11:47.834205 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f3902926-0afb-4b6b-bd40-2a63cea31961","Type":"ContainerStarted","Data":"5f13128f4f1030ee3c875872b042914ce4ca921ce771380dd581329dd9be8ec2"} Nov 22 12:11:47 crc kubenswrapper[4772]: I1122 12:11:47.847627 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7680c6fc-de4b-4ea0-944d-718399acb580","Type":"ContainerStarted","Data":"7d3ec8d7dd24966130b44b823541272bd25bba77d0a05eeb61135feb7ab640f8"} Nov 22 12:11:47 crc kubenswrapper[4772]: I1122 12:11:47.847684 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7680c6fc-de4b-4ea0-944d-718399acb580","Type":"ContainerStarted","Data":"84e8584ba671adffbf9e8386896af33f0e827be71cf2d0231be1a8c5c830a7ef"} Nov 22 12:11:47 crc kubenswrapper[4772]: I1122 12:11:47.859372 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b021c8c5-98b1-437a-86cf-fcf7f5cc83c8","Type":"ContainerStarted","Data":"e0cae61288e1c98067cb59768857aaa1d3459c55f26279b01053f45c731982fd"} Nov 22 12:11:47 crc kubenswrapper[4772]: I1122 12:11:47.859429 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b021c8c5-98b1-437a-86cf-fcf7f5cc83c8","Type":"ContainerStarted","Data":"1cabe5404fd23ab8ec61c17ddec70671588d945d43c4ab79724d9a84fa75dffb"} Nov 22 12:11:47 crc kubenswrapper[4772]: I1122 12:11:47.859444 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b021c8c5-98b1-437a-86cf-fcf7f5cc83c8","Type":"ContainerStarted","Data":"540bd0bbf5d0e3fab81b3222046a2f1faa65ecec9112ba39528d73f8acbc964b"} Nov 22 12:11:47 crc kubenswrapper[4772]: I1122 12:11:47.862866 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.862836724 podStartE2EDuration="2.862836724s" podCreationTimestamp="2025-11-22 12:11:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:11:47.852094847 +0000 UTC m=+5628.091539361" watchObservedRunningTime="2025-11-22 12:11:47.862836724 +0000 UTC m=+5628.102281228" Nov 22 12:11:47 crc kubenswrapper[4772]: I1122 12:11:47.862997 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-pr6j5" event={"ID":"76dd9bf3-3e2b-4ff9-b1e2-598d8167622c","Type":"ContainerStarted","Data":"6dc644dd739b3dc370a09046e7e77f2a4e7e6130aa1be761e4e0f58e1446ba01"} Nov 22 12:11:47 crc kubenswrapper[4772]: I1122 12:11:47.863064 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-pr6j5" event={"ID":"76dd9bf3-3e2b-4ff9-b1e2-598d8167622c","Type":"ContainerStarted","Data":"ecb566145762b71d7dae7d2c0f56aa01f7e82675796c8ca8a2d27e79270d1519"} Nov 22 12:11:47 crc kubenswrapper[4772]: I1122 12:11:47.935502 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.935479123 podStartE2EDuration="2.935479123s" podCreationTimestamp="2025-11-22 12:11:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:11:47.907170128 +0000 UTC m=+5628.146614622" watchObservedRunningTime="2025-11-22 12:11:47.935479123 +0000 UTC m=+5628.174923617" Nov 22 12:11:47 crc kubenswrapper[4772]: I1122 12:11:47.979653 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.979620982 podStartE2EDuration="2.979620982s" podCreationTimestamp="2025-11-22 12:11:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:11:47.929690379 +0000 UTC m=+5628.169134873" watchObservedRunningTime="2025-11-22 12:11:47.979620982 +0000 UTC m=+5628.219065476" Nov 22 12:11:47 crc kubenswrapper[4772]: I1122 12:11:47.993350 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.993328153 podStartE2EDuration="2.993328153s" podCreationTimestamp="2025-11-22 12:11:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:11:47.96909609 +0000 UTC m=+5628.208540594" watchObservedRunningTime="2025-11-22 12:11:47.993328153 +0000 UTC m=+5628.232772647" Nov 22 12:11:48 crc kubenswrapper[4772]: I1122 12:11:48.010185 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-pr6j5" podStartSLOduration=2.010156512 podStartE2EDuration="2.010156512s" podCreationTimestamp="2025-11-22 12:11:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:11:47.989292793 +0000 UTC m=+5628.228737297" watchObservedRunningTime="2025-11-22 12:11:48.010156512 +0000 UTC m=+5628.249601006" Nov 22 12:11:48 crc kubenswrapper[4772]: I1122 12:11:48.875262 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6645986957-d4fbm" event={"ID":"cd2377a6-3164-4bc6-baf9-ff1a45d72e2f","Type":"ContainerStarted","Data":"3a381bf4afebb5ed3cc848428d6206696d4417f29f7557b51d7da0bfa88f56ca"} Nov 22 12:11:48 crc kubenswrapper[4772]: I1122 12:11:48.875570 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-6645986957-d4fbm" Nov 22 12:11:48 crc kubenswrapper[4772]: I1122 12:11:48.903322 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6645986957-d4fbm" podStartSLOduration=3.90329829 podStartE2EDuration="3.90329829s" podCreationTimestamp="2025-11-22 12:11:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:11:48.899896985 +0000 UTC m=+5629.139341489" watchObservedRunningTime="2025-11-22 12:11:48.90329829 +0000 UTC m=+5629.142742784" Nov 22 12:11:50 crc kubenswrapper[4772]: I1122 12:11:50.897402 4772 generic.go:334] "Generic (PLEG): container finished" podID="76dd9bf3-3e2b-4ff9-b1e2-598d8167622c" containerID="6dc644dd739b3dc370a09046e7e77f2a4e7e6130aa1be761e4e0f58e1446ba01" exitCode=0 Nov 22 12:11:50 crc kubenswrapper[4772]: I1122 12:11:50.897495 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-pr6j5" event={"ID":"76dd9bf3-3e2b-4ff9-b1e2-598d8167622c","Type":"ContainerDied","Data":"6dc644dd739b3dc370a09046e7e77f2a4e7e6130aa1be761e4e0f58e1446ba01"} Nov 22 12:11:51 crc kubenswrapper[4772]: I1122 12:11:51.107299 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 22 12:11:51 crc kubenswrapper[4772]: I1122 12:11:51.378426 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 22 12:11:51 crc kubenswrapper[4772]: I1122 12:11:51.378494 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 22 12:11:51 crc kubenswrapper[4772]: I1122 12:11:51.395670 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 22 12:11:51 crc kubenswrapper[4772]: I1122 12:11:51.912328 4772 generic.go:334] "Generic (PLEG): container finished" podID="237dfef1-e073-42dc-8430-802b906015e7" containerID="1d0435c73fc7da817dcfea45c654eaf0f5a9ebbfd50ce85c1abd07cd6c6168eb" exitCode=0 Nov 22 12:11:51 crc kubenswrapper[4772]: I1122 12:11:51.912535 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-djw46" event={"ID":"237dfef1-e073-42dc-8430-802b906015e7","Type":"ContainerDied","Data":"1d0435c73fc7da817dcfea45c654eaf0f5a9ebbfd50ce85c1abd07cd6c6168eb"} Nov 22 12:11:52 crc kubenswrapper[4772]: I1122 12:11:52.289618 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-pr6j5" Nov 22 12:11:52 crc kubenswrapper[4772]: I1122 12:11:52.390554 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76dd9bf3-3e2b-4ff9-b1e2-598d8167622c-config-data\") pod \"76dd9bf3-3e2b-4ff9-b1e2-598d8167622c\" (UID: \"76dd9bf3-3e2b-4ff9-b1e2-598d8167622c\") " Nov 22 12:11:52 crc kubenswrapper[4772]: I1122 12:11:52.390836 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76dd9bf3-3e2b-4ff9-b1e2-598d8167622c-scripts\") pod \"76dd9bf3-3e2b-4ff9-b1e2-598d8167622c\" (UID: \"76dd9bf3-3e2b-4ff9-b1e2-598d8167622c\") " Nov 22 12:11:52 crc kubenswrapper[4772]: I1122 12:11:52.390917 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnbdw\" (UniqueName: \"kubernetes.io/projected/76dd9bf3-3e2b-4ff9-b1e2-598d8167622c-kube-api-access-lnbdw\") pod \"76dd9bf3-3e2b-4ff9-b1e2-598d8167622c\" (UID: \"76dd9bf3-3e2b-4ff9-b1e2-598d8167622c\") " Nov 22 12:11:52 crc kubenswrapper[4772]: I1122 12:11:52.390981 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76dd9bf3-3e2b-4ff9-b1e2-598d8167622c-combined-ca-bundle\") pod \"76dd9bf3-3e2b-4ff9-b1e2-598d8167622c\" (UID: \"76dd9bf3-3e2b-4ff9-b1e2-598d8167622c\") " Nov 22 12:11:52 crc kubenswrapper[4772]: I1122 12:11:52.396907 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76dd9bf3-3e2b-4ff9-b1e2-598d8167622c-kube-api-access-lnbdw" (OuterVolumeSpecName: "kube-api-access-lnbdw") pod "76dd9bf3-3e2b-4ff9-b1e2-598d8167622c" (UID: "76dd9bf3-3e2b-4ff9-b1e2-598d8167622c"). InnerVolumeSpecName "kube-api-access-lnbdw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:11:52 crc kubenswrapper[4772]: I1122 12:11:52.401485 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76dd9bf3-3e2b-4ff9-b1e2-598d8167622c-scripts" (OuterVolumeSpecName: "scripts") pod "76dd9bf3-3e2b-4ff9-b1e2-598d8167622c" (UID: "76dd9bf3-3e2b-4ff9-b1e2-598d8167622c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:11:52 crc kubenswrapper[4772]: I1122 12:11:52.427876 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76dd9bf3-3e2b-4ff9-b1e2-598d8167622c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "76dd9bf3-3e2b-4ff9-b1e2-598d8167622c" (UID: "76dd9bf3-3e2b-4ff9-b1e2-598d8167622c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:11:52 crc kubenswrapper[4772]: I1122 12:11:52.435464 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76dd9bf3-3e2b-4ff9-b1e2-598d8167622c-config-data" (OuterVolumeSpecName: "config-data") pod "76dd9bf3-3e2b-4ff9-b1e2-598d8167622c" (UID: "76dd9bf3-3e2b-4ff9-b1e2-598d8167622c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:11:52 crc kubenswrapper[4772]: I1122 12:11:52.499792 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76dd9bf3-3e2b-4ff9-b1e2-598d8167622c-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 12:11:52 crc kubenswrapper[4772]: I1122 12:11:52.499839 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lnbdw\" (UniqueName: \"kubernetes.io/projected/76dd9bf3-3e2b-4ff9-b1e2-598d8167622c-kube-api-access-lnbdw\") on node \"crc\" DevicePath \"\"" Nov 22 12:11:52 crc kubenswrapper[4772]: I1122 12:11:52.499854 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76dd9bf3-3e2b-4ff9-b1e2-598d8167622c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 12:11:52 crc kubenswrapper[4772]: I1122 12:11:52.499865 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76dd9bf3-3e2b-4ff9-b1e2-598d8167622c-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 12:11:52 crc kubenswrapper[4772]: I1122 12:11:52.923895 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-pr6j5" Nov 22 12:11:52 crc kubenswrapper[4772]: I1122 12:11:52.924008 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-pr6j5" event={"ID":"76dd9bf3-3e2b-4ff9-b1e2-598d8167622c","Type":"ContainerDied","Data":"ecb566145762b71d7dae7d2c0f56aa01f7e82675796c8ca8a2d27e79270d1519"} Nov 22 12:11:52 crc kubenswrapper[4772]: I1122 12:11:52.924070 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ecb566145762b71d7dae7d2c0f56aa01f7e82675796c8ca8a2d27e79270d1519" Nov 22 12:11:52 crc kubenswrapper[4772]: I1122 12:11:52.986584 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 22 12:11:52 crc kubenswrapper[4772]: E1122 12:11:52.987033 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76dd9bf3-3e2b-4ff9-b1e2-598d8167622c" containerName="nova-cell1-conductor-db-sync" Nov 22 12:11:52 crc kubenswrapper[4772]: I1122 12:11:52.987742 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="76dd9bf3-3e2b-4ff9-b1e2-598d8167622c" containerName="nova-cell1-conductor-db-sync" Nov 22 12:11:52 crc kubenswrapper[4772]: I1122 12:11:52.987970 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="76dd9bf3-3e2b-4ff9-b1e2-598d8167622c" containerName="nova-cell1-conductor-db-sync" Nov 22 12:11:52 crc kubenswrapper[4772]: I1122 12:11:52.988770 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 22 12:11:52 crc kubenswrapper[4772]: I1122 12:11:52.991191 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 22 12:11:52 crc kubenswrapper[4772]: I1122 12:11:52.996331 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 22 12:11:53 crc kubenswrapper[4772]: I1122 12:11:53.112128 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80e0105c-1be9-4fa6-b46f-263e87770142-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"80e0105c-1be9-4fa6-b46f-263e87770142\") " pod="openstack/nova-cell1-conductor-0" Nov 22 12:11:53 crc kubenswrapper[4772]: I1122 12:11:53.112320 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfg67\" (UniqueName: \"kubernetes.io/projected/80e0105c-1be9-4fa6-b46f-263e87770142-kube-api-access-nfg67\") pod \"nova-cell1-conductor-0\" (UID: \"80e0105c-1be9-4fa6-b46f-263e87770142\") " pod="openstack/nova-cell1-conductor-0" Nov 22 12:11:53 crc kubenswrapper[4772]: I1122 12:11:53.112369 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80e0105c-1be9-4fa6-b46f-263e87770142-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"80e0105c-1be9-4fa6-b46f-263e87770142\") " pod="openstack/nova-cell1-conductor-0" Nov 22 12:11:53 crc kubenswrapper[4772]: I1122 12:11:53.213583 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfg67\" (UniqueName: \"kubernetes.io/projected/80e0105c-1be9-4fa6-b46f-263e87770142-kube-api-access-nfg67\") pod \"nova-cell1-conductor-0\" (UID: \"80e0105c-1be9-4fa6-b46f-263e87770142\") " pod="openstack/nova-cell1-conductor-0" Nov 22 12:11:53 crc kubenswrapper[4772]: I1122 12:11:53.213645 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80e0105c-1be9-4fa6-b46f-263e87770142-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"80e0105c-1be9-4fa6-b46f-263e87770142\") " pod="openstack/nova-cell1-conductor-0" Nov 22 12:11:53 crc kubenswrapper[4772]: I1122 12:11:53.213704 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80e0105c-1be9-4fa6-b46f-263e87770142-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"80e0105c-1be9-4fa6-b46f-263e87770142\") " pod="openstack/nova-cell1-conductor-0" Nov 22 12:11:53 crc kubenswrapper[4772]: I1122 12:11:53.219001 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80e0105c-1be9-4fa6-b46f-263e87770142-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"80e0105c-1be9-4fa6-b46f-263e87770142\") " pod="openstack/nova-cell1-conductor-0" Nov 22 12:11:53 crc kubenswrapper[4772]: I1122 12:11:53.219226 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80e0105c-1be9-4fa6-b46f-263e87770142-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"80e0105c-1be9-4fa6-b46f-263e87770142\") " pod="openstack/nova-cell1-conductor-0" Nov 22 12:11:53 crc kubenswrapper[4772]: I1122 12:11:53.231349 4772 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfg67\" (UniqueName: \"kubernetes.io/projected/80e0105c-1be9-4fa6-b46f-263e87770142-kube-api-access-nfg67\") pod \"nova-cell1-conductor-0\" (UID: \"80e0105c-1be9-4fa6-b46f-263e87770142\") " pod="openstack/nova-cell1-conductor-0" Nov 22 12:11:53 crc kubenswrapper[4772]: I1122 12:11:53.307623 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 22 12:11:53 crc kubenswrapper[4772]: I1122 12:11:53.325747 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-djw46" Nov 22 12:11:53 crc kubenswrapper[4772]: I1122 12:11:53.522972 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/237dfef1-e073-42dc-8430-802b906015e7-scripts\") pod \"237dfef1-e073-42dc-8430-802b906015e7\" (UID: \"237dfef1-e073-42dc-8430-802b906015e7\") " Nov 22 12:11:53 crc kubenswrapper[4772]: I1122 12:11:53.523340 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/237dfef1-e073-42dc-8430-802b906015e7-config-data\") pod \"237dfef1-e073-42dc-8430-802b906015e7\" (UID: \"237dfef1-e073-42dc-8430-802b906015e7\") " Nov 22 12:11:53 crc kubenswrapper[4772]: I1122 12:11:53.523395 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/237dfef1-e073-42dc-8430-802b906015e7-combined-ca-bundle\") pod \"237dfef1-e073-42dc-8430-802b906015e7\" (UID: \"237dfef1-e073-42dc-8430-802b906015e7\") " Nov 22 12:11:53 crc kubenswrapper[4772]: I1122 12:11:53.523515 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96qxh\" (UniqueName: \"kubernetes.io/projected/237dfef1-e073-42dc-8430-802b906015e7-kube-api-access-96qxh\") pod \"237dfef1-e073-42dc-8430-802b906015e7\" (UID: \"237dfef1-e073-42dc-8430-802b906015e7\") " Nov 22 12:11:53 crc kubenswrapper[4772]: I1122 12:11:53.531737 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/237dfef1-e073-42dc-8430-802b906015e7-scripts" (OuterVolumeSpecName: "scripts") pod "237dfef1-e073-42dc-8430-802b906015e7" (UID: "237dfef1-e073-42dc-8430-802b906015e7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:11:53 crc kubenswrapper[4772]: I1122 12:11:53.540232 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/237dfef1-e073-42dc-8430-802b906015e7-kube-api-access-96qxh" (OuterVolumeSpecName: "kube-api-access-96qxh") pod "237dfef1-e073-42dc-8430-802b906015e7" (UID: "237dfef1-e073-42dc-8430-802b906015e7"). InnerVolumeSpecName "kube-api-access-96qxh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:11:53 crc kubenswrapper[4772]: I1122 12:11:53.552323 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/237dfef1-e073-42dc-8430-802b906015e7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "237dfef1-e073-42dc-8430-802b906015e7" (UID: "237dfef1-e073-42dc-8430-802b906015e7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:11:53 crc kubenswrapper[4772]: I1122 12:11:53.557240 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/237dfef1-e073-42dc-8430-802b906015e7-config-data" (OuterVolumeSpecName: "config-data") pod "237dfef1-e073-42dc-8430-802b906015e7" (UID: "237dfef1-e073-42dc-8430-802b906015e7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:11:53 crc kubenswrapper[4772]: I1122 12:11:53.625792 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/237dfef1-e073-42dc-8430-802b906015e7-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 12:11:53 crc kubenswrapper[4772]: I1122 12:11:53.625821 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/237dfef1-e073-42dc-8430-802b906015e7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 12:11:53 crc kubenswrapper[4772]: I1122 12:11:53.625833 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96qxh\" (UniqueName: \"kubernetes.io/projected/237dfef1-e073-42dc-8430-802b906015e7-kube-api-access-96qxh\") on node \"crc\" DevicePath \"\"" Nov 22 12:11:53 crc kubenswrapper[4772]: I1122 12:11:53.625844 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/237dfef1-e073-42dc-8430-802b906015e7-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 12:11:53 crc kubenswrapper[4772]: I1122 12:11:53.774945 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 22 12:11:53 crc kubenswrapper[4772]: I1122 12:11:53.935879 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-djw46" event={"ID":"237dfef1-e073-42dc-8430-802b906015e7","Type":"ContainerDied","Data":"eacfcfbed6363752d4a32a20ccdc47668ba3f5af05a495c56ac275e88a7659c1"} Nov 22 12:11:53 crc kubenswrapper[4772]: I1122 12:11:53.936547 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eacfcfbed6363752d4a32a20ccdc47668ba3f5af05a495c56ac275e88a7659c1" Nov 22 12:11:53 crc kubenswrapper[4772]: I1122 12:11:53.935895 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-djw46" Nov 22 12:11:53 crc kubenswrapper[4772]: I1122 12:11:53.938022 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"80e0105c-1be9-4fa6-b46f-263e87770142","Type":"ContainerStarted","Data":"b060a4a9240d550e1275c5d613ffe77d8d6e63aa1f4772006fe817acf2f8709b"} Nov 22 12:11:54 crc kubenswrapper[4772]: I1122 12:11:54.122762 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 22 12:11:54 crc kubenswrapper[4772]: I1122 12:11:54.123007 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7680c6fc-de4b-4ea0-944d-718399acb580" containerName="nova-api-log" containerID="cri-o://84e8584ba671adffbf9e8386896af33f0e827be71cf2d0231be1a8c5c830a7ef" gracePeriod=30 Nov 22 12:11:54 crc kubenswrapper[4772]: I1122 12:11:54.123171 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7680c6fc-de4b-4ea0-944d-718399acb580" containerName="nova-api-api" containerID="cri-o://7d3ec8d7dd24966130b44b823541272bd25bba77d0a05eeb61135feb7ab640f8" gracePeriod=30 Nov 22 12:11:54 crc kubenswrapper[4772]: I1122 12:11:54.179263 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 12:11:54 crc kubenswrapper[4772]: I1122 12:11:54.179500 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="3cd4fa62-b413-4a62-a3e0-1f7fc455c96e" containerName="nova-scheduler-scheduler" containerID="cri-o://8550460200cfba468c34776fe54b86b90452fc8f609132c3edff68cb0c86aee2" gracePeriod=30 Nov 22 12:11:54 crc kubenswrapper[4772]: I1122 12:11:54.189851 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 12:11:54 crc kubenswrapper[4772]: I1122 12:11:54.190115 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b021c8c5-98b1-437a-86cf-fcf7f5cc83c8" containerName="nova-metadata-log" containerID="cri-o://1cabe5404fd23ab8ec61c17ddec70671588d945d43c4ab79724d9a84fa75dffb" gracePeriod=30 Nov 22 12:11:54 crc kubenswrapper[4772]: I1122 12:11:54.190204 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b021c8c5-98b1-437a-86cf-fcf7f5cc83c8" containerName="nova-metadata-metadata" containerID="cri-o://e0cae61288e1c98067cb59768857aaa1d3459c55f26279b01053f45c731982fd" gracePeriod=30 Nov 22 12:11:54 crc kubenswrapper[4772]: I1122 12:11:54.705337 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 22 12:11:54 crc kubenswrapper[4772]: I1122 12:11:54.847139 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7680c6fc-de4b-4ea0-944d-718399acb580-logs\") pod \"7680c6fc-de4b-4ea0-944d-718399acb580\" (UID: \"7680c6fc-de4b-4ea0-944d-718399acb580\") " Nov 22 12:11:54 crc kubenswrapper[4772]: I1122 12:11:54.847253 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpqtr\" (UniqueName: \"kubernetes.io/projected/7680c6fc-de4b-4ea0-944d-718399acb580-kube-api-access-zpqtr\") pod \"7680c6fc-de4b-4ea0-944d-718399acb580\" (UID: \"7680c6fc-de4b-4ea0-944d-718399acb580\") " Nov 22 12:11:54 crc kubenswrapper[4772]: I1122 12:11:54.847326 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7680c6fc-de4b-4ea0-944d-718399acb580-config-data\") pod \"7680c6fc-de4b-4ea0-944d-718399acb580\" (UID: \"7680c6fc-de4b-4ea0-944d-718399acb580\") " Nov 22 12:11:54 crc kubenswrapper[4772]: I1122 12:11:54.847363 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7680c6fc-de4b-4ea0-944d-718399acb580-combined-ca-bundle\") pod \"7680c6fc-de4b-4ea0-944d-718399acb580\" (UID: \"7680c6fc-de4b-4ea0-944d-718399acb580\") " Nov 22 12:11:54 crc kubenswrapper[4772]: I1122 12:11:54.848176 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7680c6fc-de4b-4ea0-944d-718399acb580-logs" (OuterVolumeSpecName: "logs") pod "7680c6fc-de4b-4ea0-944d-718399acb580" (UID: "7680c6fc-de4b-4ea0-944d-718399acb580"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 12:11:54 crc kubenswrapper[4772]: I1122 12:11:54.853466 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7680c6fc-de4b-4ea0-944d-718399acb580-kube-api-access-zpqtr" (OuterVolumeSpecName: "kube-api-access-zpqtr") pod "7680c6fc-de4b-4ea0-944d-718399acb580" (UID: "7680c6fc-de4b-4ea0-944d-718399acb580"). InnerVolumeSpecName "kube-api-access-zpqtr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:11:54 crc kubenswrapper[4772]: I1122 12:11:54.874411 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7680c6fc-de4b-4ea0-944d-718399acb580-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7680c6fc-de4b-4ea0-944d-718399acb580" (UID: "7680c6fc-de4b-4ea0-944d-718399acb580"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:11:54 crc kubenswrapper[4772]: I1122 12:11:54.880793 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7680c6fc-de4b-4ea0-944d-718399acb580-config-data" (OuterVolumeSpecName: "config-data") pod "7680c6fc-de4b-4ea0-944d-718399acb580" (UID: "7680c6fc-de4b-4ea0-944d-718399acb580"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:11:54 crc kubenswrapper[4772]: I1122 12:11:54.947987 4772 generic.go:334] "Generic (PLEG): container finished" podID="7680c6fc-de4b-4ea0-944d-718399acb580" containerID="7d3ec8d7dd24966130b44b823541272bd25bba77d0a05eeb61135feb7ab640f8" exitCode=0 Nov 22 12:11:54 crc kubenswrapper[4772]: I1122 12:11:54.948032 4772 generic.go:334] "Generic (PLEG): container finished" podID="7680c6fc-de4b-4ea0-944d-718399acb580" containerID="84e8584ba671adffbf9e8386896af33f0e827be71cf2d0231be1a8c5c830a7ef" exitCode=143 Nov 22 12:11:54 crc kubenswrapper[4772]: I1122 12:11:54.948152 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7680c6fc-de4b-4ea0-944d-718399acb580","Type":"ContainerDied","Data":"7d3ec8d7dd24966130b44b823541272bd25bba77d0a05eeb61135feb7ab640f8"} Nov 22 12:11:54 crc kubenswrapper[4772]: I1122 12:11:54.948187 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7680c6fc-de4b-4ea0-944d-718399acb580","Type":"ContainerDied","Data":"84e8584ba671adffbf9e8386896af33f0e827be71cf2d0231be1a8c5c830a7ef"} Nov 22 12:11:54 crc kubenswrapper[4772]: I1122 12:11:54.948200 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7680c6fc-de4b-4ea0-944d-718399acb580","Type":"ContainerDied","Data":"ad68df18e0b4f37c7187aa44d87a3c31921dee15866fecf988362cee4bb2c556"} Nov 22 12:11:54 crc kubenswrapper[4772]: I1122 12:11:54.948221 4772 scope.go:117] "RemoveContainer" containerID="7d3ec8d7dd24966130b44b823541272bd25bba77d0a05eeb61135feb7ab640f8" Nov 22 12:11:54 crc kubenswrapper[4772]: I1122 12:11:54.949842 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 22 12:11:54 crc kubenswrapper[4772]: I1122 12:11:54.950725 4772 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7680c6fc-de4b-4ea0-944d-718399acb580-logs\") on node \"crc\" DevicePath \"\"" Nov 22 12:11:54 crc kubenswrapper[4772]: I1122 12:11:54.951267 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpqtr\" (UniqueName: \"kubernetes.io/projected/7680c6fc-de4b-4ea0-944d-718399acb580-kube-api-access-zpqtr\") on node \"crc\" DevicePath \"\"" Nov 22 12:11:54 crc kubenswrapper[4772]: I1122 12:11:54.951355 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7680c6fc-de4b-4ea0-944d-718399acb580-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 12:11:54 crc kubenswrapper[4772]: I1122 12:11:54.951369 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7680c6fc-de4b-4ea0-944d-718399acb580-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 12:11:54 crc kubenswrapper[4772]: I1122 12:11:54.954496 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"80e0105c-1be9-4fa6-b46f-263e87770142","Type":"ContainerStarted","Data":"1985291c7023a7e51e83b9e00fc9bcfc4ebc7eafe4fdf0876f8ce406446a6f6f"} Nov 22 12:11:54 crc kubenswrapper[4772]: I1122 12:11:54.954889 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Nov 22 12:11:54 crc kubenswrapper[4772]: I1122 12:11:54.968430 4772 generic.go:334] "Generic (PLEG): container finished" podID="b021c8c5-98b1-437a-86cf-fcf7f5cc83c8" 
containerID="e0cae61288e1c98067cb59768857aaa1d3459c55f26279b01053f45c731982fd" exitCode=0 Nov 22 12:11:54 crc kubenswrapper[4772]: I1122 12:11:54.968465 4772 generic.go:334] "Generic (PLEG): container finished" podID="b021c8c5-98b1-437a-86cf-fcf7f5cc83c8" containerID="1cabe5404fd23ab8ec61c17ddec70671588d945d43c4ab79724d9a84fa75dffb" exitCode=143 Nov 22 12:11:54 crc kubenswrapper[4772]: I1122 12:11:54.968494 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b021c8c5-98b1-437a-86cf-fcf7f5cc83c8","Type":"ContainerDied","Data":"e0cae61288e1c98067cb59768857aaa1d3459c55f26279b01053f45c731982fd"} Nov 22 12:11:54 crc kubenswrapper[4772]: I1122 12:11:54.968529 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b021c8c5-98b1-437a-86cf-fcf7f5cc83c8","Type":"ContainerDied","Data":"1cabe5404fd23ab8ec61c17ddec70671588d945d43c4ab79724d9a84fa75dffb"} Nov 22 12:11:54 crc kubenswrapper[4772]: I1122 12:11:54.981838 4772 scope.go:117] "RemoveContainer" containerID="84e8584ba671adffbf9e8386896af33f0e827be71cf2d0231be1a8c5c830a7ef" Nov 22 12:11:54 crc kubenswrapper[4772]: I1122 12:11:54.985519 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.985500874 podStartE2EDuration="2.985500874s" podCreationTimestamp="2025-11-22 12:11:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:11:54.973167447 +0000 UTC m=+5635.212611971" watchObservedRunningTime="2025-11-22 12:11:54.985500874 +0000 UTC m=+5635.224945368" Nov 22 12:11:55 crc kubenswrapper[4772]: I1122 12:11:55.003448 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 22 12:11:55 crc kubenswrapper[4772]: I1122 12:11:55.017902 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 22 12:11:55 crc kubenswrapper[4772]: I1122 12:11:55.023896 4772 scope.go:117] "RemoveContainer" containerID="7d3ec8d7dd24966130b44b823541272bd25bba77d0a05eeb61135feb7ab640f8" Nov 22 12:11:55 crc kubenswrapper[4772]: E1122 12:11:55.026514 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d3ec8d7dd24966130b44b823541272bd25bba77d0a05eeb61135feb7ab640f8\": container with ID starting with 7d3ec8d7dd24966130b44b823541272bd25bba77d0a05eeb61135feb7ab640f8 not found: ID does not exist" containerID="7d3ec8d7dd24966130b44b823541272bd25bba77d0a05eeb61135feb7ab640f8" Nov 22 12:11:55 crc kubenswrapper[4772]: I1122 12:11:55.026566 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d3ec8d7dd24966130b44b823541272bd25bba77d0a05eeb61135feb7ab640f8"} err="failed to get container status \"7d3ec8d7dd24966130b44b823541272bd25bba77d0a05eeb61135feb7ab640f8\": rpc error: code = NotFound desc = could not find container \"7d3ec8d7dd24966130b44b823541272bd25bba77d0a05eeb61135feb7ab640f8\": container with ID starting with 7d3ec8d7dd24966130b44b823541272bd25bba77d0a05eeb61135feb7ab640f8 not found: ID does not exist" Nov 22 12:11:55 crc kubenswrapper[4772]: I1122 12:11:55.026598 4772 scope.go:117] "RemoveContainer" containerID="84e8584ba671adffbf9e8386896af33f0e827be71cf2d0231be1a8c5c830a7ef" Nov 22 12:11:55 crc kubenswrapper[4772]: E1122 12:11:55.026905 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound 
desc = could not find container \"84e8584ba671adffbf9e8386896af33f0e827be71cf2d0231be1a8c5c830a7ef\": container with ID starting with 84e8584ba671adffbf9e8386896af33f0e827be71cf2d0231be1a8c5c830a7ef not found: ID does not exist" containerID="84e8584ba671adffbf9e8386896af33f0e827be71cf2d0231be1a8c5c830a7ef" Nov 22 12:11:55 crc kubenswrapper[4772]: I1122 12:11:55.026947 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84e8584ba671adffbf9e8386896af33f0e827be71cf2d0231be1a8c5c830a7ef"} err="failed to get container status \"84e8584ba671adffbf9e8386896af33f0e827be71cf2d0231be1a8c5c830a7ef\": rpc error: code = NotFound desc = could not find container \"84e8584ba671adffbf9e8386896af33f0e827be71cf2d0231be1a8c5c830a7ef\": container with ID starting with 84e8584ba671adffbf9e8386896af33f0e827be71cf2d0231be1a8c5c830a7ef not found: ID does not exist" Nov 22 12:11:55 crc kubenswrapper[4772]: I1122 12:11:55.026977 4772 scope.go:117] "RemoveContainer" containerID="7d3ec8d7dd24966130b44b823541272bd25bba77d0a05eeb61135feb7ab640f8" Nov 22 12:11:55 crc kubenswrapper[4772]: I1122 12:11:55.028232 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d3ec8d7dd24966130b44b823541272bd25bba77d0a05eeb61135feb7ab640f8"} err="failed to get container status \"7d3ec8d7dd24966130b44b823541272bd25bba77d0a05eeb61135feb7ab640f8\": rpc error: code = NotFound desc = could not find container \"7d3ec8d7dd24966130b44b823541272bd25bba77d0a05eeb61135feb7ab640f8\": container with ID starting with 7d3ec8d7dd24966130b44b823541272bd25bba77d0a05eeb61135feb7ab640f8 not found: ID does not exist" Nov 22 12:11:55 crc kubenswrapper[4772]: I1122 12:11:55.028253 4772 scope.go:117] "RemoveContainer" containerID="84e8584ba671adffbf9e8386896af33f0e827be71cf2d0231be1a8c5c830a7ef" Nov 22 12:11:55 crc kubenswrapper[4772]: I1122 12:11:55.028486 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84e8584ba671adffbf9e8386896af33f0e827be71cf2d0231be1a8c5c830a7ef"} err="failed to get container status \"84e8584ba671adffbf9e8386896af33f0e827be71cf2d0231be1a8c5c830a7ef\": rpc error: code = NotFound desc = could not find container \"84e8584ba671adffbf9e8386896af33f0e827be71cf2d0231be1a8c5c830a7ef\": container with ID starting with 84e8584ba671adffbf9e8386896af33f0e827be71cf2d0231be1a8c5c830a7ef not found: ID does not exist" Nov 22 12:11:55 crc kubenswrapper[4772]: I1122 12:11:55.039919 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 22 12:11:55 crc kubenswrapper[4772]: E1122 12:11:55.040497 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7680c6fc-de4b-4ea0-944d-718399acb580" containerName="nova-api-log" Nov 22 12:11:55 crc kubenswrapper[4772]: I1122 12:11:55.040534 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="7680c6fc-de4b-4ea0-944d-718399acb580" containerName="nova-api-log" Nov 22 12:11:55 crc kubenswrapper[4772]: E1122 12:11:55.040549 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7680c6fc-de4b-4ea0-944d-718399acb580" containerName="nova-api-api" Nov 22 12:11:55 crc kubenswrapper[4772]: I1122 12:11:55.040556 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="7680c6fc-de4b-4ea0-944d-718399acb580" containerName="nova-api-api" Nov 22 12:11:55 crc kubenswrapper[4772]: E1122 12:11:55.040582 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="237dfef1-e073-42dc-8430-802b906015e7" 
containerName="nova-manage" Nov 22 12:11:55 crc kubenswrapper[4772]: I1122 12:11:55.040589 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="237dfef1-e073-42dc-8430-802b906015e7" containerName="nova-manage" Nov 22 12:11:55 crc kubenswrapper[4772]: I1122 12:11:55.040771 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="7680c6fc-de4b-4ea0-944d-718399acb580" containerName="nova-api-log" Nov 22 12:11:55 crc kubenswrapper[4772]: I1122 12:11:55.040801 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="7680c6fc-de4b-4ea0-944d-718399acb580" containerName="nova-api-api" Nov 22 12:11:55 crc kubenswrapper[4772]: I1122 12:11:55.040815 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="237dfef1-e073-42dc-8430-802b906015e7" containerName="nova-manage" Nov 22 12:11:55 crc kubenswrapper[4772]: I1122 12:11:55.042101 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 22 12:11:55 crc kubenswrapper[4772]: I1122 12:11:55.044828 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 22 12:11:55 crc kubenswrapper[4772]: I1122 12:11:55.050428 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 22 12:11:55 crc kubenswrapper[4772]: I1122 12:11:55.155527 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de1bcdfc-43de-450a-8dfc-7a2c61450832-config-data\") pod \"nova-api-0\" (UID: \"de1bcdfc-43de-450a-8dfc-7a2c61450832\") " pod="openstack/nova-api-0" Nov 22 12:11:55 crc kubenswrapper[4772]: I1122 12:11:55.155671 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de1bcdfc-43de-450a-8dfc-7a2c61450832-logs\") pod \"nova-api-0\" (UID: \"de1bcdfc-43de-450a-8dfc-7a2c61450832\") " pod="openstack/nova-api-0" Nov 22 12:11:55 crc kubenswrapper[4772]: I1122 12:11:55.155889 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcn5t\" (UniqueName: \"kubernetes.io/projected/de1bcdfc-43de-450a-8dfc-7a2c61450832-kube-api-access-tcn5t\") pod \"nova-api-0\" (UID: \"de1bcdfc-43de-450a-8dfc-7a2c61450832\") " pod="openstack/nova-api-0" Nov 22 12:11:55 crc kubenswrapper[4772]: I1122 12:11:55.156076 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de1bcdfc-43de-450a-8dfc-7a2c61450832-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"de1bcdfc-43de-450a-8dfc-7a2c61450832\") " pod="openstack/nova-api-0" Nov 22 12:11:55 crc kubenswrapper[4772]: I1122 12:11:55.257278 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de1bcdfc-43de-450a-8dfc-7a2c61450832-logs\") pod \"nova-api-0\" (UID: \"de1bcdfc-43de-450a-8dfc-7a2c61450832\") " pod="openstack/nova-api-0" Nov 22 12:11:55 crc kubenswrapper[4772]: I1122 12:11:55.257359 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcn5t\" (UniqueName: \"kubernetes.io/projected/de1bcdfc-43de-450a-8dfc-7a2c61450832-kube-api-access-tcn5t\") pod \"nova-api-0\" (UID: \"de1bcdfc-43de-450a-8dfc-7a2c61450832\") " pod="openstack/nova-api-0" Nov 22 12:11:55 crc kubenswrapper[4772]: I1122 12:11:55.257406 4772 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de1bcdfc-43de-450a-8dfc-7a2c61450832-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"de1bcdfc-43de-450a-8dfc-7a2c61450832\") " pod="openstack/nova-api-0" Nov 22 12:11:55 crc kubenswrapper[4772]: I1122 12:11:55.257472 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de1bcdfc-43de-450a-8dfc-7a2c61450832-config-data\") pod \"nova-api-0\" (UID: \"de1bcdfc-43de-450a-8dfc-7a2c61450832\") " pod="openstack/nova-api-0" Nov 22 12:11:55 crc kubenswrapper[4772]: I1122 12:11:55.257808 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de1bcdfc-43de-450a-8dfc-7a2c61450832-logs\") pod \"nova-api-0\" (UID: \"de1bcdfc-43de-450a-8dfc-7a2c61450832\") " pod="openstack/nova-api-0" Nov 22 12:11:55 crc kubenswrapper[4772]: I1122 12:11:55.261934 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de1bcdfc-43de-450a-8dfc-7a2c61450832-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"de1bcdfc-43de-450a-8dfc-7a2c61450832\") " pod="openstack/nova-api-0" Nov 22 12:11:55 crc kubenswrapper[4772]: I1122 12:11:55.262037 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de1bcdfc-43de-450a-8dfc-7a2c61450832-config-data\") pod \"nova-api-0\" (UID: \"de1bcdfc-43de-450a-8dfc-7a2c61450832\") " pod="openstack/nova-api-0" Nov 22 12:11:55 crc kubenswrapper[4772]: I1122 12:11:55.283493 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcn5t\" (UniqueName: \"kubernetes.io/projected/de1bcdfc-43de-450a-8dfc-7a2c61450832-kube-api-access-tcn5t\") pod \"nova-api-0\" (UID: \"de1bcdfc-43de-450a-8dfc-7a2c61450832\") " pod="openstack/nova-api-0" Nov 22 12:11:55 crc kubenswrapper[4772]: I1122 12:11:55.350872 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 12:11:55 crc kubenswrapper[4772]: I1122 12:11:55.363448 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 22 12:11:55 crc kubenswrapper[4772]: I1122 12:11:55.427578 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7680c6fc-de4b-4ea0-944d-718399acb580" path="/var/lib/kubelet/pods/7680c6fc-de4b-4ea0-944d-718399acb580/volumes" Nov 22 12:11:55 crc kubenswrapper[4772]: I1122 12:11:55.460082 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b021c8c5-98b1-437a-86cf-fcf7f5cc83c8-logs\") pod \"b021c8c5-98b1-437a-86cf-fcf7f5cc83c8\" (UID: \"b021c8c5-98b1-437a-86cf-fcf7f5cc83c8\") " Nov 22 12:11:55 crc kubenswrapper[4772]: I1122 12:11:55.460181 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b021c8c5-98b1-437a-86cf-fcf7f5cc83c8-combined-ca-bundle\") pod \"b021c8c5-98b1-437a-86cf-fcf7f5cc83c8\" (UID: \"b021c8c5-98b1-437a-86cf-fcf7f5cc83c8\") " Nov 22 12:11:55 crc kubenswrapper[4772]: I1122 12:11:55.460213 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b021c8c5-98b1-437a-86cf-fcf7f5cc83c8-config-data\") pod \"b021c8c5-98b1-437a-86cf-fcf7f5cc83c8\" (UID: \"b021c8c5-98b1-437a-86cf-fcf7f5cc83c8\") " Nov 22 12:11:55 crc kubenswrapper[4772]: I1122 12:11:55.460357 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkmxl\" (UniqueName: \"kubernetes.io/projected/b021c8c5-98b1-437a-86cf-fcf7f5cc83c8-kube-api-access-hkmxl\") pod \"b021c8c5-98b1-437a-86cf-fcf7f5cc83c8\" (UID: \"b021c8c5-98b1-437a-86cf-fcf7f5cc83c8\") " Nov 22 12:11:55 crc kubenswrapper[4772]: I1122 12:11:55.462184 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b021c8c5-98b1-437a-86cf-fcf7f5cc83c8-logs" (OuterVolumeSpecName: "logs") pod "b021c8c5-98b1-437a-86cf-fcf7f5cc83c8" (UID: "b021c8c5-98b1-437a-86cf-fcf7f5cc83c8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 12:11:55 crc kubenswrapper[4772]: I1122 12:11:55.470293 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b021c8c5-98b1-437a-86cf-fcf7f5cc83c8-kube-api-access-hkmxl" (OuterVolumeSpecName: "kube-api-access-hkmxl") pod "b021c8c5-98b1-437a-86cf-fcf7f5cc83c8" (UID: "b021c8c5-98b1-437a-86cf-fcf7f5cc83c8"). InnerVolumeSpecName "kube-api-access-hkmxl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:11:55 crc kubenswrapper[4772]: I1122 12:11:55.505806 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b021c8c5-98b1-437a-86cf-fcf7f5cc83c8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b021c8c5-98b1-437a-86cf-fcf7f5cc83c8" (UID: "b021c8c5-98b1-437a-86cf-fcf7f5cc83c8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:11:55 crc kubenswrapper[4772]: I1122 12:11:55.511876 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b021c8c5-98b1-437a-86cf-fcf7f5cc83c8-config-data" (OuterVolumeSpecName: "config-data") pod "b021c8c5-98b1-437a-86cf-fcf7f5cc83c8" (UID: "b021c8c5-98b1-437a-86cf-fcf7f5cc83c8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:11:55 crc kubenswrapper[4772]: I1122 12:11:55.562713 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hkmxl\" (UniqueName: \"kubernetes.io/projected/b021c8c5-98b1-437a-86cf-fcf7f5cc83c8-kube-api-access-hkmxl\") on node \"crc\" DevicePath \"\"" Nov 22 12:11:55 crc kubenswrapper[4772]: I1122 12:11:55.562756 4772 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b021c8c5-98b1-437a-86cf-fcf7f5cc83c8-logs\") on node \"crc\" DevicePath \"\"" Nov 22 12:11:55 crc kubenswrapper[4772]: I1122 12:11:55.562768 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b021c8c5-98b1-437a-86cf-fcf7f5cc83c8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 12:11:55 crc kubenswrapper[4772]: I1122 12:11:55.562777 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b021c8c5-98b1-437a-86cf-fcf7f5cc83c8-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 12:11:55 crc kubenswrapper[4772]: I1122 12:11:55.858243 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 22 12:11:55 crc kubenswrapper[4772]: I1122 12:11:55.992477 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b021c8c5-98b1-437a-86cf-fcf7f5cc83c8","Type":"ContainerDied","Data":"540bd0bbf5d0e3fab81b3222046a2f1faa65ecec9112ba39528d73f8acbc964b"} Nov 22 12:11:55 crc kubenswrapper[4772]: I1122 12:11:55.992565 4772 scope.go:117] "RemoveContainer" containerID="e0cae61288e1c98067cb59768857aaa1d3459c55f26279b01053f45c731982fd" Nov 22 12:11:55 crc kubenswrapper[4772]: I1122 12:11:55.992680 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 12:11:56 crc kubenswrapper[4772]: I1122 12:11:56.003454 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"de1bcdfc-43de-450a-8dfc-7a2c61450832","Type":"ContainerStarted","Data":"38ab179840961b7847d65006765ccf4f95636ecd21804ce3adc24a8362059556"} Nov 22 12:11:56 crc kubenswrapper[4772]: I1122 12:11:56.037318 4772 scope.go:117] "RemoveContainer" containerID="1cabe5404fd23ab8ec61c17ddec70671588d945d43c4ab79724d9a84fa75dffb" Nov 22 12:11:56 crc kubenswrapper[4772]: I1122 12:11:56.044469 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 12:11:56 crc kubenswrapper[4772]: I1122 12:11:56.079523 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 12:11:56 crc kubenswrapper[4772]: I1122 12:11:56.092786 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 22 12:11:56 crc kubenswrapper[4772]: E1122 12:11:56.093417 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b021c8c5-98b1-437a-86cf-fcf7f5cc83c8" containerName="nova-metadata-log" Nov 22 12:11:56 crc kubenswrapper[4772]: I1122 12:11:56.093444 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="b021c8c5-98b1-437a-86cf-fcf7f5cc83c8" containerName="nova-metadata-log" Nov 22 12:11:56 crc kubenswrapper[4772]: E1122 12:11:56.093493 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b021c8c5-98b1-437a-86cf-fcf7f5cc83c8" containerName="nova-metadata-metadata" Nov 22 12:11:56 crc kubenswrapper[4772]: I1122 12:11:56.093505 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="b021c8c5-98b1-437a-86cf-fcf7f5cc83c8" containerName="nova-metadata-metadata" Nov 22 12:11:56 crc kubenswrapper[4772]: I1122 12:11:56.093870 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="b021c8c5-98b1-437a-86cf-fcf7f5cc83c8" containerName="nova-metadata-log" Nov 22 12:11:56 crc kubenswrapper[4772]: I1122 12:11:56.093902 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="b021c8c5-98b1-437a-86cf-fcf7f5cc83c8" containerName="nova-metadata-metadata" Nov 22 12:11:56 crc kubenswrapper[4772]: I1122 12:11:56.095687 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 12:11:56 crc kubenswrapper[4772]: I1122 12:11:56.099424 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 22 12:11:56 crc kubenswrapper[4772]: I1122 12:11:56.103953 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 12:11:56 crc kubenswrapper[4772]: I1122 12:11:56.107348 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Nov 22 12:11:56 crc kubenswrapper[4772]: I1122 12:11:56.128755 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Nov 22 12:11:56 crc kubenswrapper[4772]: I1122 12:11:56.278904 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9765c8d-8fb9-44d7-b82a-eec65fd4c966-logs\") pod \"nova-metadata-0\" (UID: \"a9765c8d-8fb9-44d7-b82a-eec65fd4c966\") " pod="openstack/nova-metadata-0" Nov 22 12:11:56 crc kubenswrapper[4772]: I1122 12:11:56.279445 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9765c8d-8fb9-44d7-b82a-eec65fd4c966-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a9765c8d-8fb9-44d7-b82a-eec65fd4c966\") " pod="openstack/nova-metadata-0" Nov 22 12:11:56 crc kubenswrapper[4772]: I1122 12:11:56.279483 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9765c8d-8fb9-44d7-b82a-eec65fd4c966-config-data\") pod \"nova-metadata-0\" (UID: \"a9765c8d-8fb9-44d7-b82a-eec65fd4c966\") " pod="openstack/nova-metadata-0" Nov 22 12:11:56 crc kubenswrapper[4772]: I1122 12:11:56.279521 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mp9n\" (UniqueName: \"kubernetes.io/projected/a9765c8d-8fb9-44d7-b82a-eec65fd4c966-kube-api-access-7mp9n\") pod \"nova-metadata-0\" (UID: \"a9765c8d-8fb9-44d7-b82a-eec65fd4c966\") " pod="openstack/nova-metadata-0" Nov 22 12:11:56 crc kubenswrapper[4772]: I1122 12:11:56.381020 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9765c8d-8fb9-44d7-b82a-eec65fd4c966-logs\") pod \"nova-metadata-0\" (UID: \"a9765c8d-8fb9-44d7-b82a-eec65fd4c966\") " pod="openstack/nova-metadata-0" Nov 22 12:11:56 crc kubenswrapper[4772]: I1122 12:11:56.381398 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9765c8d-8fb9-44d7-b82a-eec65fd4c966-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a9765c8d-8fb9-44d7-b82a-eec65fd4c966\") " pod="openstack/nova-metadata-0" Nov 22 12:11:56 crc kubenswrapper[4772]: I1122 12:11:56.381495 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9765c8d-8fb9-44d7-b82a-eec65fd4c966-config-data\") pod \"nova-metadata-0\" (UID: \"a9765c8d-8fb9-44d7-b82a-eec65fd4c966\") " pod="openstack/nova-metadata-0" Nov 22 12:11:56 crc kubenswrapper[4772]: I1122 12:11:56.381575 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mp9n\" (UniqueName: 
\"kubernetes.io/projected/a9765c8d-8fb9-44d7-b82a-eec65fd4c966-kube-api-access-7mp9n\") pod \"nova-metadata-0\" (UID: \"a9765c8d-8fb9-44d7-b82a-eec65fd4c966\") " pod="openstack/nova-metadata-0" Nov 22 12:11:56 crc kubenswrapper[4772]: I1122 12:11:56.381582 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9765c8d-8fb9-44d7-b82a-eec65fd4c966-logs\") pod \"nova-metadata-0\" (UID: \"a9765c8d-8fb9-44d7-b82a-eec65fd4c966\") " pod="openstack/nova-metadata-0" Nov 22 12:11:56 crc kubenswrapper[4772]: I1122 12:11:56.391758 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9765c8d-8fb9-44d7-b82a-eec65fd4c966-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a9765c8d-8fb9-44d7-b82a-eec65fd4c966\") " pod="openstack/nova-metadata-0" Nov 22 12:11:56 crc kubenswrapper[4772]: I1122 12:11:56.392917 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9765c8d-8fb9-44d7-b82a-eec65fd4c966-config-data\") pod \"nova-metadata-0\" (UID: \"a9765c8d-8fb9-44d7-b82a-eec65fd4c966\") " pod="openstack/nova-metadata-0" Nov 22 12:11:56 crc kubenswrapper[4772]: I1122 12:11:56.412371 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mp9n\" (UniqueName: \"kubernetes.io/projected/a9765c8d-8fb9-44d7-b82a-eec65fd4c966-kube-api-access-7mp9n\") pod \"nova-metadata-0\" (UID: \"a9765c8d-8fb9-44d7-b82a-eec65fd4c966\") " pod="openstack/nova-metadata-0" Nov 22 12:11:56 crc kubenswrapper[4772]: I1122 12:11:56.429019 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 12:11:56 crc kubenswrapper[4772]: I1122 12:11:56.440288 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6645986957-d4fbm" Nov 22 12:11:56 crc kubenswrapper[4772]: I1122 12:11:56.527042 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6dc96bcddc-hfvg4"] Nov 22 12:11:56 crc kubenswrapper[4772]: I1122 12:11:56.527720 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6dc96bcddc-hfvg4" podUID="3f3d2fee-4291-428c-b664-888afddd412c" containerName="dnsmasq-dns" containerID="cri-o://59eda251079280ad251f66b766c259c940c976ce6d5b38f0e66e5368ccbb8288" gracePeriod=10 Nov 22 12:11:56 crc kubenswrapper[4772]: W1122 12:11:56.959445 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9765c8d_8fb9_44d7_b82a_eec65fd4c966.slice/crio-a9b588650b93775eaa7cc08e5697950fc51725a462c61d0ccc792a64ab13c2a3 WatchSource:0}: Error finding container a9b588650b93775eaa7cc08e5697950fc51725a462c61d0ccc792a64ab13c2a3: Status 404 returned error can't find the container with id a9b588650b93775eaa7cc08e5697950fc51725a462c61d0ccc792a64ab13c2a3 Nov 22 12:11:56 crc kubenswrapper[4772]: I1122 12:11:56.964576 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 12:11:57 crc kubenswrapper[4772]: I1122 12:11:57.006682 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6dc96bcddc-hfvg4" Nov 22 12:11:57 crc kubenswrapper[4772]: I1122 12:11:57.017239 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a9765c8d-8fb9-44d7-b82a-eec65fd4c966","Type":"ContainerStarted","Data":"a9b588650b93775eaa7cc08e5697950fc51725a462c61d0ccc792a64ab13c2a3"} Nov 22 12:11:57 crc kubenswrapper[4772]: I1122 12:11:57.021479 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"de1bcdfc-43de-450a-8dfc-7a2c61450832","Type":"ContainerStarted","Data":"2a8c2bbfe2c2bce2b71619395e5c67ea36c323e3a8170dcb02705f97846c5a4f"} Nov 22 12:11:57 crc kubenswrapper[4772]: I1122 12:11:57.021537 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"de1bcdfc-43de-450a-8dfc-7a2c61450832","Type":"ContainerStarted","Data":"4c2dee80ee306554d882b76621ff8e92e90cf00faa3c495192ad2a9535d8a5a5"} Nov 22 12:11:57 crc kubenswrapper[4772]: I1122 12:11:57.038691 4772 generic.go:334] "Generic (PLEG): container finished" podID="3f3d2fee-4291-428c-b664-888afddd412c" containerID="59eda251079280ad251f66b766c259c940c976ce6d5b38f0e66e5368ccbb8288" exitCode=0 Nov 22 12:11:57 crc kubenswrapper[4772]: I1122 12:11:57.038745 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6dc96bcddc-hfvg4" event={"ID":"3f3d2fee-4291-428c-b664-888afddd412c","Type":"ContainerDied","Data":"59eda251079280ad251f66b766c259c940c976ce6d5b38f0e66e5368ccbb8288"} Nov 22 12:11:57 crc kubenswrapper[4772]: I1122 12:11:57.038803 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6dc96bcddc-hfvg4" event={"ID":"3f3d2fee-4291-428c-b664-888afddd412c","Type":"ContainerDied","Data":"ceaff874c7dc2c1ce71b3f76adbe90dbf35099d51e27af44cdbed237ba148b7d"} Nov 22 12:11:57 crc kubenswrapper[4772]: I1122 12:11:57.038831 4772 scope.go:117] "RemoveContainer" containerID="59eda251079280ad251f66b766c259c940c976ce6d5b38f0e66e5368ccbb8288" Nov 22 12:11:57 crc kubenswrapper[4772]: I1122 12:11:57.038988 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6dc96bcddc-hfvg4" Nov 22 12:11:57 crc kubenswrapper[4772]: I1122 12:11:57.054939 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Nov 22 12:11:57 crc kubenswrapper[4772]: I1122 12:11:57.086333 4772 scope.go:117] "RemoveContainer" containerID="bcfde62addff43da38833137bb389679b0877760da0f70cba4aaf7948756eb01" Nov 22 12:11:57 crc kubenswrapper[4772]: I1122 12:11:57.087158 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.087140721 podStartE2EDuration="3.087140721s" podCreationTimestamp="2025-11-22 12:11:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:11:57.052806446 +0000 UTC m=+5637.292250940" watchObservedRunningTime="2025-11-22 12:11:57.087140721 +0000 UTC m=+5637.326585215" Nov 22 12:11:57 crc kubenswrapper[4772]: I1122 12:11:57.106368 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3f3d2fee-4291-428c-b664-888afddd412c-ovsdbserver-nb\") pod \"3f3d2fee-4291-428c-b664-888afddd412c\" (UID: \"3f3d2fee-4291-428c-b664-888afddd412c\") " Nov 22 12:11:57 crc kubenswrapper[4772]: I1122 12:11:57.106479 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3f3d2fee-4291-428c-b664-888afddd412c-ovsdbserver-sb\") pod \"3f3d2fee-4291-428c-b664-888afddd412c\" (UID: \"3f3d2fee-4291-428c-b664-888afddd412c\") " Nov 22 12:11:57 crc kubenswrapper[4772]: I1122 12:11:57.106530 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f3d2fee-4291-428c-b664-888afddd412c-config\") pod \"3f3d2fee-4291-428c-b664-888afddd412c\" (UID: \"3f3d2fee-4291-428c-b664-888afddd412c\") " Nov 22 12:11:57 crc kubenswrapper[4772]: I1122 12:11:57.106555 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f3d2fee-4291-428c-b664-888afddd412c-dns-svc\") pod \"3f3d2fee-4291-428c-b664-888afddd412c\" (UID: \"3f3d2fee-4291-428c-b664-888afddd412c\") " Nov 22 12:11:57 crc kubenswrapper[4772]: I1122 12:11:57.106628 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqr2d\" (UniqueName: \"kubernetes.io/projected/3f3d2fee-4291-428c-b664-888afddd412c-kube-api-access-hqr2d\") pod \"3f3d2fee-4291-428c-b664-888afddd412c\" (UID: \"3f3d2fee-4291-428c-b664-888afddd412c\") " Nov 22 12:11:57 crc kubenswrapper[4772]: I1122 12:11:57.127431 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f3d2fee-4291-428c-b664-888afddd412c-kube-api-access-hqr2d" (OuterVolumeSpecName: "kube-api-access-hqr2d") pod "3f3d2fee-4291-428c-b664-888afddd412c" (UID: "3f3d2fee-4291-428c-b664-888afddd412c"). InnerVolumeSpecName "kube-api-access-hqr2d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:11:57 crc kubenswrapper[4772]: I1122 12:11:57.149775 4772 scope.go:117] "RemoveContainer" containerID="59eda251079280ad251f66b766c259c940c976ce6d5b38f0e66e5368ccbb8288" Nov 22 12:11:57 crc kubenswrapper[4772]: E1122 12:11:57.154610 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59eda251079280ad251f66b766c259c940c976ce6d5b38f0e66e5368ccbb8288\": container with ID starting with 59eda251079280ad251f66b766c259c940c976ce6d5b38f0e66e5368ccbb8288 not found: ID does not exist" containerID="59eda251079280ad251f66b766c259c940c976ce6d5b38f0e66e5368ccbb8288" Nov 22 12:11:57 crc kubenswrapper[4772]: I1122 12:11:57.154655 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59eda251079280ad251f66b766c259c940c976ce6d5b38f0e66e5368ccbb8288"} err="failed to get container status \"59eda251079280ad251f66b766c259c940c976ce6d5b38f0e66e5368ccbb8288\": rpc error: code = NotFound desc = could not find container \"59eda251079280ad251f66b766c259c940c976ce6d5b38f0e66e5368ccbb8288\": container with ID starting with 59eda251079280ad251f66b766c259c940c976ce6d5b38f0e66e5368ccbb8288 not found: ID does not exist" Nov 22 12:11:57 crc kubenswrapper[4772]: I1122 12:11:57.154686 4772 scope.go:117] "RemoveContainer" containerID="bcfde62addff43da38833137bb389679b0877760da0f70cba4aaf7948756eb01" Nov 22 12:11:57 crc kubenswrapper[4772]: E1122 12:11:57.155239 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bcfde62addff43da38833137bb389679b0877760da0f70cba4aaf7948756eb01\": container with ID starting with bcfde62addff43da38833137bb389679b0877760da0f70cba4aaf7948756eb01 not found: ID does not exist" containerID="bcfde62addff43da38833137bb389679b0877760da0f70cba4aaf7948756eb01" Nov 22 12:11:57 crc kubenswrapper[4772]: I1122 12:11:57.155269 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcfde62addff43da38833137bb389679b0877760da0f70cba4aaf7948756eb01"} err="failed to get container status \"bcfde62addff43da38833137bb389679b0877760da0f70cba4aaf7948756eb01\": rpc error: code = NotFound desc = could not find container \"bcfde62addff43da38833137bb389679b0877760da0f70cba4aaf7948756eb01\": container with ID starting with bcfde62addff43da38833137bb389679b0877760da0f70cba4aaf7948756eb01 not found: ID does not exist" Nov 22 12:11:57 crc kubenswrapper[4772]: I1122 12:11:57.192212 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f3d2fee-4291-428c-b664-888afddd412c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3f3d2fee-4291-428c-b664-888afddd412c" (UID: "3f3d2fee-4291-428c-b664-888afddd412c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 12:11:57 crc kubenswrapper[4772]: I1122 12:11:57.200809 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f3d2fee-4291-428c-b664-888afddd412c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3f3d2fee-4291-428c-b664-888afddd412c" (UID: "3f3d2fee-4291-428c-b664-888afddd412c"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 12:11:57 crc kubenswrapper[4772]: I1122 12:11:57.209020 4772 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3f3d2fee-4291-428c-b664-888afddd412c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 12:11:57 crc kubenswrapper[4772]: I1122 12:11:57.209081 4772 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f3d2fee-4291-428c-b664-888afddd412c-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 12:11:57 crc kubenswrapper[4772]: I1122 12:11:57.209092 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqr2d\" (UniqueName: \"kubernetes.io/projected/3f3d2fee-4291-428c-b664-888afddd412c-kube-api-access-hqr2d\") on node \"crc\" DevicePath \"\"" Nov 22 12:11:57 crc kubenswrapper[4772]: I1122 12:11:57.220917 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f3d2fee-4291-428c-b664-888afddd412c-config" (OuterVolumeSpecName: "config") pod "3f3d2fee-4291-428c-b664-888afddd412c" (UID: "3f3d2fee-4291-428c-b664-888afddd412c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 12:11:57 crc kubenswrapper[4772]: I1122 12:11:57.229808 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f3d2fee-4291-428c-b664-888afddd412c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3f3d2fee-4291-428c-b664-888afddd412c" (UID: "3f3d2fee-4291-428c-b664-888afddd412c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 12:11:57 crc kubenswrapper[4772]: I1122 12:11:57.310642 4772 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3f3d2fee-4291-428c-b664-888afddd412c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 22 12:11:57 crc kubenswrapper[4772]: I1122 12:11:57.310682 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f3d2fee-4291-428c-b664-888afddd412c-config\") on node \"crc\" DevicePath \"\"" Nov 22 12:11:57 crc kubenswrapper[4772]: I1122 12:11:57.386954 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6dc96bcddc-hfvg4"] Nov 22 12:11:57 crc kubenswrapper[4772]: I1122 12:11:57.398327 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6dc96bcddc-hfvg4"] Nov 22 12:11:57 crc kubenswrapper[4772]: I1122 12:11:57.425834 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f3d2fee-4291-428c-b664-888afddd412c" path="/var/lib/kubelet/pods/3f3d2fee-4291-428c-b664-888afddd412c/volumes" Nov 22 12:11:57 crc kubenswrapper[4772]: I1122 12:11:57.426539 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b021c8c5-98b1-437a-86cf-fcf7f5cc83c8" path="/var/lib/kubelet/pods/b021c8c5-98b1-437a-86cf-fcf7f5cc83c8/volumes" Nov 22 12:11:58 crc kubenswrapper[4772]: I1122 12:11:58.054203 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a9765c8d-8fb9-44d7-b82a-eec65fd4c966","Type":"ContainerStarted","Data":"389161ed2a73744f667085bb72de035f81d4f79a598838a8489c004b1f551810"} Nov 22 12:11:58 crc kubenswrapper[4772]: I1122 12:11:58.054690 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"a9765c8d-8fb9-44d7-b82a-eec65fd4c966","Type":"ContainerStarted","Data":"e2624792d066ab138f6fb2257b306575a875628081c179ae634876df54f50179"} Nov 22 12:11:58 crc kubenswrapper[4772]: I1122 12:11:58.342185 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Nov 22 12:11:58 crc kubenswrapper[4772]: I1122 12:11:58.380777 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.380744328 podStartE2EDuration="2.380744328s" podCreationTimestamp="2025-11-22 12:11:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:11:58.085234591 +0000 UTC m=+5638.324679095" watchObservedRunningTime="2025-11-22 12:11:58.380744328 +0000 UTC m=+5638.620188832" Nov 22 12:11:58 crc kubenswrapper[4772]: I1122 12:11:58.875587 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 12:11:58 crc kubenswrapper[4772]: I1122 12:11:58.965692 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-k458h"] Nov 22 12:11:58 crc kubenswrapper[4772]: E1122 12:11:58.966125 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f3d2fee-4291-428c-b664-888afddd412c" containerName="init" Nov 22 12:11:58 crc kubenswrapper[4772]: I1122 12:11:58.966140 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f3d2fee-4291-428c-b664-888afddd412c" containerName="init" Nov 22 12:11:58 crc kubenswrapper[4772]: E1122 12:11:58.966151 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f3d2fee-4291-428c-b664-888afddd412c" containerName="dnsmasq-dns" Nov 22 12:11:58 crc kubenswrapper[4772]: I1122 12:11:58.966157 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f3d2fee-4291-428c-b664-888afddd412c" containerName="dnsmasq-dns" Nov 22 12:11:58 crc kubenswrapper[4772]: E1122 12:11:58.966179 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cd4fa62-b413-4a62-a3e0-1f7fc455c96e" containerName="nova-scheduler-scheduler" Nov 22 12:11:58 crc kubenswrapper[4772]: I1122 12:11:58.966186 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cd4fa62-b413-4a62-a3e0-1f7fc455c96e" containerName="nova-scheduler-scheduler" Nov 22 12:11:58 crc kubenswrapper[4772]: I1122 12:11:58.966347 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f3d2fee-4291-428c-b664-888afddd412c" containerName="dnsmasq-dns" Nov 22 12:11:58 crc kubenswrapper[4772]: I1122 12:11:58.966370 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cd4fa62-b413-4a62-a3e0-1f7fc455c96e" containerName="nova-scheduler-scheduler" Nov 22 12:11:58 crc kubenswrapper[4772]: I1122 12:11:58.967013 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-k458h" Nov 22 12:11:58 crc kubenswrapper[4772]: I1122 12:11:58.972505 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Nov 22 12:11:58 crc kubenswrapper[4772]: I1122 12:11:58.972822 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Nov 22 12:11:58 crc kubenswrapper[4772]: I1122 12:11:58.982036 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-k458h"] Nov 22 12:11:59 crc kubenswrapper[4772]: I1122 12:11:59.040909 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4px2j\" (UniqueName: \"kubernetes.io/projected/3cd4fa62-b413-4a62-a3e0-1f7fc455c96e-kube-api-access-4px2j\") pod \"3cd4fa62-b413-4a62-a3e0-1f7fc455c96e\" (UID: \"3cd4fa62-b413-4a62-a3e0-1f7fc455c96e\") " Nov 22 12:11:59 crc kubenswrapper[4772]: I1122 12:11:59.040993 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cd4fa62-b413-4a62-a3e0-1f7fc455c96e-combined-ca-bundle\") pod \"3cd4fa62-b413-4a62-a3e0-1f7fc455c96e\" (UID: \"3cd4fa62-b413-4a62-a3e0-1f7fc455c96e\") " Nov 22 12:11:59 crc kubenswrapper[4772]: I1122 12:11:59.041036 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cd4fa62-b413-4a62-a3e0-1f7fc455c96e-config-data\") pod \"3cd4fa62-b413-4a62-a3e0-1f7fc455c96e\" (UID: \"3cd4fa62-b413-4a62-a3e0-1f7fc455c96e\") " Nov 22 12:11:59 crc kubenswrapper[4772]: I1122 12:11:59.041545 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a50c316c-347c-455b-b0ec-93d5a1c07934-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-k458h\" (UID: \"a50c316c-347c-455b-b0ec-93d5a1c07934\") " pod="openstack/nova-cell1-cell-mapping-k458h" Nov 22 12:11:59 crc kubenswrapper[4772]: I1122 12:11:59.041641 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a50c316c-347c-455b-b0ec-93d5a1c07934-scripts\") pod \"nova-cell1-cell-mapping-k458h\" (UID: \"a50c316c-347c-455b-b0ec-93d5a1c07934\") " pod="openstack/nova-cell1-cell-mapping-k458h" Nov 22 12:11:59 crc kubenswrapper[4772]: I1122 12:11:59.041777 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9c4rp\" (UniqueName: \"kubernetes.io/projected/a50c316c-347c-455b-b0ec-93d5a1c07934-kube-api-access-9c4rp\") pod \"nova-cell1-cell-mapping-k458h\" (UID: \"a50c316c-347c-455b-b0ec-93d5a1c07934\") " pod="openstack/nova-cell1-cell-mapping-k458h" Nov 22 12:11:59 crc kubenswrapper[4772]: I1122 12:11:59.041844 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a50c316c-347c-455b-b0ec-93d5a1c07934-config-data\") pod \"nova-cell1-cell-mapping-k458h\" (UID: \"a50c316c-347c-455b-b0ec-93d5a1c07934\") " pod="openstack/nova-cell1-cell-mapping-k458h" Nov 22 12:11:59 crc kubenswrapper[4772]: I1122 12:11:59.060449 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cd4fa62-b413-4a62-a3e0-1f7fc455c96e-kube-api-access-4px2j" (OuterVolumeSpecName: 
"kube-api-access-4px2j") pod "3cd4fa62-b413-4a62-a3e0-1f7fc455c96e" (UID: "3cd4fa62-b413-4a62-a3e0-1f7fc455c96e"). InnerVolumeSpecName "kube-api-access-4px2j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:11:59 crc kubenswrapper[4772]: I1122 12:11:59.070654 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cd4fa62-b413-4a62-a3e0-1f7fc455c96e-config-data" (OuterVolumeSpecName: "config-data") pod "3cd4fa62-b413-4a62-a3e0-1f7fc455c96e" (UID: "3cd4fa62-b413-4a62-a3e0-1f7fc455c96e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:11:59 crc kubenswrapper[4772]: I1122 12:11:59.070682 4772 generic.go:334] "Generic (PLEG): container finished" podID="3cd4fa62-b413-4a62-a3e0-1f7fc455c96e" containerID="8550460200cfba468c34776fe54b86b90452fc8f609132c3edff68cb0c86aee2" exitCode=0 Nov 22 12:11:59 crc kubenswrapper[4772]: I1122 12:11:59.070738 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 12:11:59 crc kubenswrapper[4772]: I1122 12:11:59.070742 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3cd4fa62-b413-4a62-a3e0-1f7fc455c96e","Type":"ContainerDied","Data":"8550460200cfba468c34776fe54b86b90452fc8f609132c3edff68cb0c86aee2"} Nov 22 12:11:59 crc kubenswrapper[4772]: I1122 12:11:59.070791 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3cd4fa62-b413-4a62-a3e0-1f7fc455c96e","Type":"ContainerDied","Data":"98f854c8a67985c169dda0b863bc8faa79f30aa6b40f45750559faeda4dd1aee"} Nov 22 12:11:59 crc kubenswrapper[4772]: I1122 12:11:59.070813 4772 scope.go:117] "RemoveContainer" containerID="8550460200cfba468c34776fe54b86b90452fc8f609132c3edff68cb0c86aee2" Nov 22 12:11:59 crc kubenswrapper[4772]: I1122 12:11:59.087036 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cd4fa62-b413-4a62-a3e0-1f7fc455c96e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3cd4fa62-b413-4a62-a3e0-1f7fc455c96e" (UID: "3cd4fa62-b413-4a62-a3e0-1f7fc455c96e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:11:59 crc kubenswrapper[4772]: I1122 12:11:59.143740 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9c4rp\" (UniqueName: \"kubernetes.io/projected/a50c316c-347c-455b-b0ec-93d5a1c07934-kube-api-access-9c4rp\") pod \"nova-cell1-cell-mapping-k458h\" (UID: \"a50c316c-347c-455b-b0ec-93d5a1c07934\") " pod="openstack/nova-cell1-cell-mapping-k458h" Nov 22 12:11:59 crc kubenswrapper[4772]: I1122 12:11:59.144176 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a50c316c-347c-455b-b0ec-93d5a1c07934-config-data\") pod \"nova-cell1-cell-mapping-k458h\" (UID: \"a50c316c-347c-455b-b0ec-93d5a1c07934\") " pod="openstack/nova-cell1-cell-mapping-k458h" Nov 22 12:11:59 crc kubenswrapper[4772]: I1122 12:11:59.144237 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a50c316c-347c-455b-b0ec-93d5a1c07934-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-k458h\" (UID: \"a50c316c-347c-455b-b0ec-93d5a1c07934\") " pod="openstack/nova-cell1-cell-mapping-k458h" Nov 22 12:11:59 crc kubenswrapper[4772]: I1122 12:11:59.144314 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a50c316c-347c-455b-b0ec-93d5a1c07934-scripts\") pod \"nova-cell1-cell-mapping-k458h\" (UID: \"a50c316c-347c-455b-b0ec-93d5a1c07934\") " pod="openstack/nova-cell1-cell-mapping-k458h" Nov 22 12:11:59 crc kubenswrapper[4772]: I1122 12:11:59.144374 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4px2j\" (UniqueName: \"kubernetes.io/projected/3cd4fa62-b413-4a62-a3e0-1f7fc455c96e-kube-api-access-4px2j\") on node \"crc\" DevicePath \"\"" Nov 22 12:11:59 crc kubenswrapper[4772]: I1122 12:11:59.144385 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cd4fa62-b413-4a62-a3e0-1f7fc455c96e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 12:11:59 crc kubenswrapper[4772]: I1122 12:11:59.144399 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cd4fa62-b413-4a62-a3e0-1f7fc455c96e-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 12:11:59 crc kubenswrapper[4772]: I1122 12:11:59.147599 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a50c316c-347c-455b-b0ec-93d5a1c07934-scripts\") pod \"nova-cell1-cell-mapping-k458h\" (UID: \"a50c316c-347c-455b-b0ec-93d5a1c07934\") " pod="openstack/nova-cell1-cell-mapping-k458h" Nov 22 12:11:59 crc kubenswrapper[4772]: I1122 12:11:59.148219 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a50c316c-347c-455b-b0ec-93d5a1c07934-config-data\") pod \"nova-cell1-cell-mapping-k458h\" (UID: \"a50c316c-347c-455b-b0ec-93d5a1c07934\") " pod="openstack/nova-cell1-cell-mapping-k458h" Nov 22 12:11:59 crc kubenswrapper[4772]: I1122 12:11:59.148698 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a50c316c-347c-455b-b0ec-93d5a1c07934-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-k458h\" (UID: \"a50c316c-347c-455b-b0ec-93d5a1c07934\") " pod="openstack/nova-cell1-cell-mapping-k458h" 
Nov 22 12:11:59 crc kubenswrapper[4772]: I1122 12:11:59.153624 4772 scope.go:117] "RemoveContainer" containerID="8550460200cfba468c34776fe54b86b90452fc8f609132c3edff68cb0c86aee2" Nov 22 12:11:59 crc kubenswrapper[4772]: E1122 12:11:59.154132 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8550460200cfba468c34776fe54b86b90452fc8f609132c3edff68cb0c86aee2\": container with ID starting with 8550460200cfba468c34776fe54b86b90452fc8f609132c3edff68cb0c86aee2 not found: ID does not exist" containerID="8550460200cfba468c34776fe54b86b90452fc8f609132c3edff68cb0c86aee2" Nov 22 12:11:59 crc kubenswrapper[4772]: I1122 12:11:59.154171 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8550460200cfba468c34776fe54b86b90452fc8f609132c3edff68cb0c86aee2"} err="failed to get container status \"8550460200cfba468c34776fe54b86b90452fc8f609132c3edff68cb0c86aee2\": rpc error: code = NotFound desc = could not find container \"8550460200cfba468c34776fe54b86b90452fc8f609132c3edff68cb0c86aee2\": container with ID starting with 8550460200cfba468c34776fe54b86b90452fc8f609132c3edff68cb0c86aee2 not found: ID does not exist" Nov 22 12:11:59 crc kubenswrapper[4772]: I1122 12:11:59.173433 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9c4rp\" (UniqueName: \"kubernetes.io/projected/a50c316c-347c-455b-b0ec-93d5a1c07934-kube-api-access-9c4rp\") pod \"nova-cell1-cell-mapping-k458h\" (UID: \"a50c316c-347c-455b-b0ec-93d5a1c07934\") " pod="openstack/nova-cell1-cell-mapping-k458h" Nov 22 12:11:59 crc kubenswrapper[4772]: I1122 12:11:59.296244 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-k458h" Nov 22 12:11:59 crc kubenswrapper[4772]: I1122 12:11:59.453624 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 12:11:59 crc kubenswrapper[4772]: I1122 12:11:59.453662 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 12:11:59 crc kubenswrapper[4772]: I1122 12:11:59.453682 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 12:11:59 crc kubenswrapper[4772]: I1122 12:11:59.457248 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 12:11:59 crc kubenswrapper[4772]: I1122 12:11:59.460914 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 22 12:11:59 crc kubenswrapper[4772]: I1122 12:11:59.469725 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 12:11:59 crc kubenswrapper[4772]: I1122 12:11:59.551590 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/470edc72-faea-4c87-bdf4-f5aae3137ecf-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"470edc72-faea-4c87-bdf4-f5aae3137ecf\") " pod="openstack/nova-scheduler-0" Nov 22 12:11:59 crc kubenswrapper[4772]: I1122 12:11:59.551726 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/470edc72-faea-4c87-bdf4-f5aae3137ecf-config-data\") pod \"nova-scheduler-0\" (UID: \"470edc72-faea-4c87-bdf4-f5aae3137ecf\") " pod="openstack/nova-scheduler-0" Nov 22 12:11:59 crc kubenswrapper[4772]: I1122 12:11:59.551940 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vw484\" (UniqueName: \"kubernetes.io/projected/470edc72-faea-4c87-bdf4-f5aae3137ecf-kube-api-access-vw484\") pod \"nova-scheduler-0\" (UID: \"470edc72-faea-4c87-bdf4-f5aae3137ecf\") " pod="openstack/nova-scheduler-0" Nov 22 12:11:59 crc kubenswrapper[4772]: I1122 12:11:59.653397 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/470edc72-faea-4c87-bdf4-f5aae3137ecf-config-data\") pod \"nova-scheduler-0\" (UID: \"470edc72-faea-4c87-bdf4-f5aae3137ecf\") " pod="openstack/nova-scheduler-0" Nov 22 12:11:59 crc kubenswrapper[4772]: I1122 12:11:59.653555 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vw484\" (UniqueName: \"kubernetes.io/projected/470edc72-faea-4c87-bdf4-f5aae3137ecf-kube-api-access-vw484\") pod \"nova-scheduler-0\" (UID: \"470edc72-faea-4c87-bdf4-f5aae3137ecf\") " pod="openstack/nova-scheduler-0" Nov 22 12:11:59 crc kubenswrapper[4772]: I1122 12:11:59.653626 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/470edc72-faea-4c87-bdf4-f5aae3137ecf-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"470edc72-faea-4c87-bdf4-f5aae3137ecf\") " pod="openstack/nova-scheduler-0" Nov 22 12:11:59 crc kubenswrapper[4772]: I1122 12:11:59.658834 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/470edc72-faea-4c87-bdf4-f5aae3137ecf-config-data\") pod \"nova-scheduler-0\" (UID: \"470edc72-faea-4c87-bdf4-f5aae3137ecf\") " pod="openstack/nova-scheduler-0" Nov 22 12:11:59 crc kubenswrapper[4772]: I1122 12:11:59.658937 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/470edc72-faea-4c87-bdf4-f5aae3137ecf-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"470edc72-faea-4c87-bdf4-f5aae3137ecf\") " pod="openstack/nova-scheduler-0" Nov 22 12:11:59 crc kubenswrapper[4772]: I1122 12:11:59.673335 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vw484\" (UniqueName: 
\"kubernetes.io/projected/470edc72-faea-4c87-bdf4-f5aae3137ecf-kube-api-access-vw484\") pod \"nova-scheduler-0\" (UID: \"470edc72-faea-4c87-bdf4-f5aae3137ecf\") " pod="openstack/nova-scheduler-0" Nov 22 12:11:59 crc kubenswrapper[4772]: I1122 12:11:59.792315 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-k458h"] Nov 22 12:11:59 crc kubenswrapper[4772]: I1122 12:11:59.838840 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 12:12:00 crc kubenswrapper[4772]: I1122 12:12:00.089614 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-k458h" event={"ID":"a50c316c-347c-455b-b0ec-93d5a1c07934","Type":"ContainerStarted","Data":"ea9aa0bf7c1baff74ec4f40b2e28d1595dbd35bab973ffd9d6ba78108553b35d"} Nov 22 12:12:00 crc kubenswrapper[4772]: I1122 12:12:00.091187 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-k458h" event={"ID":"a50c316c-347c-455b-b0ec-93d5a1c07934","Type":"ContainerStarted","Data":"5436ea5a3ca351b99448183186e7497add2791b3cb0feadfff75c5ed0be2e472"} Nov 22 12:12:00 crc kubenswrapper[4772]: I1122 12:12:00.109442 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-k458h" podStartSLOduration=2.109416969 podStartE2EDuration="2.109416969s" podCreationTimestamp="2025-11-22 12:11:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:12:00.103927812 +0000 UTC m=+5640.343372306" watchObservedRunningTime="2025-11-22 12:12:00.109416969 +0000 UTC m=+5640.348861463" Nov 22 12:12:00 crc kubenswrapper[4772]: I1122 12:12:00.296833 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 12:12:01 crc kubenswrapper[4772]: I1122 12:12:01.102594 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"470edc72-faea-4c87-bdf4-f5aae3137ecf","Type":"ContainerStarted","Data":"9643485e890088c31555bc64b499b30cb1fcf77e0d1ef06c3510c2b499709702"} Nov 22 12:12:01 crc kubenswrapper[4772]: I1122 12:12:01.102842 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"470edc72-faea-4c87-bdf4-f5aae3137ecf","Type":"ContainerStarted","Data":"631ced900974d0cf91d25bb5c25b3d92eac42d132ae4feb04bba29be0eacfbf9"} Nov 22 12:12:01 crc kubenswrapper[4772]: I1122 12:12:01.124824 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.124799759 podStartE2EDuration="2.124799759s" podCreationTimestamp="2025-11-22 12:11:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:12:01.123493617 +0000 UTC m=+5641.362938141" watchObservedRunningTime="2025-11-22 12:12:01.124799759 +0000 UTC m=+5641.364244283" Nov 22 12:12:01 crc kubenswrapper[4772]: I1122 12:12:01.425541 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cd4fa62-b413-4a62-a3e0-1f7fc455c96e" path="/var/lib/kubelet/pods/3cd4fa62-b413-4a62-a3e0-1f7fc455c96e/volumes" Nov 22 12:12:01 crc kubenswrapper[4772]: I1122 12:12:01.429732 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 22 12:12:01 crc kubenswrapper[4772]: I1122 12:12:01.430537 4772 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 22 12:12:04 crc kubenswrapper[4772]: I1122 12:12:04.840192 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 22 12:12:05 crc kubenswrapper[4772]: I1122 12:12:05.149341 4772 generic.go:334] "Generic (PLEG): container finished" podID="a50c316c-347c-455b-b0ec-93d5a1c07934" containerID="ea9aa0bf7c1baff74ec4f40b2e28d1595dbd35bab973ffd9d6ba78108553b35d" exitCode=0 Nov 22 12:12:05 crc kubenswrapper[4772]: I1122 12:12:05.149427 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-k458h" event={"ID":"a50c316c-347c-455b-b0ec-93d5a1c07934","Type":"ContainerDied","Data":"ea9aa0bf7c1baff74ec4f40b2e28d1595dbd35bab973ffd9d6ba78108553b35d"} Nov 22 12:12:05 crc kubenswrapper[4772]: I1122 12:12:05.364190 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 22 12:12:05 crc kubenswrapper[4772]: I1122 12:12:05.364294 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 22 12:12:06 crc kubenswrapper[4772]: I1122 12:12:06.430202 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 22 12:12:06 crc kubenswrapper[4772]: I1122 12:12:06.430585 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 22 12:12:06 crc kubenswrapper[4772]: I1122 12:12:06.448296 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="de1bcdfc-43de-450a-8dfc-7a2c61450832" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.68:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 12:12:06 crc kubenswrapper[4772]: I1122 12:12:06.448653 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="de1bcdfc-43de-450a-8dfc-7a2c61450832" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.68:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 12:12:06 crc kubenswrapper[4772]: I1122 12:12:06.643570 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-k458h" Nov 22 12:12:06 crc kubenswrapper[4772]: I1122 12:12:06.717406 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a50c316c-347c-455b-b0ec-93d5a1c07934-combined-ca-bundle\") pod \"a50c316c-347c-455b-b0ec-93d5a1c07934\" (UID: \"a50c316c-347c-455b-b0ec-93d5a1c07934\") " Nov 22 12:12:06 crc kubenswrapper[4772]: I1122 12:12:06.717673 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a50c316c-347c-455b-b0ec-93d5a1c07934-config-data\") pod \"a50c316c-347c-455b-b0ec-93d5a1c07934\" (UID: \"a50c316c-347c-455b-b0ec-93d5a1c07934\") " Nov 22 12:12:06 crc kubenswrapper[4772]: I1122 12:12:06.717923 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9c4rp\" (UniqueName: \"kubernetes.io/projected/a50c316c-347c-455b-b0ec-93d5a1c07934-kube-api-access-9c4rp\") pod \"a50c316c-347c-455b-b0ec-93d5a1c07934\" (UID: \"a50c316c-347c-455b-b0ec-93d5a1c07934\") " Nov 22 12:12:06 crc kubenswrapper[4772]: I1122 12:12:06.718077 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a50c316c-347c-455b-b0ec-93d5a1c07934-scripts\") pod \"a50c316c-347c-455b-b0ec-93d5a1c07934\" (UID: \"a50c316c-347c-455b-b0ec-93d5a1c07934\") " Nov 22 12:12:06 crc kubenswrapper[4772]: I1122 12:12:06.727227 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a50c316c-347c-455b-b0ec-93d5a1c07934-scripts" (OuterVolumeSpecName: "scripts") pod "a50c316c-347c-455b-b0ec-93d5a1c07934" (UID: "a50c316c-347c-455b-b0ec-93d5a1c07934"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:12:06 crc kubenswrapper[4772]: I1122 12:12:06.731171 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a50c316c-347c-455b-b0ec-93d5a1c07934-kube-api-access-9c4rp" (OuterVolumeSpecName: "kube-api-access-9c4rp") pod "a50c316c-347c-455b-b0ec-93d5a1c07934" (UID: "a50c316c-347c-455b-b0ec-93d5a1c07934"). InnerVolumeSpecName "kube-api-access-9c4rp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:12:06 crc kubenswrapper[4772]: I1122 12:12:06.744262 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a50c316c-347c-455b-b0ec-93d5a1c07934-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a50c316c-347c-455b-b0ec-93d5a1c07934" (UID: "a50c316c-347c-455b-b0ec-93d5a1c07934"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:12:06 crc kubenswrapper[4772]: I1122 12:12:06.745913 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a50c316c-347c-455b-b0ec-93d5a1c07934-config-data" (OuterVolumeSpecName: "config-data") pod "a50c316c-347c-455b-b0ec-93d5a1c07934" (UID: "a50c316c-347c-455b-b0ec-93d5a1c07934"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:12:06 crc kubenswrapper[4772]: I1122 12:12:06.821133 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9c4rp\" (UniqueName: \"kubernetes.io/projected/a50c316c-347c-455b-b0ec-93d5a1c07934-kube-api-access-9c4rp\") on node \"crc\" DevicePath \"\"" Nov 22 12:12:06 crc kubenswrapper[4772]: I1122 12:12:06.821177 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a50c316c-347c-455b-b0ec-93d5a1c07934-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 12:12:06 crc kubenswrapper[4772]: I1122 12:12:06.821191 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a50c316c-347c-455b-b0ec-93d5a1c07934-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 12:12:06 crc kubenswrapper[4772]: I1122 12:12:06.821205 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a50c316c-347c-455b-b0ec-93d5a1c07934-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 12:12:07 crc kubenswrapper[4772]: I1122 12:12:07.172321 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-k458h" event={"ID":"a50c316c-347c-455b-b0ec-93d5a1c07934","Type":"ContainerDied","Data":"5436ea5a3ca351b99448183186e7497add2791b3cb0feadfff75c5ed0be2e472"} Nov 22 12:12:07 crc kubenswrapper[4772]: I1122 12:12:07.172370 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5436ea5a3ca351b99448183186e7497add2791b3cb0feadfff75c5ed0be2e472" Nov 22 12:12:07 crc kubenswrapper[4772]: I1122 12:12:07.172397 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-k458h" Nov 22 12:12:07 crc kubenswrapper[4772]: I1122 12:12:07.397184 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 22 12:12:07 crc kubenswrapper[4772]: I1122 12:12:07.397466 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="de1bcdfc-43de-450a-8dfc-7a2c61450832" containerName="nova-api-log" containerID="cri-o://4c2dee80ee306554d882b76621ff8e92e90cf00faa3c495192ad2a9535d8a5a5" gracePeriod=30 Nov 22 12:12:07 crc kubenswrapper[4772]: I1122 12:12:07.397618 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="de1bcdfc-43de-450a-8dfc-7a2c61450832" containerName="nova-api-api" containerID="cri-o://2a8c2bbfe2c2bce2b71619395e5c67ea36c323e3a8170dcb02705f97846c5a4f" gracePeriod=30 Nov 22 12:12:07 crc kubenswrapper[4772]: I1122 12:12:07.411701 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 12:12:07 crc kubenswrapper[4772]: I1122 12:12:07.411938 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="470edc72-faea-4c87-bdf4-f5aae3137ecf" containerName="nova-scheduler-scheduler" containerID="cri-o://9643485e890088c31555bc64b499b30cb1fcf77e0d1ef06c3510c2b499709702" gracePeriod=30 Nov 22 12:12:07 crc kubenswrapper[4772]: I1122 12:12:07.453356 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 12:12:07 crc kubenswrapper[4772]: I1122 12:12:07.453571 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a9765c8d-8fb9-44d7-b82a-eec65fd4c966" 
containerName="nova-metadata-log" containerID="cri-o://e2624792d066ab138f6fb2257b306575a875628081c179ae634876df54f50179" gracePeriod=30 Nov 22 12:12:07 crc kubenswrapper[4772]: I1122 12:12:07.454062 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a9765c8d-8fb9-44d7-b82a-eec65fd4c966" containerName="nova-metadata-metadata" containerID="cri-o://389161ed2a73744f667085bb72de035f81d4f79a598838a8489c004b1f551810" gracePeriod=30 Nov 22 12:12:07 crc kubenswrapper[4772]: I1122 12:12:07.468462 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="a9765c8d-8fb9-44d7-b82a-eec65fd4c966" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.69:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 12:12:07 crc kubenswrapper[4772]: I1122 12:12:07.468771 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="a9765c8d-8fb9-44d7-b82a-eec65fd4c966" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.69:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 12:12:08 crc kubenswrapper[4772]: I1122 12:12:08.183224 4772 generic.go:334] "Generic (PLEG): container finished" podID="de1bcdfc-43de-450a-8dfc-7a2c61450832" containerID="4c2dee80ee306554d882b76621ff8e92e90cf00faa3c495192ad2a9535d8a5a5" exitCode=143 Nov 22 12:12:08 crc kubenswrapper[4772]: I1122 12:12:08.183501 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"de1bcdfc-43de-450a-8dfc-7a2c61450832","Type":"ContainerDied","Data":"4c2dee80ee306554d882b76621ff8e92e90cf00faa3c495192ad2a9535d8a5a5"} Nov 22 12:12:08 crc kubenswrapper[4772]: I1122 12:12:08.185522 4772 generic.go:334] "Generic (PLEG): container finished" podID="a9765c8d-8fb9-44d7-b82a-eec65fd4c966" containerID="e2624792d066ab138f6fb2257b306575a875628081c179ae634876df54f50179" exitCode=143 Nov 22 12:12:08 crc kubenswrapper[4772]: I1122 12:12:08.185552 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a9765c8d-8fb9-44d7-b82a-eec65fd4c966","Type":"ContainerDied","Data":"e2624792d066ab138f6fb2257b306575a875628081c179ae634876df54f50179"} Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.013389 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.123709 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/470edc72-faea-4c87-bdf4-f5aae3137ecf-config-data\") pod \"470edc72-faea-4c87-bdf4-f5aae3137ecf\" (UID: \"470edc72-faea-4c87-bdf4-f5aae3137ecf\") " Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.124018 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/470edc72-faea-4c87-bdf4-f5aae3137ecf-combined-ca-bundle\") pod \"470edc72-faea-4c87-bdf4-f5aae3137ecf\" (UID: \"470edc72-faea-4c87-bdf4-f5aae3137ecf\") " Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.124730 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vw484\" (UniqueName: \"kubernetes.io/projected/470edc72-faea-4c87-bdf4-f5aae3137ecf-kube-api-access-vw484\") pod \"470edc72-faea-4c87-bdf4-f5aae3137ecf\" (UID: \"470edc72-faea-4c87-bdf4-f5aae3137ecf\") " Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.135948 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/470edc72-faea-4c87-bdf4-f5aae3137ecf-kube-api-access-vw484" (OuterVolumeSpecName: "kube-api-access-vw484") pod "470edc72-faea-4c87-bdf4-f5aae3137ecf" (UID: "470edc72-faea-4c87-bdf4-f5aae3137ecf"). InnerVolumeSpecName "kube-api-access-vw484". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.149983 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.151746 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/470edc72-faea-4c87-bdf4-f5aae3137ecf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "470edc72-faea-4c87-bdf4-f5aae3137ecf" (UID: "470edc72-faea-4c87-bdf4-f5aae3137ecf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.159179 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/470edc72-faea-4c87-bdf4-f5aae3137ecf-config-data" (OuterVolumeSpecName: "config-data") pod "470edc72-faea-4c87-bdf4-f5aae3137ecf" (UID: "470edc72-faea-4c87-bdf4-f5aae3137ecf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.189886 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.226909 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de1bcdfc-43de-450a-8dfc-7a2c61450832-combined-ca-bundle\") pod \"de1bcdfc-43de-450a-8dfc-7a2c61450832\" (UID: \"de1bcdfc-43de-450a-8dfc-7a2c61450832\") " Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.227171 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de1bcdfc-43de-450a-8dfc-7a2c61450832-logs\") pod \"de1bcdfc-43de-450a-8dfc-7a2c61450832\" (UID: \"de1bcdfc-43de-450a-8dfc-7a2c61450832\") " Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.227263 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcn5t\" (UniqueName: \"kubernetes.io/projected/de1bcdfc-43de-450a-8dfc-7a2c61450832-kube-api-access-tcn5t\") pod \"de1bcdfc-43de-450a-8dfc-7a2c61450832\" (UID: \"de1bcdfc-43de-450a-8dfc-7a2c61450832\") " Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.227400 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de1bcdfc-43de-450a-8dfc-7a2c61450832-config-data\") pod \"de1bcdfc-43de-450a-8dfc-7a2c61450832\" (UID: \"de1bcdfc-43de-450a-8dfc-7a2c61450832\") " Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.227770 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/470edc72-faea-4c87-bdf4-f5aae3137ecf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.227786 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vw484\" (UniqueName: \"kubernetes.io/projected/470edc72-faea-4c87-bdf4-f5aae3137ecf-kube-api-access-vw484\") on node \"crc\" DevicePath \"\"" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.227796 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/470edc72-faea-4c87-bdf4-f5aae3137ecf-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.230721 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de1bcdfc-43de-450a-8dfc-7a2c61450832-logs" (OuterVolumeSpecName: "logs") pod "de1bcdfc-43de-450a-8dfc-7a2c61450832" (UID: "de1bcdfc-43de-450a-8dfc-7a2c61450832"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.233953 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de1bcdfc-43de-450a-8dfc-7a2c61450832-kube-api-access-tcn5t" (OuterVolumeSpecName: "kube-api-access-tcn5t") pod "de1bcdfc-43de-450a-8dfc-7a2c61450832" (UID: "de1bcdfc-43de-450a-8dfc-7a2c61450832"). InnerVolumeSpecName "kube-api-access-tcn5t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.238423 4772 generic.go:334] "Generic (PLEG): container finished" podID="a9765c8d-8fb9-44d7-b82a-eec65fd4c966" containerID="389161ed2a73744f667085bb72de035f81d4f79a598838a8489c004b1f551810" exitCode=0 Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.238559 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a9765c8d-8fb9-44d7-b82a-eec65fd4c966","Type":"ContainerDied","Data":"389161ed2a73744f667085bb72de035f81d4f79a598838a8489c004b1f551810"} Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.238654 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a9765c8d-8fb9-44d7-b82a-eec65fd4c966","Type":"ContainerDied","Data":"a9b588650b93775eaa7cc08e5697950fc51725a462c61d0ccc792a64ab13c2a3"} Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.238721 4772 scope.go:117] "RemoveContainer" containerID="389161ed2a73744f667085bb72de035f81d4f79a598838a8489c004b1f551810" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.238898 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.242242 4772 generic.go:334] "Generic (PLEG): container finished" podID="470edc72-faea-4c87-bdf4-f5aae3137ecf" containerID="9643485e890088c31555bc64b499b30cb1fcf77e0d1ef06c3510c2b499709702" exitCode=0 Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.242436 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.242556 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"470edc72-faea-4c87-bdf4-f5aae3137ecf","Type":"ContainerDied","Data":"9643485e890088c31555bc64b499b30cb1fcf77e0d1ef06c3510c2b499709702"} Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.242701 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"470edc72-faea-4c87-bdf4-f5aae3137ecf","Type":"ContainerDied","Data":"631ced900974d0cf91d25bb5c25b3d92eac42d132ae4feb04bba29be0eacfbf9"} Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.244838 4772 generic.go:334] "Generic (PLEG): container finished" podID="de1bcdfc-43de-450a-8dfc-7a2c61450832" containerID="2a8c2bbfe2c2bce2b71619395e5c67ea36c323e3a8170dcb02705f97846c5a4f" exitCode=0 Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.244889 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"de1bcdfc-43de-450a-8dfc-7a2c61450832","Type":"ContainerDied","Data":"2a8c2bbfe2c2bce2b71619395e5c67ea36c323e3a8170dcb02705f97846c5a4f"} Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.244920 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"de1bcdfc-43de-450a-8dfc-7a2c61450832","Type":"ContainerDied","Data":"38ab179840961b7847d65006765ccf4f95636ecd21804ce3adc24a8362059556"} Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.245000 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.254778 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de1bcdfc-43de-450a-8dfc-7a2c61450832-config-data" (OuterVolumeSpecName: "config-data") pod "de1bcdfc-43de-450a-8dfc-7a2c61450832" (UID: "de1bcdfc-43de-450a-8dfc-7a2c61450832"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.266854 4772 scope.go:117] "RemoveContainer" containerID="e2624792d066ab138f6fb2257b306575a875628081c179ae634876df54f50179" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.282865 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de1bcdfc-43de-450a-8dfc-7a2c61450832-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "de1bcdfc-43de-450a-8dfc-7a2c61450832" (UID: "de1bcdfc-43de-450a-8dfc-7a2c61450832"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.317131 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.320464 4772 scope.go:117] "RemoveContainer" containerID="389161ed2a73744f667085bb72de035f81d4f79a598838a8489c004b1f551810" Nov 22 12:12:12 crc kubenswrapper[4772]: E1122 12:12:12.323073 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"389161ed2a73744f667085bb72de035f81d4f79a598838a8489c004b1f551810\": container with ID starting with 389161ed2a73744f667085bb72de035f81d4f79a598838a8489c004b1f551810 not found: ID does not exist" containerID="389161ed2a73744f667085bb72de035f81d4f79a598838a8489c004b1f551810" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.323333 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"389161ed2a73744f667085bb72de035f81d4f79a598838a8489c004b1f551810"} err="failed to get container status \"389161ed2a73744f667085bb72de035f81d4f79a598838a8489c004b1f551810\": rpc error: code = NotFound desc = could not find container \"389161ed2a73744f667085bb72de035f81d4f79a598838a8489c004b1f551810\": container with ID starting with 389161ed2a73744f667085bb72de035f81d4f79a598838a8489c004b1f551810 not found: ID does not exist" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.323435 4772 scope.go:117] "RemoveContainer" containerID="e2624792d066ab138f6fb2257b306575a875628081c179ae634876df54f50179" Nov 22 12:12:12 crc kubenswrapper[4772]: E1122 12:12:12.327457 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2624792d066ab138f6fb2257b306575a875628081c179ae634876df54f50179\": container with ID starting with e2624792d066ab138f6fb2257b306575a875628081c179ae634876df54f50179 not found: ID does not exist" containerID="e2624792d066ab138f6fb2257b306575a875628081c179ae634876df54f50179" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.327521 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2624792d066ab138f6fb2257b306575a875628081c179ae634876df54f50179"} err="failed to get container status \"e2624792d066ab138f6fb2257b306575a875628081c179ae634876df54f50179\": rpc error: code = NotFound desc = could not find container 
\"e2624792d066ab138f6fb2257b306575a875628081c179ae634876df54f50179\": container with ID starting with e2624792d066ab138f6fb2257b306575a875628081c179ae634876df54f50179 not found: ID does not exist" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.327579 4772 scope.go:117] "RemoveContainer" containerID="9643485e890088c31555bc64b499b30cb1fcf77e0d1ef06c3510c2b499709702" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.331997 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9765c8d-8fb9-44d7-b82a-eec65fd4c966-logs\") pod \"a9765c8d-8fb9-44d7-b82a-eec65fd4c966\" (UID: \"a9765c8d-8fb9-44d7-b82a-eec65fd4c966\") " Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.332263 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9765c8d-8fb9-44d7-b82a-eec65fd4c966-config-data\") pod \"a9765c8d-8fb9-44d7-b82a-eec65fd4c966\" (UID: \"a9765c8d-8fb9-44d7-b82a-eec65fd4c966\") " Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.332424 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9765c8d-8fb9-44d7-b82a-eec65fd4c966-combined-ca-bundle\") pod \"a9765c8d-8fb9-44d7-b82a-eec65fd4c966\" (UID: \"a9765c8d-8fb9-44d7-b82a-eec65fd4c966\") " Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.332624 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mp9n\" (UniqueName: \"kubernetes.io/projected/a9765c8d-8fb9-44d7-b82a-eec65fd4c966-kube-api-access-7mp9n\") pod \"a9765c8d-8fb9-44d7-b82a-eec65fd4c966\" (UID: \"a9765c8d-8fb9-44d7-b82a-eec65fd4c966\") " Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.332642 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9765c8d-8fb9-44d7-b82a-eec65fd4c966-logs" (OuterVolumeSpecName: "logs") pod "a9765c8d-8fb9-44d7-b82a-eec65fd4c966" (UID: "a9765c8d-8fb9-44d7-b82a-eec65fd4c966"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.337087 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de1bcdfc-43de-450a-8dfc-7a2c61450832-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.337662 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de1bcdfc-43de-450a-8dfc-7a2c61450832-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.337724 4772 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de1bcdfc-43de-450a-8dfc-7a2c61450832-logs\") on node \"crc\" DevicePath \"\"" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.337735 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tcn5t\" (UniqueName: \"kubernetes.io/projected/de1bcdfc-43de-450a-8dfc-7a2c61450832-kube-api-access-tcn5t\") on node \"crc\" DevicePath \"\"" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.339538 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9765c8d-8fb9-44d7-b82a-eec65fd4c966-kube-api-access-7mp9n" (OuterVolumeSpecName: "kube-api-access-7mp9n") pod "a9765c8d-8fb9-44d7-b82a-eec65fd4c966" (UID: "a9765c8d-8fb9-44d7-b82a-eec65fd4c966"). InnerVolumeSpecName "kube-api-access-7mp9n". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.355309 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.356334 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9765c8d-8fb9-44d7-b82a-eec65fd4c966-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a9765c8d-8fb9-44d7-b82a-eec65fd4c966" (UID: "a9765c8d-8fb9-44d7-b82a-eec65fd4c966"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.360278 4772 scope.go:117] "RemoveContainer" containerID="9643485e890088c31555bc64b499b30cb1fcf77e0d1ef06c3510c2b499709702" Nov 22 12:12:12 crc kubenswrapper[4772]: E1122 12:12:12.361406 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9643485e890088c31555bc64b499b30cb1fcf77e0d1ef06c3510c2b499709702\": container with ID starting with 9643485e890088c31555bc64b499b30cb1fcf77e0d1ef06c3510c2b499709702 not found: ID does not exist" containerID="9643485e890088c31555bc64b499b30cb1fcf77e0d1ef06c3510c2b499709702" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.361451 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9643485e890088c31555bc64b499b30cb1fcf77e0d1ef06c3510c2b499709702"} err="failed to get container status \"9643485e890088c31555bc64b499b30cb1fcf77e0d1ef06c3510c2b499709702\": rpc error: code = NotFound desc = could not find container \"9643485e890088c31555bc64b499b30cb1fcf77e0d1ef06c3510c2b499709702\": container with ID starting with 9643485e890088c31555bc64b499b30cb1fcf77e0d1ef06c3510c2b499709702 not found: ID does not exist" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.361481 4772 scope.go:117] "RemoveContainer" containerID="2a8c2bbfe2c2bce2b71619395e5c67ea36c323e3a8170dcb02705f97846c5a4f" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.368633 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 12:12:12 crc kubenswrapper[4772]: E1122 12:12:12.369142 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de1bcdfc-43de-450a-8dfc-7a2c61450832" containerName="nova-api-api" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.369180 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="de1bcdfc-43de-450a-8dfc-7a2c61450832" containerName="nova-api-api" Nov 22 12:12:12 crc kubenswrapper[4772]: E1122 12:12:12.369195 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a50c316c-347c-455b-b0ec-93d5a1c07934" containerName="nova-manage" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.369203 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="a50c316c-347c-455b-b0ec-93d5a1c07934" containerName="nova-manage" Nov 22 12:12:12 crc kubenswrapper[4772]: E1122 12:12:12.369232 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9765c8d-8fb9-44d7-b82a-eec65fd4c966" containerName="nova-metadata-log" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.369241 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9765c8d-8fb9-44d7-b82a-eec65fd4c966" containerName="nova-metadata-log" Nov 22 12:12:12 crc kubenswrapper[4772]: E1122 12:12:12.369258 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de1bcdfc-43de-450a-8dfc-7a2c61450832" containerName="nova-api-log" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.369268 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="de1bcdfc-43de-450a-8dfc-7a2c61450832" containerName="nova-api-log" Nov 22 12:12:12 crc kubenswrapper[4772]: E1122 12:12:12.369304 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9765c8d-8fb9-44d7-b82a-eec65fd4c966" containerName="nova-metadata-metadata" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.369314 4772 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="a9765c8d-8fb9-44d7-b82a-eec65fd4c966" containerName="nova-metadata-metadata" Nov 22 12:12:12 crc kubenswrapper[4772]: E1122 12:12:12.369330 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="470edc72-faea-4c87-bdf4-f5aae3137ecf" containerName="nova-scheduler-scheduler" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.369337 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="470edc72-faea-4c87-bdf4-f5aae3137ecf" containerName="nova-scheduler-scheduler" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.371835 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9765c8d-8fb9-44d7-b82a-eec65fd4c966-config-data" (OuterVolumeSpecName: "config-data") pod "a9765c8d-8fb9-44d7-b82a-eec65fd4c966" (UID: "a9765c8d-8fb9-44d7-b82a-eec65fd4c966"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.373349 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="470edc72-faea-4c87-bdf4-f5aae3137ecf" containerName="nova-scheduler-scheduler" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.373385 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9765c8d-8fb9-44d7-b82a-eec65fd4c966" containerName="nova-metadata-log" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.373407 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9765c8d-8fb9-44d7-b82a-eec65fd4c966" containerName="nova-metadata-metadata" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.373435 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="a50c316c-347c-455b-b0ec-93d5a1c07934" containerName="nova-manage" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.373443 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="de1bcdfc-43de-450a-8dfc-7a2c61450832" containerName="nova-api-log" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.373466 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="de1bcdfc-43de-450a-8dfc-7a2c61450832" containerName="nova-api-api" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.374345 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.376249 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.376858 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.389690 4772 scope.go:117] "RemoveContainer" containerID="4c2dee80ee306554d882b76621ff8e92e90cf00faa3c495192ad2a9535d8a5a5" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.405842 4772 scope.go:117] "RemoveContainer" containerID="2a8c2bbfe2c2bce2b71619395e5c67ea36c323e3a8170dcb02705f97846c5a4f" Nov 22 12:12:12 crc kubenswrapper[4772]: E1122 12:12:12.406260 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a8c2bbfe2c2bce2b71619395e5c67ea36c323e3a8170dcb02705f97846c5a4f\": container with ID starting with 2a8c2bbfe2c2bce2b71619395e5c67ea36c323e3a8170dcb02705f97846c5a4f not found: ID does not exist" containerID="2a8c2bbfe2c2bce2b71619395e5c67ea36c323e3a8170dcb02705f97846c5a4f" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.406326 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a8c2bbfe2c2bce2b71619395e5c67ea36c323e3a8170dcb02705f97846c5a4f"} err="failed to get container status \"2a8c2bbfe2c2bce2b71619395e5c67ea36c323e3a8170dcb02705f97846c5a4f\": rpc error: code = NotFound desc = could not find container \"2a8c2bbfe2c2bce2b71619395e5c67ea36c323e3a8170dcb02705f97846c5a4f\": container with ID starting with 2a8c2bbfe2c2bce2b71619395e5c67ea36c323e3a8170dcb02705f97846c5a4f not found: ID does not exist" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.406362 4772 scope.go:117] "RemoveContainer" containerID="4c2dee80ee306554d882b76621ff8e92e90cf00faa3c495192ad2a9535d8a5a5" Nov 22 12:12:12 crc kubenswrapper[4772]: E1122 12:12:12.406677 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c2dee80ee306554d882b76621ff8e92e90cf00faa3c495192ad2a9535d8a5a5\": container with ID starting with 4c2dee80ee306554d882b76621ff8e92e90cf00faa3c495192ad2a9535d8a5a5 not found: ID does not exist" containerID="4c2dee80ee306554d882b76621ff8e92e90cf00faa3c495192ad2a9535d8a5a5" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.406705 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c2dee80ee306554d882b76621ff8e92e90cf00faa3c495192ad2a9535d8a5a5"} err="failed to get container status \"4c2dee80ee306554d882b76621ff8e92e90cf00faa3c495192ad2a9535d8a5a5\": rpc error: code = NotFound desc = could not find container \"4c2dee80ee306554d882b76621ff8e92e90cf00faa3c495192ad2a9535d8a5a5\": container with ID starting with 4c2dee80ee306554d882b76621ff8e92e90cf00faa3c495192ad2a9535d8a5a5 not found: ID does not exist" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.439297 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6c5m\" (UniqueName: \"kubernetes.io/projected/0979b32f-aa79-489a-b5ec-ccfa90e847af-kube-api-access-c6c5m\") pod \"nova-scheduler-0\" (UID: \"0979b32f-aa79-489a-b5ec-ccfa90e847af\") " pod="openstack/nova-scheduler-0" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.439366 4772 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0979b32f-aa79-489a-b5ec-ccfa90e847af-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0979b32f-aa79-489a-b5ec-ccfa90e847af\") " pod="openstack/nova-scheduler-0" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.439505 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0979b32f-aa79-489a-b5ec-ccfa90e847af-config-data\") pod \"nova-scheduler-0\" (UID: \"0979b32f-aa79-489a-b5ec-ccfa90e847af\") " pod="openstack/nova-scheduler-0" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.439737 4772 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9765c8d-8fb9-44d7-b82a-eec65fd4c966-logs\") on node \"crc\" DevicePath \"\"" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.439788 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9765c8d-8fb9-44d7-b82a-eec65fd4c966-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.439809 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9765c8d-8fb9-44d7-b82a-eec65fd4c966-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.439828 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mp9n\" (UniqueName: \"kubernetes.io/projected/a9765c8d-8fb9-44d7-b82a-eec65fd4c966-kube-api-access-7mp9n\") on node \"crc\" DevicePath \"\"" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.541093 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0979b32f-aa79-489a-b5ec-ccfa90e847af-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0979b32f-aa79-489a-b5ec-ccfa90e847af\") " pod="openstack/nova-scheduler-0" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.541305 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0979b32f-aa79-489a-b5ec-ccfa90e847af-config-data\") pod \"nova-scheduler-0\" (UID: \"0979b32f-aa79-489a-b5ec-ccfa90e847af\") " pod="openstack/nova-scheduler-0" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.541350 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6c5m\" (UniqueName: \"kubernetes.io/projected/0979b32f-aa79-489a-b5ec-ccfa90e847af-kube-api-access-c6c5m\") pod \"nova-scheduler-0\" (UID: \"0979b32f-aa79-489a-b5ec-ccfa90e847af\") " pod="openstack/nova-scheduler-0" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.545286 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0979b32f-aa79-489a-b5ec-ccfa90e847af-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0979b32f-aa79-489a-b5ec-ccfa90e847af\") " pod="openstack/nova-scheduler-0" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.547958 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0979b32f-aa79-489a-b5ec-ccfa90e847af-config-data\") pod \"nova-scheduler-0\" (UID: \"0979b32f-aa79-489a-b5ec-ccfa90e847af\") " pod="openstack/nova-scheduler-0" Nov 22 
12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.558386 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6c5m\" (UniqueName: \"kubernetes.io/projected/0979b32f-aa79-489a-b5ec-ccfa90e847af-kube-api-access-c6c5m\") pod \"nova-scheduler-0\" (UID: \"0979b32f-aa79-489a-b5ec-ccfa90e847af\") " pod="openstack/nova-scheduler-0" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.577118 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.595448 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.616028 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.641851 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.644468 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.647634 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.651531 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.671832 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.685922 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.687930 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.690152 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.693860 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.694388 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.744807 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48d3f16f-387d-4fd0-8e35-3512f30a5f63-config-data\") pod \"nova-api-0\" (UID: \"48d3f16f-387d-4fd0-8e35-3512f30a5f63\") " pod="openstack/nova-api-0" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.744867 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83115fda-d180-46ed-b9f0-60ad3bfb6707-config-data\") pod \"nova-metadata-0\" (UID: \"83115fda-d180-46ed-b9f0-60ad3bfb6707\") " pod="openstack/nova-metadata-0" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.744902 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83115fda-d180-46ed-b9f0-60ad3bfb6707-logs\") pod \"nova-metadata-0\" (UID: \"83115fda-d180-46ed-b9f0-60ad3bfb6707\") " pod="openstack/nova-metadata-0" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.744932 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83115fda-d180-46ed-b9f0-60ad3bfb6707-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"83115fda-d180-46ed-b9f0-60ad3bfb6707\") " pod="openstack/nova-metadata-0" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.744980 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48d3f16f-387d-4fd0-8e35-3512f30a5f63-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"48d3f16f-387d-4fd0-8e35-3512f30a5f63\") " pod="openstack/nova-api-0" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.745022 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s92j\" (UniqueName: \"kubernetes.io/projected/83115fda-d180-46ed-b9f0-60ad3bfb6707-kube-api-access-7s92j\") pod \"nova-metadata-0\" (UID: \"83115fda-d180-46ed-b9f0-60ad3bfb6707\") " pod="openstack/nova-metadata-0" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.745101 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48d3f16f-387d-4fd0-8e35-3512f30a5f63-logs\") pod \"nova-api-0\" (UID: \"48d3f16f-387d-4fd0-8e35-3512f30a5f63\") " pod="openstack/nova-api-0" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.745144 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njnm2\" (UniqueName: \"kubernetes.io/projected/48d3f16f-387d-4fd0-8e35-3512f30a5f63-kube-api-access-njnm2\") pod \"nova-api-0\" (UID: \"48d3f16f-387d-4fd0-8e35-3512f30a5f63\") " pod="openstack/nova-api-0" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.849482 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48d3f16f-387d-4fd0-8e35-3512f30a5f63-config-data\") pod \"nova-api-0\" (UID: \"48d3f16f-387d-4fd0-8e35-3512f30a5f63\") " pod="openstack/nova-api-0" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.849812 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/83115fda-d180-46ed-b9f0-60ad3bfb6707-config-data\") pod \"nova-metadata-0\" (UID: \"83115fda-d180-46ed-b9f0-60ad3bfb6707\") " pod="openstack/nova-metadata-0" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.849846 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83115fda-d180-46ed-b9f0-60ad3bfb6707-logs\") pod \"nova-metadata-0\" (UID: \"83115fda-d180-46ed-b9f0-60ad3bfb6707\") " pod="openstack/nova-metadata-0" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.850229 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83115fda-d180-46ed-b9f0-60ad3bfb6707-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"83115fda-d180-46ed-b9f0-60ad3bfb6707\") " pod="openstack/nova-metadata-0" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.850411 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48d3f16f-387d-4fd0-8e35-3512f30a5f63-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"48d3f16f-387d-4fd0-8e35-3512f30a5f63\") " pod="openstack/nova-api-0" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.850461 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7s92j\" (UniqueName: \"kubernetes.io/projected/83115fda-d180-46ed-b9f0-60ad3bfb6707-kube-api-access-7s92j\") pod \"nova-metadata-0\" (UID: \"83115fda-d180-46ed-b9f0-60ad3bfb6707\") " pod="openstack/nova-metadata-0" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.850494 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48d3f16f-387d-4fd0-8e35-3512f30a5f63-logs\") pod \"nova-api-0\" (UID: \"48d3f16f-387d-4fd0-8e35-3512f30a5f63\") " pod="openstack/nova-api-0" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.850537 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njnm2\" (UniqueName: \"kubernetes.io/projected/48d3f16f-387d-4fd0-8e35-3512f30a5f63-kube-api-access-njnm2\") pod \"nova-api-0\" (UID: \"48d3f16f-387d-4fd0-8e35-3512f30a5f63\") " pod="openstack/nova-api-0" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.853285 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48d3f16f-387d-4fd0-8e35-3512f30a5f63-logs\") pod \"nova-api-0\" (UID: \"48d3f16f-387d-4fd0-8e35-3512f30a5f63\") " pod="openstack/nova-api-0" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.856384 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83115fda-d180-46ed-b9f0-60ad3bfb6707-logs\") pod \"nova-metadata-0\" (UID: \"83115fda-d180-46ed-b9f0-60ad3bfb6707\") " pod="openstack/nova-metadata-0" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.860170 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83115fda-d180-46ed-b9f0-60ad3bfb6707-config-data\") pod \"nova-metadata-0\" (UID: \"83115fda-d180-46ed-b9f0-60ad3bfb6707\") " pod="openstack/nova-metadata-0" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.860840 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/48d3f16f-387d-4fd0-8e35-3512f30a5f63-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"48d3f16f-387d-4fd0-8e35-3512f30a5f63\") " pod="openstack/nova-api-0" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.863169 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83115fda-d180-46ed-b9f0-60ad3bfb6707-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"83115fda-d180-46ed-b9f0-60ad3bfb6707\") " pod="openstack/nova-metadata-0" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.865193 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48d3f16f-387d-4fd0-8e35-3512f30a5f63-config-data\") pod \"nova-api-0\" (UID: \"48d3f16f-387d-4fd0-8e35-3512f30a5f63\") " pod="openstack/nova-api-0" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.868739 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7s92j\" (UniqueName: \"kubernetes.io/projected/83115fda-d180-46ed-b9f0-60ad3bfb6707-kube-api-access-7s92j\") pod \"nova-metadata-0\" (UID: \"83115fda-d180-46ed-b9f0-60ad3bfb6707\") " pod="openstack/nova-metadata-0" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.869028 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njnm2\" (UniqueName: \"kubernetes.io/projected/48d3f16f-387d-4fd0-8e35-3512f30a5f63-kube-api-access-njnm2\") pod \"nova-api-0\" (UID: \"48d3f16f-387d-4fd0-8e35-3512f30a5f63\") " pod="openstack/nova-api-0" Nov 22 12:12:12 crc kubenswrapper[4772]: I1122 12:12:12.975315 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 12:12:13 crc kubenswrapper[4772]: I1122 12:12:13.009012 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 22 12:12:13 crc kubenswrapper[4772]: I1122 12:12:13.137098 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 12:12:13 crc kubenswrapper[4772]: W1122 12:12:13.144852 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0979b32f_aa79_489a_b5ec_ccfa90e847af.slice/crio-747484e8399c2ce69195cce2c8d685e84d6b0fd44f0662571a84232a8541326d WatchSource:0}: Error finding container 747484e8399c2ce69195cce2c8d685e84d6b0fd44f0662571a84232a8541326d: Status 404 returned error can't find the container with id 747484e8399c2ce69195cce2c8d685e84d6b0fd44f0662571a84232a8541326d Nov 22 12:12:13 crc kubenswrapper[4772]: I1122 12:12:13.256305 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0979b32f-aa79-489a-b5ec-ccfa90e847af","Type":"ContainerStarted","Data":"747484e8399c2ce69195cce2c8d685e84d6b0fd44f0662571a84232a8541326d"} Nov 22 12:12:13 crc kubenswrapper[4772]: I1122 12:12:13.426668 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="470edc72-faea-4c87-bdf4-f5aae3137ecf" path="/var/lib/kubelet/pods/470edc72-faea-4c87-bdf4-f5aae3137ecf/volumes" Nov 22 12:12:13 crc kubenswrapper[4772]: I1122 12:12:13.427623 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9765c8d-8fb9-44d7-b82a-eec65fd4c966" path="/var/lib/kubelet/pods/a9765c8d-8fb9-44d7-b82a-eec65fd4c966/volumes" Nov 22 12:12:13 crc kubenswrapper[4772]: I1122 12:12:13.428465 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de1bcdfc-43de-450a-8dfc-7a2c61450832" path="/var/lib/kubelet/pods/de1bcdfc-43de-450a-8dfc-7a2c61450832/volumes" Nov 22 12:12:13 crc kubenswrapper[4772]: W1122 12:12:13.451574 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83115fda_d180_46ed_b9f0_60ad3bfb6707.slice/crio-78c94eafa4fa5138052a96a735e8e89118af7057d1a376bdd2661391bee6d05c WatchSource:0}: Error finding container 78c94eafa4fa5138052a96a735e8e89118af7057d1a376bdd2661391bee6d05c: Status 404 returned error can't find the container with id 78c94eafa4fa5138052a96a735e8e89118af7057d1a376bdd2661391bee6d05c Nov 22 12:12:13 crc kubenswrapper[4772]: I1122 12:12:13.454337 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 12:12:13 crc kubenswrapper[4772]: I1122 12:12:13.514706 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 22 12:12:13 crc kubenswrapper[4772]: W1122 12:12:13.519231 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48d3f16f_387d_4fd0_8e35_3512f30a5f63.slice/crio-824b7d85294c3c52f8bca76acacf846be74ae8208abaa63e3e045294daa64695 WatchSource:0}: Error finding container 824b7d85294c3c52f8bca76acacf846be74ae8208abaa63e3e045294daa64695: Status 404 returned error can't find the container with id 824b7d85294c3c52f8bca76acacf846be74ae8208abaa63e3e045294daa64695 Nov 22 12:12:14 crc kubenswrapper[4772]: I1122 12:12:14.278886 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"48d3f16f-387d-4fd0-8e35-3512f30a5f63","Type":"ContainerStarted","Data":"684450095b0d0da710639dc42db159d91a0058f330d8a821df53a53a00ddad84"} Nov 22 12:12:14 crc kubenswrapper[4772]: I1122 12:12:14.279392 4772 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"48d3f16f-387d-4fd0-8e35-3512f30a5f63","Type":"ContainerStarted","Data":"87b0fc08a60a1f07de880eaa6a20ebe659591cad3d710450af8e4ce4bf0cc8f9"} Nov 22 12:12:14 crc kubenswrapper[4772]: I1122 12:12:14.279500 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"48d3f16f-387d-4fd0-8e35-3512f30a5f63","Type":"ContainerStarted","Data":"824b7d85294c3c52f8bca76acacf846be74ae8208abaa63e3e045294daa64695"} Nov 22 12:12:14 crc kubenswrapper[4772]: I1122 12:12:14.281580 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0979b32f-aa79-489a-b5ec-ccfa90e847af","Type":"ContainerStarted","Data":"4541144229aa2f85741423a8ae44744ac28064cb41933883dad1026122c0e62f"} Nov 22 12:12:14 crc kubenswrapper[4772]: I1122 12:12:14.286122 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"83115fda-d180-46ed-b9f0-60ad3bfb6707","Type":"ContainerStarted","Data":"82e6c89ff602781316419a844ba22064501e1d2072843c58b96d945dcd2c3e58"} Nov 22 12:12:14 crc kubenswrapper[4772]: I1122 12:12:14.286171 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"83115fda-d180-46ed-b9f0-60ad3bfb6707","Type":"ContainerStarted","Data":"7e5d5caddb270fb47b5513c032978a58b5504de9fa7da3749634b2d16148ae19"} Nov 22 12:12:14 crc kubenswrapper[4772]: I1122 12:12:14.286191 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"83115fda-d180-46ed-b9f0-60ad3bfb6707","Type":"ContainerStarted","Data":"78c94eafa4fa5138052a96a735e8e89118af7057d1a376bdd2661391bee6d05c"} Nov 22 12:12:14 crc kubenswrapper[4772]: I1122 12:12:14.327433 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.327413987 podStartE2EDuration="2.327413987s" podCreationTimestamp="2025-11-22 12:12:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:12:14.304187758 +0000 UTC m=+5654.543632262" watchObservedRunningTime="2025-11-22 12:12:14.327413987 +0000 UTC m=+5654.566858481" Nov 22 12:12:14 crc kubenswrapper[4772]: I1122 12:12:14.329536 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.329527469 podStartE2EDuration="2.329527469s" podCreationTimestamp="2025-11-22 12:12:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:12:14.324869423 +0000 UTC m=+5654.564313917" watchObservedRunningTime="2025-11-22 12:12:14.329527469 +0000 UTC m=+5654.568971963" Nov 22 12:12:14 crc kubenswrapper[4772]: I1122 12:12:14.345867 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.345846396 podStartE2EDuration="2.345846396s" podCreationTimestamp="2025-11-22 12:12:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:12:14.343863326 +0000 UTC m=+5654.583307820" watchObservedRunningTime="2025-11-22 12:12:14.345846396 +0000 UTC m=+5654.585290890" Nov 22 12:12:17 crc kubenswrapper[4772]: I1122 12:12:17.694845 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/nova-scheduler-0" Nov 22 12:12:17 crc kubenswrapper[4772]: I1122 12:12:17.975893 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 22 12:12:17 crc kubenswrapper[4772]: I1122 12:12:17.976084 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 22 12:12:22 crc kubenswrapper[4772]: I1122 12:12:22.695278 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 22 12:12:22 crc kubenswrapper[4772]: I1122 12:12:22.729430 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 22 12:12:22 crc kubenswrapper[4772]: I1122 12:12:22.976347 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 22 12:12:22 crc kubenswrapper[4772]: I1122 12:12:22.976949 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 22 12:12:23 crc kubenswrapper[4772]: I1122 12:12:23.009784 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 22 12:12:23 crc kubenswrapper[4772]: I1122 12:12:23.009846 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 22 12:12:23 crc kubenswrapper[4772]: I1122 12:12:23.463644 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 22 12:12:24 crc kubenswrapper[4772]: I1122 12:12:24.142296 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="83115fda-d180-46ed-b9f0-60ad3bfb6707" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.73:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 12:12:24 crc kubenswrapper[4772]: I1122 12:12:24.142342 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="48d3f16f-387d-4fd0-8e35-3512f30a5f63" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.74:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 12:12:24 crc kubenswrapper[4772]: I1122 12:12:24.142359 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="83115fda-d180-46ed-b9f0-60ad3bfb6707" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.73:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 12:12:24 crc kubenswrapper[4772]: I1122 12:12:24.142342 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="48d3f16f-387d-4fd0-8e35-3512f30a5f63" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.74:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 12:12:32 crc kubenswrapper[4772]: I1122 12:12:32.978241 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 22 12:12:32 crc kubenswrapper[4772]: I1122 12:12:32.979034 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 22 12:12:32 crc kubenswrapper[4772]: I1122 12:12:32.983077 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 22 12:12:32 crc 
kubenswrapper[4772]: I1122 12:12:32.989771 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 22 12:12:33 crc kubenswrapper[4772]: I1122 12:12:33.016620 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 22 12:12:33 crc kubenswrapper[4772]: I1122 12:12:33.017135 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 22 12:12:33 crc kubenswrapper[4772]: I1122 12:12:33.022591 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 22 12:12:33 crc kubenswrapper[4772]: I1122 12:12:33.024621 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 22 12:12:33 crc kubenswrapper[4772]: I1122 12:12:33.501913 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 22 12:12:33 crc kubenswrapper[4772]: I1122 12:12:33.505881 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 22 12:12:33 crc kubenswrapper[4772]: I1122 12:12:33.715268 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58bb7467f-z97hh"] Nov 22 12:12:33 crc kubenswrapper[4772]: I1122 12:12:33.717600 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58bb7467f-z97hh" Nov 22 12:12:33 crc kubenswrapper[4772]: I1122 12:12:33.734748 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58bb7467f-z97hh"] Nov 22 12:12:33 crc kubenswrapper[4772]: I1122 12:12:33.783623 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9a28400c-54e4-428e-80ee-28e494a10303-ovsdbserver-sb\") pod \"dnsmasq-dns-58bb7467f-z97hh\" (UID: \"9a28400c-54e4-428e-80ee-28e494a10303\") " pod="openstack/dnsmasq-dns-58bb7467f-z97hh" Nov 22 12:12:33 crc kubenswrapper[4772]: I1122 12:12:33.783748 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9a28400c-54e4-428e-80ee-28e494a10303-ovsdbserver-nb\") pod \"dnsmasq-dns-58bb7467f-z97hh\" (UID: \"9a28400c-54e4-428e-80ee-28e494a10303\") " pod="openstack/dnsmasq-dns-58bb7467f-z97hh" Nov 22 12:12:33 crc kubenswrapper[4772]: I1122 12:12:33.783788 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a28400c-54e4-428e-80ee-28e494a10303-config\") pod \"dnsmasq-dns-58bb7467f-z97hh\" (UID: \"9a28400c-54e4-428e-80ee-28e494a10303\") " pod="openstack/dnsmasq-dns-58bb7467f-z97hh" Nov 22 12:12:33 crc kubenswrapper[4772]: I1122 12:12:33.783848 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9a28400c-54e4-428e-80ee-28e494a10303-dns-svc\") pod \"dnsmasq-dns-58bb7467f-z97hh\" (UID: \"9a28400c-54e4-428e-80ee-28e494a10303\") " pod="openstack/dnsmasq-dns-58bb7467f-z97hh" Nov 22 12:12:33 crc kubenswrapper[4772]: I1122 12:12:33.783876 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gksxm\" (UniqueName: \"kubernetes.io/projected/9a28400c-54e4-428e-80ee-28e494a10303-kube-api-access-gksxm\") pod \"dnsmasq-dns-58bb7467f-z97hh\" (UID: 
\"9a28400c-54e4-428e-80ee-28e494a10303\") " pod="openstack/dnsmasq-dns-58bb7467f-z97hh" Nov 22 12:12:33 crc kubenswrapper[4772]: I1122 12:12:33.885390 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9a28400c-54e4-428e-80ee-28e494a10303-ovsdbserver-nb\") pod \"dnsmasq-dns-58bb7467f-z97hh\" (UID: \"9a28400c-54e4-428e-80ee-28e494a10303\") " pod="openstack/dnsmasq-dns-58bb7467f-z97hh" Nov 22 12:12:33 crc kubenswrapper[4772]: I1122 12:12:33.885458 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a28400c-54e4-428e-80ee-28e494a10303-config\") pod \"dnsmasq-dns-58bb7467f-z97hh\" (UID: \"9a28400c-54e4-428e-80ee-28e494a10303\") " pod="openstack/dnsmasq-dns-58bb7467f-z97hh" Nov 22 12:12:33 crc kubenswrapper[4772]: I1122 12:12:33.885530 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9a28400c-54e4-428e-80ee-28e494a10303-dns-svc\") pod \"dnsmasq-dns-58bb7467f-z97hh\" (UID: \"9a28400c-54e4-428e-80ee-28e494a10303\") " pod="openstack/dnsmasq-dns-58bb7467f-z97hh" Nov 22 12:12:33 crc kubenswrapper[4772]: I1122 12:12:33.885570 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gksxm\" (UniqueName: \"kubernetes.io/projected/9a28400c-54e4-428e-80ee-28e494a10303-kube-api-access-gksxm\") pod \"dnsmasq-dns-58bb7467f-z97hh\" (UID: \"9a28400c-54e4-428e-80ee-28e494a10303\") " pod="openstack/dnsmasq-dns-58bb7467f-z97hh" Nov 22 12:12:33 crc kubenswrapper[4772]: I1122 12:12:33.885636 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9a28400c-54e4-428e-80ee-28e494a10303-ovsdbserver-sb\") pod \"dnsmasq-dns-58bb7467f-z97hh\" (UID: \"9a28400c-54e4-428e-80ee-28e494a10303\") " pod="openstack/dnsmasq-dns-58bb7467f-z97hh" Nov 22 12:12:33 crc kubenswrapper[4772]: I1122 12:12:33.886478 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9a28400c-54e4-428e-80ee-28e494a10303-dns-svc\") pod \"dnsmasq-dns-58bb7467f-z97hh\" (UID: \"9a28400c-54e4-428e-80ee-28e494a10303\") " pod="openstack/dnsmasq-dns-58bb7467f-z97hh" Nov 22 12:12:33 crc kubenswrapper[4772]: I1122 12:12:33.886683 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9a28400c-54e4-428e-80ee-28e494a10303-ovsdbserver-nb\") pod \"dnsmasq-dns-58bb7467f-z97hh\" (UID: \"9a28400c-54e4-428e-80ee-28e494a10303\") " pod="openstack/dnsmasq-dns-58bb7467f-z97hh" Nov 22 12:12:33 crc kubenswrapper[4772]: I1122 12:12:33.886698 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a28400c-54e4-428e-80ee-28e494a10303-config\") pod \"dnsmasq-dns-58bb7467f-z97hh\" (UID: \"9a28400c-54e4-428e-80ee-28e494a10303\") " pod="openstack/dnsmasq-dns-58bb7467f-z97hh" Nov 22 12:12:33 crc kubenswrapper[4772]: I1122 12:12:33.886855 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9a28400c-54e4-428e-80ee-28e494a10303-ovsdbserver-sb\") pod \"dnsmasq-dns-58bb7467f-z97hh\" (UID: \"9a28400c-54e4-428e-80ee-28e494a10303\") " pod="openstack/dnsmasq-dns-58bb7467f-z97hh" Nov 22 12:12:33 crc kubenswrapper[4772]: I1122 
12:12:33.912600 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gksxm\" (UniqueName: \"kubernetes.io/projected/9a28400c-54e4-428e-80ee-28e494a10303-kube-api-access-gksxm\") pod \"dnsmasq-dns-58bb7467f-z97hh\" (UID: \"9a28400c-54e4-428e-80ee-28e494a10303\") " pod="openstack/dnsmasq-dns-58bb7467f-z97hh" Nov 22 12:12:34 crc kubenswrapper[4772]: I1122 12:12:34.051228 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58bb7467f-z97hh" Nov 22 12:12:34 crc kubenswrapper[4772]: I1122 12:12:34.492244 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58bb7467f-z97hh"] Nov 22 12:12:34 crc kubenswrapper[4772]: I1122 12:12:34.522658 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58bb7467f-z97hh" event={"ID":"9a28400c-54e4-428e-80ee-28e494a10303","Type":"ContainerStarted","Data":"f7f714b2d51886dcc433363f7f9fa795952ea2daa03f550b5498609c64046ed5"} Nov 22 12:12:35 crc kubenswrapper[4772]: I1122 12:12:35.533256 4772 generic.go:334] "Generic (PLEG): container finished" podID="9a28400c-54e4-428e-80ee-28e494a10303" containerID="fbde4ab41f54bba3c0306628c33c67309cf47bf2b5885c814b855671f2caa6be" exitCode=0 Nov 22 12:12:35 crc kubenswrapper[4772]: I1122 12:12:35.536330 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58bb7467f-z97hh" event={"ID":"9a28400c-54e4-428e-80ee-28e494a10303","Type":"ContainerDied","Data":"fbde4ab41f54bba3c0306628c33c67309cf47bf2b5885c814b855671f2caa6be"} Nov 22 12:12:36 crc kubenswrapper[4772]: I1122 12:12:36.545081 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58bb7467f-z97hh" event={"ID":"9a28400c-54e4-428e-80ee-28e494a10303","Type":"ContainerStarted","Data":"4b6ba7dc669a25f90f3515b5a608a72c038b82c343d60e83912421d93a580575"} Nov 22 12:12:36 crc kubenswrapper[4772]: I1122 12:12:36.545577 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58bb7467f-z97hh" Nov 22 12:12:36 crc kubenswrapper[4772]: I1122 12:12:36.562094 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58bb7467f-z97hh" podStartSLOduration=3.562074693 podStartE2EDuration="3.562074693s" podCreationTimestamp="2025-11-22 12:12:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:12:36.559118599 +0000 UTC m=+5676.798563103" watchObservedRunningTime="2025-11-22 12:12:36.562074693 +0000 UTC m=+5676.801519187" Nov 22 12:12:44 crc kubenswrapper[4772]: I1122 12:12:44.055692 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-58bb7467f-z97hh" Nov 22 12:12:44 crc kubenswrapper[4772]: I1122 12:12:44.136876 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6645986957-d4fbm"] Nov 22 12:12:44 crc kubenswrapper[4772]: I1122 12:12:44.137546 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6645986957-d4fbm" podUID="cd2377a6-3164-4bc6-baf9-ff1a45d72e2f" containerName="dnsmasq-dns" containerID="cri-o://3a381bf4afebb5ed3cc848428d6206696d4417f29f7557b51d7da0bfa88f56ca" gracePeriod=10 Nov 22 12:12:44 crc kubenswrapper[4772]: I1122 12:12:44.625954 4772 generic.go:334] "Generic (PLEG): container finished" podID="cd2377a6-3164-4bc6-baf9-ff1a45d72e2f" 
containerID="3a381bf4afebb5ed3cc848428d6206696d4417f29f7557b51d7da0bfa88f56ca" exitCode=0 Nov 22 12:12:44 crc kubenswrapper[4772]: I1122 12:12:44.626742 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6645986957-d4fbm" event={"ID":"cd2377a6-3164-4bc6-baf9-ff1a45d72e2f","Type":"ContainerDied","Data":"3a381bf4afebb5ed3cc848428d6206696d4417f29f7557b51d7da0bfa88f56ca"} Nov 22 12:12:44 crc kubenswrapper[4772]: I1122 12:12:44.715262 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6645986957-d4fbm" Nov 22 12:12:44 crc kubenswrapper[4772]: I1122 12:12:44.841797 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cd2377a6-3164-4bc6-baf9-ff1a45d72e2f-ovsdbserver-sb\") pod \"cd2377a6-3164-4bc6-baf9-ff1a45d72e2f\" (UID: \"cd2377a6-3164-4bc6-baf9-ff1a45d72e2f\") " Nov 22 12:12:44 crc kubenswrapper[4772]: I1122 12:12:44.842184 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lskwn\" (UniqueName: \"kubernetes.io/projected/cd2377a6-3164-4bc6-baf9-ff1a45d72e2f-kube-api-access-lskwn\") pod \"cd2377a6-3164-4bc6-baf9-ff1a45d72e2f\" (UID: \"cd2377a6-3164-4bc6-baf9-ff1a45d72e2f\") " Nov 22 12:12:44 crc kubenswrapper[4772]: I1122 12:12:44.842340 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd2377a6-3164-4bc6-baf9-ff1a45d72e2f-ovsdbserver-nb\") pod \"cd2377a6-3164-4bc6-baf9-ff1a45d72e2f\" (UID: \"cd2377a6-3164-4bc6-baf9-ff1a45d72e2f\") " Nov 22 12:12:44 crc kubenswrapper[4772]: I1122 12:12:44.842635 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd2377a6-3164-4bc6-baf9-ff1a45d72e2f-config\") pod \"cd2377a6-3164-4bc6-baf9-ff1a45d72e2f\" (UID: \"cd2377a6-3164-4bc6-baf9-ff1a45d72e2f\") " Nov 22 12:12:44 crc kubenswrapper[4772]: I1122 12:12:44.842831 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd2377a6-3164-4bc6-baf9-ff1a45d72e2f-dns-svc\") pod \"cd2377a6-3164-4bc6-baf9-ff1a45d72e2f\" (UID: \"cd2377a6-3164-4bc6-baf9-ff1a45d72e2f\") " Nov 22 12:12:44 crc kubenswrapper[4772]: I1122 12:12:44.852402 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd2377a6-3164-4bc6-baf9-ff1a45d72e2f-kube-api-access-lskwn" (OuterVolumeSpecName: "kube-api-access-lskwn") pod "cd2377a6-3164-4bc6-baf9-ff1a45d72e2f" (UID: "cd2377a6-3164-4bc6-baf9-ff1a45d72e2f"). InnerVolumeSpecName "kube-api-access-lskwn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:12:44 crc kubenswrapper[4772]: I1122 12:12:44.892963 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd2377a6-3164-4bc6-baf9-ff1a45d72e2f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cd2377a6-3164-4bc6-baf9-ff1a45d72e2f" (UID: "cd2377a6-3164-4bc6-baf9-ff1a45d72e2f"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 12:12:44 crc kubenswrapper[4772]: I1122 12:12:44.894485 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd2377a6-3164-4bc6-baf9-ff1a45d72e2f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cd2377a6-3164-4bc6-baf9-ff1a45d72e2f" (UID: "cd2377a6-3164-4bc6-baf9-ff1a45d72e2f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 12:12:44 crc kubenswrapper[4772]: I1122 12:12:44.895542 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd2377a6-3164-4bc6-baf9-ff1a45d72e2f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cd2377a6-3164-4bc6-baf9-ff1a45d72e2f" (UID: "cd2377a6-3164-4bc6-baf9-ff1a45d72e2f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 12:12:44 crc kubenswrapper[4772]: I1122 12:12:44.904362 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd2377a6-3164-4bc6-baf9-ff1a45d72e2f-config" (OuterVolumeSpecName: "config") pod "cd2377a6-3164-4bc6-baf9-ff1a45d72e2f" (UID: "cd2377a6-3164-4bc6-baf9-ff1a45d72e2f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 12:12:44 crc kubenswrapper[4772]: I1122 12:12:44.944999 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lskwn\" (UniqueName: \"kubernetes.io/projected/cd2377a6-3164-4bc6-baf9-ff1a45d72e2f-kube-api-access-lskwn\") on node \"crc\" DevicePath \"\"" Nov 22 12:12:44 crc kubenswrapper[4772]: I1122 12:12:44.945042 4772 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd2377a6-3164-4bc6-baf9-ff1a45d72e2f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 12:12:44 crc kubenswrapper[4772]: I1122 12:12:44.945071 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd2377a6-3164-4bc6-baf9-ff1a45d72e2f-config\") on node \"crc\" DevicePath \"\"" Nov 22 12:12:44 crc kubenswrapper[4772]: I1122 12:12:44.945082 4772 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd2377a6-3164-4bc6-baf9-ff1a45d72e2f-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 12:12:44 crc kubenswrapper[4772]: I1122 12:12:44.945093 4772 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cd2377a6-3164-4bc6-baf9-ff1a45d72e2f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 22 12:12:45 crc kubenswrapper[4772]: I1122 12:12:45.639556 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6645986957-d4fbm" event={"ID":"cd2377a6-3164-4bc6-baf9-ff1a45d72e2f","Type":"ContainerDied","Data":"b4fc8f6603e613ef49e5a361fbb00b87fa75c5b2d7ca84bcffd0dada841a3198"} Nov 22 12:12:45 crc kubenswrapper[4772]: I1122 12:12:45.639924 4772 scope.go:117] "RemoveContainer" containerID="3a381bf4afebb5ed3cc848428d6206696d4417f29f7557b51d7da0bfa88f56ca" Nov 22 12:12:45 crc kubenswrapper[4772]: I1122 12:12:45.639808 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6645986957-d4fbm" Nov 22 12:12:45 crc kubenswrapper[4772]: I1122 12:12:45.669864 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6645986957-d4fbm"] Nov 22 12:12:45 crc kubenswrapper[4772]: I1122 12:12:45.679571 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6645986957-d4fbm"] Nov 22 12:12:45 crc kubenswrapper[4772]: I1122 12:12:45.685033 4772 scope.go:117] "RemoveContainer" containerID="629ee177b629698367fc8581ca4ecc7a964168f04717f01e5e36129f91b5382c" Nov 22 12:12:47 crc kubenswrapper[4772]: I1122 12:12:47.430722 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd2377a6-3164-4bc6-baf9-ff1a45d72e2f" path="/var/lib/kubelet/pods/cd2377a6-3164-4bc6-baf9-ff1a45d72e2f/volumes" Nov 22 12:12:47 crc kubenswrapper[4772]: I1122 12:12:47.449294 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-dvsv9"] Nov 22 12:12:47 crc kubenswrapper[4772]: E1122 12:12:47.449875 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd2377a6-3164-4bc6-baf9-ff1a45d72e2f" containerName="init" Nov 22 12:12:47 crc kubenswrapper[4772]: I1122 12:12:47.449895 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd2377a6-3164-4bc6-baf9-ff1a45d72e2f" containerName="init" Nov 22 12:12:47 crc kubenswrapper[4772]: E1122 12:12:47.449910 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd2377a6-3164-4bc6-baf9-ff1a45d72e2f" containerName="dnsmasq-dns" Nov 22 12:12:47 crc kubenswrapper[4772]: I1122 12:12:47.449918 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd2377a6-3164-4bc6-baf9-ff1a45d72e2f" containerName="dnsmasq-dns" Nov 22 12:12:47 crc kubenswrapper[4772]: I1122 12:12:47.450141 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd2377a6-3164-4bc6-baf9-ff1a45d72e2f" containerName="dnsmasq-dns" Nov 22 12:12:47 crc kubenswrapper[4772]: I1122 12:12:47.450808 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-dvsv9" Nov 22 12:12:47 crc kubenswrapper[4772]: I1122 12:12:47.459597 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-dvsv9"] Nov 22 12:12:47 crc kubenswrapper[4772]: I1122 12:12:47.602485 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwfnn\" (UniqueName: \"kubernetes.io/projected/e7afba77-d20a-40cd-8530-37d6aea1e247-kube-api-access-fwfnn\") pod \"cinder-db-create-dvsv9\" (UID: \"e7afba77-d20a-40cd-8530-37d6aea1e247\") " pod="openstack/cinder-db-create-dvsv9" Nov 22 12:12:47 crc kubenswrapper[4772]: I1122 12:12:47.704379 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwfnn\" (UniqueName: \"kubernetes.io/projected/e7afba77-d20a-40cd-8530-37d6aea1e247-kube-api-access-fwfnn\") pod \"cinder-db-create-dvsv9\" (UID: \"e7afba77-d20a-40cd-8530-37d6aea1e247\") " pod="openstack/cinder-db-create-dvsv9" Nov 22 12:12:47 crc kubenswrapper[4772]: I1122 12:12:47.723881 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwfnn\" (UniqueName: \"kubernetes.io/projected/e7afba77-d20a-40cd-8530-37d6aea1e247-kube-api-access-fwfnn\") pod \"cinder-db-create-dvsv9\" (UID: \"e7afba77-d20a-40cd-8530-37d6aea1e247\") " pod="openstack/cinder-db-create-dvsv9" Nov 22 12:12:47 crc kubenswrapper[4772]: I1122 12:12:47.770070 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-dvsv9" Nov 22 12:12:48 crc kubenswrapper[4772]: I1122 12:12:48.226181 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-dvsv9"] Nov 22 12:12:48 crc kubenswrapper[4772]: I1122 12:12:48.672204 4772 generic.go:334] "Generic (PLEG): container finished" podID="e7afba77-d20a-40cd-8530-37d6aea1e247" containerID="3bfebb8c105b234595fdd402c4f043a886a0ce4db74468851391059f2c9a7e4b" exitCode=0 Nov 22 12:12:48 crc kubenswrapper[4772]: I1122 12:12:48.672262 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-dvsv9" event={"ID":"e7afba77-d20a-40cd-8530-37d6aea1e247","Type":"ContainerDied","Data":"3bfebb8c105b234595fdd402c4f043a886a0ce4db74468851391059f2c9a7e4b"} Nov 22 12:12:48 crc kubenswrapper[4772]: I1122 12:12:48.672486 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-dvsv9" event={"ID":"e7afba77-d20a-40cd-8530-37d6aea1e247","Type":"ContainerStarted","Data":"7ee20ddb057a774cd771a0e55679fc126be6bcf86b7f81f2adf46bc738fff2d9"} Nov 22 12:12:50 crc kubenswrapper[4772]: I1122 12:12:50.101894 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-dvsv9" Nov 22 12:12:50 crc kubenswrapper[4772]: I1122 12:12:50.258763 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwfnn\" (UniqueName: \"kubernetes.io/projected/e7afba77-d20a-40cd-8530-37d6aea1e247-kube-api-access-fwfnn\") pod \"e7afba77-d20a-40cd-8530-37d6aea1e247\" (UID: \"e7afba77-d20a-40cd-8530-37d6aea1e247\") " Nov 22 12:12:50 crc kubenswrapper[4772]: I1122 12:12:50.263921 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7afba77-d20a-40cd-8530-37d6aea1e247-kube-api-access-fwfnn" (OuterVolumeSpecName: "kube-api-access-fwfnn") pod "e7afba77-d20a-40cd-8530-37d6aea1e247" (UID: "e7afba77-d20a-40cd-8530-37d6aea1e247"). 
InnerVolumeSpecName "kube-api-access-fwfnn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:12:50 crc kubenswrapper[4772]: I1122 12:12:50.360744 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwfnn\" (UniqueName: \"kubernetes.io/projected/e7afba77-d20a-40cd-8530-37d6aea1e247-kube-api-access-fwfnn\") on node \"crc\" DevicePath \"\"" Nov 22 12:12:50 crc kubenswrapper[4772]: I1122 12:12:50.693134 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-dvsv9" event={"ID":"e7afba77-d20a-40cd-8530-37d6aea1e247","Type":"ContainerDied","Data":"7ee20ddb057a774cd771a0e55679fc126be6bcf86b7f81f2adf46bc738fff2d9"} Nov 22 12:12:50 crc kubenswrapper[4772]: I1122 12:12:50.693176 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ee20ddb057a774cd771a0e55679fc126be6bcf86b7f81f2adf46bc738fff2d9" Nov 22 12:12:50 crc kubenswrapper[4772]: I1122 12:12:50.693759 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-dvsv9" Nov 22 12:12:57 crc kubenswrapper[4772]: I1122 12:12:57.614363 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-4946-account-create-fk7nf"] Nov 22 12:12:57 crc kubenswrapper[4772]: E1122 12:12:57.616180 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7afba77-d20a-40cd-8530-37d6aea1e247" containerName="mariadb-database-create" Nov 22 12:12:57 crc kubenswrapper[4772]: I1122 12:12:57.616204 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7afba77-d20a-40cd-8530-37d6aea1e247" containerName="mariadb-database-create" Nov 22 12:12:57 crc kubenswrapper[4772]: I1122 12:12:57.617351 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7afba77-d20a-40cd-8530-37d6aea1e247" containerName="mariadb-database-create" Nov 22 12:12:57 crc kubenswrapper[4772]: I1122 12:12:57.618532 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-4946-account-create-fk7nf" Nov 22 12:12:57 crc kubenswrapper[4772]: I1122 12:12:57.621756 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Nov 22 12:12:57 crc kubenswrapper[4772]: I1122 12:12:57.642863 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-4946-account-create-fk7nf"] Nov 22 12:12:57 crc kubenswrapper[4772]: I1122 12:12:57.729412 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dq4tq\" (UniqueName: \"kubernetes.io/projected/2fb7733e-8af3-4d1b-a33c-a5a45ce5a3ca-kube-api-access-dq4tq\") pod \"cinder-4946-account-create-fk7nf\" (UID: \"2fb7733e-8af3-4d1b-a33c-a5a45ce5a3ca\") " pod="openstack/cinder-4946-account-create-fk7nf" Nov 22 12:12:57 crc kubenswrapper[4772]: I1122 12:12:57.833591 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dq4tq\" (UniqueName: \"kubernetes.io/projected/2fb7733e-8af3-4d1b-a33c-a5a45ce5a3ca-kube-api-access-dq4tq\") pod \"cinder-4946-account-create-fk7nf\" (UID: \"2fb7733e-8af3-4d1b-a33c-a5a45ce5a3ca\") " pod="openstack/cinder-4946-account-create-fk7nf" Nov 22 12:12:57 crc kubenswrapper[4772]: I1122 12:12:57.856028 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dq4tq\" (UniqueName: \"kubernetes.io/projected/2fb7733e-8af3-4d1b-a33c-a5a45ce5a3ca-kube-api-access-dq4tq\") pod \"cinder-4946-account-create-fk7nf\" (UID: \"2fb7733e-8af3-4d1b-a33c-a5a45ce5a3ca\") " pod="openstack/cinder-4946-account-create-fk7nf" Nov 22 12:12:57 crc kubenswrapper[4772]: I1122 12:12:57.947553 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-4946-account-create-fk7nf" Nov 22 12:12:58 crc kubenswrapper[4772]: I1122 12:12:58.460152 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-4946-account-create-fk7nf"] Nov 22 12:12:58 crc kubenswrapper[4772]: I1122 12:12:58.771086 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-4946-account-create-fk7nf" event={"ID":"2fb7733e-8af3-4d1b-a33c-a5a45ce5a3ca","Type":"ContainerStarted","Data":"c7908ab27ab2525cd394455e4e8089e30bd929a25cfe430274143a57b4c8c36f"} Nov 22 12:12:58 crc kubenswrapper[4772]: I1122 12:12:58.771454 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-4946-account-create-fk7nf" event={"ID":"2fb7733e-8af3-4d1b-a33c-a5a45ce5a3ca","Type":"ContainerStarted","Data":"1cc7a289710c4c8d6a6eb4a1d5e9b81bb187972631c5409919096678e87a93a9"} Nov 22 12:12:58 crc kubenswrapper[4772]: I1122 12:12:58.797823 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-4946-account-create-fk7nf" podStartSLOduration=1.797794225 podStartE2EDuration="1.797794225s" podCreationTimestamp="2025-11-22 12:12:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:12:58.788286368 +0000 UTC m=+5699.027730882" watchObservedRunningTime="2025-11-22 12:12:58.797794225 +0000 UTC m=+5699.037238719" Nov 22 12:12:59 crc kubenswrapper[4772]: I1122 12:12:59.783414 4772 generic.go:334] "Generic (PLEG): container finished" podID="2fb7733e-8af3-4d1b-a33c-a5a45ce5a3ca" containerID="c7908ab27ab2525cd394455e4e8089e30bd929a25cfe430274143a57b4c8c36f" exitCode=0 Nov 22 12:12:59 crc kubenswrapper[4772]: I1122 12:12:59.783469 4772 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-4946-account-create-fk7nf" event={"ID":"2fb7733e-8af3-4d1b-a33c-a5a45ce5a3ca","Type":"ContainerDied","Data":"c7908ab27ab2525cd394455e4e8089e30bd929a25cfe430274143a57b4c8c36f"} Nov 22 12:13:01 crc kubenswrapper[4772]: I1122 12:13:01.171686 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-4946-account-create-fk7nf" Nov 22 12:13:01 crc kubenswrapper[4772]: I1122 12:13:01.311286 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dq4tq\" (UniqueName: \"kubernetes.io/projected/2fb7733e-8af3-4d1b-a33c-a5a45ce5a3ca-kube-api-access-dq4tq\") pod \"2fb7733e-8af3-4d1b-a33c-a5a45ce5a3ca\" (UID: \"2fb7733e-8af3-4d1b-a33c-a5a45ce5a3ca\") " Nov 22 12:13:01 crc kubenswrapper[4772]: I1122 12:13:01.318078 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fb7733e-8af3-4d1b-a33c-a5a45ce5a3ca-kube-api-access-dq4tq" (OuterVolumeSpecName: "kube-api-access-dq4tq") pod "2fb7733e-8af3-4d1b-a33c-a5a45ce5a3ca" (UID: "2fb7733e-8af3-4d1b-a33c-a5a45ce5a3ca"). InnerVolumeSpecName "kube-api-access-dq4tq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:13:01 crc kubenswrapper[4772]: I1122 12:13:01.413025 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dq4tq\" (UniqueName: \"kubernetes.io/projected/2fb7733e-8af3-4d1b-a33c-a5a45ce5a3ca-kube-api-access-dq4tq\") on node \"crc\" DevicePath \"\"" Nov 22 12:13:01 crc kubenswrapper[4772]: I1122 12:13:01.814395 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-4946-account-create-fk7nf" event={"ID":"2fb7733e-8af3-4d1b-a33c-a5a45ce5a3ca","Type":"ContainerDied","Data":"1cc7a289710c4c8d6a6eb4a1d5e9b81bb187972631c5409919096678e87a93a9"} Nov 22 12:13:01 crc kubenswrapper[4772]: I1122 12:13:01.814460 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1cc7a289710c4c8d6a6eb4a1d5e9b81bb187972631c5409919096678e87a93a9" Nov 22 12:13:01 crc kubenswrapper[4772]: I1122 12:13:01.814535 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-4946-account-create-fk7nf" Nov 22 12:13:02 crc kubenswrapper[4772]: I1122 12:13:02.746518 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-hwv2m"] Nov 22 12:13:02 crc kubenswrapper[4772]: E1122 12:13:02.747489 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fb7733e-8af3-4d1b-a33c-a5a45ce5a3ca" containerName="mariadb-account-create" Nov 22 12:13:02 crc kubenswrapper[4772]: I1122 12:13:02.747508 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fb7733e-8af3-4d1b-a33c-a5a45ce5a3ca" containerName="mariadb-account-create" Nov 22 12:13:02 crc kubenswrapper[4772]: I1122 12:13:02.747715 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fb7733e-8af3-4d1b-a33c-a5a45ce5a3ca" containerName="mariadb-account-create" Nov 22 12:13:02 crc kubenswrapper[4772]: I1122 12:13:02.748490 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-hwv2m" Nov 22 12:13:02 crc kubenswrapper[4772]: I1122 12:13:02.751173 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-z7vsh" Nov 22 12:13:02 crc kubenswrapper[4772]: I1122 12:13:02.751370 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 22 12:13:02 crc kubenswrapper[4772]: I1122 12:13:02.752332 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 22 12:13:02 crc kubenswrapper[4772]: I1122 12:13:02.755120 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-hwv2m"] Nov 22 12:13:02 crc kubenswrapper[4772]: I1122 12:13:02.840284 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7b03d07-82d5-41b8-852c-79757a1f76c2-combined-ca-bundle\") pod \"cinder-db-sync-hwv2m\" (UID: \"f7b03d07-82d5-41b8-852c-79757a1f76c2\") " pod="openstack/cinder-db-sync-hwv2m" Nov 22 12:13:02 crc kubenswrapper[4772]: I1122 12:13:02.840374 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f7b03d07-82d5-41b8-852c-79757a1f76c2-db-sync-config-data\") pod \"cinder-db-sync-hwv2m\" (UID: \"f7b03d07-82d5-41b8-852c-79757a1f76c2\") " pod="openstack/cinder-db-sync-hwv2m" Nov 22 12:13:02 crc kubenswrapper[4772]: I1122 12:13:02.840481 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7b03d07-82d5-41b8-852c-79757a1f76c2-scripts\") pod \"cinder-db-sync-hwv2m\" (UID: \"f7b03d07-82d5-41b8-852c-79757a1f76c2\") " pod="openstack/cinder-db-sync-hwv2m" Nov 22 12:13:02 crc kubenswrapper[4772]: I1122 12:13:02.840514 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7b03d07-82d5-41b8-852c-79757a1f76c2-config-data\") pod \"cinder-db-sync-hwv2m\" (UID: \"f7b03d07-82d5-41b8-852c-79757a1f76c2\") " pod="openstack/cinder-db-sync-hwv2m" Nov 22 12:13:02 crc kubenswrapper[4772]: I1122 12:13:02.840546 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f7b03d07-82d5-41b8-852c-79757a1f76c2-etc-machine-id\") pod \"cinder-db-sync-hwv2m\" (UID: \"f7b03d07-82d5-41b8-852c-79757a1f76c2\") " pod="openstack/cinder-db-sync-hwv2m" Nov 22 12:13:02 crc kubenswrapper[4772]: I1122 12:13:02.840905 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt24t\" (UniqueName: \"kubernetes.io/projected/f7b03d07-82d5-41b8-852c-79757a1f76c2-kube-api-access-wt24t\") pod \"cinder-db-sync-hwv2m\" (UID: \"f7b03d07-82d5-41b8-852c-79757a1f76c2\") " pod="openstack/cinder-db-sync-hwv2m" Nov 22 12:13:02 crc kubenswrapper[4772]: I1122 12:13:02.943573 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wt24t\" (UniqueName: \"kubernetes.io/projected/f7b03d07-82d5-41b8-852c-79757a1f76c2-kube-api-access-wt24t\") pod \"cinder-db-sync-hwv2m\" (UID: \"f7b03d07-82d5-41b8-852c-79757a1f76c2\") " pod="openstack/cinder-db-sync-hwv2m" Nov 22 12:13:02 crc kubenswrapper[4772]: I1122 12:13:02.943646 4772 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7b03d07-82d5-41b8-852c-79757a1f76c2-combined-ca-bundle\") pod \"cinder-db-sync-hwv2m\" (UID: \"f7b03d07-82d5-41b8-852c-79757a1f76c2\") " pod="openstack/cinder-db-sync-hwv2m" Nov 22 12:13:02 crc kubenswrapper[4772]: I1122 12:13:02.943685 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f7b03d07-82d5-41b8-852c-79757a1f76c2-db-sync-config-data\") pod \"cinder-db-sync-hwv2m\" (UID: \"f7b03d07-82d5-41b8-852c-79757a1f76c2\") " pod="openstack/cinder-db-sync-hwv2m" Nov 22 12:13:02 crc kubenswrapper[4772]: I1122 12:13:02.943729 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7b03d07-82d5-41b8-852c-79757a1f76c2-scripts\") pod \"cinder-db-sync-hwv2m\" (UID: \"f7b03d07-82d5-41b8-852c-79757a1f76c2\") " pod="openstack/cinder-db-sync-hwv2m" Nov 22 12:13:02 crc kubenswrapper[4772]: I1122 12:13:02.943753 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7b03d07-82d5-41b8-852c-79757a1f76c2-config-data\") pod \"cinder-db-sync-hwv2m\" (UID: \"f7b03d07-82d5-41b8-852c-79757a1f76c2\") " pod="openstack/cinder-db-sync-hwv2m" Nov 22 12:13:02 crc kubenswrapper[4772]: I1122 12:13:02.943778 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f7b03d07-82d5-41b8-852c-79757a1f76c2-etc-machine-id\") pod \"cinder-db-sync-hwv2m\" (UID: \"f7b03d07-82d5-41b8-852c-79757a1f76c2\") " pod="openstack/cinder-db-sync-hwv2m" Nov 22 12:13:02 crc kubenswrapper[4772]: I1122 12:13:02.943916 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f7b03d07-82d5-41b8-852c-79757a1f76c2-etc-machine-id\") pod \"cinder-db-sync-hwv2m\" (UID: \"f7b03d07-82d5-41b8-852c-79757a1f76c2\") " pod="openstack/cinder-db-sync-hwv2m" Nov 22 12:13:02 crc kubenswrapper[4772]: I1122 12:13:02.951521 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7b03d07-82d5-41b8-852c-79757a1f76c2-scripts\") pod \"cinder-db-sync-hwv2m\" (UID: \"f7b03d07-82d5-41b8-852c-79757a1f76c2\") " pod="openstack/cinder-db-sync-hwv2m" Nov 22 12:13:02 crc kubenswrapper[4772]: I1122 12:13:02.951521 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f7b03d07-82d5-41b8-852c-79757a1f76c2-db-sync-config-data\") pod \"cinder-db-sync-hwv2m\" (UID: \"f7b03d07-82d5-41b8-852c-79757a1f76c2\") " pod="openstack/cinder-db-sync-hwv2m" Nov 22 12:13:02 crc kubenswrapper[4772]: I1122 12:13:02.953762 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7b03d07-82d5-41b8-852c-79757a1f76c2-combined-ca-bundle\") pod \"cinder-db-sync-hwv2m\" (UID: \"f7b03d07-82d5-41b8-852c-79757a1f76c2\") " pod="openstack/cinder-db-sync-hwv2m" Nov 22 12:13:02 crc kubenswrapper[4772]: I1122 12:13:02.959571 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7b03d07-82d5-41b8-852c-79757a1f76c2-config-data\") pod \"cinder-db-sync-hwv2m\" (UID: \"f7b03d07-82d5-41b8-852c-79757a1f76c2\") " 
pod="openstack/cinder-db-sync-hwv2m" Nov 22 12:13:02 crc kubenswrapper[4772]: I1122 12:13:02.965233 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wt24t\" (UniqueName: \"kubernetes.io/projected/f7b03d07-82d5-41b8-852c-79757a1f76c2-kube-api-access-wt24t\") pod \"cinder-db-sync-hwv2m\" (UID: \"f7b03d07-82d5-41b8-852c-79757a1f76c2\") " pod="openstack/cinder-db-sync-hwv2m" Nov 22 12:13:03 crc kubenswrapper[4772]: I1122 12:13:03.064101 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-hwv2m" Nov 22 12:13:03 crc kubenswrapper[4772]: I1122 12:13:03.543834 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-hwv2m"] Nov 22 12:13:03 crc kubenswrapper[4772]: W1122 12:13:03.551766 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7b03d07_82d5_41b8_852c_79757a1f76c2.slice/crio-fd1fd16d9eeb6082929a72a8a01d8d11345b7e2303fe6320b9aecdb610957382 WatchSource:0}: Error finding container fd1fd16d9eeb6082929a72a8a01d8d11345b7e2303fe6320b9aecdb610957382: Status 404 returned error can't find the container with id fd1fd16d9eeb6082929a72a8a01d8d11345b7e2303fe6320b9aecdb610957382 Nov 22 12:13:03 crc kubenswrapper[4772]: I1122 12:13:03.835589 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-hwv2m" event={"ID":"f7b03d07-82d5-41b8-852c-79757a1f76c2","Type":"ContainerStarted","Data":"fd1fd16d9eeb6082929a72a8a01d8d11345b7e2303fe6320b9aecdb610957382"} Nov 22 12:13:04 crc kubenswrapper[4772]: I1122 12:13:04.851289 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-hwv2m" event={"ID":"f7b03d07-82d5-41b8-852c-79757a1f76c2","Type":"ContainerStarted","Data":"12c7e2698d79fbef5b57f5a9d40db2c7289cf3c1463271b0a5d1bb124b68a498"} Nov 22 12:13:07 crc kubenswrapper[4772]: I1122 12:13:07.883436 4772 generic.go:334] "Generic (PLEG): container finished" podID="f7b03d07-82d5-41b8-852c-79757a1f76c2" containerID="12c7e2698d79fbef5b57f5a9d40db2c7289cf3c1463271b0a5d1bb124b68a498" exitCode=0 Nov 22 12:13:07 crc kubenswrapper[4772]: I1122 12:13:07.883527 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-hwv2m" event={"ID":"f7b03d07-82d5-41b8-852c-79757a1f76c2","Type":"ContainerDied","Data":"12c7e2698d79fbef5b57f5a9d40db2c7289cf3c1463271b0a5d1bb124b68a498"} Nov 22 12:13:09 crc kubenswrapper[4772]: I1122 12:13:09.288415 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-hwv2m" Nov 22 12:13:09 crc kubenswrapper[4772]: I1122 12:13:09.379104 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7b03d07-82d5-41b8-852c-79757a1f76c2-combined-ca-bundle\") pod \"f7b03d07-82d5-41b8-852c-79757a1f76c2\" (UID: \"f7b03d07-82d5-41b8-852c-79757a1f76c2\") " Nov 22 12:13:09 crc kubenswrapper[4772]: I1122 12:13:09.379185 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f7b03d07-82d5-41b8-852c-79757a1f76c2-db-sync-config-data\") pod \"f7b03d07-82d5-41b8-852c-79757a1f76c2\" (UID: \"f7b03d07-82d5-41b8-852c-79757a1f76c2\") " Nov 22 12:13:09 crc kubenswrapper[4772]: I1122 12:13:09.379223 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7b03d07-82d5-41b8-852c-79757a1f76c2-scripts\") pod \"f7b03d07-82d5-41b8-852c-79757a1f76c2\" (UID: \"f7b03d07-82d5-41b8-852c-79757a1f76c2\") " Nov 22 12:13:09 crc kubenswrapper[4772]: I1122 12:13:09.379351 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f7b03d07-82d5-41b8-852c-79757a1f76c2-etc-machine-id\") pod \"f7b03d07-82d5-41b8-852c-79757a1f76c2\" (UID: \"f7b03d07-82d5-41b8-852c-79757a1f76c2\") " Nov 22 12:13:09 crc kubenswrapper[4772]: I1122 12:13:09.379405 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wt24t\" (UniqueName: \"kubernetes.io/projected/f7b03d07-82d5-41b8-852c-79757a1f76c2-kube-api-access-wt24t\") pod \"f7b03d07-82d5-41b8-852c-79757a1f76c2\" (UID: \"f7b03d07-82d5-41b8-852c-79757a1f76c2\") " Nov 22 12:13:09 crc kubenswrapper[4772]: I1122 12:13:09.379428 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7b03d07-82d5-41b8-852c-79757a1f76c2-config-data\") pod \"f7b03d07-82d5-41b8-852c-79757a1f76c2\" (UID: \"f7b03d07-82d5-41b8-852c-79757a1f76c2\") " Nov 22 12:13:09 crc kubenswrapper[4772]: I1122 12:13:09.379481 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7b03d07-82d5-41b8-852c-79757a1f76c2-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "f7b03d07-82d5-41b8-852c-79757a1f76c2" (UID: "f7b03d07-82d5-41b8-852c-79757a1f76c2"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 12:13:09 crc kubenswrapper[4772]: I1122 12:13:09.379859 4772 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f7b03d07-82d5-41b8-852c-79757a1f76c2-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 22 12:13:09 crc kubenswrapper[4772]: I1122 12:13:09.385508 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7b03d07-82d5-41b8-852c-79757a1f76c2-scripts" (OuterVolumeSpecName: "scripts") pod "f7b03d07-82d5-41b8-852c-79757a1f76c2" (UID: "f7b03d07-82d5-41b8-852c-79757a1f76c2"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:13:09 crc kubenswrapper[4772]: I1122 12:13:09.385779 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7b03d07-82d5-41b8-852c-79757a1f76c2-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "f7b03d07-82d5-41b8-852c-79757a1f76c2" (UID: "f7b03d07-82d5-41b8-852c-79757a1f76c2"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:13:09 crc kubenswrapper[4772]: I1122 12:13:09.386202 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7b03d07-82d5-41b8-852c-79757a1f76c2-kube-api-access-wt24t" (OuterVolumeSpecName: "kube-api-access-wt24t") pod "f7b03d07-82d5-41b8-852c-79757a1f76c2" (UID: "f7b03d07-82d5-41b8-852c-79757a1f76c2"). InnerVolumeSpecName "kube-api-access-wt24t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:13:09 crc kubenswrapper[4772]: I1122 12:13:09.421727 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7b03d07-82d5-41b8-852c-79757a1f76c2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f7b03d07-82d5-41b8-852c-79757a1f76c2" (UID: "f7b03d07-82d5-41b8-852c-79757a1f76c2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:13:09 crc kubenswrapper[4772]: I1122 12:13:09.439207 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7b03d07-82d5-41b8-852c-79757a1f76c2-config-data" (OuterVolumeSpecName: "config-data") pod "f7b03d07-82d5-41b8-852c-79757a1f76c2" (UID: "f7b03d07-82d5-41b8-852c-79757a1f76c2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:13:09 crc kubenswrapper[4772]: I1122 12:13:09.481759 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7b03d07-82d5-41b8-852c-79757a1f76c2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 12:13:09 crc kubenswrapper[4772]: I1122 12:13:09.481798 4772 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f7b03d07-82d5-41b8-852c-79757a1f76c2-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 12:13:09 crc kubenswrapper[4772]: I1122 12:13:09.481807 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7b03d07-82d5-41b8-852c-79757a1f76c2-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 12:13:09 crc kubenswrapper[4772]: I1122 12:13:09.481817 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wt24t\" (UniqueName: \"kubernetes.io/projected/f7b03d07-82d5-41b8-852c-79757a1f76c2-kube-api-access-wt24t\") on node \"crc\" DevicePath \"\"" Nov 22 12:13:09 crc kubenswrapper[4772]: I1122 12:13:09.481830 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7b03d07-82d5-41b8-852c-79757a1f76c2-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 12:13:09 crc kubenswrapper[4772]: I1122 12:13:09.903741 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-hwv2m" event={"ID":"f7b03d07-82d5-41b8-852c-79757a1f76c2","Type":"ContainerDied","Data":"fd1fd16d9eeb6082929a72a8a01d8d11345b7e2303fe6320b9aecdb610957382"} Nov 22 12:13:09 crc kubenswrapper[4772]: I1122 12:13:09.903788 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd1fd16d9eeb6082929a72a8a01d8d11345b7e2303fe6320b9aecdb610957382" Nov 22 12:13:09 crc kubenswrapper[4772]: I1122 12:13:09.903799 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-hwv2m" Nov 22 12:13:10 crc kubenswrapper[4772]: I1122 12:13:10.236008 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f75b897c-6g6hs"] Nov 22 12:13:10 crc kubenswrapper[4772]: E1122 12:13:10.236543 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7b03d07-82d5-41b8-852c-79757a1f76c2" containerName="cinder-db-sync" Nov 22 12:13:10 crc kubenswrapper[4772]: I1122 12:13:10.236562 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7b03d07-82d5-41b8-852c-79757a1f76c2" containerName="cinder-db-sync" Nov 22 12:13:10 crc kubenswrapper[4772]: I1122 12:13:10.236737 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7b03d07-82d5-41b8-852c-79757a1f76c2" containerName="cinder-db-sync" Nov 22 12:13:10 crc kubenswrapper[4772]: I1122 12:13:10.237731 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f75b897c-6g6hs" Nov 22 12:13:10 crc kubenswrapper[4772]: I1122 12:13:10.248598 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f75b897c-6g6hs"] Nov 22 12:13:10 crc kubenswrapper[4772]: I1122 12:13:10.347626 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 22 12:13:10 crc kubenswrapper[4772]: I1122 12:13:10.361462 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 22 12:13:10 crc kubenswrapper[4772]: I1122 12:13:10.361627 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 22 12:13:10 crc kubenswrapper[4772]: I1122 12:13:10.365682 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-z7vsh" Nov 22 12:13:10 crc kubenswrapper[4772]: I1122 12:13:10.365753 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 22 12:13:10 crc kubenswrapper[4772]: I1122 12:13:10.366426 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 22 12:13:10 crc kubenswrapper[4772]: I1122 12:13:10.369487 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 22 12:13:10 crc kubenswrapper[4772]: I1122 12:13:10.401808 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/759669a7-3328-4d4f-b4bb-661824149475-ovsdbserver-sb\") pod \"dnsmasq-dns-5f75b897c-6g6hs\" (UID: \"759669a7-3328-4d4f-b4bb-661824149475\") " pod="openstack/dnsmasq-dns-5f75b897c-6g6hs" Nov 22 12:13:10 crc kubenswrapper[4772]: I1122 12:13:10.401940 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn7q6\" (UniqueName: \"kubernetes.io/projected/759669a7-3328-4d4f-b4bb-661824149475-kube-api-access-mn7q6\") pod \"dnsmasq-dns-5f75b897c-6g6hs\" (UID: \"759669a7-3328-4d4f-b4bb-661824149475\") " pod="openstack/dnsmasq-dns-5f75b897c-6g6hs" Nov 22 12:13:10 crc kubenswrapper[4772]: I1122 12:13:10.401994 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/759669a7-3328-4d4f-b4bb-661824149475-config\") pod \"dnsmasq-dns-5f75b897c-6g6hs\" (UID: \"759669a7-3328-4d4f-b4bb-661824149475\") " pod="openstack/dnsmasq-dns-5f75b897c-6g6hs" Nov 22 12:13:10 crc kubenswrapper[4772]: I1122 12:13:10.402069 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/759669a7-3328-4d4f-b4bb-661824149475-ovsdbserver-nb\") pod \"dnsmasq-dns-5f75b897c-6g6hs\" (UID: \"759669a7-3328-4d4f-b4bb-661824149475\") " pod="openstack/dnsmasq-dns-5f75b897c-6g6hs" Nov 22 12:13:10 crc kubenswrapper[4772]: I1122 12:13:10.402086 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/759669a7-3328-4d4f-b4bb-661824149475-dns-svc\") pod \"dnsmasq-dns-5f75b897c-6g6hs\" (UID: \"759669a7-3328-4d4f-b4bb-661824149475\") " pod="openstack/dnsmasq-dns-5f75b897c-6g6hs" Nov 22 12:13:10 crc kubenswrapper[4772]: I1122 12:13:10.503481 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/310d148b-078c-4ff8-8d8a-12f90fd9c880-config-data-custom\") pod \"cinder-api-0\" (UID: \"310d148b-078c-4ff8-8d8a-12f90fd9c880\") " pod="openstack/cinder-api-0" Nov 22 12:13:10 crc kubenswrapper[4772]: I1122 12:13:10.503585 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/310d148b-078c-4ff8-8d8a-12f90fd9c880-etc-machine-id\") pod \"cinder-api-0\" (UID: \"310d148b-078c-4ff8-8d8a-12f90fd9c880\") " pod="openstack/cinder-api-0" Nov 22 12:13:10 crc kubenswrapper[4772]: I1122 12:13:10.504576 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mn7q6\" (UniqueName: \"kubernetes.io/projected/759669a7-3328-4d4f-b4bb-661824149475-kube-api-access-mn7q6\") pod \"dnsmasq-dns-5f75b897c-6g6hs\" (UID: \"759669a7-3328-4d4f-b4bb-661824149475\") " pod="openstack/dnsmasq-dns-5f75b897c-6g6hs" Nov 22 12:13:10 crc kubenswrapper[4772]: I1122 12:13:10.504998 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/310d148b-078c-4ff8-8d8a-12f90fd9c880-config-data\") pod \"cinder-api-0\" (UID: \"310d148b-078c-4ff8-8d8a-12f90fd9c880\") " pod="openstack/cinder-api-0" Nov 22 12:13:10 crc kubenswrapper[4772]: I1122 12:13:10.505131 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/759669a7-3328-4d4f-b4bb-661824149475-config\") pod \"dnsmasq-dns-5f75b897c-6g6hs\" (UID: \"759669a7-3328-4d4f-b4bb-661824149475\") " pod="openstack/dnsmasq-dns-5f75b897c-6g6hs" Nov 22 12:13:10 crc kubenswrapper[4772]: I1122 12:13:10.505197 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/310d148b-078c-4ff8-8d8a-12f90fd9c880-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"310d148b-078c-4ff8-8d8a-12f90fd9c880\") " pod="openstack/cinder-api-0" Nov 22 12:13:10 crc kubenswrapper[4772]: I1122 12:13:10.505546 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/759669a7-3328-4d4f-b4bb-661824149475-ovsdbserver-nb\") pod \"dnsmasq-dns-5f75b897c-6g6hs\" (UID: \"759669a7-3328-4d4f-b4bb-661824149475\") " pod="openstack/dnsmasq-dns-5f75b897c-6g6hs" Nov 22 12:13:10 crc kubenswrapper[4772]: I1122 12:13:10.505574 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/759669a7-3328-4d4f-b4bb-661824149475-dns-svc\") pod \"dnsmasq-dns-5f75b897c-6g6hs\" (UID: \"759669a7-3328-4d4f-b4bb-661824149475\") " pod="openstack/dnsmasq-dns-5f75b897c-6g6hs" Nov 22 12:13:10 crc kubenswrapper[4772]: I1122 12:13:10.506588 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/759669a7-3328-4d4f-b4bb-661824149475-dns-svc\") pod \"dnsmasq-dns-5f75b897c-6g6hs\" (UID: \"759669a7-3328-4d4f-b4bb-661824149475\") " pod="openstack/dnsmasq-dns-5f75b897c-6g6hs" Nov 22 12:13:10 crc kubenswrapper[4772]: I1122 12:13:10.506615 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8h2c\" (UniqueName: \"kubernetes.io/projected/310d148b-078c-4ff8-8d8a-12f90fd9c880-kube-api-access-d8h2c\") pod \"cinder-api-0\" 
(UID: \"310d148b-078c-4ff8-8d8a-12f90fd9c880\") " pod="openstack/cinder-api-0" Nov 22 12:13:10 crc kubenswrapper[4772]: I1122 12:13:10.506526 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/759669a7-3328-4d4f-b4bb-661824149475-ovsdbserver-nb\") pod \"dnsmasq-dns-5f75b897c-6g6hs\" (UID: \"759669a7-3328-4d4f-b4bb-661824149475\") " pod="openstack/dnsmasq-dns-5f75b897c-6g6hs" Nov 22 12:13:10 crc kubenswrapper[4772]: I1122 12:13:10.506260 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/759669a7-3328-4d4f-b4bb-661824149475-config\") pod \"dnsmasq-dns-5f75b897c-6g6hs\" (UID: \"759669a7-3328-4d4f-b4bb-661824149475\") " pod="openstack/dnsmasq-dns-5f75b897c-6g6hs" Nov 22 12:13:10 crc kubenswrapper[4772]: I1122 12:13:10.506743 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/759669a7-3328-4d4f-b4bb-661824149475-ovsdbserver-sb\") pod \"dnsmasq-dns-5f75b897c-6g6hs\" (UID: \"759669a7-3328-4d4f-b4bb-661824149475\") " pod="openstack/dnsmasq-dns-5f75b897c-6g6hs" Nov 22 12:13:10 crc kubenswrapper[4772]: I1122 12:13:10.506781 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/310d148b-078c-4ff8-8d8a-12f90fd9c880-scripts\") pod \"cinder-api-0\" (UID: \"310d148b-078c-4ff8-8d8a-12f90fd9c880\") " pod="openstack/cinder-api-0" Nov 22 12:13:10 crc kubenswrapper[4772]: I1122 12:13:10.506820 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/310d148b-078c-4ff8-8d8a-12f90fd9c880-logs\") pod \"cinder-api-0\" (UID: \"310d148b-078c-4ff8-8d8a-12f90fd9c880\") " pod="openstack/cinder-api-0" Nov 22 12:13:10 crc kubenswrapper[4772]: I1122 12:13:10.507634 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/759669a7-3328-4d4f-b4bb-661824149475-ovsdbserver-sb\") pod \"dnsmasq-dns-5f75b897c-6g6hs\" (UID: \"759669a7-3328-4d4f-b4bb-661824149475\") " pod="openstack/dnsmasq-dns-5f75b897c-6g6hs" Nov 22 12:13:10 crc kubenswrapper[4772]: I1122 12:13:10.531936 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mn7q6\" (UniqueName: \"kubernetes.io/projected/759669a7-3328-4d4f-b4bb-661824149475-kube-api-access-mn7q6\") pod \"dnsmasq-dns-5f75b897c-6g6hs\" (UID: \"759669a7-3328-4d4f-b4bb-661824149475\") " pod="openstack/dnsmasq-dns-5f75b897c-6g6hs" Nov 22 12:13:10 crc kubenswrapper[4772]: I1122 12:13:10.610096 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f75b897c-6g6hs" Nov 22 12:13:10 crc kubenswrapper[4772]: I1122 12:13:10.610161 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/310d148b-078c-4ff8-8d8a-12f90fd9c880-scripts\") pod \"cinder-api-0\" (UID: \"310d148b-078c-4ff8-8d8a-12f90fd9c880\") " pod="openstack/cinder-api-0" Nov 22 12:13:10 crc kubenswrapper[4772]: I1122 12:13:10.610235 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/310d148b-078c-4ff8-8d8a-12f90fd9c880-logs\") pod \"cinder-api-0\" (UID: \"310d148b-078c-4ff8-8d8a-12f90fd9c880\") " pod="openstack/cinder-api-0" Nov 22 12:13:10 crc kubenswrapper[4772]: I1122 12:13:10.610280 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/310d148b-078c-4ff8-8d8a-12f90fd9c880-config-data-custom\") pod \"cinder-api-0\" (UID: \"310d148b-078c-4ff8-8d8a-12f90fd9c880\") " pod="openstack/cinder-api-0" Nov 22 12:13:10 crc kubenswrapper[4772]: I1122 12:13:10.610400 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/310d148b-078c-4ff8-8d8a-12f90fd9c880-etc-machine-id\") pod \"cinder-api-0\" (UID: \"310d148b-078c-4ff8-8d8a-12f90fd9c880\") " pod="openstack/cinder-api-0" Nov 22 12:13:10 crc kubenswrapper[4772]: I1122 12:13:10.610470 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/310d148b-078c-4ff8-8d8a-12f90fd9c880-config-data\") pod \"cinder-api-0\" (UID: \"310d148b-078c-4ff8-8d8a-12f90fd9c880\") " pod="openstack/cinder-api-0" Nov 22 12:13:10 crc kubenswrapper[4772]: I1122 12:13:10.610557 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/310d148b-078c-4ff8-8d8a-12f90fd9c880-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"310d148b-078c-4ff8-8d8a-12f90fd9c880\") " pod="openstack/cinder-api-0" Nov 22 12:13:10 crc kubenswrapper[4772]: I1122 12:13:10.610660 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8h2c\" (UniqueName: \"kubernetes.io/projected/310d148b-078c-4ff8-8d8a-12f90fd9c880-kube-api-access-d8h2c\") pod \"cinder-api-0\" (UID: \"310d148b-078c-4ff8-8d8a-12f90fd9c880\") " pod="openstack/cinder-api-0" Nov 22 12:13:10 crc kubenswrapper[4772]: I1122 12:13:10.610980 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/310d148b-078c-4ff8-8d8a-12f90fd9c880-etc-machine-id\") pod \"cinder-api-0\" (UID: \"310d148b-078c-4ff8-8d8a-12f90fd9c880\") " pod="openstack/cinder-api-0" Nov 22 12:13:10 crc kubenswrapper[4772]: I1122 12:13:10.611452 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/310d148b-078c-4ff8-8d8a-12f90fd9c880-logs\") pod \"cinder-api-0\" (UID: \"310d148b-078c-4ff8-8d8a-12f90fd9c880\") " pod="openstack/cinder-api-0" Nov 22 12:13:10 crc kubenswrapper[4772]: I1122 12:13:10.615493 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/310d148b-078c-4ff8-8d8a-12f90fd9c880-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"310d148b-078c-4ff8-8d8a-12f90fd9c880\") " 
pod="openstack/cinder-api-0" Nov 22 12:13:10 crc kubenswrapper[4772]: I1122 12:13:10.615840 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/310d148b-078c-4ff8-8d8a-12f90fd9c880-scripts\") pod \"cinder-api-0\" (UID: \"310d148b-078c-4ff8-8d8a-12f90fd9c880\") " pod="openstack/cinder-api-0" Nov 22 12:13:10 crc kubenswrapper[4772]: I1122 12:13:10.621115 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/310d148b-078c-4ff8-8d8a-12f90fd9c880-config-data\") pod \"cinder-api-0\" (UID: \"310d148b-078c-4ff8-8d8a-12f90fd9c880\") " pod="openstack/cinder-api-0" Nov 22 12:13:10 crc kubenswrapper[4772]: I1122 12:13:10.625766 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/310d148b-078c-4ff8-8d8a-12f90fd9c880-config-data-custom\") pod \"cinder-api-0\" (UID: \"310d148b-078c-4ff8-8d8a-12f90fd9c880\") " pod="openstack/cinder-api-0" Nov 22 12:13:10 crc kubenswrapper[4772]: I1122 12:13:10.633330 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8h2c\" (UniqueName: \"kubernetes.io/projected/310d148b-078c-4ff8-8d8a-12f90fd9c880-kube-api-access-d8h2c\") pod \"cinder-api-0\" (UID: \"310d148b-078c-4ff8-8d8a-12f90fd9c880\") " pod="openstack/cinder-api-0" Nov 22 12:13:10 crc kubenswrapper[4772]: I1122 12:13:10.697760 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 22 12:13:11 crc kubenswrapper[4772]: I1122 12:13:11.174403 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f75b897c-6g6hs"] Nov 22 12:13:11 crc kubenswrapper[4772]: I1122 12:13:11.296712 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 22 12:13:11 crc kubenswrapper[4772]: W1122 12:13:11.298794 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod310d148b_078c_4ff8_8d8a_12f90fd9c880.slice/crio-4eb899776d8a53b672306c8c2497d09c29dd7a57072c788a449a1bcf5a383969 WatchSource:0}: Error finding container 4eb899776d8a53b672306c8c2497d09c29dd7a57072c788a449a1bcf5a383969: Status 404 returned error can't find the container with id 4eb899776d8a53b672306c8c2497d09c29dd7a57072c788a449a1bcf5a383969 Nov 22 12:13:11 crc kubenswrapper[4772]: I1122 12:13:11.929897 4772 generic.go:334] "Generic (PLEG): container finished" podID="759669a7-3328-4d4f-b4bb-661824149475" containerID="7144d70742ec6ec629bbc86272ffc69510339c81df202a18dfa826251d72a378" exitCode=0 Nov 22 12:13:11 crc kubenswrapper[4772]: I1122 12:13:11.929945 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f75b897c-6g6hs" event={"ID":"759669a7-3328-4d4f-b4bb-661824149475","Type":"ContainerDied","Data":"7144d70742ec6ec629bbc86272ffc69510339c81df202a18dfa826251d72a378"} Nov 22 12:13:11 crc kubenswrapper[4772]: I1122 12:13:11.930267 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f75b897c-6g6hs" event={"ID":"759669a7-3328-4d4f-b4bb-661824149475","Type":"ContainerStarted","Data":"3f24ff394403b7f3c0276461ac01cf07415db9c2d922cdc004730582da93a53f"} Nov 22 12:13:11 crc kubenswrapper[4772]: I1122 12:13:11.933232 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"310d148b-078c-4ff8-8d8a-12f90fd9c880","Type":"ContainerStarted","Data":"9d6d5774f134c11e38d7ccef20f4a69ec7fdd6645993146cca8997385f87e2aa"} Nov 22 12:13:11 crc kubenswrapper[4772]: I1122 12:13:11.933385 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"310d148b-078c-4ff8-8d8a-12f90fd9c880","Type":"ContainerStarted","Data":"4eb899776d8a53b672306c8c2497d09c29dd7a57072c788a449a1bcf5a383969"} Nov 22 12:13:12 crc kubenswrapper[4772]: I1122 12:13:12.943764 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"310d148b-078c-4ff8-8d8a-12f90fd9c880","Type":"ContainerStarted","Data":"210fe5a88ae34302aa488c93e335869582fd4cd8fb45b9f07e1c657433ce2c44"} Nov 22 12:13:12 crc kubenswrapper[4772]: I1122 12:13:12.944157 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 22 12:13:12 crc kubenswrapper[4772]: I1122 12:13:12.945870 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f75b897c-6g6hs" event={"ID":"759669a7-3328-4d4f-b4bb-661824149475","Type":"ContainerStarted","Data":"67fce81a7e820fcdff71214a30b7a20c77aeda7bc9448955423e69e4aed99c41"} Nov 22 12:13:12 crc kubenswrapper[4772]: I1122 12:13:12.946033 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f75b897c-6g6hs" Nov 22 12:13:12 crc kubenswrapper[4772]: I1122 12:13:12.972869 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=2.9728456640000003 podStartE2EDuration="2.972845664s" podCreationTimestamp="2025-11-22 12:13:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:13:12.962746423 +0000 UTC m=+5713.202190917" watchObservedRunningTime="2025-11-22 12:13:12.972845664 +0000 UTC m=+5713.212290158" Nov 22 12:13:12 crc kubenswrapper[4772]: I1122 12:13:12.984021 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5f75b897c-6g6hs" podStartSLOduration=2.9839999219999997 podStartE2EDuration="2.983999922s" podCreationTimestamp="2025-11-22 12:13:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:13:12.978939326 +0000 UTC m=+5713.218383820" watchObservedRunningTime="2025-11-22 12:13:12.983999922 +0000 UTC m=+5713.223444416" Nov 22 12:13:20 crc kubenswrapper[4772]: I1122 12:13:20.612216 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5f75b897c-6g6hs" Nov 22 12:13:20 crc kubenswrapper[4772]: I1122 12:13:20.671031 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58bb7467f-z97hh"] Nov 22 12:13:20 crc kubenswrapper[4772]: I1122 12:13:20.671339 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58bb7467f-z97hh" podUID="9a28400c-54e4-428e-80ee-28e494a10303" containerName="dnsmasq-dns" containerID="cri-o://4b6ba7dc669a25f90f3515b5a608a72c038b82c343d60e83912421d93a580575" gracePeriod=10 Nov 22 12:13:21 crc kubenswrapper[4772]: I1122 12:13:21.050262 4772 generic.go:334] "Generic (PLEG): container finished" podID="9a28400c-54e4-428e-80ee-28e494a10303" containerID="4b6ba7dc669a25f90f3515b5a608a72c038b82c343d60e83912421d93a580575" exitCode=0 Nov 22 12:13:21 crc kubenswrapper[4772]: I1122 
12:13:21.050666 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58bb7467f-z97hh" event={"ID":"9a28400c-54e4-428e-80ee-28e494a10303","Type":"ContainerDied","Data":"4b6ba7dc669a25f90f3515b5a608a72c038b82c343d60e83912421d93a580575"} Nov 22 12:13:21 crc kubenswrapper[4772]: I1122 12:13:21.313381 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58bb7467f-z97hh" Nov 22 12:13:21 crc kubenswrapper[4772]: I1122 12:13:21.436252 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9a28400c-54e4-428e-80ee-28e494a10303-ovsdbserver-nb\") pod \"9a28400c-54e4-428e-80ee-28e494a10303\" (UID: \"9a28400c-54e4-428e-80ee-28e494a10303\") " Nov 22 12:13:21 crc kubenswrapper[4772]: I1122 12:13:21.436720 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9a28400c-54e4-428e-80ee-28e494a10303-ovsdbserver-sb\") pod \"9a28400c-54e4-428e-80ee-28e494a10303\" (UID: \"9a28400c-54e4-428e-80ee-28e494a10303\") " Nov 22 12:13:21 crc kubenswrapper[4772]: I1122 12:13:21.437452 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gksxm\" (UniqueName: \"kubernetes.io/projected/9a28400c-54e4-428e-80ee-28e494a10303-kube-api-access-gksxm\") pod \"9a28400c-54e4-428e-80ee-28e494a10303\" (UID: \"9a28400c-54e4-428e-80ee-28e494a10303\") " Nov 22 12:13:21 crc kubenswrapper[4772]: I1122 12:13:21.437535 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a28400c-54e4-428e-80ee-28e494a10303-config\") pod \"9a28400c-54e4-428e-80ee-28e494a10303\" (UID: \"9a28400c-54e4-428e-80ee-28e494a10303\") " Nov 22 12:13:21 crc kubenswrapper[4772]: I1122 12:13:21.437795 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9a28400c-54e4-428e-80ee-28e494a10303-dns-svc\") pod \"9a28400c-54e4-428e-80ee-28e494a10303\" (UID: \"9a28400c-54e4-428e-80ee-28e494a10303\") " Nov 22 12:13:21 crc kubenswrapper[4772]: I1122 12:13:21.443295 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a28400c-54e4-428e-80ee-28e494a10303-kube-api-access-gksxm" (OuterVolumeSpecName: "kube-api-access-gksxm") pod "9a28400c-54e4-428e-80ee-28e494a10303" (UID: "9a28400c-54e4-428e-80ee-28e494a10303"). InnerVolumeSpecName "kube-api-access-gksxm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:13:21 crc kubenswrapper[4772]: I1122 12:13:21.490500 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a28400c-54e4-428e-80ee-28e494a10303-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9a28400c-54e4-428e-80ee-28e494a10303" (UID: "9a28400c-54e4-428e-80ee-28e494a10303"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 12:13:21 crc kubenswrapper[4772]: I1122 12:13:21.491383 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a28400c-54e4-428e-80ee-28e494a10303-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9a28400c-54e4-428e-80ee-28e494a10303" (UID: "9a28400c-54e4-428e-80ee-28e494a10303"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 12:13:21 crc kubenswrapper[4772]: I1122 12:13:21.492929 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a28400c-54e4-428e-80ee-28e494a10303-config" (OuterVolumeSpecName: "config") pod "9a28400c-54e4-428e-80ee-28e494a10303" (UID: "9a28400c-54e4-428e-80ee-28e494a10303"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 12:13:21 crc kubenswrapper[4772]: I1122 12:13:21.524215 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a28400c-54e4-428e-80ee-28e494a10303-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9a28400c-54e4-428e-80ee-28e494a10303" (UID: "9a28400c-54e4-428e-80ee-28e494a10303"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 12:13:21 crc kubenswrapper[4772]: I1122 12:13:21.539982 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a28400c-54e4-428e-80ee-28e494a10303-config\") on node \"crc\" DevicePath \"\"" Nov 22 12:13:21 crc kubenswrapper[4772]: I1122 12:13:21.540017 4772 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9a28400c-54e4-428e-80ee-28e494a10303-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 12:13:21 crc kubenswrapper[4772]: I1122 12:13:21.540026 4772 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9a28400c-54e4-428e-80ee-28e494a10303-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 12:13:21 crc kubenswrapper[4772]: I1122 12:13:21.540036 4772 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9a28400c-54e4-428e-80ee-28e494a10303-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 22 12:13:21 crc kubenswrapper[4772]: I1122 12:13:21.540060 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gksxm\" (UniqueName: \"kubernetes.io/projected/9a28400c-54e4-428e-80ee-28e494a10303-kube-api-access-gksxm\") on node \"crc\" DevicePath \"\"" Nov 22 12:13:22 crc kubenswrapper[4772]: I1122 12:13:22.063904 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58bb7467f-z97hh" event={"ID":"9a28400c-54e4-428e-80ee-28e494a10303","Type":"ContainerDied","Data":"f7f714b2d51886dcc433363f7f9fa795952ea2daa03f550b5498609c64046ed5"} Nov 22 12:13:22 crc kubenswrapper[4772]: I1122 12:13:22.063959 4772 scope.go:117] "RemoveContainer" containerID="4b6ba7dc669a25f90f3515b5a608a72c038b82c343d60e83912421d93a580575" Nov 22 12:13:22 crc kubenswrapper[4772]: I1122 12:13:22.064037 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58bb7467f-z97hh" Nov 22 12:13:22 crc kubenswrapper[4772]: I1122 12:13:22.103413 4772 scope.go:117] "RemoveContainer" containerID="fbde4ab41f54bba3c0306628c33c67309cf47bf2b5885c814b855671f2caa6be" Nov 22 12:13:22 crc kubenswrapper[4772]: I1122 12:13:22.135177 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58bb7467f-z97hh"] Nov 22 12:13:22 crc kubenswrapper[4772]: I1122 12:13:22.144347 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58bb7467f-z97hh"] Nov 22 12:13:22 crc kubenswrapper[4772]: I1122 12:13:22.269407 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 22 12:13:22 crc kubenswrapper[4772]: I1122 12:13:22.269664 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="442f4ec5-4bf2-475d-85c0-fddd99d233d7" containerName="nova-cell0-conductor-conductor" containerID="cri-o://6d0a8f6cd2301e34cd099092d2d30d82dec905a57947b14494d3fffac45b7a18" gracePeriod=30 Nov 22 12:13:22 crc kubenswrapper[4772]: I1122 12:13:22.285761 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 12:13:22 crc kubenswrapper[4772]: I1122 12:13:22.286385 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="f3902926-0afb-4b6b-bd40-2a63cea31961" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://5f13128f4f1030ee3c875872b042914ce4ca921ce771380dd581329dd9be8ec2" gracePeriod=30 Nov 22 12:13:22 crc kubenswrapper[4772]: I1122 12:13:22.300589 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 12:13:22 crc kubenswrapper[4772]: I1122 12:13:22.303519 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="83115fda-d180-46ed-b9f0-60ad3bfb6707" containerName="nova-metadata-log" containerID="cri-o://7e5d5caddb270fb47b5513c032978a58b5504de9fa7da3749634b2d16148ae19" gracePeriod=30 Nov 22 12:13:22 crc kubenswrapper[4772]: I1122 12:13:22.303702 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="83115fda-d180-46ed-b9f0-60ad3bfb6707" containerName="nova-metadata-metadata" containerID="cri-o://82e6c89ff602781316419a844ba22064501e1d2072843c58b96d945dcd2c3e58" gracePeriod=30 Nov 22 12:13:22 crc kubenswrapper[4772]: I1122 12:13:22.318522 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 22 12:13:22 crc kubenswrapper[4772]: I1122 12:13:22.319352 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="48d3f16f-387d-4fd0-8e35-3512f30a5f63" containerName="nova-api-log" containerID="cri-o://87b0fc08a60a1f07de880eaa6a20ebe659591cad3d710450af8e4ce4bf0cc8f9" gracePeriod=30 Nov 22 12:13:22 crc kubenswrapper[4772]: I1122 12:13:22.319872 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="48d3f16f-387d-4fd0-8e35-3512f30a5f63" containerName="nova-api-api" containerID="cri-o://684450095b0d0da710639dc42db159d91a0058f330d8a821df53a53a00ddad84" gracePeriod=30 Nov 22 12:13:22 crc kubenswrapper[4772]: I1122 12:13:22.342496 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 12:13:22 crc kubenswrapper[4772]: I1122 12:13:22.348770 4772 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="0979b32f-aa79-489a-b5ec-ccfa90e847af" containerName="nova-scheduler-scheduler" containerID="cri-o://4541144229aa2f85741423a8ae44744ac28064cb41933883dad1026122c0e62f" gracePeriod=30 Nov 22 12:13:22 crc kubenswrapper[4772]: E1122 12:13:22.697472 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4541144229aa2f85741423a8ae44744ac28064cb41933883dad1026122c0e62f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 12:13:22 crc kubenswrapper[4772]: E1122 12:13:22.699149 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4541144229aa2f85741423a8ae44744ac28064cb41933883dad1026122c0e62f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 12:13:22 crc kubenswrapper[4772]: E1122 12:13:22.700459 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4541144229aa2f85741423a8ae44744ac28064cb41933883dad1026122c0e62f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 12:13:22 crc kubenswrapper[4772]: E1122 12:13:22.700508 4772 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="0979b32f-aa79-489a-b5ec-ccfa90e847af" containerName="nova-scheduler-scheduler" Nov 22 12:13:22 crc kubenswrapper[4772]: I1122 12:13:22.999333 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Nov 22 12:13:23 crc kubenswrapper[4772]: I1122 12:13:23.086457 4772 generic.go:334] "Generic (PLEG): container finished" podID="83115fda-d180-46ed-b9f0-60ad3bfb6707" containerID="7e5d5caddb270fb47b5513c032978a58b5504de9fa7da3749634b2d16148ae19" exitCode=143 Nov 22 12:13:23 crc kubenswrapper[4772]: I1122 12:13:23.086558 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"83115fda-d180-46ed-b9f0-60ad3bfb6707","Type":"ContainerDied","Data":"7e5d5caddb270fb47b5513c032978a58b5504de9fa7da3749634b2d16148ae19"} Nov 22 12:13:23 crc kubenswrapper[4772]: I1122 12:13:23.101546 4772 generic.go:334] "Generic (PLEG): container finished" podID="f3902926-0afb-4b6b-bd40-2a63cea31961" containerID="5f13128f4f1030ee3c875872b042914ce4ca921ce771380dd581329dd9be8ec2" exitCode=0 Nov 22 12:13:23 crc kubenswrapper[4772]: I1122 12:13:23.101636 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f3902926-0afb-4b6b-bd40-2a63cea31961","Type":"ContainerDied","Data":"5f13128f4f1030ee3c875872b042914ce4ca921ce771380dd581329dd9be8ec2"} Nov 22 12:13:23 crc kubenswrapper[4772]: I1122 12:13:23.104164 4772 generic.go:334] "Generic (PLEG): container finished" podID="48d3f16f-387d-4fd0-8e35-3512f30a5f63" containerID="87b0fc08a60a1f07de880eaa6a20ebe659591cad3d710450af8e4ce4bf0cc8f9" exitCode=143 Nov 22 12:13:23 crc kubenswrapper[4772]: I1122 12:13:23.104189 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"48d3f16f-387d-4fd0-8e35-3512f30a5f63","Type":"ContainerDied","Data":"87b0fc08a60a1f07de880eaa6a20ebe659591cad3d710450af8e4ce4bf0cc8f9"} Nov 22 12:13:23 crc kubenswrapper[4772]: I1122 12:13:23.221072 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 22 12:13:23 crc kubenswrapper[4772]: I1122 12:13:23.272166 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3902926-0afb-4b6b-bd40-2a63cea31961-combined-ca-bundle\") pod \"f3902926-0afb-4b6b-bd40-2a63cea31961\" (UID: \"f3902926-0afb-4b6b-bd40-2a63cea31961\") " Nov 22 12:13:23 crc kubenswrapper[4772]: I1122 12:13:23.272214 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrv6x\" (UniqueName: \"kubernetes.io/projected/f3902926-0afb-4b6b-bd40-2a63cea31961-kube-api-access-vrv6x\") pod \"f3902926-0afb-4b6b-bd40-2a63cea31961\" (UID: \"f3902926-0afb-4b6b-bd40-2a63cea31961\") " Nov 22 12:13:23 crc kubenswrapper[4772]: I1122 12:13:23.272308 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3902926-0afb-4b6b-bd40-2a63cea31961-config-data\") pod \"f3902926-0afb-4b6b-bd40-2a63cea31961\" (UID: \"f3902926-0afb-4b6b-bd40-2a63cea31961\") " Nov 22 12:13:23 crc kubenswrapper[4772]: I1122 12:13:23.277626 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3902926-0afb-4b6b-bd40-2a63cea31961-kube-api-access-vrv6x" (OuterVolumeSpecName: "kube-api-access-vrv6x") pod "f3902926-0afb-4b6b-bd40-2a63cea31961" (UID: "f3902926-0afb-4b6b-bd40-2a63cea31961"). InnerVolumeSpecName "kube-api-access-vrv6x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:13:23 crc kubenswrapper[4772]: I1122 12:13:23.303640 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3902926-0afb-4b6b-bd40-2a63cea31961-config-data" (OuterVolumeSpecName: "config-data") pod "f3902926-0afb-4b6b-bd40-2a63cea31961" (UID: "f3902926-0afb-4b6b-bd40-2a63cea31961"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:13:23 crc kubenswrapper[4772]: I1122 12:13:23.316347 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3902926-0afb-4b6b-bd40-2a63cea31961-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f3902926-0afb-4b6b-bd40-2a63cea31961" (UID: "f3902926-0afb-4b6b-bd40-2a63cea31961"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:13:23 crc kubenswrapper[4772]: I1122 12:13:23.374274 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3902926-0afb-4b6b-bd40-2a63cea31961-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 12:13:23 crc kubenswrapper[4772]: I1122 12:13:23.374324 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3902926-0afb-4b6b-bd40-2a63cea31961-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 12:13:23 crc kubenswrapper[4772]: I1122 12:13:23.374338 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrv6x\" (UniqueName: \"kubernetes.io/projected/f3902926-0afb-4b6b-bd40-2a63cea31961-kube-api-access-vrv6x\") on node \"crc\" DevicePath \"\"" Nov 22 12:13:23 crc kubenswrapper[4772]: I1122 12:13:23.425793 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a28400c-54e4-428e-80ee-28e494a10303" path="/var/lib/kubelet/pods/9a28400c-54e4-428e-80ee-28e494a10303/volumes" Nov 22 12:13:24 crc kubenswrapper[4772]: I1122 12:13:24.116844 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 22 12:13:24 crc kubenswrapper[4772]: I1122 12:13:24.116952 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f3902926-0afb-4b6b-bd40-2a63cea31961","Type":"ContainerDied","Data":"c60fb6298072a0995ecc14b35a78062ee48f38cf8fee1114bf3dfd23b0b05997"} Nov 22 12:13:24 crc kubenswrapper[4772]: I1122 12:13:24.117079 4772 scope.go:117] "RemoveContainer" containerID="5f13128f4f1030ee3c875872b042914ce4ca921ce771380dd581329dd9be8ec2" Nov 22 12:13:24 crc kubenswrapper[4772]: I1122 12:13:24.126245 4772 generic.go:334] "Generic (PLEG): container finished" podID="442f4ec5-4bf2-475d-85c0-fddd99d233d7" containerID="6d0a8f6cd2301e34cd099092d2d30d82dec905a57947b14494d3fffac45b7a18" exitCode=0 Nov 22 12:13:24 crc kubenswrapper[4772]: I1122 12:13:24.126287 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"442f4ec5-4bf2-475d-85c0-fddd99d233d7","Type":"ContainerDied","Data":"6d0a8f6cd2301e34cd099092d2d30d82dec905a57947b14494d3fffac45b7a18"} Nov 22 12:13:24 crc kubenswrapper[4772]: I1122 12:13:24.126312 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"442f4ec5-4bf2-475d-85c0-fddd99d233d7","Type":"ContainerDied","Data":"ba9b57f6a6826f842bc331d5d90dccb3fcf24709fd3e6d4beef66e9d5a754106"} Nov 22 12:13:24 crc kubenswrapper[4772]: I1122 12:13:24.126323 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba9b57f6a6826f842bc331d5d90dccb3fcf24709fd3e6d4beef66e9d5a754106" Nov 22 12:13:24 crc kubenswrapper[4772]: I1122 12:13:24.150325 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 22 12:13:24 crc kubenswrapper[4772]: I1122 12:13:24.185176 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 12:13:24 crc kubenswrapper[4772]: I1122 12:13:24.189801 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/442f4ec5-4bf2-475d-85c0-fddd99d233d7-combined-ca-bundle\") pod \"442f4ec5-4bf2-475d-85c0-fddd99d233d7\" (UID: \"442f4ec5-4bf2-475d-85c0-fddd99d233d7\") " Nov 22 12:13:24 crc kubenswrapper[4772]: I1122 12:13:24.189886 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/442f4ec5-4bf2-475d-85c0-fddd99d233d7-config-data\") pod \"442f4ec5-4bf2-475d-85c0-fddd99d233d7\" (UID: \"442f4ec5-4bf2-475d-85c0-fddd99d233d7\") " Nov 22 12:13:24 crc kubenswrapper[4772]: I1122 12:13:24.189926 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-224n2\" (UniqueName: \"kubernetes.io/projected/442f4ec5-4bf2-475d-85c0-fddd99d233d7-kube-api-access-224n2\") pod \"442f4ec5-4bf2-475d-85c0-fddd99d233d7\" (UID: \"442f4ec5-4bf2-475d-85c0-fddd99d233d7\") " Nov 22 12:13:24 crc kubenswrapper[4772]: I1122 12:13:24.202233 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 12:13:24 crc kubenswrapper[4772]: I1122 12:13:24.203363 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/442f4ec5-4bf2-475d-85c0-fddd99d233d7-kube-api-access-224n2" (OuterVolumeSpecName: "kube-api-access-224n2") pod "442f4ec5-4bf2-475d-85c0-fddd99d233d7" (UID: "442f4ec5-4bf2-475d-85c0-fddd99d233d7"). InnerVolumeSpecName "kube-api-access-224n2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:13:24 crc kubenswrapper[4772]: I1122 12:13:24.210818 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 12:13:24 crc kubenswrapper[4772]: E1122 12:13:24.214570 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="442f4ec5-4bf2-475d-85c0-fddd99d233d7" containerName="nova-cell0-conductor-conductor" Nov 22 12:13:24 crc kubenswrapper[4772]: I1122 12:13:24.214627 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="442f4ec5-4bf2-475d-85c0-fddd99d233d7" containerName="nova-cell0-conductor-conductor" Nov 22 12:13:24 crc kubenswrapper[4772]: E1122 12:13:24.214649 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3902926-0afb-4b6b-bd40-2a63cea31961" containerName="nova-cell1-novncproxy-novncproxy" Nov 22 12:13:24 crc kubenswrapper[4772]: I1122 12:13:24.214674 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3902926-0afb-4b6b-bd40-2a63cea31961" containerName="nova-cell1-novncproxy-novncproxy" Nov 22 12:13:24 crc kubenswrapper[4772]: E1122 12:13:24.214691 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a28400c-54e4-428e-80ee-28e494a10303" containerName="init" Nov 22 12:13:24 crc kubenswrapper[4772]: I1122 12:13:24.214698 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a28400c-54e4-428e-80ee-28e494a10303" containerName="init" Nov 22 12:13:24 crc kubenswrapper[4772]: E1122 12:13:24.214716 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a28400c-54e4-428e-80ee-28e494a10303" containerName="dnsmasq-dns" Nov 22 12:13:24 crc kubenswrapper[4772]: I1122 12:13:24.214722 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a28400c-54e4-428e-80ee-28e494a10303" containerName="dnsmasq-dns" Nov 22 12:13:24 crc kubenswrapper[4772]: I1122 12:13:24.214942 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="442f4ec5-4bf2-475d-85c0-fddd99d233d7" containerName="nova-cell0-conductor-conductor" Nov 22 12:13:24 crc kubenswrapper[4772]: I1122 12:13:24.214963 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3902926-0afb-4b6b-bd40-2a63cea31961" containerName="nova-cell1-novncproxy-novncproxy" Nov 22 12:13:24 crc kubenswrapper[4772]: I1122 12:13:24.214990 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a28400c-54e4-428e-80ee-28e494a10303" containerName="dnsmasq-dns" Nov 22 12:13:24 crc kubenswrapper[4772]: I1122 12:13:24.215686 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 22 12:13:24 crc kubenswrapper[4772]: I1122 12:13:24.219095 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 22 12:13:24 crc kubenswrapper[4772]: I1122 12:13:24.235015 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/442f4ec5-4bf2-475d-85c0-fddd99d233d7-config-data" (OuterVolumeSpecName: "config-data") pod "442f4ec5-4bf2-475d-85c0-fddd99d233d7" (UID: "442f4ec5-4bf2-475d-85c0-fddd99d233d7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:13:24 crc kubenswrapper[4772]: I1122 12:13:24.236668 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 12:13:24 crc kubenswrapper[4772]: I1122 12:13:24.260332 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/442f4ec5-4bf2-475d-85c0-fddd99d233d7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "442f4ec5-4bf2-475d-85c0-fddd99d233d7" (UID: "442f4ec5-4bf2-475d-85c0-fddd99d233d7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:13:24 crc kubenswrapper[4772]: I1122 12:13:24.292788 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sgc2\" (UniqueName: \"kubernetes.io/projected/b0338152-6fc6-4c47-9f8f-49239851a5d4-kube-api-access-2sgc2\") pod \"nova-cell1-novncproxy-0\" (UID: \"b0338152-6fc6-4c47-9f8f-49239851a5d4\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 12:13:24 crc kubenswrapper[4772]: I1122 12:13:24.292908 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0338152-6fc6-4c47-9f8f-49239851a5d4-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b0338152-6fc6-4c47-9f8f-49239851a5d4\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 12:13:24 crc kubenswrapper[4772]: I1122 12:13:24.292928 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0338152-6fc6-4c47-9f8f-49239851a5d4-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b0338152-6fc6-4c47-9f8f-49239851a5d4\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 12:13:24 crc kubenswrapper[4772]: I1122 12:13:24.293287 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/442f4ec5-4bf2-475d-85c0-fddd99d233d7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 12:13:24 crc kubenswrapper[4772]: I1122 12:13:24.293301 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/442f4ec5-4bf2-475d-85c0-fddd99d233d7-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 12:13:24 crc kubenswrapper[4772]: I1122 12:13:24.293309 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-224n2\" (UniqueName: \"kubernetes.io/projected/442f4ec5-4bf2-475d-85c0-fddd99d233d7-kube-api-access-224n2\") on node \"crc\" DevicePath \"\"" Nov 22 12:13:24 crc kubenswrapper[4772]: I1122 12:13:24.394497 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sgc2\" (UniqueName: \"kubernetes.io/projected/b0338152-6fc6-4c47-9f8f-49239851a5d4-kube-api-access-2sgc2\") pod \"nova-cell1-novncproxy-0\" (UID: \"b0338152-6fc6-4c47-9f8f-49239851a5d4\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 12:13:24 crc kubenswrapper[4772]: I1122 12:13:24.394560 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0338152-6fc6-4c47-9f8f-49239851a5d4-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b0338152-6fc6-4c47-9f8f-49239851a5d4\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 12:13:24 crc kubenswrapper[4772]: I1122 12:13:24.394579 4772 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0338152-6fc6-4c47-9f8f-49239851a5d4-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b0338152-6fc6-4c47-9f8f-49239851a5d4\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 12:13:24 crc kubenswrapper[4772]: I1122 12:13:24.397937 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0338152-6fc6-4c47-9f8f-49239851a5d4-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b0338152-6fc6-4c47-9f8f-49239851a5d4\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 12:13:24 crc kubenswrapper[4772]: I1122 12:13:24.407708 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0338152-6fc6-4c47-9f8f-49239851a5d4-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b0338152-6fc6-4c47-9f8f-49239851a5d4\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 12:13:24 crc kubenswrapper[4772]: I1122 12:13:24.410059 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sgc2\" (UniqueName: \"kubernetes.io/projected/b0338152-6fc6-4c47-9f8f-49239851a5d4-kube-api-access-2sgc2\") pod \"nova-cell1-novncproxy-0\" (UID: \"b0338152-6fc6-4c47-9f8f-49239851a5d4\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 12:13:24 crc kubenswrapper[4772]: I1122 12:13:24.628468 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 22 12:13:25 crc kubenswrapper[4772]: I1122 12:13:25.143506 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 22 12:13:25 crc kubenswrapper[4772]: I1122 12:13:25.148815 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 12:13:25 crc kubenswrapper[4772]: I1122 12:13:25.308790 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 22 12:13:25 crc kubenswrapper[4772]: I1122 12:13:25.323083 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 22 12:13:25 crc kubenswrapper[4772]: I1122 12:13:25.337864 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 22 12:13:25 crc kubenswrapper[4772]: I1122 12:13:25.339134 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 22 12:13:25 crc kubenswrapper[4772]: I1122 12:13:25.342629 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 22 12:13:25 crc kubenswrapper[4772]: I1122 12:13:25.346747 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 22 12:13:25 crc kubenswrapper[4772]: I1122 12:13:25.416857 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4x6hl\" (UniqueName: \"kubernetes.io/projected/1b309be3-8476-41bd-a801-d93e986b8f8e-kube-api-access-4x6hl\") pod \"nova-cell0-conductor-0\" (UID: \"1b309be3-8476-41bd-a801-d93e986b8f8e\") " pod="openstack/nova-cell0-conductor-0" Nov 22 12:13:25 crc kubenswrapper[4772]: I1122 12:13:25.416996 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b309be3-8476-41bd-a801-d93e986b8f8e-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"1b309be3-8476-41bd-a801-d93e986b8f8e\") " pod="openstack/nova-cell0-conductor-0" Nov 22 12:13:25 crc kubenswrapper[4772]: I1122 12:13:25.417069 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b309be3-8476-41bd-a801-d93e986b8f8e-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"1b309be3-8476-41bd-a801-d93e986b8f8e\") " pod="openstack/nova-cell0-conductor-0" Nov 22 12:13:25 crc kubenswrapper[4772]: I1122 12:13:25.429494 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="442f4ec5-4bf2-475d-85c0-fddd99d233d7" path="/var/lib/kubelet/pods/442f4ec5-4bf2-475d-85c0-fddd99d233d7/volumes" Nov 22 12:13:25 crc kubenswrapper[4772]: I1122 12:13:25.431095 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3902926-0afb-4b6b-bd40-2a63cea31961" path="/var/lib/kubelet/pods/f3902926-0afb-4b6b-bd40-2a63cea31961/volumes" Nov 22 12:13:25 crc kubenswrapper[4772]: I1122 12:13:25.466924 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="83115fda-d180-46ed-b9f0-60ad3bfb6707" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.73:8775/\": read tcp 10.217.0.2:58062->10.217.1.73:8775: read: connection reset by peer" Nov 22 12:13:25 crc kubenswrapper[4772]: I1122 12:13:25.467018 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="83115fda-d180-46ed-b9f0-60ad3bfb6707" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.73:8775/\": read tcp 10.217.0.2:58074->10.217.1.73:8775: read: connection reset by peer" Nov 22 12:13:25 crc kubenswrapper[4772]: I1122 12:13:25.518933 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4x6hl\" (UniqueName: \"kubernetes.io/projected/1b309be3-8476-41bd-a801-d93e986b8f8e-kube-api-access-4x6hl\") pod \"nova-cell0-conductor-0\" (UID: \"1b309be3-8476-41bd-a801-d93e986b8f8e\") " pod="openstack/nova-cell0-conductor-0" Nov 22 12:13:25 crc kubenswrapper[4772]: I1122 12:13:25.519111 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b309be3-8476-41bd-a801-d93e986b8f8e-config-data\") pod \"nova-cell0-conductor-0\" (UID: 
\"1b309be3-8476-41bd-a801-d93e986b8f8e\") " pod="openstack/nova-cell0-conductor-0" Nov 22 12:13:25 crc kubenswrapper[4772]: I1122 12:13:25.519177 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b309be3-8476-41bd-a801-d93e986b8f8e-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"1b309be3-8476-41bd-a801-d93e986b8f8e\") " pod="openstack/nova-cell0-conductor-0" Nov 22 12:13:25 crc kubenswrapper[4772]: I1122 12:13:25.525102 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b309be3-8476-41bd-a801-d93e986b8f8e-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"1b309be3-8476-41bd-a801-d93e986b8f8e\") " pod="openstack/nova-cell0-conductor-0" Nov 22 12:13:25 crc kubenswrapper[4772]: I1122 12:13:25.525425 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b309be3-8476-41bd-a801-d93e986b8f8e-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"1b309be3-8476-41bd-a801-d93e986b8f8e\") " pod="openstack/nova-cell0-conductor-0" Nov 22 12:13:25 crc kubenswrapper[4772]: I1122 12:13:25.551169 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4x6hl\" (UniqueName: \"kubernetes.io/projected/1b309be3-8476-41bd-a801-d93e986b8f8e-kube-api-access-4x6hl\") pod \"nova-cell0-conductor-0\" (UID: \"1b309be3-8476-41bd-a801-d93e986b8f8e\") " pod="openstack/nova-cell0-conductor-0" Nov 22 12:13:25 crc kubenswrapper[4772]: I1122 12:13:25.555124 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 22 12:13:25 crc kubenswrapper[4772]: I1122 12:13:25.555391 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="80e0105c-1be9-4fa6-b46f-263e87770142" containerName="nova-cell1-conductor-conductor" containerID="cri-o://1985291c7023a7e51e83b9e00fc9bcfc4ebc7eafe4fdf0876f8ce406446a6f6f" gracePeriod=30 Nov 22 12:13:25 crc kubenswrapper[4772]: I1122 12:13:25.659871 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 22 12:13:25 crc kubenswrapper[4772]: I1122 12:13:25.803578 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 12:13:25 crc kubenswrapper[4772]: I1122 12:13:25.926668 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83115fda-d180-46ed-b9f0-60ad3bfb6707-combined-ca-bundle\") pod \"83115fda-d180-46ed-b9f0-60ad3bfb6707\" (UID: \"83115fda-d180-46ed-b9f0-60ad3bfb6707\") " Nov 22 12:13:25 crc kubenswrapper[4772]: I1122 12:13:25.926760 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7s92j\" (UniqueName: \"kubernetes.io/projected/83115fda-d180-46ed-b9f0-60ad3bfb6707-kube-api-access-7s92j\") pod \"83115fda-d180-46ed-b9f0-60ad3bfb6707\" (UID: \"83115fda-d180-46ed-b9f0-60ad3bfb6707\") " Nov 22 12:13:25 crc kubenswrapper[4772]: I1122 12:13:25.926789 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83115fda-d180-46ed-b9f0-60ad3bfb6707-logs\") pod \"83115fda-d180-46ed-b9f0-60ad3bfb6707\" (UID: \"83115fda-d180-46ed-b9f0-60ad3bfb6707\") " Nov 22 12:13:25 crc kubenswrapper[4772]: I1122 12:13:25.926915 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83115fda-d180-46ed-b9f0-60ad3bfb6707-config-data\") pod \"83115fda-d180-46ed-b9f0-60ad3bfb6707\" (UID: \"83115fda-d180-46ed-b9f0-60ad3bfb6707\") " Nov 22 12:13:25 crc kubenswrapper[4772]: I1122 12:13:25.928354 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83115fda-d180-46ed-b9f0-60ad3bfb6707-logs" (OuterVolumeSpecName: "logs") pod "83115fda-d180-46ed-b9f0-60ad3bfb6707" (UID: "83115fda-d180-46ed-b9f0-60ad3bfb6707"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 12:13:25 crc kubenswrapper[4772]: I1122 12:13:25.956511 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83115fda-d180-46ed-b9f0-60ad3bfb6707-kube-api-access-7s92j" (OuterVolumeSpecName: "kube-api-access-7s92j") pod "83115fda-d180-46ed-b9f0-60ad3bfb6707" (UID: "83115fda-d180-46ed-b9f0-60ad3bfb6707"). InnerVolumeSpecName "kube-api-access-7s92j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:13:25 crc kubenswrapper[4772]: I1122 12:13:25.972170 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83115fda-d180-46ed-b9f0-60ad3bfb6707-config-data" (OuterVolumeSpecName: "config-data") pod "83115fda-d180-46ed-b9f0-60ad3bfb6707" (UID: "83115fda-d180-46ed-b9f0-60ad3bfb6707"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.031737 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83115fda-d180-46ed-b9f0-60ad3bfb6707-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.032066 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7s92j\" (UniqueName: \"kubernetes.io/projected/83115fda-d180-46ed-b9f0-60ad3bfb6707-kube-api-access-7s92j\") on node \"crc\" DevicePath \"\"" Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.032077 4772 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83115fda-d180-46ed-b9f0-60ad3bfb6707-logs\") on node \"crc\" DevicePath \"\"" Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.051850 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83115fda-d180-46ed-b9f0-60ad3bfb6707-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "83115fda-d180-46ed-b9f0-60ad3bfb6707" (UID: "83115fda-d180-46ed-b9f0-60ad3bfb6707"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.133641 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83115fda-d180-46ed-b9f0-60ad3bfb6707-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.138978 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.182341 4772 generic.go:334] "Generic (PLEG): container finished" podID="83115fda-d180-46ed-b9f0-60ad3bfb6707" containerID="82e6c89ff602781316419a844ba22064501e1d2072843c58b96d945dcd2c3e58" exitCode=0 Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.183492 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"83115fda-d180-46ed-b9f0-60ad3bfb6707","Type":"ContainerDied","Data":"82e6c89ff602781316419a844ba22064501e1d2072843c58b96d945dcd2c3e58"} Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.183609 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"83115fda-d180-46ed-b9f0-60ad3bfb6707","Type":"ContainerDied","Data":"78c94eafa4fa5138052a96a735e8e89118af7057d1a376bdd2661391bee6d05c"} Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.183677 4772 scope.go:117] "RemoveContainer" containerID="82e6c89ff602781316419a844ba22064501e1d2072843c58b96d945dcd2c3e58" Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.183876 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.197840 4772 generic.go:334] "Generic (PLEG): container finished" podID="48d3f16f-387d-4fd0-8e35-3512f30a5f63" containerID="684450095b0d0da710639dc42db159d91a0058f330d8a821df53a53a00ddad84" exitCode=0 Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.197933 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"48d3f16f-387d-4fd0-8e35-3512f30a5f63","Type":"ContainerDied","Data":"684450095b0d0da710639dc42db159d91a0058f330d8a821df53a53a00ddad84"} Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.197963 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"48d3f16f-387d-4fd0-8e35-3512f30a5f63","Type":"ContainerDied","Data":"824b7d85294c3c52f8bca76acacf846be74ae8208abaa63e3e045294daa64695"} Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.198069 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.200880 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b0338152-6fc6-4c47-9f8f-49239851a5d4","Type":"ContainerStarted","Data":"b79933320e9a65241b063629db25867d67126c2e37ce2de3e78480c46da67742"} Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.200947 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b0338152-6fc6-4c47-9f8f-49239851a5d4","Type":"ContainerStarted","Data":"dd9a58eee8957ffa1694b72b77f7c48533c370503e7657e7ed48e998f5e453e1"} Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.226534 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.226518972 podStartE2EDuration="2.226518972s" podCreationTimestamp="2025-11-22 12:13:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:13:26.223693152 +0000 UTC m=+5726.463137656" watchObservedRunningTime="2025-11-22 12:13:26.226518972 +0000 UTC m=+5726.465963466" Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.234668 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njnm2\" (UniqueName: \"kubernetes.io/projected/48d3f16f-387d-4fd0-8e35-3512f30a5f63-kube-api-access-njnm2\") pod \"48d3f16f-387d-4fd0-8e35-3512f30a5f63\" (UID: \"48d3f16f-387d-4fd0-8e35-3512f30a5f63\") " Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.234789 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48d3f16f-387d-4fd0-8e35-3512f30a5f63-logs\") pod \"48d3f16f-387d-4fd0-8e35-3512f30a5f63\" (UID: \"48d3f16f-387d-4fd0-8e35-3512f30a5f63\") " Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.234906 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48d3f16f-387d-4fd0-8e35-3512f30a5f63-config-data\") pod \"48d3f16f-387d-4fd0-8e35-3512f30a5f63\" (UID: \"48d3f16f-387d-4fd0-8e35-3512f30a5f63\") " Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.235027 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/48d3f16f-387d-4fd0-8e35-3512f30a5f63-combined-ca-bundle\") pod \"48d3f16f-387d-4fd0-8e35-3512f30a5f63\" (UID: \"48d3f16f-387d-4fd0-8e35-3512f30a5f63\") " Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.236701 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48d3f16f-387d-4fd0-8e35-3512f30a5f63-logs" (OuterVolumeSpecName: "logs") pod "48d3f16f-387d-4fd0-8e35-3512f30a5f63" (UID: "48d3f16f-387d-4fd0-8e35-3512f30a5f63"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.254317 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48d3f16f-387d-4fd0-8e35-3512f30a5f63-kube-api-access-njnm2" (OuterVolumeSpecName: "kube-api-access-njnm2") pod "48d3f16f-387d-4fd0-8e35-3512f30a5f63" (UID: "48d3f16f-387d-4fd0-8e35-3512f30a5f63"). InnerVolumeSpecName "kube-api-access-njnm2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.277223 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48d3f16f-387d-4fd0-8e35-3512f30a5f63-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "48d3f16f-387d-4fd0-8e35-3512f30a5f63" (UID: "48d3f16f-387d-4fd0-8e35-3512f30a5f63"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.302421 4772 scope.go:117] "RemoveContainer" containerID="7e5d5caddb270fb47b5513c032978a58b5504de9fa7da3749634b2d16148ae19" Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.311131 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48d3f16f-387d-4fd0-8e35-3512f30a5f63-config-data" (OuterVolumeSpecName: "config-data") pod "48d3f16f-387d-4fd0-8e35-3512f30a5f63" (UID: "48d3f16f-387d-4fd0-8e35-3512f30a5f63"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.321297 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.323858 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.337553 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48d3f16f-387d-4fd0-8e35-3512f30a5f63-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.337594 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48d3f16f-387d-4fd0-8e35-3512f30a5f63-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.337610 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njnm2\" (UniqueName: \"kubernetes.io/projected/48d3f16f-387d-4fd0-8e35-3512f30a5f63-kube-api-access-njnm2\") on node \"crc\" DevicePath \"\"" Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.337619 4772 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48d3f16f-387d-4fd0-8e35-3512f30a5f63-logs\") on node \"crc\" DevicePath \"\"" Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.365266 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 22 12:13:26 crc kubenswrapper[4772]: E1122 12:13:26.365733 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83115fda-d180-46ed-b9f0-60ad3bfb6707" containerName="nova-metadata-metadata" Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.365751 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="83115fda-d180-46ed-b9f0-60ad3bfb6707" containerName="nova-metadata-metadata" Nov 22 12:13:26 crc kubenswrapper[4772]: E1122 12:13:26.365786 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48d3f16f-387d-4fd0-8e35-3512f30a5f63" containerName="nova-api-api" Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.365794 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="48d3f16f-387d-4fd0-8e35-3512f30a5f63" containerName="nova-api-api" Nov 22 12:13:26 crc kubenswrapper[4772]: E1122 12:13:26.365813 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83115fda-d180-46ed-b9f0-60ad3bfb6707" containerName="nova-metadata-log" Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.365820 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="83115fda-d180-46ed-b9f0-60ad3bfb6707" containerName="nova-metadata-log" Nov 22 12:13:26 crc kubenswrapper[4772]: E1122 12:13:26.365830 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48d3f16f-387d-4fd0-8e35-3512f30a5f63" containerName="nova-api-log" Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.365837 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="48d3f16f-387d-4fd0-8e35-3512f30a5f63" containerName="nova-api-log" Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.366010 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="83115fda-d180-46ed-b9f0-60ad3bfb6707" containerName="nova-metadata-metadata" Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.366024 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="48d3f16f-387d-4fd0-8e35-3512f30a5f63" 
containerName="nova-api-log" Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.366032 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="83115fda-d180-46ed-b9f0-60ad3bfb6707" containerName="nova-metadata-log" Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.366066 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="48d3f16f-387d-4fd0-8e35-3512f30a5f63" containerName="nova-api-api" Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.367073 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.370994 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 22 12:13:26 crc kubenswrapper[4772]: E1122 12:13:26.380969 4772 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83115fda_d180_46ed_b9f0_60ad3bfb6707.slice/crio-78c94eafa4fa5138052a96a735e8e89118af7057d1a376bdd2661391bee6d05c\": RecentStats: unable to find data in memory cache]" Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.398813 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.406733 4772 scope.go:117] "RemoveContainer" containerID="82e6c89ff602781316419a844ba22064501e1d2072843c58b96d945dcd2c3e58" Nov 22 12:13:26 crc kubenswrapper[4772]: E1122 12:13:26.407776 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82e6c89ff602781316419a844ba22064501e1d2072843c58b96d945dcd2c3e58\": container with ID starting with 82e6c89ff602781316419a844ba22064501e1d2072843c58b96d945dcd2c3e58 not found: ID does not exist" containerID="82e6c89ff602781316419a844ba22064501e1d2072843c58b96d945dcd2c3e58" Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.407812 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82e6c89ff602781316419a844ba22064501e1d2072843c58b96d945dcd2c3e58"} err="failed to get container status \"82e6c89ff602781316419a844ba22064501e1d2072843c58b96d945dcd2c3e58\": rpc error: code = NotFound desc = could not find container \"82e6c89ff602781316419a844ba22064501e1d2072843c58b96d945dcd2c3e58\": container with ID starting with 82e6c89ff602781316419a844ba22064501e1d2072843c58b96d945dcd2c3e58 not found: ID does not exist" Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.407834 4772 scope.go:117] "RemoveContainer" containerID="7e5d5caddb270fb47b5513c032978a58b5504de9fa7da3749634b2d16148ae19" Nov 22 12:13:26 crc kubenswrapper[4772]: E1122 12:13:26.415031 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e5d5caddb270fb47b5513c032978a58b5504de9fa7da3749634b2d16148ae19\": container with ID starting with 7e5d5caddb270fb47b5513c032978a58b5504de9fa7da3749634b2d16148ae19 not found: ID does not exist" containerID="7e5d5caddb270fb47b5513c032978a58b5504de9fa7da3749634b2d16148ae19" Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.415154 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e5d5caddb270fb47b5513c032978a58b5504de9fa7da3749634b2d16148ae19"} err="failed to get container status \"7e5d5caddb270fb47b5513c032978a58b5504de9fa7da3749634b2d16148ae19\": 
rpc error: code = NotFound desc = could not find container \"7e5d5caddb270fb47b5513c032978a58b5504de9fa7da3749634b2d16148ae19\": container with ID starting with 7e5d5caddb270fb47b5513c032978a58b5504de9fa7da3749634b2d16148ae19 not found: ID does not exist" Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.415183 4772 scope.go:117] "RemoveContainer" containerID="684450095b0d0da710639dc42db159d91a0058f330d8a821df53a53a00ddad84" Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.440368 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.441745 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa54ea86-921f-4d80-86ac-312754960a02-config-data\") pod \"nova-metadata-0\" (UID: \"fa54ea86-921f-4d80-86ac-312754960a02\") " pod="openstack/nova-metadata-0" Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.441787 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa54ea86-921f-4d80-86ac-312754960a02-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fa54ea86-921f-4d80-86ac-312754960a02\") " pod="openstack/nova-metadata-0" Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.441809 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa54ea86-921f-4d80-86ac-312754960a02-logs\") pod \"nova-metadata-0\" (UID: \"fa54ea86-921f-4d80-86ac-312754960a02\") " pod="openstack/nova-metadata-0" Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.441900 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plcz2\" (UniqueName: \"kubernetes.io/projected/fa54ea86-921f-4d80-86ac-312754960a02-kube-api-access-plcz2\") pod \"nova-metadata-0\" (UID: \"fa54ea86-921f-4d80-86ac-312754960a02\") " pod="openstack/nova-metadata-0" Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.543622 4772 scope.go:117] "RemoveContainer" containerID="87b0fc08a60a1f07de880eaa6a20ebe659591cad3d710450af8e4ce4bf0cc8f9" Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.559082 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plcz2\" (UniqueName: \"kubernetes.io/projected/fa54ea86-921f-4d80-86ac-312754960a02-kube-api-access-plcz2\") pod \"nova-metadata-0\" (UID: \"fa54ea86-921f-4d80-86ac-312754960a02\") " pod="openstack/nova-metadata-0" Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.559298 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa54ea86-921f-4d80-86ac-312754960a02-config-data\") pod \"nova-metadata-0\" (UID: \"fa54ea86-921f-4d80-86ac-312754960a02\") " pod="openstack/nova-metadata-0" Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.559329 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa54ea86-921f-4d80-86ac-312754960a02-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fa54ea86-921f-4d80-86ac-312754960a02\") " pod="openstack/nova-metadata-0" Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.559353 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/fa54ea86-921f-4d80-86ac-312754960a02-logs\") pod \"nova-metadata-0\" (UID: \"fa54ea86-921f-4d80-86ac-312754960a02\") " pod="openstack/nova-metadata-0" Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.560011 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa54ea86-921f-4d80-86ac-312754960a02-logs\") pod \"nova-metadata-0\" (UID: \"fa54ea86-921f-4d80-86ac-312754960a02\") " pod="openstack/nova-metadata-0" Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.596582 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa54ea86-921f-4d80-86ac-312754960a02-config-data\") pod \"nova-metadata-0\" (UID: \"fa54ea86-921f-4d80-86ac-312754960a02\") " pod="openstack/nova-metadata-0" Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.597792 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plcz2\" (UniqueName: \"kubernetes.io/projected/fa54ea86-921f-4d80-86ac-312754960a02-kube-api-access-plcz2\") pod \"nova-metadata-0\" (UID: \"fa54ea86-921f-4d80-86ac-312754960a02\") " pod="openstack/nova-metadata-0" Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.599521 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa54ea86-921f-4d80-86ac-312754960a02-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fa54ea86-921f-4d80-86ac-312754960a02\") " pod="openstack/nova-metadata-0" Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.649743 4772 scope.go:117] "RemoveContainer" containerID="684450095b0d0da710639dc42db159d91a0058f330d8a821df53a53a00ddad84" Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.649824 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 22 12:13:26 crc kubenswrapper[4772]: E1122 12:13:26.656211 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"684450095b0d0da710639dc42db159d91a0058f330d8a821df53a53a00ddad84\": container with ID starting with 684450095b0d0da710639dc42db159d91a0058f330d8a821df53a53a00ddad84 not found: ID does not exist" containerID="684450095b0d0da710639dc42db159d91a0058f330d8a821df53a53a00ddad84" Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.656262 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"684450095b0d0da710639dc42db159d91a0058f330d8a821df53a53a00ddad84"} err="failed to get container status \"684450095b0d0da710639dc42db159d91a0058f330d8a821df53a53a00ddad84\": rpc error: code = NotFound desc = could not find container \"684450095b0d0da710639dc42db159d91a0058f330d8a821df53a53a00ddad84\": container with ID starting with 684450095b0d0da710639dc42db159d91a0058f330d8a821df53a53a00ddad84 not found: ID does not exist" Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.656290 4772 scope.go:117] "RemoveContainer" containerID="87b0fc08a60a1f07de880eaa6a20ebe659591cad3d710450af8e4ce4bf0cc8f9" Nov 22 12:13:26 crc kubenswrapper[4772]: E1122 12:13:26.656953 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87b0fc08a60a1f07de880eaa6a20ebe659591cad3d710450af8e4ce4bf0cc8f9\": container with ID starting with 87b0fc08a60a1f07de880eaa6a20ebe659591cad3d710450af8e4ce4bf0cc8f9 not found: ID does not exist" 
containerID="87b0fc08a60a1f07de880eaa6a20ebe659591cad3d710450af8e4ce4bf0cc8f9" Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.656992 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87b0fc08a60a1f07de880eaa6a20ebe659591cad3d710450af8e4ce4bf0cc8f9"} err="failed to get container status \"87b0fc08a60a1f07de880eaa6a20ebe659591cad3d710450af8e4ce4bf0cc8f9\": rpc error: code = NotFound desc = could not find container \"87b0fc08a60a1f07de880eaa6a20ebe659591cad3d710450af8e4ce4bf0cc8f9\": container with ID starting with 87b0fc08a60a1f07de880eaa6a20ebe659591cad3d710450af8e4ce4bf0cc8f9 not found: ID does not exist" Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.669503 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.706573 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.708283 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.712029 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.719089 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.728732 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.762258 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d014cf6c-d393-49b4-9fad-fd21919ea793-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d014cf6c-d393-49b4-9fad-fd21919ea793\") " pod="openstack/nova-api-0" Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.762475 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d014cf6c-d393-49b4-9fad-fd21919ea793-logs\") pod \"nova-api-0\" (UID: \"d014cf6c-d393-49b4-9fad-fd21919ea793\") " pod="openstack/nova-api-0" Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.762623 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7d5fq\" (UniqueName: \"kubernetes.io/projected/d014cf6c-d393-49b4-9fad-fd21919ea793-kube-api-access-7d5fq\") pod \"nova-api-0\" (UID: \"d014cf6c-d393-49b4-9fad-fd21919ea793\") " pod="openstack/nova-api-0" Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.762646 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d014cf6c-d393-49b4-9fad-fd21919ea793-config-data\") pod \"nova-api-0\" (UID: \"d014cf6c-d393-49b4-9fad-fd21919ea793\") " pod="openstack/nova-api-0" Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.863492 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d014cf6c-d393-49b4-9fad-fd21919ea793-logs\") pod \"nova-api-0\" (UID: \"d014cf6c-d393-49b4-9fad-fd21919ea793\") " pod="openstack/nova-api-0" Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.864081 4772 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-7d5fq\" (UniqueName: \"kubernetes.io/projected/d014cf6c-d393-49b4-9fad-fd21919ea793-kube-api-access-7d5fq\") pod \"nova-api-0\" (UID: \"d014cf6c-d393-49b4-9fad-fd21919ea793\") " pod="openstack/nova-api-0" Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.864158 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d014cf6c-d393-49b4-9fad-fd21919ea793-config-data\") pod \"nova-api-0\" (UID: \"d014cf6c-d393-49b4-9fad-fd21919ea793\") " pod="openstack/nova-api-0" Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.864247 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d014cf6c-d393-49b4-9fad-fd21919ea793-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d014cf6c-d393-49b4-9fad-fd21919ea793\") " pod="openstack/nova-api-0" Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.865851 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d014cf6c-d393-49b4-9fad-fd21919ea793-logs\") pod \"nova-api-0\" (UID: \"d014cf6c-d393-49b4-9fad-fd21919ea793\") " pod="openstack/nova-api-0" Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.873846 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d014cf6c-d393-49b4-9fad-fd21919ea793-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d014cf6c-d393-49b4-9fad-fd21919ea793\") " pod="openstack/nova-api-0" Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.902571 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d014cf6c-d393-49b4-9fad-fd21919ea793-config-data\") pod \"nova-api-0\" (UID: \"d014cf6c-d393-49b4-9fad-fd21919ea793\") " pod="openstack/nova-api-0" Nov 22 12:13:26 crc kubenswrapper[4772]: I1122 12:13:26.908839 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7d5fq\" (UniqueName: \"kubernetes.io/projected/d014cf6c-d393-49b4-9fad-fd21919ea793-kube-api-access-7d5fq\") pod \"nova-api-0\" (UID: \"d014cf6c-d393-49b4-9fad-fd21919ea793\") " pod="openstack/nova-api-0" Nov 22 12:13:27 crc kubenswrapper[4772]: I1122 12:13:27.052557 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 22 12:13:27 crc kubenswrapper[4772]: I1122 12:13:27.235325 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"1b309be3-8476-41bd-a801-d93e986b8f8e","Type":"ContainerStarted","Data":"41128fabedcca658a5e1688648bc72c79ff52802feac833d39e78763ef1092a3"} Nov 22 12:13:27 crc kubenswrapper[4772]: I1122 12:13:27.236466 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Nov 22 12:13:27 crc kubenswrapper[4772]: I1122 12:13:27.236483 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"1b309be3-8476-41bd-a801-d93e986b8f8e","Type":"ContainerStarted","Data":"9a04e8ee60144497bdb4f21fb1566597c9ec19b14c97ac855f30b5917af320f0"} Nov 22 12:13:27 crc kubenswrapper[4772]: I1122 12:13:27.250126 4772 generic.go:334] "Generic (PLEG): container finished" podID="0979b32f-aa79-489a-b5ec-ccfa90e847af" containerID="4541144229aa2f85741423a8ae44744ac28064cb41933883dad1026122c0e62f" exitCode=0 Nov 22 12:13:27 crc kubenswrapper[4772]: I1122 12:13:27.250188 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0979b32f-aa79-489a-b5ec-ccfa90e847af","Type":"ContainerDied","Data":"4541144229aa2f85741423a8ae44744ac28064cb41933883dad1026122c0e62f"} Nov 22 12:13:27 crc kubenswrapper[4772]: I1122 12:13:27.262734 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.262715121 podStartE2EDuration="2.262715121s" podCreationTimestamp="2025-11-22 12:13:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:13:27.260604419 +0000 UTC m=+5727.500048913" watchObservedRunningTime="2025-11-22 12:13:27.262715121 +0000 UTC m=+5727.502159615" Nov 22 12:13:27 crc kubenswrapper[4772]: W1122 12:13:27.273292 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa54ea86_921f_4d80_86ac_312754960a02.slice/crio-66101db6b7aae609b49789256c1de2481a9a4ff7dc2a51c2dc492b289aa1c732 WatchSource:0}: Error finding container 66101db6b7aae609b49789256c1de2481a9a4ff7dc2a51c2dc492b289aa1c732: Status 404 returned error can't find the container with id 66101db6b7aae609b49789256c1de2481a9a4ff7dc2a51c2dc492b289aa1c732 Nov 22 12:13:27 crc kubenswrapper[4772]: I1122 12:13:27.292836 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 12:13:27 crc kubenswrapper[4772]: I1122 12:13:27.403829 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 12:13:27 crc kubenswrapper[4772]: I1122 12:13:27.434650 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48d3f16f-387d-4fd0-8e35-3512f30a5f63" path="/var/lib/kubelet/pods/48d3f16f-387d-4fd0-8e35-3512f30a5f63/volumes" Nov 22 12:13:27 crc kubenswrapper[4772]: I1122 12:13:27.441039 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83115fda-d180-46ed-b9f0-60ad3bfb6707" path="/var/lib/kubelet/pods/83115fda-d180-46ed-b9f0-60ad3bfb6707/volumes" Nov 22 12:13:27 crc kubenswrapper[4772]: I1122 12:13:27.578590 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6c5m\" (UniqueName: \"kubernetes.io/projected/0979b32f-aa79-489a-b5ec-ccfa90e847af-kube-api-access-c6c5m\") pod \"0979b32f-aa79-489a-b5ec-ccfa90e847af\" (UID: \"0979b32f-aa79-489a-b5ec-ccfa90e847af\") " Nov 22 12:13:27 crc kubenswrapper[4772]: I1122 12:13:27.578916 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0979b32f-aa79-489a-b5ec-ccfa90e847af-config-data\") pod \"0979b32f-aa79-489a-b5ec-ccfa90e847af\" (UID: \"0979b32f-aa79-489a-b5ec-ccfa90e847af\") " Nov 22 12:13:27 crc kubenswrapper[4772]: I1122 12:13:27.578985 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0979b32f-aa79-489a-b5ec-ccfa90e847af-combined-ca-bundle\") pod \"0979b32f-aa79-489a-b5ec-ccfa90e847af\" (UID: \"0979b32f-aa79-489a-b5ec-ccfa90e847af\") " Nov 22 12:13:27 crc kubenswrapper[4772]: I1122 12:13:27.585664 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0979b32f-aa79-489a-b5ec-ccfa90e847af-kube-api-access-c6c5m" (OuterVolumeSpecName: "kube-api-access-c6c5m") pod "0979b32f-aa79-489a-b5ec-ccfa90e847af" (UID: "0979b32f-aa79-489a-b5ec-ccfa90e847af"). InnerVolumeSpecName "kube-api-access-c6c5m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:13:27 crc kubenswrapper[4772]: I1122 12:13:27.616715 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 22 12:13:27 crc kubenswrapper[4772]: I1122 12:13:27.623740 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0979b32f-aa79-489a-b5ec-ccfa90e847af-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0979b32f-aa79-489a-b5ec-ccfa90e847af" (UID: "0979b32f-aa79-489a-b5ec-ccfa90e847af"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:13:27 crc kubenswrapper[4772]: W1122 12:13:27.627572 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd014cf6c_d393_49b4_9fad_fd21919ea793.slice/crio-d13bda513fc110c4ee7632cb601e0b3416af516e36ef7f2c1dddd598f3e42723 WatchSource:0}: Error finding container d13bda513fc110c4ee7632cb601e0b3416af516e36ef7f2c1dddd598f3e42723: Status 404 returned error can't find the container with id d13bda513fc110c4ee7632cb601e0b3416af516e36ef7f2c1dddd598f3e42723 Nov 22 12:13:27 crc kubenswrapper[4772]: I1122 12:13:27.644317 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0979b32f-aa79-489a-b5ec-ccfa90e847af-config-data" (OuterVolumeSpecName: "config-data") pod "0979b32f-aa79-489a-b5ec-ccfa90e847af" (UID: "0979b32f-aa79-489a-b5ec-ccfa90e847af"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:13:27 crc kubenswrapper[4772]: I1122 12:13:27.685300 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6c5m\" (UniqueName: \"kubernetes.io/projected/0979b32f-aa79-489a-b5ec-ccfa90e847af-kube-api-access-c6c5m\") on node \"crc\" DevicePath \"\"" Nov 22 12:13:27 crc kubenswrapper[4772]: I1122 12:13:27.685331 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0979b32f-aa79-489a-b5ec-ccfa90e847af-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 12:13:27 crc kubenswrapper[4772]: I1122 12:13:27.685340 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0979b32f-aa79-489a-b5ec-ccfa90e847af-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 12:13:28 crc kubenswrapper[4772]: I1122 12:13:28.144606 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-c6h9r"] Nov 22 12:13:28 crc kubenswrapper[4772]: E1122 12:13:28.145524 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0979b32f-aa79-489a-b5ec-ccfa90e847af" containerName="nova-scheduler-scheduler" Nov 22 12:13:28 crc kubenswrapper[4772]: I1122 12:13:28.145552 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="0979b32f-aa79-489a-b5ec-ccfa90e847af" containerName="nova-scheduler-scheduler" Nov 22 12:13:28 crc kubenswrapper[4772]: I1122 12:13:28.145820 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="0979b32f-aa79-489a-b5ec-ccfa90e847af" containerName="nova-scheduler-scheduler" Nov 22 12:13:28 crc kubenswrapper[4772]: I1122 12:13:28.147474 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-c6h9r" Nov 22 12:13:28 crc kubenswrapper[4772]: I1122 12:13:28.157989 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c6h9r"] Nov 22 12:13:28 crc kubenswrapper[4772]: I1122 12:13:28.278560 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d014cf6c-d393-49b4-9fad-fd21919ea793","Type":"ContainerStarted","Data":"603d33a5120b6e93b5b1d488645348a16746b46bb8c4c8210ec7665648a457b4"} Nov 22 12:13:28 crc kubenswrapper[4772]: I1122 12:13:28.278613 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d014cf6c-d393-49b4-9fad-fd21919ea793","Type":"ContainerStarted","Data":"21270f35d0f4538264131ba651a27a1d785dff4e01d31983047f26756dd100e7"} Nov 22 12:13:28 crc kubenswrapper[4772]: I1122 12:13:28.278624 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d014cf6c-d393-49b4-9fad-fd21919ea793","Type":"ContainerStarted","Data":"d13bda513fc110c4ee7632cb601e0b3416af516e36ef7f2c1dddd598f3e42723"} Nov 22 12:13:28 crc kubenswrapper[4772]: I1122 12:13:28.282578 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 12:13:28 crc kubenswrapper[4772]: I1122 12:13:28.283022 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0979b32f-aa79-489a-b5ec-ccfa90e847af","Type":"ContainerDied","Data":"747484e8399c2ce69195cce2c8d685e84d6b0fd44f0662571a84232a8541326d"} Nov 22 12:13:28 crc kubenswrapper[4772]: I1122 12:13:28.283096 4772 scope.go:117] "RemoveContainer" containerID="4541144229aa2f85741423a8ae44744ac28064cb41933883dad1026122c0e62f" Nov 22 12:13:28 crc kubenswrapper[4772]: I1122 12:13:28.291586 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fa54ea86-921f-4d80-86ac-312754960a02","Type":"ContainerStarted","Data":"0ce0af66b3b6604b57d14136f070af56fe99057588fed90908df96ce1b319ab3"} Nov 22 12:13:28 crc kubenswrapper[4772]: I1122 12:13:28.291679 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fa54ea86-921f-4d80-86ac-312754960a02","Type":"ContainerStarted","Data":"783e9062d588765b104803aeb2baea67fb263ff935e2a1ab5793fbb7960694ec"} Nov 22 12:13:28 crc kubenswrapper[4772]: I1122 12:13:28.291693 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fa54ea86-921f-4d80-86ac-312754960a02","Type":"ContainerStarted","Data":"66101db6b7aae609b49789256c1de2481a9a4ff7dc2a51c2dc492b289aa1c732"} Nov 22 12:13:28 crc kubenswrapper[4772]: I1122 12:13:28.305034 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.304992641 podStartE2EDuration="2.304992641s" podCreationTimestamp="2025-11-22 12:13:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:13:28.304660703 +0000 UTC m=+5728.544105197" watchObservedRunningTime="2025-11-22 12:13:28.304992641 +0000 UTC m=+5728.544437135" Nov 22 12:13:28 crc kubenswrapper[4772]: I1122 12:13:28.310239 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0d4c4a1-73f1-4aac-a535-43b6d524c72e-utilities\") pod 
\"certified-operators-c6h9r\" (UID: \"b0d4c4a1-73f1-4aac-a535-43b6d524c72e\") " pod="openshift-marketplace/certified-operators-c6h9r" Nov 22 12:13:28 crc kubenswrapper[4772]: I1122 12:13:28.310431 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0d4c4a1-73f1-4aac-a535-43b6d524c72e-catalog-content\") pod \"certified-operators-c6h9r\" (UID: \"b0d4c4a1-73f1-4aac-a535-43b6d524c72e\") " pod="openshift-marketplace/certified-operators-c6h9r" Nov 22 12:13:28 crc kubenswrapper[4772]: I1122 12:13:28.310527 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntqjl\" (UniqueName: \"kubernetes.io/projected/b0d4c4a1-73f1-4aac-a535-43b6d524c72e-kube-api-access-ntqjl\") pod \"certified-operators-c6h9r\" (UID: \"b0d4c4a1-73f1-4aac-a535-43b6d524c72e\") " pod="openshift-marketplace/certified-operators-c6h9r" Nov 22 12:13:28 crc kubenswrapper[4772]: E1122 12:13:28.312679 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1985291c7023a7e51e83b9e00fc9bcfc4ebc7eafe4fdf0876f8ce406446a6f6f" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 22 12:13:28 crc kubenswrapper[4772]: E1122 12:13:28.320085 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1985291c7023a7e51e83b9e00fc9bcfc4ebc7eafe4fdf0876f8ce406446a6f6f" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 22 12:13:28 crc kubenswrapper[4772]: E1122 12:13:28.322279 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1985291c7023a7e51e83b9e00fc9bcfc4ebc7eafe4fdf0876f8ce406446a6f6f" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 22 12:13:28 crc kubenswrapper[4772]: E1122 12:13:28.322428 4772 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="80e0105c-1be9-4fa6-b46f-263e87770142" containerName="nova-cell1-conductor-conductor" Nov 22 12:13:28 crc kubenswrapper[4772]: I1122 12:13:28.337369 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.337337597 podStartE2EDuration="2.337337597s" podCreationTimestamp="2025-11-22 12:13:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:13:28.323702437 +0000 UTC m=+5728.563146941" watchObservedRunningTime="2025-11-22 12:13:28.337337597 +0000 UTC m=+5728.576782101" Nov 22 12:13:28 crc kubenswrapper[4772]: I1122 12:13:28.375973 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 12:13:28 crc kubenswrapper[4772]: I1122 12:13:28.385558 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 12:13:28 crc kubenswrapper[4772]: I1122 12:13:28.394806 4772 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-scheduler-0"] Nov 22 12:13:28 crc kubenswrapper[4772]: I1122 12:13:28.397200 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 12:13:28 crc kubenswrapper[4772]: I1122 12:13:28.403688 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 22 12:13:28 crc kubenswrapper[4772]: I1122 12:13:28.403851 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 12:13:28 crc kubenswrapper[4772]: I1122 12:13:28.417234 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0d4c4a1-73f1-4aac-a535-43b6d524c72e-utilities\") pod \"certified-operators-c6h9r\" (UID: \"b0d4c4a1-73f1-4aac-a535-43b6d524c72e\") " pod="openshift-marketplace/certified-operators-c6h9r" Nov 22 12:13:28 crc kubenswrapper[4772]: I1122 12:13:28.417490 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0d4c4a1-73f1-4aac-a535-43b6d524c72e-catalog-content\") pod \"certified-operators-c6h9r\" (UID: \"b0d4c4a1-73f1-4aac-a535-43b6d524c72e\") " pod="openshift-marketplace/certified-operators-c6h9r" Nov 22 12:13:28 crc kubenswrapper[4772]: I1122 12:13:28.417557 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntqjl\" (UniqueName: \"kubernetes.io/projected/b0d4c4a1-73f1-4aac-a535-43b6d524c72e-kube-api-access-ntqjl\") pod \"certified-operators-c6h9r\" (UID: \"b0d4c4a1-73f1-4aac-a535-43b6d524c72e\") " pod="openshift-marketplace/certified-operators-c6h9r" Nov 22 12:13:28 crc kubenswrapper[4772]: I1122 12:13:28.419519 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0d4c4a1-73f1-4aac-a535-43b6d524c72e-utilities\") pod \"certified-operators-c6h9r\" (UID: \"b0d4c4a1-73f1-4aac-a535-43b6d524c72e\") " pod="openshift-marketplace/certified-operators-c6h9r" Nov 22 12:13:28 crc kubenswrapper[4772]: I1122 12:13:28.424479 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0d4c4a1-73f1-4aac-a535-43b6d524c72e-catalog-content\") pod \"certified-operators-c6h9r\" (UID: \"b0d4c4a1-73f1-4aac-a535-43b6d524c72e\") " pod="openshift-marketplace/certified-operators-c6h9r" Nov 22 12:13:28 crc kubenswrapper[4772]: I1122 12:13:28.441519 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntqjl\" (UniqueName: \"kubernetes.io/projected/b0d4c4a1-73f1-4aac-a535-43b6d524c72e-kube-api-access-ntqjl\") pod \"certified-operators-c6h9r\" (UID: \"b0d4c4a1-73f1-4aac-a535-43b6d524c72e\") " pod="openshift-marketplace/certified-operators-c6h9r" Nov 22 12:13:28 crc kubenswrapper[4772]: I1122 12:13:28.488770 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-c6h9r" Nov 22 12:13:28 crc kubenswrapper[4772]: I1122 12:13:28.519701 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9be96cca-27a5-448b-b4bb-b3ef4100f9ac-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9be96cca-27a5-448b-b4bb-b3ef4100f9ac\") " pod="openstack/nova-scheduler-0" Nov 22 12:13:28 crc kubenswrapper[4772]: I1122 12:13:28.519769 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fm8vk\" (UniqueName: \"kubernetes.io/projected/9be96cca-27a5-448b-b4bb-b3ef4100f9ac-kube-api-access-fm8vk\") pod \"nova-scheduler-0\" (UID: \"9be96cca-27a5-448b-b4bb-b3ef4100f9ac\") " pod="openstack/nova-scheduler-0" Nov 22 12:13:28 crc kubenswrapper[4772]: I1122 12:13:28.519831 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9be96cca-27a5-448b-b4bb-b3ef4100f9ac-config-data\") pod \"nova-scheduler-0\" (UID: \"9be96cca-27a5-448b-b4bb-b3ef4100f9ac\") " pod="openstack/nova-scheduler-0" Nov 22 12:13:28 crc kubenswrapper[4772]: I1122 12:13:28.621388 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9be96cca-27a5-448b-b4bb-b3ef4100f9ac-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9be96cca-27a5-448b-b4bb-b3ef4100f9ac\") " pod="openstack/nova-scheduler-0" Nov 22 12:13:28 crc kubenswrapper[4772]: I1122 12:13:28.621819 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fm8vk\" (UniqueName: \"kubernetes.io/projected/9be96cca-27a5-448b-b4bb-b3ef4100f9ac-kube-api-access-fm8vk\") pod \"nova-scheduler-0\" (UID: \"9be96cca-27a5-448b-b4bb-b3ef4100f9ac\") " pod="openstack/nova-scheduler-0" Nov 22 12:13:28 crc kubenswrapper[4772]: I1122 12:13:28.621859 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9be96cca-27a5-448b-b4bb-b3ef4100f9ac-config-data\") pod \"nova-scheduler-0\" (UID: \"9be96cca-27a5-448b-b4bb-b3ef4100f9ac\") " pod="openstack/nova-scheduler-0" Nov 22 12:13:28 crc kubenswrapper[4772]: I1122 12:13:28.625503 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9be96cca-27a5-448b-b4bb-b3ef4100f9ac-config-data\") pod \"nova-scheduler-0\" (UID: \"9be96cca-27a5-448b-b4bb-b3ef4100f9ac\") " pod="openstack/nova-scheduler-0" Nov 22 12:13:28 crc kubenswrapper[4772]: I1122 12:13:28.625761 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9be96cca-27a5-448b-b4bb-b3ef4100f9ac-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9be96cca-27a5-448b-b4bb-b3ef4100f9ac\") " pod="openstack/nova-scheduler-0" Nov 22 12:13:28 crc kubenswrapper[4772]: I1122 12:13:28.667806 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fm8vk\" (UniqueName: \"kubernetes.io/projected/9be96cca-27a5-448b-b4bb-b3ef4100f9ac-kube-api-access-fm8vk\") pod \"nova-scheduler-0\" (UID: \"9be96cca-27a5-448b-b4bb-b3ef4100f9ac\") " pod="openstack/nova-scheduler-0" Nov 22 12:13:28 crc kubenswrapper[4772]: I1122 12:13:28.730650 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 12:13:29 crc kubenswrapper[4772]: I1122 12:13:29.080218 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c6h9r"] Nov 22 12:13:29 crc kubenswrapper[4772]: W1122 12:13:29.085738 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0d4c4a1_73f1_4aac_a535_43b6d524c72e.slice/crio-2b18d4645a9bbeccddc408982546089ece09dd442b3c55ed0303bac52d1618f5 WatchSource:0}: Error finding container 2b18d4645a9bbeccddc408982546089ece09dd442b3c55ed0303bac52d1618f5: Status 404 returned error can't find the container with id 2b18d4645a9bbeccddc408982546089ece09dd442b3c55ed0303bac52d1618f5 Nov 22 12:13:29 crc kubenswrapper[4772]: I1122 12:13:29.314263 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c6h9r" event={"ID":"b0d4c4a1-73f1-4aac-a535-43b6d524c72e","Type":"ContainerStarted","Data":"b55466add5cf76b07c59e419280bfd5cb70d29cd90650dfd38d86775e9306033"} Nov 22 12:13:29 crc kubenswrapper[4772]: I1122 12:13:29.314654 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c6h9r" event={"ID":"b0d4c4a1-73f1-4aac-a535-43b6d524c72e","Type":"ContainerStarted","Data":"2b18d4645a9bbeccddc408982546089ece09dd442b3c55ed0303bac52d1618f5"} Nov 22 12:13:29 crc kubenswrapper[4772]: I1122 12:13:29.314679 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 12:13:29 crc kubenswrapper[4772]: I1122 12:13:29.316725 4772 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 12:13:29 crc kubenswrapper[4772]: I1122 12:13:29.431883 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0979b32f-aa79-489a-b5ec-ccfa90e847af" path="/var/lib/kubelet/pods/0979b32f-aa79-489a-b5ec-ccfa90e847af/volumes" Nov 22 12:13:29 crc kubenswrapper[4772]: I1122 12:13:29.629717 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 22 12:13:30 crc kubenswrapper[4772]: I1122 12:13:30.327661 4772 generic.go:334] "Generic (PLEG): container finished" podID="b0d4c4a1-73f1-4aac-a535-43b6d524c72e" containerID="b55466add5cf76b07c59e419280bfd5cb70d29cd90650dfd38d86775e9306033" exitCode=0 Nov 22 12:13:30 crc kubenswrapper[4772]: I1122 12:13:30.327861 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c6h9r" event={"ID":"b0d4c4a1-73f1-4aac-a535-43b6d524c72e","Type":"ContainerDied","Data":"b55466add5cf76b07c59e419280bfd5cb70d29cd90650dfd38d86775e9306033"} Nov 22 12:13:30 crc kubenswrapper[4772]: I1122 12:13:30.329300 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c6h9r" event={"ID":"b0d4c4a1-73f1-4aac-a535-43b6d524c72e","Type":"ContainerStarted","Data":"f189f972338f60ee9d08e658aa18684a88087b84c615781f26d7665a90b02917"} Nov 22 12:13:30 crc kubenswrapper[4772]: I1122 12:13:30.331406 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9be96cca-27a5-448b-b4bb-b3ef4100f9ac","Type":"ContainerStarted","Data":"e335096989f4023536085730b2683fc6bc5dd959f5c3558ed49222e29ac0b0f8"} Nov 22 12:13:30 crc kubenswrapper[4772]: I1122 12:13:30.331458 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"9be96cca-27a5-448b-b4bb-b3ef4100f9ac","Type":"ContainerStarted","Data":"74015930318d18fb1574d1e3bd8658dd083b159928c6f3e6123f6041e64ba7d2"} Nov 22 12:13:30 crc kubenswrapper[4772]: I1122 12:13:30.376896 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.376874817 podStartE2EDuration="2.376874817s" podCreationTimestamp="2025-11-22 12:13:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:13:30.366778356 +0000 UTC m=+5730.606222850" watchObservedRunningTime="2025-11-22 12:13:30.376874817 +0000 UTC m=+5730.616319311" Nov 22 12:13:31 crc kubenswrapper[4772]: I1122 12:13:31.343142 4772 generic.go:334] "Generic (PLEG): container finished" podID="b0d4c4a1-73f1-4aac-a535-43b6d524c72e" containerID="f189f972338f60ee9d08e658aa18684a88087b84c615781f26d7665a90b02917" exitCode=0 Nov 22 12:13:31 crc kubenswrapper[4772]: I1122 12:13:31.343229 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c6h9r" event={"ID":"b0d4c4a1-73f1-4aac-a535-43b6d524c72e","Type":"ContainerDied","Data":"f189f972338f60ee9d08e658aa18684a88087b84c615781f26d7665a90b02917"} Nov 22 12:13:31 crc kubenswrapper[4772]: I1122 12:13:31.532860 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 12:13:31 crc kubenswrapper[4772]: I1122 12:13:31.532933 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 12:13:31 crc kubenswrapper[4772]: I1122 12:13:31.730039 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 22 12:13:31 crc kubenswrapper[4772]: I1122 12:13:31.730891 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 22 12:13:32 crc kubenswrapper[4772]: I1122 12:13:32.357275 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c6h9r" event={"ID":"b0d4c4a1-73f1-4aac-a535-43b6d524c72e","Type":"ContainerStarted","Data":"d24eb92fe7800d6e56ab1501f049b68be0c8271743be1d48535f8850fc21d7f9"} Nov 22 12:13:32 crc kubenswrapper[4772]: I1122 12:13:32.361751 4772 generic.go:334] "Generic (PLEG): container finished" podID="80e0105c-1be9-4fa6-b46f-263e87770142" containerID="1985291c7023a7e51e83b9e00fc9bcfc4ebc7eafe4fdf0876f8ce406446a6f6f" exitCode=0 Nov 22 12:13:32 crc kubenswrapper[4772]: I1122 12:13:32.361869 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"80e0105c-1be9-4fa6-b46f-263e87770142","Type":"ContainerDied","Data":"1985291c7023a7e51e83b9e00fc9bcfc4ebc7eafe4fdf0876f8ce406446a6f6f"} Nov 22 12:13:32 crc kubenswrapper[4772]: I1122 12:13:32.600484 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 22 12:13:32 crc kubenswrapper[4772]: I1122 12:13:32.614782 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-c6h9r" podStartSLOduration=2.171170566 podStartE2EDuration="4.614756106s" podCreationTimestamp="2025-11-22 12:13:28 +0000 UTC" firstStartedPulling="2025-11-22 12:13:29.316398783 +0000 UTC m=+5729.555843277" lastFinishedPulling="2025-11-22 12:13:31.759984313 +0000 UTC m=+5731.999428817" observedRunningTime="2025-11-22 12:13:32.403106036 +0000 UTC m=+5732.642550530" watchObservedRunningTime="2025-11-22 12:13:32.614756106 +0000 UTC m=+5732.854200600" Nov 22 12:13:32 crc kubenswrapper[4772]: I1122 12:13:32.734115 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80e0105c-1be9-4fa6-b46f-263e87770142-combined-ca-bundle\") pod \"80e0105c-1be9-4fa6-b46f-263e87770142\" (UID: \"80e0105c-1be9-4fa6-b46f-263e87770142\") " Nov 22 12:13:32 crc kubenswrapper[4772]: I1122 12:13:32.734190 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfg67\" (UniqueName: \"kubernetes.io/projected/80e0105c-1be9-4fa6-b46f-263e87770142-kube-api-access-nfg67\") pod \"80e0105c-1be9-4fa6-b46f-263e87770142\" (UID: \"80e0105c-1be9-4fa6-b46f-263e87770142\") " Nov 22 12:13:32 crc kubenswrapper[4772]: I1122 12:13:32.734248 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80e0105c-1be9-4fa6-b46f-263e87770142-config-data\") pod \"80e0105c-1be9-4fa6-b46f-263e87770142\" (UID: \"80e0105c-1be9-4fa6-b46f-263e87770142\") " Nov 22 12:13:32 crc kubenswrapper[4772]: I1122 12:13:32.753115 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80e0105c-1be9-4fa6-b46f-263e87770142-kube-api-access-nfg67" (OuterVolumeSpecName: "kube-api-access-nfg67") pod "80e0105c-1be9-4fa6-b46f-263e87770142" (UID: "80e0105c-1be9-4fa6-b46f-263e87770142"). InnerVolumeSpecName "kube-api-access-nfg67". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:13:32 crc kubenswrapper[4772]: I1122 12:13:32.765604 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80e0105c-1be9-4fa6-b46f-263e87770142-config-data" (OuterVolumeSpecName: "config-data") pod "80e0105c-1be9-4fa6-b46f-263e87770142" (UID: "80e0105c-1be9-4fa6-b46f-263e87770142"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:13:32 crc kubenswrapper[4772]: I1122 12:13:32.770247 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80e0105c-1be9-4fa6-b46f-263e87770142-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "80e0105c-1be9-4fa6-b46f-263e87770142" (UID: "80e0105c-1be9-4fa6-b46f-263e87770142"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:13:32 crc kubenswrapper[4772]: I1122 12:13:32.836559 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80e0105c-1be9-4fa6-b46f-263e87770142-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 12:13:32 crc kubenswrapper[4772]: I1122 12:13:32.836600 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfg67\" (UniqueName: \"kubernetes.io/projected/80e0105c-1be9-4fa6-b46f-263e87770142-kube-api-access-nfg67\") on node \"crc\" DevicePath \"\"" Nov 22 12:13:32 crc kubenswrapper[4772]: I1122 12:13:32.836615 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80e0105c-1be9-4fa6-b46f-263e87770142-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 12:13:33 crc kubenswrapper[4772]: I1122 12:13:33.372025 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 22 12:13:33 crc kubenswrapper[4772]: I1122 12:13:33.372012 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"80e0105c-1be9-4fa6-b46f-263e87770142","Type":"ContainerDied","Data":"b060a4a9240d550e1275c5d613ffe77d8d6e63aa1f4772006fe817acf2f8709b"} Nov 22 12:13:33 crc kubenswrapper[4772]: I1122 12:13:33.372105 4772 scope.go:117] "RemoveContainer" containerID="1985291c7023a7e51e83b9e00fc9bcfc4ebc7eafe4fdf0876f8ce406446a6f6f" Nov 22 12:13:33 crc kubenswrapper[4772]: I1122 12:13:33.409029 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 22 12:13:33 crc kubenswrapper[4772]: I1122 12:13:33.428149 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 22 12:13:33 crc kubenswrapper[4772]: I1122 12:13:33.448514 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 22 12:13:33 crc kubenswrapper[4772]: E1122 12:13:33.448912 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80e0105c-1be9-4fa6-b46f-263e87770142" containerName="nova-cell1-conductor-conductor" Nov 22 12:13:33 crc kubenswrapper[4772]: I1122 12:13:33.448937 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="80e0105c-1be9-4fa6-b46f-263e87770142" containerName="nova-cell1-conductor-conductor" Nov 22 12:13:33 crc kubenswrapper[4772]: I1122 12:13:33.449254 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="80e0105c-1be9-4fa6-b46f-263e87770142" containerName="nova-cell1-conductor-conductor" Nov 22 12:13:33 crc kubenswrapper[4772]: I1122 12:13:33.449960 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 22 12:13:33 crc kubenswrapper[4772]: I1122 12:13:33.453628 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 22 12:13:33 crc kubenswrapper[4772]: I1122 12:13:33.463168 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 22 12:13:33 crc kubenswrapper[4772]: I1122 12:13:33.549098 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6e24976-7404-46e3-8cbf-e71d378c8bff-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"b6e24976-7404-46e3-8cbf-e71d378c8bff\") " pod="openstack/nova-cell1-conductor-0" Nov 22 12:13:33 crc kubenswrapper[4772]: I1122 12:13:33.549683 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6e24976-7404-46e3-8cbf-e71d378c8bff-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"b6e24976-7404-46e3-8cbf-e71d378c8bff\") " pod="openstack/nova-cell1-conductor-0" Nov 22 12:13:33 crc kubenswrapper[4772]: I1122 12:13:33.550287 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-844q9\" (UniqueName: \"kubernetes.io/projected/b6e24976-7404-46e3-8cbf-e71d378c8bff-kube-api-access-844q9\") pod \"nova-cell1-conductor-0\" (UID: \"b6e24976-7404-46e3-8cbf-e71d378c8bff\") " pod="openstack/nova-cell1-conductor-0" Nov 22 12:13:33 crc kubenswrapper[4772]: I1122 12:13:33.651719 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6e24976-7404-46e3-8cbf-e71d378c8bff-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"b6e24976-7404-46e3-8cbf-e71d378c8bff\") " pod="openstack/nova-cell1-conductor-0" Nov 22 12:13:33 crc kubenswrapper[4772]: I1122 12:13:33.653079 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-844q9\" (UniqueName: \"kubernetes.io/projected/b6e24976-7404-46e3-8cbf-e71d378c8bff-kube-api-access-844q9\") pod \"nova-cell1-conductor-0\" (UID: \"b6e24976-7404-46e3-8cbf-e71d378c8bff\") " pod="openstack/nova-cell1-conductor-0" Nov 22 12:13:33 crc kubenswrapper[4772]: I1122 12:13:33.653174 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6e24976-7404-46e3-8cbf-e71d378c8bff-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"b6e24976-7404-46e3-8cbf-e71d378c8bff\") " pod="openstack/nova-cell1-conductor-0" Nov 22 12:13:33 crc kubenswrapper[4772]: I1122 12:13:33.658018 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6e24976-7404-46e3-8cbf-e71d378c8bff-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"b6e24976-7404-46e3-8cbf-e71d378c8bff\") " pod="openstack/nova-cell1-conductor-0" Nov 22 12:13:33 crc kubenswrapper[4772]: I1122 12:13:33.669679 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6e24976-7404-46e3-8cbf-e71d378c8bff-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"b6e24976-7404-46e3-8cbf-e71d378c8bff\") " pod="openstack/nova-cell1-conductor-0" Nov 22 12:13:33 crc kubenswrapper[4772]: I1122 12:13:33.683327 4772 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-844q9\" (UniqueName: \"kubernetes.io/projected/b6e24976-7404-46e3-8cbf-e71d378c8bff-kube-api-access-844q9\") pod \"nova-cell1-conductor-0\" (UID: \"b6e24976-7404-46e3-8cbf-e71d378c8bff\") " pod="openstack/nova-cell1-conductor-0" Nov 22 12:13:33 crc kubenswrapper[4772]: I1122 12:13:33.731423 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 22 12:13:33 crc kubenswrapper[4772]: I1122 12:13:33.809247 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 22 12:13:34 crc kubenswrapper[4772]: I1122 12:13:34.279846 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 22 12:13:34 crc kubenswrapper[4772]: W1122 12:13:34.301352 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6e24976_7404_46e3_8cbf_e71d378c8bff.slice/crio-1fe99e24482c9e657c1ed2ceb21e2d6fc5ecc17c478618dd9e260d9c9fe5bc61 WatchSource:0}: Error finding container 1fe99e24482c9e657c1ed2ceb21e2d6fc5ecc17c478618dd9e260d9c9fe5bc61: Status 404 returned error can't find the container with id 1fe99e24482c9e657c1ed2ceb21e2d6fc5ecc17c478618dd9e260d9c9fe5bc61 Nov 22 12:13:34 crc kubenswrapper[4772]: I1122 12:13:34.388486 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"b6e24976-7404-46e3-8cbf-e71d378c8bff","Type":"ContainerStarted","Data":"1fe99e24482c9e657c1ed2ceb21e2d6fc5ecc17c478618dd9e260d9c9fe5bc61"} Nov 22 12:13:34 crc kubenswrapper[4772]: I1122 12:13:34.629201 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Nov 22 12:13:34 crc kubenswrapper[4772]: I1122 12:13:34.641120 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Nov 22 12:13:35 crc kubenswrapper[4772]: I1122 12:13:35.399412 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"b6e24976-7404-46e3-8cbf-e71d378c8bff","Type":"ContainerStarted","Data":"65893173ceacbe538d1c862823e11c7300e7b68e68c20733fabb4002e997abbe"} Nov 22 12:13:35 crc kubenswrapper[4772]: I1122 12:13:35.423782 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.423759324 podStartE2EDuration="2.423759324s" podCreationTimestamp="2025-11-22 12:13:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:13:35.416092813 +0000 UTC m=+5735.655537307" watchObservedRunningTime="2025-11-22 12:13:35.423759324 +0000 UTC m=+5735.663203818" Nov 22 12:13:35 crc kubenswrapper[4772]: I1122 12:13:35.429413 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80e0105c-1be9-4fa6-b46f-263e87770142" path="/var/lib/kubelet/pods/80e0105c-1be9-4fa6-b46f-263e87770142/volumes" Nov 22 12:13:35 crc kubenswrapper[4772]: I1122 12:13:35.430015 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Nov 22 12:13:35 crc kubenswrapper[4772]: I1122 12:13:35.692483 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Nov 22 12:13:36 crc kubenswrapper[4772]: I1122 
12:13:36.411586 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Nov 22 12:13:36 crc kubenswrapper[4772]: I1122 12:13:36.732559 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 22 12:13:36 crc kubenswrapper[4772]: I1122 12:13:36.732612 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 22 12:13:37 crc kubenswrapper[4772]: I1122 12:13:37.053783 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 22 12:13:37 crc kubenswrapper[4772]: I1122 12:13:37.053829 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 22 12:13:37 crc kubenswrapper[4772]: I1122 12:13:37.816243 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="fa54ea86-921f-4d80-86ac-312754960a02" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.83:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 12:13:37 crc kubenswrapper[4772]: I1122 12:13:37.816267 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="fa54ea86-921f-4d80-86ac-312754960a02" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.83:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 12:13:38 crc kubenswrapper[4772]: I1122 12:13:38.137297 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d014cf6c-d393-49b4-9fad-fd21919ea793" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.84:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 12:13:38 crc kubenswrapper[4772]: I1122 12:13:38.137374 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d014cf6c-d393-49b4-9fad-fd21919ea793" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.84:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 12:13:38 crc kubenswrapper[4772]: I1122 12:13:38.489290 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-c6h9r" Nov 22 12:13:38 crc kubenswrapper[4772]: I1122 12:13:38.489649 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-c6h9r" Nov 22 12:13:38 crc kubenswrapper[4772]: I1122 12:13:38.559683 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-c6h9r" Nov 22 12:13:38 crc kubenswrapper[4772]: I1122 12:13:38.730999 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 22 12:13:38 crc kubenswrapper[4772]: I1122 12:13:38.772805 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 22 12:13:39 crc kubenswrapper[4772]: I1122 12:13:39.503233 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 22 12:13:39 crc kubenswrapper[4772]: I1122 12:13:39.536546 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-c6h9r" Nov 22 12:13:39 
crc kubenswrapper[4772]: I1122 12:13:39.589421 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c6h9r"] Nov 22 12:13:41 crc kubenswrapper[4772]: I1122 12:13:41.461538 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-c6h9r" podUID="b0d4c4a1-73f1-4aac-a535-43b6d524c72e" containerName="registry-server" containerID="cri-o://d24eb92fe7800d6e56ab1501f049b68be0c8271743be1d48535f8850fc21d7f9" gracePeriod=2 Nov 22 12:13:41 crc kubenswrapper[4772]: I1122 12:13:41.919346 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 22 12:13:41 crc kubenswrapper[4772]: I1122 12:13:41.921565 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 22 12:13:41 crc kubenswrapper[4772]: I1122 12:13:41.928667 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 22 12:13:41 crc kubenswrapper[4772]: I1122 12:13:41.947986 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 22 12:13:42 crc kubenswrapper[4772]: I1122 12:13:42.029713 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/68f0a616-d7a6-4cec-bfac-b271ebf50025-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"68f0a616-d7a6-4cec-bfac-b271ebf50025\") " pod="openstack/cinder-scheduler-0" Nov 22 12:13:42 crc kubenswrapper[4772]: I1122 12:13:42.029758 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtcwp\" (UniqueName: \"kubernetes.io/projected/68f0a616-d7a6-4cec-bfac-b271ebf50025-kube-api-access-dtcwp\") pod \"cinder-scheduler-0\" (UID: \"68f0a616-d7a6-4cec-bfac-b271ebf50025\") " pod="openstack/cinder-scheduler-0" Nov 22 12:13:42 crc kubenswrapper[4772]: I1122 12:13:42.029794 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68f0a616-d7a6-4cec-bfac-b271ebf50025-config-data\") pod \"cinder-scheduler-0\" (UID: \"68f0a616-d7a6-4cec-bfac-b271ebf50025\") " pod="openstack/cinder-scheduler-0" Nov 22 12:13:42 crc kubenswrapper[4772]: I1122 12:13:42.029904 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68f0a616-d7a6-4cec-bfac-b271ebf50025-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"68f0a616-d7a6-4cec-bfac-b271ebf50025\") " pod="openstack/cinder-scheduler-0" Nov 22 12:13:42 crc kubenswrapper[4772]: I1122 12:13:42.029967 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/68f0a616-d7a6-4cec-bfac-b271ebf50025-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"68f0a616-d7a6-4cec-bfac-b271ebf50025\") " pod="openstack/cinder-scheduler-0" Nov 22 12:13:42 crc kubenswrapper[4772]: I1122 12:13:42.030066 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68f0a616-d7a6-4cec-bfac-b271ebf50025-scripts\") pod \"cinder-scheduler-0\" (UID: \"68f0a616-d7a6-4cec-bfac-b271ebf50025\") " pod="openstack/cinder-scheduler-0" Nov 22 12:13:42 crc kubenswrapper[4772]: I1122 
12:13:42.132964 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/68f0a616-d7a6-4cec-bfac-b271ebf50025-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"68f0a616-d7a6-4cec-bfac-b271ebf50025\") " pod="openstack/cinder-scheduler-0" Nov 22 12:13:42 crc kubenswrapper[4772]: I1122 12:13:42.133012 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtcwp\" (UniqueName: \"kubernetes.io/projected/68f0a616-d7a6-4cec-bfac-b271ebf50025-kube-api-access-dtcwp\") pod \"cinder-scheduler-0\" (UID: \"68f0a616-d7a6-4cec-bfac-b271ebf50025\") " pod="openstack/cinder-scheduler-0" Nov 22 12:13:42 crc kubenswrapper[4772]: I1122 12:13:42.133073 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68f0a616-d7a6-4cec-bfac-b271ebf50025-config-data\") pod \"cinder-scheduler-0\" (UID: \"68f0a616-d7a6-4cec-bfac-b271ebf50025\") " pod="openstack/cinder-scheduler-0" Nov 22 12:13:42 crc kubenswrapper[4772]: I1122 12:13:42.133148 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68f0a616-d7a6-4cec-bfac-b271ebf50025-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"68f0a616-d7a6-4cec-bfac-b271ebf50025\") " pod="openstack/cinder-scheduler-0" Nov 22 12:13:42 crc kubenswrapper[4772]: I1122 12:13:42.133182 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/68f0a616-d7a6-4cec-bfac-b271ebf50025-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"68f0a616-d7a6-4cec-bfac-b271ebf50025\") " pod="openstack/cinder-scheduler-0" Nov 22 12:13:42 crc kubenswrapper[4772]: I1122 12:13:42.133252 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68f0a616-d7a6-4cec-bfac-b271ebf50025-scripts\") pod \"cinder-scheduler-0\" (UID: \"68f0a616-d7a6-4cec-bfac-b271ebf50025\") " pod="openstack/cinder-scheduler-0" Nov 22 12:13:42 crc kubenswrapper[4772]: I1122 12:13:42.133437 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/68f0a616-d7a6-4cec-bfac-b271ebf50025-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"68f0a616-d7a6-4cec-bfac-b271ebf50025\") " pod="openstack/cinder-scheduler-0" Nov 22 12:13:42 crc kubenswrapper[4772]: I1122 12:13:42.141355 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68f0a616-d7a6-4cec-bfac-b271ebf50025-scripts\") pod \"cinder-scheduler-0\" (UID: \"68f0a616-d7a6-4cec-bfac-b271ebf50025\") " pod="openstack/cinder-scheduler-0" Nov 22 12:13:42 crc kubenswrapper[4772]: I1122 12:13:42.141538 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/68f0a616-d7a6-4cec-bfac-b271ebf50025-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"68f0a616-d7a6-4cec-bfac-b271ebf50025\") " pod="openstack/cinder-scheduler-0" Nov 22 12:13:42 crc kubenswrapper[4772]: I1122 12:13:42.143382 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68f0a616-d7a6-4cec-bfac-b271ebf50025-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: 
\"68f0a616-d7a6-4cec-bfac-b271ebf50025\") " pod="openstack/cinder-scheduler-0" Nov 22 12:13:42 crc kubenswrapper[4772]: I1122 12:13:42.146356 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68f0a616-d7a6-4cec-bfac-b271ebf50025-config-data\") pod \"cinder-scheduler-0\" (UID: \"68f0a616-d7a6-4cec-bfac-b271ebf50025\") " pod="openstack/cinder-scheduler-0" Nov 22 12:13:42 crc kubenswrapper[4772]: I1122 12:13:42.154026 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtcwp\" (UniqueName: \"kubernetes.io/projected/68f0a616-d7a6-4cec-bfac-b271ebf50025-kube-api-access-dtcwp\") pod \"cinder-scheduler-0\" (UID: \"68f0a616-d7a6-4cec-bfac-b271ebf50025\") " pod="openstack/cinder-scheduler-0" Nov 22 12:13:42 crc kubenswrapper[4772]: I1122 12:13:42.250517 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 22 12:13:42 crc kubenswrapper[4772]: I1122 12:13:42.418246 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c6h9r" Nov 22 12:13:42 crc kubenswrapper[4772]: I1122 12:13:42.481165 4772 generic.go:334] "Generic (PLEG): container finished" podID="b0d4c4a1-73f1-4aac-a535-43b6d524c72e" containerID="d24eb92fe7800d6e56ab1501f049b68be0c8271743be1d48535f8850fc21d7f9" exitCode=0 Nov 22 12:13:42 crc kubenswrapper[4772]: I1122 12:13:42.481206 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c6h9r" event={"ID":"b0d4c4a1-73f1-4aac-a535-43b6d524c72e","Type":"ContainerDied","Data":"d24eb92fe7800d6e56ab1501f049b68be0c8271743be1d48535f8850fc21d7f9"} Nov 22 12:13:42 crc kubenswrapper[4772]: I1122 12:13:42.481234 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c6h9r" event={"ID":"b0d4c4a1-73f1-4aac-a535-43b6d524c72e","Type":"ContainerDied","Data":"2b18d4645a9bbeccddc408982546089ece09dd442b3c55ed0303bac52d1618f5"} Nov 22 12:13:42 crc kubenswrapper[4772]: I1122 12:13:42.481254 4772 scope.go:117] "RemoveContainer" containerID="d24eb92fe7800d6e56ab1501f049b68be0c8271743be1d48535f8850fc21d7f9" Nov 22 12:13:42 crc kubenswrapper[4772]: I1122 12:13:42.481393 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-c6h9r" Nov 22 12:13:42 crc kubenswrapper[4772]: I1122 12:13:42.525422 4772 scope.go:117] "RemoveContainer" containerID="f189f972338f60ee9d08e658aa18684a88087b84c615781f26d7665a90b02917" Nov 22 12:13:42 crc kubenswrapper[4772]: I1122 12:13:42.547662 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntqjl\" (UniqueName: \"kubernetes.io/projected/b0d4c4a1-73f1-4aac-a535-43b6d524c72e-kube-api-access-ntqjl\") pod \"b0d4c4a1-73f1-4aac-a535-43b6d524c72e\" (UID: \"b0d4c4a1-73f1-4aac-a535-43b6d524c72e\") " Nov 22 12:13:42 crc kubenswrapper[4772]: I1122 12:13:42.547805 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0d4c4a1-73f1-4aac-a535-43b6d524c72e-catalog-content\") pod \"b0d4c4a1-73f1-4aac-a535-43b6d524c72e\" (UID: \"b0d4c4a1-73f1-4aac-a535-43b6d524c72e\") " Nov 22 12:13:42 crc kubenswrapper[4772]: I1122 12:13:42.547852 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0d4c4a1-73f1-4aac-a535-43b6d524c72e-utilities\") pod \"b0d4c4a1-73f1-4aac-a535-43b6d524c72e\" (UID: \"b0d4c4a1-73f1-4aac-a535-43b6d524c72e\") " Nov 22 12:13:42 crc kubenswrapper[4772]: I1122 12:13:42.551188 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0d4c4a1-73f1-4aac-a535-43b6d524c72e-utilities" (OuterVolumeSpecName: "utilities") pod "b0d4c4a1-73f1-4aac-a535-43b6d524c72e" (UID: "b0d4c4a1-73f1-4aac-a535-43b6d524c72e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 12:13:42 crc kubenswrapper[4772]: I1122 12:13:42.555275 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0d4c4a1-73f1-4aac-a535-43b6d524c72e-kube-api-access-ntqjl" (OuterVolumeSpecName: "kube-api-access-ntqjl") pod "b0d4c4a1-73f1-4aac-a535-43b6d524c72e" (UID: "b0d4c4a1-73f1-4aac-a535-43b6d524c72e"). InnerVolumeSpecName "kube-api-access-ntqjl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:13:42 crc kubenswrapper[4772]: I1122 12:13:42.557287 4772 scope.go:117] "RemoveContainer" containerID="b55466add5cf76b07c59e419280bfd5cb70d29cd90650dfd38d86775e9306033" Nov 22 12:13:42 crc kubenswrapper[4772]: I1122 12:13:42.595343 4772 scope.go:117] "RemoveContainer" containerID="d24eb92fe7800d6e56ab1501f049b68be0c8271743be1d48535f8850fc21d7f9" Nov 22 12:13:42 crc kubenswrapper[4772]: E1122 12:13:42.596385 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d24eb92fe7800d6e56ab1501f049b68be0c8271743be1d48535f8850fc21d7f9\": container with ID starting with d24eb92fe7800d6e56ab1501f049b68be0c8271743be1d48535f8850fc21d7f9 not found: ID does not exist" containerID="d24eb92fe7800d6e56ab1501f049b68be0c8271743be1d48535f8850fc21d7f9" Nov 22 12:13:42 crc kubenswrapper[4772]: I1122 12:13:42.596524 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d24eb92fe7800d6e56ab1501f049b68be0c8271743be1d48535f8850fc21d7f9"} err="failed to get container status \"d24eb92fe7800d6e56ab1501f049b68be0c8271743be1d48535f8850fc21d7f9\": rpc error: code = NotFound desc = could not find container \"d24eb92fe7800d6e56ab1501f049b68be0c8271743be1d48535f8850fc21d7f9\": container with ID starting with d24eb92fe7800d6e56ab1501f049b68be0c8271743be1d48535f8850fc21d7f9 not found: ID does not exist" Nov 22 12:13:42 crc kubenswrapper[4772]: I1122 12:13:42.596631 4772 scope.go:117] "RemoveContainer" containerID="f189f972338f60ee9d08e658aa18684a88087b84c615781f26d7665a90b02917" Nov 22 12:13:42 crc kubenswrapper[4772]: E1122 12:13:42.597286 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f189f972338f60ee9d08e658aa18684a88087b84c615781f26d7665a90b02917\": container with ID starting with f189f972338f60ee9d08e658aa18684a88087b84c615781f26d7665a90b02917 not found: ID does not exist" containerID="f189f972338f60ee9d08e658aa18684a88087b84c615781f26d7665a90b02917" Nov 22 12:13:42 crc kubenswrapper[4772]: I1122 12:13:42.597358 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f189f972338f60ee9d08e658aa18684a88087b84c615781f26d7665a90b02917"} err="failed to get container status \"f189f972338f60ee9d08e658aa18684a88087b84c615781f26d7665a90b02917\": rpc error: code = NotFound desc = could not find container \"f189f972338f60ee9d08e658aa18684a88087b84c615781f26d7665a90b02917\": container with ID starting with f189f972338f60ee9d08e658aa18684a88087b84c615781f26d7665a90b02917 not found: ID does not exist" Nov 22 12:13:42 crc kubenswrapper[4772]: I1122 12:13:42.597398 4772 scope.go:117] "RemoveContainer" containerID="b55466add5cf76b07c59e419280bfd5cb70d29cd90650dfd38d86775e9306033" Nov 22 12:13:42 crc kubenswrapper[4772]: I1122 12:13:42.597512 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0d4c4a1-73f1-4aac-a535-43b6d524c72e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b0d4c4a1-73f1-4aac-a535-43b6d524c72e" (UID: "b0d4c4a1-73f1-4aac-a535-43b6d524c72e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 12:13:42 crc kubenswrapper[4772]: E1122 12:13:42.598674 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b55466add5cf76b07c59e419280bfd5cb70d29cd90650dfd38d86775e9306033\": container with ID starting with b55466add5cf76b07c59e419280bfd5cb70d29cd90650dfd38d86775e9306033 not found: ID does not exist" containerID="b55466add5cf76b07c59e419280bfd5cb70d29cd90650dfd38d86775e9306033" Nov 22 12:13:42 crc kubenswrapper[4772]: I1122 12:13:42.598785 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b55466add5cf76b07c59e419280bfd5cb70d29cd90650dfd38d86775e9306033"} err="failed to get container status \"b55466add5cf76b07c59e419280bfd5cb70d29cd90650dfd38d86775e9306033\": rpc error: code = NotFound desc = could not find container \"b55466add5cf76b07c59e419280bfd5cb70d29cd90650dfd38d86775e9306033\": container with ID starting with b55466add5cf76b07c59e419280bfd5cb70d29cd90650dfd38d86775e9306033 not found: ID does not exist" Nov 22 12:13:42 crc kubenswrapper[4772]: I1122 12:13:42.650182 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0d4c4a1-73f1-4aac-a535-43b6d524c72e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 12:13:42 crc kubenswrapper[4772]: I1122 12:13:42.650236 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0d4c4a1-73f1-4aac-a535-43b6d524c72e-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 12:13:42 crc kubenswrapper[4772]: I1122 12:13:42.650248 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntqjl\" (UniqueName: \"kubernetes.io/projected/b0d4c4a1-73f1-4aac-a535-43b6d524c72e-kube-api-access-ntqjl\") on node \"crc\" DevicePath \"\"" Nov 22 12:13:42 crc kubenswrapper[4772]: I1122 12:13:42.749477 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 22 12:13:42 crc kubenswrapper[4772]: W1122 12:13:42.755270 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod68f0a616_d7a6_4cec_bfac_b271ebf50025.slice/crio-ee42d101d2ea507ba02164c3b91dd8fafef67ce0e3b50ee7eb2cc958bdd25a4b WatchSource:0}: Error finding container ee42d101d2ea507ba02164c3b91dd8fafef67ce0e3b50ee7eb2cc958bdd25a4b: Status 404 returned error can't find the container with id ee42d101d2ea507ba02164c3b91dd8fafef67ce0e3b50ee7eb2cc958bdd25a4b Nov 22 12:13:42 crc kubenswrapper[4772]: I1122 12:13:42.882161 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c6h9r"] Nov 22 12:13:42 crc kubenswrapper[4772]: I1122 12:13:42.891182 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-c6h9r"] Nov 22 12:13:43 crc kubenswrapper[4772]: I1122 12:13:43.309798 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 22 12:13:43 crc kubenswrapper[4772]: I1122 12:13:43.312235 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="310d148b-078c-4ff8-8d8a-12f90fd9c880" containerName="cinder-api-log" containerID="cri-o://9d6d5774f134c11e38d7ccef20f4a69ec7fdd6645993146cca8997385f87e2aa" gracePeriod=30 Nov 22 12:13:43 crc kubenswrapper[4772]: I1122 12:13:43.312684 4772 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="310d148b-078c-4ff8-8d8a-12f90fd9c880" containerName="cinder-api" containerID="cri-o://210fe5a88ae34302aa488c93e335869582fd4cd8fb45b9f07e1c657433ce2c44" gracePeriod=30 Nov 22 12:13:43 crc kubenswrapper[4772]: I1122 12:13:43.442937 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0d4c4a1-73f1-4aac-a535-43b6d524c72e" path="/var/lib/kubelet/pods/b0d4c4a1-73f1-4aac-a535-43b6d524c72e/volumes" Nov 22 12:13:43 crc kubenswrapper[4772]: I1122 12:13:43.501407 4772 generic.go:334] "Generic (PLEG): container finished" podID="310d148b-078c-4ff8-8d8a-12f90fd9c880" containerID="9d6d5774f134c11e38d7ccef20f4a69ec7fdd6645993146cca8997385f87e2aa" exitCode=143 Nov 22 12:13:43 crc kubenswrapper[4772]: I1122 12:13:43.501495 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"310d148b-078c-4ff8-8d8a-12f90fd9c880","Type":"ContainerDied","Data":"9d6d5774f134c11e38d7ccef20f4a69ec7fdd6645993146cca8997385f87e2aa"} Nov 22 12:13:43 crc kubenswrapper[4772]: I1122 12:13:43.507764 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"68f0a616-d7a6-4cec-bfac-b271ebf50025","Type":"ContainerStarted","Data":"fabc44890bae019bc93a9b616220deb5ddaf4cab99d125c39040e620c709e6d5"} Nov 22 12:13:43 crc kubenswrapper[4772]: I1122 12:13:43.507803 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"68f0a616-d7a6-4cec-bfac-b271ebf50025","Type":"ContainerStarted","Data":"ee42d101d2ea507ba02164c3b91dd8fafef67ce0e3b50ee7eb2cc958bdd25a4b"} Nov 22 12:13:43 crc kubenswrapper[4772]: I1122 12:13:43.741325 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-volume1-0"] Nov 22 12:13:43 crc kubenswrapper[4772]: E1122 12:13:43.741882 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0d4c4a1-73f1-4aac-a535-43b6d524c72e" containerName="extract-content" Nov 22 12:13:43 crc kubenswrapper[4772]: I1122 12:13:43.741909 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0d4c4a1-73f1-4aac-a535-43b6d524c72e" containerName="extract-content" Nov 22 12:13:43 crc kubenswrapper[4772]: E1122 12:13:43.741922 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0d4c4a1-73f1-4aac-a535-43b6d524c72e" containerName="extract-utilities" Nov 22 12:13:43 crc kubenswrapper[4772]: I1122 12:13:43.741932 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0d4c4a1-73f1-4aac-a535-43b6d524c72e" containerName="extract-utilities" Nov 22 12:13:43 crc kubenswrapper[4772]: E1122 12:13:43.741954 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0d4c4a1-73f1-4aac-a535-43b6d524c72e" containerName="registry-server" Nov 22 12:13:43 crc kubenswrapper[4772]: I1122 12:13:43.741963 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0d4c4a1-73f1-4aac-a535-43b6d524c72e" containerName="registry-server" Nov 22 12:13:43 crc kubenswrapper[4772]: I1122 12:13:43.742647 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0d4c4a1-73f1-4aac-a535-43b6d524c72e" containerName="registry-server" Nov 22 12:13:43 crc kubenswrapper[4772]: I1122 12:13:43.744433 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-volume1-0" Nov 22 12:13:43 crc kubenswrapper[4772]: I1122 12:13:43.753164 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data" Nov 22 12:13:43 crc kubenswrapper[4772]: I1122 12:13:43.794506 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Nov 22 12:13:43 crc kubenswrapper[4772]: I1122 12:13:43.870723 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Nov 22 12:13:43 crc kubenswrapper[4772]: I1122 12:13:43.883175 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/f05efc86-3c87-4c7c-8a65-03152b33376c-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"f05efc86-3c87-4c7c-8a65-03152b33376c\") " pod="openstack/cinder-volume-volume1-0" Nov 22 12:13:43 crc kubenswrapper[4772]: I1122 12:13:43.883222 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f05efc86-3c87-4c7c-8a65-03152b33376c-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"f05efc86-3c87-4c7c-8a65-03152b33376c\") " pod="openstack/cinder-volume-volume1-0" Nov 22 12:13:43 crc kubenswrapper[4772]: I1122 12:13:43.883450 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/f05efc86-3c87-4c7c-8a65-03152b33376c-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"f05efc86-3c87-4c7c-8a65-03152b33376c\") " pod="openstack/cinder-volume-volume1-0" Nov 22 12:13:43 crc kubenswrapper[4772]: I1122 12:13:43.883730 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f05efc86-3c87-4c7c-8a65-03152b33376c-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"f05efc86-3c87-4c7c-8a65-03152b33376c\") " pod="openstack/cinder-volume-volume1-0" Nov 22 12:13:43 crc kubenswrapper[4772]: I1122 12:13:43.883782 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/f05efc86-3c87-4c7c-8a65-03152b33376c-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"f05efc86-3c87-4c7c-8a65-03152b33376c\") " pod="openstack/cinder-volume-volume1-0" Nov 22 12:13:43 crc kubenswrapper[4772]: I1122 12:13:43.883841 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/f05efc86-3c87-4c7c-8a65-03152b33376c-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"f05efc86-3c87-4c7c-8a65-03152b33376c\") " pod="openstack/cinder-volume-volume1-0" Nov 22 12:13:43 crc kubenswrapper[4772]: I1122 12:13:43.883902 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f05efc86-3c87-4c7c-8a65-03152b33376c-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"f05efc86-3c87-4c7c-8a65-03152b33376c\") " pod="openstack/cinder-volume-volume1-0" Nov 22 12:13:43 crc kubenswrapper[4772]: I1122 12:13:43.883942 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/f05efc86-3c87-4c7c-8a65-03152b33376c-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"f05efc86-3c87-4c7c-8a65-03152b33376c\") " pod="openstack/cinder-volume-volume1-0" Nov 22 12:13:43 crc kubenswrapper[4772]: I1122 12:13:43.883992 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/f05efc86-3c87-4c7c-8a65-03152b33376c-dev\") pod \"cinder-volume-volume1-0\" (UID: \"f05efc86-3c87-4c7c-8a65-03152b33376c\") " pod="openstack/cinder-volume-volume1-0" Nov 22 12:13:43 crc kubenswrapper[4772]: I1122 12:13:43.884170 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f05efc86-3c87-4c7c-8a65-03152b33376c-sys\") pod \"cinder-volume-volume1-0\" (UID: \"f05efc86-3c87-4c7c-8a65-03152b33376c\") " pod="openstack/cinder-volume-volume1-0" Nov 22 12:13:43 crc kubenswrapper[4772]: I1122 12:13:43.884382 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/f05efc86-3c87-4c7c-8a65-03152b33376c-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"f05efc86-3c87-4c7c-8a65-03152b33376c\") " pod="openstack/cinder-volume-volume1-0" Nov 22 12:13:43 crc kubenswrapper[4772]: I1122 12:13:43.884429 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f05efc86-3c87-4c7c-8a65-03152b33376c-run\") pod \"cinder-volume-volume1-0\" (UID: \"f05efc86-3c87-4c7c-8a65-03152b33376c\") " pod="openstack/cinder-volume-volume1-0" Nov 22 12:13:43 crc kubenswrapper[4772]: I1122 12:13:43.884460 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f05efc86-3c87-4c7c-8a65-03152b33376c-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"f05efc86-3c87-4c7c-8a65-03152b33376c\") " pod="openstack/cinder-volume-volume1-0" Nov 22 12:13:43 crc kubenswrapper[4772]: I1122 12:13:43.884535 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/f05efc86-3c87-4c7c-8a65-03152b33376c-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"f05efc86-3c87-4c7c-8a65-03152b33376c\") " pod="openstack/cinder-volume-volume1-0" Nov 22 12:13:43 crc kubenswrapper[4772]: I1122 12:13:43.884601 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtpx9\" (UniqueName: \"kubernetes.io/projected/f05efc86-3c87-4c7c-8a65-03152b33376c-kube-api-access-jtpx9\") pod \"cinder-volume-volume1-0\" (UID: \"f05efc86-3c87-4c7c-8a65-03152b33376c\") " pod="openstack/cinder-volume-volume1-0" Nov 22 12:13:43 crc kubenswrapper[4772]: I1122 12:13:43.884704 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f05efc86-3c87-4c7c-8a65-03152b33376c-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"f05efc86-3c87-4c7c-8a65-03152b33376c\") " pod="openstack/cinder-volume-volume1-0" Nov 22 12:13:43 crc kubenswrapper[4772]: I1122 12:13:43.989093 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: 
\"kubernetes.io/host-path/f05efc86-3c87-4c7c-8a65-03152b33376c-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"f05efc86-3c87-4c7c-8a65-03152b33376c\") " pod="openstack/cinder-volume-volume1-0" Nov 22 12:13:43 crc kubenswrapper[4772]: I1122 12:13:43.989184 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f05efc86-3c87-4c7c-8a65-03152b33376c-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"f05efc86-3c87-4c7c-8a65-03152b33376c\") " pod="openstack/cinder-volume-volume1-0" Nov 22 12:13:43 crc kubenswrapper[4772]: I1122 12:13:43.989208 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/f05efc86-3c87-4c7c-8a65-03152b33376c-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"f05efc86-3c87-4c7c-8a65-03152b33376c\") " pod="openstack/cinder-volume-volume1-0" Nov 22 12:13:43 crc kubenswrapper[4772]: I1122 12:13:43.989244 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/f05efc86-3c87-4c7c-8a65-03152b33376c-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"f05efc86-3c87-4c7c-8a65-03152b33376c\") " pod="openstack/cinder-volume-volume1-0" Nov 22 12:13:43 crc kubenswrapper[4772]: I1122 12:13:43.989270 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f05efc86-3c87-4c7c-8a65-03152b33376c-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"f05efc86-3c87-4c7c-8a65-03152b33376c\") " pod="openstack/cinder-volume-volume1-0" Nov 22 12:13:43 crc kubenswrapper[4772]: I1122 12:13:43.989288 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f05efc86-3c87-4c7c-8a65-03152b33376c-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"f05efc86-3c87-4c7c-8a65-03152b33376c\") " pod="openstack/cinder-volume-volume1-0" Nov 22 12:13:43 crc kubenswrapper[4772]: I1122 12:13:43.989308 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/f05efc86-3c87-4c7c-8a65-03152b33376c-dev\") pod \"cinder-volume-volume1-0\" (UID: \"f05efc86-3c87-4c7c-8a65-03152b33376c\") " pod="openstack/cinder-volume-volume1-0" Nov 22 12:13:43 crc kubenswrapper[4772]: I1122 12:13:43.989355 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f05efc86-3c87-4c7c-8a65-03152b33376c-sys\") pod \"cinder-volume-volume1-0\" (UID: \"f05efc86-3c87-4c7c-8a65-03152b33376c\") " pod="openstack/cinder-volume-volume1-0" Nov 22 12:13:43 crc kubenswrapper[4772]: I1122 12:13:43.989385 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/f05efc86-3c87-4c7c-8a65-03152b33376c-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"f05efc86-3c87-4c7c-8a65-03152b33376c\") " pod="openstack/cinder-volume-volume1-0" Nov 22 12:13:43 crc kubenswrapper[4772]: I1122 12:13:43.989404 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f05efc86-3c87-4c7c-8a65-03152b33376c-run\") pod \"cinder-volume-volume1-0\" (UID: \"f05efc86-3c87-4c7c-8a65-03152b33376c\") " pod="openstack/cinder-volume-volume1-0" Nov 22 12:13:43 crc kubenswrapper[4772]: I1122 12:13:43.989421 4772 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f05efc86-3c87-4c7c-8a65-03152b33376c-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"f05efc86-3c87-4c7c-8a65-03152b33376c\") " pod="openstack/cinder-volume-volume1-0" Nov 22 12:13:43 crc kubenswrapper[4772]: I1122 12:13:43.989460 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/f05efc86-3c87-4c7c-8a65-03152b33376c-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"f05efc86-3c87-4c7c-8a65-03152b33376c\") " pod="openstack/cinder-volume-volume1-0" Nov 22 12:13:43 crc kubenswrapper[4772]: I1122 12:13:43.989492 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtpx9\" (UniqueName: \"kubernetes.io/projected/f05efc86-3c87-4c7c-8a65-03152b33376c-kube-api-access-jtpx9\") pod \"cinder-volume-volume1-0\" (UID: \"f05efc86-3c87-4c7c-8a65-03152b33376c\") " pod="openstack/cinder-volume-volume1-0" Nov 22 12:13:43 crc kubenswrapper[4772]: I1122 12:13:43.989549 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f05efc86-3c87-4c7c-8a65-03152b33376c-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"f05efc86-3c87-4c7c-8a65-03152b33376c\") " pod="openstack/cinder-volume-volume1-0" Nov 22 12:13:43 crc kubenswrapper[4772]: I1122 12:13:43.989573 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/f05efc86-3c87-4c7c-8a65-03152b33376c-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"f05efc86-3c87-4c7c-8a65-03152b33376c\") " pod="openstack/cinder-volume-volume1-0" Nov 22 12:13:43 crc kubenswrapper[4772]: I1122 12:13:43.989594 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f05efc86-3c87-4c7c-8a65-03152b33376c-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"f05efc86-3c87-4c7c-8a65-03152b33376c\") " pod="openstack/cinder-volume-volume1-0" Nov 22 12:13:43 crc kubenswrapper[4772]: I1122 12:13:43.994106 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f05efc86-3c87-4c7c-8a65-03152b33376c-sys\") pod \"cinder-volume-volume1-0\" (UID: \"f05efc86-3c87-4c7c-8a65-03152b33376c\") " pod="openstack/cinder-volume-volume1-0" Nov 22 12:13:43 crc kubenswrapper[4772]: I1122 12:13:43.994354 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f05efc86-3c87-4c7c-8a65-03152b33376c-run\") pod \"cinder-volume-volume1-0\" (UID: \"f05efc86-3c87-4c7c-8a65-03152b33376c\") " pod="openstack/cinder-volume-volume1-0" Nov 22 12:13:43 crc kubenswrapper[4772]: I1122 12:13:43.994540 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/f05efc86-3c87-4c7c-8a65-03152b33376c-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"f05efc86-3c87-4c7c-8a65-03152b33376c\") " pod="openstack/cinder-volume-volume1-0" Nov 22 12:13:43 crc kubenswrapper[4772]: I1122 12:13:43.994627 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/f05efc86-3c87-4c7c-8a65-03152b33376c-var-locks-cinder\") 
pod \"cinder-volume-volume1-0\" (UID: \"f05efc86-3c87-4c7c-8a65-03152b33376c\") " pod="openstack/cinder-volume-volume1-0" Nov 22 12:13:43 crc kubenswrapper[4772]: I1122 12:13:43.994678 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f05efc86-3c87-4c7c-8a65-03152b33376c-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"f05efc86-3c87-4c7c-8a65-03152b33376c\") " pod="openstack/cinder-volume-volume1-0" Nov 22 12:13:43 crc kubenswrapper[4772]: I1122 12:13:43.994728 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/f05efc86-3c87-4c7c-8a65-03152b33376c-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"f05efc86-3c87-4c7c-8a65-03152b33376c\") " pod="openstack/cinder-volume-volume1-0" Nov 22 12:13:43 crc kubenswrapper[4772]: I1122 12:13:43.994733 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f05efc86-3c87-4c7c-8a65-03152b33376c-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"f05efc86-3c87-4c7c-8a65-03152b33376c\") " pod="openstack/cinder-volume-volume1-0" Nov 22 12:13:43 crc kubenswrapper[4772]: I1122 12:13:43.994728 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/f05efc86-3c87-4c7c-8a65-03152b33376c-dev\") pod \"cinder-volume-volume1-0\" (UID: \"f05efc86-3c87-4c7c-8a65-03152b33376c\") " pod="openstack/cinder-volume-volume1-0" Nov 22 12:13:43 crc kubenswrapper[4772]: I1122 12:13:43.994804 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/f05efc86-3c87-4c7c-8a65-03152b33376c-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"f05efc86-3c87-4c7c-8a65-03152b33376c\") " pod="openstack/cinder-volume-volume1-0" Nov 22 12:13:43 crc kubenswrapper[4772]: I1122 12:13:43.994835 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/f05efc86-3c87-4c7c-8a65-03152b33376c-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"f05efc86-3c87-4c7c-8a65-03152b33376c\") " pod="openstack/cinder-volume-volume1-0" Nov 22 12:13:44 crc kubenswrapper[4772]: I1122 12:13:43.999027 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f05efc86-3c87-4c7c-8a65-03152b33376c-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"f05efc86-3c87-4c7c-8a65-03152b33376c\") " pod="openstack/cinder-volume-volume1-0" Nov 22 12:13:44 crc kubenswrapper[4772]: I1122 12:13:44.001289 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f05efc86-3c87-4c7c-8a65-03152b33376c-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"f05efc86-3c87-4c7c-8a65-03152b33376c\") " pod="openstack/cinder-volume-volume1-0" Nov 22 12:13:44 crc kubenswrapper[4772]: I1122 12:13:44.002549 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f05efc86-3c87-4c7c-8a65-03152b33376c-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"f05efc86-3c87-4c7c-8a65-03152b33376c\") " pod="openstack/cinder-volume-volume1-0" Nov 22 12:13:44 crc kubenswrapper[4772]: I1122 12:13:44.004589 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/f05efc86-3c87-4c7c-8a65-03152b33376c-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"f05efc86-3c87-4c7c-8a65-03152b33376c\") " pod="openstack/cinder-volume-volume1-0" Nov 22 12:13:44 crc kubenswrapper[4772]: I1122 12:13:44.006292 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/f05efc86-3c87-4c7c-8a65-03152b33376c-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"f05efc86-3c87-4c7c-8a65-03152b33376c\") " pod="openstack/cinder-volume-volume1-0" Nov 22 12:13:44 crc kubenswrapper[4772]: I1122 12:13:44.016639 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtpx9\" (UniqueName: \"kubernetes.io/projected/f05efc86-3c87-4c7c-8a65-03152b33376c-kube-api-access-jtpx9\") pod \"cinder-volume-volume1-0\" (UID: \"f05efc86-3c87-4c7c-8a65-03152b33376c\") " pod="openstack/cinder-volume-volume1-0" Nov 22 12:13:44 crc kubenswrapper[4772]: I1122 12:13:44.080528 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Nov 22 12:13:44 crc kubenswrapper[4772]: I1122 12:13:44.519453 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"68f0a616-d7a6-4cec-bfac-b271ebf50025","Type":"ContainerStarted","Data":"6f0566fa0d4121e3e4bf5bc7663c668a4fc8d47cd6b9c3b99749b88cdbe42f1e"} Nov 22 12:13:44 crc kubenswrapper[4772]: I1122 12:13:44.547072 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.547032534 podStartE2EDuration="3.547032534s" podCreationTimestamp="2025-11-22 12:13:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:13:44.543305692 +0000 UTC m=+5744.782750186" watchObservedRunningTime="2025-11-22 12:13:44.547032534 +0000 UTC m=+5744.786477028" Nov 22 12:13:44 crc kubenswrapper[4772]: I1122 12:13:44.600781 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Nov 22 12:13:44 crc kubenswrapper[4772]: I1122 12:13:44.607015 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-backup-0" Nov 22 12:13:44 crc kubenswrapper[4772]: I1122 12:13:44.610196 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Nov 22 12:13:44 crc kubenswrapper[4772]: I1122 12:13:44.625593 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Nov 22 12:13:44 crc kubenswrapper[4772]: I1122 12:13:44.703559 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/784dd71b-f2ce-4ba9-9e18-b5af04ecb90b-dev\") pod \"cinder-backup-0\" (UID: \"784dd71b-f2ce-4ba9-9e18-b5af04ecb90b\") " pod="openstack/cinder-backup-0" Nov 22 12:13:44 crc kubenswrapper[4772]: I1122 12:13:44.703658 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/784dd71b-f2ce-4ba9-9e18-b5af04ecb90b-sys\") pod \"cinder-backup-0\" (UID: \"784dd71b-f2ce-4ba9-9e18-b5af04ecb90b\") " pod="openstack/cinder-backup-0" Nov 22 12:13:44 crc kubenswrapper[4772]: I1122 12:13:44.703683 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/784dd71b-f2ce-4ba9-9e18-b5af04ecb90b-run\") pod \"cinder-backup-0\" (UID: \"784dd71b-f2ce-4ba9-9e18-b5af04ecb90b\") " pod="openstack/cinder-backup-0" Nov 22 12:13:44 crc kubenswrapper[4772]: I1122 12:13:44.703707 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/784dd71b-f2ce-4ba9-9e18-b5af04ecb90b-config-data\") pod \"cinder-backup-0\" (UID: \"784dd71b-f2ce-4ba9-9e18-b5af04ecb90b\") " pod="openstack/cinder-backup-0" Nov 22 12:13:44 crc kubenswrapper[4772]: I1122 12:13:44.703742 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/784dd71b-f2ce-4ba9-9e18-b5af04ecb90b-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"784dd71b-f2ce-4ba9-9e18-b5af04ecb90b\") " pod="openstack/cinder-backup-0" Nov 22 12:13:44 crc kubenswrapper[4772]: I1122 12:13:44.703796 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/784dd71b-f2ce-4ba9-9e18-b5af04ecb90b-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"784dd71b-f2ce-4ba9-9e18-b5af04ecb90b\") " pod="openstack/cinder-backup-0" Nov 22 12:13:44 crc kubenswrapper[4772]: I1122 12:13:44.703823 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/784dd71b-f2ce-4ba9-9e18-b5af04ecb90b-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"784dd71b-f2ce-4ba9-9e18-b5af04ecb90b\") " pod="openstack/cinder-backup-0" Nov 22 12:13:44 crc kubenswrapper[4772]: I1122 12:13:44.703845 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/784dd71b-f2ce-4ba9-9e18-b5af04ecb90b-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"784dd71b-f2ce-4ba9-9e18-b5af04ecb90b\") " pod="openstack/cinder-backup-0" Nov 22 12:13:44 crc kubenswrapper[4772]: I1122 12:13:44.703864 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/784dd71b-f2ce-4ba9-9e18-b5af04ecb90b-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"784dd71b-f2ce-4ba9-9e18-b5af04ecb90b\") " pod="openstack/cinder-backup-0" Nov 22 12:13:44 crc kubenswrapper[4772]: I1122 12:13:44.703880 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/784dd71b-f2ce-4ba9-9e18-b5af04ecb90b-etc-nvme\") pod \"cinder-backup-0\" (UID: \"784dd71b-f2ce-4ba9-9e18-b5af04ecb90b\") " pod="openstack/cinder-backup-0" Nov 22 12:13:44 crc kubenswrapper[4772]: I1122 12:13:44.703979 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/784dd71b-f2ce-4ba9-9e18-b5af04ecb90b-config-data-custom\") pod \"cinder-backup-0\" (UID: \"784dd71b-f2ce-4ba9-9e18-b5af04ecb90b\") " pod="openstack/cinder-backup-0" Nov 22 12:13:44 crc kubenswrapper[4772]: I1122 12:13:44.704007 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/784dd71b-f2ce-4ba9-9e18-b5af04ecb90b-scripts\") pod \"cinder-backup-0\" (UID: \"784dd71b-f2ce-4ba9-9e18-b5af04ecb90b\") " pod="openstack/cinder-backup-0" Nov 22 12:13:44 crc kubenswrapper[4772]: I1122 12:13:44.704028 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/784dd71b-f2ce-4ba9-9e18-b5af04ecb90b-lib-modules\") pod \"cinder-backup-0\" (UID: \"784dd71b-f2ce-4ba9-9e18-b5af04ecb90b\") " pod="openstack/cinder-backup-0" Nov 22 12:13:44 crc kubenswrapper[4772]: I1122 12:13:44.704094 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/784dd71b-f2ce-4ba9-9e18-b5af04ecb90b-ceph\") pod \"cinder-backup-0\" (UID: \"784dd71b-f2ce-4ba9-9e18-b5af04ecb90b\") " pod="openstack/cinder-backup-0" Nov 22 12:13:44 crc kubenswrapper[4772]: I1122 12:13:44.704112 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwg9r\" (UniqueName: \"kubernetes.io/projected/784dd71b-f2ce-4ba9-9e18-b5af04ecb90b-kube-api-access-bwg9r\") pod \"cinder-backup-0\" (UID: \"784dd71b-f2ce-4ba9-9e18-b5af04ecb90b\") " pod="openstack/cinder-backup-0" Nov 22 12:13:44 crc kubenswrapper[4772]: I1122 12:13:44.704138 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/784dd71b-f2ce-4ba9-9e18-b5af04ecb90b-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"784dd71b-f2ce-4ba9-9e18-b5af04ecb90b\") " pod="openstack/cinder-backup-0" Nov 22 12:13:44 crc kubenswrapper[4772]: I1122 12:13:44.705495 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Nov 22 12:13:44 crc kubenswrapper[4772]: I1122 12:13:44.805821 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/784dd71b-f2ce-4ba9-9e18-b5af04ecb90b-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"784dd71b-f2ce-4ba9-9e18-b5af04ecb90b\") " pod="openstack/cinder-backup-0" Nov 22 12:13:44 crc kubenswrapper[4772]: I1122 12:13:44.805881 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/784dd71b-f2ce-4ba9-9e18-b5af04ecb90b-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"784dd71b-f2ce-4ba9-9e18-b5af04ecb90b\") " pod="openstack/cinder-backup-0" Nov 22 12:13:44 crc kubenswrapper[4772]: I1122 12:13:44.805901 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/784dd71b-f2ce-4ba9-9e18-b5af04ecb90b-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"784dd71b-f2ce-4ba9-9e18-b5af04ecb90b\") " pod="openstack/cinder-backup-0" Nov 22 12:13:44 crc kubenswrapper[4772]: I1122 12:13:44.805922 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/784dd71b-f2ce-4ba9-9e18-b5af04ecb90b-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"784dd71b-f2ce-4ba9-9e18-b5af04ecb90b\") " pod="openstack/cinder-backup-0" Nov 22 12:13:44 crc kubenswrapper[4772]: I1122 12:13:44.805998 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/784dd71b-f2ce-4ba9-9e18-b5af04ecb90b-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"784dd71b-f2ce-4ba9-9e18-b5af04ecb90b\") " pod="openstack/cinder-backup-0" Nov 22 12:13:44 crc kubenswrapper[4772]: I1122 12:13:44.806042 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/784dd71b-f2ce-4ba9-9e18-b5af04ecb90b-etc-nvme\") pod \"cinder-backup-0\" (UID: \"784dd71b-f2ce-4ba9-9e18-b5af04ecb90b\") " pod="openstack/cinder-backup-0" Nov 22 12:13:44 crc kubenswrapper[4772]: I1122 12:13:44.806023 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/784dd71b-f2ce-4ba9-9e18-b5af04ecb90b-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"784dd71b-f2ce-4ba9-9e18-b5af04ecb90b\") " pod="openstack/cinder-backup-0" Nov 22 12:13:44 crc kubenswrapper[4772]: I1122 12:13:44.806025 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/784dd71b-f2ce-4ba9-9e18-b5af04ecb90b-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"784dd71b-f2ce-4ba9-9e18-b5af04ecb90b\") " pod="openstack/cinder-backup-0" Nov 22 12:13:44 crc kubenswrapper[4772]: I1122 12:13:44.806127 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/784dd71b-f2ce-4ba9-9e18-b5af04ecb90b-etc-nvme\") pod \"cinder-backup-0\" (UID: \"784dd71b-f2ce-4ba9-9e18-b5af04ecb90b\") " pod="openstack/cinder-backup-0" Nov 22 12:13:44 crc kubenswrapper[4772]: I1122 12:13:44.806243 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/784dd71b-f2ce-4ba9-9e18-b5af04ecb90b-config-data-custom\") pod \"cinder-backup-0\" (UID: \"784dd71b-f2ce-4ba9-9e18-b5af04ecb90b\") " pod="openstack/cinder-backup-0" Nov 22 12:13:44 crc kubenswrapper[4772]: I1122 12:13:44.806274 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/784dd71b-f2ce-4ba9-9e18-b5af04ecb90b-scripts\") pod \"cinder-backup-0\" (UID: \"784dd71b-f2ce-4ba9-9e18-b5af04ecb90b\") " pod="openstack/cinder-backup-0" Nov 22 12:13:44 crc kubenswrapper[4772]: I1122 12:13:44.806320 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" 
(UniqueName: \"kubernetes.io/host-path/784dd71b-f2ce-4ba9-9e18-b5af04ecb90b-lib-modules\") pod \"cinder-backup-0\" (UID: \"784dd71b-f2ce-4ba9-9e18-b5af04ecb90b\") " pod="openstack/cinder-backup-0" Nov 22 12:13:44 crc kubenswrapper[4772]: I1122 12:13:44.806434 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/784dd71b-f2ce-4ba9-9e18-b5af04ecb90b-ceph\") pod \"cinder-backup-0\" (UID: \"784dd71b-f2ce-4ba9-9e18-b5af04ecb90b\") " pod="openstack/cinder-backup-0" Nov 22 12:13:44 crc kubenswrapper[4772]: I1122 12:13:44.806459 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwg9r\" (UniqueName: \"kubernetes.io/projected/784dd71b-f2ce-4ba9-9e18-b5af04ecb90b-kube-api-access-bwg9r\") pod \"cinder-backup-0\" (UID: \"784dd71b-f2ce-4ba9-9e18-b5af04ecb90b\") " pod="openstack/cinder-backup-0" Nov 22 12:13:44 crc kubenswrapper[4772]: I1122 12:13:44.806512 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/784dd71b-f2ce-4ba9-9e18-b5af04ecb90b-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"784dd71b-f2ce-4ba9-9e18-b5af04ecb90b\") " pod="openstack/cinder-backup-0" Nov 22 12:13:44 crc kubenswrapper[4772]: I1122 12:13:44.806592 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/784dd71b-f2ce-4ba9-9e18-b5af04ecb90b-dev\") pod \"cinder-backup-0\" (UID: \"784dd71b-f2ce-4ba9-9e18-b5af04ecb90b\") " pod="openstack/cinder-backup-0" Nov 22 12:13:44 crc kubenswrapper[4772]: I1122 12:13:44.806689 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/784dd71b-f2ce-4ba9-9e18-b5af04ecb90b-sys\") pod \"cinder-backup-0\" (UID: \"784dd71b-f2ce-4ba9-9e18-b5af04ecb90b\") " pod="openstack/cinder-backup-0" Nov 22 12:13:44 crc kubenswrapper[4772]: I1122 12:13:44.806713 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/784dd71b-f2ce-4ba9-9e18-b5af04ecb90b-run\") pod \"cinder-backup-0\" (UID: \"784dd71b-f2ce-4ba9-9e18-b5af04ecb90b\") " pod="openstack/cinder-backup-0" Nov 22 12:13:44 crc kubenswrapper[4772]: I1122 12:13:44.806758 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/784dd71b-f2ce-4ba9-9e18-b5af04ecb90b-config-data\") pod \"cinder-backup-0\" (UID: \"784dd71b-f2ce-4ba9-9e18-b5af04ecb90b\") " pod="openstack/cinder-backup-0" Nov 22 12:13:44 crc kubenswrapper[4772]: I1122 12:13:44.806801 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/784dd71b-f2ce-4ba9-9e18-b5af04ecb90b-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"784dd71b-f2ce-4ba9-9e18-b5af04ecb90b\") " pod="openstack/cinder-backup-0" Nov 22 12:13:44 crc kubenswrapper[4772]: I1122 12:13:44.806911 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/784dd71b-f2ce-4ba9-9e18-b5af04ecb90b-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"784dd71b-f2ce-4ba9-9e18-b5af04ecb90b\") " pod="openstack/cinder-backup-0" Nov 22 12:13:44 crc kubenswrapper[4772]: I1122 12:13:44.806942 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/784dd71b-f2ce-4ba9-9e18-b5af04ecb90b-lib-modules\") pod \"cinder-backup-0\" (UID: \"784dd71b-f2ce-4ba9-9e18-b5af04ecb90b\") " pod="openstack/cinder-backup-0" Nov 22 12:13:44 crc kubenswrapper[4772]: I1122 12:13:44.807090 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/784dd71b-f2ce-4ba9-9e18-b5af04ecb90b-dev\") pod \"cinder-backup-0\" (UID: \"784dd71b-f2ce-4ba9-9e18-b5af04ecb90b\") " pod="openstack/cinder-backup-0" Nov 22 12:13:44 crc kubenswrapper[4772]: I1122 12:13:44.807133 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/784dd71b-f2ce-4ba9-9e18-b5af04ecb90b-run\") pod \"cinder-backup-0\" (UID: \"784dd71b-f2ce-4ba9-9e18-b5af04ecb90b\") " pod="openstack/cinder-backup-0" Nov 22 12:13:44 crc kubenswrapper[4772]: I1122 12:13:44.807132 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/784dd71b-f2ce-4ba9-9e18-b5af04ecb90b-sys\") pod \"cinder-backup-0\" (UID: \"784dd71b-f2ce-4ba9-9e18-b5af04ecb90b\") " pod="openstack/cinder-backup-0" Nov 22 12:13:44 crc kubenswrapper[4772]: I1122 12:13:44.807130 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/784dd71b-f2ce-4ba9-9e18-b5af04ecb90b-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"784dd71b-f2ce-4ba9-9e18-b5af04ecb90b\") " pod="openstack/cinder-backup-0" Nov 22 12:13:44 crc kubenswrapper[4772]: I1122 12:13:44.815599 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/784dd71b-f2ce-4ba9-9e18-b5af04ecb90b-ceph\") pod \"cinder-backup-0\" (UID: \"784dd71b-f2ce-4ba9-9e18-b5af04ecb90b\") " pod="openstack/cinder-backup-0" Nov 22 12:13:44 crc kubenswrapper[4772]: I1122 12:13:44.815923 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/784dd71b-f2ce-4ba9-9e18-b5af04ecb90b-config-data\") pod \"cinder-backup-0\" (UID: \"784dd71b-f2ce-4ba9-9e18-b5af04ecb90b\") " pod="openstack/cinder-backup-0" Nov 22 12:13:44 crc kubenswrapper[4772]: I1122 12:13:44.819687 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/784dd71b-f2ce-4ba9-9e18-b5af04ecb90b-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"784dd71b-f2ce-4ba9-9e18-b5af04ecb90b\") " pod="openstack/cinder-backup-0" Nov 22 12:13:44 crc kubenswrapper[4772]: I1122 12:13:44.819763 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/784dd71b-f2ce-4ba9-9e18-b5af04ecb90b-config-data-custom\") pod \"cinder-backup-0\" (UID: \"784dd71b-f2ce-4ba9-9e18-b5af04ecb90b\") " pod="openstack/cinder-backup-0" Nov 22 12:13:44 crc kubenswrapper[4772]: I1122 12:13:44.820973 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/784dd71b-f2ce-4ba9-9e18-b5af04ecb90b-scripts\") pod \"cinder-backup-0\" (UID: \"784dd71b-f2ce-4ba9-9e18-b5af04ecb90b\") " pod="openstack/cinder-backup-0" Nov 22 12:13:44 crc kubenswrapper[4772]: I1122 12:13:44.827690 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwg9r\" (UniqueName: \"kubernetes.io/projected/784dd71b-f2ce-4ba9-9e18-b5af04ecb90b-kube-api-access-bwg9r\") pod 
\"cinder-backup-0\" (UID: \"784dd71b-f2ce-4ba9-9e18-b5af04ecb90b\") " pod="openstack/cinder-backup-0" Nov 22 12:13:44 crc kubenswrapper[4772]: I1122 12:13:44.927289 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Nov 22 12:13:45 crc kubenswrapper[4772]: I1122 12:13:45.472355 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Nov 22 12:13:45 crc kubenswrapper[4772]: I1122 12:13:45.540368 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"f05efc86-3c87-4c7c-8a65-03152b33376c","Type":"ContainerStarted","Data":"919603fdb9447799cca764782c24ebf4df635bb81fcea2fb5116f5996647b223"} Nov 22 12:13:45 crc kubenswrapper[4772]: I1122 12:13:45.549609 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"784dd71b-f2ce-4ba9-9e18-b5af04ecb90b","Type":"ContainerStarted","Data":"2fd31d180a1f4b5d64cd019958dfad5cdd7a04d9ea640ddbf885aacb15c62067"} Nov 22 12:13:46 crc kubenswrapper[4772]: I1122 12:13:46.483774 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="310d148b-078c-4ff8-8d8a-12f90fd9c880" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.1.80:8776/healthcheck\": read tcp 10.217.0.2:58304->10.217.1.80:8776: read: connection reset by peer" Nov 22 12:13:46 crc kubenswrapper[4772]: I1122 12:13:46.562775 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"f05efc86-3c87-4c7c-8a65-03152b33376c","Type":"ContainerStarted","Data":"716767dd6747e97a6ca8b88b9ca1f2014caee0d928d5b7c88529c6baa3372295"} Nov 22 12:13:46 crc kubenswrapper[4772]: I1122 12:13:46.562823 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"f05efc86-3c87-4c7c-8a65-03152b33376c","Type":"ContainerStarted","Data":"0e21a2600274ee363b7cb4354b3fcf31a38707e27cb783fe8b2e70bc84f05511"} Nov 22 12:13:46 crc kubenswrapper[4772]: I1122 12:13:46.567220 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"784dd71b-f2ce-4ba9-9e18-b5af04ecb90b","Type":"ContainerStarted","Data":"1cc431be14c607e24c1126fb5ffd6d38d9db9b2b0cc4ee969604eee4b0b9cef4"} Nov 22 12:13:46 crc kubenswrapper[4772]: I1122 12:13:46.567248 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"784dd71b-f2ce-4ba9-9e18-b5af04ecb90b","Type":"ContainerStarted","Data":"d010f91335559dad074d3e686c4fe7075a09172a033207d7dbabe7a8f529b5ff"} Nov 22 12:13:46 crc kubenswrapper[4772]: I1122 12:13:46.588674 4772 generic.go:334] "Generic (PLEG): container finished" podID="310d148b-078c-4ff8-8d8a-12f90fd9c880" containerID="210fe5a88ae34302aa488c93e335869582fd4cd8fb45b9f07e1c657433ce2c44" exitCode=0 Nov 22 12:13:46 crc kubenswrapper[4772]: I1122 12:13:46.588740 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"310d148b-078c-4ff8-8d8a-12f90fd9c880","Type":"ContainerDied","Data":"210fe5a88ae34302aa488c93e335869582fd4cd8fb45b9f07e1c657433ce2c44"} Nov 22 12:13:46 crc kubenswrapper[4772]: I1122 12:13:46.607185 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-volume1-0" podStartSLOduration=2.951535973 podStartE2EDuration="3.607159987s" podCreationTimestamp="2025-11-22 12:13:43 +0000 UTC" firstStartedPulling="2025-11-22 12:13:44.713898919 +0000 UTC 
m=+5744.953343413" lastFinishedPulling="2025-11-22 12:13:45.369522923 +0000 UTC m=+5745.608967427" observedRunningTime="2025-11-22 12:13:46.591669981 +0000 UTC m=+5746.831114525" watchObservedRunningTime="2025-11-22 12:13:46.607159987 +0000 UTC m=+5746.846604481" Nov 22 12:13:46 crc kubenswrapper[4772]: I1122 12:13:46.625629 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=2.070965897 podStartE2EDuration="2.625611196s" podCreationTimestamp="2025-11-22 12:13:44 +0000 UTC" firstStartedPulling="2025-11-22 12:13:45.490456454 +0000 UTC m=+5745.729900948" lastFinishedPulling="2025-11-22 12:13:46.045101753 +0000 UTC m=+5746.284546247" observedRunningTime="2025-11-22 12:13:46.618240683 +0000 UTC m=+5746.857685177" watchObservedRunningTime="2025-11-22 12:13:46.625611196 +0000 UTC m=+5746.865055690" Nov 22 12:13:46 crc kubenswrapper[4772]: I1122 12:13:46.742690 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 22 12:13:46 crc kubenswrapper[4772]: I1122 12:13:46.754261 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 22 12:13:46 crc kubenswrapper[4772]: I1122 12:13:46.772278 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 22 12:13:46 crc kubenswrapper[4772]: I1122 12:13:46.992618 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 22 12:13:47 crc kubenswrapper[4772]: I1122 12:13:47.057947 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/310d148b-078c-4ff8-8d8a-12f90fd9c880-etc-machine-id\") pod \"310d148b-078c-4ff8-8d8a-12f90fd9c880\" (UID: \"310d148b-078c-4ff8-8d8a-12f90fd9c880\") " Nov 22 12:13:47 crc kubenswrapper[4772]: I1122 12:13:47.058096 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/310d148b-078c-4ff8-8d8a-12f90fd9c880-config-data-custom\") pod \"310d148b-078c-4ff8-8d8a-12f90fd9c880\" (UID: \"310d148b-078c-4ff8-8d8a-12f90fd9c880\") " Nov 22 12:13:47 crc kubenswrapper[4772]: I1122 12:13:47.058201 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/310d148b-078c-4ff8-8d8a-12f90fd9c880-scripts\") pod \"310d148b-078c-4ff8-8d8a-12f90fd9c880\" (UID: \"310d148b-078c-4ff8-8d8a-12f90fd9c880\") " Nov 22 12:13:47 crc kubenswrapper[4772]: I1122 12:13:47.058186 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/310d148b-078c-4ff8-8d8a-12f90fd9c880-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "310d148b-078c-4ff8-8d8a-12f90fd9c880" (UID: "310d148b-078c-4ff8-8d8a-12f90fd9c880"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 12:13:47 crc kubenswrapper[4772]: I1122 12:13:47.058318 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/310d148b-078c-4ff8-8d8a-12f90fd9c880-combined-ca-bundle\") pod \"310d148b-078c-4ff8-8d8a-12f90fd9c880\" (UID: \"310d148b-078c-4ff8-8d8a-12f90fd9c880\") " Nov 22 12:13:47 crc kubenswrapper[4772]: I1122 12:13:47.058388 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/310d148b-078c-4ff8-8d8a-12f90fd9c880-config-data\") pod \"310d148b-078c-4ff8-8d8a-12f90fd9c880\" (UID: \"310d148b-078c-4ff8-8d8a-12f90fd9c880\") " Nov 22 12:13:47 crc kubenswrapper[4772]: I1122 12:13:47.058415 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/310d148b-078c-4ff8-8d8a-12f90fd9c880-logs\") pod \"310d148b-078c-4ff8-8d8a-12f90fd9c880\" (UID: \"310d148b-078c-4ff8-8d8a-12f90fd9c880\") " Nov 22 12:13:47 crc kubenswrapper[4772]: I1122 12:13:47.058439 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8h2c\" (UniqueName: \"kubernetes.io/projected/310d148b-078c-4ff8-8d8a-12f90fd9c880-kube-api-access-d8h2c\") pod \"310d148b-078c-4ff8-8d8a-12f90fd9c880\" (UID: \"310d148b-078c-4ff8-8d8a-12f90fd9c880\") " Nov 22 12:13:47 crc kubenswrapper[4772]: I1122 12:13:47.058817 4772 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/310d148b-078c-4ff8-8d8a-12f90fd9c880-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 22 12:13:47 crc kubenswrapper[4772]: I1122 12:13:47.063001 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 22 12:13:47 crc kubenswrapper[4772]: I1122 12:13:47.063107 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 22 12:13:47 crc kubenswrapper[4772]: I1122 12:13:47.063425 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/310d148b-078c-4ff8-8d8a-12f90fd9c880-logs" (OuterVolumeSpecName: "logs") pod "310d148b-078c-4ff8-8d8a-12f90fd9c880" (UID: "310d148b-078c-4ff8-8d8a-12f90fd9c880"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 12:13:47 crc kubenswrapper[4772]: I1122 12:13:47.063520 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 22 12:13:47 crc kubenswrapper[4772]: I1122 12:13:47.063748 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 22 12:13:47 crc kubenswrapper[4772]: I1122 12:13:47.067133 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/310d148b-078c-4ff8-8d8a-12f90fd9c880-kube-api-access-d8h2c" (OuterVolumeSpecName: "kube-api-access-d8h2c") pod "310d148b-078c-4ff8-8d8a-12f90fd9c880" (UID: "310d148b-078c-4ff8-8d8a-12f90fd9c880"). InnerVolumeSpecName "kube-api-access-d8h2c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:13:47 crc kubenswrapper[4772]: I1122 12:13:47.073243 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/310d148b-078c-4ff8-8d8a-12f90fd9c880-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "310d148b-078c-4ff8-8d8a-12f90fd9c880" (UID: "310d148b-078c-4ff8-8d8a-12f90fd9c880"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:13:47 crc kubenswrapper[4772]: I1122 12:13:47.075521 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 22 12:13:47 crc kubenswrapper[4772]: I1122 12:13:47.075717 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 22 12:13:47 crc kubenswrapper[4772]: I1122 12:13:47.078319 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/310d148b-078c-4ff8-8d8a-12f90fd9c880-scripts" (OuterVolumeSpecName: "scripts") pod "310d148b-078c-4ff8-8d8a-12f90fd9c880" (UID: "310d148b-078c-4ff8-8d8a-12f90fd9c880"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:13:47 crc kubenswrapper[4772]: I1122 12:13:47.092906 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/310d148b-078c-4ff8-8d8a-12f90fd9c880-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "310d148b-078c-4ff8-8d8a-12f90fd9c880" (UID: "310d148b-078c-4ff8-8d8a-12f90fd9c880"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:13:47 crc kubenswrapper[4772]: I1122 12:13:47.157759 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/310d148b-078c-4ff8-8d8a-12f90fd9c880-config-data" (OuterVolumeSpecName: "config-data") pod "310d148b-078c-4ff8-8d8a-12f90fd9c880" (UID: "310d148b-078c-4ff8-8d8a-12f90fd9c880"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:13:47 crc kubenswrapper[4772]: I1122 12:13:47.162181 4772 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/310d148b-078c-4ff8-8d8a-12f90fd9c880-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 22 12:13:47 crc kubenswrapper[4772]: I1122 12:13:47.162249 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/310d148b-078c-4ff8-8d8a-12f90fd9c880-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 12:13:47 crc kubenswrapper[4772]: I1122 12:13:47.162266 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/310d148b-078c-4ff8-8d8a-12f90fd9c880-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 12:13:47 crc kubenswrapper[4772]: I1122 12:13:47.162278 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/310d148b-078c-4ff8-8d8a-12f90fd9c880-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 12:13:47 crc kubenswrapper[4772]: I1122 12:13:47.162291 4772 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/310d148b-078c-4ff8-8d8a-12f90fd9c880-logs\") on node \"crc\" DevicePath \"\"" Nov 22 12:13:47 crc kubenswrapper[4772]: I1122 12:13:47.162303 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8h2c\" (UniqueName: \"kubernetes.io/projected/310d148b-078c-4ff8-8d8a-12f90fd9c880-kube-api-access-d8h2c\") on node \"crc\" DevicePath \"\"" Nov 22 12:13:47 crc kubenswrapper[4772]: I1122 12:13:47.251856 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 22 12:13:47 crc kubenswrapper[4772]: I1122 12:13:47.601892 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"310d148b-078c-4ff8-8d8a-12f90fd9c880","Type":"ContainerDied","Data":"4eb899776d8a53b672306c8c2497d09c29dd7a57072c788a449a1bcf5a383969"} Nov 22 12:13:47 crc kubenswrapper[4772]: I1122 12:13:47.602765 4772 scope.go:117] "RemoveContainer" containerID="210fe5a88ae34302aa488c93e335869582fd4cd8fb45b9f07e1c657433ce2c44" Nov 22 12:13:47 crc kubenswrapper[4772]: I1122 12:13:47.602700 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 22 12:13:47 crc kubenswrapper[4772]: I1122 12:13:47.610543 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 22 12:13:47 crc kubenswrapper[4772]: I1122 12:13:47.641510 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 22 12:13:47 crc kubenswrapper[4772]: I1122 12:13:47.645726 4772 scope.go:117] "RemoveContainer" containerID="9d6d5774f134c11e38d7ccef20f4a69ec7fdd6645993146cca8997385f87e2aa" Nov 22 12:13:47 crc kubenswrapper[4772]: I1122 12:13:47.661281 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Nov 22 12:13:47 crc kubenswrapper[4772]: I1122 12:13:47.683114 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 22 12:13:47 crc kubenswrapper[4772]: E1122 12:13:47.683597 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="310d148b-078c-4ff8-8d8a-12f90fd9c880" containerName="cinder-api" Nov 22 12:13:47 crc kubenswrapper[4772]: I1122 12:13:47.683610 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="310d148b-078c-4ff8-8d8a-12f90fd9c880" containerName="cinder-api" Nov 22 12:13:47 crc kubenswrapper[4772]: E1122 12:13:47.683640 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="310d148b-078c-4ff8-8d8a-12f90fd9c880" containerName="cinder-api-log" Nov 22 12:13:47 crc kubenswrapper[4772]: I1122 12:13:47.683647 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="310d148b-078c-4ff8-8d8a-12f90fd9c880" containerName="cinder-api-log" Nov 22 12:13:47 crc kubenswrapper[4772]: I1122 12:13:47.686527 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="310d148b-078c-4ff8-8d8a-12f90fd9c880" containerName="cinder-api" Nov 22 12:13:47 crc kubenswrapper[4772]: I1122 12:13:47.686553 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="310d148b-078c-4ff8-8d8a-12f90fd9c880" containerName="cinder-api-log" Nov 22 12:13:47 crc kubenswrapper[4772]: I1122 12:13:47.687618 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 22 12:13:47 crc kubenswrapper[4772]: I1122 12:13:47.694018 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 22 12:13:47 crc kubenswrapper[4772]: I1122 12:13:47.696594 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 22 12:13:47 crc kubenswrapper[4772]: I1122 12:13:47.803094 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad882bee-d513-428f-b177-0a6412268f7a-scripts\") pod \"cinder-api-0\" (UID: \"ad882bee-d513-428f-b177-0a6412268f7a\") " pod="openstack/cinder-api-0" Nov 22 12:13:47 crc kubenswrapper[4772]: I1122 12:13:47.803153 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad882bee-d513-428f-b177-0a6412268f7a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ad882bee-d513-428f-b177-0a6412268f7a\") " pod="openstack/cinder-api-0" Nov 22 12:13:47 crc kubenswrapper[4772]: I1122 12:13:47.803234 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vs4w\" (UniqueName: \"kubernetes.io/projected/ad882bee-d513-428f-b177-0a6412268f7a-kube-api-access-5vs4w\") pod \"cinder-api-0\" (UID: \"ad882bee-d513-428f-b177-0a6412268f7a\") " pod="openstack/cinder-api-0" Nov 22 12:13:47 crc kubenswrapper[4772]: I1122 12:13:47.803285 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ad882bee-d513-428f-b177-0a6412268f7a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ad882bee-d513-428f-b177-0a6412268f7a\") " pod="openstack/cinder-api-0" Nov 22 12:13:47 crc kubenswrapper[4772]: I1122 12:13:47.803308 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad882bee-d513-428f-b177-0a6412268f7a-logs\") pod \"cinder-api-0\" (UID: \"ad882bee-d513-428f-b177-0a6412268f7a\") " pod="openstack/cinder-api-0" Nov 22 12:13:47 crc kubenswrapper[4772]: I1122 12:13:47.803345 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ad882bee-d513-428f-b177-0a6412268f7a-config-data-custom\") pod \"cinder-api-0\" (UID: \"ad882bee-d513-428f-b177-0a6412268f7a\") " pod="openstack/cinder-api-0" Nov 22 12:13:47 crc kubenswrapper[4772]: I1122 12:13:47.803371 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad882bee-d513-428f-b177-0a6412268f7a-config-data\") pod \"cinder-api-0\" (UID: \"ad882bee-d513-428f-b177-0a6412268f7a\") " pod="openstack/cinder-api-0" Nov 22 12:13:47 crc kubenswrapper[4772]: I1122 12:13:47.905253 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ad882bee-d513-428f-b177-0a6412268f7a-config-data-custom\") pod \"cinder-api-0\" (UID: \"ad882bee-d513-428f-b177-0a6412268f7a\") " pod="openstack/cinder-api-0" Nov 22 12:13:47 crc kubenswrapper[4772]: I1122 12:13:47.905328 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ad882bee-d513-428f-b177-0a6412268f7a-config-data\") pod \"cinder-api-0\" (UID: \"ad882bee-d513-428f-b177-0a6412268f7a\") " pod="openstack/cinder-api-0" Nov 22 12:13:47 crc kubenswrapper[4772]: I1122 12:13:47.905413 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad882bee-d513-428f-b177-0a6412268f7a-scripts\") pod \"cinder-api-0\" (UID: \"ad882bee-d513-428f-b177-0a6412268f7a\") " pod="openstack/cinder-api-0" Nov 22 12:13:47 crc kubenswrapper[4772]: I1122 12:13:47.905431 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad882bee-d513-428f-b177-0a6412268f7a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ad882bee-d513-428f-b177-0a6412268f7a\") " pod="openstack/cinder-api-0" Nov 22 12:13:47 crc kubenswrapper[4772]: I1122 12:13:47.905495 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vs4w\" (UniqueName: \"kubernetes.io/projected/ad882bee-d513-428f-b177-0a6412268f7a-kube-api-access-5vs4w\") pod \"cinder-api-0\" (UID: \"ad882bee-d513-428f-b177-0a6412268f7a\") " pod="openstack/cinder-api-0" Nov 22 12:13:47 crc kubenswrapper[4772]: I1122 12:13:47.905557 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ad882bee-d513-428f-b177-0a6412268f7a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ad882bee-d513-428f-b177-0a6412268f7a\") " pod="openstack/cinder-api-0" Nov 22 12:13:47 crc kubenswrapper[4772]: I1122 12:13:47.905583 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad882bee-d513-428f-b177-0a6412268f7a-logs\") pod \"cinder-api-0\" (UID: \"ad882bee-d513-428f-b177-0a6412268f7a\") " pod="openstack/cinder-api-0" Nov 22 12:13:47 crc kubenswrapper[4772]: I1122 12:13:47.905746 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ad882bee-d513-428f-b177-0a6412268f7a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ad882bee-d513-428f-b177-0a6412268f7a\") " pod="openstack/cinder-api-0" Nov 22 12:13:47 crc kubenswrapper[4772]: I1122 12:13:47.905991 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad882bee-d513-428f-b177-0a6412268f7a-logs\") pod \"cinder-api-0\" (UID: \"ad882bee-d513-428f-b177-0a6412268f7a\") " pod="openstack/cinder-api-0" Nov 22 12:13:47 crc kubenswrapper[4772]: I1122 12:13:47.914279 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad882bee-d513-428f-b177-0a6412268f7a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ad882bee-d513-428f-b177-0a6412268f7a\") " pod="openstack/cinder-api-0" Nov 22 12:13:47 crc kubenswrapper[4772]: I1122 12:13:47.915579 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ad882bee-d513-428f-b177-0a6412268f7a-config-data-custom\") pod \"cinder-api-0\" (UID: \"ad882bee-d513-428f-b177-0a6412268f7a\") " pod="openstack/cinder-api-0" Nov 22 12:13:47 crc kubenswrapper[4772]: I1122 12:13:47.926917 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ad882bee-d513-428f-b177-0a6412268f7a-config-data\") pod \"cinder-api-0\" (UID: \"ad882bee-d513-428f-b177-0a6412268f7a\") " pod="openstack/cinder-api-0" Nov 22 12:13:47 crc kubenswrapper[4772]: I1122 12:13:47.927514 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vs4w\" (UniqueName: \"kubernetes.io/projected/ad882bee-d513-428f-b177-0a6412268f7a-kube-api-access-5vs4w\") pod \"cinder-api-0\" (UID: \"ad882bee-d513-428f-b177-0a6412268f7a\") " pod="openstack/cinder-api-0" Nov 22 12:13:47 crc kubenswrapper[4772]: I1122 12:13:47.927978 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad882bee-d513-428f-b177-0a6412268f7a-scripts\") pod \"cinder-api-0\" (UID: \"ad882bee-d513-428f-b177-0a6412268f7a\") " pod="openstack/cinder-api-0" Nov 22 12:13:48 crc kubenswrapper[4772]: I1122 12:13:48.025902 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 22 12:13:48 crc kubenswrapper[4772]: I1122 12:13:48.498409 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 22 12:13:48 crc kubenswrapper[4772]: W1122 12:13:48.502683 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad882bee_d513_428f_b177_0a6412268f7a.slice/crio-c864d2f5567e39cb3551ca5de36ddbb2179d1398c23b504101dee310119908fe WatchSource:0}: Error finding container c864d2f5567e39cb3551ca5de36ddbb2179d1398c23b504101dee310119908fe: Status 404 returned error can't find the container with id c864d2f5567e39cb3551ca5de36ddbb2179d1398c23b504101dee310119908fe Nov 22 12:13:48 crc kubenswrapper[4772]: I1122 12:13:48.618476 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ad882bee-d513-428f-b177-0a6412268f7a","Type":"ContainerStarted","Data":"c864d2f5567e39cb3551ca5de36ddbb2179d1398c23b504101dee310119908fe"} Nov 22 12:13:49 crc kubenswrapper[4772]: I1122 12:13:49.080816 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0" Nov 22 12:13:49 crc kubenswrapper[4772]: I1122 12:13:49.437911 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="310d148b-078c-4ff8-8d8a-12f90fd9c880" path="/var/lib/kubelet/pods/310d148b-078c-4ff8-8d8a-12f90fd9c880/volumes" Nov 22 12:13:50 crc kubenswrapper[4772]: I1122 12:13:49.634676 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ad882bee-d513-428f-b177-0a6412268f7a","Type":"ContainerStarted","Data":"76618412e6aab745e84e258edee7ff2128e3931dc81e5d1dfca1ae40677061f0"} Nov 22 12:13:50 crc kubenswrapper[4772]: I1122 12:13:49.928163 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Nov 22 12:13:50 crc kubenswrapper[4772]: I1122 12:13:50.647220 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ad882bee-d513-428f-b177-0a6412268f7a","Type":"ContainerStarted","Data":"1aaa9f7b3e9efc64389bc565e5f0b0f0e6281a008e72c2e8d33f96c015853f53"} Nov 22 12:13:50 crc kubenswrapper[4772]: I1122 12:13:50.648044 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 22 12:13:50 crc kubenswrapper[4772]: I1122 12:13:50.672079 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" 
podStartSLOduration=3.672034484 podStartE2EDuration="3.672034484s" podCreationTimestamp="2025-11-22 12:13:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:13:50.66667262 +0000 UTC m=+5750.906117134" watchObservedRunningTime="2025-11-22 12:13:50.672034484 +0000 UTC m=+5750.911478988" Nov 22 12:13:52 crc kubenswrapper[4772]: I1122 12:13:52.470744 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 22 12:13:52 crc kubenswrapper[4772]: I1122 12:13:52.540905 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 22 12:13:52 crc kubenswrapper[4772]: I1122 12:13:52.668114 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="68f0a616-d7a6-4cec-bfac-b271ebf50025" containerName="cinder-scheduler" containerID="cri-o://fabc44890bae019bc93a9b616220deb5ddaf4cab99d125c39040e620c709e6d5" gracePeriod=30 Nov 22 12:13:52 crc kubenswrapper[4772]: I1122 12:13:52.668300 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="68f0a616-d7a6-4cec-bfac-b271ebf50025" containerName="probe" containerID="cri-o://6f0566fa0d4121e3e4bf5bc7663c668a4fc8d47cd6b9c3b99749b88cdbe42f1e" gracePeriod=30 Nov 22 12:13:53 crc kubenswrapper[4772]: I1122 12:13:53.686984 4772 generic.go:334] "Generic (PLEG): container finished" podID="68f0a616-d7a6-4cec-bfac-b271ebf50025" containerID="6f0566fa0d4121e3e4bf5bc7663c668a4fc8d47cd6b9c3b99749b88cdbe42f1e" exitCode=0 Nov 22 12:13:53 crc kubenswrapper[4772]: I1122 12:13:53.687094 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"68f0a616-d7a6-4cec-bfac-b271ebf50025","Type":"ContainerDied","Data":"6f0566fa0d4121e3e4bf5bc7663c668a4fc8d47cd6b9c3b99749b88cdbe42f1e"} Nov 22 12:13:54 crc kubenswrapper[4772]: I1122 12:13:54.285735 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-volume1-0" Nov 22 12:13:54 crc kubenswrapper[4772]: I1122 12:13:54.697333 4772 generic.go:334] "Generic (PLEG): container finished" podID="68f0a616-d7a6-4cec-bfac-b271ebf50025" containerID="fabc44890bae019bc93a9b616220deb5ddaf4cab99d125c39040e620c709e6d5" exitCode=0 Nov 22 12:13:54 crc kubenswrapper[4772]: I1122 12:13:54.697392 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"68f0a616-d7a6-4cec-bfac-b271ebf50025","Type":"ContainerDied","Data":"fabc44890bae019bc93a9b616220deb5ddaf4cab99d125c39040e620c709e6d5"} Nov 22 12:13:54 crc kubenswrapper[4772]: I1122 12:13:54.997785 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 22 12:13:55 crc kubenswrapper[4772]: I1122 12:13:55.071905 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68f0a616-d7a6-4cec-bfac-b271ebf50025-config-data\") pod \"68f0a616-d7a6-4cec-bfac-b271ebf50025\" (UID: \"68f0a616-d7a6-4cec-bfac-b271ebf50025\") " Nov 22 12:13:55 crc kubenswrapper[4772]: I1122 12:13:55.071967 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtcwp\" (UniqueName: \"kubernetes.io/projected/68f0a616-d7a6-4cec-bfac-b271ebf50025-kube-api-access-dtcwp\") pod \"68f0a616-d7a6-4cec-bfac-b271ebf50025\" (UID: \"68f0a616-d7a6-4cec-bfac-b271ebf50025\") " Nov 22 12:13:55 crc kubenswrapper[4772]: I1122 12:13:55.072083 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68f0a616-d7a6-4cec-bfac-b271ebf50025-combined-ca-bundle\") pod \"68f0a616-d7a6-4cec-bfac-b271ebf50025\" (UID: \"68f0a616-d7a6-4cec-bfac-b271ebf50025\") " Nov 22 12:13:55 crc kubenswrapper[4772]: I1122 12:13:55.072116 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/68f0a616-d7a6-4cec-bfac-b271ebf50025-config-data-custom\") pod \"68f0a616-d7a6-4cec-bfac-b271ebf50025\" (UID: \"68f0a616-d7a6-4cec-bfac-b271ebf50025\") " Nov 22 12:13:55 crc kubenswrapper[4772]: I1122 12:13:55.072138 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68f0a616-d7a6-4cec-bfac-b271ebf50025-scripts\") pod \"68f0a616-d7a6-4cec-bfac-b271ebf50025\" (UID: \"68f0a616-d7a6-4cec-bfac-b271ebf50025\") " Nov 22 12:13:55 crc kubenswrapper[4772]: I1122 12:13:55.072260 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/68f0a616-d7a6-4cec-bfac-b271ebf50025-etc-machine-id\") pod \"68f0a616-d7a6-4cec-bfac-b271ebf50025\" (UID: \"68f0a616-d7a6-4cec-bfac-b271ebf50025\") " Nov 22 12:13:55 crc kubenswrapper[4772]: I1122 12:13:55.072942 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68f0a616-d7a6-4cec-bfac-b271ebf50025-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "68f0a616-d7a6-4cec-bfac-b271ebf50025" (UID: "68f0a616-d7a6-4cec-bfac-b271ebf50025"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 12:13:55 crc kubenswrapper[4772]: I1122 12:13:55.078791 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68f0a616-d7a6-4cec-bfac-b271ebf50025-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "68f0a616-d7a6-4cec-bfac-b271ebf50025" (UID: "68f0a616-d7a6-4cec-bfac-b271ebf50025"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:13:55 crc kubenswrapper[4772]: I1122 12:13:55.083006 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68f0a616-d7a6-4cec-bfac-b271ebf50025-kube-api-access-dtcwp" (OuterVolumeSpecName: "kube-api-access-dtcwp") pod "68f0a616-d7a6-4cec-bfac-b271ebf50025" (UID: "68f0a616-d7a6-4cec-bfac-b271ebf50025"). InnerVolumeSpecName "kube-api-access-dtcwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:13:55 crc kubenswrapper[4772]: I1122 12:13:55.092337 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68f0a616-d7a6-4cec-bfac-b271ebf50025-scripts" (OuterVolumeSpecName: "scripts") pod "68f0a616-d7a6-4cec-bfac-b271ebf50025" (UID: "68f0a616-d7a6-4cec-bfac-b271ebf50025"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:13:55 crc kubenswrapper[4772]: I1122 12:13:55.181451 4772 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/68f0a616-d7a6-4cec-bfac-b271ebf50025-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 22 12:13:55 crc kubenswrapper[4772]: I1122 12:13:55.181507 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68f0a616-d7a6-4cec-bfac-b271ebf50025-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 12:13:55 crc kubenswrapper[4772]: I1122 12:13:55.181521 4772 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/68f0a616-d7a6-4cec-bfac-b271ebf50025-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 22 12:13:55 crc kubenswrapper[4772]: I1122 12:13:55.181534 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtcwp\" (UniqueName: \"kubernetes.io/projected/68f0a616-d7a6-4cec-bfac-b271ebf50025-kube-api-access-dtcwp\") on node \"crc\" DevicePath \"\"" Nov 22 12:13:55 crc kubenswrapper[4772]: I1122 12:13:55.230686 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68f0a616-d7a6-4cec-bfac-b271ebf50025-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "68f0a616-d7a6-4cec-bfac-b271ebf50025" (UID: "68f0a616-d7a6-4cec-bfac-b271ebf50025"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:13:55 crc kubenswrapper[4772]: I1122 12:13:55.273862 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68f0a616-d7a6-4cec-bfac-b271ebf50025-config-data" (OuterVolumeSpecName: "config-data") pod "68f0a616-d7a6-4cec-bfac-b271ebf50025" (UID: "68f0a616-d7a6-4cec-bfac-b271ebf50025"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:13:55 crc kubenswrapper[4772]: I1122 12:13:55.283374 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68f0a616-d7a6-4cec-bfac-b271ebf50025-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 12:13:55 crc kubenswrapper[4772]: I1122 12:13:55.283418 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68f0a616-d7a6-4cec-bfac-b271ebf50025-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 12:13:55 crc kubenswrapper[4772]: I1122 12:13:55.329668 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Nov 22 12:13:55 crc kubenswrapper[4772]: I1122 12:13:55.710164 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"68f0a616-d7a6-4cec-bfac-b271ebf50025","Type":"ContainerDied","Data":"ee42d101d2ea507ba02164c3b91dd8fafef67ce0e3b50ee7eb2cc958bdd25a4b"} Nov 22 12:13:55 crc kubenswrapper[4772]: I1122 12:13:55.710276 4772 scope.go:117] "RemoveContainer" containerID="6f0566fa0d4121e3e4bf5bc7663c668a4fc8d47cd6b9c3b99749b88cdbe42f1e" Nov 22 12:13:55 crc kubenswrapper[4772]: I1122 12:13:55.710428 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 22 12:13:55 crc kubenswrapper[4772]: I1122 12:13:55.744363 4772 scope.go:117] "RemoveContainer" containerID="fabc44890bae019bc93a9b616220deb5ddaf4cab99d125c39040e620c709e6d5" Nov 22 12:13:55 crc kubenswrapper[4772]: I1122 12:13:55.750981 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 22 12:13:55 crc kubenswrapper[4772]: I1122 12:13:55.760444 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 22 12:13:55 crc kubenswrapper[4772]: I1122 12:13:55.781079 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 22 12:13:55 crc kubenswrapper[4772]: E1122 12:13:55.781823 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68f0a616-d7a6-4cec-bfac-b271ebf50025" containerName="cinder-scheduler" Nov 22 12:13:55 crc kubenswrapper[4772]: I1122 12:13:55.781901 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="68f0a616-d7a6-4cec-bfac-b271ebf50025" containerName="cinder-scheduler" Nov 22 12:13:55 crc kubenswrapper[4772]: E1122 12:13:55.782003 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68f0a616-d7a6-4cec-bfac-b271ebf50025" containerName="probe" Nov 22 12:13:55 crc kubenswrapper[4772]: I1122 12:13:55.782078 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="68f0a616-d7a6-4cec-bfac-b271ebf50025" containerName="probe" Nov 22 12:13:55 crc kubenswrapper[4772]: I1122 12:13:55.782352 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="68f0a616-d7a6-4cec-bfac-b271ebf50025" containerName="probe" Nov 22 12:13:55 crc kubenswrapper[4772]: I1122 12:13:55.782431 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="68f0a616-d7a6-4cec-bfac-b271ebf50025" containerName="cinder-scheduler" Nov 22 12:13:55 crc kubenswrapper[4772]: I1122 12:13:55.783590 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 22 12:13:55 crc kubenswrapper[4772]: I1122 12:13:55.785794 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 22 12:13:55 crc kubenswrapper[4772]: I1122 12:13:55.790254 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 22 12:13:55 crc kubenswrapper[4772]: I1122 12:13:55.903529 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/024e30d7-cf0f-4a2d-a815-ecc22c0a9769-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"024e30d7-cf0f-4a2d-a815-ecc22c0a9769\") " pod="openstack/cinder-scheduler-0" Nov 22 12:13:55 crc kubenswrapper[4772]: I1122 12:13:55.903631 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/024e30d7-cf0f-4a2d-a815-ecc22c0a9769-scripts\") pod \"cinder-scheduler-0\" (UID: \"024e30d7-cf0f-4a2d-a815-ecc22c0a9769\") " pod="openstack/cinder-scheduler-0" Nov 22 12:13:55 crc kubenswrapper[4772]: I1122 12:13:55.903703 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbv96\" (UniqueName: \"kubernetes.io/projected/024e30d7-cf0f-4a2d-a815-ecc22c0a9769-kube-api-access-pbv96\") pod \"cinder-scheduler-0\" (UID: \"024e30d7-cf0f-4a2d-a815-ecc22c0a9769\") " pod="openstack/cinder-scheduler-0" Nov 22 12:13:55 crc kubenswrapper[4772]: I1122 12:13:55.904137 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/024e30d7-cf0f-4a2d-a815-ecc22c0a9769-config-data\") pod \"cinder-scheduler-0\" (UID: \"024e30d7-cf0f-4a2d-a815-ecc22c0a9769\") " pod="openstack/cinder-scheduler-0" Nov 22 12:13:55 crc kubenswrapper[4772]: I1122 12:13:55.904292 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/024e30d7-cf0f-4a2d-a815-ecc22c0a9769-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"024e30d7-cf0f-4a2d-a815-ecc22c0a9769\") " pod="openstack/cinder-scheduler-0" Nov 22 12:13:55 crc kubenswrapper[4772]: I1122 12:13:55.904381 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/024e30d7-cf0f-4a2d-a815-ecc22c0a9769-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"024e30d7-cf0f-4a2d-a815-ecc22c0a9769\") " pod="openstack/cinder-scheduler-0" Nov 22 12:13:56 crc kubenswrapper[4772]: I1122 12:13:56.006565 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/024e30d7-cf0f-4a2d-a815-ecc22c0a9769-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"024e30d7-cf0f-4a2d-a815-ecc22c0a9769\") " pod="openstack/cinder-scheduler-0" Nov 22 12:13:56 crc kubenswrapper[4772]: I1122 12:13:56.006629 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/024e30d7-cf0f-4a2d-a815-ecc22c0a9769-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"024e30d7-cf0f-4a2d-a815-ecc22c0a9769\") " pod="openstack/cinder-scheduler-0" Nov 22 12:13:56 crc kubenswrapper[4772]: I1122 12:13:56.006722 4772 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/024e30d7-cf0f-4a2d-a815-ecc22c0a9769-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"024e30d7-cf0f-4a2d-a815-ecc22c0a9769\") " pod="openstack/cinder-scheduler-0" Nov 22 12:13:56 crc kubenswrapper[4772]: I1122 12:13:56.006741 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/024e30d7-cf0f-4a2d-a815-ecc22c0a9769-scripts\") pod \"cinder-scheduler-0\" (UID: \"024e30d7-cf0f-4a2d-a815-ecc22c0a9769\") " pod="openstack/cinder-scheduler-0" Nov 22 12:13:56 crc kubenswrapper[4772]: I1122 12:13:56.006761 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbv96\" (UniqueName: \"kubernetes.io/projected/024e30d7-cf0f-4a2d-a815-ecc22c0a9769-kube-api-access-pbv96\") pod \"cinder-scheduler-0\" (UID: \"024e30d7-cf0f-4a2d-a815-ecc22c0a9769\") " pod="openstack/cinder-scheduler-0" Nov 22 12:13:56 crc kubenswrapper[4772]: I1122 12:13:56.006803 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/024e30d7-cf0f-4a2d-a815-ecc22c0a9769-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"024e30d7-cf0f-4a2d-a815-ecc22c0a9769\") " pod="openstack/cinder-scheduler-0" Nov 22 12:13:56 crc kubenswrapper[4772]: I1122 12:13:56.006848 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/024e30d7-cf0f-4a2d-a815-ecc22c0a9769-config-data\") pod \"cinder-scheduler-0\" (UID: \"024e30d7-cf0f-4a2d-a815-ecc22c0a9769\") " pod="openstack/cinder-scheduler-0" Nov 22 12:13:56 crc kubenswrapper[4772]: I1122 12:13:56.011402 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/024e30d7-cf0f-4a2d-a815-ecc22c0a9769-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"024e30d7-cf0f-4a2d-a815-ecc22c0a9769\") " pod="openstack/cinder-scheduler-0" Nov 22 12:13:56 crc kubenswrapper[4772]: I1122 12:13:56.011466 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/024e30d7-cf0f-4a2d-a815-ecc22c0a9769-config-data\") pod \"cinder-scheduler-0\" (UID: \"024e30d7-cf0f-4a2d-a815-ecc22c0a9769\") " pod="openstack/cinder-scheduler-0" Nov 22 12:13:56 crc kubenswrapper[4772]: I1122 12:13:56.013659 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/024e30d7-cf0f-4a2d-a815-ecc22c0a9769-scripts\") pod \"cinder-scheduler-0\" (UID: \"024e30d7-cf0f-4a2d-a815-ecc22c0a9769\") " pod="openstack/cinder-scheduler-0" Nov 22 12:13:56 crc kubenswrapper[4772]: I1122 12:13:56.014304 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/024e30d7-cf0f-4a2d-a815-ecc22c0a9769-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"024e30d7-cf0f-4a2d-a815-ecc22c0a9769\") " pod="openstack/cinder-scheduler-0" Nov 22 12:13:56 crc kubenswrapper[4772]: I1122 12:13:56.028657 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbv96\" (UniqueName: \"kubernetes.io/projected/024e30d7-cf0f-4a2d-a815-ecc22c0a9769-kube-api-access-pbv96\") pod \"cinder-scheduler-0\" (UID: \"024e30d7-cf0f-4a2d-a815-ecc22c0a9769\") " pod="openstack/cinder-scheduler-0" Nov 22 12:13:56 
crc kubenswrapper[4772]: I1122 12:13:56.145549 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 22 12:13:56 crc kubenswrapper[4772]: I1122 12:13:56.611525 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 22 12:13:56 crc kubenswrapper[4772]: I1122 12:13:56.767257 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"024e30d7-cf0f-4a2d-a815-ecc22c0a9769","Type":"ContainerStarted","Data":"dd74012be67725291aa75d1c2caec7e95c039efe67dc8f59c1ff5b99b9216ed7"} Nov 22 12:13:57 crc kubenswrapper[4772]: I1122 12:13:57.429520 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68f0a616-d7a6-4cec-bfac-b271ebf50025" path="/var/lib/kubelet/pods/68f0a616-d7a6-4cec-bfac-b271ebf50025/volumes" Nov 22 12:13:57 crc kubenswrapper[4772]: I1122 12:13:57.779469 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"024e30d7-cf0f-4a2d-a815-ecc22c0a9769","Type":"ContainerStarted","Data":"1f6d1ed6cd8ccb6e7ee0cd99cea5e4fed3a887aaf1ae9088f4993610d887eaa4"} Nov 22 12:13:58 crc kubenswrapper[4772]: I1122 12:13:58.471142 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-scp94"] Nov 22 12:13:58 crc kubenswrapper[4772]: I1122 12:13:58.474133 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-scp94" Nov 22 12:13:58 crc kubenswrapper[4772]: I1122 12:13:58.481299 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-scp94"] Nov 22 12:13:58 crc kubenswrapper[4772]: I1122 12:13:58.575605 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7f6sc\" (UniqueName: \"kubernetes.io/projected/4add6394-1ece-4cde-bf6e-5294c5407707-kube-api-access-7f6sc\") pod \"redhat-marketplace-scp94\" (UID: \"4add6394-1ece-4cde-bf6e-5294c5407707\") " pod="openshift-marketplace/redhat-marketplace-scp94" Nov 22 12:13:58 crc kubenswrapper[4772]: I1122 12:13:58.575695 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4add6394-1ece-4cde-bf6e-5294c5407707-catalog-content\") pod \"redhat-marketplace-scp94\" (UID: \"4add6394-1ece-4cde-bf6e-5294c5407707\") " pod="openshift-marketplace/redhat-marketplace-scp94" Nov 22 12:13:58 crc kubenswrapper[4772]: I1122 12:13:58.575771 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4add6394-1ece-4cde-bf6e-5294c5407707-utilities\") pod \"redhat-marketplace-scp94\" (UID: \"4add6394-1ece-4cde-bf6e-5294c5407707\") " pod="openshift-marketplace/redhat-marketplace-scp94" Nov 22 12:13:58 crc kubenswrapper[4772]: I1122 12:13:58.678449 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4add6394-1ece-4cde-bf6e-5294c5407707-utilities\") pod \"redhat-marketplace-scp94\" (UID: \"4add6394-1ece-4cde-bf6e-5294c5407707\") " pod="openshift-marketplace/redhat-marketplace-scp94" Nov 22 12:13:58 crc kubenswrapper[4772]: I1122 12:13:58.678612 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7f6sc\" (UniqueName: 
\"kubernetes.io/projected/4add6394-1ece-4cde-bf6e-5294c5407707-kube-api-access-7f6sc\") pod \"redhat-marketplace-scp94\" (UID: \"4add6394-1ece-4cde-bf6e-5294c5407707\") " pod="openshift-marketplace/redhat-marketplace-scp94" Nov 22 12:13:58 crc kubenswrapper[4772]: I1122 12:13:58.678696 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4add6394-1ece-4cde-bf6e-5294c5407707-catalog-content\") pod \"redhat-marketplace-scp94\" (UID: \"4add6394-1ece-4cde-bf6e-5294c5407707\") " pod="openshift-marketplace/redhat-marketplace-scp94" Nov 22 12:13:58 crc kubenswrapper[4772]: I1122 12:13:58.679191 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4add6394-1ece-4cde-bf6e-5294c5407707-utilities\") pod \"redhat-marketplace-scp94\" (UID: \"4add6394-1ece-4cde-bf6e-5294c5407707\") " pod="openshift-marketplace/redhat-marketplace-scp94" Nov 22 12:13:58 crc kubenswrapper[4772]: I1122 12:13:58.679321 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4add6394-1ece-4cde-bf6e-5294c5407707-catalog-content\") pod \"redhat-marketplace-scp94\" (UID: \"4add6394-1ece-4cde-bf6e-5294c5407707\") " pod="openshift-marketplace/redhat-marketplace-scp94" Nov 22 12:13:58 crc kubenswrapper[4772]: I1122 12:13:58.700996 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7f6sc\" (UniqueName: \"kubernetes.io/projected/4add6394-1ece-4cde-bf6e-5294c5407707-kube-api-access-7f6sc\") pod \"redhat-marketplace-scp94\" (UID: \"4add6394-1ece-4cde-bf6e-5294c5407707\") " pod="openshift-marketplace/redhat-marketplace-scp94" Nov 22 12:13:58 crc kubenswrapper[4772]: I1122 12:13:58.796543 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"024e30d7-cf0f-4a2d-a815-ecc22c0a9769","Type":"ContainerStarted","Data":"213fb6748ed709baf519cc56242e3007b903d429a90a6315a733e7420c651383"} Nov 22 12:13:58 crc kubenswrapper[4772]: I1122 12:13:58.805895 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-scp94" Nov 22 12:13:58 crc kubenswrapper[4772]: I1122 12:13:58.820570 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.8205483449999997 podStartE2EDuration="3.820548345s" podCreationTimestamp="2025-11-22 12:13:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:13:58.817652333 +0000 UTC m=+5759.057096837" watchObservedRunningTime="2025-11-22 12:13:58.820548345 +0000 UTC m=+5759.059992849" Nov 22 12:13:59 crc kubenswrapper[4772]: I1122 12:13:59.296603 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-scp94"] Nov 22 12:13:59 crc kubenswrapper[4772]: W1122 12:13:59.300546 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4add6394_1ece_4cde_bf6e_5294c5407707.slice/crio-0f1e7baf5e2bcd1c77be7d308dd5642365f33146ca7529c4016108d5084121c8 WatchSource:0}: Error finding container 0f1e7baf5e2bcd1c77be7d308dd5642365f33146ca7529c4016108d5084121c8: Status 404 returned error can't find the container with id 0f1e7baf5e2bcd1c77be7d308dd5642365f33146ca7529c4016108d5084121c8 Nov 22 12:13:59 crc kubenswrapper[4772]: I1122 12:13:59.808525 4772 generic.go:334] "Generic (PLEG): container finished" podID="4add6394-1ece-4cde-bf6e-5294c5407707" containerID="fb498ea61eafb2994f85c38fccd43ab00c135148adb5092ce98b5982a899c8b7" exitCode=0 Nov 22 12:13:59 crc kubenswrapper[4772]: I1122 12:13:59.808614 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-scp94" event={"ID":"4add6394-1ece-4cde-bf6e-5294c5407707","Type":"ContainerDied","Data":"fb498ea61eafb2994f85c38fccd43ab00c135148adb5092ce98b5982a899c8b7"} Nov 22 12:13:59 crc kubenswrapper[4772]: I1122 12:13:59.809107 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-scp94" event={"ID":"4add6394-1ece-4cde-bf6e-5294c5407707","Type":"ContainerStarted","Data":"0f1e7baf5e2bcd1c77be7d308dd5642365f33146ca7529c4016108d5084121c8"} Nov 22 12:14:00 crc kubenswrapper[4772]: I1122 12:14:00.137083 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Nov 22 12:14:01 crc kubenswrapper[4772]: I1122 12:14:01.146432 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 22 12:14:01 crc kubenswrapper[4772]: I1122 12:14:01.533300 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 12:14:01 crc kubenswrapper[4772]: I1122 12:14:01.533965 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 12:14:01 crc kubenswrapper[4772]: I1122 12:14:01.841128 4772 generic.go:334] "Generic (PLEG): container finished" podID="4add6394-1ece-4cde-bf6e-5294c5407707" 
containerID="cab8527ed4e3c4ec4e09fdfaa763f3844599c14807ba8b5f74597ea747ced8e1" exitCode=0 Nov 22 12:14:01 crc kubenswrapper[4772]: I1122 12:14:01.841215 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-scp94" event={"ID":"4add6394-1ece-4cde-bf6e-5294c5407707","Type":"ContainerDied","Data":"cab8527ed4e3c4ec4e09fdfaa763f3844599c14807ba8b5f74597ea747ced8e1"} Nov 22 12:14:02 crc kubenswrapper[4772]: I1122 12:14:02.854475 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-scp94" event={"ID":"4add6394-1ece-4cde-bf6e-5294c5407707","Type":"ContainerStarted","Data":"a822e9f5111c7ca72a58285d2ee1f792b120694829a89bc0aa21e68d3a262aff"} Nov 22 12:14:02 crc kubenswrapper[4772]: I1122 12:14:02.873550 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-scp94" podStartSLOduration=2.404208716 podStartE2EDuration="4.873521925s" podCreationTimestamp="2025-11-22 12:13:58 +0000 UTC" firstStartedPulling="2025-11-22 12:13:59.814302828 +0000 UTC m=+5760.053747322" lastFinishedPulling="2025-11-22 12:14:02.283615987 +0000 UTC m=+5762.523060531" observedRunningTime="2025-11-22 12:14:02.869114575 +0000 UTC m=+5763.108559099" watchObservedRunningTime="2025-11-22 12:14:02.873521925 +0000 UTC m=+5763.112966459" Nov 22 12:14:06 crc kubenswrapper[4772]: I1122 12:14:06.430963 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 22 12:14:08 crc kubenswrapper[4772]: I1122 12:14:08.806537 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-scp94" Nov 22 12:14:08 crc kubenswrapper[4772]: I1122 12:14:08.809463 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-scp94" Nov 22 12:14:08 crc kubenswrapper[4772]: I1122 12:14:08.899196 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-scp94" Nov 22 12:14:09 crc kubenswrapper[4772]: I1122 12:14:09.016467 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-scp94" Nov 22 12:14:09 crc kubenswrapper[4772]: I1122 12:14:09.144693 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-scp94"] Nov 22 12:14:10 crc kubenswrapper[4772]: I1122 12:14:10.952570 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-scp94" podUID="4add6394-1ece-4cde-bf6e-5294c5407707" containerName="registry-server" containerID="cri-o://a822e9f5111c7ca72a58285d2ee1f792b120694829a89bc0aa21e68d3a262aff" gracePeriod=2 Nov 22 12:14:11 crc kubenswrapper[4772]: I1122 12:14:11.969820 4772 generic.go:334] "Generic (PLEG): container finished" podID="4add6394-1ece-4cde-bf6e-5294c5407707" containerID="a822e9f5111c7ca72a58285d2ee1f792b120694829a89bc0aa21e68d3a262aff" exitCode=0 Nov 22 12:14:11 crc kubenswrapper[4772]: I1122 12:14:11.970037 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-scp94" event={"ID":"4add6394-1ece-4cde-bf6e-5294c5407707","Type":"ContainerDied","Data":"a822e9f5111c7ca72a58285d2ee1f792b120694829a89bc0aa21e68d3a262aff"} Nov 22 12:14:12 crc kubenswrapper[4772]: I1122 12:14:12.046848 4772 scope.go:117] "RemoveContainer" 
containerID="51042e107a74c5b9bccf1bbd42e8e4096185c36baa2b6e97c476594f4763b28f" Nov 22 12:14:12 crc kubenswrapper[4772]: I1122 12:14:12.141017 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-scp94" Nov 22 12:14:12 crc kubenswrapper[4772]: I1122 12:14:12.145521 4772 scope.go:117] "RemoveContainer" containerID="9649c59dc869756da9516aacf3ca8721d2bfaf2dd798c8d8e6838fdbcf60aef2" Nov 22 12:14:12 crc kubenswrapper[4772]: I1122 12:14:12.218661 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7f6sc\" (UniqueName: \"kubernetes.io/projected/4add6394-1ece-4cde-bf6e-5294c5407707-kube-api-access-7f6sc\") pod \"4add6394-1ece-4cde-bf6e-5294c5407707\" (UID: \"4add6394-1ece-4cde-bf6e-5294c5407707\") " Nov 22 12:14:12 crc kubenswrapper[4772]: I1122 12:14:12.218794 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4add6394-1ece-4cde-bf6e-5294c5407707-utilities\") pod \"4add6394-1ece-4cde-bf6e-5294c5407707\" (UID: \"4add6394-1ece-4cde-bf6e-5294c5407707\") " Nov 22 12:14:12 crc kubenswrapper[4772]: I1122 12:14:12.218879 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4add6394-1ece-4cde-bf6e-5294c5407707-catalog-content\") pod \"4add6394-1ece-4cde-bf6e-5294c5407707\" (UID: \"4add6394-1ece-4cde-bf6e-5294c5407707\") " Nov 22 12:14:12 crc kubenswrapper[4772]: I1122 12:14:12.219937 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4add6394-1ece-4cde-bf6e-5294c5407707-utilities" (OuterVolumeSpecName: "utilities") pod "4add6394-1ece-4cde-bf6e-5294c5407707" (UID: "4add6394-1ece-4cde-bf6e-5294c5407707"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 12:14:12 crc kubenswrapper[4772]: I1122 12:14:12.225171 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4add6394-1ece-4cde-bf6e-5294c5407707-kube-api-access-7f6sc" (OuterVolumeSpecName: "kube-api-access-7f6sc") pod "4add6394-1ece-4cde-bf6e-5294c5407707" (UID: "4add6394-1ece-4cde-bf6e-5294c5407707"). InnerVolumeSpecName "kube-api-access-7f6sc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:14:12 crc kubenswrapper[4772]: I1122 12:14:12.321335 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7f6sc\" (UniqueName: \"kubernetes.io/projected/4add6394-1ece-4cde-bf6e-5294c5407707-kube-api-access-7f6sc\") on node \"crc\" DevicePath \"\"" Nov 22 12:14:12 crc kubenswrapper[4772]: I1122 12:14:12.321398 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4add6394-1ece-4cde-bf6e-5294c5407707-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 12:14:12 crc kubenswrapper[4772]: I1122 12:14:12.454754 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4add6394-1ece-4cde-bf6e-5294c5407707-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4add6394-1ece-4cde-bf6e-5294c5407707" (UID: "4add6394-1ece-4cde-bf6e-5294c5407707"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 12:14:12 crc kubenswrapper[4772]: I1122 12:14:12.550566 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4add6394-1ece-4cde-bf6e-5294c5407707-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 12:14:12 crc kubenswrapper[4772]: I1122 12:14:12.984578 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-scp94" event={"ID":"4add6394-1ece-4cde-bf6e-5294c5407707","Type":"ContainerDied","Data":"0f1e7baf5e2bcd1c77be7d308dd5642365f33146ca7529c4016108d5084121c8"} Nov 22 12:14:12 crc kubenswrapper[4772]: I1122 12:14:12.984637 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-scp94" Nov 22 12:14:12 crc kubenswrapper[4772]: I1122 12:14:12.986272 4772 scope.go:117] "RemoveContainer" containerID="a822e9f5111c7ca72a58285d2ee1f792b120694829a89bc0aa21e68d3a262aff" Nov 22 12:14:13 crc kubenswrapper[4772]: I1122 12:14:13.021372 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-scp94"] Nov 22 12:14:13 crc kubenswrapper[4772]: I1122 12:14:13.029294 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-scp94"] Nov 22 12:14:13 crc kubenswrapper[4772]: I1122 12:14:13.030478 4772 scope.go:117] "RemoveContainer" containerID="cab8527ed4e3c4ec4e09fdfaa763f3844599c14807ba8b5f74597ea747ced8e1" Nov 22 12:14:13 crc kubenswrapper[4772]: I1122 12:14:13.062672 4772 scope.go:117] "RemoveContainer" containerID="fb498ea61eafb2994f85c38fccd43ab00c135148adb5092ce98b5982a899c8b7" Nov 22 12:14:13 crc kubenswrapper[4772]: I1122 12:14:13.429597 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4add6394-1ece-4cde-bf6e-5294c5407707" path="/var/lib/kubelet/pods/4add6394-1ece-4cde-bf6e-5294c5407707/volumes" Nov 22 12:14:31 crc kubenswrapper[4772]: I1122 12:14:31.532982 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 12:14:31 crc kubenswrapper[4772]: I1122 12:14:31.534213 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 12:14:31 crc kubenswrapper[4772]: I1122 12:14:31.534305 4772 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" Nov 22 12:14:31 crc kubenswrapper[4772]: I1122 12:14:31.536110 4772 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0acc780374533c91fb6e94ed6fa4eb88f1ad8ecfc715db40059fc9377d70e080"} pod="openshift-machine-config-operator/machine-config-daemon-wwshd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 12:14:31 crc kubenswrapper[4772]: I1122 12:14:31.536253 4772 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" containerID="cri-o://0acc780374533c91fb6e94ed6fa4eb88f1ad8ecfc715db40059fc9377d70e080" gracePeriod=600 Nov 22 12:14:31 crc kubenswrapper[4772]: E1122 12:14:31.691501 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:14:32 crc kubenswrapper[4772]: I1122 12:14:32.220853 4772 generic.go:334] "Generic (PLEG): container finished" podID="2386c238-461f-4956-940f-ac3c26eb052e" containerID="0acc780374533c91fb6e94ed6fa4eb88f1ad8ecfc715db40059fc9377d70e080" exitCode=0 Nov 22 12:14:32 crc kubenswrapper[4772]: I1122 12:14:32.220927 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" event={"ID":"2386c238-461f-4956-940f-ac3c26eb052e","Type":"ContainerDied","Data":"0acc780374533c91fb6e94ed6fa4eb88f1ad8ecfc715db40059fc9377d70e080"} Nov 22 12:14:32 crc kubenswrapper[4772]: I1122 12:14:32.220996 4772 scope.go:117] "RemoveContainer" containerID="a0559f4649dac5095b0b01f40c57f009ee300eceb2af79f78144e4e5d01c3049" Nov 22 12:14:32 crc kubenswrapper[4772]: I1122 12:14:32.221941 4772 scope.go:117] "RemoveContainer" containerID="0acc780374533c91fb6e94ed6fa4eb88f1ad8ecfc715db40059fc9377d70e080" Nov 22 12:14:32 crc kubenswrapper[4772]: E1122 12:14:32.222493 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:14:34 crc kubenswrapper[4772]: I1122 12:14:34.083271 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-s96cr"] Nov 22 12:14:34 crc kubenswrapper[4772]: I1122 12:14:34.099220 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-s96cr"] Nov 22 12:14:35 crc kubenswrapper[4772]: I1122 12:14:35.427758 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11249242-2e3b-451e-bc6b-a23827e5daf0" path="/var/lib/kubelet/pods/11249242-2e3b-451e-bc6b-a23827e5daf0/volumes" Nov 22 12:14:44 crc kubenswrapper[4772]: I1122 12:14:44.047445 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-d6b6-account-create-nj5xv"] Nov 22 12:14:44 crc kubenswrapper[4772]: I1122 12:14:44.061809 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-d6b6-account-create-nj5xv"] Nov 22 12:14:45 crc kubenswrapper[4772]: I1122 12:14:45.427688 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0450b34-6387-427c-8fc6-51dd089d6da1" path="/var/lib/kubelet/pods/d0450b34-6387-427c-8fc6-51dd089d6da1/volumes" Nov 22 12:14:47 crc kubenswrapper[4772]: I1122 12:14:47.415736 4772 scope.go:117] "RemoveContainer" containerID="0acc780374533c91fb6e94ed6fa4eb88f1ad8ecfc715db40059fc9377d70e080" Nov 22 12:14:47 crc 
kubenswrapper[4772]: E1122 12:14:47.416380 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:14:51 crc kubenswrapper[4772]: I1122 12:14:51.042416 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-ntzc2"] Nov 22 12:14:51 crc kubenswrapper[4772]: I1122 12:14:51.059210 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-ntzc2"] Nov 22 12:14:51 crc kubenswrapper[4772]: I1122 12:14:51.438837 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c080996a-73b3-4f39-a9d8-5912f7d73ab1" path="/var/lib/kubelet/pods/c080996a-73b3-4f39-a9d8-5912f7d73ab1/volumes" Nov 22 12:15:00 crc kubenswrapper[4772]: I1122 12:15:00.177242 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396895-pxxr9"] Nov 22 12:15:00 crc kubenswrapper[4772]: E1122 12:15:00.186656 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4add6394-1ece-4cde-bf6e-5294c5407707" containerName="extract-content" Nov 22 12:15:00 crc kubenswrapper[4772]: I1122 12:15:00.186712 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="4add6394-1ece-4cde-bf6e-5294c5407707" containerName="extract-content" Nov 22 12:15:00 crc kubenswrapper[4772]: E1122 12:15:00.186724 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4add6394-1ece-4cde-bf6e-5294c5407707" containerName="extract-utilities" Nov 22 12:15:00 crc kubenswrapper[4772]: I1122 12:15:00.186731 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="4add6394-1ece-4cde-bf6e-5294c5407707" containerName="extract-utilities" Nov 22 12:15:00 crc kubenswrapper[4772]: E1122 12:15:00.186772 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4add6394-1ece-4cde-bf6e-5294c5407707" containerName="registry-server" Nov 22 12:15:00 crc kubenswrapper[4772]: I1122 12:15:00.186778 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="4add6394-1ece-4cde-bf6e-5294c5407707" containerName="registry-server" Nov 22 12:15:00 crc kubenswrapper[4772]: I1122 12:15:00.186955 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="4add6394-1ece-4cde-bf6e-5294c5407707" containerName="registry-server" Nov 22 12:15:00 crc kubenswrapper[4772]: I1122 12:15:00.187702 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396895-pxxr9" Nov 22 12:15:00 crc kubenswrapper[4772]: I1122 12:15:00.189931 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396895-pxxr9"] Nov 22 12:15:00 crc kubenswrapper[4772]: I1122 12:15:00.190232 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 22 12:15:00 crc kubenswrapper[4772]: I1122 12:15:00.190514 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 22 12:15:00 crc kubenswrapper[4772]: I1122 12:15:00.326514 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8cr6\" (UniqueName: \"kubernetes.io/projected/399bf4ec-b345-4e53-bb0d-7c0a90f3c12a-kube-api-access-t8cr6\") pod \"collect-profiles-29396895-pxxr9\" (UID: \"399bf4ec-b345-4e53-bb0d-7c0a90f3c12a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396895-pxxr9" Nov 22 12:15:00 crc kubenswrapper[4772]: I1122 12:15:00.326614 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/399bf4ec-b345-4e53-bb0d-7c0a90f3c12a-config-volume\") pod \"collect-profiles-29396895-pxxr9\" (UID: \"399bf4ec-b345-4e53-bb0d-7c0a90f3c12a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396895-pxxr9" Nov 22 12:15:00 crc kubenswrapper[4772]: I1122 12:15:00.326686 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/399bf4ec-b345-4e53-bb0d-7c0a90f3c12a-secret-volume\") pod \"collect-profiles-29396895-pxxr9\" (UID: \"399bf4ec-b345-4e53-bb0d-7c0a90f3c12a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396895-pxxr9" Nov 22 12:15:00 crc kubenswrapper[4772]: I1122 12:15:00.428324 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/399bf4ec-b345-4e53-bb0d-7c0a90f3c12a-config-volume\") pod \"collect-profiles-29396895-pxxr9\" (UID: \"399bf4ec-b345-4e53-bb0d-7c0a90f3c12a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396895-pxxr9" Nov 22 12:15:00 crc kubenswrapper[4772]: I1122 12:15:00.428418 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/399bf4ec-b345-4e53-bb0d-7c0a90f3c12a-secret-volume\") pod \"collect-profiles-29396895-pxxr9\" (UID: \"399bf4ec-b345-4e53-bb0d-7c0a90f3c12a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396895-pxxr9" Nov 22 12:15:00 crc kubenswrapper[4772]: I1122 12:15:00.428502 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8cr6\" (UniqueName: \"kubernetes.io/projected/399bf4ec-b345-4e53-bb0d-7c0a90f3c12a-kube-api-access-t8cr6\") pod \"collect-profiles-29396895-pxxr9\" (UID: \"399bf4ec-b345-4e53-bb0d-7c0a90f3c12a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396895-pxxr9" Nov 22 12:15:00 crc kubenswrapper[4772]: I1122 12:15:00.432171 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/399bf4ec-b345-4e53-bb0d-7c0a90f3c12a-config-volume\") pod 
\"collect-profiles-29396895-pxxr9\" (UID: \"399bf4ec-b345-4e53-bb0d-7c0a90f3c12a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396895-pxxr9" Nov 22 12:15:00 crc kubenswrapper[4772]: I1122 12:15:00.440868 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/399bf4ec-b345-4e53-bb0d-7c0a90f3c12a-secret-volume\") pod \"collect-profiles-29396895-pxxr9\" (UID: \"399bf4ec-b345-4e53-bb0d-7c0a90f3c12a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396895-pxxr9" Nov 22 12:15:00 crc kubenswrapper[4772]: I1122 12:15:00.448600 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8cr6\" (UniqueName: \"kubernetes.io/projected/399bf4ec-b345-4e53-bb0d-7c0a90f3c12a-kube-api-access-t8cr6\") pod \"collect-profiles-29396895-pxxr9\" (UID: \"399bf4ec-b345-4e53-bb0d-7c0a90f3c12a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396895-pxxr9" Nov 22 12:15:00 crc kubenswrapper[4772]: I1122 12:15:00.514292 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396895-pxxr9" Nov 22 12:15:01 crc kubenswrapper[4772]: I1122 12:15:01.015994 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396895-pxxr9"] Nov 22 12:15:01 crc kubenswrapper[4772]: I1122 12:15:01.634706 4772 generic.go:334] "Generic (PLEG): container finished" podID="399bf4ec-b345-4e53-bb0d-7c0a90f3c12a" containerID="91e9a229fc0b51b699c0012cec088748cf6c5f57dcadde7f14ac58c91f760e59" exitCode=0 Nov 22 12:15:01 crc kubenswrapper[4772]: I1122 12:15:01.634750 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396895-pxxr9" event={"ID":"399bf4ec-b345-4e53-bb0d-7c0a90f3c12a","Type":"ContainerDied","Data":"91e9a229fc0b51b699c0012cec088748cf6c5f57dcadde7f14ac58c91f760e59"} Nov 22 12:15:01 crc kubenswrapper[4772]: I1122 12:15:01.635164 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396895-pxxr9" event={"ID":"399bf4ec-b345-4e53-bb0d-7c0a90f3c12a","Type":"ContainerStarted","Data":"797a8c1fa7653b77fd43dd76b9060a825a178d686259fc7ea72edf8131d90ec9"} Nov 22 12:15:02 crc kubenswrapper[4772]: I1122 12:15:02.415183 4772 scope.go:117] "RemoveContainer" containerID="0acc780374533c91fb6e94ed6fa4eb88f1ad8ecfc715db40059fc9377d70e080" Nov 22 12:15:02 crc kubenswrapper[4772]: E1122 12:15:02.416114 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:15:03 crc kubenswrapper[4772]: I1122 12:15:03.037606 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396895-pxxr9" Nov 22 12:15:03 crc kubenswrapper[4772]: I1122 12:15:03.182694 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/399bf4ec-b345-4e53-bb0d-7c0a90f3c12a-config-volume\") pod \"399bf4ec-b345-4e53-bb0d-7c0a90f3c12a\" (UID: \"399bf4ec-b345-4e53-bb0d-7c0a90f3c12a\") " Nov 22 12:15:03 crc kubenswrapper[4772]: I1122 12:15:03.182974 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8cr6\" (UniqueName: \"kubernetes.io/projected/399bf4ec-b345-4e53-bb0d-7c0a90f3c12a-kube-api-access-t8cr6\") pod \"399bf4ec-b345-4e53-bb0d-7c0a90f3c12a\" (UID: \"399bf4ec-b345-4e53-bb0d-7c0a90f3c12a\") " Nov 22 12:15:03 crc kubenswrapper[4772]: I1122 12:15:03.183080 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/399bf4ec-b345-4e53-bb0d-7c0a90f3c12a-secret-volume\") pod \"399bf4ec-b345-4e53-bb0d-7c0a90f3c12a\" (UID: \"399bf4ec-b345-4e53-bb0d-7c0a90f3c12a\") " Nov 22 12:15:03 crc kubenswrapper[4772]: I1122 12:15:03.183525 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/399bf4ec-b345-4e53-bb0d-7c0a90f3c12a-config-volume" (OuterVolumeSpecName: "config-volume") pod "399bf4ec-b345-4e53-bb0d-7c0a90f3c12a" (UID: "399bf4ec-b345-4e53-bb0d-7c0a90f3c12a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 12:15:03 crc kubenswrapper[4772]: I1122 12:15:03.183896 4772 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/399bf4ec-b345-4e53-bb0d-7c0a90f3c12a-config-volume\") on node \"crc\" DevicePath \"\"" Nov 22 12:15:03 crc kubenswrapper[4772]: I1122 12:15:03.189253 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/399bf4ec-b345-4e53-bb0d-7c0a90f3c12a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "399bf4ec-b345-4e53-bb0d-7c0a90f3c12a" (UID: "399bf4ec-b345-4e53-bb0d-7c0a90f3c12a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:15:03 crc kubenswrapper[4772]: I1122 12:15:03.196409 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/399bf4ec-b345-4e53-bb0d-7c0a90f3c12a-kube-api-access-t8cr6" (OuterVolumeSpecName: "kube-api-access-t8cr6") pod "399bf4ec-b345-4e53-bb0d-7c0a90f3c12a" (UID: "399bf4ec-b345-4e53-bb0d-7c0a90f3c12a"). InnerVolumeSpecName "kube-api-access-t8cr6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:15:03 crc kubenswrapper[4772]: I1122 12:15:03.286413 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8cr6\" (UniqueName: \"kubernetes.io/projected/399bf4ec-b345-4e53-bb0d-7c0a90f3c12a-kube-api-access-t8cr6\") on node \"crc\" DevicePath \"\"" Nov 22 12:15:03 crc kubenswrapper[4772]: I1122 12:15:03.286459 4772 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/399bf4ec-b345-4e53-bb0d-7c0a90f3c12a-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 22 12:15:03 crc kubenswrapper[4772]: I1122 12:15:03.655147 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396895-pxxr9" event={"ID":"399bf4ec-b345-4e53-bb0d-7c0a90f3c12a","Type":"ContainerDied","Data":"797a8c1fa7653b77fd43dd76b9060a825a178d686259fc7ea72edf8131d90ec9"} Nov 22 12:15:03 crc kubenswrapper[4772]: I1122 12:15:03.655196 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="797a8c1fa7653b77fd43dd76b9060a825a178d686259fc7ea72edf8131d90ec9" Nov 22 12:15:03 crc kubenswrapper[4772]: I1122 12:15:03.655235 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396895-pxxr9" Nov 22 12:15:04 crc kubenswrapper[4772]: I1122 12:15:04.146231 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396850-gsq8n"] Nov 22 12:15:04 crc kubenswrapper[4772]: I1122 12:15:04.171421 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396850-gsq8n"] Nov 22 12:15:05 crc kubenswrapper[4772]: I1122 12:15:05.046469 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-xlt45"] Nov 22 12:15:05 crc kubenswrapper[4772]: I1122 12:15:05.054285 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-xlt45"] Nov 22 12:15:05 crc kubenswrapper[4772]: I1122 12:15:05.426870 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ca8a424-737d-4d78-9817-77aa51dec7b6" path="/var/lib/kubelet/pods/6ca8a424-737d-4d78-9817-77aa51dec7b6/volumes" Nov 22 12:15:05 crc kubenswrapper[4772]: I1122 12:15:05.427646 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e93f9210-92a8-41df-a182-062f47e6cc84" path="/var/lib/kubelet/pods/e93f9210-92a8-41df-a182-062f47e6cc84/volumes" Nov 22 12:15:09 crc kubenswrapper[4772]: E1122 12:15:09.459655 4772 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod399bf4ec_b345_4e53_bb0d_7c0a90f3c12a.slice/crio-conmon-91e9a229fc0b51b699c0012cec088748cf6c5f57dcadde7f14ac58c91f760e59.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod399bf4ec_b345_4e53_bb0d_7c0a90f3c12a.slice/crio-91e9a229fc0b51b699c0012cec088748cf6c5f57dcadde7f14ac58c91f760e59.scope\": RecentStats: unable to find data in memory cache]" Nov 22 12:15:12 crc kubenswrapper[4772]: I1122 12:15:12.359385 4772 scope.go:117] "RemoveContainer" containerID="798f75d6f54b9011bfa5eb56966f2e0e71e3b2a793c1524629353dba0924a77a" Nov 22 12:15:12 crc kubenswrapper[4772]: I1122 12:15:12.384131 4772 scope.go:117] "RemoveContainer" 
containerID="c9837135adc76952a9076ea6373f35b26f1757d8533080c158eb817f367dc6cd" Nov 22 12:15:12 crc kubenswrapper[4772]: I1122 12:15:12.449170 4772 scope.go:117] "RemoveContainer" containerID="334c847ea14d0fb5c58650d1a827520dc57d495fc14b24b4363ac7196cbc091f" Nov 22 12:15:12 crc kubenswrapper[4772]: I1122 12:15:12.469824 4772 scope.go:117] "RemoveContainer" containerID="703c92b95c6dc77be51f836660e4fd53118b0e3e32d96042c5a2f6e36489339e" Nov 22 12:15:12 crc kubenswrapper[4772]: I1122 12:15:12.516176 4772 scope.go:117] "RemoveContainer" containerID="9a486af948b74a472a31bbc90e7f8140dd0056b001c63b20f5a3783a52bf1aa2" Nov 22 12:15:13 crc kubenswrapper[4772]: I1122 12:15:13.414458 4772 scope.go:117] "RemoveContainer" containerID="0acc780374533c91fb6e94ed6fa4eb88f1ad8ecfc715db40059fc9377d70e080" Nov 22 12:15:13 crc kubenswrapper[4772]: E1122 12:15:13.415122 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:15:19 crc kubenswrapper[4772]: E1122 12:15:19.697402 4772 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod399bf4ec_b345_4e53_bb0d_7c0a90f3c12a.slice/crio-91e9a229fc0b51b699c0012cec088748cf6c5f57dcadde7f14ac58c91f760e59.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod399bf4ec_b345_4e53_bb0d_7c0a90f3c12a.slice/crio-conmon-91e9a229fc0b51b699c0012cec088748cf6c5f57dcadde7f14ac58c91f760e59.scope\": RecentStats: unable to find data in memory cache]" Nov 22 12:15:28 crc kubenswrapper[4772]: I1122 12:15:28.413331 4772 scope.go:117] "RemoveContainer" containerID="0acc780374533c91fb6e94ed6fa4eb88f1ad8ecfc715db40059fc9377d70e080" Nov 22 12:15:28 crc kubenswrapper[4772]: E1122 12:15:28.414708 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:15:30 crc kubenswrapper[4772]: E1122 12:15:30.022157 4772 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod399bf4ec_b345_4e53_bb0d_7c0a90f3c12a.slice/crio-conmon-91e9a229fc0b51b699c0012cec088748cf6c5f57dcadde7f14ac58c91f760e59.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod399bf4ec_b345_4e53_bb0d_7c0a90f3c12a.slice/crio-91e9a229fc0b51b699c0012cec088748cf6c5f57dcadde7f14ac58c91f760e59.scope\": RecentStats: unable to find data in memory cache]" Nov 22 12:15:40 crc kubenswrapper[4772]: E1122 12:15:40.296484 4772 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod399bf4ec_b345_4e53_bb0d_7c0a90f3c12a.slice/crio-conmon-91e9a229fc0b51b699c0012cec088748cf6c5f57dcadde7f14ac58c91f760e59.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod399bf4ec_b345_4e53_bb0d_7c0a90f3c12a.slice/crio-91e9a229fc0b51b699c0012cec088748cf6c5f57dcadde7f14ac58c91f760e59.scope\": RecentStats: unable to find data in memory cache]" Nov 22 12:15:40 crc kubenswrapper[4772]: I1122 12:15:40.414147 4772 scope.go:117] "RemoveContainer" containerID="0acc780374533c91fb6e94ed6fa4eb88f1ad8ecfc715db40059fc9377d70e080" Nov 22 12:15:40 crc kubenswrapper[4772]: E1122 12:15:40.414458 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:15:47 crc kubenswrapper[4772]: I1122 12:15:47.892674 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-s8m74"] Nov 22 12:15:47 crc kubenswrapper[4772]: E1122 12:15:47.896635 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="399bf4ec-b345-4e53-bb0d-7c0a90f3c12a" containerName="collect-profiles" Nov 22 12:15:47 crc kubenswrapper[4772]: I1122 12:15:47.896782 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="399bf4ec-b345-4e53-bb0d-7c0a90f3c12a" containerName="collect-profiles" Nov 22 12:15:47 crc kubenswrapper[4772]: I1122 12:15:47.897176 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="399bf4ec-b345-4e53-bb0d-7c0a90f3c12a" containerName="collect-profiles" Nov 22 12:15:47 crc kubenswrapper[4772]: I1122 12:15:47.924984 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-s8m74" Nov 22 12:15:47 crc kubenswrapper[4772]: I1122 12:15:47.945340 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Nov 22 12:15:47 crc kubenswrapper[4772]: I1122 12:15:47.948251 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-4rfp6" Nov 22 12:15:47 crc kubenswrapper[4772]: I1122 12:15:47.975675 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-s8m74"] Nov 22 12:15:47 crc kubenswrapper[4772]: I1122 12:15:47.994072 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-qbssf"] Nov 22 12:15:47 crc kubenswrapper[4772]: I1122 12:15:47.997105 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-qbssf" Nov 22 12:15:48 crc kubenswrapper[4772]: I1122 12:15:48.003511 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-qbssf"] Nov 22 12:15:48 crc kubenswrapper[4772]: I1122 12:15:48.104503 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/54838e90-0c76-4bd7-b959-83229a23745d-scripts\") pod \"ovn-controller-s8m74\" (UID: \"54838e90-0c76-4bd7-b959-83229a23745d\") " pod="openstack/ovn-controller-s8m74" Nov 22 12:15:48 crc kubenswrapper[4772]: I1122 12:15:48.104565 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/54838e90-0c76-4bd7-b959-83229a23745d-var-log-ovn\") pod \"ovn-controller-s8m74\" (UID: \"54838e90-0c76-4bd7-b959-83229a23745d\") " pod="openstack/ovn-controller-s8m74" Nov 22 12:15:48 crc kubenswrapper[4772]: I1122 12:15:48.104614 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4221f6ce-957c-4881-8a78-a466cddc53e3-var-run\") pod \"ovn-controller-ovs-qbssf\" (UID: \"4221f6ce-957c-4881-8a78-a466cddc53e3\") " pod="openstack/ovn-controller-ovs-qbssf" Nov 22 12:15:48 crc kubenswrapper[4772]: I1122 12:15:48.104693 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/4221f6ce-957c-4881-8a78-a466cddc53e3-var-log\") pod \"ovn-controller-ovs-qbssf\" (UID: \"4221f6ce-957c-4881-8a78-a466cddc53e3\") " pod="openstack/ovn-controller-ovs-qbssf" Nov 22 12:15:48 crc kubenswrapper[4772]: I1122 12:15:48.104733 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/4221f6ce-957c-4881-8a78-a466cddc53e3-var-lib\") pod \"ovn-controller-ovs-qbssf\" (UID: \"4221f6ce-957c-4881-8a78-a466cddc53e3\") " pod="openstack/ovn-controller-ovs-qbssf" Nov 22 12:15:48 crc kubenswrapper[4772]: I1122 12:15:48.104771 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/54838e90-0c76-4bd7-b959-83229a23745d-var-run\") pod \"ovn-controller-s8m74\" (UID: \"54838e90-0c76-4bd7-b959-83229a23745d\") " pod="openstack/ovn-controller-s8m74" Nov 22 12:15:48 crc kubenswrapper[4772]: I1122 12:15:48.104832 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zs4fm\" (UniqueName: \"kubernetes.io/projected/4221f6ce-957c-4881-8a78-a466cddc53e3-kube-api-access-zs4fm\") pod \"ovn-controller-ovs-qbssf\" (UID: \"4221f6ce-957c-4881-8a78-a466cddc53e3\") " pod="openstack/ovn-controller-ovs-qbssf" Nov 22 12:15:48 crc kubenswrapper[4772]: I1122 12:15:48.104866 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/4221f6ce-957c-4881-8a78-a466cddc53e3-etc-ovs\") pod \"ovn-controller-ovs-qbssf\" (UID: \"4221f6ce-957c-4881-8a78-a466cddc53e3\") " pod="openstack/ovn-controller-ovs-qbssf" Nov 22 12:15:48 crc kubenswrapper[4772]: I1122 12:15:48.104891 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndrjx\" (UniqueName: 
\"kubernetes.io/projected/54838e90-0c76-4bd7-b959-83229a23745d-kube-api-access-ndrjx\") pod \"ovn-controller-s8m74\" (UID: \"54838e90-0c76-4bd7-b959-83229a23745d\") " pod="openstack/ovn-controller-s8m74" Nov 22 12:15:48 crc kubenswrapper[4772]: I1122 12:15:48.104927 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/54838e90-0c76-4bd7-b959-83229a23745d-var-run-ovn\") pod \"ovn-controller-s8m74\" (UID: \"54838e90-0c76-4bd7-b959-83229a23745d\") " pod="openstack/ovn-controller-s8m74" Nov 22 12:15:48 crc kubenswrapper[4772]: I1122 12:15:48.104955 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4221f6ce-957c-4881-8a78-a466cddc53e3-scripts\") pod \"ovn-controller-ovs-qbssf\" (UID: \"4221f6ce-957c-4881-8a78-a466cddc53e3\") " pod="openstack/ovn-controller-ovs-qbssf" Nov 22 12:15:48 crc kubenswrapper[4772]: I1122 12:15:48.206997 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zs4fm\" (UniqueName: \"kubernetes.io/projected/4221f6ce-957c-4881-8a78-a466cddc53e3-kube-api-access-zs4fm\") pod \"ovn-controller-ovs-qbssf\" (UID: \"4221f6ce-957c-4881-8a78-a466cddc53e3\") " pod="openstack/ovn-controller-ovs-qbssf" Nov 22 12:15:48 crc kubenswrapper[4772]: I1122 12:15:48.207534 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/4221f6ce-957c-4881-8a78-a466cddc53e3-etc-ovs\") pod \"ovn-controller-ovs-qbssf\" (UID: \"4221f6ce-957c-4881-8a78-a466cddc53e3\") " pod="openstack/ovn-controller-ovs-qbssf" Nov 22 12:15:48 crc kubenswrapper[4772]: I1122 12:15:48.207686 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndrjx\" (UniqueName: \"kubernetes.io/projected/54838e90-0c76-4bd7-b959-83229a23745d-kube-api-access-ndrjx\") pod \"ovn-controller-s8m74\" (UID: \"54838e90-0c76-4bd7-b959-83229a23745d\") " pod="openstack/ovn-controller-s8m74" Nov 22 12:15:48 crc kubenswrapper[4772]: I1122 12:15:48.207836 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/54838e90-0c76-4bd7-b959-83229a23745d-var-run-ovn\") pod \"ovn-controller-s8m74\" (UID: \"54838e90-0c76-4bd7-b959-83229a23745d\") " pod="openstack/ovn-controller-s8m74" Nov 22 12:15:48 crc kubenswrapper[4772]: I1122 12:15:48.207951 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4221f6ce-957c-4881-8a78-a466cddc53e3-scripts\") pod \"ovn-controller-ovs-qbssf\" (UID: \"4221f6ce-957c-4881-8a78-a466cddc53e3\") " pod="openstack/ovn-controller-ovs-qbssf" Nov 22 12:15:48 crc kubenswrapper[4772]: I1122 12:15:48.208116 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/54838e90-0c76-4bd7-b959-83229a23745d-scripts\") pod \"ovn-controller-s8m74\" (UID: \"54838e90-0c76-4bd7-b959-83229a23745d\") " pod="openstack/ovn-controller-s8m74" Nov 22 12:15:48 crc kubenswrapper[4772]: I1122 12:15:48.208269 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/54838e90-0c76-4bd7-b959-83229a23745d-var-log-ovn\") pod \"ovn-controller-s8m74\" (UID: \"54838e90-0c76-4bd7-b959-83229a23745d\") " 
pod="openstack/ovn-controller-s8m74" Nov 22 12:15:48 crc kubenswrapper[4772]: I1122 12:15:48.208420 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4221f6ce-957c-4881-8a78-a466cddc53e3-var-run\") pod \"ovn-controller-ovs-qbssf\" (UID: \"4221f6ce-957c-4881-8a78-a466cddc53e3\") " pod="openstack/ovn-controller-ovs-qbssf" Nov 22 12:15:48 crc kubenswrapper[4772]: I1122 12:15:48.208510 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4221f6ce-957c-4881-8a78-a466cddc53e3-var-run\") pod \"ovn-controller-ovs-qbssf\" (UID: \"4221f6ce-957c-4881-8a78-a466cddc53e3\") " pod="openstack/ovn-controller-ovs-qbssf" Nov 22 12:15:48 crc kubenswrapper[4772]: I1122 12:15:48.207949 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/4221f6ce-957c-4881-8a78-a466cddc53e3-etc-ovs\") pod \"ovn-controller-ovs-qbssf\" (UID: \"4221f6ce-957c-4881-8a78-a466cddc53e3\") " pod="openstack/ovn-controller-ovs-qbssf" Nov 22 12:15:48 crc kubenswrapper[4772]: I1122 12:15:48.208445 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/54838e90-0c76-4bd7-b959-83229a23745d-var-log-ovn\") pod \"ovn-controller-s8m74\" (UID: \"54838e90-0c76-4bd7-b959-83229a23745d\") " pod="openstack/ovn-controller-s8m74" Nov 22 12:15:48 crc kubenswrapper[4772]: I1122 12:15:48.208007 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/54838e90-0c76-4bd7-b959-83229a23745d-var-run-ovn\") pod \"ovn-controller-s8m74\" (UID: \"54838e90-0c76-4bd7-b959-83229a23745d\") " pod="openstack/ovn-controller-s8m74" Nov 22 12:15:48 crc kubenswrapper[4772]: I1122 12:15:48.209018 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/4221f6ce-957c-4881-8a78-a466cddc53e3-var-log\") pod \"ovn-controller-ovs-qbssf\" (UID: \"4221f6ce-957c-4881-8a78-a466cddc53e3\") " pod="openstack/ovn-controller-ovs-qbssf" Nov 22 12:15:48 crc kubenswrapper[4772]: I1122 12:15:48.209228 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/4221f6ce-957c-4881-8a78-a466cddc53e3-var-lib\") pod \"ovn-controller-ovs-qbssf\" (UID: \"4221f6ce-957c-4881-8a78-a466cddc53e3\") " pod="openstack/ovn-controller-ovs-qbssf" Nov 22 12:15:48 crc kubenswrapper[4772]: I1122 12:15:48.209415 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/54838e90-0c76-4bd7-b959-83229a23745d-var-run\") pod \"ovn-controller-s8m74\" (UID: \"54838e90-0c76-4bd7-b959-83229a23745d\") " pod="openstack/ovn-controller-s8m74" Nov 22 12:15:48 crc kubenswrapper[4772]: I1122 12:15:48.209309 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/4221f6ce-957c-4881-8a78-a466cddc53e3-var-lib\") pod \"ovn-controller-ovs-qbssf\" (UID: \"4221f6ce-957c-4881-8a78-a466cddc53e3\") " pod="openstack/ovn-controller-ovs-qbssf" Nov 22 12:15:48 crc kubenswrapper[4772]: I1122 12:15:48.209235 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/4221f6ce-957c-4881-8a78-a466cddc53e3-var-log\") pod 
\"ovn-controller-ovs-qbssf\" (UID: \"4221f6ce-957c-4881-8a78-a466cddc53e3\") " pod="openstack/ovn-controller-ovs-qbssf" Nov 22 12:15:48 crc kubenswrapper[4772]: I1122 12:15:48.209509 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/54838e90-0c76-4bd7-b959-83229a23745d-var-run\") pod \"ovn-controller-s8m74\" (UID: \"54838e90-0c76-4bd7-b959-83229a23745d\") " pod="openstack/ovn-controller-s8m74" Nov 22 12:15:48 crc kubenswrapper[4772]: I1122 12:15:48.210165 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4221f6ce-957c-4881-8a78-a466cddc53e3-scripts\") pod \"ovn-controller-ovs-qbssf\" (UID: \"4221f6ce-957c-4881-8a78-a466cddc53e3\") " pod="openstack/ovn-controller-ovs-qbssf" Nov 22 12:15:48 crc kubenswrapper[4772]: I1122 12:15:48.210675 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/54838e90-0c76-4bd7-b959-83229a23745d-scripts\") pod \"ovn-controller-s8m74\" (UID: \"54838e90-0c76-4bd7-b959-83229a23745d\") " pod="openstack/ovn-controller-s8m74" Nov 22 12:15:48 crc kubenswrapper[4772]: I1122 12:15:48.230953 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zs4fm\" (UniqueName: \"kubernetes.io/projected/4221f6ce-957c-4881-8a78-a466cddc53e3-kube-api-access-zs4fm\") pod \"ovn-controller-ovs-qbssf\" (UID: \"4221f6ce-957c-4881-8a78-a466cddc53e3\") " pod="openstack/ovn-controller-ovs-qbssf" Nov 22 12:15:48 crc kubenswrapper[4772]: I1122 12:15:48.242217 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndrjx\" (UniqueName: \"kubernetes.io/projected/54838e90-0c76-4bd7-b959-83229a23745d-kube-api-access-ndrjx\") pod \"ovn-controller-s8m74\" (UID: \"54838e90-0c76-4bd7-b959-83229a23745d\") " pod="openstack/ovn-controller-s8m74" Nov 22 12:15:48 crc kubenswrapper[4772]: I1122 12:15:48.298659 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-s8m74" Nov 22 12:15:48 crc kubenswrapper[4772]: I1122 12:15:48.321604 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-qbssf" Nov 22 12:15:48 crc kubenswrapper[4772]: I1122 12:15:48.732959 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-s8m74"] Nov 22 12:15:49 crc kubenswrapper[4772]: I1122 12:15:49.116374 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-s8m74" event={"ID":"54838e90-0c76-4bd7-b959-83229a23745d","Type":"ContainerStarted","Data":"1cb0e985455252f847f55941322b6ef2481075ae947f9b8eaa52b319501633c8"} Nov 22 12:15:49 crc kubenswrapper[4772]: I1122 12:15:49.116877 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-s8m74" event={"ID":"54838e90-0c76-4bd7-b959-83229a23745d","Type":"ContainerStarted","Data":"558911bb3f9e72c90c2564b8f593aa0f0434799ad59ecbd66f31d6fc8a545260"} Nov 22 12:15:49 crc kubenswrapper[4772]: I1122 12:15:49.138480 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-qbssf"] Nov 22 12:15:49 crc kubenswrapper[4772]: W1122 12:15:49.152943 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4221f6ce_957c_4881_8a78_a466cddc53e3.slice/crio-caae5dc9b0b62f624202425f871ba9354ba3cd29f0636f52c591a399e2154999 WatchSource:0}: Error finding container caae5dc9b0b62f624202425f871ba9354ba3cd29f0636f52c591a399e2154999: Status 404 returned error can't find the container with id caae5dc9b0b62f624202425f871ba9354ba3cd29f0636f52c591a399e2154999 Nov 22 12:15:49 crc kubenswrapper[4772]: I1122 12:15:49.219256 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-6gnhd"] Nov 22 12:15:49 crc kubenswrapper[4772]: I1122 12:15:49.220771 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-6gnhd" Nov 22 12:15:49 crc kubenswrapper[4772]: I1122 12:15:49.223519 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Nov 22 12:15:49 crc kubenswrapper[4772]: I1122 12:15:49.229911 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-6gnhd"] Nov 22 12:15:49 crc kubenswrapper[4772]: I1122 12:15:49.337310 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/4a456388-57e0-45cc-a2a6-80ec40f01215-ovn-rundir\") pod \"ovn-controller-metrics-6gnhd\" (UID: \"4a456388-57e0-45cc-a2a6-80ec40f01215\") " pod="openstack/ovn-controller-metrics-6gnhd" Nov 22 12:15:49 crc kubenswrapper[4772]: I1122 12:15:49.337368 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzqh8\" (UniqueName: \"kubernetes.io/projected/4a456388-57e0-45cc-a2a6-80ec40f01215-kube-api-access-rzqh8\") pod \"ovn-controller-metrics-6gnhd\" (UID: \"4a456388-57e0-45cc-a2a6-80ec40f01215\") " pod="openstack/ovn-controller-metrics-6gnhd" Nov 22 12:15:49 crc kubenswrapper[4772]: I1122 12:15:49.337634 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/4a456388-57e0-45cc-a2a6-80ec40f01215-ovs-rundir\") pod \"ovn-controller-metrics-6gnhd\" (UID: \"4a456388-57e0-45cc-a2a6-80ec40f01215\") " pod="openstack/ovn-controller-metrics-6gnhd" Nov 22 12:15:49 crc kubenswrapper[4772]: I1122 12:15:49.337818 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a456388-57e0-45cc-a2a6-80ec40f01215-config\") pod \"ovn-controller-metrics-6gnhd\" (UID: \"4a456388-57e0-45cc-a2a6-80ec40f01215\") " pod="openstack/ovn-controller-metrics-6gnhd" Nov 22 12:15:49 crc kubenswrapper[4772]: I1122 12:15:49.440502 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/4a456388-57e0-45cc-a2a6-80ec40f01215-ovs-rundir\") pod \"ovn-controller-metrics-6gnhd\" (UID: \"4a456388-57e0-45cc-a2a6-80ec40f01215\") " pod="openstack/ovn-controller-metrics-6gnhd" Nov 22 12:15:49 crc kubenswrapper[4772]: I1122 12:15:49.440596 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a456388-57e0-45cc-a2a6-80ec40f01215-config\") pod \"ovn-controller-metrics-6gnhd\" (UID: \"4a456388-57e0-45cc-a2a6-80ec40f01215\") " pod="openstack/ovn-controller-metrics-6gnhd" Nov 22 12:15:49 crc kubenswrapper[4772]: I1122 12:15:49.440676 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/4a456388-57e0-45cc-a2a6-80ec40f01215-ovn-rundir\") pod \"ovn-controller-metrics-6gnhd\" (UID: \"4a456388-57e0-45cc-a2a6-80ec40f01215\") " pod="openstack/ovn-controller-metrics-6gnhd" Nov 22 12:15:49 crc kubenswrapper[4772]: I1122 12:15:49.440722 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzqh8\" (UniqueName: \"kubernetes.io/projected/4a456388-57e0-45cc-a2a6-80ec40f01215-kube-api-access-rzqh8\") pod \"ovn-controller-metrics-6gnhd\" (UID: \"4a456388-57e0-45cc-a2a6-80ec40f01215\") " 
pod="openstack/ovn-controller-metrics-6gnhd" Nov 22 12:15:49 crc kubenswrapper[4772]: I1122 12:15:49.441146 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/4a456388-57e0-45cc-a2a6-80ec40f01215-ovn-rundir\") pod \"ovn-controller-metrics-6gnhd\" (UID: \"4a456388-57e0-45cc-a2a6-80ec40f01215\") " pod="openstack/ovn-controller-metrics-6gnhd" Nov 22 12:15:49 crc kubenswrapper[4772]: I1122 12:15:49.441155 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/4a456388-57e0-45cc-a2a6-80ec40f01215-ovs-rundir\") pod \"ovn-controller-metrics-6gnhd\" (UID: \"4a456388-57e0-45cc-a2a6-80ec40f01215\") " pod="openstack/ovn-controller-metrics-6gnhd" Nov 22 12:15:49 crc kubenswrapper[4772]: I1122 12:15:49.441721 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a456388-57e0-45cc-a2a6-80ec40f01215-config\") pod \"ovn-controller-metrics-6gnhd\" (UID: \"4a456388-57e0-45cc-a2a6-80ec40f01215\") " pod="openstack/ovn-controller-metrics-6gnhd" Nov 22 12:15:49 crc kubenswrapper[4772]: I1122 12:15:49.469370 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzqh8\" (UniqueName: \"kubernetes.io/projected/4a456388-57e0-45cc-a2a6-80ec40f01215-kube-api-access-rzqh8\") pod \"ovn-controller-metrics-6gnhd\" (UID: \"4a456388-57e0-45cc-a2a6-80ec40f01215\") " pod="openstack/ovn-controller-metrics-6gnhd" Nov 22 12:15:49 crc kubenswrapper[4772]: I1122 12:15:49.587783 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-6gnhd" Nov 22 12:15:49 crc kubenswrapper[4772]: I1122 12:15:49.695274 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-db-create-msqn9"] Nov 22 12:15:49 crc kubenswrapper[4772]: I1122 12:15:49.696942 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-msqn9" Nov 22 12:15:49 crc kubenswrapper[4772]: I1122 12:15:49.716942 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-create-msqn9"] Nov 22 12:15:49 crc kubenswrapper[4772]: I1122 12:15:49.860653 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5s95\" (UniqueName: \"kubernetes.io/projected/df4749fd-6d78-4f2e-ae1b-c892f3eb3e14-kube-api-access-f5s95\") pod \"octavia-db-create-msqn9\" (UID: \"df4749fd-6d78-4f2e-ae1b-c892f3eb3e14\") " pod="openstack/octavia-db-create-msqn9" Nov 22 12:15:49 crc kubenswrapper[4772]: I1122 12:15:49.963440 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5s95\" (UniqueName: \"kubernetes.io/projected/df4749fd-6d78-4f2e-ae1b-c892f3eb3e14-kube-api-access-f5s95\") pod \"octavia-db-create-msqn9\" (UID: \"df4749fd-6d78-4f2e-ae1b-c892f3eb3e14\") " pod="openstack/octavia-db-create-msqn9" Nov 22 12:15:49 crc kubenswrapper[4772]: I1122 12:15:49.989671 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5s95\" (UniqueName: \"kubernetes.io/projected/df4749fd-6d78-4f2e-ae1b-c892f3eb3e14-kube-api-access-f5s95\") pod \"octavia-db-create-msqn9\" (UID: \"df4749fd-6d78-4f2e-ae1b-c892f3eb3e14\") " pod="openstack/octavia-db-create-msqn9" Nov 22 12:15:50 crc kubenswrapper[4772]: I1122 12:15:50.046503 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-create-msqn9" Nov 22 12:15:50 crc kubenswrapper[4772]: I1122 12:15:50.113769 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-6gnhd"] Nov 22 12:15:50 crc kubenswrapper[4772]: I1122 12:15:50.139005 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-qbssf" event={"ID":"4221f6ce-957c-4881-8a78-a466cddc53e3","Type":"ContainerDied","Data":"7c599d73c85868907f2ba201eb32383061f94733c8f19fe114d3ea4d41e60337"} Nov 22 12:15:50 crc kubenswrapper[4772]: I1122 12:15:50.138943 4772 generic.go:334] "Generic (PLEG): container finished" podID="4221f6ce-957c-4881-8a78-a466cddc53e3" containerID="7c599d73c85868907f2ba201eb32383061f94733c8f19fe114d3ea4d41e60337" exitCode=0 Nov 22 12:15:50 crc kubenswrapper[4772]: I1122 12:15:50.139268 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-qbssf" event={"ID":"4221f6ce-957c-4881-8a78-a466cddc53e3","Type":"ContainerStarted","Data":"caae5dc9b0b62f624202425f871ba9354ba3cd29f0636f52c591a399e2154999"} Nov 22 12:15:50 crc kubenswrapper[4772]: I1122 12:15:50.139418 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-s8m74" Nov 22 12:15:50 crc kubenswrapper[4772]: I1122 12:15:50.194257 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-s8m74" podStartSLOduration=3.194239962 podStartE2EDuration="3.194239962s" podCreationTimestamp="2025-11-22 12:15:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:15:50.194060867 +0000 UTC m=+5870.433505371" watchObservedRunningTime="2025-11-22 12:15:50.194239962 +0000 UTC m=+5870.433684466" Nov 22 12:15:50 crc kubenswrapper[4772]: I1122 12:15:50.558558 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-create-msqn9"] Nov 22 12:15:50 crc kubenswrapper[4772]: E1122 12:15:50.626896 4772 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod399bf4ec_b345_4e53_bb0d_7c0a90f3c12a.slice/crio-91e9a229fc0b51b699c0012cec088748cf6c5f57dcadde7f14ac58c91f760e59.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod399bf4ec_b345_4e53_bb0d_7c0a90f3c12a.slice/crio-conmon-91e9a229fc0b51b699c0012cec088748cf6c5f57dcadde7f14ac58c91f760e59.scope\": RecentStats: unable to find data in memory cache]" Nov 22 12:15:51 crc kubenswrapper[4772]: I1122 12:15:51.151064 4772 generic.go:334] "Generic (PLEG): container finished" podID="df4749fd-6d78-4f2e-ae1b-c892f3eb3e14" containerID="34518a421f0bb3e50aaa011270dfbe6220f4f11edfca82ecf7ef71d7ad7008e0" exitCode=0 Nov 22 12:15:51 crc kubenswrapper[4772]: I1122 12:15:51.151164 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-msqn9" event={"ID":"df4749fd-6d78-4f2e-ae1b-c892f3eb3e14","Type":"ContainerDied","Data":"34518a421f0bb3e50aaa011270dfbe6220f4f11edfca82ecf7ef71d7ad7008e0"} Nov 22 12:15:51 crc kubenswrapper[4772]: I1122 12:15:51.151710 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-msqn9" event={"ID":"df4749fd-6d78-4f2e-ae1b-c892f3eb3e14","Type":"ContainerStarted","Data":"f98cc6b5e765f6e4f2890ba03dfadc682317262b7c5e5411317ac932cfc28037"} Nov 22 12:15:51 crc 
kubenswrapper[4772]: I1122 12:15:51.157278 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-6gnhd" event={"ID":"4a456388-57e0-45cc-a2a6-80ec40f01215","Type":"ContainerStarted","Data":"3064a75b9b3d12051eacce3aad32f44b2a3ec8a6e16ba20bc2a291b46a1c64f2"} Nov 22 12:15:51 crc kubenswrapper[4772]: I1122 12:15:51.157341 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-6gnhd" event={"ID":"4a456388-57e0-45cc-a2a6-80ec40f01215","Type":"ContainerStarted","Data":"c4f5ed5ad76ee9be14651fbff48ef9d703d4a624c227b8365c2cb594787df224"} Nov 22 12:15:51 crc kubenswrapper[4772]: I1122 12:15:51.161564 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-qbssf" event={"ID":"4221f6ce-957c-4881-8a78-a466cddc53e3","Type":"ContainerStarted","Data":"dc21a27ade7631cf14d367ea991450f320daf062083f60fd5b219b75c310cb2b"} Nov 22 12:15:51 crc kubenswrapper[4772]: I1122 12:15:51.161705 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-qbssf" Nov 22 12:15:51 crc kubenswrapper[4772]: I1122 12:15:51.161723 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-qbssf" event={"ID":"4221f6ce-957c-4881-8a78-a466cddc53e3","Type":"ContainerStarted","Data":"d4434340292d42f130822f042414939ae71b6c074f520f3de9e3b6bd889c2988"} Nov 22 12:15:51 crc kubenswrapper[4772]: I1122 12:15:51.205194 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-6gnhd" podStartSLOduration=2.205157721 podStartE2EDuration="2.205157721s" podCreationTimestamp="2025-11-22 12:15:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:15:51.197228073 +0000 UTC m=+5871.436672567" watchObservedRunningTime="2025-11-22 12:15:51.205157721 +0000 UTC m=+5871.444602235" Nov 22 12:15:51 crc kubenswrapper[4772]: I1122 12:15:51.240141 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-qbssf" podStartSLOduration=4.240119091 podStartE2EDuration="4.240119091s" podCreationTimestamp="2025-11-22 12:15:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:15:51.233183179 +0000 UTC m=+5871.472627693" watchObservedRunningTime="2025-11-22 12:15:51.240119091 +0000 UTC m=+5871.479563575" Nov 22 12:15:52 crc kubenswrapper[4772]: I1122 12:15:52.175653 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-qbssf" Nov 22 12:15:52 crc kubenswrapper[4772]: I1122 12:15:52.530141 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-create-msqn9" Nov 22 12:15:52 crc kubenswrapper[4772]: I1122 12:15:52.626321 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5s95\" (UniqueName: \"kubernetes.io/projected/df4749fd-6d78-4f2e-ae1b-c892f3eb3e14-kube-api-access-f5s95\") pod \"df4749fd-6d78-4f2e-ae1b-c892f3eb3e14\" (UID: \"df4749fd-6d78-4f2e-ae1b-c892f3eb3e14\") " Nov 22 12:15:52 crc kubenswrapper[4772]: I1122 12:15:52.641015 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df4749fd-6d78-4f2e-ae1b-c892f3eb3e14-kube-api-access-f5s95" (OuterVolumeSpecName: "kube-api-access-f5s95") pod "df4749fd-6d78-4f2e-ae1b-c892f3eb3e14" (UID: "df4749fd-6d78-4f2e-ae1b-c892f3eb3e14"). InnerVolumeSpecName "kube-api-access-f5s95". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:15:52 crc kubenswrapper[4772]: I1122 12:15:52.728710 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5s95\" (UniqueName: \"kubernetes.io/projected/df4749fd-6d78-4f2e-ae1b-c892f3eb3e14-kube-api-access-f5s95\") on node \"crc\" DevicePath \"\"" Nov 22 12:15:53 crc kubenswrapper[4772]: I1122 12:15:53.189870 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-msqn9" event={"ID":"df4749fd-6d78-4f2e-ae1b-c892f3eb3e14","Type":"ContainerDied","Data":"f98cc6b5e765f6e4f2890ba03dfadc682317262b7c5e5411317ac932cfc28037"} Nov 22 12:15:53 crc kubenswrapper[4772]: I1122 12:15:53.189921 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f98cc6b5e765f6e4f2890ba03dfadc682317262b7c5e5411317ac932cfc28037" Nov 22 12:15:53 crc kubenswrapper[4772]: I1122 12:15:53.189954 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-create-msqn9" Nov 22 12:15:55 crc kubenswrapper[4772]: I1122 12:15:55.414618 4772 scope.go:117] "RemoveContainer" containerID="0acc780374533c91fb6e94ed6fa4eb88f1ad8ecfc715db40059fc9377d70e080" Nov 22 12:15:55 crc kubenswrapper[4772]: E1122 12:15:55.414837 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:16:00 crc kubenswrapper[4772]: E1122 12:16:00.899252 4772 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod399bf4ec_b345_4e53_bb0d_7c0a90f3c12a.slice/crio-conmon-91e9a229fc0b51b699c0012cec088748cf6c5f57dcadde7f14ac58c91f760e59.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod399bf4ec_b345_4e53_bb0d_7c0a90f3c12a.slice/crio-91e9a229fc0b51b699c0012cec088748cf6c5f57dcadde7f14ac58c91f760e59.scope\": RecentStats: unable to find data in memory cache]" Nov 22 12:16:01 crc kubenswrapper[4772]: I1122 12:16:01.027767 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-b1c8-account-create-ngvvq"] Nov 22 12:16:01 crc kubenswrapper[4772]: E1122 12:16:01.028579 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df4749fd-6d78-4f2e-ae1b-c892f3eb3e14" containerName="mariadb-database-create" Nov 22 12:16:01 crc kubenswrapper[4772]: I1122 12:16:01.028609 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="df4749fd-6d78-4f2e-ae1b-c892f3eb3e14" containerName="mariadb-database-create" Nov 22 12:16:01 crc kubenswrapper[4772]: I1122 12:16:01.028912 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="df4749fd-6d78-4f2e-ae1b-c892f3eb3e14" containerName="mariadb-database-create" Nov 22 12:16:01 crc kubenswrapper[4772]: I1122 12:16:01.029925 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-b1c8-account-create-ngvvq" Nov 22 12:16:01 crc kubenswrapper[4772]: I1122 12:16:01.033780 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-db-secret" Nov 22 12:16:01 crc kubenswrapper[4772]: I1122 12:16:01.038612 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-b1c8-account-create-ngvvq"] Nov 22 12:16:01 crc kubenswrapper[4772]: I1122 12:16:01.100419 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8zht\" (UniqueName: \"kubernetes.io/projected/42d9ec21-53a1-44f7-a74e-604ea29d6f12-kube-api-access-h8zht\") pod \"octavia-b1c8-account-create-ngvvq\" (UID: \"42d9ec21-53a1-44f7-a74e-604ea29d6f12\") " pod="openstack/octavia-b1c8-account-create-ngvvq" Nov 22 12:16:01 crc kubenswrapper[4772]: I1122 12:16:01.203031 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8zht\" (UniqueName: \"kubernetes.io/projected/42d9ec21-53a1-44f7-a74e-604ea29d6f12-kube-api-access-h8zht\") pod \"octavia-b1c8-account-create-ngvvq\" (UID: \"42d9ec21-53a1-44f7-a74e-604ea29d6f12\") " pod="openstack/octavia-b1c8-account-create-ngvvq" Nov 22 12:16:01 crc kubenswrapper[4772]: I1122 12:16:01.232938 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8zht\" (UniqueName: \"kubernetes.io/projected/42d9ec21-53a1-44f7-a74e-604ea29d6f12-kube-api-access-h8zht\") pod \"octavia-b1c8-account-create-ngvvq\" (UID: \"42d9ec21-53a1-44f7-a74e-604ea29d6f12\") " pod="openstack/octavia-b1c8-account-create-ngvvq" Nov 22 12:16:01 crc kubenswrapper[4772]: I1122 12:16:01.371192 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-b1c8-account-create-ngvvq" Nov 22 12:16:01 crc kubenswrapper[4772]: W1122 12:16:01.876815 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42d9ec21_53a1_44f7_a74e_604ea29d6f12.slice/crio-ab0d456ed5306b703ca257bc76daa5954aa08a15ab150fec29ba79cd69b7f315 WatchSource:0}: Error finding container ab0d456ed5306b703ca257bc76daa5954aa08a15ab150fec29ba79cd69b7f315: Status 404 returned error can't find the container with id ab0d456ed5306b703ca257bc76daa5954aa08a15ab150fec29ba79cd69b7f315 Nov 22 12:16:01 crc kubenswrapper[4772]: I1122 12:16:01.883368 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-db-secret" Nov 22 12:16:01 crc kubenswrapper[4772]: I1122 12:16:01.884162 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-b1c8-account-create-ngvvq"] Nov 22 12:16:02 crc kubenswrapper[4772]: I1122 12:16:02.306180 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-b1c8-account-create-ngvvq" event={"ID":"42d9ec21-53a1-44f7-a74e-604ea29d6f12","Type":"ContainerStarted","Data":"ab0d456ed5306b703ca257bc76daa5954aa08a15ab150fec29ba79cd69b7f315"} Nov 22 12:16:03 crc kubenswrapper[4772]: I1122 12:16:03.322582 4772 generic.go:334] "Generic (PLEG): container finished" podID="42d9ec21-53a1-44f7-a74e-604ea29d6f12" containerID="dabcfcaa4b1601352bfe563fc95c72cfbcf6f1ed4c4c17aad63184919de40e69" exitCode=0 Nov 22 12:16:03 crc kubenswrapper[4772]: I1122 12:16:03.322658 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-b1c8-account-create-ngvvq" 
event={"ID":"42d9ec21-53a1-44f7-a74e-604ea29d6f12","Type":"ContainerDied","Data":"dabcfcaa4b1601352bfe563fc95c72cfbcf6f1ed4c4c17aad63184919de40e69"} Nov 22 12:16:04 crc kubenswrapper[4772]: I1122 12:16:04.686116 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-b1c8-account-create-ngvvq" Nov 22 12:16:04 crc kubenswrapper[4772]: I1122 12:16:04.796754 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8zht\" (UniqueName: \"kubernetes.io/projected/42d9ec21-53a1-44f7-a74e-604ea29d6f12-kube-api-access-h8zht\") pod \"42d9ec21-53a1-44f7-a74e-604ea29d6f12\" (UID: \"42d9ec21-53a1-44f7-a74e-604ea29d6f12\") " Nov 22 12:16:04 crc kubenswrapper[4772]: I1122 12:16:04.804814 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42d9ec21-53a1-44f7-a74e-604ea29d6f12-kube-api-access-h8zht" (OuterVolumeSpecName: "kube-api-access-h8zht") pod "42d9ec21-53a1-44f7-a74e-604ea29d6f12" (UID: "42d9ec21-53a1-44f7-a74e-604ea29d6f12"). InnerVolumeSpecName "kube-api-access-h8zht". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:16:04 crc kubenswrapper[4772]: I1122 12:16:04.899638 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8zht\" (UniqueName: \"kubernetes.io/projected/42d9ec21-53a1-44f7-a74e-604ea29d6f12-kube-api-access-h8zht\") on node \"crc\" DevicePath \"\"" Nov 22 12:16:05 crc kubenswrapper[4772]: I1122 12:16:05.344316 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-b1c8-account-create-ngvvq" event={"ID":"42d9ec21-53a1-44f7-a74e-604ea29d6f12","Type":"ContainerDied","Data":"ab0d456ed5306b703ca257bc76daa5954aa08a15ab150fec29ba79cd69b7f315"} Nov 22 12:16:05 crc kubenswrapper[4772]: I1122 12:16:05.344359 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-b1c8-account-create-ngvvq" Nov 22 12:16:05 crc kubenswrapper[4772]: I1122 12:16:05.344368 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab0d456ed5306b703ca257bc76daa5954aa08a15ab150fec29ba79cd69b7f315" Nov 22 12:16:06 crc kubenswrapper[4772]: I1122 12:16:06.779922 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-persistence-db-create-76ctp"] Nov 22 12:16:06 crc kubenswrapper[4772]: E1122 12:16:06.780528 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42d9ec21-53a1-44f7-a74e-604ea29d6f12" containerName="mariadb-account-create" Nov 22 12:16:06 crc kubenswrapper[4772]: I1122 12:16:06.780550 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="42d9ec21-53a1-44f7-a74e-604ea29d6f12" containerName="mariadb-account-create" Nov 22 12:16:06 crc kubenswrapper[4772]: I1122 12:16:06.780951 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="42d9ec21-53a1-44f7-a74e-604ea29d6f12" containerName="mariadb-account-create" Nov 22 12:16:06 crc kubenswrapper[4772]: I1122 12:16:06.782043 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-persistence-db-create-76ctp" Nov 22 12:16:06 crc kubenswrapper[4772]: I1122 12:16:06.792998 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-persistence-db-create-76ctp"] Nov 22 12:16:06 crc kubenswrapper[4772]: I1122 12:16:06.885545 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtl55\" (UniqueName: \"kubernetes.io/projected/66d983c0-8a49-4d7e-a334-77c163aa2a2c-kube-api-access-xtl55\") pod \"octavia-persistence-db-create-76ctp\" (UID: \"66d983c0-8a49-4d7e-a334-77c163aa2a2c\") " pod="openstack/octavia-persistence-db-create-76ctp" Nov 22 12:16:06 crc kubenswrapper[4772]: I1122 12:16:06.988187 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtl55\" (UniqueName: \"kubernetes.io/projected/66d983c0-8a49-4d7e-a334-77c163aa2a2c-kube-api-access-xtl55\") pod \"octavia-persistence-db-create-76ctp\" (UID: \"66d983c0-8a49-4d7e-a334-77c163aa2a2c\") " pod="openstack/octavia-persistence-db-create-76ctp" Nov 22 12:16:07 crc kubenswrapper[4772]: I1122 12:16:07.009472 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtl55\" (UniqueName: \"kubernetes.io/projected/66d983c0-8a49-4d7e-a334-77c163aa2a2c-kube-api-access-xtl55\") pod \"octavia-persistence-db-create-76ctp\" (UID: \"66d983c0-8a49-4d7e-a334-77c163aa2a2c\") " pod="openstack/octavia-persistence-db-create-76ctp" Nov 22 12:16:07 crc kubenswrapper[4772]: I1122 12:16:07.109937 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-persistence-db-create-76ctp" Nov 22 12:16:07 crc kubenswrapper[4772]: I1122 12:16:07.605969 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-persistence-db-create-76ctp"] Nov 22 12:16:08 crc kubenswrapper[4772]: I1122 12:16:08.410085 4772 generic.go:334] "Generic (PLEG): container finished" podID="66d983c0-8a49-4d7e-a334-77c163aa2a2c" containerID="01582daa142b5f09aae31994ef00f53c8cfbf69f6edaa5efffb78d6a56140563" exitCode=0 Nov 22 12:16:08 crc kubenswrapper[4772]: I1122 12:16:08.410185 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-76ctp" event={"ID":"66d983c0-8a49-4d7e-a334-77c163aa2a2c","Type":"ContainerDied","Data":"01582daa142b5f09aae31994ef00f53c8cfbf69f6edaa5efffb78d6a56140563"} Nov 22 12:16:08 crc kubenswrapper[4772]: I1122 12:16:08.410713 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-76ctp" event={"ID":"66d983c0-8a49-4d7e-a334-77c163aa2a2c","Type":"ContainerStarted","Data":"3438ffae461125d82f0f532c26a970acf4c8a7e3d2462993a5cd6ae87da01232"} Nov 22 12:16:08 crc kubenswrapper[4772]: I1122 12:16:08.414501 4772 scope.go:117] "RemoveContainer" containerID="0acc780374533c91fb6e94ed6fa4eb88f1ad8ecfc715db40059fc9377d70e080" Nov 22 12:16:08 crc kubenswrapper[4772]: E1122 12:16:08.415112 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:16:09 crc kubenswrapper[4772]: I1122 12:16:09.821392 4772 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/octavia-persistence-db-create-76ctp" Nov 22 12:16:09 crc kubenswrapper[4772]: I1122 12:16:09.956651 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtl55\" (UniqueName: \"kubernetes.io/projected/66d983c0-8a49-4d7e-a334-77c163aa2a2c-kube-api-access-xtl55\") pod \"66d983c0-8a49-4d7e-a334-77c163aa2a2c\" (UID: \"66d983c0-8a49-4d7e-a334-77c163aa2a2c\") " Nov 22 12:16:09 crc kubenswrapper[4772]: I1122 12:16:09.969317 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66d983c0-8a49-4d7e-a334-77c163aa2a2c-kube-api-access-xtl55" (OuterVolumeSpecName: "kube-api-access-xtl55") pod "66d983c0-8a49-4d7e-a334-77c163aa2a2c" (UID: "66d983c0-8a49-4d7e-a334-77c163aa2a2c"). InnerVolumeSpecName "kube-api-access-xtl55". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:16:10 crc kubenswrapper[4772]: I1122 12:16:10.058958 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtl55\" (UniqueName: \"kubernetes.io/projected/66d983c0-8a49-4d7e-a334-77c163aa2a2c-kube-api-access-xtl55\") on node \"crc\" DevicePath \"\"" Nov 22 12:16:10 crc kubenswrapper[4772]: I1122 12:16:10.455242 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-76ctp" event={"ID":"66d983c0-8a49-4d7e-a334-77c163aa2a2c","Type":"ContainerDied","Data":"3438ffae461125d82f0f532c26a970acf4c8a7e3d2462993a5cd6ae87da01232"} Nov 22 12:16:10 crc kubenswrapper[4772]: I1122 12:16:10.455336 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3438ffae461125d82f0f532c26a970acf4c8a7e3d2462993a5cd6ae87da01232" Nov 22 12:16:10 crc kubenswrapper[4772]: I1122 12:16:10.455305 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-persistence-db-create-76ctp" Nov 22 12:16:17 crc kubenswrapper[4772]: I1122 12:16:17.240146 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-a103-account-create-ztwqx"] Nov 22 12:16:17 crc kubenswrapper[4772]: E1122 12:16:17.241258 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66d983c0-8a49-4d7e-a334-77c163aa2a2c" containerName="mariadb-database-create" Nov 22 12:16:17 crc kubenswrapper[4772]: I1122 12:16:17.241277 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="66d983c0-8a49-4d7e-a334-77c163aa2a2c" containerName="mariadb-database-create" Nov 22 12:16:17 crc kubenswrapper[4772]: I1122 12:16:17.241540 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="66d983c0-8a49-4d7e-a334-77c163aa2a2c" containerName="mariadb-database-create" Nov 22 12:16:17 crc kubenswrapper[4772]: I1122 12:16:17.242329 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-a103-account-create-ztwqx" Nov 22 12:16:17 crc kubenswrapper[4772]: I1122 12:16:17.250772 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-persistence-db-secret" Nov 22 12:16:17 crc kubenswrapper[4772]: I1122 12:16:17.251456 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-a103-account-create-ztwqx"] Nov 22 12:16:17 crc kubenswrapper[4772]: I1122 12:16:17.367202 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gdrf\" (UniqueName: \"kubernetes.io/projected/d6687721-9ca9-43a9-ac6a-1c0f185ced62-kube-api-access-4gdrf\") pod \"octavia-a103-account-create-ztwqx\" (UID: \"d6687721-9ca9-43a9-ac6a-1c0f185ced62\") " pod="openstack/octavia-a103-account-create-ztwqx" Nov 22 12:16:17 crc kubenswrapper[4772]: I1122 12:16:17.472194 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gdrf\" (UniqueName: \"kubernetes.io/projected/d6687721-9ca9-43a9-ac6a-1c0f185ced62-kube-api-access-4gdrf\") pod \"octavia-a103-account-create-ztwqx\" (UID: \"d6687721-9ca9-43a9-ac6a-1c0f185ced62\") " pod="openstack/octavia-a103-account-create-ztwqx" Nov 22 12:16:17 crc kubenswrapper[4772]: I1122 12:16:17.496147 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gdrf\" (UniqueName: \"kubernetes.io/projected/d6687721-9ca9-43a9-ac6a-1c0f185ced62-kube-api-access-4gdrf\") pod \"octavia-a103-account-create-ztwqx\" (UID: \"d6687721-9ca9-43a9-ac6a-1c0f185ced62\") " pod="openstack/octavia-a103-account-create-ztwqx" Nov 22 12:16:17 crc kubenswrapper[4772]: I1122 12:16:17.560667 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-a103-account-create-ztwqx" Nov 22 12:16:18 crc kubenswrapper[4772]: I1122 12:16:18.039342 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-a103-account-create-ztwqx"] Nov 22 12:16:18 crc kubenswrapper[4772]: W1122 12:16:18.044189 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd6687721_9ca9_43a9_ac6a_1c0f185ced62.slice/crio-b19426e8d9d10b5b86c2ed33ace513dfea8137f16920e5d207c11ac1af152e95 WatchSource:0}: Error finding container b19426e8d9d10b5b86c2ed33ace513dfea8137f16920e5d207c11ac1af152e95: Status 404 returned error can't find the container with id b19426e8d9d10b5b86c2ed33ace513dfea8137f16920e5d207c11ac1af152e95 Nov 22 12:16:18 crc kubenswrapper[4772]: I1122 12:16:18.555167 4772 generic.go:334] "Generic (PLEG): container finished" podID="d6687721-9ca9-43a9-ac6a-1c0f185ced62" containerID="3b290fa8bf70de9cef440e3667cce7d32c6a6e2f8f30155f9275db16ce67cd50" exitCode=0 Nov 22 12:16:18 crc kubenswrapper[4772]: I1122 12:16:18.555250 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-a103-account-create-ztwqx" event={"ID":"d6687721-9ca9-43a9-ac6a-1c0f185ced62","Type":"ContainerDied","Data":"3b290fa8bf70de9cef440e3667cce7d32c6a6e2f8f30155f9275db16ce67cd50"} Nov 22 12:16:18 crc kubenswrapper[4772]: I1122 12:16:18.555854 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-a103-account-create-ztwqx" event={"ID":"d6687721-9ca9-43a9-ac6a-1c0f185ced62","Type":"ContainerStarted","Data":"b19426e8d9d10b5b86c2ed33ace513dfea8137f16920e5d207c11ac1af152e95"} Nov 22 12:16:19 crc kubenswrapper[4772]: I1122 12:16:19.905308 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-a103-account-create-ztwqx" Nov 22 12:16:20 crc kubenswrapper[4772]: I1122 12:16:20.031419 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gdrf\" (UniqueName: \"kubernetes.io/projected/d6687721-9ca9-43a9-ac6a-1c0f185ced62-kube-api-access-4gdrf\") pod \"d6687721-9ca9-43a9-ac6a-1c0f185ced62\" (UID: \"d6687721-9ca9-43a9-ac6a-1c0f185ced62\") " Nov 22 12:16:20 crc kubenswrapper[4772]: I1122 12:16:20.038298 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6687721-9ca9-43a9-ac6a-1c0f185ced62-kube-api-access-4gdrf" (OuterVolumeSpecName: "kube-api-access-4gdrf") pod "d6687721-9ca9-43a9-ac6a-1c0f185ced62" (UID: "d6687721-9ca9-43a9-ac6a-1c0f185ced62"). InnerVolumeSpecName "kube-api-access-4gdrf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:16:20 crc kubenswrapper[4772]: I1122 12:16:20.134968 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gdrf\" (UniqueName: \"kubernetes.io/projected/d6687721-9ca9-43a9-ac6a-1c0f185ced62-kube-api-access-4gdrf\") on node \"crc\" DevicePath \"\"" Nov 22 12:16:20 crc kubenswrapper[4772]: I1122 12:16:20.577224 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-a103-account-create-ztwqx" event={"ID":"d6687721-9ca9-43a9-ac6a-1c0f185ced62","Type":"ContainerDied","Data":"b19426e8d9d10b5b86c2ed33ace513dfea8137f16920e5d207c11ac1af152e95"} Nov 22 12:16:20 crc kubenswrapper[4772]: I1122 12:16:20.577265 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b19426e8d9d10b5b86c2ed33ace513dfea8137f16920e5d207c11ac1af152e95" Nov 22 12:16:20 crc kubenswrapper[4772]: I1122 12:16:20.577291 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-a103-account-create-ztwqx" Nov 22 12:16:22 crc kubenswrapper[4772]: I1122 12:16:22.414337 4772 scope.go:117] "RemoveContainer" containerID="0acc780374533c91fb6e94ed6fa4eb88f1ad8ecfc715db40059fc9377d70e080" Nov 22 12:16:22 crc kubenswrapper[4772]: E1122 12:16:22.417094 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:16:23 crc kubenswrapper[4772]: I1122 12:16:23.344285 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-s8m74" podUID="54838e90-0c76-4bd7-b959-83229a23745d" containerName="ovn-controller" probeResult="failure" output=< Nov 22 12:16:23 crc kubenswrapper[4772]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 22 12:16:23 crc kubenswrapper[4772]: > Nov 22 12:16:23 crc kubenswrapper[4772]: I1122 12:16:23.374844 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-qbssf" Nov 22 12:16:23 crc kubenswrapper[4772]: I1122 12:16:23.376480 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-qbssf" Nov 22 12:16:23 crc kubenswrapper[4772]: I1122 12:16:23.510556 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-s8m74-config-xdldl"] Nov 22 12:16:23 crc kubenswrapper[4772]: E1122 12:16:23.511038 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6687721-9ca9-43a9-ac6a-1c0f185ced62" containerName="mariadb-account-create" Nov 22 12:16:23 crc kubenswrapper[4772]: I1122 12:16:23.511073 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6687721-9ca9-43a9-ac6a-1c0f185ced62" containerName="mariadb-account-create" Nov 22 12:16:23 crc kubenswrapper[4772]: I1122 12:16:23.511355 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6687721-9ca9-43a9-ac6a-1c0f185ced62" containerName="mariadb-account-create" Nov 22 12:16:23 crc kubenswrapper[4772]: I1122 12:16:23.512184 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-s8m74-config-xdldl" Nov 22 12:16:23 crc kubenswrapper[4772]: I1122 12:16:23.514465 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Nov 22 12:16:23 crc kubenswrapper[4772]: I1122 12:16:23.530316 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-s8m74-config-xdldl"] Nov 22 12:16:23 crc kubenswrapper[4772]: I1122 12:16:23.608691 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/71a67a3d-5cfe-4ee9-89b7-771d54bc5d36-var-run\") pod \"ovn-controller-s8m74-config-xdldl\" (UID: \"71a67a3d-5cfe-4ee9-89b7-771d54bc5d36\") " pod="openstack/ovn-controller-s8m74-config-xdldl" Nov 22 12:16:23 crc kubenswrapper[4772]: I1122 12:16:23.608834 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvbbr\" (UniqueName: \"kubernetes.io/projected/71a67a3d-5cfe-4ee9-89b7-771d54bc5d36-kube-api-access-mvbbr\") pod \"ovn-controller-s8m74-config-xdldl\" (UID: \"71a67a3d-5cfe-4ee9-89b7-771d54bc5d36\") " pod="openstack/ovn-controller-s8m74-config-xdldl" Nov 22 12:16:23 crc kubenswrapper[4772]: I1122 12:16:23.609150 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/71a67a3d-5cfe-4ee9-89b7-771d54bc5d36-var-run-ovn\") pod \"ovn-controller-s8m74-config-xdldl\" (UID: \"71a67a3d-5cfe-4ee9-89b7-771d54bc5d36\") " pod="openstack/ovn-controller-s8m74-config-xdldl" Nov 22 12:16:23 crc kubenswrapper[4772]: I1122 12:16:23.609231 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/71a67a3d-5cfe-4ee9-89b7-771d54bc5d36-scripts\") pod \"ovn-controller-s8m74-config-xdldl\" (UID: \"71a67a3d-5cfe-4ee9-89b7-771d54bc5d36\") " pod="openstack/ovn-controller-s8m74-config-xdldl" Nov 22 12:16:23 crc kubenswrapper[4772]: I1122 12:16:23.609272 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/71a67a3d-5cfe-4ee9-89b7-771d54bc5d36-additional-scripts\") pod \"ovn-controller-s8m74-config-xdldl\" (UID: \"71a67a3d-5cfe-4ee9-89b7-771d54bc5d36\") " pod="openstack/ovn-controller-s8m74-config-xdldl" Nov 22 12:16:23 crc kubenswrapper[4772]: I1122 12:16:23.609306 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/71a67a3d-5cfe-4ee9-89b7-771d54bc5d36-var-log-ovn\") pod \"ovn-controller-s8m74-config-xdldl\" (UID: \"71a67a3d-5cfe-4ee9-89b7-771d54bc5d36\") " pod="openstack/ovn-controller-s8m74-config-xdldl" Nov 22 12:16:23 crc kubenswrapper[4772]: I1122 12:16:23.710475 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/71a67a3d-5cfe-4ee9-89b7-771d54bc5d36-var-run-ovn\") pod \"ovn-controller-s8m74-config-xdldl\" (UID: \"71a67a3d-5cfe-4ee9-89b7-771d54bc5d36\") " pod="openstack/ovn-controller-s8m74-config-xdldl" Nov 22 12:16:23 crc kubenswrapper[4772]: I1122 12:16:23.710551 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/71a67a3d-5cfe-4ee9-89b7-771d54bc5d36-scripts\") pod 
\"ovn-controller-s8m74-config-xdldl\" (UID: \"71a67a3d-5cfe-4ee9-89b7-771d54bc5d36\") " pod="openstack/ovn-controller-s8m74-config-xdldl" Nov 22 12:16:23 crc kubenswrapper[4772]: I1122 12:16:23.710587 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/71a67a3d-5cfe-4ee9-89b7-771d54bc5d36-additional-scripts\") pod \"ovn-controller-s8m74-config-xdldl\" (UID: \"71a67a3d-5cfe-4ee9-89b7-771d54bc5d36\") " pod="openstack/ovn-controller-s8m74-config-xdldl" Nov 22 12:16:23 crc kubenswrapper[4772]: I1122 12:16:23.710616 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/71a67a3d-5cfe-4ee9-89b7-771d54bc5d36-var-log-ovn\") pod \"ovn-controller-s8m74-config-xdldl\" (UID: \"71a67a3d-5cfe-4ee9-89b7-771d54bc5d36\") " pod="openstack/ovn-controller-s8m74-config-xdldl" Nov 22 12:16:23 crc kubenswrapper[4772]: I1122 12:16:23.710665 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/71a67a3d-5cfe-4ee9-89b7-771d54bc5d36-var-run\") pod \"ovn-controller-s8m74-config-xdldl\" (UID: \"71a67a3d-5cfe-4ee9-89b7-771d54bc5d36\") " pod="openstack/ovn-controller-s8m74-config-xdldl" Nov 22 12:16:23 crc kubenswrapper[4772]: I1122 12:16:23.710693 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvbbr\" (UniqueName: \"kubernetes.io/projected/71a67a3d-5cfe-4ee9-89b7-771d54bc5d36-kube-api-access-mvbbr\") pod \"ovn-controller-s8m74-config-xdldl\" (UID: \"71a67a3d-5cfe-4ee9-89b7-771d54bc5d36\") " pod="openstack/ovn-controller-s8m74-config-xdldl" Nov 22 12:16:23 crc kubenswrapper[4772]: I1122 12:16:23.710824 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/71a67a3d-5cfe-4ee9-89b7-771d54bc5d36-var-run-ovn\") pod \"ovn-controller-s8m74-config-xdldl\" (UID: \"71a67a3d-5cfe-4ee9-89b7-771d54bc5d36\") " pod="openstack/ovn-controller-s8m74-config-xdldl" Nov 22 12:16:23 crc kubenswrapper[4772]: I1122 12:16:23.710921 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/71a67a3d-5cfe-4ee9-89b7-771d54bc5d36-var-log-ovn\") pod \"ovn-controller-s8m74-config-xdldl\" (UID: \"71a67a3d-5cfe-4ee9-89b7-771d54bc5d36\") " pod="openstack/ovn-controller-s8m74-config-xdldl" Nov 22 12:16:23 crc kubenswrapper[4772]: I1122 12:16:23.711068 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/71a67a3d-5cfe-4ee9-89b7-771d54bc5d36-var-run\") pod \"ovn-controller-s8m74-config-xdldl\" (UID: \"71a67a3d-5cfe-4ee9-89b7-771d54bc5d36\") " pod="openstack/ovn-controller-s8m74-config-xdldl" Nov 22 12:16:23 crc kubenswrapper[4772]: I1122 12:16:23.711688 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/71a67a3d-5cfe-4ee9-89b7-771d54bc5d36-additional-scripts\") pod \"ovn-controller-s8m74-config-xdldl\" (UID: \"71a67a3d-5cfe-4ee9-89b7-771d54bc5d36\") " pod="openstack/ovn-controller-s8m74-config-xdldl" Nov 22 12:16:23 crc kubenswrapper[4772]: I1122 12:16:23.713307 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/71a67a3d-5cfe-4ee9-89b7-771d54bc5d36-scripts\") pod 
\"ovn-controller-s8m74-config-xdldl\" (UID: \"71a67a3d-5cfe-4ee9-89b7-771d54bc5d36\") " pod="openstack/ovn-controller-s8m74-config-xdldl" Nov 22 12:16:23 crc kubenswrapper[4772]: I1122 12:16:23.735823 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvbbr\" (UniqueName: \"kubernetes.io/projected/71a67a3d-5cfe-4ee9-89b7-771d54bc5d36-kube-api-access-mvbbr\") pod \"ovn-controller-s8m74-config-xdldl\" (UID: \"71a67a3d-5cfe-4ee9-89b7-771d54bc5d36\") " pod="openstack/ovn-controller-s8m74-config-xdldl" Nov 22 12:16:23 crc kubenswrapper[4772]: I1122 12:16:23.841955 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-s8m74-config-xdldl" Nov 22 12:16:23 crc kubenswrapper[4772]: I1122 12:16:23.865744 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-api-67f985d648-m9x49"] Nov 22 12:16:23 crc kubenswrapper[4772]: I1122 12:16:23.868516 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-api-67f985d648-m9x49" Nov 22 12:16:23 crc kubenswrapper[4772]: I1122 12:16:23.873384 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-api-scripts" Nov 22 12:16:23 crc kubenswrapper[4772]: I1122 12:16:23.876397 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-octavia-dockercfg-cpsv2" Nov 22 12:16:23 crc kubenswrapper[4772]: I1122 12:16:23.876744 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-api-config-data" Nov 22 12:16:23 crc kubenswrapper[4772]: I1122 12:16:23.881877 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-67f985d648-m9x49"] Nov 22 12:16:23 crc kubenswrapper[4772]: I1122 12:16:23.915510 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64c0135c-1c06-4137-b911-951fc37de7e1-config-data\") pod \"octavia-api-67f985d648-m9x49\" (UID: \"64c0135c-1c06-4137-b911-951fc37de7e1\") " pod="openstack/octavia-api-67f985d648-m9x49" Nov 22 12:16:23 crc kubenswrapper[4772]: I1122 12:16:23.915545 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/64c0135c-1c06-4137-b911-951fc37de7e1-config-data-merged\") pod \"octavia-api-67f985d648-m9x49\" (UID: \"64c0135c-1c06-4137-b911-951fc37de7e1\") " pod="openstack/octavia-api-67f985d648-m9x49" Nov 22 12:16:23 crc kubenswrapper[4772]: I1122 12:16:23.915638 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/64c0135c-1c06-4137-b911-951fc37de7e1-octavia-run\") pod \"octavia-api-67f985d648-m9x49\" (UID: \"64c0135c-1c06-4137-b911-951fc37de7e1\") " pod="openstack/octavia-api-67f985d648-m9x49" Nov 22 12:16:23 crc kubenswrapper[4772]: I1122 12:16:23.915658 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64c0135c-1c06-4137-b911-951fc37de7e1-scripts\") pod \"octavia-api-67f985d648-m9x49\" (UID: \"64c0135c-1c06-4137-b911-951fc37de7e1\") " pod="openstack/octavia-api-67f985d648-m9x49" Nov 22 12:16:23 crc kubenswrapper[4772]: I1122 12:16:23.915713 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/64c0135c-1c06-4137-b911-951fc37de7e1-combined-ca-bundle\") pod \"octavia-api-67f985d648-m9x49\" (UID: \"64c0135c-1c06-4137-b911-951fc37de7e1\") " pod="openstack/octavia-api-67f985d648-m9x49" Nov 22 12:16:24 crc kubenswrapper[4772]: I1122 12:16:24.017905 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64c0135c-1c06-4137-b911-951fc37de7e1-scripts\") pod \"octavia-api-67f985d648-m9x49\" (UID: \"64c0135c-1c06-4137-b911-951fc37de7e1\") " pod="openstack/octavia-api-67f985d648-m9x49" Nov 22 12:16:24 crc kubenswrapper[4772]: I1122 12:16:24.018039 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64c0135c-1c06-4137-b911-951fc37de7e1-combined-ca-bundle\") pod \"octavia-api-67f985d648-m9x49\" (UID: \"64c0135c-1c06-4137-b911-951fc37de7e1\") " pod="openstack/octavia-api-67f985d648-m9x49" Nov 22 12:16:24 crc kubenswrapper[4772]: I1122 12:16:24.018219 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64c0135c-1c06-4137-b911-951fc37de7e1-config-data\") pod \"octavia-api-67f985d648-m9x49\" (UID: \"64c0135c-1c06-4137-b911-951fc37de7e1\") " pod="openstack/octavia-api-67f985d648-m9x49" Nov 22 12:16:24 crc kubenswrapper[4772]: I1122 12:16:24.018242 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/64c0135c-1c06-4137-b911-951fc37de7e1-config-data-merged\") pod \"octavia-api-67f985d648-m9x49\" (UID: \"64c0135c-1c06-4137-b911-951fc37de7e1\") " pod="openstack/octavia-api-67f985d648-m9x49" Nov 22 12:16:24 crc kubenswrapper[4772]: I1122 12:16:24.018444 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/64c0135c-1c06-4137-b911-951fc37de7e1-octavia-run\") pod \"octavia-api-67f985d648-m9x49\" (UID: \"64c0135c-1c06-4137-b911-951fc37de7e1\") " pod="openstack/octavia-api-67f985d648-m9x49" Nov 22 12:16:24 crc kubenswrapper[4772]: I1122 12:16:24.019188 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/64c0135c-1c06-4137-b911-951fc37de7e1-octavia-run\") pod \"octavia-api-67f985d648-m9x49\" (UID: \"64c0135c-1c06-4137-b911-951fc37de7e1\") " pod="openstack/octavia-api-67f985d648-m9x49" Nov 22 12:16:24 crc kubenswrapper[4772]: I1122 12:16:24.020454 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/64c0135c-1c06-4137-b911-951fc37de7e1-config-data-merged\") pod \"octavia-api-67f985d648-m9x49\" (UID: \"64c0135c-1c06-4137-b911-951fc37de7e1\") " pod="openstack/octavia-api-67f985d648-m9x49" Nov 22 12:16:24 crc kubenswrapper[4772]: I1122 12:16:24.026157 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64c0135c-1c06-4137-b911-951fc37de7e1-combined-ca-bundle\") pod \"octavia-api-67f985d648-m9x49\" (UID: \"64c0135c-1c06-4137-b911-951fc37de7e1\") " pod="openstack/octavia-api-67f985d648-m9x49" Nov 22 12:16:24 crc kubenswrapper[4772]: I1122 12:16:24.026855 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64c0135c-1c06-4137-b911-951fc37de7e1-config-data\") pod 
\"octavia-api-67f985d648-m9x49\" (UID: \"64c0135c-1c06-4137-b911-951fc37de7e1\") " pod="openstack/octavia-api-67f985d648-m9x49" Nov 22 12:16:24 crc kubenswrapper[4772]: I1122 12:16:24.027185 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64c0135c-1c06-4137-b911-951fc37de7e1-scripts\") pod \"octavia-api-67f985d648-m9x49\" (UID: \"64c0135c-1c06-4137-b911-951fc37de7e1\") " pod="openstack/octavia-api-67f985d648-m9x49" Nov 22 12:16:24 crc kubenswrapper[4772]: I1122 12:16:24.282522 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-api-67f985d648-m9x49" Nov 22 12:16:24 crc kubenswrapper[4772]: I1122 12:16:24.363367 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-s8m74-config-xdldl"] Nov 22 12:16:24 crc kubenswrapper[4772]: I1122 12:16:24.617203 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-s8m74-config-xdldl" event={"ID":"71a67a3d-5cfe-4ee9-89b7-771d54bc5d36","Type":"ContainerStarted","Data":"2cbe457f0337a50fac6d26bc2337abbeb8d28e80d0a148f05c6aa5392fe3be77"} Nov 22 12:16:24 crc kubenswrapper[4772]: I1122 12:16:24.746990 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-67f985d648-m9x49"] Nov 22 12:16:24 crc kubenswrapper[4772]: W1122 12:16:24.757551 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod64c0135c_1c06_4137_b911_951fc37de7e1.slice/crio-dc4b16ad8cc7ab8388bf0ef955551429f7c5abfbcff1e81b6186a751cce059cc WatchSource:0}: Error finding container dc4b16ad8cc7ab8388bf0ef955551429f7c5abfbcff1e81b6186a751cce059cc: Status 404 returned error can't find the container with id dc4b16ad8cc7ab8388bf0ef955551429f7c5abfbcff1e81b6186a751cce059cc Nov 22 12:16:25 crc kubenswrapper[4772]: I1122 12:16:25.632798 4772 generic.go:334] "Generic (PLEG): container finished" podID="71a67a3d-5cfe-4ee9-89b7-771d54bc5d36" containerID="894d23048ff0a132aa4eca626b6a829f8777ad840d9d30544c27f62b5e810994" exitCode=0 Nov 22 12:16:25 crc kubenswrapper[4772]: I1122 12:16:25.632835 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-s8m74-config-xdldl" event={"ID":"71a67a3d-5cfe-4ee9-89b7-771d54bc5d36","Type":"ContainerDied","Data":"894d23048ff0a132aa4eca626b6a829f8777ad840d9d30544c27f62b5e810994"} Nov 22 12:16:25 crc kubenswrapper[4772]: I1122 12:16:25.635471 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-67f985d648-m9x49" event={"ID":"64c0135c-1c06-4137-b911-951fc37de7e1","Type":"ContainerStarted","Data":"dc4b16ad8cc7ab8388bf0ef955551429f7c5abfbcff1e81b6186a751cce059cc"} Nov 22 12:16:27 crc kubenswrapper[4772]: I1122 12:16:27.007575 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-s8m74-config-xdldl" Nov 22 12:16:27 crc kubenswrapper[4772]: I1122 12:16:27.187451 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/71a67a3d-5cfe-4ee9-89b7-771d54bc5d36-var-run\") pod \"71a67a3d-5cfe-4ee9-89b7-771d54bc5d36\" (UID: \"71a67a3d-5cfe-4ee9-89b7-771d54bc5d36\") " Nov 22 12:16:27 crc kubenswrapper[4772]: I1122 12:16:27.187811 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/71a67a3d-5cfe-4ee9-89b7-771d54bc5d36-additional-scripts\") pod \"71a67a3d-5cfe-4ee9-89b7-771d54bc5d36\" (UID: \"71a67a3d-5cfe-4ee9-89b7-771d54bc5d36\") " Nov 22 12:16:27 crc kubenswrapper[4772]: I1122 12:16:27.187612 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/71a67a3d-5cfe-4ee9-89b7-771d54bc5d36-var-run" (OuterVolumeSpecName: "var-run") pod "71a67a3d-5cfe-4ee9-89b7-771d54bc5d36" (UID: "71a67a3d-5cfe-4ee9-89b7-771d54bc5d36"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 12:16:27 crc kubenswrapper[4772]: I1122 12:16:27.187907 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/71a67a3d-5cfe-4ee9-89b7-771d54bc5d36-var-log-ovn\") pod \"71a67a3d-5cfe-4ee9-89b7-771d54bc5d36\" (UID: \"71a67a3d-5cfe-4ee9-89b7-771d54bc5d36\") " Nov 22 12:16:27 crc kubenswrapper[4772]: I1122 12:16:27.187997 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvbbr\" (UniqueName: \"kubernetes.io/projected/71a67a3d-5cfe-4ee9-89b7-771d54bc5d36-kube-api-access-mvbbr\") pod \"71a67a3d-5cfe-4ee9-89b7-771d54bc5d36\" (UID: \"71a67a3d-5cfe-4ee9-89b7-771d54bc5d36\") " Nov 22 12:16:27 crc kubenswrapper[4772]: I1122 12:16:27.188074 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/71a67a3d-5cfe-4ee9-89b7-771d54bc5d36-scripts\") pod \"71a67a3d-5cfe-4ee9-89b7-771d54bc5d36\" (UID: \"71a67a3d-5cfe-4ee9-89b7-771d54bc5d36\") " Nov 22 12:16:27 crc kubenswrapper[4772]: I1122 12:16:27.188089 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/71a67a3d-5cfe-4ee9-89b7-771d54bc5d36-var-run-ovn\") pod \"71a67a3d-5cfe-4ee9-89b7-771d54bc5d36\" (UID: \"71a67a3d-5cfe-4ee9-89b7-771d54bc5d36\") " Nov 22 12:16:27 crc kubenswrapper[4772]: I1122 12:16:27.188079 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/71a67a3d-5cfe-4ee9-89b7-771d54bc5d36-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "71a67a3d-5cfe-4ee9-89b7-771d54bc5d36" (UID: "71a67a3d-5cfe-4ee9-89b7-771d54bc5d36"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 12:16:27 crc kubenswrapper[4772]: I1122 12:16:27.188161 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/71a67a3d-5cfe-4ee9-89b7-771d54bc5d36-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "71a67a3d-5cfe-4ee9-89b7-771d54bc5d36" (UID: "71a67a3d-5cfe-4ee9-89b7-771d54bc5d36"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 12:16:27 crc kubenswrapper[4772]: I1122 12:16:27.188789 4772 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/71a67a3d-5cfe-4ee9-89b7-771d54bc5d36-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 22 12:16:27 crc kubenswrapper[4772]: I1122 12:16:27.188815 4772 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/71a67a3d-5cfe-4ee9-89b7-771d54bc5d36-var-run\") on node \"crc\" DevicePath \"\"" Nov 22 12:16:27 crc kubenswrapper[4772]: I1122 12:16:27.188849 4772 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/71a67a3d-5cfe-4ee9-89b7-771d54bc5d36-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 22 12:16:27 crc kubenswrapper[4772]: I1122 12:16:27.188840 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71a67a3d-5cfe-4ee9-89b7-771d54bc5d36-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "71a67a3d-5cfe-4ee9-89b7-771d54bc5d36" (UID: "71a67a3d-5cfe-4ee9-89b7-771d54bc5d36"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 12:16:27 crc kubenswrapper[4772]: I1122 12:16:27.189225 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71a67a3d-5cfe-4ee9-89b7-771d54bc5d36-scripts" (OuterVolumeSpecName: "scripts") pod "71a67a3d-5cfe-4ee9-89b7-771d54bc5d36" (UID: "71a67a3d-5cfe-4ee9-89b7-771d54bc5d36"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 12:16:27 crc kubenswrapper[4772]: I1122 12:16:27.208567 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71a67a3d-5cfe-4ee9-89b7-771d54bc5d36-kube-api-access-mvbbr" (OuterVolumeSpecName: "kube-api-access-mvbbr") pod "71a67a3d-5cfe-4ee9-89b7-771d54bc5d36" (UID: "71a67a3d-5cfe-4ee9-89b7-771d54bc5d36"). InnerVolumeSpecName "kube-api-access-mvbbr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:16:27 crc kubenswrapper[4772]: I1122 12:16:27.290382 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvbbr\" (UniqueName: \"kubernetes.io/projected/71a67a3d-5cfe-4ee9-89b7-771d54bc5d36-kube-api-access-mvbbr\") on node \"crc\" DevicePath \"\"" Nov 22 12:16:27 crc kubenswrapper[4772]: I1122 12:16:27.290416 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/71a67a3d-5cfe-4ee9-89b7-771d54bc5d36-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 12:16:27 crc kubenswrapper[4772]: I1122 12:16:27.290425 4772 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/71a67a3d-5cfe-4ee9-89b7-771d54bc5d36-additional-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 12:16:27 crc kubenswrapper[4772]: I1122 12:16:27.656996 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-s8m74-config-xdldl" event={"ID":"71a67a3d-5cfe-4ee9-89b7-771d54bc5d36","Type":"ContainerDied","Data":"2cbe457f0337a50fac6d26bc2337abbeb8d28e80d0a148f05c6aa5392fe3be77"} Nov 22 12:16:27 crc kubenswrapper[4772]: I1122 12:16:27.657066 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2cbe457f0337a50fac6d26bc2337abbeb8d28e80d0a148f05c6aa5392fe3be77" Nov 22 12:16:27 crc kubenswrapper[4772]: I1122 12:16:27.657144 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-s8m74-config-xdldl" Nov 22 12:16:28 crc kubenswrapper[4772]: I1122 12:16:28.100668 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-s8m74-config-xdldl"] Nov 22 12:16:28 crc kubenswrapper[4772]: I1122 12:16:28.108154 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-s8m74-config-xdldl"] Nov 22 12:16:28 crc kubenswrapper[4772]: I1122 12:16:28.173422 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-s8m74-config-pchxm"] Nov 22 12:16:28 crc kubenswrapper[4772]: E1122 12:16:28.174027 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71a67a3d-5cfe-4ee9-89b7-771d54bc5d36" containerName="ovn-config" Nov 22 12:16:28 crc kubenswrapper[4772]: I1122 12:16:28.174063 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="71a67a3d-5cfe-4ee9-89b7-771d54bc5d36" containerName="ovn-config" Nov 22 12:16:28 crc kubenswrapper[4772]: I1122 12:16:28.174416 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="71a67a3d-5cfe-4ee9-89b7-771d54bc5d36" containerName="ovn-config" Nov 22 12:16:28 crc kubenswrapper[4772]: I1122 12:16:28.175550 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-s8m74-config-pchxm" Nov 22 12:16:28 crc kubenswrapper[4772]: I1122 12:16:28.177967 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Nov 22 12:16:28 crc kubenswrapper[4772]: I1122 12:16:28.202918 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-s8m74-config-pchxm"] Nov 22 12:16:28 crc kubenswrapper[4772]: I1122 12:16:28.313073 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/61754941-f93b-4293-bfb7-860b83e912df-additional-scripts\") pod \"ovn-controller-s8m74-config-pchxm\" (UID: \"61754941-f93b-4293-bfb7-860b83e912df\") " pod="openstack/ovn-controller-s8m74-config-pchxm" Nov 22 12:16:28 crc kubenswrapper[4772]: I1122 12:16:28.313183 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/61754941-f93b-4293-bfb7-860b83e912df-scripts\") pod \"ovn-controller-s8m74-config-pchxm\" (UID: \"61754941-f93b-4293-bfb7-860b83e912df\") " pod="openstack/ovn-controller-s8m74-config-pchxm" Nov 22 12:16:28 crc kubenswrapper[4772]: I1122 12:16:28.313233 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/61754941-f93b-4293-bfb7-860b83e912df-var-run\") pod \"ovn-controller-s8m74-config-pchxm\" (UID: \"61754941-f93b-4293-bfb7-860b83e912df\") " pod="openstack/ovn-controller-s8m74-config-pchxm" Nov 22 12:16:28 crc kubenswrapper[4772]: I1122 12:16:28.313418 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvftn\" (UniqueName: \"kubernetes.io/projected/61754941-f93b-4293-bfb7-860b83e912df-kube-api-access-wvftn\") pod \"ovn-controller-s8m74-config-pchxm\" (UID: \"61754941-f93b-4293-bfb7-860b83e912df\") " pod="openstack/ovn-controller-s8m74-config-pchxm" Nov 22 12:16:28 crc kubenswrapper[4772]: I1122 12:16:28.313513 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/61754941-f93b-4293-bfb7-860b83e912df-var-log-ovn\") pod \"ovn-controller-s8m74-config-pchxm\" (UID: \"61754941-f93b-4293-bfb7-860b83e912df\") " pod="openstack/ovn-controller-s8m74-config-pchxm" Nov 22 12:16:28 crc kubenswrapper[4772]: I1122 12:16:28.313578 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/61754941-f93b-4293-bfb7-860b83e912df-var-run-ovn\") pod \"ovn-controller-s8m74-config-pchxm\" (UID: \"61754941-f93b-4293-bfb7-860b83e912df\") " pod="openstack/ovn-controller-s8m74-config-pchxm" Nov 22 12:16:28 crc kubenswrapper[4772]: I1122 12:16:28.415471 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/61754941-f93b-4293-bfb7-860b83e912df-var-run\") pod \"ovn-controller-s8m74-config-pchxm\" (UID: \"61754941-f93b-4293-bfb7-860b83e912df\") " pod="openstack/ovn-controller-s8m74-config-pchxm" Nov 22 12:16:28 crc kubenswrapper[4772]: I1122 12:16:28.415576 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvftn\" (UniqueName: 
\"kubernetes.io/projected/61754941-f93b-4293-bfb7-860b83e912df-kube-api-access-wvftn\") pod \"ovn-controller-s8m74-config-pchxm\" (UID: \"61754941-f93b-4293-bfb7-860b83e912df\") " pod="openstack/ovn-controller-s8m74-config-pchxm" Nov 22 12:16:28 crc kubenswrapper[4772]: I1122 12:16:28.415617 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/61754941-f93b-4293-bfb7-860b83e912df-var-log-ovn\") pod \"ovn-controller-s8m74-config-pchxm\" (UID: \"61754941-f93b-4293-bfb7-860b83e912df\") " pod="openstack/ovn-controller-s8m74-config-pchxm" Nov 22 12:16:28 crc kubenswrapper[4772]: I1122 12:16:28.415660 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/61754941-f93b-4293-bfb7-860b83e912df-var-run-ovn\") pod \"ovn-controller-s8m74-config-pchxm\" (UID: \"61754941-f93b-4293-bfb7-860b83e912df\") " pod="openstack/ovn-controller-s8m74-config-pchxm" Nov 22 12:16:28 crc kubenswrapper[4772]: I1122 12:16:28.415726 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/61754941-f93b-4293-bfb7-860b83e912df-additional-scripts\") pod \"ovn-controller-s8m74-config-pchxm\" (UID: \"61754941-f93b-4293-bfb7-860b83e912df\") " pod="openstack/ovn-controller-s8m74-config-pchxm" Nov 22 12:16:28 crc kubenswrapper[4772]: I1122 12:16:28.415760 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/61754941-f93b-4293-bfb7-860b83e912df-scripts\") pod \"ovn-controller-s8m74-config-pchxm\" (UID: \"61754941-f93b-4293-bfb7-860b83e912df\") " pod="openstack/ovn-controller-s8m74-config-pchxm" Nov 22 12:16:28 crc kubenswrapper[4772]: I1122 12:16:28.415956 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/61754941-f93b-4293-bfb7-860b83e912df-var-log-ovn\") pod \"ovn-controller-s8m74-config-pchxm\" (UID: \"61754941-f93b-4293-bfb7-860b83e912df\") " pod="openstack/ovn-controller-s8m74-config-pchxm" Nov 22 12:16:28 crc kubenswrapper[4772]: I1122 12:16:28.415956 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/61754941-f93b-4293-bfb7-860b83e912df-var-run-ovn\") pod \"ovn-controller-s8m74-config-pchxm\" (UID: \"61754941-f93b-4293-bfb7-860b83e912df\") " pod="openstack/ovn-controller-s8m74-config-pchxm" Nov 22 12:16:28 crc kubenswrapper[4772]: I1122 12:16:28.415971 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/61754941-f93b-4293-bfb7-860b83e912df-var-run\") pod \"ovn-controller-s8m74-config-pchxm\" (UID: \"61754941-f93b-4293-bfb7-860b83e912df\") " pod="openstack/ovn-controller-s8m74-config-pchxm" Nov 22 12:16:28 crc kubenswrapper[4772]: I1122 12:16:28.416900 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/61754941-f93b-4293-bfb7-860b83e912df-additional-scripts\") pod \"ovn-controller-s8m74-config-pchxm\" (UID: \"61754941-f93b-4293-bfb7-860b83e912df\") " pod="openstack/ovn-controller-s8m74-config-pchxm" Nov 22 12:16:28 crc kubenswrapper[4772]: I1122 12:16:28.417848 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/61754941-f93b-4293-bfb7-860b83e912df-scripts\") pod \"ovn-controller-s8m74-config-pchxm\" (UID: \"61754941-f93b-4293-bfb7-860b83e912df\") " pod="openstack/ovn-controller-s8m74-config-pchxm" Nov 22 12:16:28 crc kubenswrapper[4772]: I1122 12:16:28.437353 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvftn\" (UniqueName: \"kubernetes.io/projected/61754941-f93b-4293-bfb7-860b83e912df-kube-api-access-wvftn\") pod \"ovn-controller-s8m74-config-pchxm\" (UID: \"61754941-f93b-4293-bfb7-860b83e912df\") " pod="openstack/ovn-controller-s8m74-config-pchxm" Nov 22 12:16:28 crc kubenswrapper[4772]: I1122 12:16:28.460782 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-s8m74" Nov 22 12:16:28 crc kubenswrapper[4772]: I1122 12:16:28.497378 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-s8m74-config-pchxm" Nov 22 12:16:29 crc kubenswrapper[4772]: I1122 12:16:29.427775 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71a67a3d-5cfe-4ee9-89b7-771d54bc5d36" path="/var/lib/kubelet/pods/71a67a3d-5cfe-4ee9-89b7-771d54bc5d36/volumes" Nov 22 12:16:34 crc kubenswrapper[4772]: I1122 12:16:34.291257 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-s8m74-config-pchxm"] Nov 22 12:16:34 crc kubenswrapper[4772]: I1122 12:16:34.413706 4772 scope.go:117] "RemoveContainer" containerID="0acc780374533c91fb6e94ed6fa4eb88f1ad8ecfc715db40059fc9377d70e080" Nov 22 12:16:34 crc kubenswrapper[4772]: E1122 12:16:34.414405 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:16:34 crc kubenswrapper[4772]: I1122 12:16:34.735559 4772 generic.go:334] "Generic (PLEG): container finished" podID="64c0135c-1c06-4137-b911-951fc37de7e1" containerID="db3a9ca30c67677c529684bffb1d4a326fcf0e693f439fd624e1a17e9e6c3bdb" exitCode=0 Nov 22 12:16:34 crc kubenswrapper[4772]: I1122 12:16:34.735657 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-67f985d648-m9x49" event={"ID":"64c0135c-1c06-4137-b911-951fc37de7e1","Type":"ContainerDied","Data":"db3a9ca30c67677c529684bffb1d4a326fcf0e693f439fd624e1a17e9e6c3bdb"} Nov 22 12:16:34 crc kubenswrapper[4772]: I1122 12:16:34.739633 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-s8m74-config-pchxm" event={"ID":"61754941-f93b-4293-bfb7-860b83e912df","Type":"ContainerStarted","Data":"e2ddbbd2eaf0f4baea9efc11d8fdd178d399f4e55f377a2dc5ea86fb06c19e47"} Nov 22 12:16:34 crc kubenswrapper[4772]: I1122 12:16:34.739684 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-s8m74-config-pchxm" event={"ID":"61754941-f93b-4293-bfb7-860b83e912df","Type":"ContainerStarted","Data":"6e86a180909d7360f37595a4eec7633ac16c31f1b0eb3fdaead85c63204e5132"} Nov 22 12:16:34 crc kubenswrapper[4772]: I1122 12:16:34.783358 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-s8m74-config-pchxm" podStartSLOduration=6.783335603 podStartE2EDuration="6.783335603s" 
podCreationTimestamp="2025-11-22 12:16:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:16:34.776735509 +0000 UTC m=+5915.016180003" watchObservedRunningTime="2025-11-22 12:16:34.783335603 +0000 UTC m=+5915.022780097" Nov 22 12:16:35 crc kubenswrapper[4772]: I1122 12:16:35.753757 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-67f985d648-m9x49" event={"ID":"64c0135c-1c06-4137-b911-951fc37de7e1","Type":"ContainerStarted","Data":"c64d39e1370cb9ba33d691e3552f6c330639c171fc80c28f13c89e3eaabe4275"} Nov 22 12:16:35 crc kubenswrapper[4772]: I1122 12:16:35.754371 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-67f985d648-m9x49" event={"ID":"64c0135c-1c06-4137-b911-951fc37de7e1","Type":"ContainerStarted","Data":"07834672fee7c3bbf3cb9c6265f3e4dcb438e5a1d99eb5c921bd93f5f3c194cb"} Nov 22 12:16:35 crc kubenswrapper[4772]: I1122 12:16:35.754445 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-67f985d648-m9x49" Nov 22 12:16:35 crc kubenswrapper[4772]: I1122 12:16:35.754477 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-67f985d648-m9x49" Nov 22 12:16:35 crc kubenswrapper[4772]: I1122 12:16:35.755506 4772 generic.go:334] "Generic (PLEG): container finished" podID="61754941-f93b-4293-bfb7-860b83e912df" containerID="e2ddbbd2eaf0f4baea9efc11d8fdd178d399f4e55f377a2dc5ea86fb06c19e47" exitCode=0 Nov 22 12:16:35 crc kubenswrapper[4772]: I1122 12:16:35.755571 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-s8m74-config-pchxm" event={"ID":"61754941-f93b-4293-bfb7-860b83e912df","Type":"ContainerDied","Data":"e2ddbbd2eaf0f4baea9efc11d8fdd178d399f4e55f377a2dc5ea86fb06c19e47"} Nov 22 12:16:35 crc kubenswrapper[4772]: I1122 12:16:35.782325 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-api-67f985d648-m9x49" podStartSLOduration=3.663521688 podStartE2EDuration="12.782297376s" podCreationTimestamp="2025-11-22 12:16:23 +0000 UTC" firstStartedPulling="2025-11-22 12:16:24.760178198 +0000 UTC m=+5904.999622692" lastFinishedPulling="2025-11-22 12:16:33.878953886 +0000 UTC m=+5914.118398380" observedRunningTime="2025-11-22 12:16:35.774798659 +0000 UTC m=+5916.014243153" watchObservedRunningTime="2025-11-22 12:16:35.782297376 +0000 UTC m=+5916.021741870" Nov 22 12:16:37 crc kubenswrapper[4772]: I1122 12:16:37.144882 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-s8m74-config-pchxm" Nov 22 12:16:37 crc kubenswrapper[4772]: I1122 12:16:37.220382 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvftn\" (UniqueName: \"kubernetes.io/projected/61754941-f93b-4293-bfb7-860b83e912df-kube-api-access-wvftn\") pod \"61754941-f93b-4293-bfb7-860b83e912df\" (UID: \"61754941-f93b-4293-bfb7-860b83e912df\") " Nov 22 12:16:37 crc kubenswrapper[4772]: I1122 12:16:37.220655 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/61754941-f93b-4293-bfb7-860b83e912df-scripts\") pod \"61754941-f93b-4293-bfb7-860b83e912df\" (UID: \"61754941-f93b-4293-bfb7-860b83e912df\") " Nov 22 12:16:37 crc kubenswrapper[4772]: I1122 12:16:37.220792 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/61754941-f93b-4293-bfb7-860b83e912df-var-run\") pod \"61754941-f93b-4293-bfb7-860b83e912df\" (UID: \"61754941-f93b-4293-bfb7-860b83e912df\") " Nov 22 12:16:37 crc kubenswrapper[4772]: I1122 12:16:37.220868 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/61754941-f93b-4293-bfb7-860b83e912df-var-run-ovn\") pod \"61754941-f93b-4293-bfb7-860b83e912df\" (UID: \"61754941-f93b-4293-bfb7-860b83e912df\") " Nov 22 12:16:37 crc kubenswrapper[4772]: I1122 12:16:37.220904 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61754941-f93b-4293-bfb7-860b83e912df-var-run" (OuterVolumeSpecName: "var-run") pod "61754941-f93b-4293-bfb7-860b83e912df" (UID: "61754941-f93b-4293-bfb7-860b83e912df"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 12:16:37 crc kubenswrapper[4772]: I1122 12:16:37.220961 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/61754941-f93b-4293-bfb7-860b83e912df-var-log-ovn\") pod \"61754941-f93b-4293-bfb7-860b83e912df\" (UID: \"61754941-f93b-4293-bfb7-860b83e912df\") " Nov 22 12:16:37 crc kubenswrapper[4772]: I1122 12:16:37.221029 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61754941-f93b-4293-bfb7-860b83e912df-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "61754941-f93b-4293-bfb7-860b83e912df" (UID: "61754941-f93b-4293-bfb7-860b83e912df"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 12:16:37 crc kubenswrapper[4772]: I1122 12:16:37.221124 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/61754941-f93b-4293-bfb7-860b83e912df-additional-scripts\") pod \"61754941-f93b-4293-bfb7-860b83e912df\" (UID: \"61754941-f93b-4293-bfb7-860b83e912df\") " Nov 22 12:16:37 crc kubenswrapper[4772]: I1122 12:16:37.221215 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61754941-f93b-4293-bfb7-860b83e912df-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "61754941-f93b-4293-bfb7-860b83e912df" (UID: "61754941-f93b-4293-bfb7-860b83e912df"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 12:16:37 crc kubenswrapper[4772]: I1122 12:16:37.221721 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61754941-f93b-4293-bfb7-860b83e912df-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "61754941-f93b-4293-bfb7-860b83e912df" (UID: "61754941-f93b-4293-bfb7-860b83e912df"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 12:16:37 crc kubenswrapper[4772]: I1122 12:16:37.221811 4772 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/61754941-f93b-4293-bfb7-860b83e912df-var-run\") on node \"crc\" DevicePath \"\"" Nov 22 12:16:37 crc kubenswrapper[4772]: I1122 12:16:37.221864 4772 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/61754941-f93b-4293-bfb7-860b83e912df-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 22 12:16:37 crc kubenswrapper[4772]: I1122 12:16:37.221878 4772 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/61754941-f93b-4293-bfb7-860b83e912df-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 22 12:16:37 crc kubenswrapper[4772]: I1122 12:16:37.221893 4772 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/61754941-f93b-4293-bfb7-860b83e912df-additional-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 12:16:37 crc kubenswrapper[4772]: I1122 12:16:37.221959 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61754941-f93b-4293-bfb7-860b83e912df-scripts" (OuterVolumeSpecName: "scripts") pod "61754941-f93b-4293-bfb7-860b83e912df" (UID: "61754941-f93b-4293-bfb7-860b83e912df"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 12:16:37 crc kubenswrapper[4772]: I1122 12:16:37.227305 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61754941-f93b-4293-bfb7-860b83e912df-kube-api-access-wvftn" (OuterVolumeSpecName: "kube-api-access-wvftn") pod "61754941-f93b-4293-bfb7-860b83e912df" (UID: "61754941-f93b-4293-bfb7-860b83e912df"). InnerVolumeSpecName "kube-api-access-wvftn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:16:37 crc kubenswrapper[4772]: I1122 12:16:37.324658 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvftn\" (UniqueName: \"kubernetes.io/projected/61754941-f93b-4293-bfb7-860b83e912df-kube-api-access-wvftn\") on node \"crc\" DevicePath \"\"" Nov 22 12:16:37 crc kubenswrapper[4772]: I1122 12:16:37.324709 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/61754941-f93b-4293-bfb7-860b83e912df-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 12:16:37 crc kubenswrapper[4772]: I1122 12:16:37.372923 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-s8m74-config-pchxm"] Nov 22 12:16:37 crc kubenswrapper[4772]: I1122 12:16:37.392580 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-s8m74-config-pchxm"] Nov 22 12:16:37 crc kubenswrapper[4772]: I1122 12:16:37.425339 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61754941-f93b-4293-bfb7-860b83e912df" path="/var/lib/kubelet/pods/61754941-f93b-4293-bfb7-860b83e912df/volumes" Nov 22 12:16:37 crc kubenswrapper[4772]: I1122 12:16:37.777382 4772 scope.go:117] "RemoveContainer" containerID="e2ddbbd2eaf0f4baea9efc11d8fdd178d399f4e55f377a2dc5ea86fb06c19e47" Nov 22 12:16:37 crc kubenswrapper[4772]: I1122 12:16:37.777419 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-s8m74-config-pchxm" Nov 22 12:16:43 crc kubenswrapper[4772]: I1122 12:16:43.573089 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-67f985d648-m9x49" Nov 22 12:16:47 crc kubenswrapper[4772]: I1122 12:16:47.273565 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-rsyslog-2zvhj"] Nov 22 12:16:47 crc kubenswrapper[4772]: E1122 12:16:47.274773 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61754941-f93b-4293-bfb7-860b83e912df" containerName="ovn-config" Nov 22 12:16:47 crc kubenswrapper[4772]: I1122 12:16:47.274799 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="61754941-f93b-4293-bfb7-860b83e912df" containerName="ovn-config" Nov 22 12:16:47 crc kubenswrapper[4772]: I1122 12:16:47.275275 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="61754941-f93b-4293-bfb7-860b83e912df" containerName="ovn-config" Nov 22 12:16:47 crc kubenswrapper[4772]: I1122 12:16:47.277324 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-rsyslog-2zvhj" Nov 22 12:16:47 crc kubenswrapper[4772]: I1122 12:16:47.280924 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-rsyslog-config-data" Nov 22 12:16:47 crc kubenswrapper[4772]: I1122 12:16:47.281749 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"octavia-hmport-map" Nov 22 12:16:47 crc kubenswrapper[4772]: I1122 12:16:47.281782 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-rsyslog-scripts" Nov 22 12:16:47 crc kubenswrapper[4772]: I1122 12:16:47.308581 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-2zvhj"] Nov 22 12:16:47 crc kubenswrapper[4772]: I1122 12:16:47.371657 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/623bfb17-7669-4a36-bdf9-baae1d6afbf9-hm-ports\") pod \"octavia-rsyslog-2zvhj\" (UID: \"623bfb17-7669-4a36-bdf9-baae1d6afbf9\") " pod="openstack/octavia-rsyslog-2zvhj" Nov 22 12:16:47 crc kubenswrapper[4772]: I1122 12:16:47.371725 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/623bfb17-7669-4a36-bdf9-baae1d6afbf9-config-data-merged\") pod \"octavia-rsyslog-2zvhj\" (UID: \"623bfb17-7669-4a36-bdf9-baae1d6afbf9\") " pod="openstack/octavia-rsyslog-2zvhj" Nov 22 12:16:47 crc kubenswrapper[4772]: I1122 12:16:47.371834 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/623bfb17-7669-4a36-bdf9-baae1d6afbf9-scripts\") pod \"octavia-rsyslog-2zvhj\" (UID: \"623bfb17-7669-4a36-bdf9-baae1d6afbf9\") " pod="openstack/octavia-rsyslog-2zvhj" Nov 22 12:16:47 crc kubenswrapper[4772]: I1122 12:16:47.371898 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/623bfb17-7669-4a36-bdf9-baae1d6afbf9-config-data\") pod \"octavia-rsyslog-2zvhj\" (UID: \"623bfb17-7669-4a36-bdf9-baae1d6afbf9\") " pod="openstack/octavia-rsyslog-2zvhj" Nov 22 12:16:47 crc kubenswrapper[4772]: I1122 12:16:47.473548 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/623bfb17-7669-4a36-bdf9-baae1d6afbf9-config-data-merged\") pod \"octavia-rsyslog-2zvhj\" (UID: \"623bfb17-7669-4a36-bdf9-baae1d6afbf9\") " pod="openstack/octavia-rsyslog-2zvhj" Nov 22 12:16:47 crc kubenswrapper[4772]: I1122 12:16:47.473749 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/623bfb17-7669-4a36-bdf9-baae1d6afbf9-scripts\") pod \"octavia-rsyslog-2zvhj\" (UID: \"623bfb17-7669-4a36-bdf9-baae1d6afbf9\") " pod="openstack/octavia-rsyslog-2zvhj" Nov 22 12:16:47 crc kubenswrapper[4772]: I1122 12:16:47.473846 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/623bfb17-7669-4a36-bdf9-baae1d6afbf9-config-data\") pod \"octavia-rsyslog-2zvhj\" (UID: \"623bfb17-7669-4a36-bdf9-baae1d6afbf9\") " pod="openstack/octavia-rsyslog-2zvhj" Nov 22 12:16:47 crc kubenswrapper[4772]: I1122 12:16:47.473923 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: 
\"kubernetes.io/configmap/623bfb17-7669-4a36-bdf9-baae1d6afbf9-hm-ports\") pod \"octavia-rsyslog-2zvhj\" (UID: \"623bfb17-7669-4a36-bdf9-baae1d6afbf9\") " pod="openstack/octavia-rsyslog-2zvhj" Nov 22 12:16:47 crc kubenswrapper[4772]: I1122 12:16:47.474582 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/623bfb17-7669-4a36-bdf9-baae1d6afbf9-config-data-merged\") pod \"octavia-rsyslog-2zvhj\" (UID: \"623bfb17-7669-4a36-bdf9-baae1d6afbf9\") " pod="openstack/octavia-rsyslog-2zvhj" Nov 22 12:16:47 crc kubenswrapper[4772]: I1122 12:16:47.475389 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/623bfb17-7669-4a36-bdf9-baae1d6afbf9-hm-ports\") pod \"octavia-rsyslog-2zvhj\" (UID: \"623bfb17-7669-4a36-bdf9-baae1d6afbf9\") " pod="openstack/octavia-rsyslog-2zvhj" Nov 22 12:16:47 crc kubenswrapper[4772]: I1122 12:16:47.485628 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/623bfb17-7669-4a36-bdf9-baae1d6afbf9-scripts\") pod \"octavia-rsyslog-2zvhj\" (UID: \"623bfb17-7669-4a36-bdf9-baae1d6afbf9\") " pod="openstack/octavia-rsyslog-2zvhj" Nov 22 12:16:47 crc kubenswrapper[4772]: I1122 12:16:47.493587 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/623bfb17-7669-4a36-bdf9-baae1d6afbf9-config-data\") pod \"octavia-rsyslog-2zvhj\" (UID: \"623bfb17-7669-4a36-bdf9-baae1d6afbf9\") " pod="openstack/octavia-rsyslog-2zvhj" Nov 22 12:16:47 crc kubenswrapper[4772]: I1122 12:16:47.596486 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-rsyslog-2zvhj" Nov 22 12:16:47 crc kubenswrapper[4772]: I1122 12:16:47.880660 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-image-upload-59f8cff499-vwllg"] Nov 22 12:16:47 crc kubenswrapper[4772]: I1122 12:16:47.890214 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-59f8cff499-vwllg" Nov 22 12:16:47 crc kubenswrapper[4772]: I1122 12:16:47.899195 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-config-data" Nov 22 12:16:47 crc kubenswrapper[4772]: I1122 12:16:47.906346 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-59f8cff499-vwllg"] Nov 22 12:16:47 crc kubenswrapper[4772]: I1122 12:16:47.953516 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-67f985d648-m9x49" Nov 22 12:16:47 crc kubenswrapper[4772]: I1122 12:16:47.985791 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/695c0702-b8d4-4bdf-8268-3e5898f3d789-httpd-config\") pod \"octavia-image-upload-59f8cff499-vwllg\" (UID: \"695c0702-b8d4-4bdf-8268-3e5898f3d789\") " pod="openstack/octavia-image-upload-59f8cff499-vwllg" Nov 22 12:16:47 crc kubenswrapper[4772]: I1122 12:16:47.986002 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/695c0702-b8d4-4bdf-8268-3e5898f3d789-amphora-image\") pod \"octavia-image-upload-59f8cff499-vwllg\" (UID: \"695c0702-b8d4-4bdf-8268-3e5898f3d789\") " pod="openstack/octavia-image-upload-59f8cff499-vwllg" Nov 22 12:16:48 crc kubenswrapper[4772]: I1122 12:16:48.088862 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/695c0702-b8d4-4bdf-8268-3e5898f3d789-amphora-image\") pod \"octavia-image-upload-59f8cff499-vwllg\" (UID: \"695c0702-b8d4-4bdf-8268-3e5898f3d789\") " pod="openstack/octavia-image-upload-59f8cff499-vwllg" Nov 22 12:16:48 crc kubenswrapper[4772]: I1122 12:16:48.089438 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/695c0702-b8d4-4bdf-8268-3e5898f3d789-httpd-config\") pod \"octavia-image-upload-59f8cff499-vwllg\" (UID: \"695c0702-b8d4-4bdf-8268-3e5898f3d789\") " pod="openstack/octavia-image-upload-59f8cff499-vwllg" Nov 22 12:16:48 crc kubenswrapper[4772]: I1122 12:16:48.090248 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/695c0702-b8d4-4bdf-8268-3e5898f3d789-amphora-image\") pod \"octavia-image-upload-59f8cff499-vwllg\" (UID: \"695c0702-b8d4-4bdf-8268-3e5898f3d789\") " pod="openstack/octavia-image-upload-59f8cff499-vwllg" Nov 22 12:16:48 crc kubenswrapper[4772]: I1122 12:16:48.109408 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/695c0702-b8d4-4bdf-8268-3e5898f3d789-httpd-config\") pod \"octavia-image-upload-59f8cff499-vwllg\" (UID: \"695c0702-b8d4-4bdf-8268-3e5898f3d789\") " pod="openstack/octavia-image-upload-59f8cff499-vwllg" Nov 22 12:16:48 crc kubenswrapper[4772]: I1122 12:16:48.156107 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-2zvhj"] Nov 22 12:16:48 crc kubenswrapper[4772]: I1122 12:16:48.218841 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-59f8cff499-vwllg" Nov 22 12:16:48 crc kubenswrapper[4772]: I1122 12:16:48.850559 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-59f8cff499-vwllg"] Nov 22 12:16:48 crc kubenswrapper[4772]: W1122 12:16:48.860431 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod695c0702_b8d4_4bdf_8268_3e5898f3d789.slice/crio-8a1b8f455d4951474d927b43b14b135ee1f709c6de368fe5ee87324d84d03d1b WatchSource:0}: Error finding container 8a1b8f455d4951474d927b43b14b135ee1f709c6de368fe5ee87324d84d03d1b: Status 404 returned error can't find the container with id 8a1b8f455d4951474d927b43b14b135ee1f709c6de368fe5ee87324d84d03d1b Nov 22 12:16:48 crc kubenswrapper[4772]: I1122 12:16:48.887345 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-2zvhj" event={"ID":"623bfb17-7669-4a36-bdf9-baae1d6afbf9","Type":"ContainerStarted","Data":"934a5d3fb395921f89236933e9513e5553136799aadc7a0a7ab227fcc10f134a"} Nov 22 12:16:48 crc kubenswrapper[4772]: I1122 12:16:48.888335 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-vwllg" event={"ID":"695c0702-b8d4-4bdf-8268-3e5898f3d789","Type":"ContainerStarted","Data":"8a1b8f455d4951474d927b43b14b135ee1f709c6de368fe5ee87324d84d03d1b"} Nov 22 12:16:49 crc kubenswrapper[4772]: I1122 12:16:49.419354 4772 scope.go:117] "RemoveContainer" containerID="0acc780374533c91fb6e94ed6fa4eb88f1ad8ecfc715db40059fc9377d70e080" Nov 22 12:16:49 crc kubenswrapper[4772]: E1122 12:16:49.420186 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:16:50 crc kubenswrapper[4772]: I1122 12:16:50.911520 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-2zvhj" event={"ID":"623bfb17-7669-4a36-bdf9-baae1d6afbf9","Type":"ContainerStarted","Data":"0be1975b2c5567c1abb0abc2daeb6dae6f164909f00b321aef77fe7b61a22ccf"} Nov 22 12:16:52 crc kubenswrapper[4772]: I1122 12:16:52.935732 4772 generic.go:334] "Generic (PLEG): container finished" podID="623bfb17-7669-4a36-bdf9-baae1d6afbf9" containerID="0be1975b2c5567c1abb0abc2daeb6dae6f164909f00b321aef77fe7b61a22ccf" exitCode=0 Nov 22 12:16:52 crc kubenswrapper[4772]: I1122 12:16:52.935826 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-2zvhj" event={"ID":"623bfb17-7669-4a36-bdf9-baae1d6afbf9","Type":"ContainerDied","Data":"0be1975b2c5567c1abb0abc2daeb6dae6f164909f00b321aef77fe7b61a22ccf"} Nov 22 12:16:53 crc kubenswrapper[4772]: I1122 12:16:53.720179 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-db-sync-rxg9k"] Nov 22 12:16:53 crc kubenswrapper[4772]: I1122 12:16:53.725496 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-sync-rxg9k" Nov 22 12:16:53 crc kubenswrapper[4772]: I1122 12:16:53.728655 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-scripts" Nov 22 12:16:53 crc kubenswrapper[4772]: I1122 12:16:53.736277 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-sync-rxg9k"] Nov 22 12:16:53 crc kubenswrapper[4772]: I1122 12:16:53.844478 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81eb4121-4766-421d-9d21-8aa5a8801dc8-config-data\") pod \"octavia-db-sync-rxg9k\" (UID: \"81eb4121-4766-421d-9d21-8aa5a8801dc8\") " pod="openstack/octavia-db-sync-rxg9k" Nov 22 12:16:53 crc kubenswrapper[4772]: I1122 12:16:53.844535 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81eb4121-4766-421d-9d21-8aa5a8801dc8-combined-ca-bundle\") pod \"octavia-db-sync-rxg9k\" (UID: \"81eb4121-4766-421d-9d21-8aa5a8801dc8\") " pod="openstack/octavia-db-sync-rxg9k" Nov 22 12:16:53 crc kubenswrapper[4772]: I1122 12:16:53.844587 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81eb4121-4766-421d-9d21-8aa5a8801dc8-scripts\") pod \"octavia-db-sync-rxg9k\" (UID: \"81eb4121-4766-421d-9d21-8aa5a8801dc8\") " pod="openstack/octavia-db-sync-rxg9k" Nov 22 12:16:53 crc kubenswrapper[4772]: I1122 12:16:53.844621 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/81eb4121-4766-421d-9d21-8aa5a8801dc8-config-data-merged\") pod \"octavia-db-sync-rxg9k\" (UID: \"81eb4121-4766-421d-9d21-8aa5a8801dc8\") " pod="openstack/octavia-db-sync-rxg9k" Nov 22 12:16:53 crc kubenswrapper[4772]: I1122 12:16:53.947062 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81eb4121-4766-421d-9d21-8aa5a8801dc8-config-data\") pod \"octavia-db-sync-rxg9k\" (UID: \"81eb4121-4766-421d-9d21-8aa5a8801dc8\") " pod="openstack/octavia-db-sync-rxg9k" Nov 22 12:16:53 crc kubenswrapper[4772]: I1122 12:16:53.947117 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81eb4121-4766-421d-9d21-8aa5a8801dc8-combined-ca-bundle\") pod \"octavia-db-sync-rxg9k\" (UID: \"81eb4121-4766-421d-9d21-8aa5a8801dc8\") " pod="openstack/octavia-db-sync-rxg9k" Nov 22 12:16:53 crc kubenswrapper[4772]: I1122 12:16:53.947157 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81eb4121-4766-421d-9d21-8aa5a8801dc8-scripts\") pod \"octavia-db-sync-rxg9k\" (UID: \"81eb4121-4766-421d-9d21-8aa5a8801dc8\") " pod="openstack/octavia-db-sync-rxg9k" Nov 22 12:16:53 crc kubenswrapper[4772]: I1122 12:16:53.947182 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/81eb4121-4766-421d-9d21-8aa5a8801dc8-config-data-merged\") pod \"octavia-db-sync-rxg9k\" (UID: \"81eb4121-4766-421d-9d21-8aa5a8801dc8\") " pod="openstack/octavia-db-sync-rxg9k" Nov 22 12:16:53 crc kubenswrapper[4772]: I1122 12:16:53.947725 4772 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/81eb4121-4766-421d-9d21-8aa5a8801dc8-config-data-merged\") pod \"octavia-db-sync-rxg9k\" (UID: \"81eb4121-4766-421d-9d21-8aa5a8801dc8\") " pod="openstack/octavia-db-sync-rxg9k" Nov 22 12:16:53 crc kubenswrapper[4772]: I1122 12:16:53.969361 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81eb4121-4766-421d-9d21-8aa5a8801dc8-scripts\") pod \"octavia-db-sync-rxg9k\" (UID: \"81eb4121-4766-421d-9d21-8aa5a8801dc8\") " pod="openstack/octavia-db-sync-rxg9k" Nov 22 12:16:53 crc kubenswrapper[4772]: I1122 12:16:53.969971 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81eb4121-4766-421d-9d21-8aa5a8801dc8-config-data\") pod \"octavia-db-sync-rxg9k\" (UID: \"81eb4121-4766-421d-9d21-8aa5a8801dc8\") " pod="openstack/octavia-db-sync-rxg9k" Nov 22 12:16:53 crc kubenswrapper[4772]: I1122 12:16:53.974290 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81eb4121-4766-421d-9d21-8aa5a8801dc8-combined-ca-bundle\") pod \"octavia-db-sync-rxg9k\" (UID: \"81eb4121-4766-421d-9d21-8aa5a8801dc8\") " pod="openstack/octavia-db-sync-rxg9k" Nov 22 12:16:54 crc kubenswrapper[4772]: I1122 12:16:54.054453 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-sync-rxg9k" Nov 22 12:16:54 crc kubenswrapper[4772]: I1122 12:16:54.599824 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-sync-rxg9k"] Nov 22 12:16:54 crc kubenswrapper[4772]: W1122 12:16:54.609900 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81eb4121_4766_421d_9d21_8aa5a8801dc8.slice/crio-81ba4bf3a57a32084328bef0dc3c987c8851df56e22e76109adaf1636119aaaf WatchSource:0}: Error finding container 81ba4bf3a57a32084328bef0dc3c987c8851df56e22e76109adaf1636119aaaf: Status 404 returned error can't find the container with id 81ba4bf3a57a32084328bef0dc3c987c8851df56e22e76109adaf1636119aaaf Nov 22 12:16:54 crc kubenswrapper[4772]: I1122 12:16:54.964451 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-rxg9k" event={"ID":"81eb4121-4766-421d-9d21-8aa5a8801dc8","Type":"ContainerStarted","Data":"81ba4bf3a57a32084328bef0dc3c987c8851df56e22e76109adaf1636119aaaf"} Nov 22 12:16:59 crc kubenswrapper[4772]: I1122 12:16:59.933329 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-housekeeping-mlsks"] Nov 22 12:16:59 crc kubenswrapper[4772]: I1122 12:16:59.937223 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-housekeeping-mlsks" Nov 22 12:16:59 crc kubenswrapper[4772]: I1122 12:16:59.945258 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-housekeeping-scripts" Nov 22 12:16:59 crc kubenswrapper[4772]: I1122 12:16:59.945377 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-certs-secret" Nov 22 12:16:59 crc kubenswrapper[4772]: I1122 12:16:59.945397 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-housekeeping-config-data" Nov 22 12:16:59 crc kubenswrapper[4772]: I1122 12:16:59.953070 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-housekeeping-mlsks"] Nov 22 12:17:00 crc kubenswrapper[4772]: I1122 12:17:00.013698 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c03eae43-9823-42ce-b3f3-2f437e08fd71-combined-ca-bundle\") pod \"octavia-housekeeping-mlsks\" (UID: \"c03eae43-9823-42ce-b3f3-2f437e08fd71\") " pod="openstack/octavia-housekeeping-mlsks" Nov 22 12:17:00 crc kubenswrapper[4772]: I1122 12:17:00.014349 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/c03eae43-9823-42ce-b3f3-2f437e08fd71-hm-ports\") pod \"octavia-housekeeping-mlsks\" (UID: \"c03eae43-9823-42ce-b3f3-2f437e08fd71\") " pod="openstack/octavia-housekeeping-mlsks" Nov 22 12:17:00 crc kubenswrapper[4772]: I1122 12:17:00.014429 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c03eae43-9823-42ce-b3f3-2f437e08fd71-scripts\") pod \"octavia-housekeeping-mlsks\" (UID: \"c03eae43-9823-42ce-b3f3-2f437e08fd71\") " pod="openstack/octavia-housekeeping-mlsks" Nov 22 12:17:00 crc kubenswrapper[4772]: I1122 12:17:00.014500 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/c03eae43-9823-42ce-b3f3-2f437e08fd71-amphora-certs\") pod \"octavia-housekeeping-mlsks\" (UID: \"c03eae43-9823-42ce-b3f3-2f437e08fd71\") " pod="openstack/octavia-housekeeping-mlsks" Nov 22 12:17:00 crc kubenswrapper[4772]: I1122 12:17:00.014592 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c03eae43-9823-42ce-b3f3-2f437e08fd71-config-data\") pod \"octavia-housekeeping-mlsks\" (UID: \"c03eae43-9823-42ce-b3f3-2f437e08fd71\") " pod="openstack/octavia-housekeeping-mlsks" Nov 22 12:17:00 crc kubenswrapper[4772]: I1122 12:17:00.014780 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/c03eae43-9823-42ce-b3f3-2f437e08fd71-config-data-merged\") pod \"octavia-housekeeping-mlsks\" (UID: \"c03eae43-9823-42ce-b3f3-2f437e08fd71\") " pod="openstack/octavia-housekeeping-mlsks" Nov 22 12:17:00 crc kubenswrapper[4772]: I1122 12:17:00.120114 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c03eae43-9823-42ce-b3f3-2f437e08fd71-combined-ca-bundle\") pod \"octavia-housekeeping-mlsks\" (UID: \"c03eae43-9823-42ce-b3f3-2f437e08fd71\") " pod="openstack/octavia-housekeeping-mlsks" Nov 22 12:17:00 crc 
kubenswrapper[4772]: I1122 12:17:00.120241 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/c03eae43-9823-42ce-b3f3-2f437e08fd71-hm-ports\") pod \"octavia-housekeeping-mlsks\" (UID: \"c03eae43-9823-42ce-b3f3-2f437e08fd71\") " pod="openstack/octavia-housekeeping-mlsks" Nov 22 12:17:00 crc kubenswrapper[4772]: I1122 12:17:00.120329 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c03eae43-9823-42ce-b3f3-2f437e08fd71-scripts\") pod \"octavia-housekeeping-mlsks\" (UID: \"c03eae43-9823-42ce-b3f3-2f437e08fd71\") " pod="openstack/octavia-housekeeping-mlsks" Nov 22 12:17:00 crc kubenswrapper[4772]: I1122 12:17:00.120407 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/c03eae43-9823-42ce-b3f3-2f437e08fd71-amphora-certs\") pod \"octavia-housekeeping-mlsks\" (UID: \"c03eae43-9823-42ce-b3f3-2f437e08fd71\") " pod="openstack/octavia-housekeeping-mlsks" Nov 22 12:17:00 crc kubenswrapper[4772]: I1122 12:17:00.120444 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c03eae43-9823-42ce-b3f3-2f437e08fd71-config-data\") pod \"octavia-housekeeping-mlsks\" (UID: \"c03eae43-9823-42ce-b3f3-2f437e08fd71\") " pod="openstack/octavia-housekeeping-mlsks" Nov 22 12:17:00 crc kubenswrapper[4772]: I1122 12:17:00.120554 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/c03eae43-9823-42ce-b3f3-2f437e08fd71-config-data-merged\") pod \"octavia-housekeeping-mlsks\" (UID: \"c03eae43-9823-42ce-b3f3-2f437e08fd71\") " pod="openstack/octavia-housekeeping-mlsks" Nov 22 12:17:00 crc kubenswrapper[4772]: I1122 12:17:00.121489 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/c03eae43-9823-42ce-b3f3-2f437e08fd71-config-data-merged\") pod \"octavia-housekeeping-mlsks\" (UID: \"c03eae43-9823-42ce-b3f3-2f437e08fd71\") " pod="openstack/octavia-housekeeping-mlsks" Nov 22 12:17:00 crc kubenswrapper[4772]: I1122 12:17:00.122378 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/c03eae43-9823-42ce-b3f3-2f437e08fd71-hm-ports\") pod \"octavia-housekeeping-mlsks\" (UID: \"c03eae43-9823-42ce-b3f3-2f437e08fd71\") " pod="openstack/octavia-housekeeping-mlsks" Nov 22 12:17:00 crc kubenswrapper[4772]: I1122 12:17:00.129088 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c03eae43-9823-42ce-b3f3-2f437e08fd71-combined-ca-bundle\") pod \"octavia-housekeeping-mlsks\" (UID: \"c03eae43-9823-42ce-b3f3-2f437e08fd71\") " pod="openstack/octavia-housekeeping-mlsks" Nov 22 12:17:00 crc kubenswrapper[4772]: I1122 12:17:00.129119 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c03eae43-9823-42ce-b3f3-2f437e08fd71-config-data\") pod \"octavia-housekeeping-mlsks\" (UID: \"c03eae43-9823-42ce-b3f3-2f437e08fd71\") " pod="openstack/octavia-housekeeping-mlsks" Nov 22 12:17:00 crc kubenswrapper[4772]: I1122 12:17:00.129144 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/c03eae43-9823-42ce-b3f3-2f437e08fd71-scripts\") pod \"octavia-housekeeping-mlsks\" (UID: \"c03eae43-9823-42ce-b3f3-2f437e08fd71\") " pod="openstack/octavia-housekeeping-mlsks" Nov 22 12:17:00 crc kubenswrapper[4772]: I1122 12:17:00.129795 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/c03eae43-9823-42ce-b3f3-2f437e08fd71-amphora-certs\") pod \"octavia-housekeeping-mlsks\" (UID: \"c03eae43-9823-42ce-b3f3-2f437e08fd71\") " pod="openstack/octavia-housekeeping-mlsks" Nov 22 12:17:00 crc kubenswrapper[4772]: I1122 12:17:00.263772 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-housekeeping-mlsks" Nov 22 12:17:01 crc kubenswrapper[4772]: I1122 12:17:01.425770 4772 scope.go:117] "RemoveContainer" containerID="0acc780374533c91fb6e94ed6fa4eb88f1ad8ecfc715db40059fc9377d70e080" Nov 22 12:17:01 crc kubenswrapper[4772]: E1122 12:17:01.426577 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:17:01 crc kubenswrapper[4772]: I1122 12:17:01.962960 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-worker-7w6cx"] Nov 22 12:17:01 crc kubenswrapper[4772]: I1122 12:17:01.966304 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-worker-7w6cx" Nov 22 12:17:01 crc kubenswrapper[4772]: I1122 12:17:01.973334 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-worker-config-data" Nov 22 12:17:01 crc kubenswrapper[4772]: I1122 12:17:01.973694 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-worker-scripts" Nov 22 12:17:01 crc kubenswrapper[4772]: I1122 12:17:01.995864 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-worker-7w6cx"] Nov 22 12:17:02 crc kubenswrapper[4772]: I1122 12:17:02.075107 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/281d85f9-b24f-4264-a5a1-6cf0f9d24f18-combined-ca-bundle\") pod \"octavia-worker-7w6cx\" (UID: \"281d85f9-b24f-4264-a5a1-6cf0f9d24f18\") " pod="openstack/octavia-worker-7w6cx" Nov 22 12:17:02 crc kubenswrapper[4772]: I1122 12:17:02.075727 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/281d85f9-b24f-4264-a5a1-6cf0f9d24f18-config-data-merged\") pod \"octavia-worker-7w6cx\" (UID: \"281d85f9-b24f-4264-a5a1-6cf0f9d24f18\") " pod="openstack/octavia-worker-7w6cx" Nov 22 12:17:02 crc kubenswrapper[4772]: I1122 12:17:02.075768 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/281d85f9-b24f-4264-a5a1-6cf0f9d24f18-config-data\") pod \"octavia-worker-7w6cx\" (UID: \"281d85f9-b24f-4264-a5a1-6cf0f9d24f18\") " pod="openstack/octavia-worker-7w6cx" Nov 22 12:17:02 crc kubenswrapper[4772]: I1122 12:17:02.075841 4772 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/281d85f9-b24f-4264-a5a1-6cf0f9d24f18-hm-ports\") pod \"octavia-worker-7w6cx\" (UID: \"281d85f9-b24f-4264-a5a1-6cf0f9d24f18\") " pod="openstack/octavia-worker-7w6cx" Nov 22 12:17:02 crc kubenswrapper[4772]: I1122 12:17:02.075911 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/281d85f9-b24f-4264-a5a1-6cf0f9d24f18-amphora-certs\") pod \"octavia-worker-7w6cx\" (UID: \"281d85f9-b24f-4264-a5a1-6cf0f9d24f18\") " pod="openstack/octavia-worker-7w6cx" Nov 22 12:17:02 crc kubenswrapper[4772]: I1122 12:17:02.075957 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/281d85f9-b24f-4264-a5a1-6cf0f9d24f18-scripts\") pod \"octavia-worker-7w6cx\" (UID: \"281d85f9-b24f-4264-a5a1-6cf0f9d24f18\") " pod="openstack/octavia-worker-7w6cx" Nov 22 12:17:02 crc kubenswrapper[4772]: I1122 12:17:02.177630 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/281d85f9-b24f-4264-a5a1-6cf0f9d24f18-amphora-certs\") pod \"octavia-worker-7w6cx\" (UID: \"281d85f9-b24f-4264-a5a1-6cf0f9d24f18\") " pod="openstack/octavia-worker-7w6cx" Nov 22 12:17:02 crc kubenswrapper[4772]: I1122 12:17:02.177680 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/281d85f9-b24f-4264-a5a1-6cf0f9d24f18-scripts\") pod \"octavia-worker-7w6cx\" (UID: \"281d85f9-b24f-4264-a5a1-6cf0f9d24f18\") " pod="openstack/octavia-worker-7w6cx" Nov 22 12:17:02 crc kubenswrapper[4772]: I1122 12:17:02.177745 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/281d85f9-b24f-4264-a5a1-6cf0f9d24f18-combined-ca-bundle\") pod \"octavia-worker-7w6cx\" (UID: \"281d85f9-b24f-4264-a5a1-6cf0f9d24f18\") " pod="openstack/octavia-worker-7w6cx" Nov 22 12:17:02 crc kubenswrapper[4772]: I1122 12:17:02.178728 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/281d85f9-b24f-4264-a5a1-6cf0f9d24f18-config-data-merged\") pod \"octavia-worker-7w6cx\" (UID: \"281d85f9-b24f-4264-a5a1-6cf0f9d24f18\") " pod="openstack/octavia-worker-7w6cx" Nov 22 12:17:02 crc kubenswrapper[4772]: I1122 12:17:02.178962 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/281d85f9-b24f-4264-a5a1-6cf0f9d24f18-config-data-merged\") pod \"octavia-worker-7w6cx\" (UID: \"281d85f9-b24f-4264-a5a1-6cf0f9d24f18\") " pod="openstack/octavia-worker-7w6cx" Nov 22 12:17:02 crc kubenswrapper[4772]: I1122 12:17:02.179021 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/281d85f9-b24f-4264-a5a1-6cf0f9d24f18-config-data\") pod \"octavia-worker-7w6cx\" (UID: \"281d85f9-b24f-4264-a5a1-6cf0f9d24f18\") " pod="openstack/octavia-worker-7w6cx" Nov 22 12:17:02 crc kubenswrapper[4772]: I1122 12:17:02.179092 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/281d85f9-b24f-4264-a5a1-6cf0f9d24f18-hm-ports\") pod \"octavia-worker-7w6cx\" (UID: 
\"281d85f9-b24f-4264-a5a1-6cf0f9d24f18\") " pod="openstack/octavia-worker-7w6cx" Nov 22 12:17:02 crc kubenswrapper[4772]: I1122 12:17:02.180222 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/281d85f9-b24f-4264-a5a1-6cf0f9d24f18-hm-ports\") pod \"octavia-worker-7w6cx\" (UID: \"281d85f9-b24f-4264-a5a1-6cf0f9d24f18\") " pod="openstack/octavia-worker-7w6cx" Nov 22 12:17:02 crc kubenswrapper[4772]: I1122 12:17:02.189025 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/281d85f9-b24f-4264-a5a1-6cf0f9d24f18-config-data\") pod \"octavia-worker-7w6cx\" (UID: \"281d85f9-b24f-4264-a5a1-6cf0f9d24f18\") " pod="openstack/octavia-worker-7w6cx" Nov 22 12:17:02 crc kubenswrapper[4772]: I1122 12:17:02.189246 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/281d85f9-b24f-4264-a5a1-6cf0f9d24f18-scripts\") pod \"octavia-worker-7w6cx\" (UID: \"281d85f9-b24f-4264-a5a1-6cf0f9d24f18\") " pod="openstack/octavia-worker-7w6cx" Nov 22 12:17:02 crc kubenswrapper[4772]: I1122 12:17:02.193598 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/281d85f9-b24f-4264-a5a1-6cf0f9d24f18-combined-ca-bundle\") pod \"octavia-worker-7w6cx\" (UID: \"281d85f9-b24f-4264-a5a1-6cf0f9d24f18\") " pod="openstack/octavia-worker-7w6cx" Nov 22 12:17:02 crc kubenswrapper[4772]: I1122 12:17:02.194500 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/281d85f9-b24f-4264-a5a1-6cf0f9d24f18-amphora-certs\") pod \"octavia-worker-7w6cx\" (UID: \"281d85f9-b24f-4264-a5a1-6cf0f9d24f18\") " pod="openstack/octavia-worker-7w6cx" Nov 22 12:17:02 crc kubenswrapper[4772]: I1122 12:17:02.357404 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-worker-7w6cx" Nov 22 12:17:02 crc kubenswrapper[4772]: E1122 12:17:02.542281 4772 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81eb4121_4766_421d_9d21_8aa5a8801dc8.slice/crio-conmon-8dd4aa32c049b4473d15bee755251e231848ef358d0fca8576bc90f73db5e6e4.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81eb4121_4766_421d_9d21_8aa5a8801dc8.slice/crio-8dd4aa32c049b4473d15bee755251e231848ef358d0fca8576bc90f73db5e6e4.scope\": RecentStats: unable to find data in memory cache]" Nov 22 12:17:02 crc kubenswrapper[4772]: I1122 12:17:02.596416 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-housekeeping-mlsks"] Nov 22 12:17:02 crc kubenswrapper[4772]: I1122 12:17:02.973648 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-worker-7w6cx"] Nov 22 12:17:03 crc kubenswrapper[4772]: I1122 12:17:03.087980 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-2zvhj" event={"ID":"623bfb17-7669-4a36-bdf9-baae1d6afbf9","Type":"ContainerStarted","Data":"481922d87809dbe8f67055f7f509b500b8baca89a98d5187b3c27b48bd6aaf58"} Nov 22 12:17:03 crc kubenswrapper[4772]: I1122 12:17:03.088821 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-rsyslog-2zvhj" Nov 22 12:17:03 crc kubenswrapper[4772]: I1122 12:17:03.091642 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-mlsks" event={"ID":"c03eae43-9823-42ce-b3f3-2f437e08fd71","Type":"ContainerStarted","Data":"614dd3b4d278e99528bd01ba8883d86f08c63069252e77b4839f9e248a8e9889"} Nov 22 12:17:03 crc kubenswrapper[4772]: I1122 12:17:03.094346 4772 generic.go:334] "Generic (PLEG): container finished" podID="81eb4121-4766-421d-9d21-8aa5a8801dc8" containerID="8dd4aa32c049b4473d15bee755251e231848ef358d0fca8576bc90f73db5e6e4" exitCode=0 Nov 22 12:17:03 crc kubenswrapper[4772]: I1122 12:17:03.094443 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-rxg9k" event={"ID":"81eb4121-4766-421d-9d21-8aa5a8801dc8","Type":"ContainerDied","Data":"8dd4aa32c049b4473d15bee755251e231848ef358d0fca8576bc90f73db5e6e4"} Nov 22 12:17:03 crc kubenswrapper[4772]: I1122 12:17:03.097791 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-vwllg" event={"ID":"695c0702-b8d4-4bdf-8268-3e5898f3d789","Type":"ContainerStarted","Data":"f0d82a4378f9891e0a269e9293bbf16bc3979f17b4b96f725be080e62c9f49e1"} Nov 22 12:17:03 crc kubenswrapper[4772]: I1122 12:17:03.099393 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-7w6cx" event={"ID":"281d85f9-b24f-4264-a5a1-6cf0f9d24f18","Type":"ContainerStarted","Data":"a70afca130683e2b7c1b13d63caf7f1683509a49e43b894ecf35bf86fc8565c0"} Nov 22 12:17:03 crc kubenswrapper[4772]: I1122 12:17:03.115005 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-rsyslog-2zvhj" podStartSLOduration=2.420710562 podStartE2EDuration="16.114968421s" podCreationTimestamp="2025-11-22 12:16:47 +0000 UTC" firstStartedPulling="2025-11-22 12:16:48.166434284 +0000 UTC m=+5928.405878778" lastFinishedPulling="2025-11-22 12:17:01.860692133 +0000 UTC m=+5942.100136637" observedRunningTime="2025-11-22 12:17:03.113273489 +0000 UTC 
m=+5943.352717983" watchObservedRunningTime="2025-11-22 12:17:03.114968421 +0000 UTC m=+5943.354412955" Nov 22 12:17:04 crc kubenswrapper[4772]: I1122 12:17:04.109900 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-rxg9k" event={"ID":"81eb4121-4766-421d-9d21-8aa5a8801dc8","Type":"ContainerStarted","Data":"8defb1f89090c0e2ebeb6977b0df5f2356761ddd4bf318252a98151a2143d55d"} Nov 22 12:17:04 crc kubenswrapper[4772]: I1122 12:17:04.113770 4772 generic.go:334] "Generic (PLEG): container finished" podID="695c0702-b8d4-4bdf-8268-3e5898f3d789" containerID="f0d82a4378f9891e0a269e9293bbf16bc3979f17b4b96f725be080e62c9f49e1" exitCode=0 Nov 22 12:17:04 crc kubenswrapper[4772]: I1122 12:17:04.114008 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-vwllg" event={"ID":"695c0702-b8d4-4bdf-8268-3e5898f3d789","Type":"ContainerDied","Data":"f0d82a4378f9891e0a269e9293bbf16bc3979f17b4b96f725be080e62c9f49e1"} Nov 22 12:17:04 crc kubenswrapper[4772]: I1122 12:17:04.137659 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-db-sync-rxg9k" podStartSLOduration=11.137639204 podStartE2EDuration="11.137639204s" podCreationTimestamp="2025-11-22 12:16:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:17:04.125674466 +0000 UTC m=+5944.365119060" watchObservedRunningTime="2025-11-22 12:17:04.137639204 +0000 UTC m=+5944.377083698" Nov 22 12:17:07 crc kubenswrapper[4772]: I1122 12:17:07.153152 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-mlsks" event={"ID":"c03eae43-9823-42ce-b3f3-2f437e08fd71","Type":"ContainerStarted","Data":"d9f4a9321446ecb41cc17cafe228f3eb7ac4791780e00c2154979f1c3c7069d0"} Nov 22 12:17:07 crc kubenswrapper[4772]: I1122 12:17:07.157554 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-7w6cx" event={"ID":"281d85f9-b24f-4264-a5a1-6cf0f9d24f18","Type":"ContainerStarted","Data":"18d2c3644d1ae7bc39a0596206622c6a570977eb18a98af29b63d750b89475db"} Nov 22 12:17:09 crc kubenswrapper[4772]: I1122 12:17:09.178745 4772 generic.go:334] "Generic (PLEG): container finished" podID="c03eae43-9823-42ce-b3f3-2f437e08fd71" containerID="d9f4a9321446ecb41cc17cafe228f3eb7ac4791780e00c2154979f1c3c7069d0" exitCode=0 Nov 22 12:17:09 crc kubenswrapper[4772]: I1122 12:17:09.178842 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-mlsks" event={"ID":"c03eae43-9823-42ce-b3f3-2f437e08fd71","Type":"ContainerDied","Data":"d9f4a9321446ecb41cc17cafe228f3eb7ac4791780e00c2154979f1c3c7069d0"} Nov 22 12:17:09 crc kubenswrapper[4772]: I1122 12:17:09.181642 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-vwllg" event={"ID":"695c0702-b8d4-4bdf-8268-3e5898f3d789","Type":"ContainerStarted","Data":"c248d3292f5bfeb3da0b1b2761d7c89ecaa0a6c5c9ca1c8e64fdfadbb0f668b3"} Nov 22 12:17:09 crc kubenswrapper[4772]: I1122 12:17:09.183082 4772 generic.go:334] "Generic (PLEG): container finished" podID="281d85f9-b24f-4264-a5a1-6cf0f9d24f18" containerID="18d2c3644d1ae7bc39a0596206622c6a570977eb18a98af29b63d750b89475db" exitCode=0 Nov 22 12:17:09 crc kubenswrapper[4772]: I1122 12:17:09.183127 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-7w6cx" 
event={"ID":"281d85f9-b24f-4264-a5a1-6cf0f9d24f18","Type":"ContainerDied","Data":"18d2c3644d1ae7bc39a0596206622c6a570977eb18a98af29b63d750b89475db"} Nov 22 12:17:09 crc kubenswrapper[4772]: I1122 12:17:09.250541 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-image-upload-59f8cff499-vwllg" podStartSLOduration=2.831763708 podStartE2EDuration="22.250516454s" podCreationTimestamp="2025-11-22 12:16:47 +0000 UTC" firstStartedPulling="2025-11-22 12:16:48.863332736 +0000 UTC m=+5929.102777230" lastFinishedPulling="2025-11-22 12:17:08.282085482 +0000 UTC m=+5948.521529976" observedRunningTime="2025-11-22 12:17:09.211965615 +0000 UTC m=+5949.451410129" watchObservedRunningTime="2025-11-22 12:17:09.250516454 +0000 UTC m=+5949.489960968" Nov 22 12:17:10 crc kubenswrapper[4772]: I1122 12:17:10.198934 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-mlsks" event={"ID":"c03eae43-9823-42ce-b3f3-2f437e08fd71","Type":"ContainerStarted","Data":"1453035bd0d6a26533f1e31f72c706bea3fb882d2c105509f9d3e732f1db437e"} Nov 22 12:17:10 crc kubenswrapper[4772]: I1122 12:17:10.200020 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-housekeeping-mlsks" Nov 22 12:17:10 crc kubenswrapper[4772]: I1122 12:17:10.201077 4772 generic.go:334] "Generic (PLEG): container finished" podID="81eb4121-4766-421d-9d21-8aa5a8801dc8" containerID="8defb1f89090c0e2ebeb6977b0df5f2356761ddd4bf318252a98151a2143d55d" exitCode=0 Nov 22 12:17:10 crc kubenswrapper[4772]: I1122 12:17:10.201139 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-rxg9k" event={"ID":"81eb4121-4766-421d-9d21-8aa5a8801dc8","Type":"ContainerDied","Data":"8defb1f89090c0e2ebeb6977b0df5f2356761ddd4bf318252a98151a2143d55d"} Nov 22 12:17:10 crc kubenswrapper[4772]: I1122 12:17:10.204488 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-7w6cx" event={"ID":"281d85f9-b24f-4264-a5a1-6cf0f9d24f18","Type":"ContainerStarted","Data":"1076ed34402ff3e95e1991b296b37ea89f46d95ad5fb6e2119bd8284a59a8da3"} Nov 22 12:17:10 crc kubenswrapper[4772]: I1122 12:17:10.204818 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-worker-7w6cx" Nov 22 12:17:10 crc kubenswrapper[4772]: I1122 12:17:10.227805 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-housekeeping-mlsks" podStartSLOduration=8.481903028 podStartE2EDuration="11.227784726s" podCreationTimestamp="2025-11-22 12:16:59 +0000 UTC" firstStartedPulling="2025-11-22 12:17:02.607006984 +0000 UTC m=+5942.846451478" lastFinishedPulling="2025-11-22 12:17:05.352888682 +0000 UTC m=+5945.592333176" observedRunningTime="2025-11-22 12:17:10.215282034 +0000 UTC m=+5950.454726548" watchObservedRunningTime="2025-11-22 12:17:10.227784726 +0000 UTC m=+5950.467229240" Nov 22 12:17:10 crc kubenswrapper[4772]: I1122 12:17:10.240829 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-worker-7w6cx" podStartSLOduration=6.788909212 podStartE2EDuration="9.24080621s" podCreationTimestamp="2025-11-22 12:17:01 +0000 UTC" firstStartedPulling="2025-11-22 12:17:02.988601625 +0000 UTC m=+5943.228046119" lastFinishedPulling="2025-11-22 12:17:05.440498623 +0000 UTC m=+5945.679943117" observedRunningTime="2025-11-22 12:17:10.232617346 +0000 UTC m=+5950.472061860" watchObservedRunningTime="2025-11-22 12:17:10.24080621 +0000 UTC m=+5950.480250714" Nov 
22 12:17:11 crc kubenswrapper[4772]: I1122 12:17:11.610845 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-sync-rxg9k" Nov 22 12:17:11 crc kubenswrapper[4772]: I1122 12:17:11.792224 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81eb4121-4766-421d-9d21-8aa5a8801dc8-config-data\") pod \"81eb4121-4766-421d-9d21-8aa5a8801dc8\" (UID: \"81eb4121-4766-421d-9d21-8aa5a8801dc8\") " Nov 22 12:17:11 crc kubenswrapper[4772]: I1122 12:17:11.792421 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81eb4121-4766-421d-9d21-8aa5a8801dc8-combined-ca-bundle\") pod \"81eb4121-4766-421d-9d21-8aa5a8801dc8\" (UID: \"81eb4121-4766-421d-9d21-8aa5a8801dc8\") " Nov 22 12:17:11 crc kubenswrapper[4772]: I1122 12:17:11.792474 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81eb4121-4766-421d-9d21-8aa5a8801dc8-scripts\") pod \"81eb4121-4766-421d-9d21-8aa5a8801dc8\" (UID: \"81eb4121-4766-421d-9d21-8aa5a8801dc8\") " Nov 22 12:17:11 crc kubenswrapper[4772]: I1122 12:17:11.792553 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/81eb4121-4766-421d-9d21-8aa5a8801dc8-config-data-merged\") pod \"81eb4121-4766-421d-9d21-8aa5a8801dc8\" (UID: \"81eb4121-4766-421d-9d21-8aa5a8801dc8\") " Nov 22 12:17:11 crc kubenswrapper[4772]: I1122 12:17:11.806781 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81eb4121-4766-421d-9d21-8aa5a8801dc8-scripts" (OuterVolumeSpecName: "scripts") pod "81eb4121-4766-421d-9d21-8aa5a8801dc8" (UID: "81eb4121-4766-421d-9d21-8aa5a8801dc8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:17:11 crc kubenswrapper[4772]: I1122 12:17:11.810149 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81eb4121-4766-421d-9d21-8aa5a8801dc8-config-data" (OuterVolumeSpecName: "config-data") pod "81eb4121-4766-421d-9d21-8aa5a8801dc8" (UID: "81eb4121-4766-421d-9d21-8aa5a8801dc8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:17:11 crc kubenswrapper[4772]: I1122 12:17:11.820847 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81eb4121-4766-421d-9d21-8aa5a8801dc8-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "81eb4121-4766-421d-9d21-8aa5a8801dc8" (UID: "81eb4121-4766-421d-9d21-8aa5a8801dc8"). InnerVolumeSpecName "config-data-merged". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 12:17:11 crc kubenswrapper[4772]: I1122 12:17:11.825131 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81eb4121-4766-421d-9d21-8aa5a8801dc8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "81eb4121-4766-421d-9d21-8aa5a8801dc8" (UID: "81eb4121-4766-421d-9d21-8aa5a8801dc8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:17:11 crc kubenswrapper[4772]: I1122 12:17:11.895370 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81eb4121-4766-421d-9d21-8aa5a8801dc8-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 12:17:11 crc kubenswrapper[4772]: I1122 12:17:11.895422 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81eb4121-4766-421d-9d21-8aa5a8801dc8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 12:17:11 crc kubenswrapper[4772]: I1122 12:17:11.895446 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81eb4121-4766-421d-9d21-8aa5a8801dc8-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 12:17:11 crc kubenswrapper[4772]: I1122 12:17:11.895467 4772 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/81eb4121-4766-421d-9d21-8aa5a8801dc8-config-data-merged\") on node \"crc\" DevicePath \"\"" Nov 22 12:17:12 crc kubenswrapper[4772]: I1122 12:17:12.236635 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-rxg9k" event={"ID":"81eb4121-4766-421d-9d21-8aa5a8801dc8","Type":"ContainerDied","Data":"81ba4bf3a57a32084328bef0dc3c987c8851df56e22e76109adaf1636119aaaf"} Nov 22 12:17:12 crc kubenswrapper[4772]: I1122 12:17:12.237233 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81ba4bf3a57a32084328bef0dc3c987c8851df56e22e76109adaf1636119aaaf" Nov 22 12:17:12 crc kubenswrapper[4772]: I1122 12:17:12.236741 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-sync-rxg9k" Nov 22 12:17:13 crc kubenswrapper[4772]: I1122 12:17:13.413970 4772 scope.go:117] "RemoveContainer" containerID="0acc780374533c91fb6e94ed6fa4eb88f1ad8ecfc715db40059fc9377d70e080" Nov 22 12:17:13 crc kubenswrapper[4772]: E1122 12:17:13.414405 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:17:15 crc kubenswrapper[4772]: I1122 12:17:15.297251 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-housekeeping-mlsks" Nov 22 12:17:17 crc kubenswrapper[4772]: I1122 12:17:17.388734 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-worker-7w6cx" Nov 22 12:17:17 crc kubenswrapper[4772]: I1122 12:17:17.633279 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-rsyslog-2zvhj" Nov 22 12:17:25 crc kubenswrapper[4772]: I1122 12:17:25.040890 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-7k2t5"] Nov 22 12:17:25 crc kubenswrapper[4772]: I1122 12:17:25.052577 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-7k2t5"] Nov 22 12:17:25 crc kubenswrapper[4772]: I1122 12:17:25.428364 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ba87c6f-733e-4710-8b04-8267444835a7" 
path="/var/lib/kubelet/pods/0ba87c6f-733e-4710-8b04-8267444835a7/volumes" Nov 22 12:17:28 crc kubenswrapper[4772]: I1122 12:17:28.413297 4772 scope.go:117] "RemoveContainer" containerID="0acc780374533c91fb6e94ed6fa4eb88f1ad8ecfc715db40059fc9377d70e080" Nov 22 12:17:28 crc kubenswrapper[4772]: E1122 12:17:28.414346 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:17:35 crc kubenswrapper[4772]: I1122 12:17:35.938609 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-image-upload-59f8cff499-vwllg"] Nov 22 12:17:35 crc kubenswrapper[4772]: I1122 12:17:35.939476 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/octavia-image-upload-59f8cff499-vwllg" podUID="695c0702-b8d4-4bdf-8268-3e5898f3d789" containerName="octavia-amphora-httpd" containerID="cri-o://c248d3292f5bfeb3da0b1b2761d7c89ecaa0a6c5c9ca1c8e64fdfadbb0f668b3" gracePeriod=30 Nov 22 12:17:36 crc kubenswrapper[4772]: I1122 12:17:36.029321 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-06a5-account-create-pvmft"] Nov 22 12:17:36 crc kubenswrapper[4772]: I1122 12:17:36.042743 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-06a5-account-create-pvmft"] Nov 22 12:17:36 crc kubenswrapper[4772]: I1122 12:17:36.495762 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-image-upload-59f8cff499-vwllg" Nov 22 12:17:36 crc kubenswrapper[4772]: I1122 12:17:36.523596 4772 generic.go:334] "Generic (PLEG): container finished" podID="695c0702-b8d4-4bdf-8268-3e5898f3d789" containerID="c248d3292f5bfeb3da0b1b2761d7c89ecaa0a6c5c9ca1c8e64fdfadbb0f668b3" exitCode=0 Nov 22 12:17:36 crc kubenswrapper[4772]: I1122 12:17:36.523652 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-vwllg" event={"ID":"695c0702-b8d4-4bdf-8268-3e5898f3d789","Type":"ContainerDied","Data":"c248d3292f5bfeb3da0b1b2761d7c89ecaa0a6c5c9ca1c8e64fdfadbb0f668b3"} Nov 22 12:17:36 crc kubenswrapper[4772]: I1122 12:17:36.523696 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-vwllg" event={"ID":"695c0702-b8d4-4bdf-8268-3e5898f3d789","Type":"ContainerDied","Data":"8a1b8f455d4951474d927b43b14b135ee1f709c6de368fe5ee87324d84d03d1b"} Nov 22 12:17:36 crc kubenswrapper[4772]: I1122 12:17:36.523723 4772 scope.go:117] "RemoveContainer" containerID="c248d3292f5bfeb3da0b1b2761d7c89ecaa0a6c5c9ca1c8e64fdfadbb0f668b3" Nov 22 12:17:36 crc kubenswrapper[4772]: I1122 12:17:36.523903 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-59f8cff499-vwllg" Nov 22 12:17:36 crc kubenswrapper[4772]: I1122 12:17:36.554391 4772 scope.go:117] "RemoveContainer" containerID="f0d82a4378f9891e0a269e9293bbf16bc3979f17b4b96f725be080e62c9f49e1" Nov 22 12:17:36 crc kubenswrapper[4772]: I1122 12:17:36.592692 4772 scope.go:117] "RemoveContainer" containerID="c248d3292f5bfeb3da0b1b2761d7c89ecaa0a6c5c9ca1c8e64fdfadbb0f668b3" Nov 22 12:17:36 crc kubenswrapper[4772]: E1122 12:17:36.593988 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c248d3292f5bfeb3da0b1b2761d7c89ecaa0a6c5c9ca1c8e64fdfadbb0f668b3\": container with ID starting with c248d3292f5bfeb3da0b1b2761d7c89ecaa0a6c5c9ca1c8e64fdfadbb0f668b3 not found: ID does not exist" containerID="c248d3292f5bfeb3da0b1b2761d7c89ecaa0a6c5c9ca1c8e64fdfadbb0f668b3" Nov 22 12:17:36 crc kubenswrapper[4772]: I1122 12:17:36.594020 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c248d3292f5bfeb3da0b1b2761d7c89ecaa0a6c5c9ca1c8e64fdfadbb0f668b3"} err="failed to get container status \"c248d3292f5bfeb3da0b1b2761d7c89ecaa0a6c5c9ca1c8e64fdfadbb0f668b3\": rpc error: code = NotFound desc = could not find container \"c248d3292f5bfeb3da0b1b2761d7c89ecaa0a6c5c9ca1c8e64fdfadbb0f668b3\": container with ID starting with c248d3292f5bfeb3da0b1b2761d7c89ecaa0a6c5c9ca1c8e64fdfadbb0f668b3 not found: ID does not exist" Nov 22 12:17:36 crc kubenswrapper[4772]: I1122 12:17:36.594042 4772 scope.go:117] "RemoveContainer" containerID="f0d82a4378f9891e0a269e9293bbf16bc3979f17b4b96f725be080e62c9f49e1" Nov 22 12:17:36 crc kubenswrapper[4772]: E1122 12:17:36.594643 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0d82a4378f9891e0a269e9293bbf16bc3979f17b4b96f725be080e62c9f49e1\": container with ID starting with f0d82a4378f9891e0a269e9293bbf16bc3979f17b4b96f725be080e62c9f49e1 not found: ID does not exist" containerID="f0d82a4378f9891e0a269e9293bbf16bc3979f17b4b96f725be080e62c9f49e1" Nov 22 12:17:36 crc kubenswrapper[4772]: I1122 12:17:36.594687 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0d82a4378f9891e0a269e9293bbf16bc3979f17b4b96f725be080e62c9f49e1"} err="failed to get container status \"f0d82a4378f9891e0a269e9293bbf16bc3979f17b4b96f725be080e62c9f49e1\": rpc error: code = NotFound desc = could not find container \"f0d82a4378f9891e0a269e9293bbf16bc3979f17b4b96f725be080e62c9f49e1\": container with ID starting with f0d82a4378f9891e0a269e9293bbf16bc3979f17b4b96f725be080e62c9f49e1 not found: ID does not exist" Nov 22 12:17:36 crc kubenswrapper[4772]: I1122 12:17:36.605259 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/695c0702-b8d4-4bdf-8268-3e5898f3d789-amphora-image\") pod \"695c0702-b8d4-4bdf-8268-3e5898f3d789\" (UID: \"695c0702-b8d4-4bdf-8268-3e5898f3d789\") " Nov 22 12:17:36 crc kubenswrapper[4772]: I1122 12:17:36.605508 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/695c0702-b8d4-4bdf-8268-3e5898f3d789-httpd-config\") pod \"695c0702-b8d4-4bdf-8268-3e5898f3d789\" (UID: \"695c0702-b8d4-4bdf-8268-3e5898f3d789\") " Nov 22 12:17:36 crc kubenswrapper[4772]: I1122 12:17:36.633655 4772 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/secret/695c0702-b8d4-4bdf-8268-3e5898f3d789-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "695c0702-b8d4-4bdf-8268-3e5898f3d789" (UID: "695c0702-b8d4-4bdf-8268-3e5898f3d789"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:17:36 crc kubenswrapper[4772]: I1122 12:17:36.694471 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/695c0702-b8d4-4bdf-8268-3e5898f3d789-amphora-image" (OuterVolumeSpecName: "amphora-image") pod "695c0702-b8d4-4bdf-8268-3e5898f3d789" (UID: "695c0702-b8d4-4bdf-8268-3e5898f3d789"). InnerVolumeSpecName "amphora-image". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 12:17:36 crc kubenswrapper[4772]: I1122 12:17:36.707432 4772 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/695c0702-b8d4-4bdf-8268-3e5898f3d789-httpd-config\") on node \"crc\" DevicePath \"\"" Nov 22 12:17:36 crc kubenswrapper[4772]: I1122 12:17:36.707465 4772 reconciler_common.go:293] "Volume detached for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/695c0702-b8d4-4bdf-8268-3e5898f3d789-amphora-image\") on node \"crc\" DevicePath \"\"" Nov 22 12:17:36 crc kubenswrapper[4772]: I1122 12:17:36.859912 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-image-upload-59f8cff499-vwllg"] Nov 22 12:17:36 crc kubenswrapper[4772]: I1122 12:17:36.871209 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-image-upload-59f8cff499-vwllg"] Nov 22 12:17:37 crc kubenswrapper[4772]: I1122 12:17:37.427199 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="695c0702-b8d4-4bdf-8268-3e5898f3d789" path="/var/lib/kubelet/pods/695c0702-b8d4-4bdf-8268-3e5898f3d789/volumes" Nov 22 12:17:37 crc kubenswrapper[4772]: I1122 12:17:37.428372 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e18b01d3-2ec1-46a0-b5ee-bc9addebed19" path="/var/lib/kubelet/pods/e18b01d3-2ec1-46a0-b5ee-bc9addebed19/volumes" Nov 22 12:17:39 crc kubenswrapper[4772]: I1122 12:17:39.721610 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-image-upload-59f8cff499-tc6tq"] Nov 22 12:17:39 crc kubenswrapper[4772]: E1122 12:17:39.722430 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81eb4121-4766-421d-9d21-8aa5a8801dc8" containerName="octavia-db-sync" Nov 22 12:17:39 crc kubenswrapper[4772]: I1122 12:17:39.722444 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="81eb4121-4766-421d-9d21-8aa5a8801dc8" containerName="octavia-db-sync" Nov 22 12:17:39 crc kubenswrapper[4772]: E1122 12:17:39.722479 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="695c0702-b8d4-4bdf-8268-3e5898f3d789" containerName="init" Nov 22 12:17:39 crc kubenswrapper[4772]: I1122 12:17:39.722486 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="695c0702-b8d4-4bdf-8268-3e5898f3d789" containerName="init" Nov 22 12:17:39 crc kubenswrapper[4772]: E1122 12:17:39.722503 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81eb4121-4766-421d-9d21-8aa5a8801dc8" containerName="init" Nov 22 12:17:39 crc kubenswrapper[4772]: I1122 12:17:39.722509 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="81eb4121-4766-421d-9d21-8aa5a8801dc8" containerName="init" Nov 22 12:17:39 crc kubenswrapper[4772]: E1122 12:17:39.722524 4772 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="695c0702-b8d4-4bdf-8268-3e5898f3d789" containerName="octavia-amphora-httpd" Nov 22 12:17:39 crc kubenswrapper[4772]: I1122 12:17:39.722530 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="695c0702-b8d4-4bdf-8268-3e5898f3d789" containerName="octavia-amphora-httpd" Nov 22 12:17:39 crc kubenswrapper[4772]: I1122 12:17:39.722738 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="81eb4121-4766-421d-9d21-8aa5a8801dc8" containerName="octavia-db-sync" Nov 22 12:17:39 crc kubenswrapper[4772]: I1122 12:17:39.722764 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="695c0702-b8d4-4bdf-8268-3e5898f3d789" containerName="octavia-amphora-httpd" Nov 22 12:17:39 crc kubenswrapper[4772]: I1122 12:17:39.723943 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-image-upload-59f8cff499-tc6tq" Nov 22 12:17:39 crc kubenswrapper[4772]: I1122 12:17:39.728521 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-config-data" Nov 22 12:17:39 crc kubenswrapper[4772]: I1122 12:17:39.738611 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-59f8cff499-tc6tq"] Nov 22 12:17:39 crc kubenswrapper[4772]: I1122 12:17:39.911777 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/ccbe9325-5948-44bc-9bd6-55892b81e85e-amphora-image\") pod \"octavia-image-upload-59f8cff499-tc6tq\" (UID: \"ccbe9325-5948-44bc-9bd6-55892b81e85e\") " pod="openstack/octavia-image-upload-59f8cff499-tc6tq" Nov 22 12:17:39 crc kubenswrapper[4772]: I1122 12:17:39.912087 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ccbe9325-5948-44bc-9bd6-55892b81e85e-httpd-config\") pod \"octavia-image-upload-59f8cff499-tc6tq\" (UID: \"ccbe9325-5948-44bc-9bd6-55892b81e85e\") " pod="openstack/octavia-image-upload-59f8cff499-tc6tq" Nov 22 12:17:40 crc kubenswrapper[4772]: I1122 12:17:40.013759 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/ccbe9325-5948-44bc-9bd6-55892b81e85e-amphora-image\") pod \"octavia-image-upload-59f8cff499-tc6tq\" (UID: \"ccbe9325-5948-44bc-9bd6-55892b81e85e\") " pod="openstack/octavia-image-upload-59f8cff499-tc6tq" Nov 22 12:17:40 crc kubenswrapper[4772]: I1122 12:17:40.013815 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ccbe9325-5948-44bc-9bd6-55892b81e85e-httpd-config\") pod \"octavia-image-upload-59f8cff499-tc6tq\" (UID: \"ccbe9325-5948-44bc-9bd6-55892b81e85e\") " pod="openstack/octavia-image-upload-59f8cff499-tc6tq" Nov 22 12:17:40 crc kubenswrapper[4772]: I1122 12:17:40.015931 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/ccbe9325-5948-44bc-9bd6-55892b81e85e-amphora-image\") pod \"octavia-image-upload-59f8cff499-tc6tq\" (UID: \"ccbe9325-5948-44bc-9bd6-55892b81e85e\") " pod="openstack/octavia-image-upload-59f8cff499-tc6tq" Nov 22 12:17:40 crc kubenswrapper[4772]: I1122 12:17:40.022927 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ccbe9325-5948-44bc-9bd6-55892b81e85e-httpd-config\") pod 
\"octavia-image-upload-59f8cff499-tc6tq\" (UID: \"ccbe9325-5948-44bc-9bd6-55892b81e85e\") " pod="openstack/octavia-image-upload-59f8cff499-tc6tq" Nov 22 12:17:40 crc kubenswrapper[4772]: I1122 12:17:40.052063 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-image-upload-59f8cff499-tc6tq" Nov 22 12:17:40 crc kubenswrapper[4772]: I1122 12:17:40.414087 4772 scope.go:117] "RemoveContainer" containerID="0acc780374533c91fb6e94ed6fa4eb88f1ad8ecfc715db40059fc9377d70e080" Nov 22 12:17:40 crc kubenswrapper[4772]: E1122 12:17:40.414633 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:17:40 crc kubenswrapper[4772]: I1122 12:17:40.723764 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-59f8cff499-tc6tq"] Nov 22 12:17:41 crc kubenswrapper[4772]: I1122 12:17:41.027594 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-6znrw"] Nov 22 12:17:41 crc kubenswrapper[4772]: I1122 12:17:41.037976 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-6znrw"] Nov 22 12:17:41 crc kubenswrapper[4772]: I1122 12:17:41.424845 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22e758ed-9d0e-4359-bc0b-902316c12923" path="/var/lib/kubelet/pods/22e758ed-9d0e-4359-bc0b-902316c12923/volumes" Nov 22 12:17:41 crc kubenswrapper[4772]: I1122 12:17:41.580772 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-tc6tq" event={"ID":"ccbe9325-5948-44bc-9bd6-55892b81e85e","Type":"ContainerStarted","Data":"5f06e63c8b61b39c07c3a14555d931af71951c860aea0b42740f057005327a75"} Nov 22 12:17:41 crc kubenswrapper[4772]: I1122 12:17:41.580825 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-tc6tq" event={"ID":"ccbe9325-5948-44bc-9bd6-55892b81e85e","Type":"ContainerStarted","Data":"0a38a4ef7ce9f4805d13c911814abf210f3454abfaea77b1e8d0db3091a0145e"} Nov 22 12:17:42 crc kubenswrapper[4772]: I1122 12:17:42.594065 4772 generic.go:334] "Generic (PLEG): container finished" podID="ccbe9325-5948-44bc-9bd6-55892b81e85e" containerID="5f06e63c8b61b39c07c3a14555d931af71951c860aea0b42740f057005327a75" exitCode=0 Nov 22 12:17:42 crc kubenswrapper[4772]: I1122 12:17:42.594116 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-tc6tq" event={"ID":"ccbe9325-5948-44bc-9bd6-55892b81e85e","Type":"ContainerDied","Data":"5f06e63c8b61b39c07c3a14555d931af71951c860aea0b42740f057005327a75"} Nov 22 12:17:44 crc kubenswrapper[4772]: I1122 12:17:44.617100 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-tc6tq" event={"ID":"ccbe9325-5948-44bc-9bd6-55892b81e85e","Type":"ContainerStarted","Data":"57f3b7538b55247b7bdd867f796e03e38698efc536dcf55c337824abd5f949ea"} Nov 22 12:17:44 crc kubenswrapper[4772]: I1122 12:17:44.641623 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-image-upload-59f8cff499-tc6tq" podStartSLOduration=2.488551903 
podStartE2EDuration="5.641598677s" podCreationTimestamp="2025-11-22 12:17:39 +0000 UTC" firstStartedPulling="2025-11-22 12:17:40.732675053 +0000 UTC m=+5980.972119547" lastFinishedPulling="2025-11-22 12:17:43.885721817 +0000 UTC m=+5984.125166321" observedRunningTime="2025-11-22 12:17:44.635843844 +0000 UTC m=+5984.875288338" watchObservedRunningTime="2025-11-22 12:17:44.641598677 +0000 UTC m=+5984.881043171" Nov 22 12:17:48 crc kubenswrapper[4772]: I1122 12:17:48.584105 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-healthmanager-gdsq4"] Nov 22 12:17:48 crc kubenswrapper[4772]: I1122 12:17:48.587595 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-healthmanager-gdsq4" Nov 22 12:17:48 crc kubenswrapper[4772]: I1122 12:17:48.590735 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-healthmanager-config-data" Nov 22 12:17:48 crc kubenswrapper[4772]: I1122 12:17:48.591808 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-healthmanager-scripts" Nov 22 12:17:48 crc kubenswrapper[4772]: I1122 12:17:48.597580 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-gdsq4"] Nov 22 12:17:48 crc kubenswrapper[4772]: I1122 12:17:48.682110 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/7a997166-9f14-4bb7-a94a-427e72fb64d2-hm-ports\") pod \"octavia-healthmanager-gdsq4\" (UID: \"7a997166-9f14-4bb7-a94a-427e72fb64d2\") " pod="openstack/octavia-healthmanager-gdsq4" Nov 22 12:17:48 crc kubenswrapper[4772]: I1122 12:17:48.682229 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a997166-9f14-4bb7-a94a-427e72fb64d2-scripts\") pod \"octavia-healthmanager-gdsq4\" (UID: \"7a997166-9f14-4bb7-a94a-427e72fb64d2\") " pod="openstack/octavia-healthmanager-gdsq4" Nov 22 12:17:48 crc kubenswrapper[4772]: I1122 12:17:48.682262 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/7a997166-9f14-4bb7-a94a-427e72fb64d2-config-data-merged\") pod \"octavia-healthmanager-gdsq4\" (UID: \"7a997166-9f14-4bb7-a94a-427e72fb64d2\") " pod="openstack/octavia-healthmanager-gdsq4" Nov 22 12:17:48 crc kubenswrapper[4772]: I1122 12:17:48.682313 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a997166-9f14-4bb7-a94a-427e72fb64d2-combined-ca-bundle\") pod \"octavia-healthmanager-gdsq4\" (UID: \"7a997166-9f14-4bb7-a94a-427e72fb64d2\") " pod="openstack/octavia-healthmanager-gdsq4" Nov 22 12:17:48 crc kubenswrapper[4772]: I1122 12:17:48.682356 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a997166-9f14-4bb7-a94a-427e72fb64d2-config-data\") pod \"octavia-healthmanager-gdsq4\" (UID: \"7a997166-9f14-4bb7-a94a-427e72fb64d2\") " pod="openstack/octavia-healthmanager-gdsq4" Nov 22 12:17:48 crc kubenswrapper[4772]: I1122 12:17:48.682614 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/7a997166-9f14-4bb7-a94a-427e72fb64d2-amphora-certs\") pod 
\"octavia-healthmanager-gdsq4\" (UID: \"7a997166-9f14-4bb7-a94a-427e72fb64d2\") " pod="openstack/octavia-healthmanager-gdsq4" Nov 22 12:17:48 crc kubenswrapper[4772]: I1122 12:17:48.784946 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/7a997166-9f14-4bb7-a94a-427e72fb64d2-amphora-certs\") pod \"octavia-healthmanager-gdsq4\" (UID: \"7a997166-9f14-4bb7-a94a-427e72fb64d2\") " pod="openstack/octavia-healthmanager-gdsq4" Nov 22 12:17:48 crc kubenswrapper[4772]: I1122 12:17:48.785048 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/7a997166-9f14-4bb7-a94a-427e72fb64d2-hm-ports\") pod \"octavia-healthmanager-gdsq4\" (UID: \"7a997166-9f14-4bb7-a94a-427e72fb64d2\") " pod="openstack/octavia-healthmanager-gdsq4" Nov 22 12:17:48 crc kubenswrapper[4772]: I1122 12:17:48.785163 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a997166-9f14-4bb7-a94a-427e72fb64d2-scripts\") pod \"octavia-healthmanager-gdsq4\" (UID: \"7a997166-9f14-4bb7-a94a-427e72fb64d2\") " pod="openstack/octavia-healthmanager-gdsq4" Nov 22 12:17:48 crc kubenswrapper[4772]: I1122 12:17:48.785201 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/7a997166-9f14-4bb7-a94a-427e72fb64d2-config-data-merged\") pod \"octavia-healthmanager-gdsq4\" (UID: \"7a997166-9f14-4bb7-a94a-427e72fb64d2\") " pod="openstack/octavia-healthmanager-gdsq4" Nov 22 12:17:48 crc kubenswrapper[4772]: I1122 12:17:48.785250 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a997166-9f14-4bb7-a94a-427e72fb64d2-combined-ca-bundle\") pod \"octavia-healthmanager-gdsq4\" (UID: \"7a997166-9f14-4bb7-a94a-427e72fb64d2\") " pod="openstack/octavia-healthmanager-gdsq4" Nov 22 12:17:48 crc kubenswrapper[4772]: I1122 12:17:48.785293 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a997166-9f14-4bb7-a94a-427e72fb64d2-config-data\") pod \"octavia-healthmanager-gdsq4\" (UID: \"7a997166-9f14-4bb7-a94a-427e72fb64d2\") " pod="openstack/octavia-healthmanager-gdsq4" Nov 22 12:17:48 crc kubenswrapper[4772]: I1122 12:17:48.786239 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/7a997166-9f14-4bb7-a94a-427e72fb64d2-config-data-merged\") pod \"octavia-healthmanager-gdsq4\" (UID: \"7a997166-9f14-4bb7-a94a-427e72fb64d2\") " pod="openstack/octavia-healthmanager-gdsq4" Nov 22 12:17:48 crc kubenswrapper[4772]: I1122 12:17:48.786524 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/7a997166-9f14-4bb7-a94a-427e72fb64d2-hm-ports\") pod \"octavia-healthmanager-gdsq4\" (UID: \"7a997166-9f14-4bb7-a94a-427e72fb64d2\") " pod="openstack/octavia-healthmanager-gdsq4" Nov 22 12:17:48 crc kubenswrapper[4772]: I1122 12:17:48.792216 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/7a997166-9f14-4bb7-a94a-427e72fb64d2-amphora-certs\") pod \"octavia-healthmanager-gdsq4\" (UID: \"7a997166-9f14-4bb7-a94a-427e72fb64d2\") " pod="openstack/octavia-healthmanager-gdsq4" Nov 22 
12:17:48 crc kubenswrapper[4772]: I1122 12:17:48.792315 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a997166-9f14-4bb7-a94a-427e72fb64d2-config-data\") pod \"octavia-healthmanager-gdsq4\" (UID: \"7a997166-9f14-4bb7-a94a-427e72fb64d2\") " pod="openstack/octavia-healthmanager-gdsq4" Nov 22 12:17:48 crc kubenswrapper[4772]: I1122 12:17:48.793057 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a997166-9f14-4bb7-a94a-427e72fb64d2-scripts\") pod \"octavia-healthmanager-gdsq4\" (UID: \"7a997166-9f14-4bb7-a94a-427e72fb64d2\") " pod="openstack/octavia-healthmanager-gdsq4" Nov 22 12:17:48 crc kubenswrapper[4772]: I1122 12:17:48.807955 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a997166-9f14-4bb7-a94a-427e72fb64d2-combined-ca-bundle\") pod \"octavia-healthmanager-gdsq4\" (UID: \"7a997166-9f14-4bb7-a94a-427e72fb64d2\") " pod="openstack/octavia-healthmanager-gdsq4" Nov 22 12:17:48 crc kubenswrapper[4772]: I1122 12:17:48.913375 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-healthmanager-gdsq4" Nov 22 12:17:49 crc kubenswrapper[4772]: I1122 12:17:49.847726 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-gdsq4"] Nov 22 12:17:50 crc kubenswrapper[4772]: I1122 12:17:50.672573 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-gdsq4" event={"ID":"7a997166-9f14-4bb7-a94a-427e72fb64d2","Type":"ContainerStarted","Data":"8583b8cac7c72a0ac1c0eed24620aca40d324cae70d290a114cbccf7bf021985"} Nov 22 12:17:50 crc kubenswrapper[4772]: I1122 12:17:50.672991 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-gdsq4" event={"ID":"7a997166-9f14-4bb7-a94a-427e72fb64d2","Type":"ContainerStarted","Data":"942332e2d58ece5363c454ba179f874f0f0e1668e0d3683f6e992111be26cf1f"} Nov 22 12:17:52 crc kubenswrapper[4772]: I1122 12:17:52.416768 4772 scope.go:117] "RemoveContainer" containerID="0acc780374533c91fb6e94ed6fa4eb88f1ad8ecfc715db40059fc9377d70e080" Nov 22 12:17:52 crc kubenswrapper[4772]: E1122 12:17:52.417448 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:17:52 crc kubenswrapper[4772]: I1122 12:17:52.693786 4772 generic.go:334] "Generic (PLEG): container finished" podID="7a997166-9f14-4bb7-a94a-427e72fb64d2" containerID="8583b8cac7c72a0ac1c0eed24620aca40d324cae70d290a114cbccf7bf021985" exitCode=0 Nov 22 12:17:52 crc kubenswrapper[4772]: I1122 12:17:52.693860 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-gdsq4" event={"ID":"7a997166-9f14-4bb7-a94a-427e72fb64d2","Type":"ContainerDied","Data":"8583b8cac7c72a0ac1c0eed24620aca40d324cae70d290a114cbccf7bf021985"} Nov 22 12:17:53 crc kubenswrapper[4772]: I1122 12:17:53.713782 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-gdsq4" 
event={"ID":"7a997166-9f14-4bb7-a94a-427e72fb64d2","Type":"ContainerStarted","Data":"6f8399a4137ac5592f5dfe36c8b02df039bbaf5f184e47716da934579396fc67"} Nov 22 12:17:53 crc kubenswrapper[4772]: I1122 12:17:53.715398 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-healthmanager-gdsq4" Nov 22 12:17:53 crc kubenswrapper[4772]: I1122 12:17:53.743967 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-healthmanager-gdsq4" podStartSLOduration=5.743947846 podStartE2EDuration="5.743947846s" podCreationTimestamp="2025-11-22 12:17:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:17:53.739181877 +0000 UTC m=+5993.978626371" watchObservedRunningTime="2025-11-22 12:17:53.743947846 +0000 UTC m=+5993.983392340" Nov 22 12:17:56 crc kubenswrapper[4772]: I1122 12:17:56.032170 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xrlhb"] Nov 22 12:17:56 crc kubenswrapper[4772]: I1122 12:17:56.035493 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xrlhb" Nov 22 12:17:56 crc kubenswrapper[4772]: I1122 12:17:56.040499 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xrlhb"] Nov 22 12:17:56 crc kubenswrapper[4772]: I1122 12:17:56.194169 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efce4fef-1a79-484b-9d48-2b7531f3dedc-catalog-content\") pod \"community-operators-xrlhb\" (UID: \"efce4fef-1a79-484b-9d48-2b7531f3dedc\") " pod="openshift-marketplace/community-operators-xrlhb" Nov 22 12:17:56 crc kubenswrapper[4772]: I1122 12:17:56.194411 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9j5pk\" (UniqueName: \"kubernetes.io/projected/efce4fef-1a79-484b-9d48-2b7531f3dedc-kube-api-access-9j5pk\") pod \"community-operators-xrlhb\" (UID: \"efce4fef-1a79-484b-9d48-2b7531f3dedc\") " pod="openshift-marketplace/community-operators-xrlhb" Nov 22 12:17:56 crc kubenswrapper[4772]: I1122 12:17:56.194548 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efce4fef-1a79-484b-9d48-2b7531f3dedc-utilities\") pod \"community-operators-xrlhb\" (UID: \"efce4fef-1a79-484b-9d48-2b7531f3dedc\") " pod="openshift-marketplace/community-operators-xrlhb" Nov 22 12:17:56 crc kubenswrapper[4772]: I1122 12:17:56.297538 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9j5pk\" (UniqueName: \"kubernetes.io/projected/efce4fef-1a79-484b-9d48-2b7531f3dedc-kube-api-access-9j5pk\") pod \"community-operators-xrlhb\" (UID: \"efce4fef-1a79-484b-9d48-2b7531f3dedc\") " pod="openshift-marketplace/community-operators-xrlhb" Nov 22 12:17:56 crc kubenswrapper[4772]: I1122 12:17:56.297791 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efce4fef-1a79-484b-9d48-2b7531f3dedc-utilities\") pod \"community-operators-xrlhb\" (UID: \"efce4fef-1a79-484b-9d48-2b7531f3dedc\") " pod="openshift-marketplace/community-operators-xrlhb" Nov 22 12:17:56 crc kubenswrapper[4772]: I1122 12:17:56.297842 4772 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efce4fef-1a79-484b-9d48-2b7531f3dedc-catalog-content\") pod \"community-operators-xrlhb\" (UID: \"efce4fef-1a79-484b-9d48-2b7531f3dedc\") " pod="openshift-marketplace/community-operators-xrlhb" Nov 22 12:17:56 crc kubenswrapper[4772]: I1122 12:17:56.298765 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efce4fef-1a79-484b-9d48-2b7531f3dedc-catalog-content\") pod \"community-operators-xrlhb\" (UID: \"efce4fef-1a79-484b-9d48-2b7531f3dedc\") " pod="openshift-marketplace/community-operators-xrlhb" Nov 22 12:17:56 crc kubenswrapper[4772]: I1122 12:17:56.299893 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efce4fef-1a79-484b-9d48-2b7531f3dedc-utilities\") pod \"community-operators-xrlhb\" (UID: \"efce4fef-1a79-484b-9d48-2b7531f3dedc\") " pod="openshift-marketplace/community-operators-xrlhb" Nov 22 12:17:56 crc kubenswrapper[4772]: I1122 12:17:56.357501 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9j5pk\" (UniqueName: \"kubernetes.io/projected/efce4fef-1a79-484b-9d48-2b7531f3dedc-kube-api-access-9j5pk\") pod \"community-operators-xrlhb\" (UID: \"efce4fef-1a79-484b-9d48-2b7531f3dedc\") " pod="openshift-marketplace/community-operators-xrlhb" Nov 22 12:17:56 crc kubenswrapper[4772]: I1122 12:17:56.365977 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xrlhb" Nov 22 12:17:56 crc kubenswrapper[4772]: W1122 12:17:56.931278 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podefce4fef_1a79_484b_9d48_2b7531f3dedc.slice/crio-fb51949cd04f3ef2cfd0b47227b328d41435be03ef83f14469a486e27ba9565e WatchSource:0}: Error finding container fb51949cd04f3ef2cfd0b47227b328d41435be03ef83f14469a486e27ba9565e: Status 404 returned error can't find the container with id fb51949cd04f3ef2cfd0b47227b328d41435be03ef83f14469a486e27ba9565e Nov 22 12:17:56 crc kubenswrapper[4772]: I1122 12:17:56.941875 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xrlhb"] Nov 22 12:17:57 crc kubenswrapper[4772]: I1122 12:17:57.764164 4772 generic.go:334] "Generic (PLEG): container finished" podID="efce4fef-1a79-484b-9d48-2b7531f3dedc" containerID="252e3c7b62c35ed568d540d54d518537cf8dfb1c55210702efd87184e578eba2" exitCode=0 Nov 22 12:17:57 crc kubenswrapper[4772]: I1122 12:17:57.764254 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xrlhb" event={"ID":"efce4fef-1a79-484b-9d48-2b7531f3dedc","Type":"ContainerDied","Data":"252e3c7b62c35ed568d540d54d518537cf8dfb1c55210702efd87184e578eba2"} Nov 22 12:17:57 crc kubenswrapper[4772]: I1122 12:17:57.764343 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xrlhb" event={"ID":"efce4fef-1a79-484b-9d48-2b7531f3dedc","Type":"ContainerStarted","Data":"fb51949cd04f3ef2cfd0b47227b328d41435be03ef83f14469a486e27ba9565e"} Nov 22 12:17:58 crc kubenswrapper[4772]: I1122 12:17:58.779762 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xrlhb" 
event={"ID":"efce4fef-1a79-484b-9d48-2b7531f3dedc","Type":"ContainerStarted","Data":"7037ffbf34fc20a5a1ced3963f57267d58fede468530d1424109227af85acb60"} Nov 22 12:17:59 crc kubenswrapper[4772]: I1122 12:17:59.822631 4772 generic.go:334] "Generic (PLEG): container finished" podID="efce4fef-1a79-484b-9d48-2b7531f3dedc" containerID="7037ffbf34fc20a5a1ced3963f57267d58fede468530d1424109227af85acb60" exitCode=0 Nov 22 12:17:59 crc kubenswrapper[4772]: I1122 12:17:59.822724 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xrlhb" event={"ID":"efce4fef-1a79-484b-9d48-2b7531f3dedc","Type":"ContainerDied","Data":"7037ffbf34fc20a5a1ced3963f57267d58fede468530d1424109227af85acb60"} Nov 22 12:18:00 crc kubenswrapper[4772]: I1122 12:18:00.841566 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xrlhb" event={"ID":"efce4fef-1a79-484b-9d48-2b7531f3dedc","Type":"ContainerStarted","Data":"f2892da930cfc1bbef57ac594ee9fb9cc5ce9ddef5da9f4a991c9977900c60a1"} Nov 22 12:18:00 crc kubenswrapper[4772]: I1122 12:18:00.872473 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xrlhb" podStartSLOduration=2.4082869479999998 podStartE2EDuration="4.87245188s" podCreationTimestamp="2025-11-22 12:17:56 +0000 UTC" firstStartedPulling="2025-11-22 12:17:57.76853193 +0000 UTC m=+5998.007976424" lastFinishedPulling="2025-11-22 12:18:00.232696852 +0000 UTC m=+6000.472141356" observedRunningTime="2025-11-22 12:18:00.870100072 +0000 UTC m=+6001.109544576" watchObservedRunningTime="2025-11-22 12:18:00.87245188 +0000 UTC m=+6001.111896374" Nov 22 12:18:01 crc kubenswrapper[4772]: I1122 12:18:01.026824 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dj7c8"] Nov 22 12:18:01 crc kubenswrapper[4772]: I1122 12:18:01.030084 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dj7c8" Nov 22 12:18:01 crc kubenswrapper[4772]: I1122 12:18:01.057590 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dj7c8"] Nov 22 12:18:01 crc kubenswrapper[4772]: I1122 12:18:01.148611 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c3d9ff1-b290-4488-bb4a-595852b508b2-utilities\") pod \"redhat-operators-dj7c8\" (UID: \"7c3d9ff1-b290-4488-bb4a-595852b508b2\") " pod="openshift-marketplace/redhat-operators-dj7c8" Nov 22 12:18:01 crc kubenswrapper[4772]: I1122 12:18:01.148708 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6wqb\" (UniqueName: \"kubernetes.io/projected/7c3d9ff1-b290-4488-bb4a-595852b508b2-kube-api-access-q6wqb\") pod \"redhat-operators-dj7c8\" (UID: \"7c3d9ff1-b290-4488-bb4a-595852b508b2\") " pod="openshift-marketplace/redhat-operators-dj7c8" Nov 22 12:18:01 crc kubenswrapper[4772]: I1122 12:18:01.148797 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c3d9ff1-b290-4488-bb4a-595852b508b2-catalog-content\") pod \"redhat-operators-dj7c8\" (UID: \"7c3d9ff1-b290-4488-bb4a-595852b508b2\") " pod="openshift-marketplace/redhat-operators-dj7c8" Nov 22 12:18:01 crc kubenswrapper[4772]: I1122 12:18:01.251815 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6wqb\" (UniqueName: \"kubernetes.io/projected/7c3d9ff1-b290-4488-bb4a-595852b508b2-kube-api-access-q6wqb\") pod \"redhat-operators-dj7c8\" (UID: \"7c3d9ff1-b290-4488-bb4a-595852b508b2\") " pod="openshift-marketplace/redhat-operators-dj7c8" Nov 22 12:18:01 crc kubenswrapper[4772]: I1122 12:18:01.253740 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c3d9ff1-b290-4488-bb4a-595852b508b2-catalog-content\") pod \"redhat-operators-dj7c8\" (UID: \"7c3d9ff1-b290-4488-bb4a-595852b508b2\") " pod="openshift-marketplace/redhat-operators-dj7c8" Nov 22 12:18:01 crc kubenswrapper[4772]: I1122 12:18:01.254034 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c3d9ff1-b290-4488-bb4a-595852b508b2-utilities\") pod \"redhat-operators-dj7c8\" (UID: \"7c3d9ff1-b290-4488-bb4a-595852b508b2\") " pod="openshift-marketplace/redhat-operators-dj7c8" Nov 22 12:18:01 crc kubenswrapper[4772]: I1122 12:18:01.254464 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c3d9ff1-b290-4488-bb4a-595852b508b2-catalog-content\") pod \"redhat-operators-dj7c8\" (UID: \"7c3d9ff1-b290-4488-bb4a-595852b508b2\") " pod="openshift-marketplace/redhat-operators-dj7c8" Nov 22 12:18:01 crc kubenswrapper[4772]: I1122 12:18:01.254905 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c3d9ff1-b290-4488-bb4a-595852b508b2-utilities\") pod \"redhat-operators-dj7c8\" (UID: \"7c3d9ff1-b290-4488-bb4a-595852b508b2\") " pod="openshift-marketplace/redhat-operators-dj7c8" Nov 22 12:18:01 crc kubenswrapper[4772]: I1122 12:18:01.282104 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-q6wqb\" (UniqueName: \"kubernetes.io/projected/7c3d9ff1-b290-4488-bb4a-595852b508b2-kube-api-access-q6wqb\") pod \"redhat-operators-dj7c8\" (UID: \"7c3d9ff1-b290-4488-bb4a-595852b508b2\") " pod="openshift-marketplace/redhat-operators-dj7c8" Nov 22 12:18:01 crc kubenswrapper[4772]: I1122 12:18:01.354009 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dj7c8" Nov 22 12:18:01 crc kubenswrapper[4772]: I1122 12:18:01.869484 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dj7c8"] Nov 22 12:18:02 crc kubenswrapper[4772]: I1122 12:18:02.865781 4772 generic.go:334] "Generic (PLEG): container finished" podID="7c3d9ff1-b290-4488-bb4a-595852b508b2" containerID="7ae85a574f5a3d4e6b92b50b7f915c33b510effada5c46a3e601c70e0f20e92c" exitCode=0 Nov 22 12:18:02 crc kubenswrapper[4772]: I1122 12:18:02.866015 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dj7c8" event={"ID":"7c3d9ff1-b290-4488-bb4a-595852b508b2","Type":"ContainerDied","Data":"7ae85a574f5a3d4e6b92b50b7f915c33b510effada5c46a3e601c70e0f20e92c"} Nov 22 12:18:02 crc kubenswrapper[4772]: I1122 12:18:02.866355 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dj7c8" event={"ID":"7c3d9ff1-b290-4488-bb4a-595852b508b2","Type":"ContainerStarted","Data":"f40b98d38a4bf061025a085aa8a1bdc7c5820cc618d0151c108e467218719744"} Nov 22 12:18:03 crc kubenswrapper[4772]: I1122 12:18:03.882938 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dj7c8" event={"ID":"7c3d9ff1-b290-4488-bb4a-595852b508b2","Type":"ContainerStarted","Data":"089d822d92728bc1b299dbb2f11f573b12289723b5fc35717555070c97cf6a2c"} Nov 22 12:18:03 crc kubenswrapper[4772]: I1122 12:18:03.971309 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-healthmanager-gdsq4" Nov 22 12:18:04 crc kubenswrapper[4772]: I1122 12:18:04.891924 4772 generic.go:334] "Generic (PLEG): container finished" podID="7c3d9ff1-b290-4488-bb4a-595852b508b2" containerID="089d822d92728bc1b299dbb2f11f573b12289723b5fc35717555070c97cf6a2c" exitCode=0 Nov 22 12:18:04 crc kubenswrapper[4772]: I1122 12:18:04.891973 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dj7c8" event={"ID":"7c3d9ff1-b290-4488-bb4a-595852b508b2","Type":"ContainerDied","Data":"089d822d92728bc1b299dbb2f11f573b12289723b5fc35717555070c97cf6a2c"} Nov 22 12:18:05 crc kubenswrapper[4772]: I1122 12:18:05.413849 4772 scope.go:117] "RemoveContainer" containerID="0acc780374533c91fb6e94ed6fa4eb88f1ad8ecfc715db40059fc9377d70e080" Nov 22 12:18:05 crc kubenswrapper[4772]: E1122 12:18:05.414566 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:18:05 crc kubenswrapper[4772]: I1122 12:18:05.902620 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dj7c8" 
event={"ID":"7c3d9ff1-b290-4488-bb4a-595852b508b2","Type":"ContainerStarted","Data":"728a5b19be67a899e294a34214566f5a945c8f44af74f080bae19eddb075d2cb"} Nov 22 12:18:05 crc kubenswrapper[4772]: I1122 12:18:05.925162 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dj7c8" podStartSLOduration=3.500506333 podStartE2EDuration="5.925131841s" podCreationTimestamp="2025-11-22 12:18:00 +0000 UTC" firstStartedPulling="2025-11-22 12:18:02.867484562 +0000 UTC m=+6003.106929096" lastFinishedPulling="2025-11-22 12:18:05.29211011 +0000 UTC m=+6005.531554604" observedRunningTime="2025-11-22 12:18:05.918970458 +0000 UTC m=+6006.158414952" watchObservedRunningTime="2025-11-22 12:18:05.925131841 +0000 UTC m=+6006.164576335" Nov 22 12:18:06 crc kubenswrapper[4772]: I1122 12:18:06.366114 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xrlhb" Nov 22 12:18:06 crc kubenswrapper[4772]: I1122 12:18:06.366188 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xrlhb" Nov 22 12:18:06 crc kubenswrapper[4772]: I1122 12:18:06.415670 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xrlhb" Nov 22 12:18:06 crc kubenswrapper[4772]: I1122 12:18:06.988282 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xrlhb" Nov 22 12:18:09 crc kubenswrapper[4772]: I1122 12:18:09.615782 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xrlhb"] Nov 22 12:18:09 crc kubenswrapper[4772]: I1122 12:18:09.617080 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xrlhb" podUID="efce4fef-1a79-484b-9d48-2b7531f3dedc" containerName="registry-server" containerID="cri-o://f2892da930cfc1bbef57ac594ee9fb9cc5ce9ddef5da9f4a991c9977900c60a1" gracePeriod=2 Nov 22 12:18:10 crc kubenswrapper[4772]: I1122 12:18:10.990302 4772 generic.go:334] "Generic (PLEG): container finished" podID="efce4fef-1a79-484b-9d48-2b7531f3dedc" containerID="f2892da930cfc1bbef57ac594ee9fb9cc5ce9ddef5da9f4a991c9977900c60a1" exitCode=0 Nov 22 12:18:10 crc kubenswrapper[4772]: I1122 12:18:10.990961 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xrlhb" event={"ID":"efce4fef-1a79-484b-9d48-2b7531f3dedc","Type":"ContainerDied","Data":"f2892da930cfc1bbef57ac594ee9fb9cc5ce9ddef5da9f4a991c9977900c60a1"} Nov 22 12:18:11 crc kubenswrapper[4772]: I1122 12:18:11.041863 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-hvhfm"] Nov 22 12:18:11 crc kubenswrapper[4772]: I1122 12:18:11.051887 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-hvhfm"] Nov 22 12:18:11 crc kubenswrapper[4772]: I1122 12:18:11.262083 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xrlhb" Nov 22 12:18:11 crc kubenswrapper[4772]: I1122 12:18:11.354766 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dj7c8" Nov 22 12:18:11 crc kubenswrapper[4772]: I1122 12:18:11.354807 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dj7c8" Nov 22 12:18:11 crc kubenswrapper[4772]: I1122 12:18:11.390095 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efce4fef-1a79-484b-9d48-2b7531f3dedc-catalog-content\") pod \"efce4fef-1a79-484b-9d48-2b7531f3dedc\" (UID: \"efce4fef-1a79-484b-9d48-2b7531f3dedc\") " Nov 22 12:18:11 crc kubenswrapper[4772]: I1122 12:18:11.390304 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9j5pk\" (UniqueName: \"kubernetes.io/projected/efce4fef-1a79-484b-9d48-2b7531f3dedc-kube-api-access-9j5pk\") pod \"efce4fef-1a79-484b-9d48-2b7531f3dedc\" (UID: \"efce4fef-1a79-484b-9d48-2b7531f3dedc\") " Nov 22 12:18:11 crc kubenswrapper[4772]: I1122 12:18:11.390498 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efce4fef-1a79-484b-9d48-2b7531f3dedc-utilities\") pod \"efce4fef-1a79-484b-9d48-2b7531f3dedc\" (UID: \"efce4fef-1a79-484b-9d48-2b7531f3dedc\") " Nov 22 12:18:11 crc kubenswrapper[4772]: I1122 12:18:11.391514 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efce4fef-1a79-484b-9d48-2b7531f3dedc-utilities" (OuterVolumeSpecName: "utilities") pod "efce4fef-1a79-484b-9d48-2b7531f3dedc" (UID: "efce4fef-1a79-484b-9d48-2b7531f3dedc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 12:18:11 crc kubenswrapper[4772]: I1122 12:18:11.398397 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efce4fef-1a79-484b-9d48-2b7531f3dedc-kube-api-access-9j5pk" (OuterVolumeSpecName: "kube-api-access-9j5pk") pod "efce4fef-1a79-484b-9d48-2b7531f3dedc" (UID: "efce4fef-1a79-484b-9d48-2b7531f3dedc"). InnerVolumeSpecName "kube-api-access-9j5pk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:18:11 crc kubenswrapper[4772]: I1122 12:18:11.425412 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43a4779b-3e60-4259-80b8-3d13029473f2" path="/var/lib/kubelet/pods/43a4779b-3e60-4259-80b8-3d13029473f2/volumes" Nov 22 12:18:11 crc kubenswrapper[4772]: I1122 12:18:11.442718 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efce4fef-1a79-484b-9d48-2b7531f3dedc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "efce4fef-1a79-484b-9d48-2b7531f3dedc" (UID: "efce4fef-1a79-484b-9d48-2b7531f3dedc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 12:18:11 crc kubenswrapper[4772]: I1122 12:18:11.493388 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9j5pk\" (UniqueName: \"kubernetes.io/projected/efce4fef-1a79-484b-9d48-2b7531f3dedc-kube-api-access-9j5pk\") on node \"crc\" DevicePath \"\"" Nov 22 12:18:11 crc kubenswrapper[4772]: I1122 12:18:11.493430 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efce4fef-1a79-484b-9d48-2b7531f3dedc-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 12:18:11 crc kubenswrapper[4772]: I1122 12:18:11.493440 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efce4fef-1a79-484b-9d48-2b7531f3dedc-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 12:18:12 crc kubenswrapper[4772]: I1122 12:18:12.005637 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xrlhb" event={"ID":"efce4fef-1a79-484b-9d48-2b7531f3dedc","Type":"ContainerDied","Data":"fb51949cd04f3ef2cfd0b47227b328d41435be03ef83f14469a486e27ba9565e"} Nov 22 12:18:12 crc kubenswrapper[4772]: I1122 12:18:12.005713 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xrlhb" Nov 22 12:18:12 crc kubenswrapper[4772]: I1122 12:18:12.006554 4772 scope.go:117] "RemoveContainer" containerID="f2892da930cfc1bbef57ac594ee9fb9cc5ce9ddef5da9f4a991c9977900c60a1" Nov 22 12:18:12 crc kubenswrapper[4772]: I1122 12:18:12.037764 4772 scope.go:117] "RemoveContainer" containerID="7037ffbf34fc20a5a1ced3963f57267d58fede468530d1424109227af85acb60" Nov 22 12:18:12 crc kubenswrapper[4772]: I1122 12:18:12.056895 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xrlhb"] Nov 22 12:18:12 crc kubenswrapper[4772]: I1122 12:18:12.064945 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xrlhb"] Nov 22 12:18:12 crc kubenswrapper[4772]: I1122 12:18:12.077972 4772 scope.go:117] "RemoveContainer" containerID="252e3c7b62c35ed568d540d54d518537cf8dfb1c55210702efd87184e578eba2" Nov 22 12:18:12 crc kubenswrapper[4772]: I1122 12:18:12.416786 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dj7c8" podUID="7c3d9ff1-b290-4488-bb4a-595852b508b2" containerName="registry-server" probeResult="failure" output=< Nov 22 12:18:12 crc kubenswrapper[4772]: timeout: failed to connect service ":50051" within 1s Nov 22 12:18:12 crc kubenswrapper[4772]: > Nov 22 12:18:12 crc kubenswrapper[4772]: I1122 12:18:12.768432 4772 scope.go:117] "RemoveContainer" containerID="f98700b08e229bfd45b36fb192cbf1cde9e4c1e819782f90ea46f5cf72735b28" Nov 22 12:18:12 crc kubenswrapper[4772]: I1122 12:18:12.820904 4772 scope.go:117] "RemoveContainer" containerID="320823e9ab6cfd09600f9e04b66562c8435ffdeabfb07d5431b829ac126c80bf" Nov 22 12:18:12 crc kubenswrapper[4772]: I1122 12:18:12.898690 4772 scope.go:117] "RemoveContainer" containerID="918f194511c5a999506f485098d5bb59055b93a4deb243d856bab5f44a1bba2c" Nov 22 12:18:12 crc kubenswrapper[4772]: I1122 12:18:12.968802 4772 scope.go:117] "RemoveContainer" containerID="81e4eec4c2d9c6f6b41b8631a4e80d7dc2b5afbdc4a39c0158d19dbffdeb9568" Nov 22 12:18:13 crc kubenswrapper[4772]: I1122 12:18:13.013893 4772 scope.go:117] "RemoveContainer" 
containerID="6d0a8f6cd2301e34cd099092d2d30d82dec905a57947b14494d3fffac45b7a18" Nov 22 12:18:13 crc kubenswrapper[4772]: I1122 12:18:13.425082 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efce4fef-1a79-484b-9d48-2b7531f3dedc" path="/var/lib/kubelet/pods/efce4fef-1a79-484b-9d48-2b7531f3dedc/volumes" Nov 22 12:18:17 crc kubenswrapper[4772]: I1122 12:18:17.413792 4772 scope.go:117] "RemoveContainer" containerID="0acc780374533c91fb6e94ed6fa4eb88f1ad8ecfc715db40059fc9377d70e080" Nov 22 12:18:17 crc kubenswrapper[4772]: E1122 12:18:17.414619 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:18:21 crc kubenswrapper[4772]: I1122 12:18:21.043121 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-480f-account-create-96xkv"] Nov 22 12:18:21 crc kubenswrapper[4772]: I1122 12:18:21.052855 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-480f-account-create-96xkv"] Nov 22 12:18:21 crc kubenswrapper[4772]: I1122 12:18:21.438409 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="353fe9c0-a672-4e62-a955-097502ceedf2" path="/var/lib/kubelet/pods/353fe9c0-a672-4e62-a955-097502ceedf2/volumes" Nov 22 12:18:21 crc kubenswrapper[4772]: I1122 12:18:21.439130 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dj7c8" Nov 22 12:18:21 crc kubenswrapper[4772]: I1122 12:18:21.530700 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dj7c8" Nov 22 12:18:21 crc kubenswrapper[4772]: I1122 12:18:21.699160 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dj7c8"] Nov 22 12:18:23 crc kubenswrapper[4772]: I1122 12:18:23.213482 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dj7c8" podUID="7c3d9ff1-b290-4488-bb4a-595852b508b2" containerName="registry-server" containerID="cri-o://728a5b19be67a899e294a34214566f5a945c8f44af74f080bae19eddb075d2cb" gracePeriod=2 Nov 22 12:18:23 crc kubenswrapper[4772]: I1122 12:18:23.788463 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dj7c8" Nov 22 12:18:23 crc kubenswrapper[4772]: I1122 12:18:23.966584 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c3d9ff1-b290-4488-bb4a-595852b508b2-utilities\") pod \"7c3d9ff1-b290-4488-bb4a-595852b508b2\" (UID: \"7c3d9ff1-b290-4488-bb4a-595852b508b2\") " Nov 22 12:18:23 crc kubenswrapper[4772]: I1122 12:18:23.966692 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6wqb\" (UniqueName: \"kubernetes.io/projected/7c3d9ff1-b290-4488-bb4a-595852b508b2-kube-api-access-q6wqb\") pod \"7c3d9ff1-b290-4488-bb4a-595852b508b2\" (UID: \"7c3d9ff1-b290-4488-bb4a-595852b508b2\") " Nov 22 12:18:23 crc kubenswrapper[4772]: I1122 12:18:23.966739 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c3d9ff1-b290-4488-bb4a-595852b508b2-catalog-content\") pod \"7c3d9ff1-b290-4488-bb4a-595852b508b2\" (UID: \"7c3d9ff1-b290-4488-bb4a-595852b508b2\") " Nov 22 12:18:23 crc kubenswrapper[4772]: I1122 12:18:23.968043 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c3d9ff1-b290-4488-bb4a-595852b508b2-utilities" (OuterVolumeSpecName: "utilities") pod "7c3d9ff1-b290-4488-bb4a-595852b508b2" (UID: "7c3d9ff1-b290-4488-bb4a-595852b508b2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 12:18:23 crc kubenswrapper[4772]: I1122 12:18:23.971783 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c3d9ff1-b290-4488-bb4a-595852b508b2-kube-api-access-q6wqb" (OuterVolumeSpecName: "kube-api-access-q6wqb") pod "7c3d9ff1-b290-4488-bb4a-595852b508b2" (UID: "7c3d9ff1-b290-4488-bb4a-595852b508b2"). InnerVolumeSpecName "kube-api-access-q6wqb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:18:24 crc kubenswrapper[4772]: I1122 12:18:24.069288 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c3d9ff1-b290-4488-bb4a-595852b508b2-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 12:18:24 crc kubenswrapper[4772]: I1122 12:18:24.069347 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q6wqb\" (UniqueName: \"kubernetes.io/projected/7c3d9ff1-b290-4488-bb4a-595852b508b2-kube-api-access-q6wqb\") on node \"crc\" DevicePath \"\"" Nov 22 12:18:24 crc kubenswrapper[4772]: I1122 12:18:24.081939 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c3d9ff1-b290-4488-bb4a-595852b508b2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7c3d9ff1-b290-4488-bb4a-595852b508b2" (UID: "7c3d9ff1-b290-4488-bb4a-595852b508b2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 12:18:24 crc kubenswrapper[4772]: I1122 12:18:24.171323 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c3d9ff1-b290-4488-bb4a-595852b508b2-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 12:18:24 crc kubenswrapper[4772]: I1122 12:18:24.228482 4772 generic.go:334] "Generic (PLEG): container finished" podID="7c3d9ff1-b290-4488-bb4a-595852b508b2" containerID="728a5b19be67a899e294a34214566f5a945c8f44af74f080bae19eddb075d2cb" exitCode=0 Nov 22 12:18:24 crc kubenswrapper[4772]: I1122 12:18:24.228522 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dj7c8" event={"ID":"7c3d9ff1-b290-4488-bb4a-595852b508b2","Type":"ContainerDied","Data":"728a5b19be67a899e294a34214566f5a945c8f44af74f080bae19eddb075d2cb"} Nov 22 12:18:24 crc kubenswrapper[4772]: I1122 12:18:24.228556 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dj7c8" event={"ID":"7c3d9ff1-b290-4488-bb4a-595852b508b2","Type":"ContainerDied","Data":"f40b98d38a4bf061025a085aa8a1bdc7c5820cc618d0151c108e467218719744"} Nov 22 12:18:24 crc kubenswrapper[4772]: I1122 12:18:24.228576 4772 scope.go:117] "RemoveContainer" containerID="728a5b19be67a899e294a34214566f5a945c8f44af74f080bae19eddb075d2cb" Nov 22 12:18:24 crc kubenswrapper[4772]: I1122 12:18:24.228531 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dj7c8" Nov 22 12:18:24 crc kubenswrapper[4772]: I1122 12:18:24.254658 4772 scope.go:117] "RemoveContainer" containerID="089d822d92728bc1b299dbb2f11f573b12289723b5fc35717555070c97cf6a2c" Nov 22 12:18:24 crc kubenswrapper[4772]: I1122 12:18:24.262883 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dj7c8"] Nov 22 12:18:24 crc kubenswrapper[4772]: I1122 12:18:24.271512 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dj7c8"] Nov 22 12:18:24 crc kubenswrapper[4772]: I1122 12:18:24.296077 4772 scope.go:117] "RemoveContainer" containerID="7ae85a574f5a3d4e6b92b50b7f915c33b510effada5c46a3e601c70e0f20e92c" Nov 22 12:18:24 crc kubenswrapper[4772]: I1122 12:18:24.327791 4772 scope.go:117] "RemoveContainer" containerID="728a5b19be67a899e294a34214566f5a945c8f44af74f080bae19eddb075d2cb" Nov 22 12:18:24 crc kubenswrapper[4772]: E1122 12:18:24.328426 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"728a5b19be67a899e294a34214566f5a945c8f44af74f080bae19eddb075d2cb\": container with ID starting with 728a5b19be67a899e294a34214566f5a945c8f44af74f080bae19eddb075d2cb not found: ID does not exist" containerID="728a5b19be67a899e294a34214566f5a945c8f44af74f080bae19eddb075d2cb" Nov 22 12:18:24 crc kubenswrapper[4772]: I1122 12:18:24.328459 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"728a5b19be67a899e294a34214566f5a945c8f44af74f080bae19eddb075d2cb"} err="failed to get container status \"728a5b19be67a899e294a34214566f5a945c8f44af74f080bae19eddb075d2cb\": rpc error: code = NotFound desc = could not find container \"728a5b19be67a899e294a34214566f5a945c8f44af74f080bae19eddb075d2cb\": container with ID starting with 728a5b19be67a899e294a34214566f5a945c8f44af74f080bae19eddb075d2cb not found: ID does not exist" Nov 22 12:18:24 crc 
kubenswrapper[4772]: I1122 12:18:24.328480 4772 scope.go:117] "RemoveContainer" containerID="089d822d92728bc1b299dbb2f11f573b12289723b5fc35717555070c97cf6a2c" Nov 22 12:18:24 crc kubenswrapper[4772]: E1122 12:18:24.328721 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"089d822d92728bc1b299dbb2f11f573b12289723b5fc35717555070c97cf6a2c\": container with ID starting with 089d822d92728bc1b299dbb2f11f573b12289723b5fc35717555070c97cf6a2c not found: ID does not exist" containerID="089d822d92728bc1b299dbb2f11f573b12289723b5fc35717555070c97cf6a2c" Nov 22 12:18:24 crc kubenswrapper[4772]: I1122 12:18:24.328740 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"089d822d92728bc1b299dbb2f11f573b12289723b5fc35717555070c97cf6a2c"} err="failed to get container status \"089d822d92728bc1b299dbb2f11f573b12289723b5fc35717555070c97cf6a2c\": rpc error: code = NotFound desc = could not find container \"089d822d92728bc1b299dbb2f11f573b12289723b5fc35717555070c97cf6a2c\": container with ID starting with 089d822d92728bc1b299dbb2f11f573b12289723b5fc35717555070c97cf6a2c not found: ID does not exist" Nov 22 12:18:24 crc kubenswrapper[4772]: I1122 12:18:24.328755 4772 scope.go:117] "RemoveContainer" containerID="7ae85a574f5a3d4e6b92b50b7f915c33b510effada5c46a3e601c70e0f20e92c" Nov 22 12:18:24 crc kubenswrapper[4772]: E1122 12:18:24.328985 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ae85a574f5a3d4e6b92b50b7f915c33b510effada5c46a3e601c70e0f20e92c\": container with ID starting with 7ae85a574f5a3d4e6b92b50b7f915c33b510effada5c46a3e601c70e0f20e92c not found: ID does not exist" containerID="7ae85a574f5a3d4e6b92b50b7f915c33b510effada5c46a3e601c70e0f20e92c" Nov 22 12:18:24 crc kubenswrapper[4772]: I1122 12:18:24.329013 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ae85a574f5a3d4e6b92b50b7f915c33b510effada5c46a3e601c70e0f20e92c"} err="failed to get container status \"7ae85a574f5a3d4e6b92b50b7f915c33b510effada5c46a3e601c70e0f20e92c\": rpc error: code = NotFound desc = could not find container \"7ae85a574f5a3d4e6b92b50b7f915c33b510effada5c46a3e601c70e0f20e92c\": container with ID starting with 7ae85a574f5a3d4e6b92b50b7f915c33b510effada5c46a3e601c70e0f20e92c not found: ID does not exist" Nov 22 12:18:25 crc kubenswrapper[4772]: I1122 12:18:25.429593 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c3d9ff1-b290-4488-bb4a-595852b508b2" path="/var/lib/kubelet/pods/7c3d9ff1-b290-4488-bb4a-595852b508b2/volumes" Nov 22 12:18:28 crc kubenswrapper[4772]: I1122 12:18:28.416456 4772 scope.go:117] "RemoveContainer" containerID="0acc780374533c91fb6e94ed6fa4eb88f1ad8ecfc715db40059fc9377d70e080" Nov 22 12:18:28 crc kubenswrapper[4772]: E1122 12:18:28.417257 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:18:30 crc kubenswrapper[4772]: I1122 12:18:30.054156 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-fpk6z"] Nov 22 12:18:30 crc 
kubenswrapper[4772]: I1122 12:18:30.065965 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-fpk6z"] Nov 22 12:18:31 crc kubenswrapper[4772]: I1122 12:18:31.436325 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf86649f-7929-4772-9aad-c40ab73ee61c" path="/var/lib/kubelet/pods/bf86649f-7929-4772-9aad-c40ab73ee61c/volumes" Nov 22 12:18:43 crc kubenswrapper[4772]: I1122 12:18:43.415421 4772 scope.go:117] "RemoveContainer" containerID="0acc780374533c91fb6e94ed6fa4eb88f1ad8ecfc715db40059fc9377d70e080" Nov 22 12:18:43 crc kubenswrapper[4772]: E1122 12:18:43.417527 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:18:49 crc kubenswrapper[4772]: I1122 12:18:49.226893 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7d5f74986f-pcncb"] Nov 22 12:18:49 crc kubenswrapper[4772]: E1122 12:18:49.235935 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efce4fef-1a79-484b-9d48-2b7531f3dedc" containerName="registry-server" Nov 22 12:18:49 crc kubenswrapper[4772]: I1122 12:18:49.236032 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="efce4fef-1a79-484b-9d48-2b7531f3dedc" containerName="registry-server" Nov 22 12:18:49 crc kubenswrapper[4772]: E1122 12:18:49.236152 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efce4fef-1a79-484b-9d48-2b7531f3dedc" containerName="extract-content" Nov 22 12:18:49 crc kubenswrapper[4772]: I1122 12:18:49.236232 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="efce4fef-1a79-484b-9d48-2b7531f3dedc" containerName="extract-content" Nov 22 12:18:49 crc kubenswrapper[4772]: E1122 12:18:49.236311 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c3d9ff1-b290-4488-bb4a-595852b508b2" containerName="extract-utilities" Nov 22 12:18:49 crc kubenswrapper[4772]: I1122 12:18:49.236373 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c3d9ff1-b290-4488-bb4a-595852b508b2" containerName="extract-utilities" Nov 22 12:18:49 crc kubenswrapper[4772]: E1122 12:18:49.236450 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c3d9ff1-b290-4488-bb4a-595852b508b2" containerName="extract-content" Nov 22 12:18:49 crc kubenswrapper[4772]: I1122 12:18:49.236510 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c3d9ff1-b290-4488-bb4a-595852b508b2" containerName="extract-content" Nov 22 12:18:49 crc kubenswrapper[4772]: E1122 12:18:49.236572 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efce4fef-1a79-484b-9d48-2b7531f3dedc" containerName="extract-utilities" Nov 22 12:18:49 crc kubenswrapper[4772]: I1122 12:18:49.236632 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="efce4fef-1a79-484b-9d48-2b7531f3dedc" containerName="extract-utilities" Nov 22 12:18:49 crc kubenswrapper[4772]: E1122 12:18:49.236699 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c3d9ff1-b290-4488-bb4a-595852b508b2" containerName="registry-server" Nov 22 12:18:49 crc kubenswrapper[4772]: I1122 12:18:49.236754 4772 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7c3d9ff1-b290-4488-bb4a-595852b508b2" containerName="registry-server" Nov 22 12:18:49 crc kubenswrapper[4772]: I1122 12:18:49.237056 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="efce4fef-1a79-484b-9d48-2b7531f3dedc" containerName="registry-server" Nov 22 12:18:49 crc kubenswrapper[4772]: I1122 12:18:49.237146 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c3d9ff1-b290-4488-bb4a-595852b508b2" containerName="registry-server" Nov 22 12:18:49 crc kubenswrapper[4772]: I1122 12:18:49.238657 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7d5f74986f-pcncb" Nov 22 12:18:49 crc kubenswrapper[4772]: I1122 12:18:49.262158 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Nov 22 12:18:49 crc kubenswrapper[4772]: I1122 12:18:49.262650 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-xm7gk" Nov 22 12:18:49 crc kubenswrapper[4772]: I1122 12:18:49.262356 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Nov 22 12:18:49 crc kubenswrapper[4772]: I1122 12:18:49.263872 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Nov 22 12:18:49 crc kubenswrapper[4772]: I1122 12:18:49.305253 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7d5f74986f-pcncb"] Nov 22 12:18:49 crc kubenswrapper[4772]: I1122 12:18:49.315654 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51ea1ae0-b297-409e-829f-4f33a81969f7-logs\") pod \"horizon-7d5f74986f-pcncb\" (UID: \"51ea1ae0-b297-409e-829f-4f33a81969f7\") " pod="openstack/horizon-7d5f74986f-pcncb" Nov 22 12:18:49 crc kubenswrapper[4772]: I1122 12:18:49.316132 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/51ea1ae0-b297-409e-829f-4f33a81969f7-config-data\") pod \"horizon-7d5f74986f-pcncb\" (UID: \"51ea1ae0-b297-409e-829f-4f33a81969f7\") " pod="openstack/horizon-7d5f74986f-pcncb" Nov 22 12:18:49 crc kubenswrapper[4772]: I1122 12:18:49.316509 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x42js\" (UniqueName: \"kubernetes.io/projected/51ea1ae0-b297-409e-829f-4f33a81969f7-kube-api-access-x42js\") pod \"horizon-7d5f74986f-pcncb\" (UID: \"51ea1ae0-b297-409e-829f-4f33a81969f7\") " pod="openstack/horizon-7d5f74986f-pcncb" Nov 22 12:18:49 crc kubenswrapper[4772]: I1122 12:18:49.316563 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/51ea1ae0-b297-409e-829f-4f33a81969f7-scripts\") pod \"horizon-7d5f74986f-pcncb\" (UID: \"51ea1ae0-b297-409e-829f-4f33a81969f7\") " pod="openstack/horizon-7d5f74986f-pcncb" Nov 22 12:18:49 crc kubenswrapper[4772]: I1122 12:18:49.316665 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/51ea1ae0-b297-409e-829f-4f33a81969f7-horizon-secret-key\") pod \"horizon-7d5f74986f-pcncb\" (UID: \"51ea1ae0-b297-409e-829f-4f33a81969f7\") " pod="openstack/horizon-7d5f74986f-pcncb" Nov 22 12:18:49 crc kubenswrapper[4772]: I1122 12:18:49.348958 4772 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 12:18:49 crc kubenswrapper[4772]: I1122 12:18:49.349366 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="d2af13a2-6cfc-469f-a829-b2cecbfd7129" containerName="glance-log" containerID="cri-o://e3090aafb8ce947930a69231fc5c63e9272c4052f297fd0b9d3de9ccbf57f082" gracePeriod=30 Nov 22 12:18:49 crc kubenswrapper[4772]: I1122 12:18:49.349533 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="d2af13a2-6cfc-469f-a829-b2cecbfd7129" containerName="glance-httpd" containerID="cri-o://b57a5ef889b4949e6e2e2507385733a8946bc153b1f2aeeb78f80a9763aa4a59" gracePeriod=30 Nov 22 12:18:49 crc kubenswrapper[4772]: I1122 12:18:49.425265 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x42js\" (UniqueName: \"kubernetes.io/projected/51ea1ae0-b297-409e-829f-4f33a81969f7-kube-api-access-x42js\") pod \"horizon-7d5f74986f-pcncb\" (UID: \"51ea1ae0-b297-409e-829f-4f33a81969f7\") " pod="openstack/horizon-7d5f74986f-pcncb" Nov 22 12:18:49 crc kubenswrapper[4772]: I1122 12:18:49.425324 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/51ea1ae0-b297-409e-829f-4f33a81969f7-scripts\") pod \"horizon-7d5f74986f-pcncb\" (UID: \"51ea1ae0-b297-409e-829f-4f33a81969f7\") " pod="openstack/horizon-7d5f74986f-pcncb" Nov 22 12:18:49 crc kubenswrapper[4772]: I1122 12:18:49.425364 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/51ea1ae0-b297-409e-829f-4f33a81969f7-horizon-secret-key\") pod \"horizon-7d5f74986f-pcncb\" (UID: \"51ea1ae0-b297-409e-829f-4f33a81969f7\") " pod="openstack/horizon-7d5f74986f-pcncb" Nov 22 12:18:49 crc kubenswrapper[4772]: I1122 12:18:49.425407 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51ea1ae0-b297-409e-829f-4f33a81969f7-logs\") pod \"horizon-7d5f74986f-pcncb\" (UID: \"51ea1ae0-b297-409e-829f-4f33a81969f7\") " pod="openstack/horizon-7d5f74986f-pcncb" Nov 22 12:18:49 crc kubenswrapper[4772]: I1122 12:18:49.425481 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/51ea1ae0-b297-409e-829f-4f33a81969f7-config-data\") pod \"horizon-7d5f74986f-pcncb\" (UID: \"51ea1ae0-b297-409e-829f-4f33a81969f7\") " pod="openstack/horizon-7d5f74986f-pcncb" Nov 22 12:18:49 crc kubenswrapper[4772]: I1122 12:18:49.426638 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51ea1ae0-b297-409e-829f-4f33a81969f7-logs\") pod \"horizon-7d5f74986f-pcncb\" (UID: \"51ea1ae0-b297-409e-829f-4f33a81969f7\") " pod="openstack/horizon-7d5f74986f-pcncb" Nov 22 12:18:49 crc kubenswrapper[4772]: I1122 12:18:49.427296 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/51ea1ae0-b297-409e-829f-4f33a81969f7-scripts\") pod \"horizon-7d5f74986f-pcncb\" (UID: \"51ea1ae0-b297-409e-829f-4f33a81969f7\") " pod="openstack/horizon-7d5f74986f-pcncb" Nov 22 12:18:49 crc kubenswrapper[4772]: I1122 12:18:49.429366 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/51ea1ae0-b297-409e-829f-4f33a81969f7-config-data\") pod \"horizon-7d5f74986f-pcncb\" (UID: \"51ea1ae0-b297-409e-829f-4f33a81969f7\") " pod="openstack/horizon-7d5f74986f-pcncb" Nov 22 12:18:49 crc kubenswrapper[4772]: I1122 12:18:49.461578 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-556dc567c7-q9m47"] Nov 22 12:18:49 crc kubenswrapper[4772]: I1122 12:18:49.471514 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/51ea1ae0-b297-409e-829f-4f33a81969f7-horizon-secret-key\") pod \"horizon-7d5f74986f-pcncb\" (UID: \"51ea1ae0-b297-409e-829f-4f33a81969f7\") " pod="openstack/horizon-7d5f74986f-pcncb" Nov 22 12:18:49 crc kubenswrapper[4772]: I1122 12:18:49.478278 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x42js\" (UniqueName: \"kubernetes.io/projected/51ea1ae0-b297-409e-829f-4f33a81969f7-kube-api-access-x42js\") pod \"horizon-7d5f74986f-pcncb\" (UID: \"51ea1ae0-b297-409e-829f-4f33a81969f7\") " pod="openstack/horizon-7d5f74986f-pcncb" Nov 22 12:18:49 crc kubenswrapper[4772]: I1122 12:18:49.482341 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 12:18:49 crc kubenswrapper[4772]: I1122 12:18:49.483193 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-556dc567c7-q9m47"] Nov 22 12:18:49 crc kubenswrapper[4772]: I1122 12:18:49.483578 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="a1c011b9-37e2-47b1-b7fe-ad03217939d3" containerName="glance-log" containerID="cri-o://1439d7fdb0c31c5001033ba23fdd357d53fda61aa290d48a078dda6a90a89ea2" gracePeriod=30 Nov 22 12:18:49 crc kubenswrapper[4772]: I1122 12:18:49.483740 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="a1c011b9-37e2-47b1-b7fe-ad03217939d3" containerName="glance-httpd" containerID="cri-o://2bf23bb37c70f25de81fd2d4252ec45250ee3aa6798dda98db7d14bc6ba43d5d" gracePeriod=30 Nov 22 12:18:49 crc kubenswrapper[4772]: I1122 12:18:49.482453 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-556dc567c7-q9m47" Nov 22 12:18:49 crc kubenswrapper[4772]: I1122 12:18:49.564781 4772 generic.go:334] "Generic (PLEG): container finished" podID="d2af13a2-6cfc-469f-a829-b2cecbfd7129" containerID="e3090aafb8ce947930a69231fc5c63e9272c4052f297fd0b9d3de9ccbf57f082" exitCode=143 Nov 22 12:18:49 crc kubenswrapper[4772]: I1122 12:18:49.564854 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d2af13a2-6cfc-469f-a829-b2cecbfd7129","Type":"ContainerDied","Data":"e3090aafb8ce947930a69231fc5c63e9272c4052f297fd0b9d3de9ccbf57f082"} Nov 22 12:18:49 crc kubenswrapper[4772]: I1122 12:18:49.595272 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7d5f74986f-pcncb" Nov 22 12:18:49 crc kubenswrapper[4772]: I1122 12:18:49.643157 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b524f31a-4a4d-4d71-b51f-5e438b5e183c-config-data\") pod \"horizon-556dc567c7-q9m47\" (UID: \"b524f31a-4a4d-4d71-b51f-5e438b5e183c\") " pod="openstack/horizon-556dc567c7-q9m47" Nov 22 12:18:49 crc kubenswrapper[4772]: I1122 12:18:49.643718 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b524f31a-4a4d-4d71-b51f-5e438b5e183c-logs\") pod \"horizon-556dc567c7-q9m47\" (UID: \"b524f31a-4a4d-4d71-b51f-5e438b5e183c\") " pod="openstack/horizon-556dc567c7-q9m47" Nov 22 12:18:49 crc kubenswrapper[4772]: I1122 12:18:49.643782 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b524f31a-4a4d-4d71-b51f-5e438b5e183c-horizon-secret-key\") pod \"horizon-556dc567c7-q9m47\" (UID: \"b524f31a-4a4d-4d71-b51f-5e438b5e183c\") " pod="openstack/horizon-556dc567c7-q9m47" Nov 22 12:18:49 crc kubenswrapper[4772]: I1122 12:18:49.643843 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lmsj\" (UniqueName: \"kubernetes.io/projected/b524f31a-4a4d-4d71-b51f-5e438b5e183c-kube-api-access-7lmsj\") pod \"horizon-556dc567c7-q9m47\" (UID: \"b524f31a-4a4d-4d71-b51f-5e438b5e183c\") " pod="openstack/horizon-556dc567c7-q9m47" Nov 22 12:18:49 crc kubenswrapper[4772]: I1122 12:18:49.643887 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b524f31a-4a4d-4d71-b51f-5e438b5e183c-scripts\") pod \"horizon-556dc567c7-q9m47\" (UID: \"b524f31a-4a4d-4d71-b51f-5e438b5e183c\") " pod="openstack/horizon-556dc567c7-q9m47" Nov 22 12:18:49 crc kubenswrapper[4772]: I1122 12:18:49.749536 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b524f31a-4a4d-4d71-b51f-5e438b5e183c-config-data\") pod \"horizon-556dc567c7-q9m47\" (UID: \"b524f31a-4a4d-4d71-b51f-5e438b5e183c\") " pod="openstack/horizon-556dc567c7-q9m47" Nov 22 12:18:49 crc kubenswrapper[4772]: I1122 12:18:49.749628 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b524f31a-4a4d-4d71-b51f-5e438b5e183c-logs\") pod \"horizon-556dc567c7-q9m47\" (UID: \"b524f31a-4a4d-4d71-b51f-5e438b5e183c\") " pod="openstack/horizon-556dc567c7-q9m47" Nov 22 12:18:49 crc kubenswrapper[4772]: I1122 12:18:49.749708 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b524f31a-4a4d-4d71-b51f-5e438b5e183c-horizon-secret-key\") pod \"horizon-556dc567c7-q9m47\" (UID: \"b524f31a-4a4d-4d71-b51f-5e438b5e183c\") " pod="openstack/horizon-556dc567c7-q9m47" Nov 22 12:18:49 crc kubenswrapper[4772]: I1122 12:18:49.749789 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lmsj\" (UniqueName: \"kubernetes.io/projected/b524f31a-4a4d-4d71-b51f-5e438b5e183c-kube-api-access-7lmsj\") pod \"horizon-556dc567c7-q9m47\" (UID: \"b524f31a-4a4d-4d71-b51f-5e438b5e183c\") " 
pod="openstack/horizon-556dc567c7-q9m47" Nov 22 12:18:49 crc kubenswrapper[4772]: I1122 12:18:49.749859 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b524f31a-4a4d-4d71-b51f-5e438b5e183c-scripts\") pod \"horizon-556dc567c7-q9m47\" (UID: \"b524f31a-4a4d-4d71-b51f-5e438b5e183c\") " pod="openstack/horizon-556dc567c7-q9m47" Nov 22 12:18:49 crc kubenswrapper[4772]: I1122 12:18:49.750551 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b524f31a-4a4d-4d71-b51f-5e438b5e183c-logs\") pod \"horizon-556dc567c7-q9m47\" (UID: \"b524f31a-4a4d-4d71-b51f-5e438b5e183c\") " pod="openstack/horizon-556dc567c7-q9m47" Nov 22 12:18:49 crc kubenswrapper[4772]: I1122 12:18:49.751730 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b524f31a-4a4d-4d71-b51f-5e438b5e183c-scripts\") pod \"horizon-556dc567c7-q9m47\" (UID: \"b524f31a-4a4d-4d71-b51f-5e438b5e183c\") " pod="openstack/horizon-556dc567c7-q9m47" Nov 22 12:18:49 crc kubenswrapper[4772]: I1122 12:18:49.752214 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b524f31a-4a4d-4d71-b51f-5e438b5e183c-config-data\") pod \"horizon-556dc567c7-q9m47\" (UID: \"b524f31a-4a4d-4d71-b51f-5e438b5e183c\") " pod="openstack/horizon-556dc567c7-q9m47" Nov 22 12:18:49 crc kubenswrapper[4772]: I1122 12:18:49.764242 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b524f31a-4a4d-4d71-b51f-5e438b5e183c-horizon-secret-key\") pod \"horizon-556dc567c7-q9m47\" (UID: \"b524f31a-4a4d-4d71-b51f-5e438b5e183c\") " pod="openstack/horizon-556dc567c7-q9m47" Nov 22 12:18:49 crc kubenswrapper[4772]: I1122 12:18:49.783526 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lmsj\" (UniqueName: \"kubernetes.io/projected/b524f31a-4a4d-4d71-b51f-5e438b5e183c-kube-api-access-7lmsj\") pod \"horizon-556dc567c7-q9m47\" (UID: \"b524f31a-4a4d-4d71-b51f-5e438b5e183c\") " pod="openstack/horizon-556dc567c7-q9m47" Nov 22 12:18:49 crc kubenswrapper[4772]: I1122 12:18:49.868167 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-556dc567c7-q9m47" Nov 22 12:18:50 crc kubenswrapper[4772]: I1122 12:18:50.070964 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-556dc567c7-q9m47"] Nov 22 12:18:50 crc kubenswrapper[4772]: I1122 12:18:50.113274 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-d58fc79d5-75vz8"] Nov 22 12:18:50 crc kubenswrapper[4772]: I1122 12:18:50.115505 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-d58fc79d5-75vz8" Nov 22 12:18:50 crc kubenswrapper[4772]: I1122 12:18:50.127826 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-d58fc79d5-75vz8"] Nov 22 12:18:50 crc kubenswrapper[4772]: I1122 12:18:50.184904 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7d5f74986f-pcncb"] Nov 22 12:18:50 crc kubenswrapper[4772]: I1122 12:18:50.196869 4772 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 12:18:50 crc kubenswrapper[4772]: I1122 12:18:50.263148 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fe3609b5-6832-4242-8b90-bcc0c15544b5-config-data\") pod \"horizon-d58fc79d5-75vz8\" (UID: \"fe3609b5-6832-4242-8b90-bcc0c15544b5\") " pod="openstack/horizon-d58fc79d5-75vz8" Nov 22 12:18:50 crc kubenswrapper[4772]: I1122 12:18:50.263225 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fe3609b5-6832-4242-8b90-bcc0c15544b5-scripts\") pod \"horizon-d58fc79d5-75vz8\" (UID: \"fe3609b5-6832-4242-8b90-bcc0c15544b5\") " pod="openstack/horizon-d58fc79d5-75vz8" Nov 22 12:18:50 crc kubenswrapper[4772]: I1122 12:18:50.263258 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe3609b5-6832-4242-8b90-bcc0c15544b5-logs\") pod \"horizon-d58fc79d5-75vz8\" (UID: \"fe3609b5-6832-4242-8b90-bcc0c15544b5\") " pod="openstack/horizon-d58fc79d5-75vz8" Nov 22 12:18:50 crc kubenswrapper[4772]: I1122 12:18:50.263365 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkbjf\" (UniqueName: \"kubernetes.io/projected/fe3609b5-6832-4242-8b90-bcc0c15544b5-kube-api-access-bkbjf\") pod \"horizon-d58fc79d5-75vz8\" (UID: \"fe3609b5-6832-4242-8b90-bcc0c15544b5\") " pod="openstack/horizon-d58fc79d5-75vz8" Nov 22 12:18:50 crc kubenswrapper[4772]: I1122 12:18:50.263419 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fe3609b5-6832-4242-8b90-bcc0c15544b5-horizon-secret-key\") pod \"horizon-d58fc79d5-75vz8\" (UID: \"fe3609b5-6832-4242-8b90-bcc0c15544b5\") " pod="openstack/horizon-d58fc79d5-75vz8" Nov 22 12:18:50 crc kubenswrapper[4772]: I1122 12:18:50.366829 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fe3609b5-6832-4242-8b90-bcc0c15544b5-config-data\") pod \"horizon-d58fc79d5-75vz8\" (UID: \"fe3609b5-6832-4242-8b90-bcc0c15544b5\") " pod="openstack/horizon-d58fc79d5-75vz8" Nov 22 12:18:50 crc kubenswrapper[4772]: I1122 12:18:50.366915 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fe3609b5-6832-4242-8b90-bcc0c15544b5-scripts\") pod \"horizon-d58fc79d5-75vz8\" (UID: \"fe3609b5-6832-4242-8b90-bcc0c15544b5\") " pod="openstack/horizon-d58fc79d5-75vz8" Nov 22 12:18:50 crc kubenswrapper[4772]: I1122 12:18:50.366952 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe3609b5-6832-4242-8b90-bcc0c15544b5-logs\") pod \"horizon-d58fc79d5-75vz8\" (UID: 
\"fe3609b5-6832-4242-8b90-bcc0c15544b5\") " pod="openstack/horizon-d58fc79d5-75vz8" Nov 22 12:18:50 crc kubenswrapper[4772]: I1122 12:18:50.366992 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkbjf\" (UniqueName: \"kubernetes.io/projected/fe3609b5-6832-4242-8b90-bcc0c15544b5-kube-api-access-bkbjf\") pod \"horizon-d58fc79d5-75vz8\" (UID: \"fe3609b5-6832-4242-8b90-bcc0c15544b5\") " pod="openstack/horizon-d58fc79d5-75vz8" Nov 22 12:18:50 crc kubenswrapper[4772]: I1122 12:18:50.367699 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fe3609b5-6832-4242-8b90-bcc0c15544b5-horizon-secret-key\") pod \"horizon-d58fc79d5-75vz8\" (UID: \"fe3609b5-6832-4242-8b90-bcc0c15544b5\") " pod="openstack/horizon-d58fc79d5-75vz8" Nov 22 12:18:50 crc kubenswrapper[4772]: I1122 12:18:50.367767 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe3609b5-6832-4242-8b90-bcc0c15544b5-logs\") pod \"horizon-d58fc79d5-75vz8\" (UID: \"fe3609b5-6832-4242-8b90-bcc0c15544b5\") " pod="openstack/horizon-d58fc79d5-75vz8" Nov 22 12:18:50 crc kubenswrapper[4772]: I1122 12:18:50.368554 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fe3609b5-6832-4242-8b90-bcc0c15544b5-scripts\") pod \"horizon-d58fc79d5-75vz8\" (UID: \"fe3609b5-6832-4242-8b90-bcc0c15544b5\") " pod="openstack/horizon-d58fc79d5-75vz8" Nov 22 12:18:50 crc kubenswrapper[4772]: I1122 12:18:50.368742 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fe3609b5-6832-4242-8b90-bcc0c15544b5-config-data\") pod \"horizon-d58fc79d5-75vz8\" (UID: \"fe3609b5-6832-4242-8b90-bcc0c15544b5\") " pod="openstack/horizon-d58fc79d5-75vz8" Nov 22 12:18:50 crc kubenswrapper[4772]: I1122 12:18:50.376529 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fe3609b5-6832-4242-8b90-bcc0c15544b5-horizon-secret-key\") pod \"horizon-d58fc79d5-75vz8\" (UID: \"fe3609b5-6832-4242-8b90-bcc0c15544b5\") " pod="openstack/horizon-d58fc79d5-75vz8" Nov 22 12:18:50 crc kubenswrapper[4772]: I1122 12:18:50.387235 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkbjf\" (UniqueName: \"kubernetes.io/projected/fe3609b5-6832-4242-8b90-bcc0c15544b5-kube-api-access-bkbjf\") pod \"horizon-d58fc79d5-75vz8\" (UID: \"fe3609b5-6832-4242-8b90-bcc0c15544b5\") " pod="openstack/horizon-d58fc79d5-75vz8" Nov 22 12:18:50 crc kubenswrapper[4772]: I1122 12:18:50.456705 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-d58fc79d5-75vz8" Nov 22 12:18:50 crc kubenswrapper[4772]: I1122 12:18:50.458678 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-556dc567c7-q9m47"] Nov 22 12:18:50 crc kubenswrapper[4772]: I1122 12:18:50.594626 4772 generic.go:334] "Generic (PLEG): container finished" podID="a1c011b9-37e2-47b1-b7fe-ad03217939d3" containerID="1439d7fdb0c31c5001033ba23fdd357d53fda61aa290d48a078dda6a90a89ea2" exitCode=143 Nov 22 12:18:50 crc kubenswrapper[4772]: I1122 12:18:50.594693 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a1c011b9-37e2-47b1-b7fe-ad03217939d3","Type":"ContainerDied","Data":"1439d7fdb0c31c5001033ba23fdd357d53fda61aa290d48a078dda6a90a89ea2"} Nov 22 12:18:50 crc kubenswrapper[4772]: I1122 12:18:50.597184 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-556dc567c7-q9m47" event={"ID":"b524f31a-4a4d-4d71-b51f-5e438b5e183c","Type":"ContainerStarted","Data":"58caece266ca2c1b3db4f2256d1618a244bd6785864e4e821f9486c12f268c47"} Nov 22 12:18:50 crc kubenswrapper[4772]: I1122 12:18:50.603735 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7d5f74986f-pcncb" event={"ID":"51ea1ae0-b297-409e-829f-4f33a81969f7","Type":"ContainerStarted","Data":"c62812fe75fcf4efe131c0c1013a2029e29fe8c892a5bfd990b2ee2808d8b0d1"} Nov 22 12:18:50 crc kubenswrapper[4772]: I1122 12:18:50.979128 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-d58fc79d5-75vz8"] Nov 22 12:18:51 crc kubenswrapper[4772]: I1122 12:18:51.625254 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-d58fc79d5-75vz8" event={"ID":"fe3609b5-6832-4242-8b90-bcc0c15544b5","Type":"ContainerStarted","Data":"c480fa91bfe4ce838c0121e9cdf6ca0049633ac4092cdbd7b09c1c9e85160cf1"} Nov 22 12:18:53 crc kubenswrapper[4772]: I1122 12:18:53.654817 4772 generic.go:334] "Generic (PLEG): container finished" podID="a1c011b9-37e2-47b1-b7fe-ad03217939d3" containerID="2bf23bb37c70f25de81fd2d4252ec45250ee3aa6798dda98db7d14bc6ba43d5d" exitCode=0 Nov 22 12:18:53 crc kubenswrapper[4772]: I1122 12:18:53.655104 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a1c011b9-37e2-47b1-b7fe-ad03217939d3","Type":"ContainerDied","Data":"2bf23bb37c70f25de81fd2d4252ec45250ee3aa6798dda98db7d14bc6ba43d5d"} Nov 22 12:18:53 crc kubenswrapper[4772]: I1122 12:18:53.660556 4772 generic.go:334] "Generic (PLEG): container finished" podID="d2af13a2-6cfc-469f-a829-b2cecbfd7129" containerID="b57a5ef889b4949e6e2e2507385733a8946bc153b1f2aeeb78f80a9763aa4a59" exitCode=0 Nov 22 12:18:53 crc kubenswrapper[4772]: I1122 12:18:53.660590 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d2af13a2-6cfc-469f-a829-b2cecbfd7129","Type":"ContainerDied","Data":"b57a5ef889b4949e6e2e2507385733a8946bc153b1f2aeeb78f80a9763aa4a59"} Nov 22 12:18:54 crc kubenswrapper[4772]: I1122 12:18:54.414274 4772 scope.go:117] "RemoveContainer" containerID="0acc780374533c91fb6e94ed6fa4eb88f1ad8ecfc715db40059fc9377d70e080" Nov 22 12:18:54 crc kubenswrapper[4772]: E1122 12:18:54.415087 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:18:58 crc kubenswrapper[4772]: I1122 12:18:58.071867 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 12:18:58 crc kubenswrapper[4772]: I1122 12:18:58.183027 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1c011b9-37e2-47b1-b7fe-ad03217939d3-scripts\") pod \"a1c011b9-37e2-47b1-b7fe-ad03217939d3\" (UID: \"a1c011b9-37e2-47b1-b7fe-ad03217939d3\") " Nov 22 12:18:58 crc kubenswrapper[4772]: I1122 12:18:58.183243 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1c011b9-37e2-47b1-b7fe-ad03217939d3-combined-ca-bundle\") pod \"a1c011b9-37e2-47b1-b7fe-ad03217939d3\" (UID: \"a1c011b9-37e2-47b1-b7fe-ad03217939d3\") " Nov 22 12:18:58 crc kubenswrapper[4772]: I1122 12:18:58.183278 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1c011b9-37e2-47b1-b7fe-ad03217939d3-logs\") pod \"a1c011b9-37e2-47b1-b7fe-ad03217939d3\" (UID: \"a1c011b9-37e2-47b1-b7fe-ad03217939d3\") " Nov 22 12:18:58 crc kubenswrapper[4772]: I1122 12:18:58.183317 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a1c011b9-37e2-47b1-b7fe-ad03217939d3-httpd-run\") pod \"a1c011b9-37e2-47b1-b7fe-ad03217939d3\" (UID: \"a1c011b9-37e2-47b1-b7fe-ad03217939d3\") " Nov 22 12:18:58 crc kubenswrapper[4772]: I1122 12:18:58.183367 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79shj\" (UniqueName: \"kubernetes.io/projected/a1c011b9-37e2-47b1-b7fe-ad03217939d3-kube-api-access-79shj\") pod \"a1c011b9-37e2-47b1-b7fe-ad03217939d3\" (UID: \"a1c011b9-37e2-47b1-b7fe-ad03217939d3\") " Nov 22 12:18:58 crc kubenswrapper[4772]: I1122 12:18:58.183460 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1c011b9-37e2-47b1-b7fe-ad03217939d3-config-data\") pod \"a1c011b9-37e2-47b1-b7fe-ad03217939d3\" (UID: \"a1c011b9-37e2-47b1-b7fe-ad03217939d3\") " Nov 22 12:18:58 crc kubenswrapper[4772]: I1122 12:18:58.183595 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a1c011b9-37e2-47b1-b7fe-ad03217939d3-ceph\") pod \"a1c011b9-37e2-47b1-b7fe-ad03217939d3\" (UID: \"a1c011b9-37e2-47b1-b7fe-ad03217939d3\") " Nov 22 12:18:58 crc kubenswrapper[4772]: I1122 12:18:58.186753 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1c011b9-37e2-47b1-b7fe-ad03217939d3-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "a1c011b9-37e2-47b1-b7fe-ad03217939d3" (UID: "a1c011b9-37e2-47b1-b7fe-ad03217939d3"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 12:18:58 crc kubenswrapper[4772]: I1122 12:18:58.186766 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1c011b9-37e2-47b1-b7fe-ad03217939d3-logs" (OuterVolumeSpecName: "logs") pod "a1c011b9-37e2-47b1-b7fe-ad03217939d3" (UID: "a1c011b9-37e2-47b1-b7fe-ad03217939d3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 12:18:58 crc kubenswrapper[4772]: I1122 12:18:58.198650 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1c011b9-37e2-47b1-b7fe-ad03217939d3-kube-api-access-79shj" (OuterVolumeSpecName: "kube-api-access-79shj") pod "a1c011b9-37e2-47b1-b7fe-ad03217939d3" (UID: "a1c011b9-37e2-47b1-b7fe-ad03217939d3"). InnerVolumeSpecName "kube-api-access-79shj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:18:58 crc kubenswrapper[4772]: I1122 12:18:58.214404 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1c011b9-37e2-47b1-b7fe-ad03217939d3-scripts" (OuterVolumeSpecName: "scripts") pod "a1c011b9-37e2-47b1-b7fe-ad03217939d3" (UID: "a1c011b9-37e2-47b1-b7fe-ad03217939d3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:18:58 crc kubenswrapper[4772]: I1122 12:18:58.221271 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1c011b9-37e2-47b1-b7fe-ad03217939d3-ceph" (OuterVolumeSpecName: "ceph") pod "a1c011b9-37e2-47b1-b7fe-ad03217939d3" (UID: "a1c011b9-37e2-47b1-b7fe-ad03217939d3"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:18:58 crc kubenswrapper[4772]: I1122 12:18:58.294875 4772 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a1c011b9-37e2-47b1-b7fe-ad03217939d3-ceph\") on node \"crc\" DevicePath \"\"" Nov 22 12:18:58 crc kubenswrapper[4772]: I1122 12:18:58.295996 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1c011b9-37e2-47b1-b7fe-ad03217939d3-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 12:18:58 crc kubenswrapper[4772]: I1122 12:18:58.296685 4772 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1c011b9-37e2-47b1-b7fe-ad03217939d3-logs\") on node \"crc\" DevicePath \"\"" Nov 22 12:18:58 crc kubenswrapper[4772]: I1122 12:18:58.297176 4772 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a1c011b9-37e2-47b1-b7fe-ad03217939d3-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 22 12:18:58 crc kubenswrapper[4772]: I1122 12:18:58.297276 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79shj\" (UniqueName: \"kubernetes.io/projected/a1c011b9-37e2-47b1-b7fe-ad03217939d3-kube-api-access-79shj\") on node \"crc\" DevicePath \"\"" Nov 22 12:18:58 crc kubenswrapper[4772]: I1122 12:18:58.295512 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1c011b9-37e2-47b1-b7fe-ad03217939d3-config-data" (OuterVolumeSpecName: "config-data") pod "a1c011b9-37e2-47b1-b7fe-ad03217939d3" (UID: "a1c011b9-37e2-47b1-b7fe-ad03217939d3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:18:58 crc kubenswrapper[4772]: I1122 12:18:58.295840 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1c011b9-37e2-47b1-b7fe-ad03217939d3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a1c011b9-37e2-47b1-b7fe-ad03217939d3" (UID: "a1c011b9-37e2-47b1-b7fe-ad03217939d3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:18:58 crc kubenswrapper[4772]: I1122 12:18:58.399865 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1c011b9-37e2-47b1-b7fe-ad03217939d3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 12:18:58 crc kubenswrapper[4772]: I1122 12:18:58.400603 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1c011b9-37e2-47b1-b7fe-ad03217939d3-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 12:18:59 crc kubenswrapper[4772]: I1122 12:18:58.736888 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a1c011b9-37e2-47b1-b7fe-ad03217939d3","Type":"ContainerDied","Data":"678e34d7b6889f8aec2c2fb63b1efe51430767e40665f3a4fb3109b32f729ef8"} Nov 22 12:18:59 crc kubenswrapper[4772]: I1122 12:18:58.736942 4772 scope.go:117] "RemoveContainer" containerID="2bf23bb37c70f25de81fd2d4252ec45250ee3aa6798dda98db7d14bc6ba43d5d" Nov 22 12:18:59 crc kubenswrapper[4772]: I1122 12:18:58.737138 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 12:18:59 crc kubenswrapper[4772]: I1122 12:18:58.744514 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-556dc567c7-q9m47" event={"ID":"b524f31a-4a4d-4d71-b51f-5e438b5e183c","Type":"ContainerStarted","Data":"04e47016575c151e6a576dea52958709ef046ae474396b5c5e9e9bc4c6c5814e"} Nov 22 12:18:59 crc kubenswrapper[4772]: I1122 12:18:58.744551 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-556dc567c7-q9m47" event={"ID":"b524f31a-4a4d-4d71-b51f-5e438b5e183c","Type":"ContainerStarted","Data":"bc04c863118bd6ae076bb4da7023485fa8080a99ae237a3667f3f208ab7680c9"} Nov 22 12:18:59 crc kubenswrapper[4772]: I1122 12:18:58.744688 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-556dc567c7-q9m47" podUID="b524f31a-4a4d-4d71-b51f-5e438b5e183c" containerName="horizon-log" containerID="cri-o://bc04c863118bd6ae076bb4da7023485fa8080a99ae237a3667f3f208ab7680c9" gracePeriod=30 Nov 22 12:18:59 crc kubenswrapper[4772]: I1122 12:18:58.744963 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-556dc567c7-q9m47" podUID="b524f31a-4a4d-4d71-b51f-5e438b5e183c" containerName="horizon" containerID="cri-o://04e47016575c151e6a576dea52958709ef046ae474396b5c5e9e9bc4c6c5814e" gracePeriod=30 Nov 22 12:18:59 crc kubenswrapper[4772]: I1122 12:18:58.755821 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-d58fc79d5-75vz8" event={"ID":"fe3609b5-6832-4242-8b90-bcc0c15544b5","Type":"ContainerStarted","Data":"5fb3b1dfc8fac4b6da766f35bb69563c8d04f239c1595ae9cdc641f3a2ec3654"} Nov 22 12:18:59 crc kubenswrapper[4772]: I1122 12:18:58.756750 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-d58fc79d5-75vz8" 
event={"ID":"fe3609b5-6832-4242-8b90-bcc0c15544b5","Type":"ContainerStarted","Data":"b01906114a25f42e086e4e48566f34890dd0a8bff8b33c4ed6038332c0f4f005"} Nov 22 12:18:59 crc kubenswrapper[4772]: I1122 12:18:58.764743 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7d5f74986f-pcncb" event={"ID":"51ea1ae0-b297-409e-829f-4f33a81969f7","Type":"ContainerStarted","Data":"69293c9df5c2f8b2e7d541fb5e4c2dc058c39b4e2b3bf7a300e083eb250624f0"} Nov 22 12:18:59 crc kubenswrapper[4772]: I1122 12:18:58.764774 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7d5f74986f-pcncb" event={"ID":"51ea1ae0-b297-409e-829f-4f33a81969f7","Type":"ContainerStarted","Data":"fdc8bc4a43fa25c2450e214bdf10d0cb5fc7fee817c6622e9c94a6fd45dd8bb9"} Nov 22 12:18:59 crc kubenswrapper[4772]: I1122 12:18:58.773599 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-556dc567c7-q9m47" podStartSLOduration=2.605902466 podStartE2EDuration="9.773571806s" podCreationTimestamp="2025-11-22 12:18:49 +0000 UTC" firstStartedPulling="2025-11-22 12:18:50.483469129 +0000 UTC m=+6050.722913623" lastFinishedPulling="2025-11-22 12:18:57.651138459 +0000 UTC m=+6057.890582963" observedRunningTime="2025-11-22 12:18:58.772728255 +0000 UTC m=+6059.012172759" watchObservedRunningTime="2025-11-22 12:18:58.773571806 +0000 UTC m=+6059.013016300" Nov 22 12:18:59 crc kubenswrapper[4772]: I1122 12:18:58.796006 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7d5f74986f-pcncb" podStartSLOduration=2.387127338 podStartE2EDuration="9.795986724s" podCreationTimestamp="2025-11-22 12:18:49 +0000 UTC" firstStartedPulling="2025-11-22 12:18:50.196482783 +0000 UTC m=+6050.435927277" lastFinishedPulling="2025-11-22 12:18:57.605342159 +0000 UTC m=+6057.844786663" observedRunningTime="2025-11-22 12:18:58.794691601 +0000 UTC m=+6059.034136095" watchObservedRunningTime="2025-11-22 12:18:58.795986724 +0000 UTC m=+6059.035431218" Nov 22 12:18:59 crc kubenswrapper[4772]: I1122 12:18:58.797383 4772 scope.go:117] "RemoveContainer" containerID="1439d7fdb0c31c5001033ba23fdd357d53fda61aa290d48a078dda6a90a89ea2" Nov 22 12:18:59 crc kubenswrapper[4772]: I1122 12:18:58.845610 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-d58fc79d5-75vz8" podStartSLOduration=2.2306931309999998 podStartE2EDuration="8.845583298s" podCreationTimestamp="2025-11-22 12:18:50 +0000 UTC" firstStartedPulling="2025-11-22 12:18:50.99198542 +0000 UTC m=+6051.231429914" lastFinishedPulling="2025-11-22 12:18:57.606875587 +0000 UTC m=+6057.846320081" observedRunningTime="2025-11-22 12:18:58.81794697 +0000 UTC m=+6059.057391464" watchObservedRunningTime="2025-11-22 12:18:58.845583298 +0000 UTC m=+6059.085027792" Nov 22 12:18:59 crc kubenswrapper[4772]: I1122 12:18:58.886599 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 12:18:59 crc kubenswrapper[4772]: I1122 12:18:58.902224 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 12:18:59 crc kubenswrapper[4772]: I1122 12:18:58.917080 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 12:18:59 crc kubenswrapper[4772]: E1122 12:18:58.917816 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1c011b9-37e2-47b1-b7fe-ad03217939d3" containerName="glance-log" Nov 22 12:18:59 crc 
kubenswrapper[4772]: I1122 12:18:58.917837 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1c011b9-37e2-47b1-b7fe-ad03217939d3" containerName="glance-log" Nov 22 12:18:59 crc kubenswrapper[4772]: E1122 12:18:58.917913 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1c011b9-37e2-47b1-b7fe-ad03217939d3" containerName="glance-httpd" Nov 22 12:18:59 crc kubenswrapper[4772]: I1122 12:18:58.917921 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1c011b9-37e2-47b1-b7fe-ad03217939d3" containerName="glance-httpd" Nov 22 12:18:59 crc kubenswrapper[4772]: I1122 12:18:58.918215 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1c011b9-37e2-47b1-b7fe-ad03217939d3" containerName="glance-log" Nov 22 12:18:59 crc kubenswrapper[4772]: I1122 12:18:58.918247 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1c011b9-37e2-47b1-b7fe-ad03217939d3" containerName="glance-httpd" Nov 22 12:18:59 crc kubenswrapper[4772]: I1122 12:18:58.920065 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 12:18:59 crc kubenswrapper[4772]: I1122 12:18:58.928179 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 22 12:18:59 crc kubenswrapper[4772]: I1122 12:18:58.932763 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 12:18:59 crc kubenswrapper[4772]: I1122 12:18:59.026075 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd10b3de-dc99-4463-ae4b-30ea9aaa642e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fd10b3de-dc99-4463-ae4b-30ea9aaa642e\") " pod="openstack/glance-default-internal-api-0" Nov 22 12:18:59 crc kubenswrapper[4772]: I1122 12:18:59.026472 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd10b3de-dc99-4463-ae4b-30ea9aaa642e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fd10b3de-dc99-4463-ae4b-30ea9aaa642e\") " pod="openstack/glance-default-internal-api-0" Nov 22 12:18:59 crc kubenswrapper[4772]: I1122 12:18:59.026518 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd10b3de-dc99-4463-ae4b-30ea9aaa642e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fd10b3de-dc99-4463-ae4b-30ea9aaa642e\") " pod="openstack/glance-default-internal-api-0" Nov 22 12:18:59 crc kubenswrapper[4772]: I1122 12:18:59.026558 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kzxj\" (UniqueName: \"kubernetes.io/projected/fd10b3de-dc99-4463-ae4b-30ea9aaa642e-kube-api-access-9kzxj\") pod \"glance-default-internal-api-0\" (UID: \"fd10b3de-dc99-4463-ae4b-30ea9aaa642e\") " pod="openstack/glance-default-internal-api-0" Nov 22 12:18:59 crc kubenswrapper[4772]: I1122 12:18:59.026692 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd10b3de-dc99-4463-ae4b-30ea9aaa642e-logs\") pod \"glance-default-internal-api-0\" (UID: \"fd10b3de-dc99-4463-ae4b-30ea9aaa642e\") " pod="openstack/glance-default-internal-api-0" Nov 22 12:18:59 crc kubenswrapper[4772]: 
I1122 12:18:59.026867 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/fd10b3de-dc99-4463-ae4b-30ea9aaa642e-ceph\") pod \"glance-default-internal-api-0\" (UID: \"fd10b3de-dc99-4463-ae4b-30ea9aaa642e\") " pod="openstack/glance-default-internal-api-0" Nov 22 12:18:59 crc kubenswrapper[4772]: I1122 12:18:59.026912 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fd10b3de-dc99-4463-ae4b-30ea9aaa642e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fd10b3de-dc99-4463-ae4b-30ea9aaa642e\") " pod="openstack/glance-default-internal-api-0" Nov 22 12:18:59 crc kubenswrapper[4772]: I1122 12:18:59.129292 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/fd10b3de-dc99-4463-ae4b-30ea9aaa642e-ceph\") pod \"glance-default-internal-api-0\" (UID: \"fd10b3de-dc99-4463-ae4b-30ea9aaa642e\") " pod="openstack/glance-default-internal-api-0" Nov 22 12:18:59 crc kubenswrapper[4772]: I1122 12:18:59.129358 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fd10b3de-dc99-4463-ae4b-30ea9aaa642e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fd10b3de-dc99-4463-ae4b-30ea9aaa642e\") " pod="openstack/glance-default-internal-api-0" Nov 22 12:18:59 crc kubenswrapper[4772]: I1122 12:18:59.129399 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd10b3de-dc99-4463-ae4b-30ea9aaa642e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fd10b3de-dc99-4463-ae4b-30ea9aaa642e\") " pod="openstack/glance-default-internal-api-0" Nov 22 12:18:59 crc kubenswrapper[4772]: I1122 12:18:59.129426 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd10b3de-dc99-4463-ae4b-30ea9aaa642e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fd10b3de-dc99-4463-ae4b-30ea9aaa642e\") " pod="openstack/glance-default-internal-api-0" Nov 22 12:18:59 crc kubenswrapper[4772]: I1122 12:18:59.129617 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd10b3de-dc99-4463-ae4b-30ea9aaa642e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fd10b3de-dc99-4463-ae4b-30ea9aaa642e\") " pod="openstack/glance-default-internal-api-0" Nov 22 12:18:59 crc kubenswrapper[4772]: I1122 12:18:59.129644 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kzxj\" (UniqueName: \"kubernetes.io/projected/fd10b3de-dc99-4463-ae4b-30ea9aaa642e-kube-api-access-9kzxj\") pod \"glance-default-internal-api-0\" (UID: \"fd10b3de-dc99-4463-ae4b-30ea9aaa642e\") " pod="openstack/glance-default-internal-api-0" Nov 22 12:18:59 crc kubenswrapper[4772]: I1122 12:18:59.129755 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd10b3de-dc99-4463-ae4b-30ea9aaa642e-logs\") pod \"glance-default-internal-api-0\" (UID: \"fd10b3de-dc99-4463-ae4b-30ea9aaa642e\") " pod="openstack/glance-default-internal-api-0" Nov 22 12:18:59 crc kubenswrapper[4772]: I1122 12:18:59.130245 4772 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fd10b3de-dc99-4463-ae4b-30ea9aaa642e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fd10b3de-dc99-4463-ae4b-30ea9aaa642e\") " pod="openstack/glance-default-internal-api-0" Nov 22 12:18:59 crc kubenswrapper[4772]: I1122 12:18:59.130247 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd10b3de-dc99-4463-ae4b-30ea9aaa642e-logs\") pod \"glance-default-internal-api-0\" (UID: \"fd10b3de-dc99-4463-ae4b-30ea9aaa642e\") " pod="openstack/glance-default-internal-api-0" Nov 22 12:18:59 crc kubenswrapper[4772]: I1122 12:18:59.133997 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd10b3de-dc99-4463-ae4b-30ea9aaa642e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fd10b3de-dc99-4463-ae4b-30ea9aaa642e\") " pod="openstack/glance-default-internal-api-0" Nov 22 12:18:59 crc kubenswrapper[4772]: I1122 12:18:59.137660 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd10b3de-dc99-4463-ae4b-30ea9aaa642e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fd10b3de-dc99-4463-ae4b-30ea9aaa642e\") " pod="openstack/glance-default-internal-api-0" Nov 22 12:18:59 crc kubenswrapper[4772]: I1122 12:18:59.138346 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/fd10b3de-dc99-4463-ae4b-30ea9aaa642e-ceph\") pod \"glance-default-internal-api-0\" (UID: \"fd10b3de-dc99-4463-ae4b-30ea9aaa642e\") " pod="openstack/glance-default-internal-api-0" Nov 22 12:18:59 crc kubenswrapper[4772]: I1122 12:18:59.142748 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd10b3de-dc99-4463-ae4b-30ea9aaa642e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fd10b3de-dc99-4463-ae4b-30ea9aaa642e\") " pod="openstack/glance-default-internal-api-0" Nov 22 12:18:59 crc kubenswrapper[4772]: I1122 12:18:59.151174 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kzxj\" (UniqueName: \"kubernetes.io/projected/fd10b3de-dc99-4463-ae4b-30ea9aaa642e-kube-api-access-9kzxj\") pod \"glance-default-internal-api-0\" (UID: \"fd10b3de-dc99-4463-ae4b-30ea9aaa642e\") " pod="openstack/glance-default-internal-api-0" Nov 22 12:18:59 crc kubenswrapper[4772]: I1122 12:18:59.243835 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 12:18:59 crc kubenswrapper[4772]: I1122 12:18:59.438832 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1c011b9-37e2-47b1-b7fe-ad03217939d3" path="/var/lib/kubelet/pods/a1c011b9-37e2-47b1-b7fe-ad03217939d3/volumes" Nov 22 12:18:59 crc kubenswrapper[4772]: I1122 12:18:59.595947 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7d5f74986f-pcncb" Nov 22 12:18:59 crc kubenswrapper[4772]: I1122 12:18:59.597263 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7d5f74986f-pcncb" Nov 22 12:18:59 crc kubenswrapper[4772]: I1122 12:18:59.869574 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-556dc567c7-q9m47" Nov 22 12:18:59 crc kubenswrapper[4772]: I1122 12:18:59.892931 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 22 12:18:59 crc kubenswrapper[4772]: I1122 12:18:59.966460 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfq5r\" (UniqueName: \"kubernetes.io/projected/d2af13a2-6cfc-469f-a829-b2cecbfd7129-kube-api-access-qfq5r\") pod \"d2af13a2-6cfc-469f-a829-b2cecbfd7129\" (UID: \"d2af13a2-6cfc-469f-a829-b2cecbfd7129\") " Nov 22 12:18:59 crc kubenswrapper[4772]: I1122 12:18:59.966527 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2af13a2-6cfc-469f-a829-b2cecbfd7129-combined-ca-bundle\") pod \"d2af13a2-6cfc-469f-a829-b2cecbfd7129\" (UID: \"d2af13a2-6cfc-469f-a829-b2cecbfd7129\") " Nov 22 12:18:59 crc kubenswrapper[4772]: I1122 12:18:59.966556 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2af13a2-6cfc-469f-a829-b2cecbfd7129-scripts\") pod \"d2af13a2-6cfc-469f-a829-b2cecbfd7129\" (UID: \"d2af13a2-6cfc-469f-a829-b2cecbfd7129\") " Nov 22 12:18:59 crc kubenswrapper[4772]: I1122 12:18:59.966591 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d2af13a2-6cfc-469f-a829-b2cecbfd7129-ceph\") pod \"d2af13a2-6cfc-469f-a829-b2cecbfd7129\" (UID: \"d2af13a2-6cfc-469f-a829-b2cecbfd7129\") " Nov 22 12:18:59 crc kubenswrapper[4772]: I1122 12:18:59.966713 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d2af13a2-6cfc-469f-a829-b2cecbfd7129-httpd-run\") pod \"d2af13a2-6cfc-469f-a829-b2cecbfd7129\" (UID: \"d2af13a2-6cfc-469f-a829-b2cecbfd7129\") " Nov 22 12:18:59 crc kubenswrapper[4772]: I1122 12:18:59.966769 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2af13a2-6cfc-469f-a829-b2cecbfd7129-config-data\") pod \"d2af13a2-6cfc-469f-a829-b2cecbfd7129\" (UID: \"d2af13a2-6cfc-469f-a829-b2cecbfd7129\") " Nov 22 12:18:59 crc kubenswrapper[4772]: I1122 12:18:59.966814 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2af13a2-6cfc-469f-a829-b2cecbfd7129-logs\") pod \"d2af13a2-6cfc-469f-a829-b2cecbfd7129\" (UID: \"d2af13a2-6cfc-469f-a829-b2cecbfd7129\") " Nov 22 12:18:59 crc kubenswrapper[4772]: I1122 12:18:59.967842 4772 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2af13a2-6cfc-469f-a829-b2cecbfd7129-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "d2af13a2-6cfc-469f-a829-b2cecbfd7129" (UID: "d2af13a2-6cfc-469f-a829-b2cecbfd7129"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 12:18:59 crc kubenswrapper[4772]: I1122 12:18:59.970366 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2af13a2-6cfc-469f-a829-b2cecbfd7129-logs" (OuterVolumeSpecName: "logs") pod "d2af13a2-6cfc-469f-a829-b2cecbfd7129" (UID: "d2af13a2-6cfc-469f-a829-b2cecbfd7129"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 12:18:59 crc kubenswrapper[4772]: I1122 12:18:59.975914 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2af13a2-6cfc-469f-a829-b2cecbfd7129-scripts" (OuterVolumeSpecName: "scripts") pod "d2af13a2-6cfc-469f-a829-b2cecbfd7129" (UID: "d2af13a2-6cfc-469f-a829-b2cecbfd7129"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:18:59 crc kubenswrapper[4772]: I1122 12:18:59.979328 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2af13a2-6cfc-469f-a829-b2cecbfd7129-ceph" (OuterVolumeSpecName: "ceph") pod "d2af13a2-6cfc-469f-a829-b2cecbfd7129" (UID: "d2af13a2-6cfc-469f-a829-b2cecbfd7129"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:18:59 crc kubenswrapper[4772]: I1122 12:18:59.999333 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2af13a2-6cfc-469f-a829-b2cecbfd7129-kube-api-access-qfq5r" (OuterVolumeSpecName: "kube-api-access-qfq5r") pod "d2af13a2-6cfc-469f-a829-b2cecbfd7129" (UID: "d2af13a2-6cfc-469f-a829-b2cecbfd7129"). InnerVolumeSpecName "kube-api-access-qfq5r". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:19:00 crc kubenswrapper[4772]: I1122 12:19:00.027288 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2af13a2-6cfc-469f-a829-b2cecbfd7129-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d2af13a2-6cfc-469f-a829-b2cecbfd7129" (UID: "d2af13a2-6cfc-469f-a829-b2cecbfd7129"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:19:00 crc kubenswrapper[4772]: I1122 12:19:00.070355 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qfq5r\" (UniqueName: \"kubernetes.io/projected/d2af13a2-6cfc-469f-a829-b2cecbfd7129-kube-api-access-qfq5r\") on node \"crc\" DevicePath \"\"" Nov 22 12:19:00 crc kubenswrapper[4772]: I1122 12:19:00.070414 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2af13a2-6cfc-469f-a829-b2cecbfd7129-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 12:19:00 crc kubenswrapper[4772]: I1122 12:19:00.070430 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2af13a2-6cfc-469f-a829-b2cecbfd7129-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 12:19:00 crc kubenswrapper[4772]: I1122 12:19:00.070443 4772 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d2af13a2-6cfc-469f-a829-b2cecbfd7129-ceph\") on node \"crc\" DevicePath \"\"" Nov 22 12:19:00 crc kubenswrapper[4772]: I1122 12:19:00.070457 4772 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d2af13a2-6cfc-469f-a829-b2cecbfd7129-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 22 12:19:00 crc kubenswrapper[4772]: I1122 12:19:00.070469 4772 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2af13a2-6cfc-469f-a829-b2cecbfd7129-logs\") on node \"crc\" DevicePath \"\"" Nov 22 12:19:00 crc kubenswrapper[4772]: I1122 12:19:00.162179 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2af13a2-6cfc-469f-a829-b2cecbfd7129-config-data" (OuterVolumeSpecName: "config-data") pod "d2af13a2-6cfc-469f-a829-b2cecbfd7129" (UID: "d2af13a2-6cfc-469f-a829-b2cecbfd7129"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:19:00 crc kubenswrapper[4772]: I1122 12:19:00.182651 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 12:19:00 crc kubenswrapper[4772]: I1122 12:19:00.186517 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2af13a2-6cfc-469f-a829-b2cecbfd7129-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 12:19:00 crc kubenswrapper[4772]: I1122 12:19:00.457500 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-d58fc79d5-75vz8" Nov 22 12:19:00 crc kubenswrapper[4772]: I1122 12:19:00.457626 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-d58fc79d5-75vz8" Nov 22 12:19:00 crc kubenswrapper[4772]: I1122 12:19:00.804789 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d2af13a2-6cfc-469f-a829-b2cecbfd7129","Type":"ContainerDied","Data":"c1e1e8072f3ae7ed10e651d41cd71a2bd4807791936532ac034438ebbfbaf6e9"} Nov 22 12:19:00 crc kubenswrapper[4772]: I1122 12:19:00.807008 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fd10b3de-dc99-4463-ae4b-30ea9aaa642e","Type":"ContainerStarted","Data":"ba8ca416d1d476e8a7e4a54ccf0ca2458115c78f71ce0ccb6257bf1b7399502f"} Nov 22 12:19:00 crc kubenswrapper[4772]: I1122 12:19:00.807068 4772 scope.go:117] "RemoveContainer" containerID="b57a5ef889b4949e6e2e2507385733a8946bc153b1f2aeeb78f80a9763aa4a59" Nov 22 12:19:00 crc kubenswrapper[4772]: I1122 12:19:00.804946 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 22 12:19:00 crc kubenswrapper[4772]: I1122 12:19:00.845779 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 12:19:00 crc kubenswrapper[4772]: I1122 12:19:00.852749 4772 scope.go:117] "RemoveContainer" containerID="e3090aafb8ce947930a69231fc5c63e9272c4052f297fd0b9d3de9ccbf57f082" Nov 22 12:19:00 crc kubenswrapper[4772]: I1122 12:19:00.868742 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 12:19:00 crc kubenswrapper[4772]: I1122 12:19:00.887772 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 12:19:00 crc kubenswrapper[4772]: E1122 12:19:00.888210 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2af13a2-6cfc-469f-a829-b2cecbfd7129" containerName="glance-httpd" Nov 22 12:19:00 crc kubenswrapper[4772]: I1122 12:19:00.888229 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2af13a2-6cfc-469f-a829-b2cecbfd7129" containerName="glance-httpd" Nov 22 12:19:00 crc kubenswrapper[4772]: E1122 12:19:00.888277 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2af13a2-6cfc-469f-a829-b2cecbfd7129" containerName="glance-log" Nov 22 12:19:00 crc kubenswrapper[4772]: I1122 12:19:00.888285 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2af13a2-6cfc-469f-a829-b2cecbfd7129" containerName="glance-log" Nov 22 12:19:00 crc kubenswrapper[4772]: I1122 12:19:00.888455 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2af13a2-6cfc-469f-a829-b2cecbfd7129" containerName="glance-log" Nov 22 12:19:00 crc kubenswrapper[4772]: I1122 12:19:00.888486 4772 
memory_manager.go:354] "RemoveStaleState removing state" podUID="d2af13a2-6cfc-469f-a829-b2cecbfd7129" containerName="glance-httpd" Nov 22 12:19:00 crc kubenswrapper[4772]: I1122 12:19:00.889668 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 22 12:19:00 crc kubenswrapper[4772]: I1122 12:19:00.900407 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 22 12:19:00 crc kubenswrapper[4772]: I1122 12:19:00.912675 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 12:19:01 crc kubenswrapper[4772]: I1122 12:19:01.015491 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a02af0f4-1267-436a-9c11-e7ef2444906c-ceph\") pod \"glance-default-external-api-0\" (UID: \"a02af0f4-1267-436a-9c11-e7ef2444906c\") " pod="openstack/glance-default-external-api-0" Nov 22 12:19:01 crc kubenswrapper[4772]: I1122 12:19:01.015543 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a02af0f4-1267-436a-9c11-e7ef2444906c-scripts\") pod \"glance-default-external-api-0\" (UID: \"a02af0f4-1267-436a-9c11-e7ef2444906c\") " pod="openstack/glance-default-external-api-0" Nov 22 12:19:01 crc kubenswrapper[4772]: I1122 12:19:01.015587 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a02af0f4-1267-436a-9c11-e7ef2444906c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a02af0f4-1267-436a-9c11-e7ef2444906c\") " pod="openstack/glance-default-external-api-0" Nov 22 12:19:01 crc kubenswrapper[4772]: I1122 12:19:01.015620 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a02af0f4-1267-436a-9c11-e7ef2444906c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a02af0f4-1267-436a-9c11-e7ef2444906c\") " pod="openstack/glance-default-external-api-0" Nov 22 12:19:01 crc kubenswrapper[4772]: I1122 12:19:01.015667 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a02af0f4-1267-436a-9c11-e7ef2444906c-logs\") pod \"glance-default-external-api-0\" (UID: \"a02af0f4-1267-436a-9c11-e7ef2444906c\") " pod="openstack/glance-default-external-api-0" Nov 22 12:19:01 crc kubenswrapper[4772]: I1122 12:19:01.015701 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7fnv\" (UniqueName: \"kubernetes.io/projected/a02af0f4-1267-436a-9c11-e7ef2444906c-kube-api-access-g7fnv\") pod \"glance-default-external-api-0\" (UID: \"a02af0f4-1267-436a-9c11-e7ef2444906c\") " pod="openstack/glance-default-external-api-0" Nov 22 12:19:01 crc kubenswrapper[4772]: I1122 12:19:01.015727 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a02af0f4-1267-436a-9c11-e7ef2444906c-config-data\") pod \"glance-default-external-api-0\" (UID: \"a02af0f4-1267-436a-9c11-e7ef2444906c\") " pod="openstack/glance-default-external-api-0" Nov 22 12:19:01 crc kubenswrapper[4772]: I1122 12:19:01.121854 4772 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a02af0f4-1267-436a-9c11-e7ef2444906c-logs\") pod \"glance-default-external-api-0\" (UID: \"a02af0f4-1267-436a-9c11-e7ef2444906c\") " pod="openstack/glance-default-external-api-0" Nov 22 12:19:01 crc kubenswrapper[4772]: I1122 12:19:01.121961 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7fnv\" (UniqueName: \"kubernetes.io/projected/a02af0f4-1267-436a-9c11-e7ef2444906c-kube-api-access-g7fnv\") pod \"glance-default-external-api-0\" (UID: \"a02af0f4-1267-436a-9c11-e7ef2444906c\") " pod="openstack/glance-default-external-api-0" Nov 22 12:19:01 crc kubenswrapper[4772]: I1122 12:19:01.122016 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a02af0f4-1267-436a-9c11-e7ef2444906c-config-data\") pod \"glance-default-external-api-0\" (UID: \"a02af0f4-1267-436a-9c11-e7ef2444906c\") " pod="openstack/glance-default-external-api-0" Nov 22 12:19:01 crc kubenswrapper[4772]: I1122 12:19:01.122212 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a02af0f4-1267-436a-9c11-e7ef2444906c-ceph\") pod \"glance-default-external-api-0\" (UID: \"a02af0f4-1267-436a-9c11-e7ef2444906c\") " pod="openstack/glance-default-external-api-0" Nov 22 12:19:01 crc kubenswrapper[4772]: I1122 12:19:01.122241 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a02af0f4-1267-436a-9c11-e7ef2444906c-scripts\") pod \"glance-default-external-api-0\" (UID: \"a02af0f4-1267-436a-9c11-e7ef2444906c\") " pod="openstack/glance-default-external-api-0" Nov 22 12:19:01 crc kubenswrapper[4772]: I1122 12:19:01.122285 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a02af0f4-1267-436a-9c11-e7ef2444906c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a02af0f4-1267-436a-9c11-e7ef2444906c\") " pod="openstack/glance-default-external-api-0" Nov 22 12:19:01 crc kubenswrapper[4772]: I1122 12:19:01.122328 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a02af0f4-1267-436a-9c11-e7ef2444906c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a02af0f4-1267-436a-9c11-e7ef2444906c\") " pod="openstack/glance-default-external-api-0" Nov 22 12:19:01 crc kubenswrapper[4772]: I1122 12:19:01.122578 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a02af0f4-1267-436a-9c11-e7ef2444906c-logs\") pod \"glance-default-external-api-0\" (UID: \"a02af0f4-1267-436a-9c11-e7ef2444906c\") " pod="openstack/glance-default-external-api-0" Nov 22 12:19:01 crc kubenswrapper[4772]: I1122 12:19:01.122872 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a02af0f4-1267-436a-9c11-e7ef2444906c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a02af0f4-1267-436a-9c11-e7ef2444906c\") " pod="openstack/glance-default-external-api-0" Nov 22 12:19:01 crc kubenswrapper[4772]: I1122 12:19:01.126280 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/a02af0f4-1267-436a-9c11-e7ef2444906c-scripts\") pod \"glance-default-external-api-0\" (UID: \"a02af0f4-1267-436a-9c11-e7ef2444906c\") " pod="openstack/glance-default-external-api-0" Nov 22 12:19:01 crc kubenswrapper[4772]: I1122 12:19:01.127334 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a02af0f4-1267-436a-9c11-e7ef2444906c-ceph\") pod \"glance-default-external-api-0\" (UID: \"a02af0f4-1267-436a-9c11-e7ef2444906c\") " pod="openstack/glance-default-external-api-0" Nov 22 12:19:01 crc kubenswrapper[4772]: I1122 12:19:01.129159 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a02af0f4-1267-436a-9c11-e7ef2444906c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a02af0f4-1267-436a-9c11-e7ef2444906c\") " pod="openstack/glance-default-external-api-0" Nov 22 12:19:01 crc kubenswrapper[4772]: I1122 12:19:01.131179 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a02af0f4-1267-436a-9c11-e7ef2444906c-config-data\") pod \"glance-default-external-api-0\" (UID: \"a02af0f4-1267-436a-9c11-e7ef2444906c\") " pod="openstack/glance-default-external-api-0" Nov 22 12:19:01 crc kubenswrapper[4772]: I1122 12:19:01.153796 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7fnv\" (UniqueName: \"kubernetes.io/projected/a02af0f4-1267-436a-9c11-e7ef2444906c-kube-api-access-g7fnv\") pod \"glance-default-external-api-0\" (UID: \"a02af0f4-1267-436a-9c11-e7ef2444906c\") " pod="openstack/glance-default-external-api-0" Nov 22 12:19:01 crc kubenswrapper[4772]: I1122 12:19:01.237808 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 22 12:19:01 crc kubenswrapper[4772]: I1122 12:19:01.545029 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2af13a2-6cfc-469f-a829-b2cecbfd7129" path="/var/lib/kubelet/pods/d2af13a2-6cfc-469f-a829-b2cecbfd7129/volumes" Nov 22 12:19:01 crc kubenswrapper[4772]: I1122 12:19:01.824023 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fd10b3de-dc99-4463-ae4b-30ea9aaa642e","Type":"ContainerStarted","Data":"b56947c54a9b5060cd0f94fa228c5dd39c1a62cd106a095180d472e669241874"} Nov 22 12:19:02 crc kubenswrapper[4772]: I1122 12:19:02.115481 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 12:19:02 crc kubenswrapper[4772]: W1122 12:19:02.121833 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda02af0f4_1267_436a_9c11_e7ef2444906c.slice/crio-3a841b2093c458cca047a0025a76c96a057940f6e6441734314d55899908ef95 WatchSource:0}: Error finding container 3a841b2093c458cca047a0025a76c96a057940f6e6441734314d55899908ef95: Status 404 returned error can't find the container with id 3a841b2093c458cca047a0025a76c96a057940f6e6441734314d55899908ef95 Nov 22 12:19:02 crc kubenswrapper[4772]: I1122 12:19:02.845469 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fd10b3de-dc99-4463-ae4b-30ea9aaa642e","Type":"ContainerStarted","Data":"dd244d0fd24fbc54268e163601e22472bd4601f9b22451dfb772bc6aeeb22104"} Nov 22 12:19:02 crc kubenswrapper[4772]: I1122 12:19:02.850958 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a02af0f4-1267-436a-9c11-e7ef2444906c","Type":"ContainerStarted","Data":"f19407ffcbfcec593709907e3cd5b8c68cf9119a24d2e176cac8b29a57a8704b"} Nov 22 12:19:02 crc kubenswrapper[4772]: I1122 12:19:02.850996 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a02af0f4-1267-436a-9c11-e7ef2444906c","Type":"ContainerStarted","Data":"3a841b2093c458cca047a0025a76c96a057940f6e6441734314d55899908ef95"} Nov 22 12:19:02 crc kubenswrapper[4772]: I1122 12:19:02.883883 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.883844053 podStartE2EDuration="4.883844053s" podCreationTimestamp="2025-11-22 12:18:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:19:02.87410683 +0000 UTC m=+6063.113551334" watchObservedRunningTime="2025-11-22 12:19:02.883844053 +0000 UTC m=+6063.123288557" Nov 22 12:19:03 crc kubenswrapper[4772]: I1122 12:19:03.864076 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a02af0f4-1267-436a-9c11-e7ef2444906c","Type":"ContainerStarted","Data":"ea53e31934c1e982f0f27f65ac022c9ff48eda9c8fba78a13668990d1d94b047"} Nov 22 12:19:03 crc kubenswrapper[4772]: I1122 12:19:03.891958 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.891939902 podStartE2EDuration="3.891939902s" podCreationTimestamp="2025-11-22 12:19:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:19:03.888487687 +0000 UTC m=+6064.127932181" watchObservedRunningTime="2025-11-22 12:19:03.891939902 +0000 UTC m=+6064.131384396" Nov 22 12:19:09 crc kubenswrapper[4772]: I1122 12:19:09.244612 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 22 12:19:09 crc kubenswrapper[4772]: I1122 12:19:09.245769 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 22 12:19:09 crc kubenswrapper[4772]: I1122 12:19:09.299405 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 22 12:19:09 crc kubenswrapper[4772]: I1122 12:19:09.301241 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 22 12:19:09 crc kubenswrapper[4772]: I1122 12:19:09.414667 4772 scope.go:117] "RemoveContainer" containerID="0acc780374533c91fb6e94ed6fa4eb88f1ad8ecfc715db40059fc9377d70e080" Nov 22 12:19:09 crc kubenswrapper[4772]: E1122 12:19:09.415165 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:19:09 crc kubenswrapper[4772]: I1122 12:19:09.599151 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7d5f74986f-pcncb" podUID="51ea1ae0-b297-409e-829f-4f33a81969f7" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.114:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.114:8080: connect: connection refused" Nov 22 12:19:09 crc kubenswrapper[4772]: I1122 12:19:09.938378 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 22 12:19:09 crc kubenswrapper[4772]: I1122 12:19:09.938424 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 22 12:19:10 crc kubenswrapper[4772]: I1122 12:19:10.459235 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-d58fc79d5-75vz8" podUID="fe3609b5-6832-4242-8b90-bcc0c15544b5" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.116:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.116:8080: connect: connection refused" Nov 22 12:19:11 crc kubenswrapper[4772]: I1122 12:19:11.239200 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 22 12:19:11 crc kubenswrapper[4772]: I1122 12:19:11.239262 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 22 12:19:11 crc kubenswrapper[4772]: I1122 12:19:11.278860 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 22 12:19:11 crc kubenswrapper[4772]: I1122 12:19:11.311302 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 22 12:19:11 crc kubenswrapper[4772]: I1122 
12:19:11.923182 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 22 12:19:11 crc kubenswrapper[4772]: I1122 12:19:11.966007 4772 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 22 12:19:11 crc kubenswrapper[4772]: I1122 12:19:11.967726 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 22 12:19:11 crc kubenswrapper[4772]: I1122 12:19:11.967781 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 22 12:19:12 crc kubenswrapper[4772]: I1122 12:19:12.024295 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 22 12:19:13 crc kubenswrapper[4772]: I1122 12:19:13.069202 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-j2wlg"] Nov 22 12:19:13 crc kubenswrapper[4772]: I1122 12:19:13.082012 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-j2wlg"] Nov 22 12:19:13 crc kubenswrapper[4772]: I1122 12:19:13.232579 4772 scope.go:117] "RemoveContainer" containerID="2b56640bf9a069465b087a3b974f2dd92ede2c9346e893a7218878240bc5a19a" Nov 22 12:19:13 crc kubenswrapper[4772]: I1122 12:19:13.273694 4772 scope.go:117] "RemoveContainer" containerID="d4e54b201244ffb67a1a758ca349d585e16c03b95b4b2eb8b159d201e8ec68f8" Nov 22 12:19:13 crc kubenswrapper[4772]: I1122 12:19:13.430796 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7a8bfda-da8c-42ec-b0f4-b88bff6a3cb4" path="/var/lib/kubelet/pods/c7a8bfda-da8c-42ec-b0f4-b88bff6a3cb4/volumes" Nov 22 12:19:14 crc kubenswrapper[4772]: I1122 12:19:14.070884 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 22 12:19:14 crc kubenswrapper[4772]: I1122 12:19:14.071422 4772 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 22 12:19:14 crc kubenswrapper[4772]: I1122 12:19:14.103890 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 22 12:19:21 crc kubenswrapper[4772]: I1122 12:19:21.460781 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-7d5f74986f-pcncb" Nov 22 12:19:22 crc kubenswrapper[4772]: I1122 12:19:22.031682 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-c044-account-create-m5bhq"] Nov 22 12:19:22 crc kubenswrapper[4772]: I1122 12:19:22.043160 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-c044-account-create-m5bhq"] Nov 22 12:19:22 crc kubenswrapper[4772]: I1122 12:19:22.328528 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-d58fc79d5-75vz8" Nov 22 12:19:22 crc kubenswrapper[4772]: I1122 12:19:22.413791 4772 scope.go:117] "RemoveContainer" containerID="0acc780374533c91fb6e94ed6fa4eb88f1ad8ecfc715db40059fc9377d70e080" Nov 22 12:19:22 crc kubenswrapper[4772]: E1122 12:19:22.414100 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:19:23 crc kubenswrapper[4772]: I1122 12:19:23.108491 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-7d5f74986f-pcncb" Nov 22 12:19:23 crc kubenswrapper[4772]: I1122 12:19:23.430089 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7c35d6d-c880-4170-95bf-426d5556c009" path="/var/lib/kubelet/pods/f7c35d6d-c880-4170-95bf-426d5556c009/volumes" Nov 22 12:19:23 crc kubenswrapper[4772]: I1122 12:19:23.959601 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-d58fc79d5-75vz8" Nov 22 12:19:24 crc kubenswrapper[4772]: I1122 12:19:24.045294 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7d5f74986f-pcncb"] Nov 22 12:19:24 crc kubenswrapper[4772]: I1122 12:19:24.122227 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7d5f74986f-pcncb" podUID="51ea1ae0-b297-409e-829f-4f33a81969f7" containerName="horizon-log" containerID="cri-o://fdc8bc4a43fa25c2450e214bdf10d0cb5fc7fee817c6622e9c94a6fd45dd8bb9" gracePeriod=30 Nov 22 12:19:24 crc kubenswrapper[4772]: I1122 12:19:24.122350 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7d5f74986f-pcncb" podUID="51ea1ae0-b297-409e-829f-4f33a81969f7" containerName="horizon" containerID="cri-o://69293c9df5c2f8b2e7d541fb5e4c2dc058c39b4e2b3bf7a300e083eb250624f0" gracePeriod=30 Nov 22 12:19:28 crc kubenswrapper[4772]: I1122 12:19:28.173772 4772 generic.go:334] "Generic (PLEG): container finished" podID="51ea1ae0-b297-409e-829f-4f33a81969f7" containerID="69293c9df5c2f8b2e7d541fb5e4c2dc058c39b4e2b3bf7a300e083eb250624f0" exitCode=0 Nov 22 12:19:28 crc kubenswrapper[4772]: I1122 12:19:28.173836 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7d5f74986f-pcncb" event={"ID":"51ea1ae0-b297-409e-829f-4f33a81969f7","Type":"ContainerDied","Data":"69293c9df5c2f8b2e7d541fb5e4c2dc058c39b4e2b3bf7a300e083eb250624f0"} Nov 22 12:19:29 crc kubenswrapper[4772]: I1122 12:19:29.190250 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-556dc567c7-q9m47" event={"ID":"b524f31a-4a4d-4d71-b51f-5e438b5e183c","Type":"ContainerDied","Data":"04e47016575c151e6a576dea52958709ef046ae474396b5c5e9e9bc4c6c5814e"} Nov 22 12:19:29 crc kubenswrapper[4772]: I1122 12:19:29.190187 4772 generic.go:334] "Generic (PLEG): container finished" podID="b524f31a-4a4d-4d71-b51f-5e438b5e183c" containerID="04e47016575c151e6a576dea52958709ef046ae474396b5c5e9e9bc4c6c5814e" exitCode=137 Nov 22 12:19:29 crc kubenswrapper[4772]: I1122 12:19:29.190377 4772 generic.go:334] "Generic (PLEG): container finished" podID="b524f31a-4a4d-4d71-b51f-5e438b5e183c" containerID="bc04c863118bd6ae076bb4da7023485fa8080a99ae237a3667f3f208ab7680c9" exitCode=137 Nov 22 12:19:29 crc kubenswrapper[4772]: I1122 12:19:29.190426 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-556dc567c7-q9m47" event={"ID":"b524f31a-4a4d-4d71-b51f-5e438b5e183c","Type":"ContainerDied","Data":"bc04c863118bd6ae076bb4da7023485fa8080a99ae237a3667f3f208ab7680c9"} Nov 22 12:19:29 crc kubenswrapper[4772]: I1122 12:19:29.420572 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-556dc567c7-q9m47" Nov 22 12:19:29 crc kubenswrapper[4772]: I1122 12:19:29.481228 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b524f31a-4a4d-4d71-b51f-5e438b5e183c-horizon-secret-key\") pod \"b524f31a-4a4d-4d71-b51f-5e438b5e183c\" (UID: \"b524f31a-4a4d-4d71-b51f-5e438b5e183c\") " Nov 22 12:19:29 crc kubenswrapper[4772]: I1122 12:19:29.481622 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lmsj\" (UniqueName: \"kubernetes.io/projected/b524f31a-4a4d-4d71-b51f-5e438b5e183c-kube-api-access-7lmsj\") pod \"b524f31a-4a4d-4d71-b51f-5e438b5e183c\" (UID: \"b524f31a-4a4d-4d71-b51f-5e438b5e183c\") " Nov 22 12:19:29 crc kubenswrapper[4772]: I1122 12:19:29.481702 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b524f31a-4a4d-4d71-b51f-5e438b5e183c-scripts\") pod \"b524f31a-4a4d-4d71-b51f-5e438b5e183c\" (UID: \"b524f31a-4a4d-4d71-b51f-5e438b5e183c\") " Nov 22 12:19:29 crc kubenswrapper[4772]: I1122 12:19:29.481728 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b524f31a-4a4d-4d71-b51f-5e438b5e183c-logs\") pod \"b524f31a-4a4d-4d71-b51f-5e438b5e183c\" (UID: \"b524f31a-4a4d-4d71-b51f-5e438b5e183c\") " Nov 22 12:19:29 crc kubenswrapper[4772]: I1122 12:19:29.481786 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b524f31a-4a4d-4d71-b51f-5e438b5e183c-config-data\") pod \"b524f31a-4a4d-4d71-b51f-5e438b5e183c\" (UID: \"b524f31a-4a4d-4d71-b51f-5e438b5e183c\") " Nov 22 12:19:29 crc kubenswrapper[4772]: I1122 12:19:29.483917 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b524f31a-4a4d-4d71-b51f-5e438b5e183c-logs" (OuterVolumeSpecName: "logs") pod "b524f31a-4a4d-4d71-b51f-5e438b5e183c" (UID: "b524f31a-4a4d-4d71-b51f-5e438b5e183c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 12:19:29 crc kubenswrapper[4772]: I1122 12:19:29.503203 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b524f31a-4a4d-4d71-b51f-5e438b5e183c-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "b524f31a-4a4d-4d71-b51f-5e438b5e183c" (UID: "b524f31a-4a4d-4d71-b51f-5e438b5e183c"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:19:29 crc kubenswrapper[4772]: I1122 12:19:29.529812 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b524f31a-4a4d-4d71-b51f-5e438b5e183c-kube-api-access-7lmsj" (OuterVolumeSpecName: "kube-api-access-7lmsj") pod "b524f31a-4a4d-4d71-b51f-5e438b5e183c" (UID: "b524f31a-4a4d-4d71-b51f-5e438b5e183c"). InnerVolumeSpecName "kube-api-access-7lmsj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:19:29 crc kubenswrapper[4772]: I1122 12:19:29.578456 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b524f31a-4a4d-4d71-b51f-5e438b5e183c-scripts" (OuterVolumeSpecName: "scripts") pod "b524f31a-4a4d-4d71-b51f-5e438b5e183c" (UID: "b524f31a-4a4d-4d71-b51f-5e438b5e183c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 12:19:29 crc kubenswrapper[4772]: I1122 12:19:29.586628 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b524f31a-4a4d-4d71-b51f-5e438b5e183c-config-data" (OuterVolumeSpecName: "config-data") pod "b524f31a-4a4d-4d71-b51f-5e438b5e183c" (UID: "b524f31a-4a4d-4d71-b51f-5e438b5e183c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 12:19:29 crc kubenswrapper[4772]: I1122 12:19:29.595672 4772 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b524f31a-4a4d-4d71-b51f-5e438b5e183c-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 22 12:19:29 crc kubenswrapper[4772]: I1122 12:19:29.595781 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7lmsj\" (UniqueName: \"kubernetes.io/projected/b524f31a-4a4d-4d71-b51f-5e438b5e183c-kube-api-access-7lmsj\") on node \"crc\" DevicePath \"\"" Nov 22 12:19:29 crc kubenswrapper[4772]: I1122 12:19:29.595850 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b524f31a-4a4d-4d71-b51f-5e438b5e183c-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 12:19:29 crc kubenswrapper[4772]: I1122 12:19:29.595922 4772 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b524f31a-4a4d-4d71-b51f-5e438b5e183c-logs\") on node \"crc\" DevicePath \"\"" Nov 22 12:19:29 crc kubenswrapper[4772]: I1122 12:19:29.596020 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b524f31a-4a4d-4d71-b51f-5e438b5e183c-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 12:19:29 crc kubenswrapper[4772]: I1122 12:19:29.601618 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7d5f74986f-pcncb" podUID="51ea1ae0-b297-409e-829f-4f33a81969f7" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.114:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.114:8080: connect: connection refused" Nov 22 12:19:30 crc kubenswrapper[4772]: I1122 12:19:30.202026 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-556dc567c7-q9m47" event={"ID":"b524f31a-4a4d-4d71-b51f-5e438b5e183c","Type":"ContainerDied","Data":"58caece266ca2c1b3db4f2256d1618a244bd6785864e4e821f9486c12f268c47"} Nov 22 12:19:30 crc kubenswrapper[4772]: I1122 12:19:30.202603 4772 scope.go:117] "RemoveContainer" containerID="04e47016575c151e6a576dea52958709ef046ae474396b5c5e9e9bc4c6c5814e" Nov 22 12:19:30 crc kubenswrapper[4772]: I1122 12:19:30.202370 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-556dc567c7-q9m47" Nov 22 12:19:30 crc kubenswrapper[4772]: I1122 12:19:30.241296 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-556dc567c7-q9m47"] Nov 22 12:19:30 crc kubenswrapper[4772]: I1122 12:19:30.248577 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-556dc567c7-q9m47"] Nov 22 12:19:30 crc kubenswrapper[4772]: I1122 12:19:30.434465 4772 scope.go:117] "RemoveContainer" containerID="bc04c863118bd6ae076bb4da7023485fa8080a99ae237a3667f3f208ab7680c9" Nov 22 12:19:31 crc kubenswrapper[4772]: I1122 12:19:31.070824 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-54vnx"] Nov 22 12:19:31 crc kubenswrapper[4772]: I1122 12:19:31.085017 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-54vnx"] Nov 22 12:19:31 crc kubenswrapper[4772]: I1122 12:19:31.433753 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="983e95e3-73ea-42da-8839-a3c85916c3c1" path="/var/lib/kubelet/pods/983e95e3-73ea-42da-8839-a3c85916c3c1/volumes" Nov 22 12:19:31 crc kubenswrapper[4772]: I1122 12:19:31.435243 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b524f31a-4a4d-4d71-b51f-5e438b5e183c" path="/var/lib/kubelet/pods/b524f31a-4a4d-4d71-b51f-5e438b5e183c/volumes" Nov 22 12:19:35 crc kubenswrapper[4772]: I1122 12:19:35.418006 4772 scope.go:117] "RemoveContainer" containerID="0acc780374533c91fb6e94ed6fa4eb88f1ad8ecfc715db40059fc9377d70e080" Nov 22 12:19:36 crc kubenswrapper[4772]: I1122 12:19:36.275709 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" event={"ID":"2386c238-461f-4956-940f-ac3c26eb052e","Type":"ContainerStarted","Data":"4e00768099367f2f555dffd8edb69c3a4098741a37029cc7349a1e6bb1cbc5a1"} Nov 22 12:19:39 crc kubenswrapper[4772]: I1122 12:19:39.596893 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7d5f74986f-pcncb" podUID="51ea1ae0-b297-409e-829f-4f33a81969f7" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.114:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.114:8080: connect: connection refused" Nov 22 12:19:49 crc kubenswrapper[4772]: I1122 12:19:49.596280 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7d5f74986f-pcncb" podUID="51ea1ae0-b297-409e-829f-4f33a81969f7" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.114:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.114:8080: connect: connection refused" Nov 22 12:19:49 crc kubenswrapper[4772]: I1122 12:19:49.597208 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7d5f74986f-pcncb" Nov 22 12:19:54 crc kubenswrapper[4772]: I1122 12:19:54.533859 4772 generic.go:334] "Generic (PLEG): container finished" podID="51ea1ae0-b297-409e-829f-4f33a81969f7" containerID="fdc8bc4a43fa25c2450e214bdf10d0cb5fc7fee817c6622e9c94a6fd45dd8bb9" exitCode=137 Nov 22 12:19:54 crc kubenswrapper[4772]: I1122 12:19:54.536097 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7d5f74986f-pcncb" event={"ID":"51ea1ae0-b297-409e-829f-4f33a81969f7","Type":"ContainerDied","Data":"fdc8bc4a43fa25c2450e214bdf10d0cb5fc7fee817c6622e9c94a6fd45dd8bb9"} Nov 22 12:19:54 crc kubenswrapper[4772]: I1122 12:19:54.694392 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7d5f74986f-pcncb" Nov 22 12:19:54 crc kubenswrapper[4772]: I1122 12:19:54.826234 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/51ea1ae0-b297-409e-829f-4f33a81969f7-horizon-secret-key\") pod \"51ea1ae0-b297-409e-829f-4f33a81969f7\" (UID: \"51ea1ae0-b297-409e-829f-4f33a81969f7\") " Nov 22 12:19:54 crc kubenswrapper[4772]: I1122 12:19:54.826284 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/51ea1ae0-b297-409e-829f-4f33a81969f7-config-data\") pod \"51ea1ae0-b297-409e-829f-4f33a81969f7\" (UID: \"51ea1ae0-b297-409e-829f-4f33a81969f7\") " Nov 22 12:19:54 crc kubenswrapper[4772]: I1122 12:19:54.826343 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/51ea1ae0-b297-409e-829f-4f33a81969f7-scripts\") pod \"51ea1ae0-b297-409e-829f-4f33a81969f7\" (UID: \"51ea1ae0-b297-409e-829f-4f33a81969f7\") " Nov 22 12:19:54 crc kubenswrapper[4772]: I1122 12:19:54.826382 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x42js\" (UniqueName: \"kubernetes.io/projected/51ea1ae0-b297-409e-829f-4f33a81969f7-kube-api-access-x42js\") pod \"51ea1ae0-b297-409e-829f-4f33a81969f7\" (UID: \"51ea1ae0-b297-409e-829f-4f33a81969f7\") " Nov 22 12:19:54 crc kubenswrapper[4772]: I1122 12:19:54.826452 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51ea1ae0-b297-409e-829f-4f33a81969f7-logs\") pod \"51ea1ae0-b297-409e-829f-4f33a81969f7\" (UID: \"51ea1ae0-b297-409e-829f-4f33a81969f7\") " Nov 22 12:19:54 crc kubenswrapper[4772]: I1122 12:19:54.828394 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51ea1ae0-b297-409e-829f-4f33a81969f7-logs" (OuterVolumeSpecName: "logs") pod "51ea1ae0-b297-409e-829f-4f33a81969f7" (UID: "51ea1ae0-b297-409e-829f-4f33a81969f7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 12:19:54 crc kubenswrapper[4772]: I1122 12:19:54.835624 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51ea1ae0-b297-409e-829f-4f33a81969f7-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "51ea1ae0-b297-409e-829f-4f33a81969f7" (UID: "51ea1ae0-b297-409e-829f-4f33a81969f7"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:19:54 crc kubenswrapper[4772]: I1122 12:19:54.838232 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51ea1ae0-b297-409e-829f-4f33a81969f7-kube-api-access-x42js" (OuterVolumeSpecName: "kube-api-access-x42js") pod "51ea1ae0-b297-409e-829f-4f33a81969f7" (UID: "51ea1ae0-b297-409e-829f-4f33a81969f7"). InnerVolumeSpecName "kube-api-access-x42js". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:19:54 crc kubenswrapper[4772]: I1122 12:19:54.852200 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51ea1ae0-b297-409e-829f-4f33a81969f7-scripts" (OuterVolumeSpecName: "scripts") pod "51ea1ae0-b297-409e-829f-4f33a81969f7" (UID: "51ea1ae0-b297-409e-829f-4f33a81969f7"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 12:19:54 crc kubenswrapper[4772]: I1122 12:19:54.857651 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51ea1ae0-b297-409e-829f-4f33a81969f7-config-data" (OuterVolumeSpecName: "config-data") pod "51ea1ae0-b297-409e-829f-4f33a81969f7" (UID: "51ea1ae0-b297-409e-829f-4f33a81969f7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 12:19:54 crc kubenswrapper[4772]: I1122 12:19:54.929021 4772 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/51ea1ae0-b297-409e-829f-4f33a81969f7-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 22 12:19:54 crc kubenswrapper[4772]: I1122 12:19:54.929082 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/51ea1ae0-b297-409e-829f-4f33a81969f7-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 12:19:54 crc kubenswrapper[4772]: I1122 12:19:54.929092 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/51ea1ae0-b297-409e-829f-4f33a81969f7-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 12:19:54 crc kubenswrapper[4772]: I1122 12:19:54.929103 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x42js\" (UniqueName: \"kubernetes.io/projected/51ea1ae0-b297-409e-829f-4f33a81969f7-kube-api-access-x42js\") on node \"crc\" DevicePath \"\"" Nov 22 12:19:54 crc kubenswrapper[4772]: I1122 12:19:54.929113 4772 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51ea1ae0-b297-409e-829f-4f33a81969f7-logs\") on node \"crc\" DevicePath \"\"" Nov 22 12:19:55 crc kubenswrapper[4772]: I1122 12:19:55.549861 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7d5f74986f-pcncb" event={"ID":"51ea1ae0-b297-409e-829f-4f33a81969f7","Type":"ContainerDied","Data":"c62812fe75fcf4efe131c0c1013a2029e29fe8c892a5bfd990b2ee2808d8b0d1"} Nov 22 12:19:55 crc kubenswrapper[4772]: I1122 12:19:55.549935 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7d5f74986f-pcncb" Nov 22 12:19:55 crc kubenswrapper[4772]: I1122 12:19:55.549963 4772 scope.go:117] "RemoveContainer" containerID="69293c9df5c2f8b2e7d541fb5e4c2dc058c39b4e2b3bf7a300e083eb250624f0" Nov 22 12:19:55 crc kubenswrapper[4772]: I1122 12:19:55.595583 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7d5f74986f-pcncb"] Nov 22 12:19:55 crc kubenswrapper[4772]: I1122 12:19:55.617392 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7d5f74986f-pcncb"] Nov 22 12:19:55 crc kubenswrapper[4772]: I1122 12:19:55.827210 4772 scope.go:117] "RemoveContainer" containerID="fdc8bc4a43fa25c2450e214bdf10d0cb5fc7fee817c6622e9c94a6fd45dd8bb9" Nov 22 12:19:57 crc kubenswrapper[4772]: I1122 12:19:57.433956 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51ea1ae0-b297-409e-829f-4f33a81969f7" path="/var/lib/kubelet/pods/51ea1ae0-b297-409e-829f-4f33a81969f7/volumes" Nov 22 12:20:03 crc kubenswrapper[4772]: I1122 12:20:03.066941 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-wmfbw"] Nov 22 12:20:03 crc kubenswrapper[4772]: I1122 12:20:03.085878 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-wmfbw"] Nov 22 12:20:03 crc kubenswrapper[4772]: I1122 12:20:03.429621 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01364cc2-bc24-4fb4-b15c-fc8c0f709dc6" path="/var/lib/kubelet/pods/01364cc2-bc24-4fb4-b15c-fc8c0f709dc6/volumes" Nov 22 12:20:07 crc kubenswrapper[4772]: I1122 12:20:07.054572 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5499597ffc-9lp9r"] Nov 22 12:20:07 crc kubenswrapper[4772]: E1122 12:20:07.055824 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51ea1ae0-b297-409e-829f-4f33a81969f7" containerName="horizon" Nov 22 12:20:07 crc kubenswrapper[4772]: I1122 12:20:07.055845 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="51ea1ae0-b297-409e-829f-4f33a81969f7" containerName="horizon" Nov 22 12:20:07 crc kubenswrapper[4772]: E1122 12:20:07.055873 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b524f31a-4a4d-4d71-b51f-5e438b5e183c" containerName="horizon-log" Nov 22 12:20:07 crc kubenswrapper[4772]: I1122 12:20:07.055882 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="b524f31a-4a4d-4d71-b51f-5e438b5e183c" containerName="horizon-log" Nov 22 12:20:07 crc kubenswrapper[4772]: E1122 12:20:07.055896 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51ea1ae0-b297-409e-829f-4f33a81969f7" containerName="horizon-log" Nov 22 12:20:07 crc kubenswrapper[4772]: I1122 12:20:07.055904 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="51ea1ae0-b297-409e-829f-4f33a81969f7" containerName="horizon-log" Nov 22 12:20:07 crc kubenswrapper[4772]: E1122 12:20:07.055931 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b524f31a-4a4d-4d71-b51f-5e438b5e183c" containerName="horizon" Nov 22 12:20:07 crc kubenswrapper[4772]: I1122 12:20:07.055939 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="b524f31a-4a4d-4d71-b51f-5e438b5e183c" containerName="horizon" Nov 22 12:20:07 crc kubenswrapper[4772]: I1122 12:20:07.056252 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="b524f31a-4a4d-4d71-b51f-5e438b5e183c" containerName="horizon-log" Nov 22 12:20:07 crc kubenswrapper[4772]: I1122 12:20:07.056265 4772 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="51ea1ae0-b297-409e-829f-4f33a81969f7" containerName="horizon" Nov 22 12:20:07 crc kubenswrapper[4772]: I1122 12:20:07.056283 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="b524f31a-4a4d-4d71-b51f-5e438b5e183c" containerName="horizon" Nov 22 12:20:07 crc kubenswrapper[4772]: I1122 12:20:07.056304 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="51ea1ae0-b297-409e-829f-4f33a81969f7" containerName="horizon-log" Nov 22 12:20:07 crc kubenswrapper[4772]: I1122 12:20:07.057875 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5499597ffc-9lp9r" Nov 22 12:20:07 crc kubenswrapper[4772]: I1122 12:20:07.071963 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5499597ffc-9lp9r"] Nov 22 12:20:07 crc kubenswrapper[4772]: I1122 12:20:07.193279 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35e8a6fd-fc93-4f4f-b4b4-849665217dc3-logs\") pod \"horizon-5499597ffc-9lp9r\" (UID: \"35e8a6fd-fc93-4f4f-b4b4-849665217dc3\") " pod="openstack/horizon-5499597ffc-9lp9r" Nov 22 12:20:07 crc kubenswrapper[4772]: I1122 12:20:07.193424 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/35e8a6fd-fc93-4f4f-b4b4-849665217dc3-horizon-secret-key\") pod \"horizon-5499597ffc-9lp9r\" (UID: \"35e8a6fd-fc93-4f4f-b4b4-849665217dc3\") " pod="openstack/horizon-5499597ffc-9lp9r" Nov 22 12:20:07 crc kubenswrapper[4772]: I1122 12:20:07.193478 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/35e8a6fd-fc93-4f4f-b4b4-849665217dc3-config-data\") pod \"horizon-5499597ffc-9lp9r\" (UID: \"35e8a6fd-fc93-4f4f-b4b4-849665217dc3\") " pod="openstack/horizon-5499597ffc-9lp9r" Nov 22 12:20:07 crc kubenswrapper[4772]: I1122 12:20:07.193547 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7tfd\" (UniqueName: \"kubernetes.io/projected/35e8a6fd-fc93-4f4f-b4b4-849665217dc3-kube-api-access-r7tfd\") pod \"horizon-5499597ffc-9lp9r\" (UID: \"35e8a6fd-fc93-4f4f-b4b4-849665217dc3\") " pod="openstack/horizon-5499597ffc-9lp9r" Nov 22 12:20:07 crc kubenswrapper[4772]: I1122 12:20:07.193593 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/35e8a6fd-fc93-4f4f-b4b4-849665217dc3-scripts\") pod \"horizon-5499597ffc-9lp9r\" (UID: \"35e8a6fd-fc93-4f4f-b4b4-849665217dc3\") " pod="openstack/horizon-5499597ffc-9lp9r" Nov 22 12:20:07 crc kubenswrapper[4772]: I1122 12:20:07.296031 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/35e8a6fd-fc93-4f4f-b4b4-849665217dc3-config-data\") pod \"horizon-5499597ffc-9lp9r\" (UID: \"35e8a6fd-fc93-4f4f-b4b4-849665217dc3\") " pod="openstack/horizon-5499597ffc-9lp9r" Nov 22 12:20:07 crc kubenswrapper[4772]: I1122 12:20:07.296155 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7tfd\" (UniqueName: \"kubernetes.io/projected/35e8a6fd-fc93-4f4f-b4b4-849665217dc3-kube-api-access-r7tfd\") pod \"horizon-5499597ffc-9lp9r\" (UID: \"35e8a6fd-fc93-4f4f-b4b4-849665217dc3\") " 
pod="openstack/horizon-5499597ffc-9lp9r" Nov 22 12:20:07 crc kubenswrapper[4772]: I1122 12:20:07.296195 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/35e8a6fd-fc93-4f4f-b4b4-849665217dc3-scripts\") pod \"horizon-5499597ffc-9lp9r\" (UID: \"35e8a6fd-fc93-4f4f-b4b4-849665217dc3\") " pod="openstack/horizon-5499597ffc-9lp9r" Nov 22 12:20:07 crc kubenswrapper[4772]: I1122 12:20:07.296257 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35e8a6fd-fc93-4f4f-b4b4-849665217dc3-logs\") pod \"horizon-5499597ffc-9lp9r\" (UID: \"35e8a6fd-fc93-4f4f-b4b4-849665217dc3\") " pod="openstack/horizon-5499597ffc-9lp9r" Nov 22 12:20:07 crc kubenswrapper[4772]: I1122 12:20:07.296320 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/35e8a6fd-fc93-4f4f-b4b4-849665217dc3-horizon-secret-key\") pod \"horizon-5499597ffc-9lp9r\" (UID: \"35e8a6fd-fc93-4f4f-b4b4-849665217dc3\") " pod="openstack/horizon-5499597ffc-9lp9r" Nov 22 12:20:07 crc kubenswrapper[4772]: I1122 12:20:07.297202 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/35e8a6fd-fc93-4f4f-b4b4-849665217dc3-scripts\") pod \"horizon-5499597ffc-9lp9r\" (UID: \"35e8a6fd-fc93-4f4f-b4b4-849665217dc3\") " pod="openstack/horizon-5499597ffc-9lp9r" Nov 22 12:20:07 crc kubenswrapper[4772]: I1122 12:20:07.297375 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35e8a6fd-fc93-4f4f-b4b4-849665217dc3-logs\") pod \"horizon-5499597ffc-9lp9r\" (UID: \"35e8a6fd-fc93-4f4f-b4b4-849665217dc3\") " pod="openstack/horizon-5499597ffc-9lp9r" Nov 22 12:20:07 crc kubenswrapper[4772]: I1122 12:20:07.297807 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/35e8a6fd-fc93-4f4f-b4b4-849665217dc3-config-data\") pod \"horizon-5499597ffc-9lp9r\" (UID: \"35e8a6fd-fc93-4f4f-b4b4-849665217dc3\") " pod="openstack/horizon-5499597ffc-9lp9r" Nov 22 12:20:07 crc kubenswrapper[4772]: I1122 12:20:07.316884 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/35e8a6fd-fc93-4f4f-b4b4-849665217dc3-horizon-secret-key\") pod \"horizon-5499597ffc-9lp9r\" (UID: \"35e8a6fd-fc93-4f4f-b4b4-849665217dc3\") " pod="openstack/horizon-5499597ffc-9lp9r" Nov 22 12:20:07 crc kubenswrapper[4772]: I1122 12:20:07.321740 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7tfd\" (UniqueName: \"kubernetes.io/projected/35e8a6fd-fc93-4f4f-b4b4-849665217dc3-kube-api-access-r7tfd\") pod \"horizon-5499597ffc-9lp9r\" (UID: \"35e8a6fd-fc93-4f4f-b4b4-849665217dc3\") " pod="openstack/horizon-5499597ffc-9lp9r" Nov 22 12:20:07 crc kubenswrapper[4772]: I1122 12:20:07.387942 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5499597ffc-9lp9r" Nov 22 12:20:08 crc kubenswrapper[4772]: I1122 12:20:08.057865 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5499597ffc-9lp9r"] Nov 22 12:20:08 crc kubenswrapper[4772]: I1122 12:20:08.300197 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-kv7sj"] Nov 22 12:20:08 crc kubenswrapper[4772]: I1122 12:20:08.302361 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-kv7sj" Nov 22 12:20:08 crc kubenswrapper[4772]: I1122 12:20:08.338938 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-kv7sj"] Nov 22 12:20:08 crc kubenswrapper[4772]: I1122 12:20:08.426135 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5khw7\" (UniqueName: \"kubernetes.io/projected/401a4db8-bda5-4a1c-a4b1-6f054baa0f0e-kube-api-access-5khw7\") pod \"heat-db-create-kv7sj\" (UID: \"401a4db8-bda5-4a1c-a4b1-6f054baa0f0e\") " pod="openstack/heat-db-create-kv7sj" Nov 22 12:20:08 crc kubenswrapper[4772]: I1122 12:20:08.528682 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5khw7\" (UniqueName: \"kubernetes.io/projected/401a4db8-bda5-4a1c-a4b1-6f054baa0f0e-kube-api-access-5khw7\") pod \"heat-db-create-kv7sj\" (UID: \"401a4db8-bda5-4a1c-a4b1-6f054baa0f0e\") " pod="openstack/heat-db-create-kv7sj" Nov 22 12:20:08 crc kubenswrapper[4772]: I1122 12:20:08.557084 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5khw7\" (UniqueName: \"kubernetes.io/projected/401a4db8-bda5-4a1c-a4b1-6f054baa0f0e-kube-api-access-5khw7\") pod \"heat-db-create-kv7sj\" (UID: \"401a4db8-bda5-4a1c-a4b1-6f054baa0f0e\") " pod="openstack/heat-db-create-kv7sj" Nov 22 12:20:08 crc kubenswrapper[4772]: I1122 12:20:08.672110 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-kv7sj" Nov 22 12:20:08 crc kubenswrapper[4772]: I1122 12:20:08.770265 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5499597ffc-9lp9r" event={"ID":"35e8a6fd-fc93-4f4f-b4b4-849665217dc3","Type":"ContainerStarted","Data":"8d91b7822fd1e439d51201f4cfb7abe26e458ce41124677f2ace48cc288621df"} Nov 22 12:20:08 crc kubenswrapper[4772]: I1122 12:20:08.770864 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5499597ffc-9lp9r" event={"ID":"35e8a6fd-fc93-4f4f-b4b4-849665217dc3","Type":"ContainerStarted","Data":"dd130ad4be4dcc7cb172369c75396d8af7b73cb578d2c2ddc40347b67982d465"} Nov 22 12:20:08 crc kubenswrapper[4772]: I1122 12:20:08.770918 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5499597ffc-9lp9r" event={"ID":"35e8a6fd-fc93-4f4f-b4b4-849665217dc3","Type":"ContainerStarted","Data":"6947952ab7d0864eda28b8b0ec7a54dce9ac0511158b19ebfe885787faecba47"} Nov 22 12:20:08 crc kubenswrapper[4772]: I1122 12:20:08.806103 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5499597ffc-9lp9r" podStartSLOduration=1.8060759659999999 podStartE2EDuration="1.806075966s" podCreationTimestamp="2025-11-22 12:20:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:20:08.79094388 +0000 UTC m=+6129.030388374" watchObservedRunningTime="2025-11-22 12:20:08.806075966 +0000 UTC m=+6129.045520480" Nov 22 12:20:09 crc kubenswrapper[4772]: I1122 12:20:09.281312 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-kv7sj"] Nov 22 12:20:09 crc kubenswrapper[4772]: W1122 12:20:09.284903 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod401a4db8_bda5_4a1c_a4b1_6f054baa0f0e.slice/crio-05b366eaab2a86a3a3f87cee073b7af25372a02cb36e54ef594db092023d9022 WatchSource:0}: Error finding container 05b366eaab2a86a3a3f87cee073b7af25372a02cb36e54ef594db092023d9022: Status 404 returned error can't find the container with id 05b366eaab2a86a3a3f87cee073b7af25372a02cb36e54ef594db092023d9022 Nov 22 12:20:09 crc kubenswrapper[4772]: I1122 12:20:09.782183 4772 generic.go:334] "Generic (PLEG): container finished" podID="401a4db8-bda5-4a1c-a4b1-6f054baa0f0e" containerID="5df4f8336d4ae5c7179cb4bb886bd4ef68c6f683fb446f7c13985af4b4423647" exitCode=0 Nov 22 12:20:09 crc kubenswrapper[4772]: I1122 12:20:09.782601 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-kv7sj" event={"ID":"401a4db8-bda5-4a1c-a4b1-6f054baa0f0e","Type":"ContainerDied","Data":"5df4f8336d4ae5c7179cb4bb886bd4ef68c6f683fb446f7c13985af4b4423647"} Nov 22 12:20:09 crc kubenswrapper[4772]: I1122 12:20:09.783811 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-kv7sj" event={"ID":"401a4db8-bda5-4a1c-a4b1-6f054baa0f0e","Type":"ContainerStarted","Data":"05b366eaab2a86a3a3f87cee073b7af25372a02cb36e54ef594db092023d9022"} Nov 22 12:20:11 crc kubenswrapper[4772]: I1122 12:20:11.254117 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-kv7sj" Nov 22 12:20:11 crc kubenswrapper[4772]: I1122 12:20:11.414639 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5khw7\" (UniqueName: \"kubernetes.io/projected/401a4db8-bda5-4a1c-a4b1-6f054baa0f0e-kube-api-access-5khw7\") pod \"401a4db8-bda5-4a1c-a4b1-6f054baa0f0e\" (UID: \"401a4db8-bda5-4a1c-a4b1-6f054baa0f0e\") " Nov 22 12:20:11 crc kubenswrapper[4772]: I1122 12:20:11.427764 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/401a4db8-bda5-4a1c-a4b1-6f054baa0f0e-kube-api-access-5khw7" (OuterVolumeSpecName: "kube-api-access-5khw7") pod "401a4db8-bda5-4a1c-a4b1-6f054baa0f0e" (UID: "401a4db8-bda5-4a1c-a4b1-6f054baa0f0e"). InnerVolumeSpecName "kube-api-access-5khw7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:20:11 crc kubenswrapper[4772]: I1122 12:20:11.519604 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5khw7\" (UniqueName: \"kubernetes.io/projected/401a4db8-bda5-4a1c-a4b1-6f054baa0f0e-kube-api-access-5khw7\") on node \"crc\" DevicePath \"\"" Nov 22 12:20:11 crc kubenswrapper[4772]: I1122 12:20:11.814696 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-kv7sj" event={"ID":"401a4db8-bda5-4a1c-a4b1-6f054baa0f0e","Type":"ContainerDied","Data":"05b366eaab2a86a3a3f87cee073b7af25372a02cb36e54ef594db092023d9022"} Nov 22 12:20:11 crc kubenswrapper[4772]: I1122 12:20:11.814742 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-kv7sj" Nov 22 12:20:11 crc kubenswrapper[4772]: I1122 12:20:11.814780 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05b366eaab2a86a3a3f87cee073b7af25372a02cb36e54ef594db092023d9022" Nov 22 12:20:13 crc kubenswrapper[4772]: I1122 12:20:13.534731 4772 scope.go:117] "RemoveContainer" containerID="b5ec57191c0978c3d507d4baff556d34681508a4103511d06ee0149ed36f3693" Nov 22 12:20:13 crc kubenswrapper[4772]: I1122 12:20:13.566745 4772 scope.go:117] "RemoveContainer" containerID="cfb4d65ba1f710d3c02ef992a8ddc9b0596ad90f539deb9a08e33bcb852265a0" Nov 22 12:20:13 crc kubenswrapper[4772]: I1122 12:20:13.633284 4772 scope.go:117] "RemoveContainer" containerID="90d5a482854ca47383e3ed588f8d8cc1a5d6aef0ee906374e7f90400c2dc5543" Nov 22 12:20:13 crc kubenswrapper[4772]: I1122 12:20:13.679873 4772 scope.go:117] "RemoveContainer" containerID="7a372b7aa4f7b48ca0760c91e0e7cbb3f6158ba77a0a964058c1fcc4ef9177bd" Nov 22 12:20:14 crc kubenswrapper[4772]: I1122 12:20:14.051730 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-a5e8-account-create-dngf4"] Nov 22 12:20:14 crc kubenswrapper[4772]: I1122 12:20:14.070063 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-a5e8-account-create-dngf4"] Nov 22 12:20:15 crc kubenswrapper[4772]: I1122 12:20:15.428668 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f8e6f03-ebe6-4b21-941e-5b12d32dc3fd" path="/var/lib/kubelet/pods/9f8e6f03-ebe6-4b21-941e-5b12d32dc3fd/volumes" Nov 22 12:20:17 crc kubenswrapper[4772]: I1122 12:20:17.388341 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5499597ffc-9lp9r" Nov 22 12:20:17 crc kubenswrapper[4772]: I1122 12:20:17.389142 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5499597ffc-9lp9r" Nov 22 
12:20:18 crc kubenswrapper[4772]: I1122 12:20:18.376482 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-91be-account-create-nj6nz"] Nov 22 12:20:18 crc kubenswrapper[4772]: E1122 12:20:18.377499 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="401a4db8-bda5-4a1c-a4b1-6f054baa0f0e" containerName="mariadb-database-create" Nov 22 12:20:18 crc kubenswrapper[4772]: I1122 12:20:18.377523 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="401a4db8-bda5-4a1c-a4b1-6f054baa0f0e" containerName="mariadb-database-create" Nov 22 12:20:18 crc kubenswrapper[4772]: I1122 12:20:18.377768 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="401a4db8-bda5-4a1c-a4b1-6f054baa0f0e" containerName="mariadb-database-create" Nov 22 12:20:18 crc kubenswrapper[4772]: I1122 12:20:18.378582 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-91be-account-create-nj6nz" Nov 22 12:20:18 crc kubenswrapper[4772]: I1122 12:20:18.381265 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Nov 22 12:20:18 crc kubenswrapper[4772]: I1122 12:20:18.426205 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-91be-account-create-nj6nz"] Nov 22 12:20:18 crc kubenswrapper[4772]: I1122 12:20:18.491891 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrxd5\" (UniqueName: \"kubernetes.io/projected/79510b19-44ef-40ed-8f20-0f8f43673e6b-kube-api-access-hrxd5\") pod \"heat-91be-account-create-nj6nz\" (UID: \"79510b19-44ef-40ed-8f20-0f8f43673e6b\") " pod="openstack/heat-91be-account-create-nj6nz" Nov 22 12:20:18 crc kubenswrapper[4772]: I1122 12:20:18.594351 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrxd5\" (UniqueName: \"kubernetes.io/projected/79510b19-44ef-40ed-8f20-0f8f43673e6b-kube-api-access-hrxd5\") pod \"heat-91be-account-create-nj6nz\" (UID: \"79510b19-44ef-40ed-8f20-0f8f43673e6b\") " pod="openstack/heat-91be-account-create-nj6nz" Nov 22 12:20:18 crc kubenswrapper[4772]: I1122 12:20:18.614829 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrxd5\" (UniqueName: \"kubernetes.io/projected/79510b19-44ef-40ed-8f20-0f8f43673e6b-kube-api-access-hrxd5\") pod \"heat-91be-account-create-nj6nz\" (UID: \"79510b19-44ef-40ed-8f20-0f8f43673e6b\") " pod="openstack/heat-91be-account-create-nj6nz" Nov 22 12:20:18 crc kubenswrapper[4772]: I1122 12:20:18.742988 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-91be-account-create-nj6nz" Nov 22 12:20:19 crc kubenswrapper[4772]: I1122 12:20:19.222201 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-91be-account-create-nj6nz"] Nov 22 12:20:19 crc kubenswrapper[4772]: I1122 12:20:19.904578 4772 generic.go:334] "Generic (PLEG): container finished" podID="79510b19-44ef-40ed-8f20-0f8f43673e6b" containerID="9a5560179fa9863628d10042c248d514748e422ba4f83d4a355fae87943d3f44" exitCode=0 Nov 22 12:20:19 crc kubenswrapper[4772]: I1122 12:20:19.905431 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-91be-account-create-nj6nz" event={"ID":"79510b19-44ef-40ed-8f20-0f8f43673e6b","Type":"ContainerDied","Data":"9a5560179fa9863628d10042c248d514748e422ba4f83d4a355fae87943d3f44"} Nov 22 12:20:19 crc kubenswrapper[4772]: I1122 12:20:19.905544 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-91be-account-create-nj6nz" event={"ID":"79510b19-44ef-40ed-8f20-0f8f43673e6b","Type":"ContainerStarted","Data":"77547d383431dd3e03e18db6f92c9fefda39280980ae6e8b64a5038ca8d01636"} Nov 22 12:20:20 crc kubenswrapper[4772]: I1122 12:20:20.033230 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-z5g94"] Nov 22 12:20:20 crc kubenswrapper[4772]: I1122 12:20:20.041766 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-z5g94"] Nov 22 12:20:21 crc kubenswrapper[4772]: I1122 12:20:21.372423 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-91be-account-create-nj6nz" Nov 22 12:20:21 crc kubenswrapper[4772]: I1122 12:20:21.427952 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="003f5d26-27bc-427b-b2a9-10a4d1b6ffba" path="/var/lib/kubelet/pods/003f5d26-27bc-427b-b2a9-10a4d1b6ffba/volumes" Nov 22 12:20:21 crc kubenswrapper[4772]: I1122 12:20:21.480277 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrxd5\" (UniqueName: \"kubernetes.io/projected/79510b19-44ef-40ed-8f20-0f8f43673e6b-kube-api-access-hrxd5\") pod \"79510b19-44ef-40ed-8f20-0f8f43673e6b\" (UID: \"79510b19-44ef-40ed-8f20-0f8f43673e6b\") " Nov 22 12:20:21 crc kubenswrapper[4772]: I1122 12:20:21.486934 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79510b19-44ef-40ed-8f20-0f8f43673e6b-kube-api-access-hrxd5" (OuterVolumeSpecName: "kube-api-access-hrxd5") pod "79510b19-44ef-40ed-8f20-0f8f43673e6b" (UID: "79510b19-44ef-40ed-8f20-0f8f43673e6b"). InnerVolumeSpecName "kube-api-access-hrxd5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:20:21 crc kubenswrapper[4772]: I1122 12:20:21.583177 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrxd5\" (UniqueName: \"kubernetes.io/projected/79510b19-44ef-40ed-8f20-0f8f43673e6b-kube-api-access-hrxd5\") on node \"crc\" DevicePath \"\"" Nov 22 12:20:21 crc kubenswrapper[4772]: I1122 12:20:21.932434 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-91be-account-create-nj6nz" event={"ID":"79510b19-44ef-40ed-8f20-0f8f43673e6b","Type":"ContainerDied","Data":"77547d383431dd3e03e18db6f92c9fefda39280980ae6e8b64a5038ca8d01636"} Nov 22 12:20:21 crc kubenswrapper[4772]: I1122 12:20:21.933293 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77547d383431dd3e03e18db6f92c9fefda39280980ae6e8b64a5038ca8d01636" Nov 22 12:20:21 crc kubenswrapper[4772]: I1122 12:20:21.932529 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-91be-account-create-nj6nz" Nov 22 12:20:23 crc kubenswrapper[4772]: I1122 12:20:23.453486 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-8cjvn"] Nov 22 12:20:23 crc kubenswrapper[4772]: E1122 12:20:23.454072 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79510b19-44ef-40ed-8f20-0f8f43673e6b" containerName="mariadb-account-create" Nov 22 12:20:23 crc kubenswrapper[4772]: I1122 12:20:23.454088 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="79510b19-44ef-40ed-8f20-0f8f43673e6b" containerName="mariadb-account-create" Nov 22 12:20:23 crc kubenswrapper[4772]: I1122 12:20:23.454388 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="79510b19-44ef-40ed-8f20-0f8f43673e6b" containerName="mariadb-account-create" Nov 22 12:20:23 crc kubenswrapper[4772]: I1122 12:20:23.455259 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-8cjvn" Nov 22 12:20:23 crc kubenswrapper[4772]: I1122 12:20:23.461899 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-bppkk" Nov 22 12:20:23 crc kubenswrapper[4772]: I1122 12:20:23.462183 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Nov 22 12:20:23 crc kubenswrapper[4772]: I1122 12:20:23.476259 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-8cjvn"] Nov 22 12:20:23 crc kubenswrapper[4772]: I1122 12:20:23.535880 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de9f7e09-5900-441f-b877-40f36f5eaa50-config-data\") pod \"heat-db-sync-8cjvn\" (UID: \"de9f7e09-5900-441f-b877-40f36f5eaa50\") " pod="openstack/heat-db-sync-8cjvn" Nov 22 12:20:23 crc kubenswrapper[4772]: I1122 12:20:23.535954 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de9f7e09-5900-441f-b877-40f36f5eaa50-combined-ca-bundle\") pod \"heat-db-sync-8cjvn\" (UID: \"de9f7e09-5900-441f-b877-40f36f5eaa50\") " pod="openstack/heat-db-sync-8cjvn" Nov 22 12:20:23 crc kubenswrapper[4772]: I1122 12:20:23.536010 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5v8jl\" (UniqueName: \"kubernetes.io/projected/de9f7e09-5900-441f-b877-40f36f5eaa50-kube-api-access-5v8jl\") pod \"heat-db-sync-8cjvn\" (UID: \"de9f7e09-5900-441f-b877-40f36f5eaa50\") " pod="openstack/heat-db-sync-8cjvn" Nov 22 12:20:23 crc kubenswrapper[4772]: I1122 12:20:23.638007 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5v8jl\" (UniqueName: \"kubernetes.io/projected/de9f7e09-5900-441f-b877-40f36f5eaa50-kube-api-access-5v8jl\") pod \"heat-db-sync-8cjvn\" (UID: \"de9f7e09-5900-441f-b877-40f36f5eaa50\") " pod="openstack/heat-db-sync-8cjvn" Nov 22 12:20:23 crc kubenswrapper[4772]: I1122 12:20:23.638645 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de9f7e09-5900-441f-b877-40f36f5eaa50-config-data\") pod \"heat-db-sync-8cjvn\" (UID: \"de9f7e09-5900-441f-b877-40f36f5eaa50\") " pod="openstack/heat-db-sync-8cjvn" Nov 22 12:20:23 crc kubenswrapper[4772]: I1122 12:20:23.638676 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de9f7e09-5900-441f-b877-40f36f5eaa50-combined-ca-bundle\") pod \"heat-db-sync-8cjvn\" (UID: \"de9f7e09-5900-441f-b877-40f36f5eaa50\") " pod="openstack/heat-db-sync-8cjvn" Nov 22 12:20:23 crc kubenswrapper[4772]: I1122 12:20:23.647427 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de9f7e09-5900-441f-b877-40f36f5eaa50-combined-ca-bundle\") pod \"heat-db-sync-8cjvn\" (UID: \"de9f7e09-5900-441f-b877-40f36f5eaa50\") " pod="openstack/heat-db-sync-8cjvn" Nov 22 12:20:23 crc kubenswrapper[4772]: I1122 12:20:23.656088 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de9f7e09-5900-441f-b877-40f36f5eaa50-config-data\") pod \"heat-db-sync-8cjvn\" (UID: \"de9f7e09-5900-441f-b877-40f36f5eaa50\") " pod="openstack/heat-db-sync-8cjvn" 
Nov 22 12:20:23 crc kubenswrapper[4772]: I1122 12:20:23.657421 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5v8jl\" (UniqueName: \"kubernetes.io/projected/de9f7e09-5900-441f-b877-40f36f5eaa50-kube-api-access-5v8jl\") pod \"heat-db-sync-8cjvn\" (UID: \"de9f7e09-5900-441f-b877-40f36f5eaa50\") " pod="openstack/heat-db-sync-8cjvn" Nov 22 12:20:23 crc kubenswrapper[4772]: I1122 12:20:23.786226 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-8cjvn" Nov 22 12:20:24 crc kubenswrapper[4772]: I1122 12:20:24.303881 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-8cjvn"] Nov 22 12:20:24 crc kubenswrapper[4772]: I1122 12:20:24.964439 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-8cjvn" event={"ID":"de9f7e09-5900-441f-b877-40f36f5eaa50","Type":"ContainerStarted","Data":"041db57e4cfc596bb0830a962526d774bf50669edb377899bf8761a97ca995b5"} Nov 22 12:20:29 crc kubenswrapper[4772]: I1122 12:20:29.210476 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-5499597ffc-9lp9r" Nov 22 12:20:30 crc kubenswrapper[4772]: I1122 12:20:30.959571 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-5499597ffc-9lp9r" Nov 22 12:20:31 crc kubenswrapper[4772]: I1122 12:20:31.051171 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-d58fc79d5-75vz8"] Nov 22 12:20:31 crc kubenswrapper[4772]: I1122 12:20:31.052015 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-d58fc79d5-75vz8" podUID="fe3609b5-6832-4242-8b90-bcc0c15544b5" containerName="horizon-log" containerID="cri-o://b01906114a25f42e086e4e48566f34890dd0a8bff8b33c4ed6038332c0f4f005" gracePeriod=30 Nov 22 12:20:31 crc kubenswrapper[4772]: I1122 12:20:31.052406 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-d58fc79d5-75vz8" podUID="fe3609b5-6832-4242-8b90-bcc0c15544b5" containerName="horizon" containerID="cri-o://5fb3b1dfc8fac4b6da766f35bb69563c8d04f239c1595ae9cdc641f3a2ec3654" gracePeriod=30 Nov 22 12:20:31 crc kubenswrapper[4772]: I1122 12:20:31.070285 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-8cjvn" event={"ID":"de9f7e09-5900-441f-b877-40f36f5eaa50","Type":"ContainerStarted","Data":"a13eb4f77eaa899b929e0e319953286ac76c13f5f706b402565653b5501c87fd"} Nov 22 12:20:31 crc kubenswrapper[4772]: I1122 12:20:31.109786 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-8cjvn" podStartSLOduration=2.049461832 podStartE2EDuration="8.109762981s" podCreationTimestamp="2025-11-22 12:20:23 +0000 UTC" firstStartedPulling="2025-11-22 12:20:24.316323749 +0000 UTC m=+6144.555768233" lastFinishedPulling="2025-11-22 12:20:30.376624898 +0000 UTC m=+6150.616069382" observedRunningTime="2025-11-22 12:20:31.099440884 +0000 UTC m=+6151.338885378" watchObservedRunningTime="2025-11-22 12:20:31.109762981 +0000 UTC m=+6151.349207485" Nov 22 12:20:34 crc kubenswrapper[4772]: I1122 12:20:34.109186 4772 generic.go:334] "Generic (PLEG): container finished" podID="de9f7e09-5900-441f-b877-40f36f5eaa50" containerID="a13eb4f77eaa899b929e0e319953286ac76c13f5f706b402565653b5501c87fd" exitCode=0 Nov 22 12:20:34 crc kubenswrapper[4772]: I1122 12:20:34.109302 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/heat-db-sync-8cjvn" event={"ID":"de9f7e09-5900-441f-b877-40f36f5eaa50","Type":"ContainerDied","Data":"a13eb4f77eaa899b929e0e319953286ac76c13f5f706b402565653b5501c87fd"} Nov 22 12:20:35 crc kubenswrapper[4772]: I1122 12:20:35.129556 4772 generic.go:334] "Generic (PLEG): container finished" podID="fe3609b5-6832-4242-8b90-bcc0c15544b5" containerID="5fb3b1dfc8fac4b6da766f35bb69563c8d04f239c1595ae9cdc641f3a2ec3654" exitCode=0 Nov 22 12:20:35 crc kubenswrapper[4772]: I1122 12:20:35.129657 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-d58fc79d5-75vz8" event={"ID":"fe3609b5-6832-4242-8b90-bcc0c15544b5","Type":"ContainerDied","Data":"5fb3b1dfc8fac4b6da766f35bb69563c8d04f239c1595ae9cdc641f3a2ec3654"} Nov 22 12:20:35 crc kubenswrapper[4772]: I1122 12:20:35.609335 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-8cjvn" Nov 22 12:20:35 crc kubenswrapper[4772]: I1122 12:20:35.659447 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de9f7e09-5900-441f-b877-40f36f5eaa50-config-data\") pod \"de9f7e09-5900-441f-b877-40f36f5eaa50\" (UID: \"de9f7e09-5900-441f-b877-40f36f5eaa50\") " Nov 22 12:20:35 crc kubenswrapper[4772]: I1122 12:20:35.659826 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de9f7e09-5900-441f-b877-40f36f5eaa50-combined-ca-bundle\") pod \"de9f7e09-5900-441f-b877-40f36f5eaa50\" (UID: \"de9f7e09-5900-441f-b877-40f36f5eaa50\") " Nov 22 12:20:35 crc kubenswrapper[4772]: I1122 12:20:35.660102 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5v8jl\" (UniqueName: \"kubernetes.io/projected/de9f7e09-5900-441f-b877-40f36f5eaa50-kube-api-access-5v8jl\") pod \"de9f7e09-5900-441f-b877-40f36f5eaa50\" (UID: \"de9f7e09-5900-441f-b877-40f36f5eaa50\") " Nov 22 12:20:35 crc kubenswrapper[4772]: I1122 12:20:35.669111 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de9f7e09-5900-441f-b877-40f36f5eaa50-kube-api-access-5v8jl" (OuterVolumeSpecName: "kube-api-access-5v8jl") pod "de9f7e09-5900-441f-b877-40f36f5eaa50" (UID: "de9f7e09-5900-441f-b877-40f36f5eaa50"). InnerVolumeSpecName "kube-api-access-5v8jl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:20:35 crc kubenswrapper[4772]: I1122 12:20:35.699400 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de9f7e09-5900-441f-b877-40f36f5eaa50-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "de9f7e09-5900-441f-b877-40f36f5eaa50" (UID: "de9f7e09-5900-441f-b877-40f36f5eaa50"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:20:35 crc kubenswrapper[4772]: I1122 12:20:35.755300 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de9f7e09-5900-441f-b877-40f36f5eaa50-config-data" (OuterVolumeSpecName: "config-data") pod "de9f7e09-5900-441f-b877-40f36f5eaa50" (UID: "de9f7e09-5900-441f-b877-40f36f5eaa50"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:20:35 crc kubenswrapper[4772]: I1122 12:20:35.763634 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5v8jl\" (UniqueName: \"kubernetes.io/projected/de9f7e09-5900-441f-b877-40f36f5eaa50-kube-api-access-5v8jl\") on node \"crc\" DevicePath \"\"" Nov 22 12:20:35 crc kubenswrapper[4772]: I1122 12:20:35.763689 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de9f7e09-5900-441f-b877-40f36f5eaa50-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 12:20:35 crc kubenswrapper[4772]: I1122 12:20:35.763707 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de9f7e09-5900-441f-b877-40f36f5eaa50-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 12:20:36 crc kubenswrapper[4772]: I1122 12:20:36.142283 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-8cjvn" event={"ID":"de9f7e09-5900-441f-b877-40f36f5eaa50","Type":"ContainerDied","Data":"041db57e4cfc596bb0830a962526d774bf50669edb377899bf8761a97ca995b5"} Nov 22 12:20:36 crc kubenswrapper[4772]: I1122 12:20:36.142344 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="041db57e4cfc596bb0830a962526d774bf50669edb377899bf8761a97ca995b5" Nov 22 12:20:36 crc kubenswrapper[4772]: I1122 12:20:36.142446 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-8cjvn" Nov 22 12:20:37 crc kubenswrapper[4772]: I1122 12:20:37.519639 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-6958597d57-z4ml8"] Nov 22 12:20:37 crc kubenswrapper[4772]: E1122 12:20:37.520742 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de9f7e09-5900-441f-b877-40f36f5eaa50" containerName="heat-db-sync" Nov 22 12:20:37 crc kubenswrapper[4772]: I1122 12:20:37.520760 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="de9f7e09-5900-441f-b877-40f36f5eaa50" containerName="heat-db-sync" Nov 22 12:20:37 crc kubenswrapper[4772]: I1122 12:20:37.520983 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="de9f7e09-5900-441f-b877-40f36f5eaa50" containerName="heat-db-sync" Nov 22 12:20:37 crc kubenswrapper[4772]: I1122 12:20:37.525521 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-6958597d57-z4ml8" Nov 22 12:20:37 crc kubenswrapper[4772]: I1122 12:20:37.533246 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-bppkk" Nov 22 12:20:37 crc kubenswrapper[4772]: I1122 12:20:37.533325 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Nov 22 12:20:37 crc kubenswrapper[4772]: I1122 12:20:37.550626 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Nov 22 12:20:37 crc kubenswrapper[4772]: I1122 12:20:37.575945 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-6958597d57-z4ml8"] Nov 22 12:20:37 crc kubenswrapper[4772]: I1122 12:20:37.629515 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57vs8\" (UniqueName: \"kubernetes.io/projected/c2bdccd8-3230-4ade-828e-baed9abe01de-kube-api-access-57vs8\") pod \"heat-engine-6958597d57-z4ml8\" (UID: \"c2bdccd8-3230-4ade-828e-baed9abe01de\") " pod="openstack/heat-engine-6958597d57-z4ml8" Nov 22 12:20:37 crc kubenswrapper[4772]: I1122 12:20:37.629645 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2bdccd8-3230-4ade-828e-baed9abe01de-config-data\") pod \"heat-engine-6958597d57-z4ml8\" (UID: \"c2bdccd8-3230-4ade-828e-baed9abe01de\") " pod="openstack/heat-engine-6958597d57-z4ml8" Nov 22 12:20:37 crc kubenswrapper[4772]: I1122 12:20:37.629669 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c2bdccd8-3230-4ade-828e-baed9abe01de-config-data-custom\") pod \"heat-engine-6958597d57-z4ml8\" (UID: \"c2bdccd8-3230-4ade-828e-baed9abe01de\") " pod="openstack/heat-engine-6958597d57-z4ml8" Nov 22 12:20:37 crc kubenswrapper[4772]: I1122 12:20:37.629685 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2bdccd8-3230-4ade-828e-baed9abe01de-combined-ca-bundle\") pod \"heat-engine-6958597d57-z4ml8\" (UID: \"c2bdccd8-3230-4ade-828e-baed9abe01de\") " pod="openstack/heat-engine-6958597d57-z4ml8" Nov 22 12:20:37 crc kubenswrapper[4772]: I1122 12:20:37.686980 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-6f5dddbfd-ssd79"] Nov 22 12:20:37 crc kubenswrapper[4772]: I1122 12:20:37.688594 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-6f5dddbfd-ssd79" Nov 22 12:20:37 crc kubenswrapper[4772]: I1122 12:20:37.702117 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6f5dddbfd-ssd79"] Nov 22 12:20:37 crc kubenswrapper[4772]: I1122 12:20:37.716275 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Nov 22 12:20:37 crc kubenswrapper[4772]: I1122 12:20:37.731250 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57vs8\" (UniqueName: \"kubernetes.io/projected/c2bdccd8-3230-4ade-828e-baed9abe01de-kube-api-access-57vs8\") pod \"heat-engine-6958597d57-z4ml8\" (UID: \"c2bdccd8-3230-4ade-828e-baed9abe01de\") " pod="openstack/heat-engine-6958597d57-z4ml8" Nov 22 12:20:37 crc kubenswrapper[4772]: I1122 12:20:37.731395 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2bdccd8-3230-4ade-828e-baed9abe01de-config-data\") pod \"heat-engine-6958597d57-z4ml8\" (UID: \"c2bdccd8-3230-4ade-828e-baed9abe01de\") " pod="openstack/heat-engine-6958597d57-z4ml8" Nov 22 12:20:37 crc kubenswrapper[4772]: I1122 12:20:37.731423 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c2bdccd8-3230-4ade-828e-baed9abe01de-config-data-custom\") pod \"heat-engine-6958597d57-z4ml8\" (UID: \"c2bdccd8-3230-4ade-828e-baed9abe01de\") " pod="openstack/heat-engine-6958597d57-z4ml8" Nov 22 12:20:37 crc kubenswrapper[4772]: I1122 12:20:37.731436 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2bdccd8-3230-4ade-828e-baed9abe01de-combined-ca-bundle\") pod \"heat-engine-6958597d57-z4ml8\" (UID: \"c2bdccd8-3230-4ade-828e-baed9abe01de\") " pod="openstack/heat-engine-6958597d57-z4ml8" Nov 22 12:20:37 crc kubenswrapper[4772]: I1122 12:20:37.756325 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-5b6f944f87-bdzd6"] Nov 22 12:20:37 crc kubenswrapper[4772]: I1122 12:20:37.757912 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-5b6f944f87-bdzd6" Nov 22 12:20:37 crc kubenswrapper[4772]: I1122 12:20:37.773909 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Nov 22 12:20:37 crc kubenswrapper[4772]: I1122 12:20:37.776994 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57vs8\" (UniqueName: \"kubernetes.io/projected/c2bdccd8-3230-4ade-828e-baed9abe01de-kube-api-access-57vs8\") pod \"heat-engine-6958597d57-z4ml8\" (UID: \"c2bdccd8-3230-4ade-828e-baed9abe01de\") " pod="openstack/heat-engine-6958597d57-z4ml8" Nov 22 12:20:37 crc kubenswrapper[4772]: I1122 12:20:37.785096 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c2bdccd8-3230-4ade-828e-baed9abe01de-config-data-custom\") pod \"heat-engine-6958597d57-z4ml8\" (UID: \"c2bdccd8-3230-4ade-828e-baed9abe01de\") " pod="openstack/heat-engine-6958597d57-z4ml8" Nov 22 12:20:37 crc kubenswrapper[4772]: I1122 12:20:37.792431 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5b6f944f87-bdzd6"] Nov 22 12:20:37 crc kubenswrapper[4772]: I1122 12:20:37.796352 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2bdccd8-3230-4ade-828e-baed9abe01de-config-data\") pod \"heat-engine-6958597d57-z4ml8\" (UID: \"c2bdccd8-3230-4ade-828e-baed9abe01de\") " pod="openstack/heat-engine-6958597d57-z4ml8" Nov 22 12:20:37 crc kubenswrapper[4772]: I1122 12:20:37.812417 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2bdccd8-3230-4ade-828e-baed9abe01de-combined-ca-bundle\") pod \"heat-engine-6958597d57-z4ml8\" (UID: \"c2bdccd8-3230-4ade-828e-baed9abe01de\") " pod="openstack/heat-engine-6958597d57-z4ml8" Nov 22 12:20:37 crc kubenswrapper[4772]: I1122 12:20:37.837582 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6be9573b-55de-4e3c-892e-246b9d85269e-config-data-custom\") pod \"heat-cfnapi-6f5dddbfd-ssd79\" (UID: \"6be9573b-55de-4e3c-892e-246b9d85269e\") " pod="openstack/heat-cfnapi-6f5dddbfd-ssd79" Nov 22 12:20:37 crc kubenswrapper[4772]: I1122 12:20:37.848265 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6be9573b-55de-4e3c-892e-246b9d85269e-combined-ca-bundle\") pod \"heat-cfnapi-6f5dddbfd-ssd79\" (UID: \"6be9573b-55de-4e3c-892e-246b9d85269e\") " pod="openstack/heat-cfnapi-6f5dddbfd-ssd79" Nov 22 12:20:37 crc kubenswrapper[4772]: I1122 12:20:37.848526 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7x6c\" (UniqueName: \"kubernetes.io/projected/6be9573b-55de-4e3c-892e-246b9d85269e-kube-api-access-t7x6c\") pod \"heat-cfnapi-6f5dddbfd-ssd79\" (UID: \"6be9573b-55de-4e3c-892e-246b9d85269e\") " pod="openstack/heat-cfnapi-6f5dddbfd-ssd79" Nov 22 12:20:37 crc kubenswrapper[4772]: I1122 12:20:37.848638 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e7285a22-1d2c-48d8-87cd-fb7e47904824-config-data-custom\") pod \"heat-api-5b6f944f87-bdzd6\" (UID: \"e7285a22-1d2c-48d8-87cd-fb7e47904824\") " 
pod="openstack/heat-api-5b6f944f87-bdzd6" Nov 22 12:20:37 crc kubenswrapper[4772]: I1122 12:20:37.848712 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6be9573b-55de-4e3c-892e-246b9d85269e-config-data\") pod \"heat-cfnapi-6f5dddbfd-ssd79\" (UID: \"6be9573b-55de-4e3c-892e-246b9d85269e\") " pod="openstack/heat-cfnapi-6f5dddbfd-ssd79" Nov 22 12:20:37 crc kubenswrapper[4772]: I1122 12:20:37.848997 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7285a22-1d2c-48d8-87cd-fb7e47904824-config-data\") pod \"heat-api-5b6f944f87-bdzd6\" (UID: \"e7285a22-1d2c-48d8-87cd-fb7e47904824\") " pod="openstack/heat-api-5b6f944f87-bdzd6" Nov 22 12:20:37 crc kubenswrapper[4772]: I1122 12:20:37.849152 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vnwl\" (UniqueName: \"kubernetes.io/projected/e7285a22-1d2c-48d8-87cd-fb7e47904824-kube-api-access-2vnwl\") pod \"heat-api-5b6f944f87-bdzd6\" (UID: \"e7285a22-1d2c-48d8-87cd-fb7e47904824\") " pod="openstack/heat-api-5b6f944f87-bdzd6" Nov 22 12:20:37 crc kubenswrapper[4772]: I1122 12:20:37.849275 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7285a22-1d2c-48d8-87cd-fb7e47904824-combined-ca-bundle\") pod \"heat-api-5b6f944f87-bdzd6\" (UID: \"e7285a22-1d2c-48d8-87cd-fb7e47904824\") " pod="openstack/heat-api-5b6f944f87-bdzd6" Nov 22 12:20:37 crc kubenswrapper[4772]: I1122 12:20:37.918123 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6958597d57-z4ml8" Nov 22 12:20:37 crc kubenswrapper[4772]: I1122 12:20:37.952074 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6be9573b-55de-4e3c-892e-246b9d85269e-config-data-custom\") pod \"heat-cfnapi-6f5dddbfd-ssd79\" (UID: \"6be9573b-55de-4e3c-892e-246b9d85269e\") " pod="openstack/heat-cfnapi-6f5dddbfd-ssd79" Nov 22 12:20:37 crc kubenswrapper[4772]: I1122 12:20:37.952153 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6be9573b-55de-4e3c-892e-246b9d85269e-combined-ca-bundle\") pod \"heat-cfnapi-6f5dddbfd-ssd79\" (UID: \"6be9573b-55de-4e3c-892e-246b9d85269e\") " pod="openstack/heat-cfnapi-6f5dddbfd-ssd79" Nov 22 12:20:37 crc kubenswrapper[4772]: I1122 12:20:37.952179 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7x6c\" (UniqueName: \"kubernetes.io/projected/6be9573b-55de-4e3c-892e-246b9d85269e-kube-api-access-t7x6c\") pod \"heat-cfnapi-6f5dddbfd-ssd79\" (UID: \"6be9573b-55de-4e3c-892e-246b9d85269e\") " pod="openstack/heat-cfnapi-6f5dddbfd-ssd79" Nov 22 12:20:37 crc kubenswrapper[4772]: I1122 12:20:37.952217 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6be9573b-55de-4e3c-892e-246b9d85269e-config-data\") pod \"heat-cfnapi-6f5dddbfd-ssd79\" (UID: \"6be9573b-55de-4e3c-892e-246b9d85269e\") " pod="openstack/heat-cfnapi-6f5dddbfd-ssd79" Nov 22 12:20:37 crc kubenswrapper[4772]: I1122 12:20:37.952246 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e7285a22-1d2c-48d8-87cd-fb7e47904824-config-data-custom\") pod \"heat-api-5b6f944f87-bdzd6\" (UID: \"e7285a22-1d2c-48d8-87cd-fb7e47904824\") " pod="openstack/heat-api-5b6f944f87-bdzd6" Nov 22 12:20:37 crc kubenswrapper[4772]: I1122 12:20:37.952325 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7285a22-1d2c-48d8-87cd-fb7e47904824-config-data\") pod \"heat-api-5b6f944f87-bdzd6\" (UID: \"e7285a22-1d2c-48d8-87cd-fb7e47904824\") " pod="openstack/heat-api-5b6f944f87-bdzd6" Nov 22 12:20:37 crc kubenswrapper[4772]: I1122 12:20:37.952372 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vnwl\" (UniqueName: \"kubernetes.io/projected/e7285a22-1d2c-48d8-87cd-fb7e47904824-kube-api-access-2vnwl\") pod \"heat-api-5b6f944f87-bdzd6\" (UID: \"e7285a22-1d2c-48d8-87cd-fb7e47904824\") " pod="openstack/heat-api-5b6f944f87-bdzd6" Nov 22 12:20:37 crc kubenswrapper[4772]: I1122 12:20:37.952417 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7285a22-1d2c-48d8-87cd-fb7e47904824-combined-ca-bundle\") pod \"heat-api-5b6f944f87-bdzd6\" (UID: \"e7285a22-1d2c-48d8-87cd-fb7e47904824\") " pod="openstack/heat-api-5b6f944f87-bdzd6" Nov 22 12:20:37 crc kubenswrapper[4772]: I1122 12:20:37.961955 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7285a22-1d2c-48d8-87cd-fb7e47904824-combined-ca-bundle\") pod \"heat-api-5b6f944f87-bdzd6\" (UID: \"e7285a22-1d2c-48d8-87cd-fb7e47904824\") " pod="openstack/heat-api-5b6f944f87-bdzd6" Nov 22 12:20:37 crc kubenswrapper[4772]: I1122 12:20:37.962587 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7285a22-1d2c-48d8-87cd-fb7e47904824-config-data\") pod \"heat-api-5b6f944f87-bdzd6\" (UID: \"e7285a22-1d2c-48d8-87cd-fb7e47904824\") " pod="openstack/heat-api-5b6f944f87-bdzd6" Nov 22 12:20:37 crc kubenswrapper[4772]: I1122 12:20:37.963558 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6be9573b-55de-4e3c-892e-246b9d85269e-config-data-custom\") pod \"heat-cfnapi-6f5dddbfd-ssd79\" (UID: \"6be9573b-55de-4e3c-892e-246b9d85269e\") " pod="openstack/heat-cfnapi-6f5dddbfd-ssd79" Nov 22 12:20:37 crc kubenswrapper[4772]: I1122 12:20:37.966311 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e7285a22-1d2c-48d8-87cd-fb7e47904824-config-data-custom\") pod \"heat-api-5b6f944f87-bdzd6\" (UID: \"e7285a22-1d2c-48d8-87cd-fb7e47904824\") " pod="openstack/heat-api-5b6f944f87-bdzd6" Nov 22 12:20:37 crc kubenswrapper[4772]: I1122 12:20:37.974544 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6be9573b-55de-4e3c-892e-246b9d85269e-config-data\") pod \"heat-cfnapi-6f5dddbfd-ssd79\" (UID: \"6be9573b-55de-4e3c-892e-246b9d85269e\") " pod="openstack/heat-cfnapi-6f5dddbfd-ssd79" Nov 22 12:20:37 crc kubenswrapper[4772]: I1122 12:20:37.976684 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vnwl\" (UniqueName: \"kubernetes.io/projected/e7285a22-1d2c-48d8-87cd-fb7e47904824-kube-api-access-2vnwl\") pod 
\"heat-api-5b6f944f87-bdzd6\" (UID: \"e7285a22-1d2c-48d8-87cd-fb7e47904824\") " pod="openstack/heat-api-5b6f944f87-bdzd6" Nov 22 12:20:37 crc kubenswrapper[4772]: I1122 12:20:37.987189 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7x6c\" (UniqueName: \"kubernetes.io/projected/6be9573b-55de-4e3c-892e-246b9d85269e-kube-api-access-t7x6c\") pod \"heat-cfnapi-6f5dddbfd-ssd79\" (UID: \"6be9573b-55de-4e3c-892e-246b9d85269e\") " pod="openstack/heat-cfnapi-6f5dddbfd-ssd79" Nov 22 12:20:37 crc kubenswrapper[4772]: I1122 12:20:37.988865 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6be9573b-55de-4e3c-892e-246b9d85269e-combined-ca-bundle\") pod \"heat-cfnapi-6f5dddbfd-ssd79\" (UID: \"6be9573b-55de-4e3c-892e-246b9d85269e\") " pod="openstack/heat-cfnapi-6f5dddbfd-ssd79" Nov 22 12:20:38 crc kubenswrapper[4772]: I1122 12:20:38.073625 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6f5dddbfd-ssd79" Nov 22 12:20:38 crc kubenswrapper[4772]: I1122 12:20:38.201103 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5b6f944f87-bdzd6" Nov 22 12:20:38 crc kubenswrapper[4772]: I1122 12:20:38.488234 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-6958597d57-z4ml8"] Nov 22 12:20:38 crc kubenswrapper[4772]: I1122 12:20:38.645856 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6f5dddbfd-ssd79"] Nov 22 12:20:38 crc kubenswrapper[4772]: I1122 12:20:38.876414 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5b6f944f87-bdzd6"] Nov 22 12:20:39 crc kubenswrapper[4772]: I1122 12:20:39.197837 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5b6f944f87-bdzd6" event={"ID":"e7285a22-1d2c-48d8-87cd-fb7e47904824","Type":"ContainerStarted","Data":"4afddbb3a253c3d404d6d76987d556c10e83a26a7a0706f43603fd4c7308d325"} Nov 22 12:20:39 crc kubenswrapper[4772]: I1122 12:20:39.199066 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6f5dddbfd-ssd79" event={"ID":"6be9573b-55de-4e3c-892e-246b9d85269e","Type":"ContainerStarted","Data":"ce4e370ca7c65fbc006da52070efdeed51ad63d9cfb04209133bc91743a6f993"} Nov 22 12:20:39 crc kubenswrapper[4772]: I1122 12:20:39.200251 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6958597d57-z4ml8" event={"ID":"c2bdccd8-3230-4ade-828e-baed9abe01de","Type":"ContainerStarted","Data":"dfb420c28fbf289f487c285544fd7dd493855e93316ea6eed3117c7237a5d921"} Nov 22 12:20:39 crc kubenswrapper[4772]: I1122 12:20:39.200279 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6958597d57-z4ml8" event={"ID":"c2bdccd8-3230-4ade-828e-baed9abe01de","Type":"ContainerStarted","Data":"19c3244312104819ad852d256afed6dd21a6a987848c397d98b55e9aee944837"} Nov 22 12:20:39 crc kubenswrapper[4772]: I1122 12:20:39.200485 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-6958597d57-z4ml8" Nov 22 12:20:40 crc kubenswrapper[4772]: I1122 12:20:40.460588 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-d58fc79d5-75vz8" podUID="fe3609b5-6832-4242-8b90-bcc0c15544b5" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.116:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.116:8080: connect: 
connection refused" Nov 22 12:20:41 crc kubenswrapper[4772]: I1122 12:20:41.458826 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-6958597d57-z4ml8" podStartSLOduration=4.458796641 podStartE2EDuration="4.458796641s" podCreationTimestamp="2025-11-22 12:20:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:20:39.235944877 +0000 UTC m=+6159.475389371" watchObservedRunningTime="2025-11-22 12:20:41.458796641 +0000 UTC m=+6161.698241125" Nov 22 12:20:42 crc kubenswrapper[4772]: I1122 12:20:42.238604 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5b6f944f87-bdzd6" event={"ID":"e7285a22-1d2c-48d8-87cd-fb7e47904824","Type":"ContainerStarted","Data":"8e35ba39d55bc30136a059dac4a086a0996159575bb895df0d1fcc1c3a51258f"} Nov 22 12:20:42 crc kubenswrapper[4772]: I1122 12:20:42.239131 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-5b6f944f87-bdzd6" Nov 22 12:20:42 crc kubenswrapper[4772]: I1122 12:20:42.241233 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6f5dddbfd-ssd79" event={"ID":"6be9573b-55de-4e3c-892e-246b9d85269e","Type":"ContainerStarted","Data":"281e9c26fd51fd722c2cb533f2d1c07cd273616198c86e60491557e4c65bc846"} Nov 22 12:20:42 crc kubenswrapper[4772]: I1122 12:20:42.241480 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-6f5dddbfd-ssd79" Nov 22 12:20:42 crc kubenswrapper[4772]: I1122 12:20:42.255336 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-5b6f944f87-bdzd6" podStartSLOduration=3.123354781 podStartE2EDuration="5.255309842s" podCreationTimestamp="2025-11-22 12:20:37 +0000 UTC" firstStartedPulling="2025-11-22 12:20:38.880277061 +0000 UTC m=+6159.119721555" lastFinishedPulling="2025-11-22 12:20:41.012232122 +0000 UTC m=+6161.251676616" observedRunningTime="2025-11-22 12:20:42.252734488 +0000 UTC m=+6162.492178982" watchObservedRunningTime="2025-11-22 12:20:42.255309842 +0000 UTC m=+6162.494754326" Nov 22 12:20:42 crc kubenswrapper[4772]: I1122 12:20:42.280504 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-6f5dddbfd-ssd79" podStartSLOduration=2.956344743 podStartE2EDuration="5.280486479s" podCreationTimestamp="2025-11-22 12:20:37 +0000 UTC" firstStartedPulling="2025-11-22 12:20:38.680006445 +0000 UTC m=+6158.919450949" lastFinishedPulling="2025-11-22 12:20:41.004148191 +0000 UTC m=+6161.243592685" observedRunningTime="2025-11-22 12:20:42.276145101 +0000 UTC m=+6162.515589595" watchObservedRunningTime="2025-11-22 12:20:42.280486479 +0000 UTC m=+6162.519930973" Nov 22 12:20:49 crc kubenswrapper[4772]: I1122 12:20:49.398033 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-6f5dddbfd-ssd79" Nov 22 12:20:49 crc kubenswrapper[4772]: I1122 12:20:49.842212 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-5b6f944f87-bdzd6" Nov 22 12:20:50 crc kubenswrapper[4772]: I1122 12:20:50.458301 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-d58fc79d5-75vz8" podUID="fe3609b5-6832-4242-8b90-bcc0c15544b5" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.116:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.116:8080: connect: connection 
refused" Nov 22 12:20:57 crc kubenswrapper[4772]: I1122 12:20:57.973602 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-6958597d57-z4ml8" Nov 22 12:21:00 crc kubenswrapper[4772]: I1122 12:21:00.457772 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-d58fc79d5-75vz8" podUID="fe3609b5-6832-4242-8b90-bcc0c15544b5" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.116:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.116:8080: connect: connection refused" Nov 22 12:21:00 crc kubenswrapper[4772]: I1122 12:21:00.458517 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-d58fc79d5-75vz8" Nov 22 12:21:01 crc kubenswrapper[4772]: I1122 12:21:01.471615 4772 generic.go:334] "Generic (PLEG): container finished" podID="fe3609b5-6832-4242-8b90-bcc0c15544b5" containerID="b01906114a25f42e086e4e48566f34890dd0a8bff8b33c4ed6038332c0f4f005" exitCode=137 Nov 22 12:21:01 crc kubenswrapper[4772]: I1122 12:21:01.471694 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-d58fc79d5-75vz8" event={"ID":"fe3609b5-6832-4242-8b90-bcc0c15544b5","Type":"ContainerDied","Data":"b01906114a25f42e086e4e48566f34890dd0a8bff8b33c4ed6038332c0f4f005"} Nov 22 12:21:01 crc kubenswrapper[4772]: I1122 12:21:01.621777 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-d58fc79d5-75vz8" Nov 22 12:21:01 crc kubenswrapper[4772]: I1122 12:21:01.701488 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fe3609b5-6832-4242-8b90-bcc0c15544b5-scripts\") pod \"fe3609b5-6832-4242-8b90-bcc0c15544b5\" (UID: \"fe3609b5-6832-4242-8b90-bcc0c15544b5\") " Nov 22 12:21:01 crc kubenswrapper[4772]: I1122 12:21:01.701547 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fe3609b5-6832-4242-8b90-bcc0c15544b5-horizon-secret-key\") pod \"fe3609b5-6832-4242-8b90-bcc0c15544b5\" (UID: \"fe3609b5-6832-4242-8b90-bcc0c15544b5\") " Nov 22 12:21:01 crc kubenswrapper[4772]: I1122 12:21:01.701680 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe3609b5-6832-4242-8b90-bcc0c15544b5-logs\") pod \"fe3609b5-6832-4242-8b90-bcc0c15544b5\" (UID: \"fe3609b5-6832-4242-8b90-bcc0c15544b5\") " Nov 22 12:21:01 crc kubenswrapper[4772]: I1122 12:21:01.701873 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fe3609b5-6832-4242-8b90-bcc0c15544b5-config-data\") pod \"fe3609b5-6832-4242-8b90-bcc0c15544b5\" (UID: \"fe3609b5-6832-4242-8b90-bcc0c15544b5\") " Nov 22 12:21:01 crc kubenswrapper[4772]: I1122 12:21:01.701913 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bkbjf\" (UniqueName: \"kubernetes.io/projected/fe3609b5-6832-4242-8b90-bcc0c15544b5-kube-api-access-bkbjf\") pod \"fe3609b5-6832-4242-8b90-bcc0c15544b5\" (UID: \"fe3609b5-6832-4242-8b90-bcc0c15544b5\") " Nov 22 12:21:01 crc kubenswrapper[4772]: I1122 12:21:01.703319 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe3609b5-6832-4242-8b90-bcc0c15544b5-logs" (OuterVolumeSpecName: "logs") pod "fe3609b5-6832-4242-8b90-bcc0c15544b5" (UID: 
"fe3609b5-6832-4242-8b90-bcc0c15544b5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 12:21:01 crc kubenswrapper[4772]: I1122 12:21:01.724451 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe3609b5-6832-4242-8b90-bcc0c15544b5-kube-api-access-bkbjf" (OuterVolumeSpecName: "kube-api-access-bkbjf") pod "fe3609b5-6832-4242-8b90-bcc0c15544b5" (UID: "fe3609b5-6832-4242-8b90-bcc0c15544b5"). InnerVolumeSpecName "kube-api-access-bkbjf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:21:01 crc kubenswrapper[4772]: I1122 12:21:01.729360 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe3609b5-6832-4242-8b90-bcc0c15544b5-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "fe3609b5-6832-4242-8b90-bcc0c15544b5" (UID: "fe3609b5-6832-4242-8b90-bcc0c15544b5"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:21:01 crc kubenswrapper[4772]: I1122 12:21:01.757180 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe3609b5-6832-4242-8b90-bcc0c15544b5-scripts" (OuterVolumeSpecName: "scripts") pod "fe3609b5-6832-4242-8b90-bcc0c15544b5" (UID: "fe3609b5-6832-4242-8b90-bcc0c15544b5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 12:21:01 crc kubenswrapper[4772]: I1122 12:21:01.762635 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe3609b5-6832-4242-8b90-bcc0c15544b5-config-data" (OuterVolumeSpecName: "config-data") pod "fe3609b5-6832-4242-8b90-bcc0c15544b5" (UID: "fe3609b5-6832-4242-8b90-bcc0c15544b5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 12:21:01 crc kubenswrapper[4772]: I1122 12:21:01.805867 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fe3609b5-6832-4242-8b90-bcc0c15544b5-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 12:21:01 crc kubenswrapper[4772]: I1122 12:21:01.805918 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bkbjf\" (UniqueName: \"kubernetes.io/projected/fe3609b5-6832-4242-8b90-bcc0c15544b5-kube-api-access-bkbjf\") on node \"crc\" DevicePath \"\"" Nov 22 12:21:01 crc kubenswrapper[4772]: I1122 12:21:01.805941 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fe3609b5-6832-4242-8b90-bcc0c15544b5-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 12:21:01 crc kubenswrapper[4772]: I1122 12:21:01.805961 4772 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fe3609b5-6832-4242-8b90-bcc0c15544b5-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 22 12:21:01 crc kubenswrapper[4772]: I1122 12:21:01.805981 4772 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe3609b5-6832-4242-8b90-bcc0c15544b5-logs\") on node \"crc\" DevicePath \"\"" Nov 22 12:21:02 crc kubenswrapper[4772]: I1122 12:21:02.491853 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-d58fc79d5-75vz8" event={"ID":"fe3609b5-6832-4242-8b90-bcc0c15544b5","Type":"ContainerDied","Data":"c480fa91bfe4ce838c0121e9cdf6ca0049633ac4092cdbd7b09c1c9e85160cf1"} Nov 22 12:21:02 crc kubenswrapper[4772]: I1122 12:21:02.492457 4772 scope.go:117] "RemoveContainer" containerID="5fb3b1dfc8fac4b6da766f35bb69563c8d04f239c1595ae9cdc641f3a2ec3654" Nov 22 12:21:02 crc kubenswrapper[4772]: I1122 12:21:02.492006 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-d58fc79d5-75vz8" Nov 22 12:21:02 crc kubenswrapper[4772]: I1122 12:21:02.566650 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-d58fc79d5-75vz8"] Nov 22 12:21:02 crc kubenswrapper[4772]: I1122 12:21:02.579341 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-d58fc79d5-75vz8"] Nov 22 12:21:02 crc kubenswrapper[4772]: I1122 12:21:02.737999 4772 scope.go:117] "RemoveContainer" containerID="b01906114a25f42e086e4e48566f34890dd0a8bff8b33c4ed6038332c0f4f005" Nov 22 12:21:03 crc kubenswrapper[4772]: I1122 12:21:03.426673 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe3609b5-6832-4242-8b90-bcc0c15544b5" path="/var/lib/kubelet/pods/fe3609b5-6832-4242-8b90-bcc0c15544b5/volumes" Nov 22 12:21:07 crc kubenswrapper[4772]: I1122 12:21:07.252845 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qfdzh"] Nov 22 12:21:07 crc kubenswrapper[4772]: E1122 12:21:07.254245 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe3609b5-6832-4242-8b90-bcc0c15544b5" containerName="horizon-log" Nov 22 12:21:07 crc kubenswrapper[4772]: I1122 12:21:07.254264 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe3609b5-6832-4242-8b90-bcc0c15544b5" containerName="horizon-log" Nov 22 12:21:07 crc kubenswrapper[4772]: E1122 12:21:07.254323 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe3609b5-6832-4242-8b90-bcc0c15544b5" containerName="horizon" Nov 22 12:21:07 crc kubenswrapper[4772]: I1122 12:21:07.254331 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe3609b5-6832-4242-8b90-bcc0c15544b5" containerName="horizon" Nov 22 12:21:07 crc kubenswrapper[4772]: I1122 12:21:07.254563 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe3609b5-6832-4242-8b90-bcc0c15544b5" containerName="horizon-log" Nov 22 12:21:07 crc kubenswrapper[4772]: I1122 12:21:07.254585 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe3609b5-6832-4242-8b90-bcc0c15544b5" containerName="horizon" Nov 22 12:21:07 crc kubenswrapper[4772]: I1122 12:21:07.256299 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qfdzh" Nov 22 12:21:07 crc kubenswrapper[4772]: I1122 12:21:07.258977 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 22 12:21:07 crc kubenswrapper[4772]: I1122 12:21:07.286192 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qfdzh"] Nov 22 12:21:07 crc kubenswrapper[4772]: I1122 12:21:07.355975 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e38b27e1-2a6a-4017-9fad-b2d172ce1662-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qfdzh\" (UID: \"e38b27e1-2a6a-4017-9fad-b2d172ce1662\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qfdzh" Nov 22 12:21:07 crc kubenswrapper[4772]: I1122 12:21:07.356522 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvclr\" (UniqueName: \"kubernetes.io/projected/e38b27e1-2a6a-4017-9fad-b2d172ce1662-kube-api-access-zvclr\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qfdzh\" (UID: \"e38b27e1-2a6a-4017-9fad-b2d172ce1662\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qfdzh" Nov 22 12:21:07 crc kubenswrapper[4772]: I1122 12:21:07.356757 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e38b27e1-2a6a-4017-9fad-b2d172ce1662-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qfdzh\" (UID: \"e38b27e1-2a6a-4017-9fad-b2d172ce1662\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qfdzh" Nov 22 12:21:07 crc kubenswrapper[4772]: I1122 12:21:07.458840 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvclr\" (UniqueName: \"kubernetes.io/projected/e38b27e1-2a6a-4017-9fad-b2d172ce1662-kube-api-access-zvclr\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qfdzh\" (UID: \"e38b27e1-2a6a-4017-9fad-b2d172ce1662\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qfdzh" Nov 22 12:21:07 crc kubenswrapper[4772]: I1122 12:21:07.459436 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e38b27e1-2a6a-4017-9fad-b2d172ce1662-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qfdzh\" (UID: \"e38b27e1-2a6a-4017-9fad-b2d172ce1662\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qfdzh" Nov 22 12:21:07 crc kubenswrapper[4772]: I1122 12:21:07.459538 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e38b27e1-2a6a-4017-9fad-b2d172ce1662-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qfdzh\" (UID: \"e38b27e1-2a6a-4017-9fad-b2d172ce1662\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qfdzh" Nov 22 12:21:07 crc kubenswrapper[4772]: I1122 12:21:07.460159 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/e38b27e1-2a6a-4017-9fad-b2d172ce1662-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qfdzh\" (UID: \"e38b27e1-2a6a-4017-9fad-b2d172ce1662\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qfdzh" Nov 22 12:21:07 crc kubenswrapper[4772]: I1122 12:21:07.460202 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e38b27e1-2a6a-4017-9fad-b2d172ce1662-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qfdzh\" (UID: \"e38b27e1-2a6a-4017-9fad-b2d172ce1662\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qfdzh" Nov 22 12:21:07 crc kubenswrapper[4772]: I1122 12:21:07.494437 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvclr\" (UniqueName: \"kubernetes.io/projected/e38b27e1-2a6a-4017-9fad-b2d172ce1662-kube-api-access-zvclr\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qfdzh\" (UID: \"e38b27e1-2a6a-4017-9fad-b2d172ce1662\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qfdzh" Nov 22 12:21:07 crc kubenswrapper[4772]: I1122 12:21:07.587165 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qfdzh" Nov 22 12:21:08 crc kubenswrapper[4772]: I1122 12:21:08.215163 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qfdzh"] Nov 22 12:21:08 crc kubenswrapper[4772]: I1122 12:21:08.565599 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qfdzh" event={"ID":"e38b27e1-2a6a-4017-9fad-b2d172ce1662","Type":"ContainerStarted","Data":"a8bcd043cce650624b07634a74b79883b92589b9095b839d21063834ee2980c4"} Nov 22 12:21:08 crc kubenswrapper[4772]: I1122 12:21:08.566006 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qfdzh" event={"ID":"e38b27e1-2a6a-4017-9fad-b2d172ce1662","Type":"ContainerStarted","Data":"23a1a1e1ee4967b41c3603b7642d88434782ba0ccba67314060301bd1fc8620f"} Nov 22 12:21:09 crc kubenswrapper[4772]: I1122 12:21:09.576916 4772 generic.go:334] "Generic (PLEG): container finished" podID="e38b27e1-2a6a-4017-9fad-b2d172ce1662" containerID="a8bcd043cce650624b07634a74b79883b92589b9095b839d21063834ee2980c4" exitCode=0 Nov 22 12:21:09 crc kubenswrapper[4772]: I1122 12:21:09.577022 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qfdzh" event={"ID":"e38b27e1-2a6a-4017-9fad-b2d172ce1662","Type":"ContainerDied","Data":"a8bcd043cce650624b07634a74b79883b92589b9095b839d21063834ee2980c4"} Nov 22 12:21:11 crc kubenswrapper[4772]: I1122 12:21:11.602262 4772 generic.go:334] "Generic (PLEG): container finished" podID="e38b27e1-2a6a-4017-9fad-b2d172ce1662" containerID="ffdcdb5ef76e12463cf7df1badcdbfffa68e230a0cbd15478ad683cc18ce20ba" exitCode=0 Nov 22 12:21:11 crc kubenswrapper[4772]: I1122 12:21:11.602347 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qfdzh" 
event={"ID":"e38b27e1-2a6a-4017-9fad-b2d172ce1662","Type":"ContainerDied","Data":"ffdcdb5ef76e12463cf7df1badcdbfffa68e230a0cbd15478ad683cc18ce20ba"} Nov 22 12:21:12 crc kubenswrapper[4772]: I1122 12:21:12.623594 4772 generic.go:334] "Generic (PLEG): container finished" podID="e38b27e1-2a6a-4017-9fad-b2d172ce1662" containerID="3a6ecdad0d9c51b7ce8963a68f25e2bbad7a9225a3fb08e773d374674369a8bd" exitCode=0 Nov 22 12:21:12 crc kubenswrapper[4772]: I1122 12:21:12.623618 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qfdzh" event={"ID":"e38b27e1-2a6a-4017-9fad-b2d172ce1662","Type":"ContainerDied","Data":"3a6ecdad0d9c51b7ce8963a68f25e2bbad7a9225a3fb08e773d374674369a8bd"} Nov 22 12:21:13 crc kubenswrapper[4772]: I1122 12:21:13.903588 4772 scope.go:117] "RemoveContainer" containerID="da2c7526f15aa7a500e0d472faf4e9cc421c5444307180cded9811293a234f1c" Nov 22 12:21:13 crc kubenswrapper[4772]: I1122 12:21:13.949189 4772 scope.go:117] "RemoveContainer" containerID="6f535f11a04f83e03a4384ff6d7e47cddfbf059392cf9d62cd6bec4554e0776c" Nov 22 12:21:14 crc kubenswrapper[4772]: I1122 12:21:14.084634 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qfdzh" Nov 22 12:21:14 crc kubenswrapper[4772]: I1122 12:21:14.137392 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvclr\" (UniqueName: \"kubernetes.io/projected/e38b27e1-2a6a-4017-9fad-b2d172ce1662-kube-api-access-zvclr\") pod \"e38b27e1-2a6a-4017-9fad-b2d172ce1662\" (UID: \"e38b27e1-2a6a-4017-9fad-b2d172ce1662\") " Nov 22 12:21:14 crc kubenswrapper[4772]: I1122 12:21:14.137789 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e38b27e1-2a6a-4017-9fad-b2d172ce1662-bundle\") pod \"e38b27e1-2a6a-4017-9fad-b2d172ce1662\" (UID: \"e38b27e1-2a6a-4017-9fad-b2d172ce1662\") " Nov 22 12:21:14 crc kubenswrapper[4772]: I1122 12:21:14.138203 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e38b27e1-2a6a-4017-9fad-b2d172ce1662-util\") pod \"e38b27e1-2a6a-4017-9fad-b2d172ce1662\" (UID: \"e38b27e1-2a6a-4017-9fad-b2d172ce1662\") " Nov 22 12:21:14 crc kubenswrapper[4772]: I1122 12:21:14.142567 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e38b27e1-2a6a-4017-9fad-b2d172ce1662-bundle" (OuterVolumeSpecName: "bundle") pod "e38b27e1-2a6a-4017-9fad-b2d172ce1662" (UID: "e38b27e1-2a6a-4017-9fad-b2d172ce1662"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 12:21:14 crc kubenswrapper[4772]: I1122 12:21:14.149266 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e38b27e1-2a6a-4017-9fad-b2d172ce1662-kube-api-access-zvclr" (OuterVolumeSpecName: "kube-api-access-zvclr") pod "e38b27e1-2a6a-4017-9fad-b2d172ce1662" (UID: "e38b27e1-2a6a-4017-9fad-b2d172ce1662"). InnerVolumeSpecName "kube-api-access-zvclr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:21:14 crc kubenswrapper[4772]: I1122 12:21:14.159439 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e38b27e1-2a6a-4017-9fad-b2d172ce1662-util" (OuterVolumeSpecName: "util") pod "e38b27e1-2a6a-4017-9fad-b2d172ce1662" (UID: "e38b27e1-2a6a-4017-9fad-b2d172ce1662"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 12:21:14 crc kubenswrapper[4772]: I1122 12:21:14.241606 4772 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e38b27e1-2a6a-4017-9fad-b2d172ce1662-util\") on node \"crc\" DevicePath \"\"" Nov 22 12:21:14 crc kubenswrapper[4772]: I1122 12:21:14.241651 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvclr\" (UniqueName: \"kubernetes.io/projected/e38b27e1-2a6a-4017-9fad-b2d172ce1662-kube-api-access-zvclr\") on node \"crc\" DevicePath \"\"" Nov 22 12:21:14 crc kubenswrapper[4772]: I1122 12:21:14.241670 4772 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e38b27e1-2a6a-4017-9fad-b2d172ce1662-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 12:21:14 crc kubenswrapper[4772]: I1122 12:21:14.657327 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qfdzh" event={"ID":"e38b27e1-2a6a-4017-9fad-b2d172ce1662","Type":"ContainerDied","Data":"23a1a1e1ee4967b41c3603b7642d88434782ba0ccba67314060301bd1fc8620f"} Nov 22 12:21:14 crc kubenswrapper[4772]: I1122 12:21:14.657879 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23a1a1e1ee4967b41c3603b7642d88434782ba0ccba67314060301bd1fc8620f" Nov 22 12:21:14 crc kubenswrapper[4772]: I1122 12:21:14.657490 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qfdzh" Nov 22 12:21:18 crc kubenswrapper[4772]: I1122 12:21:18.053800 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-hlhl9"] Nov 22 12:21:18 crc kubenswrapper[4772]: I1122 12:21:18.070736 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-mfq9z"] Nov 22 12:21:18 crc kubenswrapper[4772]: I1122 12:21:18.085680 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-hlhl9"] Nov 22 12:21:18 crc kubenswrapper[4772]: I1122 12:21:18.098758 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-mfq9z"] Nov 22 12:21:19 crc kubenswrapper[4772]: I1122 12:21:19.042983 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-xdm5h"] Nov 22 12:21:19 crc kubenswrapper[4772]: I1122 12:21:19.060112 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-xdm5h"] Nov 22 12:21:19 crc kubenswrapper[4772]: I1122 12:21:19.434606 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03077780-f463-4f53-b015-90a5f8f956d9" path="/var/lib/kubelet/pods/03077780-f463-4f53-b015-90a5f8f956d9/volumes" Nov 22 12:21:19 crc kubenswrapper[4772]: I1122 12:21:19.436586 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41815add-6907-4973-a580-efa29ae5d5c9" path="/var/lib/kubelet/pods/41815add-6907-4973-a580-efa29ae5d5c9/volumes" Nov 22 12:21:19 crc kubenswrapper[4772]: I1122 12:21:19.437689 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82313d39-be83-4193-b2ef-fc54a9f6ceae" path="/var/lib/kubelet/pods/82313d39-be83-4193-b2ef-fc54a9f6ceae/volumes" Nov 22 12:21:24 crc kubenswrapper[4772]: I1122 12:21:24.712989 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-btm2b"] Nov 22 12:21:24 crc kubenswrapper[4772]: E1122 12:21:24.715460 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e38b27e1-2a6a-4017-9fad-b2d172ce1662" containerName="extract" Nov 22 12:21:24 crc kubenswrapper[4772]: I1122 12:21:24.715552 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="e38b27e1-2a6a-4017-9fad-b2d172ce1662" containerName="extract" Nov 22 12:21:24 crc kubenswrapper[4772]: E1122 12:21:24.715627 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e38b27e1-2a6a-4017-9fad-b2d172ce1662" containerName="pull" Nov 22 12:21:24 crc kubenswrapper[4772]: I1122 12:21:24.715682 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="e38b27e1-2a6a-4017-9fad-b2d172ce1662" containerName="pull" Nov 22 12:21:24 crc kubenswrapper[4772]: E1122 12:21:24.715761 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e38b27e1-2a6a-4017-9fad-b2d172ce1662" containerName="util" Nov 22 12:21:24 crc kubenswrapper[4772]: I1122 12:21:24.715815 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="e38b27e1-2a6a-4017-9fad-b2d172ce1662" containerName="util" Nov 22 12:21:24 crc kubenswrapper[4772]: I1122 12:21:24.716093 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="e38b27e1-2a6a-4017-9fad-b2d172ce1662" containerName="extract" Nov 22 12:21:24 crc kubenswrapper[4772]: I1122 12:21:24.717182 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-btm2b" Nov 22 12:21:24 crc kubenswrapper[4772]: I1122 12:21:24.723414 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-26q6p" Nov 22 12:21:24 crc kubenswrapper[4772]: I1122 12:21:24.723811 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Nov 22 12:21:24 crc kubenswrapper[4772]: I1122 12:21:24.724436 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Nov 22 12:21:24 crc kubenswrapper[4772]: I1122 12:21:24.731829 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-btm2b"] Nov 22 12:21:24 crc kubenswrapper[4772]: I1122 12:21:24.820126 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-68d77686b6-nzh6j"] Nov 22 12:21:24 crc kubenswrapper[4772]: I1122 12:21:24.821749 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-68d77686b6-nzh6j" Nov 22 12:21:24 crc kubenswrapper[4772]: I1122 12:21:24.829431 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Nov 22 12:21:24 crc kubenswrapper[4772]: I1122 12:21:24.829987 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-tdvfk" Nov 22 12:21:24 crc kubenswrapper[4772]: I1122 12:21:24.841177 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hn2ks\" (UniqueName: \"kubernetes.io/projected/3182734e-a9a5-4658-be8c-f6c32e1eef21-kube-api-access-hn2ks\") pod \"obo-prometheus-operator-668cf9dfbb-btm2b\" (UID: \"3182734e-a9a5-4658-be8c-f6c32e1eef21\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-btm2b" Nov 22 12:21:24 crc kubenswrapper[4772]: I1122 12:21:24.843042 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-68d77686b6-nzh6j"] Nov 22 12:21:24 crc kubenswrapper[4772]: I1122 12:21:24.853652 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-68d77686b6-xvpkf"] Nov 22 12:21:24 crc kubenswrapper[4772]: I1122 12:21:24.855386 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-68d77686b6-xvpkf" Nov 22 12:21:24 crc kubenswrapper[4772]: I1122 12:21:24.881722 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-68d77686b6-xvpkf"] Nov 22 12:21:24 crc kubenswrapper[4772]: I1122 12:21:24.943344 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hn2ks\" (UniqueName: \"kubernetes.io/projected/3182734e-a9a5-4658-be8c-f6c32e1eef21-kube-api-access-hn2ks\") pod \"obo-prometheus-operator-668cf9dfbb-btm2b\" (UID: \"3182734e-a9a5-4658-be8c-f6c32e1eef21\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-btm2b" Nov 22 12:21:24 crc kubenswrapper[4772]: I1122 12:21:24.943435 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/02c13927-ba51-452d-a0ea-2a021cb4ce6c-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-68d77686b6-xvpkf\" (UID: \"02c13927-ba51-452d-a0ea-2a021cb4ce6c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-68d77686b6-xvpkf" Nov 22 12:21:24 crc kubenswrapper[4772]: I1122 12:21:24.943471 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0dd53222-e946-4ee9-8d69-a2b4fd675e99-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-68d77686b6-nzh6j\" (UID: \"0dd53222-e946-4ee9-8d69-a2b4fd675e99\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-68d77686b6-nzh6j" Nov 22 12:21:24 crc kubenswrapper[4772]: I1122 12:21:24.943497 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/02c13927-ba51-452d-a0ea-2a021cb4ce6c-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-68d77686b6-xvpkf\" (UID: \"02c13927-ba51-452d-a0ea-2a021cb4ce6c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-68d77686b6-xvpkf" Nov 22 12:21:24 crc kubenswrapper[4772]: I1122 12:21:24.943542 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0dd53222-e946-4ee9-8d69-a2b4fd675e99-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-68d77686b6-nzh6j\" (UID: \"0dd53222-e946-4ee9-8d69-a2b4fd675e99\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-68d77686b6-nzh6j" Nov 22 12:21:24 crc kubenswrapper[4772]: I1122 12:21:24.969208 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hn2ks\" (UniqueName: \"kubernetes.io/projected/3182734e-a9a5-4658-be8c-f6c32e1eef21-kube-api-access-hn2ks\") pod \"obo-prometheus-operator-668cf9dfbb-btm2b\" (UID: \"3182734e-a9a5-4658-be8c-f6c32e1eef21\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-btm2b" Nov 22 12:21:25 crc kubenswrapper[4772]: I1122 12:21:25.036351 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-plr84"] Nov 22 12:21:25 crc kubenswrapper[4772]: I1122 12:21:25.038299 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-plr84" Nov 22 12:21:25 crc kubenswrapper[4772]: I1122 12:21:25.039403 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-btm2b" Nov 22 12:21:25 crc kubenswrapper[4772]: I1122 12:21:25.042893 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-nbwjn" Nov 22 12:21:25 crc kubenswrapper[4772]: I1122 12:21:25.043218 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Nov 22 12:21:25 crc kubenswrapper[4772]: I1122 12:21:25.048556 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/02c13927-ba51-452d-a0ea-2a021cb4ce6c-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-68d77686b6-xvpkf\" (UID: \"02c13927-ba51-452d-a0ea-2a021cb4ce6c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-68d77686b6-xvpkf" Nov 22 12:21:25 crc kubenswrapper[4772]: I1122 12:21:25.048630 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0dd53222-e946-4ee9-8d69-a2b4fd675e99-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-68d77686b6-nzh6j\" (UID: \"0dd53222-e946-4ee9-8d69-a2b4fd675e99\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-68d77686b6-nzh6j" Nov 22 12:21:25 crc kubenswrapper[4772]: I1122 12:21:25.048672 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/02c13927-ba51-452d-a0ea-2a021cb4ce6c-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-68d77686b6-xvpkf\" (UID: \"02c13927-ba51-452d-a0ea-2a021cb4ce6c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-68d77686b6-xvpkf" Nov 22 12:21:25 crc kubenswrapper[4772]: I1122 12:21:25.048751 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0dd53222-e946-4ee9-8d69-a2b4fd675e99-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-68d77686b6-nzh6j\" (UID: \"0dd53222-e946-4ee9-8d69-a2b4fd675e99\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-68d77686b6-nzh6j" Nov 22 12:21:25 crc kubenswrapper[4772]: I1122 12:21:25.053962 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/02c13927-ba51-452d-a0ea-2a021cb4ce6c-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-68d77686b6-xvpkf\" (UID: \"02c13927-ba51-452d-a0ea-2a021cb4ce6c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-68d77686b6-xvpkf" Nov 22 12:21:25 crc kubenswrapper[4772]: I1122 12:21:25.055307 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0dd53222-e946-4ee9-8d69-a2b4fd675e99-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-68d77686b6-nzh6j\" (UID: \"0dd53222-e946-4ee9-8d69-a2b4fd675e99\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-68d77686b6-nzh6j" Nov 22 12:21:25 crc kubenswrapper[4772]: I1122 12:21:25.057404 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" 
(UniqueName: \"kubernetes.io/secret/02c13927-ba51-452d-a0ea-2a021cb4ce6c-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-68d77686b6-xvpkf\" (UID: \"02c13927-ba51-452d-a0ea-2a021cb4ce6c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-68d77686b6-xvpkf" Nov 22 12:21:25 crc kubenswrapper[4772]: I1122 12:21:25.063188 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0dd53222-e946-4ee9-8d69-a2b4fd675e99-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-68d77686b6-nzh6j\" (UID: \"0dd53222-e946-4ee9-8d69-a2b4fd675e99\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-68d77686b6-nzh6j" Nov 22 12:21:25 crc kubenswrapper[4772]: I1122 12:21:25.078043 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-plr84"] Nov 22 12:21:25 crc kubenswrapper[4772]: I1122 12:21:25.154684 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hj9l\" (UniqueName: \"kubernetes.io/projected/2ff2c225-e7b9-4d2b-bb81-30147198a90d-kube-api-access-2hj9l\") pod \"observability-operator-d8bb48f5d-plr84\" (UID: \"2ff2c225-e7b9-4d2b-bb81-30147198a90d\") " pod="openshift-operators/observability-operator-d8bb48f5d-plr84" Nov 22 12:21:25 crc kubenswrapper[4772]: I1122 12:21:25.154836 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/2ff2c225-e7b9-4d2b-bb81-30147198a90d-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-plr84\" (UID: \"2ff2c225-e7b9-4d2b-bb81-30147198a90d\") " pod="openshift-operators/observability-operator-d8bb48f5d-plr84" Nov 22 12:21:25 crc kubenswrapper[4772]: I1122 12:21:25.160641 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-68d77686b6-nzh6j" Nov 22 12:21:25 crc kubenswrapper[4772]: I1122 12:21:25.190768 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-68d77686b6-xvpkf" Nov 22 12:21:25 crc kubenswrapper[4772]: I1122 12:21:25.264395 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5446b9c989-2pfzn"] Nov 22 12:21:25 crc kubenswrapper[4772]: I1122 12:21:25.265507 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/2ff2c225-e7b9-4d2b-bb81-30147198a90d-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-plr84\" (UID: \"2ff2c225-e7b9-4d2b-bb81-30147198a90d\") " pod="openshift-operators/observability-operator-d8bb48f5d-plr84" Nov 22 12:21:25 crc kubenswrapper[4772]: I1122 12:21:25.265675 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hj9l\" (UniqueName: \"kubernetes.io/projected/2ff2c225-e7b9-4d2b-bb81-30147198a90d-kube-api-access-2hj9l\") pod \"observability-operator-d8bb48f5d-plr84\" (UID: \"2ff2c225-e7b9-4d2b-bb81-30147198a90d\") " pod="openshift-operators/observability-operator-d8bb48f5d-plr84" Nov 22 12:21:25 crc kubenswrapper[4772]: I1122 12:21:25.295219 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/2ff2c225-e7b9-4d2b-bb81-30147198a90d-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-plr84\" (UID: \"2ff2c225-e7b9-4d2b-bb81-30147198a90d\") " pod="openshift-operators/observability-operator-d8bb48f5d-plr84" Nov 22 12:21:25 crc kubenswrapper[4772]: I1122 12:21:25.296812 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-2pfzn" Nov 22 12:21:25 crc kubenswrapper[4772]: I1122 12:21:25.297254 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-2pfzn"] Nov 22 12:21:25 crc kubenswrapper[4772]: I1122 12:21:25.302411 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-mzq6v" Nov 22 12:21:25 crc kubenswrapper[4772]: I1122 12:21:25.313938 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hj9l\" (UniqueName: \"kubernetes.io/projected/2ff2c225-e7b9-4d2b-bb81-30147198a90d-kube-api-access-2hj9l\") pod \"observability-operator-d8bb48f5d-plr84\" (UID: \"2ff2c225-e7b9-4d2b-bb81-30147198a90d\") " pod="openshift-operators/observability-operator-d8bb48f5d-plr84" Nov 22 12:21:25 crc kubenswrapper[4772]: I1122 12:21:25.367940 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/643d51b2-9282-4236-b0dd-282638c093e2-openshift-service-ca\") pod \"perses-operator-5446b9c989-2pfzn\" (UID: \"643d51b2-9282-4236-b0dd-282638c093e2\") " pod="openshift-operators/perses-operator-5446b9c989-2pfzn" Nov 22 12:21:25 crc kubenswrapper[4772]: I1122 12:21:25.368789 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksjht\" (UniqueName: \"kubernetes.io/projected/643d51b2-9282-4236-b0dd-282638c093e2-kube-api-access-ksjht\") pod \"perses-operator-5446b9c989-2pfzn\" (UID: \"643d51b2-9282-4236-b0dd-282638c093e2\") " pod="openshift-operators/perses-operator-5446b9c989-2pfzn" Nov 22 12:21:25 crc kubenswrapper[4772]: I1122 12:21:25.471463 4772 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/643d51b2-9282-4236-b0dd-282638c093e2-openshift-service-ca\") pod \"perses-operator-5446b9c989-2pfzn\" (UID: \"643d51b2-9282-4236-b0dd-282638c093e2\") " pod="openshift-operators/perses-operator-5446b9c989-2pfzn" Nov 22 12:21:25 crc kubenswrapper[4772]: I1122 12:21:25.471506 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksjht\" (UniqueName: \"kubernetes.io/projected/643d51b2-9282-4236-b0dd-282638c093e2-kube-api-access-ksjht\") pod \"perses-operator-5446b9c989-2pfzn\" (UID: \"643d51b2-9282-4236-b0dd-282638c093e2\") " pod="openshift-operators/perses-operator-5446b9c989-2pfzn" Nov 22 12:21:25 crc kubenswrapper[4772]: I1122 12:21:25.473020 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/643d51b2-9282-4236-b0dd-282638c093e2-openshift-service-ca\") pod \"perses-operator-5446b9c989-2pfzn\" (UID: \"643d51b2-9282-4236-b0dd-282638c093e2\") " pod="openshift-operators/perses-operator-5446b9c989-2pfzn" Nov 22 12:21:25 crc kubenswrapper[4772]: I1122 12:21:25.497031 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksjht\" (UniqueName: \"kubernetes.io/projected/643d51b2-9282-4236-b0dd-282638c093e2-kube-api-access-ksjht\") pod \"perses-operator-5446b9c989-2pfzn\" (UID: \"643d51b2-9282-4236-b0dd-282638c093e2\") " pod="openshift-operators/perses-operator-5446b9c989-2pfzn" Nov 22 12:21:25 crc kubenswrapper[4772]: I1122 12:21:25.497602 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-plr84" Nov 22 12:21:25 crc kubenswrapper[4772]: I1122 12:21:25.738501 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-2pfzn" Nov 22 12:21:25 crc kubenswrapper[4772]: I1122 12:21:25.902651 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-btm2b"] Nov 22 12:21:26 crc kubenswrapper[4772]: W1122 12:21:26.030939 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02c13927_ba51_452d_a0ea_2a021cb4ce6c.slice/crio-8f36ef19a8a965ff94b420294bb00c52ee18b8ccbaf70784ba0edeafab7fd06e WatchSource:0}: Error finding container 8f36ef19a8a965ff94b420294bb00c52ee18b8ccbaf70784ba0edeafab7fd06e: Status 404 returned error can't find the container with id 8f36ef19a8a965ff94b420294bb00c52ee18b8ccbaf70784ba0edeafab7fd06e Nov 22 12:21:26 crc kubenswrapper[4772]: I1122 12:21:26.043154 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-68d77686b6-xvpkf"] Nov 22 12:21:26 crc kubenswrapper[4772]: I1122 12:21:26.072817 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-68d77686b6-nzh6j"] Nov 22 12:21:26 crc kubenswrapper[4772]: W1122 12:21:26.094017 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0dd53222_e946_4ee9_8d69_a2b4fd675e99.slice/crio-1b35ae7e79ea03c31e956b70fd398b9039bccd8333f0f160a1b8372b82937df3 WatchSource:0}: Error finding container 1b35ae7e79ea03c31e956b70fd398b9039bccd8333f0f160a1b8372b82937df3: Status 404 returned error can't find the container with id 1b35ae7e79ea03c31e956b70fd398b9039bccd8333f0f160a1b8372b82937df3 Nov 22 12:21:26 crc kubenswrapper[4772]: I1122 12:21:26.223855 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-plr84"] Nov 22 12:21:26 crc kubenswrapper[4772]: W1122 12:21:26.433206 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod643d51b2_9282_4236_b0dd_282638c093e2.slice/crio-1e53affdaab7d86fc28ca2da7d070be28e0213b578ad9b53ca027a38504f6ae7 WatchSource:0}: Error finding container 1e53affdaab7d86fc28ca2da7d070be28e0213b578ad9b53ca027a38504f6ae7: Status 404 returned error can't find the container with id 1e53affdaab7d86fc28ca2da7d070be28e0213b578ad9b53ca027a38504f6ae7 Nov 22 12:21:26 crc kubenswrapper[4772]: I1122 12:21:26.435782 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-2pfzn"] Nov 22 12:21:26 crc kubenswrapper[4772]: I1122 12:21:26.821586 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-plr84" event={"ID":"2ff2c225-e7b9-4d2b-bb81-30147198a90d","Type":"ContainerStarted","Data":"709637868c739e96537deeaeff4adbdee17b21cf089d04426b931c19f20e11b5"} Nov 22 12:21:26 crc kubenswrapper[4772]: I1122 12:21:26.823451 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-68d77686b6-xvpkf" event={"ID":"02c13927-ba51-452d-a0ea-2a021cb4ce6c","Type":"ContainerStarted","Data":"8f36ef19a8a965ff94b420294bb00c52ee18b8ccbaf70784ba0edeafab7fd06e"} Nov 22 12:21:26 crc kubenswrapper[4772]: I1122 12:21:26.828843 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-68d77686b6-nzh6j" 
event={"ID":"0dd53222-e946-4ee9-8d69-a2b4fd675e99","Type":"ContainerStarted","Data":"1b35ae7e79ea03c31e956b70fd398b9039bccd8333f0f160a1b8372b82937df3"} Nov 22 12:21:26 crc kubenswrapper[4772]: I1122 12:21:26.830998 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-2pfzn" event={"ID":"643d51b2-9282-4236-b0dd-282638c093e2","Type":"ContainerStarted","Data":"1e53affdaab7d86fc28ca2da7d070be28e0213b578ad9b53ca027a38504f6ae7"} Nov 22 12:21:26 crc kubenswrapper[4772]: I1122 12:21:26.832654 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-btm2b" event={"ID":"3182734e-a9a5-4658-be8c-f6c32e1eef21","Type":"ContainerStarted","Data":"25266e0c869195be2233d78ad21e14d6b3a03756c32e0aafd6761915a5abd154"} Nov 22 12:21:30 crc kubenswrapper[4772]: I1122 12:21:30.051725 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-0005-account-create-js4rt"] Nov 22 12:21:30 crc kubenswrapper[4772]: I1122 12:21:30.060964 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-7f7a-account-create-kcdng"] Nov 22 12:21:30 crc kubenswrapper[4772]: I1122 12:21:30.071598 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-b3a9-account-create-jk7cb"] Nov 22 12:21:30 crc kubenswrapper[4772]: I1122 12:21:30.081192 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-0005-account-create-js4rt"] Nov 22 12:21:30 crc kubenswrapper[4772]: I1122 12:21:30.088868 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-7f7a-account-create-kcdng"] Nov 22 12:21:30 crc kubenswrapper[4772]: I1122 12:21:30.096318 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-b3a9-account-create-jk7cb"] Nov 22 12:21:31 crc kubenswrapper[4772]: I1122 12:21:31.434584 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22ccc1a5-522d-4cf2-94d0-7f1122d28a33" path="/var/lib/kubelet/pods/22ccc1a5-522d-4cf2-94d0-7f1122d28a33/volumes" Nov 22 12:21:31 crc kubenswrapper[4772]: I1122 12:21:31.436332 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="568b1330-d837-411c-b6e3-2ace97a73e7b" path="/var/lib/kubelet/pods/568b1330-d837-411c-b6e3-2ace97a73e7b/volumes" Nov 22 12:21:31 crc kubenswrapper[4772]: I1122 12:21:31.437025 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f441febd-8436-4440-9e7b-9514e36d0c1d" path="/var/lib/kubelet/pods/f441febd-8436-4440-9e7b-9514e36d0c1d/volumes" Nov 22 12:21:35 crc kubenswrapper[4772]: I1122 12:21:35.966985 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-btm2b" event={"ID":"3182734e-a9a5-4658-be8c-f6c32e1eef21","Type":"ContainerStarted","Data":"e8598103660c3269e8b735f2fc9b876f3103d560a1d04780990454871a39ca40"} Nov 22 12:21:35 crc kubenswrapper[4772]: I1122 12:21:35.973611 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-68d77686b6-xvpkf" event={"ID":"02c13927-ba51-452d-a0ea-2a021cb4ce6c","Type":"ContainerStarted","Data":"d180302725b46bf57989ce58c952a5289145cbdf3895719ab4a3e71726f1fb32"} Nov 22 12:21:35 crc kubenswrapper[4772]: I1122 12:21:35.977868 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-plr84" 
event={"ID":"2ff2c225-e7b9-4d2b-bb81-30147198a90d","Type":"ContainerStarted","Data":"79bcf12f7f568dfb465a84475809581a391a7a9f5095eea16d17bbbbe2bd89cb"} Nov 22 12:21:35 crc kubenswrapper[4772]: I1122 12:21:35.978265 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-d8bb48f5d-plr84" Nov 22 12:21:35 crc kubenswrapper[4772]: I1122 12:21:35.981380 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-68d77686b6-nzh6j" event={"ID":"0dd53222-e946-4ee9-8d69-a2b4fd675e99","Type":"ContainerStarted","Data":"10d9f6ce9610421caaf13a426598ca71d4ae3a76519c0581537af4cd47036680"} Nov 22 12:21:35 crc kubenswrapper[4772]: I1122 12:21:35.983605 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-d8bb48f5d-plr84" Nov 22 12:21:35 crc kubenswrapper[4772]: I1122 12:21:35.990277 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-2pfzn" event={"ID":"643d51b2-9282-4236-b0dd-282638c093e2","Type":"ContainerStarted","Data":"551f224729ef84690dac8495dffb57c86a96a07dd585e4aa2ad08a36b8d92620"} Nov 22 12:21:35 crc kubenswrapper[4772]: I1122 12:21:35.991144 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5446b9c989-2pfzn" Nov 22 12:21:35 crc kubenswrapper[4772]: I1122 12:21:35.993384 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-btm2b" podStartSLOduration=6.3936979879999996 podStartE2EDuration="11.993365977s" podCreationTimestamp="2025-11-22 12:21:24 +0000 UTC" firstStartedPulling="2025-11-22 12:21:25.928695158 +0000 UTC m=+6206.168139652" lastFinishedPulling="2025-11-22 12:21:31.528363147 +0000 UTC m=+6211.767807641" observedRunningTime="2025-11-22 12:21:35.985519741 +0000 UTC m=+6216.224964235" watchObservedRunningTime="2025-11-22 12:21:35.993365977 +0000 UTC m=+6216.232810471" Nov 22 12:21:36 crc kubenswrapper[4772]: I1122 12:21:36.022361 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-68d77686b6-xvpkf" podStartSLOduration=6.545728502 podStartE2EDuration="12.022338538s" podCreationTimestamp="2025-11-22 12:21:24 +0000 UTC" firstStartedPulling="2025-11-22 12:21:26.0432683 +0000 UTC m=+6206.282712794" lastFinishedPulling="2025-11-22 12:21:31.519878336 +0000 UTC m=+6211.759322830" observedRunningTime="2025-11-22 12:21:36.012402181 +0000 UTC m=+6216.251846675" watchObservedRunningTime="2025-11-22 12:21:36.022338538 +0000 UTC m=+6216.261783032" Nov 22 12:21:36 crc kubenswrapper[4772]: I1122 12:21:36.054567 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-d8bb48f5d-plr84" podStartSLOduration=2.492052273 podStartE2EDuration="11.05454386s" podCreationTimestamp="2025-11-22 12:21:25 +0000 UTC" firstStartedPulling="2025-11-22 12:21:26.238647805 +0000 UTC m=+6206.478092299" lastFinishedPulling="2025-11-22 12:21:34.801139392 +0000 UTC m=+6215.040583886" observedRunningTime="2025-11-22 12:21:36.054119549 +0000 UTC m=+6216.293564043" watchObservedRunningTime="2025-11-22 12:21:36.05454386 +0000 UTC m=+6216.293988354" Nov 22 12:21:36 crc kubenswrapper[4772]: I1122 12:21:36.101463 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-68d77686b6-nzh6j" podStartSLOduration=6.6910403800000005 podStartE2EDuration="12.101436107s" podCreationTimestamp="2025-11-22 12:21:24 +0000 UTC" firstStartedPulling="2025-11-22 12:21:26.109444678 +0000 UTC m=+6206.348889172" lastFinishedPulling="2025-11-22 12:21:31.519840405 +0000 UTC m=+6211.759284899" observedRunningTime="2025-11-22 12:21:36.096502694 +0000 UTC m=+6216.335947188" watchObservedRunningTime="2025-11-22 12:21:36.101436107 +0000 UTC m=+6216.340880601" Nov 22 12:21:36 crc kubenswrapper[4772]: I1122 12:21:36.148988 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5446b9c989-2pfzn" podStartSLOduration=6.031105627 podStartE2EDuration="11.148964621s" podCreationTimestamp="2025-11-22 12:21:25 +0000 UTC" firstStartedPulling="2025-11-22 12:21:26.435942417 +0000 UTC m=+6206.675386911" lastFinishedPulling="2025-11-22 12:21:31.553801411 +0000 UTC m=+6211.793245905" observedRunningTime="2025-11-22 12:21:36.119388564 +0000 UTC m=+6216.358833058" watchObservedRunningTime="2025-11-22 12:21:36.148964621 +0000 UTC m=+6216.388409115" Nov 22 12:21:39 crc kubenswrapper[4772]: I1122 12:21:39.050816 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-cmq55"] Nov 22 12:21:39 crc kubenswrapper[4772]: I1122 12:21:39.065711 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-cmq55"] Nov 22 12:21:39 crc kubenswrapper[4772]: I1122 12:21:39.434577 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d84fec3-5230-479b-8b1e-4ca40adc8928" path="/var/lib/kubelet/pods/8d84fec3-5230-479b-8b1e-4ca40adc8928/volumes" Nov 22 12:21:45 crc kubenswrapper[4772]: I1122 12:21:45.743662 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5446b9c989-2pfzn" Nov 22 12:21:47 crc kubenswrapper[4772]: I1122 12:21:47.991700 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Nov 22 12:21:47 crc kubenswrapper[4772]: I1122 12:21:47.992806 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="242432e7-1b75-4aeb-9ea7-c8c790f242a9" containerName="openstackclient" containerID="cri-o://e59d13e4a70c95777103cbe263f159518f98532e57de53b8c01e007e1c65a7a0" gracePeriod=2 Nov 22 12:21:48 crc kubenswrapper[4772]: I1122 12:21:48.002058 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Nov 22 12:21:48 crc kubenswrapper[4772]: I1122 12:21:48.137866 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 22 12:21:48 crc kubenswrapper[4772]: E1122 12:21:48.138456 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="242432e7-1b75-4aeb-9ea7-c8c790f242a9" containerName="openstackclient" Nov 22 12:21:48 crc kubenswrapper[4772]: I1122 12:21:48.138474 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="242432e7-1b75-4aeb-9ea7-c8c790f242a9" containerName="openstackclient" Nov 22 12:21:48 crc kubenswrapper[4772]: I1122 12:21:48.138684 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="242432e7-1b75-4aeb-9ea7-c8c790f242a9" containerName="openstackclient" Nov 22 12:21:48 crc kubenswrapper[4772]: I1122 12:21:48.148177 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 22 12:21:48 crc kubenswrapper[4772]: I1122 12:21:48.165411 4772 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="242432e7-1b75-4aeb-9ea7-c8c790f242a9" podUID="6a859593-08fa-42bc-9d33-defd6ad05df9" Nov 22 12:21:48 crc kubenswrapper[4772]: I1122 12:21:48.171262 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 22 12:21:48 crc kubenswrapper[4772]: I1122 12:21:48.306421 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6a859593-08fa-42bc-9d33-defd6ad05df9-openstack-config-secret\") pod \"openstackclient\" (UID: \"6a859593-08fa-42bc-9d33-defd6ad05df9\") " pod="openstack/openstackclient" Nov 22 12:21:48 crc kubenswrapper[4772]: I1122 12:21:48.306587 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsgzb\" (UniqueName: \"kubernetes.io/projected/6a859593-08fa-42bc-9d33-defd6ad05df9-kube-api-access-fsgzb\") pod \"openstackclient\" (UID: \"6a859593-08fa-42bc-9d33-defd6ad05df9\") " pod="openstack/openstackclient" Nov 22 12:21:48 crc kubenswrapper[4772]: I1122 12:21:48.306613 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6a859593-08fa-42bc-9d33-defd6ad05df9-openstack-config\") pod \"openstackclient\" (UID: \"6a859593-08fa-42bc-9d33-defd6ad05df9\") " pod="openstack/openstackclient" Nov 22 12:21:48 crc kubenswrapper[4772]: I1122 12:21:48.409086 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsgzb\" (UniqueName: \"kubernetes.io/projected/6a859593-08fa-42bc-9d33-defd6ad05df9-kube-api-access-fsgzb\") pod \"openstackclient\" (UID: \"6a859593-08fa-42bc-9d33-defd6ad05df9\") " pod="openstack/openstackclient" Nov 22 12:21:48 crc kubenswrapper[4772]: I1122 12:21:48.409137 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6a859593-08fa-42bc-9d33-defd6ad05df9-openstack-config\") pod \"openstackclient\" (UID: \"6a859593-08fa-42bc-9d33-defd6ad05df9\") " pod="openstack/openstackclient" Nov 22 12:21:48 crc kubenswrapper[4772]: I1122 12:21:48.409211 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6a859593-08fa-42bc-9d33-defd6ad05df9-openstack-config-secret\") pod \"openstackclient\" (UID: \"6a859593-08fa-42bc-9d33-defd6ad05df9\") " pod="openstack/openstackclient" Nov 22 12:21:48 crc kubenswrapper[4772]: I1122 12:21:48.410586 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6a859593-08fa-42bc-9d33-defd6ad05df9-openstack-config\") pod \"openstackclient\" (UID: \"6a859593-08fa-42bc-9d33-defd6ad05df9\") " pod="openstack/openstackclient" Nov 22 12:21:48 crc kubenswrapper[4772]: I1122 12:21:48.441920 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6a859593-08fa-42bc-9d33-defd6ad05df9-openstack-config-secret\") pod \"openstackclient\" (UID: \"6a859593-08fa-42bc-9d33-defd6ad05df9\") " pod="openstack/openstackclient" Nov 22 12:21:48 crc 
kubenswrapper[4772]: I1122 12:21:48.454648 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsgzb\" (UniqueName: \"kubernetes.io/projected/6a859593-08fa-42bc-9d33-defd6ad05df9-kube-api-access-fsgzb\") pod \"openstackclient\" (UID: \"6a859593-08fa-42bc-9d33-defd6ad05df9\") " pod="openstack/openstackclient" Nov 22 12:21:48 crc kubenswrapper[4772]: I1122 12:21:48.499912 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 22 12:21:48 crc kubenswrapper[4772]: I1122 12:21:48.735917 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 22 12:21:48 crc kubenswrapper[4772]: I1122 12:21:48.747406 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 22 12:21:48 crc kubenswrapper[4772]: I1122 12:21:48.757936 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-56jsk" Nov 22 12:21:48 crc kubenswrapper[4772]: I1122 12:21:48.790909 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 22 12:21:48 crc kubenswrapper[4772]: I1122 12:21:48.927859 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpgx6\" (UniqueName: \"kubernetes.io/projected/bb9d0b8b-8de9-4563-98a5-0f494973330e-kube-api-access-kpgx6\") pod \"kube-state-metrics-0\" (UID: \"bb9d0b8b-8de9-4563-98a5-0f494973330e\") " pod="openstack/kube-state-metrics-0" Nov 22 12:21:49 crc kubenswrapper[4772]: I1122 12:21:49.029885 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpgx6\" (UniqueName: \"kubernetes.io/projected/bb9d0b8b-8de9-4563-98a5-0f494973330e-kube-api-access-kpgx6\") pod \"kube-state-metrics-0\" (UID: \"bb9d0b8b-8de9-4563-98a5-0f494973330e\") " pod="openstack/kube-state-metrics-0" Nov 22 12:21:49 crc kubenswrapper[4772]: I1122 12:21:49.060322 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpgx6\" (UniqueName: \"kubernetes.io/projected/bb9d0b8b-8de9-4563-98a5-0f494973330e-kube-api-access-kpgx6\") pod \"kube-state-metrics-0\" (UID: \"bb9d0b8b-8de9-4563-98a5-0f494973330e\") " pod="openstack/kube-state-metrics-0" Nov 22 12:21:49 crc kubenswrapper[4772]: I1122 12:21:49.190419 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 22 12:21:49 crc kubenswrapper[4772]: I1122 12:21:49.629513 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/alertmanager-metric-storage-0"] Nov 22 12:21:49 crc kubenswrapper[4772]: I1122 12:21:49.633647 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Nov 22 12:21:49 crc kubenswrapper[4772]: I1122 12:21:49.639939 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-cluster-tls-config" Nov 22 12:21:49 crc kubenswrapper[4772]: I1122 12:21:49.642380 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-generated" Nov 22 12:21:49 crc kubenswrapper[4772]: I1122 12:21:49.642412 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-alertmanager-dockercfg-p9559" Nov 22 12:21:49 crc kubenswrapper[4772]: I1122 12:21:49.642611 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-tls-assets-0" Nov 22 12:21:49 crc kubenswrapper[4772]: I1122 12:21:49.642681 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-web-config" Nov 22 12:21:49 crc kubenswrapper[4772]: I1122 12:21:49.663097 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Nov 22 12:21:49 crc kubenswrapper[4772]: I1122 12:21:49.726167 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qskd9\" (UniqueName: \"kubernetes.io/projected/ee08a124-951b-4447-ad8a-7178df62bc7c-kube-api-access-qskd9\") pod \"alertmanager-metric-storage-0\" (UID: \"ee08a124-951b-4447-ad8a-7178df62bc7c\") " pod="openstack/alertmanager-metric-storage-0" Nov 22 12:21:49 crc kubenswrapper[4772]: I1122 12:21:49.726226 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/ee08a124-951b-4447-ad8a-7178df62bc7c-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"ee08a124-951b-4447-ad8a-7178df62bc7c\") " pod="openstack/alertmanager-metric-storage-0" Nov 22 12:21:49 crc kubenswrapper[4772]: I1122 12:21:49.726264 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ee08a124-951b-4447-ad8a-7178df62bc7c-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"ee08a124-951b-4447-ad8a-7178df62bc7c\") " pod="openstack/alertmanager-metric-storage-0" Nov 22 12:21:49 crc kubenswrapper[4772]: I1122 12:21:49.726390 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/ee08a124-951b-4447-ad8a-7178df62bc7c-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"ee08a124-951b-4447-ad8a-7178df62bc7c\") " pod="openstack/alertmanager-metric-storage-0" Nov 22 12:21:49 crc kubenswrapper[4772]: I1122 12:21:49.726459 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/ee08a124-951b-4447-ad8a-7178df62bc7c-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"ee08a124-951b-4447-ad8a-7178df62bc7c\") " pod="openstack/alertmanager-metric-storage-0" Nov 22 12:21:49 crc kubenswrapper[4772]: I1122 12:21:49.726509 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ee08a124-951b-4447-ad8a-7178df62bc7c-web-config\") pod 
\"alertmanager-metric-storage-0\" (UID: \"ee08a124-951b-4447-ad8a-7178df62bc7c\") " pod="openstack/alertmanager-metric-storage-0" Nov 22 12:21:49 crc kubenswrapper[4772]: I1122 12:21:49.726557 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ee08a124-951b-4447-ad8a-7178df62bc7c-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"ee08a124-951b-4447-ad8a-7178df62bc7c\") " pod="openstack/alertmanager-metric-storage-0" Nov 22 12:21:49 crc kubenswrapper[4772]: I1122 12:21:49.829014 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/ee08a124-951b-4447-ad8a-7178df62bc7c-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"ee08a124-951b-4447-ad8a-7178df62bc7c\") " pod="openstack/alertmanager-metric-storage-0" Nov 22 12:21:49 crc kubenswrapper[4772]: I1122 12:21:49.829096 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ee08a124-951b-4447-ad8a-7178df62bc7c-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"ee08a124-951b-4447-ad8a-7178df62bc7c\") " pod="openstack/alertmanager-metric-storage-0" Nov 22 12:21:49 crc kubenswrapper[4772]: I1122 12:21:49.829180 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/ee08a124-951b-4447-ad8a-7178df62bc7c-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"ee08a124-951b-4447-ad8a-7178df62bc7c\") " pod="openstack/alertmanager-metric-storage-0" Nov 22 12:21:49 crc kubenswrapper[4772]: I1122 12:21:49.829263 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/ee08a124-951b-4447-ad8a-7178df62bc7c-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"ee08a124-951b-4447-ad8a-7178df62bc7c\") " pod="openstack/alertmanager-metric-storage-0" Nov 22 12:21:49 crc kubenswrapper[4772]: I1122 12:21:49.829316 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ee08a124-951b-4447-ad8a-7178df62bc7c-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"ee08a124-951b-4447-ad8a-7178df62bc7c\") " pod="openstack/alertmanager-metric-storage-0" Nov 22 12:21:49 crc kubenswrapper[4772]: I1122 12:21:49.829361 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ee08a124-951b-4447-ad8a-7178df62bc7c-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"ee08a124-951b-4447-ad8a-7178df62bc7c\") " pod="openstack/alertmanager-metric-storage-0" Nov 22 12:21:49 crc kubenswrapper[4772]: I1122 12:21:49.829403 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qskd9\" (UniqueName: \"kubernetes.io/projected/ee08a124-951b-4447-ad8a-7178df62bc7c-kube-api-access-qskd9\") pod \"alertmanager-metric-storage-0\" (UID: \"ee08a124-951b-4447-ad8a-7178df62bc7c\") " pod="openstack/alertmanager-metric-storage-0" Nov 22 12:21:49 crc kubenswrapper[4772]: I1122 12:21:49.829633 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: 
\"kubernetes.io/empty-dir/ee08a124-951b-4447-ad8a-7178df62bc7c-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"ee08a124-951b-4447-ad8a-7178df62bc7c\") " pod="openstack/alertmanager-metric-storage-0" Nov 22 12:21:49 crc kubenswrapper[4772]: I1122 12:21:49.846066 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/ee08a124-951b-4447-ad8a-7178df62bc7c-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"ee08a124-951b-4447-ad8a-7178df62bc7c\") " pod="openstack/alertmanager-metric-storage-0" Nov 22 12:21:49 crc kubenswrapper[4772]: I1122 12:21:49.862436 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ee08a124-951b-4447-ad8a-7178df62bc7c-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"ee08a124-951b-4447-ad8a-7178df62bc7c\") " pod="openstack/alertmanager-metric-storage-0" Nov 22 12:21:49 crc kubenswrapper[4772]: I1122 12:21:49.864545 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ee08a124-951b-4447-ad8a-7178df62bc7c-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"ee08a124-951b-4447-ad8a-7178df62bc7c\") " pod="openstack/alertmanager-metric-storage-0" Nov 22 12:21:49 crc kubenswrapper[4772]: I1122 12:21:49.865669 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/ee08a124-951b-4447-ad8a-7178df62bc7c-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"ee08a124-951b-4447-ad8a-7178df62bc7c\") " pod="openstack/alertmanager-metric-storage-0" Nov 22 12:21:49 crc kubenswrapper[4772]: I1122 12:21:49.865709 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ee08a124-951b-4447-ad8a-7178df62bc7c-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"ee08a124-951b-4447-ad8a-7178df62bc7c\") " pod="openstack/alertmanager-metric-storage-0" Nov 22 12:21:49 crc kubenswrapper[4772]: I1122 12:21:49.868390 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 22 12:21:49 crc kubenswrapper[4772]: I1122 12:21:49.872777 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qskd9\" (UniqueName: \"kubernetes.io/projected/ee08a124-951b-4447-ad8a-7178df62bc7c-kube-api-access-qskd9\") pod \"alertmanager-metric-storage-0\" (UID: \"ee08a124-951b-4447-ad8a-7178df62bc7c\") " pod="openstack/alertmanager-metric-storage-0" Nov 22 12:21:49 crc kubenswrapper[4772]: W1122 12:21:49.880300 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a859593_08fa_42bc_9d33_defd6ad05df9.slice/crio-2ca0d83c815fd4186c1ccb3ef82b252bc1b4b3c1dc03b00d1a3986e588a32430 WatchSource:0}: Error finding container 2ca0d83c815fd4186c1ccb3ef82b252bc1b4b3c1dc03b00d1a3986e588a32430: Status 404 returned error can't find the container with id 2ca0d83c815fd4186c1ccb3ef82b252bc1b4b3c1dc03b00d1a3986e588a32430 Nov 22 12:21:49 crc kubenswrapper[4772]: I1122 12:21:49.987178 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Nov 22 12:21:50 crc kubenswrapper[4772]: I1122 12:21:50.081874 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 22 12:21:50 crc kubenswrapper[4772]: I1122 12:21:50.153403 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 22 12:21:50 crc kubenswrapper[4772]: I1122 12:21:50.156223 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 22 12:21:50 crc kubenswrapper[4772]: I1122 12:21:50.159805 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Nov 22 12:21:50 crc kubenswrapper[4772]: I1122 12:21:50.159850 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Nov 22 12:21:50 crc kubenswrapper[4772]: I1122 12:21:50.160026 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Nov 22 12:21:50 crc kubenswrapper[4772]: I1122 12:21:50.160315 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-vrzd7" Nov 22 12:21:50 crc kubenswrapper[4772]: I1122 12:21:50.160501 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Nov 22 12:21:50 crc kubenswrapper[4772]: I1122 12:21:50.169608 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Nov 22 12:21:50 crc kubenswrapper[4772]: I1122 12:21:50.194949 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 22 12:21:50 crc kubenswrapper[4772]: W1122 12:21:50.215794 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb9d0b8b_8de9_4563_98a5_0f494973330e.slice/crio-67299f03c458e5a727bb244e204eb505b693eed14f1ce5309039a4e06dee1aad WatchSource:0}: Error finding container 67299f03c458e5a727bb244e204eb505b693eed14f1ce5309039a4e06dee1aad: Status 404 returned error can't find the container with id 67299f03c458e5a727bb244e204eb505b693eed14f1ce5309039a4e06dee1aad Nov 22 12:21:50 crc kubenswrapper[4772]: I1122 12:21:50.247448 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"6a859593-08fa-42bc-9d33-defd6ad05df9","Type":"ContainerStarted","Data":"2ca0d83c815fd4186c1ccb3ef82b252bc1b4b3c1dc03b00d1a3986e588a32430"} Nov 22 12:21:50 crc kubenswrapper[4772]: I1122 12:21:50.257544 4772 generic.go:334] "Generic (PLEG): container finished" podID="242432e7-1b75-4aeb-9ea7-c8c790f242a9" containerID="e59d13e4a70c95777103cbe263f159518f98532e57de53b8c01e007e1c65a7a0" exitCode=137 Nov 22 12:21:50 crc kubenswrapper[4772]: I1122 12:21:50.273864 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"bb9d0b8b-8de9-4563-98a5-0f494973330e","Type":"ContainerStarted","Data":"67299f03c458e5a727bb244e204eb505b693eed14f1ce5309039a4e06dee1aad"} Nov 22 12:21:50 crc kubenswrapper[4772]: I1122 12:21:50.356767 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: 
\"kubernetes.io/secret/248a6987-edb6-4837-9d52-ee1144ad1996-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"248a6987-edb6-4837-9d52-ee1144ad1996\") " pod="openstack/prometheus-metric-storage-0" Nov 22 12:21:50 crc kubenswrapper[4772]: I1122 12:21:50.356868 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/248a6987-edb6-4837-9d52-ee1144ad1996-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"248a6987-edb6-4837-9d52-ee1144ad1996\") " pod="openstack/prometheus-metric-storage-0" Nov 22 12:21:50 crc kubenswrapper[4772]: I1122 12:21:50.357225 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/248a6987-edb6-4837-9d52-ee1144ad1996-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"248a6987-edb6-4837-9d52-ee1144ad1996\") " pod="openstack/prometheus-metric-storage-0" Nov 22 12:21:50 crc kubenswrapper[4772]: I1122 12:21:50.357288 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/248a6987-edb6-4837-9d52-ee1144ad1996-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"248a6987-edb6-4837-9d52-ee1144ad1996\") " pod="openstack/prometheus-metric-storage-0" Nov 22 12:21:50 crc kubenswrapper[4772]: I1122 12:21:50.357347 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-37189c08-4ce8-42eb-bd3f-52d95f083abb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-37189c08-4ce8-42eb-bd3f-52d95f083abb\") pod \"prometheus-metric-storage-0\" (UID: \"248a6987-edb6-4837-9d52-ee1144ad1996\") " pod="openstack/prometheus-metric-storage-0" Nov 22 12:21:50 crc kubenswrapper[4772]: I1122 12:21:50.357369 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/248a6987-edb6-4837-9d52-ee1144ad1996-config\") pod \"prometheus-metric-storage-0\" (UID: \"248a6987-edb6-4837-9d52-ee1144ad1996\") " pod="openstack/prometheus-metric-storage-0" Nov 22 12:21:50 crc kubenswrapper[4772]: I1122 12:21:50.357462 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/248a6987-edb6-4837-9d52-ee1144ad1996-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"248a6987-edb6-4837-9d52-ee1144ad1996\") " pod="openstack/prometheus-metric-storage-0" Nov 22 12:21:50 crc kubenswrapper[4772]: I1122 12:21:50.357658 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-878j7\" (UniqueName: \"kubernetes.io/projected/248a6987-edb6-4837-9d52-ee1144ad1996-kube-api-access-878j7\") pod \"prometheus-metric-storage-0\" (UID: \"248a6987-edb6-4837-9d52-ee1144ad1996\") " pod="openstack/prometheus-metric-storage-0" Nov 22 12:21:50 crc kubenswrapper[4772]: I1122 12:21:50.460171 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/248a6987-edb6-4837-9d52-ee1144ad1996-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"248a6987-edb6-4837-9d52-ee1144ad1996\") " 
pod="openstack/prometheus-metric-storage-0" Nov 22 12:21:50 crc kubenswrapper[4772]: I1122 12:21:50.460712 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/248a6987-edb6-4837-9d52-ee1144ad1996-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"248a6987-edb6-4837-9d52-ee1144ad1996\") " pod="openstack/prometheus-metric-storage-0" Nov 22 12:21:50 crc kubenswrapper[4772]: I1122 12:21:50.460732 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/248a6987-edb6-4837-9d52-ee1144ad1996-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"248a6987-edb6-4837-9d52-ee1144ad1996\") " pod="openstack/prometheus-metric-storage-0" Nov 22 12:21:50 crc kubenswrapper[4772]: I1122 12:21:50.460755 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-37189c08-4ce8-42eb-bd3f-52d95f083abb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-37189c08-4ce8-42eb-bd3f-52d95f083abb\") pod \"prometheus-metric-storage-0\" (UID: \"248a6987-edb6-4837-9d52-ee1144ad1996\") " pod="openstack/prometheus-metric-storage-0" Nov 22 12:21:50 crc kubenswrapper[4772]: I1122 12:21:50.460775 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/248a6987-edb6-4837-9d52-ee1144ad1996-config\") pod \"prometheus-metric-storage-0\" (UID: \"248a6987-edb6-4837-9d52-ee1144ad1996\") " pod="openstack/prometheus-metric-storage-0" Nov 22 12:21:50 crc kubenswrapper[4772]: I1122 12:21:50.460806 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/248a6987-edb6-4837-9d52-ee1144ad1996-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"248a6987-edb6-4837-9d52-ee1144ad1996\") " pod="openstack/prometheus-metric-storage-0" Nov 22 12:21:50 crc kubenswrapper[4772]: I1122 12:21:50.460856 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-878j7\" (UniqueName: \"kubernetes.io/projected/248a6987-edb6-4837-9d52-ee1144ad1996-kube-api-access-878j7\") pod \"prometheus-metric-storage-0\" (UID: \"248a6987-edb6-4837-9d52-ee1144ad1996\") " pod="openstack/prometheus-metric-storage-0" Nov 22 12:21:50 crc kubenswrapper[4772]: I1122 12:21:50.460916 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/248a6987-edb6-4837-9d52-ee1144ad1996-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"248a6987-edb6-4837-9d52-ee1144ad1996\") " pod="openstack/prometheus-metric-storage-0" Nov 22 12:21:50 crc kubenswrapper[4772]: I1122 12:21:50.461406 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/248a6987-edb6-4837-9d52-ee1144ad1996-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"248a6987-edb6-4837-9d52-ee1144ad1996\") " pod="openstack/prometheus-metric-storage-0" Nov 22 12:21:50 crc kubenswrapper[4772]: I1122 12:21:50.471934 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/248a6987-edb6-4837-9d52-ee1144ad1996-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"248a6987-edb6-4837-9d52-ee1144ad1996\") " 
pod="openstack/prometheus-metric-storage-0" Nov 22 12:21:50 crc kubenswrapper[4772]: I1122 12:21:50.476582 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/248a6987-edb6-4837-9d52-ee1144ad1996-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"248a6987-edb6-4837-9d52-ee1144ad1996\") " pod="openstack/prometheus-metric-storage-0" Nov 22 12:21:50 crc kubenswrapper[4772]: I1122 12:21:50.478738 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/248a6987-edb6-4837-9d52-ee1144ad1996-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"248a6987-edb6-4837-9d52-ee1144ad1996\") " pod="openstack/prometheus-metric-storage-0" Nov 22 12:21:50 crc kubenswrapper[4772]: I1122 12:21:50.496566 4772 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 22 12:21:50 crc kubenswrapper[4772]: I1122 12:21:50.496623 4772 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-37189c08-4ce8-42eb-bd3f-52d95f083abb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-37189c08-4ce8-42eb-bd3f-52d95f083abb\") pod \"prometheus-metric-storage-0\" (UID: \"248a6987-edb6-4837-9d52-ee1144ad1996\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/46dba152d70a0063b02f14a75e3e382d42f3d97ab652667da0e163008a3b11af/globalmount\"" pod="openstack/prometheus-metric-storage-0" Nov 22 12:21:50 crc kubenswrapper[4772]: I1122 12:21:50.497457 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/248a6987-edb6-4837-9d52-ee1144ad1996-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"248a6987-edb6-4837-9d52-ee1144ad1996\") " pod="openstack/prometheus-metric-storage-0" Nov 22 12:21:50 crc kubenswrapper[4772]: I1122 12:21:50.524609 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-878j7\" (UniqueName: \"kubernetes.io/projected/248a6987-edb6-4837-9d52-ee1144ad1996-kube-api-access-878j7\") pod \"prometheus-metric-storage-0\" (UID: \"248a6987-edb6-4837-9d52-ee1144ad1996\") " pod="openstack/prometheus-metric-storage-0" Nov 22 12:21:50 crc kubenswrapper[4772]: I1122 12:21:50.525752 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/248a6987-edb6-4837-9d52-ee1144ad1996-config\") pod \"prometheus-metric-storage-0\" (UID: \"248a6987-edb6-4837-9d52-ee1144ad1996\") " pod="openstack/prometheus-metric-storage-0" Nov 22 12:21:50 crc kubenswrapper[4772]: I1122 12:21:50.752895 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-37189c08-4ce8-42eb-bd3f-52d95f083abb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-37189c08-4ce8-42eb-bd3f-52d95f083abb\") pod \"prometheus-metric-storage-0\" (UID: \"248a6987-edb6-4837-9d52-ee1144ad1996\") " pod="openstack/prometheus-metric-storage-0" Nov 22 12:21:50 crc kubenswrapper[4772]: I1122 12:21:50.782515 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 22 12:21:50 crc kubenswrapper[4772]: I1122 12:21:50.827197 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 22 12:21:50 crc kubenswrapper[4772]: I1122 12:21:50.879253 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xl5b\" (UniqueName: \"kubernetes.io/projected/242432e7-1b75-4aeb-9ea7-c8c790f242a9-kube-api-access-8xl5b\") pod \"242432e7-1b75-4aeb-9ea7-c8c790f242a9\" (UID: \"242432e7-1b75-4aeb-9ea7-c8c790f242a9\") " Nov 22 12:21:50 crc kubenswrapper[4772]: I1122 12:21:50.879305 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/242432e7-1b75-4aeb-9ea7-c8c790f242a9-openstack-config\") pod \"242432e7-1b75-4aeb-9ea7-c8c790f242a9\" (UID: \"242432e7-1b75-4aeb-9ea7-c8c790f242a9\") " Nov 22 12:21:50 crc kubenswrapper[4772]: I1122 12:21:50.879493 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/242432e7-1b75-4aeb-9ea7-c8c790f242a9-openstack-config-secret\") pod \"242432e7-1b75-4aeb-9ea7-c8c790f242a9\" (UID: \"242432e7-1b75-4aeb-9ea7-c8c790f242a9\") " Nov 22 12:21:50 crc kubenswrapper[4772]: I1122 12:21:50.887337 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Nov 22 12:21:50 crc kubenswrapper[4772]: I1122 12:21:50.888683 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/242432e7-1b75-4aeb-9ea7-c8c790f242a9-kube-api-access-8xl5b" (OuterVolumeSpecName: "kube-api-access-8xl5b") pod "242432e7-1b75-4aeb-9ea7-c8c790f242a9" (UID: "242432e7-1b75-4aeb-9ea7-c8c790f242a9"). InnerVolumeSpecName "kube-api-access-8xl5b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:21:50 crc kubenswrapper[4772]: I1122 12:21:50.933979 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/242432e7-1b75-4aeb-9ea7-c8c790f242a9-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "242432e7-1b75-4aeb-9ea7-c8c790f242a9" (UID: "242432e7-1b75-4aeb-9ea7-c8c790f242a9"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 12:21:50 crc kubenswrapper[4772]: I1122 12:21:50.958068 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/242432e7-1b75-4aeb-9ea7-c8c790f242a9-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "242432e7-1b75-4aeb-9ea7-c8c790f242a9" (UID: "242432e7-1b75-4aeb-9ea7-c8c790f242a9"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:21:50 crc kubenswrapper[4772]: I1122 12:21:50.984241 4772 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/242432e7-1b75-4aeb-9ea7-c8c790f242a9-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 22 12:21:50 crc kubenswrapper[4772]: I1122 12:21:50.984298 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xl5b\" (UniqueName: \"kubernetes.io/projected/242432e7-1b75-4aeb-9ea7-c8c790f242a9-kube-api-access-8xl5b\") on node \"crc\" DevicePath \"\"" Nov 22 12:21:50 crc kubenswrapper[4772]: I1122 12:21:50.984312 4772 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/242432e7-1b75-4aeb-9ea7-c8c790f242a9-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 22 12:21:51 crc kubenswrapper[4772]: I1122 12:21:51.288403 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 22 12:21:51 crc kubenswrapper[4772]: I1122 12:21:51.288451 4772 scope.go:117] "RemoveContainer" containerID="e59d13e4a70c95777103cbe263f159518f98532e57de53b8c01e007e1c65a7a0" Nov 22 12:21:51 crc kubenswrapper[4772]: I1122 12:21:51.290978 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"ee08a124-951b-4447-ad8a-7178df62bc7c","Type":"ContainerStarted","Data":"2c63ad752e291807a8ab939fd8fe758e39df5e75896983ae297f53d9ff6d72e2"} Nov 22 12:21:51 crc kubenswrapper[4772]: I1122 12:21:51.293514 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"bb9d0b8b-8de9-4563-98a5-0f494973330e","Type":"ContainerStarted","Data":"50aef3b40c126712e35ca8899584ff8e3e53807d4651145195a07afeb7b05b7c"} Nov 22 12:21:51 crc kubenswrapper[4772]: I1122 12:21:51.293729 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 22 12:21:51 crc kubenswrapper[4772]: I1122 12:21:51.295988 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"6a859593-08fa-42bc-9d33-defd6ad05df9","Type":"ContainerStarted","Data":"5b61bfc60994366a6c1551e784d64721d8726d6c1f0616ea60c9832731e17b1a"} Nov 22 12:21:51 crc kubenswrapper[4772]: I1122 12:21:51.320023 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.81725581 podStartE2EDuration="3.319971467s" podCreationTimestamp="2025-11-22 12:21:48 +0000 UTC" firstStartedPulling="2025-11-22 12:21:50.231706761 +0000 UTC m=+6230.471151255" lastFinishedPulling="2025-11-22 12:21:50.734422418 +0000 UTC m=+6230.973866912" observedRunningTime="2025-11-22 12:21:51.315208849 +0000 UTC m=+6231.554653343" watchObservedRunningTime="2025-11-22 12:21:51.319971467 +0000 UTC m=+6231.559415961" Nov 22 12:21:51 crc kubenswrapper[4772]: I1122 12:21:51.320820 4772 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="242432e7-1b75-4aeb-9ea7-c8c790f242a9" podUID="6a859593-08fa-42bc-9d33-defd6ad05df9" Nov 22 12:21:51 crc kubenswrapper[4772]: I1122 12:21:51.339725 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.339695248 podStartE2EDuration="3.339695248s" podCreationTimestamp="2025-11-22 12:21:48 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:21:51.338909379 +0000 UTC m=+6231.578353883" watchObservedRunningTime="2025-11-22 12:21:51.339695248 +0000 UTC m=+6231.579139752" Nov 22 12:21:51 crc kubenswrapper[4772]: I1122 12:21:51.395814 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 22 12:21:51 crc kubenswrapper[4772]: I1122 12:21:51.453244 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="242432e7-1b75-4aeb-9ea7-c8c790f242a9" path="/var/lib/kubelet/pods/242432e7-1b75-4aeb-9ea7-c8c790f242a9/volumes" Nov 22 12:21:52 crc kubenswrapper[4772]: I1122 12:21:52.059227 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-pr6j5"] Nov 22 12:21:52 crc kubenswrapper[4772]: I1122 12:21:52.072260 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-pr6j5"] Nov 22 12:21:52 crc kubenswrapper[4772]: I1122 12:21:52.322610 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"248a6987-edb6-4837-9d52-ee1144ad1996","Type":"ContainerStarted","Data":"5c8b9621d0343ce35094223de04cc700aa5d67587aeb6a1e7f795a1414a015ea"} Nov 22 12:21:53 crc kubenswrapper[4772]: I1122 12:21:53.034603 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-djw46"] Nov 22 12:21:53 crc kubenswrapper[4772]: I1122 12:21:53.052383 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-djw46"] Nov 22 12:21:53 crc kubenswrapper[4772]: I1122 12:21:53.431019 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="237dfef1-e073-42dc-8430-802b906015e7" path="/var/lib/kubelet/pods/237dfef1-e073-42dc-8430-802b906015e7/volumes" Nov 22 12:21:53 crc kubenswrapper[4772]: I1122 12:21:53.431873 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76dd9bf3-3e2b-4ff9-b1e2-598d8167622c" path="/var/lib/kubelet/pods/76dd9bf3-3e2b-4ff9-b1e2-598d8167622c/volumes" Nov 22 12:21:58 crc kubenswrapper[4772]: I1122 12:21:58.402473 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"248a6987-edb6-4837-9d52-ee1144ad1996","Type":"ContainerStarted","Data":"1a06cc5bd162f813d4592e35ace43a8b486a29c633528080957b16d4dd6f6b80"} Nov 22 12:21:58 crc kubenswrapper[4772]: I1122 12:21:58.406766 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"ee08a124-951b-4447-ad8a-7178df62bc7c","Type":"ContainerStarted","Data":"f59ea4249ecdb25b7d82c10289e81b03c06ad6dbfbc5e4e504f91c17be918491"} Nov 22 12:21:59 crc kubenswrapper[4772]: I1122 12:21:59.200789 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 22 12:22:01 crc kubenswrapper[4772]: I1122 12:22:01.533694 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 12:22:01 crc kubenswrapper[4772]: I1122 12:22:01.534613 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" 
podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 12:22:07 crc kubenswrapper[4772]: I1122 12:22:07.067248 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-k458h"] Nov 22 12:22:07 crc kubenswrapper[4772]: I1122 12:22:07.087715 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-k458h"] Nov 22 12:22:07 crc kubenswrapper[4772]: I1122 12:22:07.437963 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a50c316c-347c-455b-b0ec-93d5a1c07934" path="/var/lib/kubelet/pods/a50c316c-347c-455b-b0ec-93d5a1c07934/volumes" Nov 22 12:22:07 crc kubenswrapper[4772]: I1122 12:22:07.520902 4772 generic.go:334] "Generic (PLEG): container finished" podID="ee08a124-951b-4447-ad8a-7178df62bc7c" containerID="f59ea4249ecdb25b7d82c10289e81b03c06ad6dbfbc5e4e504f91c17be918491" exitCode=0 Nov 22 12:22:07 crc kubenswrapper[4772]: I1122 12:22:07.520969 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"ee08a124-951b-4447-ad8a-7178df62bc7c","Type":"ContainerDied","Data":"f59ea4249ecdb25b7d82c10289e81b03c06ad6dbfbc5e4e504f91c17be918491"} Nov 22 12:22:07 crc kubenswrapper[4772]: I1122 12:22:07.524674 4772 generic.go:334] "Generic (PLEG): container finished" podID="248a6987-edb6-4837-9d52-ee1144ad1996" containerID="1a06cc5bd162f813d4592e35ace43a8b486a29c633528080957b16d4dd6f6b80" exitCode=0 Nov 22 12:22:07 crc kubenswrapper[4772]: I1122 12:22:07.524712 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"248a6987-edb6-4837-9d52-ee1144ad1996","Type":"ContainerDied","Data":"1a06cc5bd162f813d4592e35ace43a8b486a29c633528080957b16d4dd6f6b80"} Nov 22 12:22:10 crc kubenswrapper[4772]: I1122 12:22:10.566217 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"ee08a124-951b-4447-ad8a-7178df62bc7c","Type":"ContainerStarted","Data":"fe52d8589e9a4c63dc4b993d39fcf4c65ee62467808bbb2d9a64d6ef856c5815"} Nov 22 12:22:14 crc kubenswrapper[4772]: I1122 12:22:14.152615 4772 scope.go:117] "RemoveContainer" containerID="8810095c6d564f0eb04055f48e6a73cfd9045a307dd7d90253b237943519663f" Nov 22 12:22:14 crc kubenswrapper[4772]: I1122 12:22:14.475958 4772 scope.go:117] "RemoveContainer" containerID="ea9aa0bf7c1baff74ec4f40b2e28d1595dbd35bab973ffd9d6ba78108553b35d" Nov 22 12:22:14 crc kubenswrapper[4772]: I1122 12:22:14.626273 4772 scope.go:117] "RemoveContainer" containerID="f4e1a0c1ca9b92ac0ed967aa10f58c252b6b67a351469ed7ac2d0c9d35d10d56" Nov 22 12:22:14 crc kubenswrapper[4772]: I1122 12:22:14.632472 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"ee08a124-951b-4447-ad8a-7178df62bc7c","Type":"ContainerStarted","Data":"ceb4400b8a22dae9c3cc063be9dc9a826dba6575cf78d825b424a8cc042a0d7c"} Nov 22 12:22:14 crc kubenswrapper[4772]: I1122 12:22:14.633086 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/alertmanager-metric-storage-0" Nov 22 12:22:14 crc kubenswrapper[4772]: I1122 12:22:14.653168 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/alertmanager-metric-storage-0" Nov 22 12:22:14 crc kubenswrapper[4772]: I1122 12:22:14.699911 4772 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/alertmanager-metric-storage-0" podStartSLOduration=6.592428512 podStartE2EDuration="25.699882156s" podCreationTimestamp="2025-11-22 12:21:49 +0000 UTC" firstStartedPulling="2025-11-22 12:21:50.909578119 +0000 UTC m=+6231.149022613" lastFinishedPulling="2025-11-22 12:22:10.017031763 +0000 UTC m=+6250.256476257" observedRunningTime="2025-11-22 12:22:14.689243861 +0000 UTC m=+6254.928688365" watchObservedRunningTime="2025-11-22 12:22:14.699882156 +0000 UTC m=+6254.939326650" Nov 22 12:22:14 crc kubenswrapper[4772]: I1122 12:22:14.786219 4772 scope.go:117] "RemoveContainer" containerID="ae14805617ce3032fa2ca167cec6a5c87257f96d8d9aeb775ef4cf1ebdca6ce0" Nov 22 12:22:14 crc kubenswrapper[4772]: I1122 12:22:14.879233 4772 scope.go:117] "RemoveContainer" containerID="587f321ffc74c7bea714d920af2c48f5b89382eb202c58b246a83ef042b21653" Nov 22 12:22:14 crc kubenswrapper[4772]: I1122 12:22:14.918193 4772 scope.go:117] "RemoveContainer" containerID="4b75f4b7dd1e6ec7e5650b4d1ad71341fa508812a0b07728ce9609fa7fd56f82" Nov 22 12:22:14 crc kubenswrapper[4772]: I1122 12:22:14.961042 4772 scope.go:117] "RemoveContainer" containerID="f1d5509ce400d11ad56b5e23986f1de49d1d1110bcd4949705fe7f5db456edf4" Nov 22 12:22:14 crc kubenswrapper[4772]: I1122 12:22:14.998706 4772 scope.go:117] "RemoveContainer" containerID="1d0435c73fc7da817dcfea45c654eaf0f5a9ebbfd50ce85c1abd07cd6c6168eb" Nov 22 12:22:15 crc kubenswrapper[4772]: I1122 12:22:15.057948 4772 scope.go:117] "RemoveContainer" containerID="975f8f00019cedb23c43c9153b469bed506b6112732c4558e3f1ae829c926dce" Nov 22 12:22:15 crc kubenswrapper[4772]: I1122 12:22:15.118480 4772 scope.go:117] "RemoveContainer" containerID="6dc644dd739b3dc370a09046e7e77f2a4e7e6130aa1be761e4e0f58e1446ba01" Nov 22 12:22:15 crc kubenswrapper[4772]: I1122 12:22:15.645890 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"248a6987-edb6-4837-9d52-ee1144ad1996","Type":"ContainerStarted","Data":"770fa79da9ea2792c5ea8e3dfd72ef4ebfb8cd4dcea2ad00fe3b37cbab336b5c"} Nov 22 12:22:19 crc kubenswrapper[4772]: I1122 12:22:19.698303 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"248a6987-edb6-4837-9d52-ee1144ad1996","Type":"ContainerStarted","Data":"e1bf8d8f01df6ffca50663ccb5b4bcf35b5868c290edebfc8536d9399ba0f89b"} Nov 22 12:22:22 crc kubenswrapper[4772]: I1122 12:22:22.733274 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"248a6987-edb6-4837-9d52-ee1144ad1996","Type":"ContainerStarted","Data":"db0a61ca76ea4e0b7f164c1bde76a84bf2ab7c657634ebe09477d8c6e0935ab4"} Nov 22 12:22:22 crc kubenswrapper[4772]: I1122 12:22:22.760484 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=2.676793461 podStartE2EDuration="33.760457887s" podCreationTimestamp="2025-11-22 12:21:49 +0000 UTC" firstStartedPulling="2025-11-22 12:21:51.41123747 +0000 UTC m=+6231.650681964" lastFinishedPulling="2025-11-22 12:22:22.494901896 +0000 UTC m=+6262.734346390" observedRunningTime="2025-11-22 12:22:22.753212777 +0000 UTC m=+6262.992657301" watchObservedRunningTime="2025-11-22 12:22:22.760457887 +0000 UTC m=+6262.999902391" Nov 22 12:22:25 crc kubenswrapper[4772]: I1122 12:22:25.828846 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/prometheus-metric-storage-0" Nov 22 12:22:29 crc kubenswrapper[4772]: I1122 12:22:29.457625 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 22 12:22:29 crc kubenswrapper[4772]: I1122 12:22:29.462403 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 12:22:29 crc kubenswrapper[4772]: I1122 12:22:29.466190 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 22 12:22:29 crc kubenswrapper[4772]: I1122 12:22:29.468339 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 12:22:29 crc kubenswrapper[4772]: I1122 12:22:29.476188 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 22 12:22:29 crc kubenswrapper[4772]: I1122 12:22:29.593968 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/58ba43f9-5062-4b50-b5c3-14bcf3742218-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"58ba43f9-5062-4b50-b5c3-14bcf3742218\") " pod="openstack/ceilometer-0" Nov 22 12:22:29 crc kubenswrapper[4772]: I1122 12:22:29.594429 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58ba43f9-5062-4b50-b5c3-14bcf3742218-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"58ba43f9-5062-4b50-b5c3-14bcf3742218\") " pod="openstack/ceilometer-0" Nov 22 12:22:29 crc kubenswrapper[4772]: I1122 12:22:29.594511 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58ba43f9-5062-4b50-b5c3-14bcf3742218-config-data\") pod \"ceilometer-0\" (UID: \"58ba43f9-5062-4b50-b5c3-14bcf3742218\") " pod="openstack/ceilometer-0" Nov 22 12:22:29 crc kubenswrapper[4772]: I1122 12:22:29.594795 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58ba43f9-5062-4b50-b5c3-14bcf3742218-run-httpd\") pod \"ceilometer-0\" (UID: \"58ba43f9-5062-4b50-b5c3-14bcf3742218\") " pod="openstack/ceilometer-0" Nov 22 12:22:29 crc kubenswrapper[4772]: I1122 12:22:29.595035 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxb9c\" (UniqueName: \"kubernetes.io/projected/58ba43f9-5062-4b50-b5c3-14bcf3742218-kube-api-access-dxb9c\") pod \"ceilometer-0\" (UID: \"58ba43f9-5062-4b50-b5c3-14bcf3742218\") " pod="openstack/ceilometer-0" Nov 22 12:22:29 crc kubenswrapper[4772]: I1122 12:22:29.595307 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58ba43f9-5062-4b50-b5c3-14bcf3742218-log-httpd\") pod \"ceilometer-0\" (UID: \"58ba43f9-5062-4b50-b5c3-14bcf3742218\") " pod="openstack/ceilometer-0" Nov 22 12:22:29 crc kubenswrapper[4772]: I1122 12:22:29.595449 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58ba43f9-5062-4b50-b5c3-14bcf3742218-scripts\") pod \"ceilometer-0\" (UID: \"58ba43f9-5062-4b50-b5c3-14bcf3742218\") " pod="openstack/ceilometer-0" Nov 22 12:22:29 crc kubenswrapper[4772]: I1122 12:22:29.699256 4772 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-dxb9c\" (UniqueName: \"kubernetes.io/projected/58ba43f9-5062-4b50-b5c3-14bcf3742218-kube-api-access-dxb9c\") pod \"ceilometer-0\" (UID: \"58ba43f9-5062-4b50-b5c3-14bcf3742218\") " pod="openstack/ceilometer-0" Nov 22 12:22:29 crc kubenswrapper[4772]: I1122 12:22:29.699338 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58ba43f9-5062-4b50-b5c3-14bcf3742218-log-httpd\") pod \"ceilometer-0\" (UID: \"58ba43f9-5062-4b50-b5c3-14bcf3742218\") " pod="openstack/ceilometer-0" Nov 22 12:22:29 crc kubenswrapper[4772]: I1122 12:22:29.699394 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58ba43f9-5062-4b50-b5c3-14bcf3742218-scripts\") pod \"ceilometer-0\" (UID: \"58ba43f9-5062-4b50-b5c3-14bcf3742218\") " pod="openstack/ceilometer-0" Nov 22 12:22:29 crc kubenswrapper[4772]: I1122 12:22:29.699539 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/58ba43f9-5062-4b50-b5c3-14bcf3742218-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"58ba43f9-5062-4b50-b5c3-14bcf3742218\") " pod="openstack/ceilometer-0" Nov 22 12:22:29 crc kubenswrapper[4772]: I1122 12:22:29.699595 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58ba43f9-5062-4b50-b5c3-14bcf3742218-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"58ba43f9-5062-4b50-b5c3-14bcf3742218\") " pod="openstack/ceilometer-0" Nov 22 12:22:29 crc kubenswrapper[4772]: I1122 12:22:29.699690 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58ba43f9-5062-4b50-b5c3-14bcf3742218-config-data\") pod \"ceilometer-0\" (UID: \"58ba43f9-5062-4b50-b5c3-14bcf3742218\") " pod="openstack/ceilometer-0" Nov 22 12:22:29 crc kubenswrapper[4772]: I1122 12:22:29.700010 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58ba43f9-5062-4b50-b5c3-14bcf3742218-run-httpd\") pod \"ceilometer-0\" (UID: \"58ba43f9-5062-4b50-b5c3-14bcf3742218\") " pod="openstack/ceilometer-0" Nov 22 12:22:29 crc kubenswrapper[4772]: I1122 12:22:29.700776 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58ba43f9-5062-4b50-b5c3-14bcf3742218-run-httpd\") pod \"ceilometer-0\" (UID: \"58ba43f9-5062-4b50-b5c3-14bcf3742218\") " pod="openstack/ceilometer-0" Nov 22 12:22:29 crc kubenswrapper[4772]: I1122 12:22:29.701635 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58ba43f9-5062-4b50-b5c3-14bcf3742218-log-httpd\") pod \"ceilometer-0\" (UID: \"58ba43f9-5062-4b50-b5c3-14bcf3742218\") " pod="openstack/ceilometer-0" Nov 22 12:22:29 crc kubenswrapper[4772]: I1122 12:22:29.711074 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/58ba43f9-5062-4b50-b5c3-14bcf3742218-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"58ba43f9-5062-4b50-b5c3-14bcf3742218\") " pod="openstack/ceilometer-0" Nov 22 12:22:29 crc kubenswrapper[4772]: I1122 12:22:29.725740 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/58ba43f9-5062-4b50-b5c3-14bcf3742218-scripts\") pod \"ceilometer-0\" (UID: \"58ba43f9-5062-4b50-b5c3-14bcf3742218\") " pod="openstack/ceilometer-0" Nov 22 12:22:29 crc kubenswrapper[4772]: I1122 12:22:29.727590 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58ba43f9-5062-4b50-b5c3-14bcf3742218-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"58ba43f9-5062-4b50-b5c3-14bcf3742218\") " pod="openstack/ceilometer-0" Nov 22 12:22:29 crc kubenswrapper[4772]: I1122 12:22:29.728271 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58ba43f9-5062-4b50-b5c3-14bcf3742218-config-data\") pod \"ceilometer-0\" (UID: \"58ba43f9-5062-4b50-b5c3-14bcf3742218\") " pod="openstack/ceilometer-0" Nov 22 12:22:29 crc kubenswrapper[4772]: I1122 12:22:29.733203 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxb9c\" (UniqueName: \"kubernetes.io/projected/58ba43f9-5062-4b50-b5c3-14bcf3742218-kube-api-access-dxb9c\") pod \"ceilometer-0\" (UID: \"58ba43f9-5062-4b50-b5c3-14bcf3742218\") " pod="openstack/ceilometer-0" Nov 22 12:22:29 crc kubenswrapper[4772]: I1122 12:22:29.783920 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 12:22:30 crc kubenswrapper[4772]: I1122 12:22:30.291459 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 12:22:30 crc kubenswrapper[4772]: W1122 12:22:30.292370 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod58ba43f9_5062_4b50_b5c3_14bcf3742218.slice/crio-66a9bc074b8c3a35d500162ea20aa2da891e233f351cb66eaa46bfddb86020e6 WatchSource:0}: Error finding container 66a9bc074b8c3a35d500162ea20aa2da891e233f351cb66eaa46bfddb86020e6: Status 404 returned error can't find the container with id 66a9bc074b8c3a35d500162ea20aa2da891e233f351cb66eaa46bfddb86020e6 Nov 22 12:22:30 crc kubenswrapper[4772]: I1122 12:22:30.858146 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58ba43f9-5062-4b50-b5c3-14bcf3742218","Type":"ContainerStarted","Data":"66a9bc074b8c3a35d500162ea20aa2da891e233f351cb66eaa46bfddb86020e6"} Nov 22 12:22:31 crc kubenswrapper[4772]: I1122 12:22:31.533122 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 12:22:31 crc kubenswrapper[4772]: I1122 12:22:31.533694 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 12:22:31 crc kubenswrapper[4772]: I1122 12:22:31.875085 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58ba43f9-5062-4b50-b5c3-14bcf3742218","Type":"ContainerStarted","Data":"0f25e1836fd456df8a691fe701520ed962d92b5599a2cc5ae2e3407550e7707a"} Nov 22 12:22:31 crc kubenswrapper[4772]: I1122 12:22:31.875676 4772 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58ba43f9-5062-4b50-b5c3-14bcf3742218","Type":"ContainerStarted","Data":"aeae5b4ff8f79df9bbaad86debef6b2b05a9ab900d8fa3091645fdcdf58bed8f"} Nov 22 12:22:32 crc kubenswrapper[4772]: I1122 12:22:32.894870 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58ba43f9-5062-4b50-b5c3-14bcf3742218","Type":"ContainerStarted","Data":"22ad61f673cf191a0445b79fa4372173bea536cebf50c2c42aa258045bb2ce14"} Nov 22 12:22:33 crc kubenswrapper[4772]: I1122 12:22:33.909551 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58ba43f9-5062-4b50-b5c3-14bcf3742218","Type":"ContainerStarted","Data":"17f752857092ce6dd4c5992e178cdf58b1e75b4ef4d64f3b0ef90a1c4246341f"} Nov 22 12:22:33 crc kubenswrapper[4772]: I1122 12:22:33.910438 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 22 12:22:33 crc kubenswrapper[4772]: I1122 12:22:33.950244 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.687478064 podStartE2EDuration="4.950213469s" podCreationTimestamp="2025-11-22 12:22:29 +0000 UTC" firstStartedPulling="2025-11-22 12:22:30.296106919 +0000 UTC m=+6270.535551413" lastFinishedPulling="2025-11-22 12:22:33.558842324 +0000 UTC m=+6273.798286818" observedRunningTime="2025-11-22 12:22:33.937563744 +0000 UTC m=+6274.177008248" watchObservedRunningTime="2025-11-22 12:22:33.950213469 +0000 UTC m=+6274.189657973" Nov 22 12:22:35 crc kubenswrapper[4772]: I1122 12:22:35.828568 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Nov 22 12:22:35 crc kubenswrapper[4772]: I1122 12:22:35.831891 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Nov 22 12:22:35 crc kubenswrapper[4772]: I1122 12:22:35.930783 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Nov 22 12:22:41 crc kubenswrapper[4772]: I1122 12:22:41.105383 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-52rg9"] Nov 22 12:22:41 crc kubenswrapper[4772]: I1122 12:22:41.110774 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-52rg9" Nov 22 12:22:41 crc kubenswrapper[4772]: I1122 12:22:41.127006 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-52rg9"] Nov 22 12:22:41 crc kubenswrapper[4772]: I1122 12:22:41.212665 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ztgp\" (UniqueName: \"kubernetes.io/projected/9109a9d1-b9a6-45a3-a515-21940fedf6b4-kube-api-access-9ztgp\") pod \"aodh-db-create-52rg9\" (UID: \"9109a9d1-b9a6-45a3-a515-21940fedf6b4\") " pod="openstack/aodh-db-create-52rg9" Nov 22 12:22:41 crc kubenswrapper[4772]: I1122 12:22:41.315555 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ztgp\" (UniqueName: \"kubernetes.io/projected/9109a9d1-b9a6-45a3-a515-21940fedf6b4-kube-api-access-9ztgp\") pod \"aodh-db-create-52rg9\" (UID: \"9109a9d1-b9a6-45a3-a515-21940fedf6b4\") " pod="openstack/aodh-db-create-52rg9" Nov 22 12:22:41 crc kubenswrapper[4772]: I1122 12:22:41.347194 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ztgp\" (UniqueName: \"kubernetes.io/projected/9109a9d1-b9a6-45a3-a515-21940fedf6b4-kube-api-access-9ztgp\") pod \"aodh-db-create-52rg9\" (UID: \"9109a9d1-b9a6-45a3-a515-21940fedf6b4\") " pod="openstack/aodh-db-create-52rg9" Nov 22 12:22:41 crc kubenswrapper[4772]: I1122 12:22:41.455810 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-52rg9" Nov 22 12:22:42 crc kubenswrapper[4772]: I1122 12:22:42.018402 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-52rg9"] Nov 22 12:22:43 crc kubenswrapper[4772]: I1122 12:22:43.008584 4772 generic.go:334] "Generic (PLEG): container finished" podID="9109a9d1-b9a6-45a3-a515-21940fedf6b4" containerID="133804c48f0a5a5dd9af802d337c2895f94fb39efc74b734ecd32426494a5bc5" exitCode=0 Nov 22 12:22:43 crc kubenswrapper[4772]: I1122 12:22:43.008704 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-52rg9" event={"ID":"9109a9d1-b9a6-45a3-a515-21940fedf6b4","Type":"ContainerDied","Data":"133804c48f0a5a5dd9af802d337c2895f94fb39efc74b734ecd32426494a5bc5"} Nov 22 12:22:43 crc kubenswrapper[4772]: I1122 12:22:43.009350 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-52rg9" event={"ID":"9109a9d1-b9a6-45a3-a515-21940fedf6b4","Type":"ContainerStarted","Data":"849fd8e6bbd55189604b8dc311d0d3d64a89358b0dc926b7c062bb5148d03c1e"} Nov 22 12:22:44 crc kubenswrapper[4772]: I1122 12:22:44.444064 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-52rg9" Nov 22 12:22:44 crc kubenswrapper[4772]: I1122 12:22:44.504810 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ztgp\" (UniqueName: \"kubernetes.io/projected/9109a9d1-b9a6-45a3-a515-21940fedf6b4-kube-api-access-9ztgp\") pod \"9109a9d1-b9a6-45a3-a515-21940fedf6b4\" (UID: \"9109a9d1-b9a6-45a3-a515-21940fedf6b4\") " Nov 22 12:22:44 crc kubenswrapper[4772]: I1122 12:22:44.512409 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9109a9d1-b9a6-45a3-a515-21940fedf6b4-kube-api-access-9ztgp" (OuterVolumeSpecName: "kube-api-access-9ztgp") pod "9109a9d1-b9a6-45a3-a515-21940fedf6b4" (UID: "9109a9d1-b9a6-45a3-a515-21940fedf6b4"). InnerVolumeSpecName "kube-api-access-9ztgp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:22:44 crc kubenswrapper[4772]: I1122 12:22:44.607827 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9ztgp\" (UniqueName: \"kubernetes.io/projected/9109a9d1-b9a6-45a3-a515-21940fedf6b4-kube-api-access-9ztgp\") on node \"crc\" DevicePath \"\"" Nov 22 12:22:45 crc kubenswrapper[4772]: I1122 12:22:45.041858 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-52rg9" event={"ID":"9109a9d1-b9a6-45a3-a515-21940fedf6b4","Type":"ContainerDied","Data":"849fd8e6bbd55189604b8dc311d0d3d64a89358b0dc926b7c062bb5148d03c1e"} Nov 22 12:22:45 crc kubenswrapper[4772]: I1122 12:22:45.041920 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="849fd8e6bbd55189604b8dc311d0d3d64a89358b0dc926b7c062bb5148d03c1e" Nov 22 12:22:45 crc kubenswrapper[4772]: I1122 12:22:45.041945 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-52rg9" Nov 22 12:22:51 crc kubenswrapper[4772]: I1122 12:22:51.050688 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-dvsv9"] Nov 22 12:22:51 crc kubenswrapper[4772]: I1122 12:22:51.061784 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-dvsv9"] Nov 22 12:22:51 crc kubenswrapper[4772]: I1122 12:22:51.286141 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-6efd-account-create-54tm2"] Nov 22 12:22:51 crc kubenswrapper[4772]: E1122 12:22:51.286839 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9109a9d1-b9a6-45a3-a515-21940fedf6b4" containerName="mariadb-database-create" Nov 22 12:22:51 crc kubenswrapper[4772]: I1122 12:22:51.286869 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="9109a9d1-b9a6-45a3-a515-21940fedf6b4" containerName="mariadb-database-create" Nov 22 12:22:51 crc kubenswrapper[4772]: I1122 12:22:51.287261 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="9109a9d1-b9a6-45a3-a515-21940fedf6b4" containerName="mariadb-database-create" Nov 22 12:22:51 crc kubenswrapper[4772]: I1122 12:22:51.289385 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-6efd-account-create-54tm2" Nov 22 12:22:51 crc kubenswrapper[4772]: I1122 12:22:51.292325 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Nov 22 12:22:51 crc kubenswrapper[4772]: I1122 12:22:51.298945 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-6efd-account-create-54tm2"] Nov 22 12:22:51 crc kubenswrapper[4772]: I1122 12:22:51.442659 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7afba77-d20a-40cd-8530-37d6aea1e247" path="/var/lib/kubelet/pods/e7afba77-d20a-40cd-8530-37d6aea1e247/volumes" Nov 22 12:22:51 crc kubenswrapper[4772]: I1122 12:22:51.461118 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nx8wb\" (UniqueName: \"kubernetes.io/projected/a92c433c-6bc6-4dbe-91a0-69a74202f6ab-kube-api-access-nx8wb\") pod \"aodh-6efd-account-create-54tm2\" (UID: \"a92c433c-6bc6-4dbe-91a0-69a74202f6ab\") " pod="openstack/aodh-6efd-account-create-54tm2" Nov 22 12:22:51 crc kubenswrapper[4772]: I1122 12:22:51.566035 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nx8wb\" (UniqueName: \"kubernetes.io/projected/a92c433c-6bc6-4dbe-91a0-69a74202f6ab-kube-api-access-nx8wb\") pod \"aodh-6efd-account-create-54tm2\" (UID: \"a92c433c-6bc6-4dbe-91a0-69a74202f6ab\") " pod="openstack/aodh-6efd-account-create-54tm2" Nov 22 12:22:51 crc kubenswrapper[4772]: I1122 12:22:51.592501 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nx8wb\" (UniqueName: \"kubernetes.io/projected/a92c433c-6bc6-4dbe-91a0-69a74202f6ab-kube-api-access-nx8wb\") pod \"aodh-6efd-account-create-54tm2\" (UID: \"a92c433c-6bc6-4dbe-91a0-69a74202f6ab\") " pod="openstack/aodh-6efd-account-create-54tm2" Nov 22 12:22:51 crc kubenswrapper[4772]: I1122 12:22:51.631938 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-6efd-account-create-54tm2" Nov 22 12:22:52 crc kubenswrapper[4772]: I1122 12:22:52.218505 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-6efd-account-create-54tm2"] Nov 22 12:22:53 crc kubenswrapper[4772]: I1122 12:22:53.151558 4772 generic.go:334] "Generic (PLEG): container finished" podID="a92c433c-6bc6-4dbe-91a0-69a74202f6ab" containerID="005d2caae5d2920983d56de9a0b7dc5e5e868a1e28342200bef559eeaaccc522" exitCode=0 Nov 22 12:22:53 crc kubenswrapper[4772]: I1122 12:22:53.151609 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-6efd-account-create-54tm2" event={"ID":"a92c433c-6bc6-4dbe-91a0-69a74202f6ab","Type":"ContainerDied","Data":"005d2caae5d2920983d56de9a0b7dc5e5e868a1e28342200bef559eeaaccc522"} Nov 22 12:22:53 crc kubenswrapper[4772]: I1122 12:22:53.151637 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-6efd-account-create-54tm2" event={"ID":"a92c433c-6bc6-4dbe-91a0-69a74202f6ab","Type":"ContainerStarted","Data":"7125c3da5e60694b2c457f10855c97eea1ee2207fc6415ec876c2536a5e7f0f9"} Nov 22 12:22:54 crc kubenswrapper[4772]: I1122 12:22:54.657918 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-6efd-account-create-54tm2" Nov 22 12:22:54 crc kubenswrapper[4772]: I1122 12:22:54.747485 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nx8wb\" (UniqueName: \"kubernetes.io/projected/a92c433c-6bc6-4dbe-91a0-69a74202f6ab-kube-api-access-nx8wb\") pod \"a92c433c-6bc6-4dbe-91a0-69a74202f6ab\" (UID: \"a92c433c-6bc6-4dbe-91a0-69a74202f6ab\") " Nov 22 12:22:54 crc kubenswrapper[4772]: I1122 12:22:54.762958 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a92c433c-6bc6-4dbe-91a0-69a74202f6ab-kube-api-access-nx8wb" (OuterVolumeSpecName: "kube-api-access-nx8wb") pod "a92c433c-6bc6-4dbe-91a0-69a74202f6ab" (UID: "a92c433c-6bc6-4dbe-91a0-69a74202f6ab"). InnerVolumeSpecName "kube-api-access-nx8wb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:22:54 crc kubenswrapper[4772]: I1122 12:22:54.852183 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nx8wb\" (UniqueName: \"kubernetes.io/projected/a92c433c-6bc6-4dbe-91a0-69a74202f6ab-kube-api-access-nx8wb\") on node \"crc\" DevicePath \"\"" Nov 22 12:22:55 crc kubenswrapper[4772]: I1122 12:22:55.176180 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-6efd-account-create-54tm2" event={"ID":"a92c433c-6bc6-4dbe-91a0-69a74202f6ab","Type":"ContainerDied","Data":"7125c3da5e60694b2c457f10855c97eea1ee2207fc6415ec876c2536a5e7f0f9"} Nov 22 12:22:55 crc kubenswrapper[4772]: I1122 12:22:55.176218 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-6efd-account-create-54tm2" Nov 22 12:22:55 crc kubenswrapper[4772]: I1122 12:22:55.176228 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7125c3da5e60694b2c457f10855c97eea1ee2207fc6415ec876c2536a5e7f0f9" Nov 22 12:22:56 crc kubenswrapper[4772]: I1122 12:22:56.628076 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-7q8jn"] Nov 22 12:22:56 crc kubenswrapper[4772]: E1122 12:22:56.629334 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a92c433c-6bc6-4dbe-91a0-69a74202f6ab" containerName="mariadb-account-create" Nov 22 12:22:56 crc kubenswrapper[4772]: I1122 12:22:56.629358 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="a92c433c-6bc6-4dbe-91a0-69a74202f6ab" containerName="mariadb-account-create" Nov 22 12:22:56 crc kubenswrapper[4772]: I1122 12:22:56.629773 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="a92c433c-6bc6-4dbe-91a0-69a74202f6ab" containerName="mariadb-account-create" Nov 22 12:22:56 crc kubenswrapper[4772]: I1122 12:22:56.630957 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-7q8jn" Nov 22 12:22:56 crc kubenswrapper[4772]: I1122 12:22:56.634547 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Nov 22 12:22:56 crc kubenswrapper[4772]: I1122 12:22:56.634703 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Nov 22 12:22:56 crc kubenswrapper[4772]: I1122 12:22:56.634787 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-w48dp" Nov 22 12:22:56 crc kubenswrapper[4772]: I1122 12:22:56.648010 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-7q8jn"] Nov 22 12:22:56 crc kubenswrapper[4772]: I1122 12:22:56.803931 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzplj\" (UniqueName: \"kubernetes.io/projected/f91531fa-27a1-4cf1-8ad5-1ebeec0dbd65-kube-api-access-tzplj\") pod \"aodh-db-sync-7q8jn\" (UID: \"f91531fa-27a1-4cf1-8ad5-1ebeec0dbd65\") " pod="openstack/aodh-db-sync-7q8jn" Nov 22 12:22:56 crc kubenswrapper[4772]: I1122 12:22:56.804018 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f91531fa-27a1-4cf1-8ad5-1ebeec0dbd65-combined-ca-bundle\") pod \"aodh-db-sync-7q8jn\" (UID: \"f91531fa-27a1-4cf1-8ad5-1ebeec0dbd65\") " pod="openstack/aodh-db-sync-7q8jn" Nov 22 12:22:56 crc kubenswrapper[4772]: I1122 12:22:56.804469 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f91531fa-27a1-4cf1-8ad5-1ebeec0dbd65-scripts\") pod \"aodh-db-sync-7q8jn\" (UID: \"f91531fa-27a1-4cf1-8ad5-1ebeec0dbd65\") " pod="openstack/aodh-db-sync-7q8jn" Nov 22 12:22:56 crc kubenswrapper[4772]: I1122 12:22:56.804663 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f91531fa-27a1-4cf1-8ad5-1ebeec0dbd65-config-data\") pod \"aodh-db-sync-7q8jn\" (UID: \"f91531fa-27a1-4cf1-8ad5-1ebeec0dbd65\") " pod="openstack/aodh-db-sync-7q8jn" Nov 22 12:22:56 crc kubenswrapper[4772]: I1122 12:22:56.907494 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f91531fa-27a1-4cf1-8ad5-1ebeec0dbd65-config-data\") pod \"aodh-db-sync-7q8jn\" (UID: \"f91531fa-27a1-4cf1-8ad5-1ebeec0dbd65\") " pod="openstack/aodh-db-sync-7q8jn" Nov 22 12:22:56 crc kubenswrapper[4772]: I1122 12:22:56.907612 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzplj\" (UniqueName: \"kubernetes.io/projected/f91531fa-27a1-4cf1-8ad5-1ebeec0dbd65-kube-api-access-tzplj\") pod \"aodh-db-sync-7q8jn\" (UID: \"f91531fa-27a1-4cf1-8ad5-1ebeec0dbd65\") " pod="openstack/aodh-db-sync-7q8jn" Nov 22 12:22:56 crc kubenswrapper[4772]: I1122 12:22:56.907652 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f91531fa-27a1-4cf1-8ad5-1ebeec0dbd65-combined-ca-bundle\") pod \"aodh-db-sync-7q8jn\" (UID: \"f91531fa-27a1-4cf1-8ad5-1ebeec0dbd65\") " pod="openstack/aodh-db-sync-7q8jn" Nov 22 12:22:56 crc kubenswrapper[4772]: I1122 12:22:56.907746 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/f91531fa-27a1-4cf1-8ad5-1ebeec0dbd65-scripts\") pod \"aodh-db-sync-7q8jn\" (UID: \"f91531fa-27a1-4cf1-8ad5-1ebeec0dbd65\") " pod="openstack/aodh-db-sync-7q8jn" Nov 22 12:22:56 crc kubenswrapper[4772]: I1122 12:22:56.923064 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f91531fa-27a1-4cf1-8ad5-1ebeec0dbd65-combined-ca-bundle\") pod \"aodh-db-sync-7q8jn\" (UID: \"f91531fa-27a1-4cf1-8ad5-1ebeec0dbd65\") " pod="openstack/aodh-db-sync-7q8jn" Nov 22 12:22:56 crc kubenswrapper[4772]: I1122 12:22:56.923038 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f91531fa-27a1-4cf1-8ad5-1ebeec0dbd65-scripts\") pod \"aodh-db-sync-7q8jn\" (UID: \"f91531fa-27a1-4cf1-8ad5-1ebeec0dbd65\") " pod="openstack/aodh-db-sync-7q8jn" Nov 22 12:22:56 crc kubenswrapper[4772]: I1122 12:22:56.923607 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f91531fa-27a1-4cf1-8ad5-1ebeec0dbd65-config-data\") pod \"aodh-db-sync-7q8jn\" (UID: \"f91531fa-27a1-4cf1-8ad5-1ebeec0dbd65\") " pod="openstack/aodh-db-sync-7q8jn" Nov 22 12:22:56 crc kubenswrapper[4772]: I1122 12:22:56.934186 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzplj\" (UniqueName: \"kubernetes.io/projected/f91531fa-27a1-4cf1-8ad5-1ebeec0dbd65-kube-api-access-tzplj\") pod \"aodh-db-sync-7q8jn\" (UID: \"f91531fa-27a1-4cf1-8ad5-1ebeec0dbd65\") " pod="openstack/aodh-db-sync-7q8jn" Nov 22 12:22:56 crc kubenswrapper[4772]: I1122 12:22:56.957837 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-7q8jn" Nov 22 12:22:57 crc kubenswrapper[4772]: I1122 12:22:57.529393 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-7q8jn"] Nov 22 12:22:58 crc kubenswrapper[4772]: I1122 12:22:58.216614 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-7q8jn" event={"ID":"f91531fa-27a1-4cf1-8ad5-1ebeec0dbd65","Type":"ContainerStarted","Data":"845bff18cc21b9c6eed797617e50d211474b3482570e1c4086bb7ac51c888356"} Nov 22 12:22:59 crc kubenswrapper[4772]: I1122 12:22:59.926699 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 22 12:23:01 crc kubenswrapper[4772]: I1122 12:23:01.053953 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-4946-account-create-fk7nf"] Nov 22 12:23:01 crc kubenswrapper[4772]: I1122 12:23:01.064639 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-4946-account-create-fk7nf"] Nov 22 12:23:01 crc kubenswrapper[4772]: I1122 12:23:01.445850 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fb7733e-8af3-4d1b-a33c-a5a45ce5a3ca" path="/var/lib/kubelet/pods/2fb7733e-8af3-4d1b-a33c-a5a45ce5a3ca/volumes" Nov 22 12:23:01 crc kubenswrapper[4772]: I1122 12:23:01.533681 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 12:23:01 crc kubenswrapper[4772]: I1122 12:23:01.533778 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" 
podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 12:23:01 crc kubenswrapper[4772]: I1122 12:23:01.533845 4772 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" Nov 22 12:23:01 crc kubenswrapper[4772]: I1122 12:23:01.534809 4772 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4e00768099367f2f555dffd8edb69c3a4098741a37029cc7349a1e6bb1cbc5a1"} pod="openshift-machine-config-operator/machine-config-daemon-wwshd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 12:23:01 crc kubenswrapper[4772]: I1122 12:23:01.534929 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" containerID="cri-o://4e00768099367f2f555dffd8edb69c3a4098741a37029cc7349a1e6bb1cbc5a1" gracePeriod=600 Nov 22 12:23:02 crc kubenswrapper[4772]: I1122 12:23:02.284991 4772 generic.go:334] "Generic (PLEG): container finished" podID="2386c238-461f-4956-940f-ac3c26eb052e" containerID="4e00768099367f2f555dffd8edb69c3a4098741a37029cc7349a1e6bb1cbc5a1" exitCode=0 Nov 22 12:23:02 crc kubenswrapper[4772]: I1122 12:23:02.285090 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" event={"ID":"2386c238-461f-4956-940f-ac3c26eb052e","Type":"ContainerDied","Data":"4e00768099367f2f555dffd8edb69c3a4098741a37029cc7349a1e6bb1cbc5a1"} Nov 22 12:23:02 crc kubenswrapper[4772]: I1122 12:23:02.286314 4772 scope.go:117] "RemoveContainer" containerID="0acc780374533c91fb6e94ed6fa4eb88f1ad8ecfc715db40059fc9377d70e080" Nov 22 12:23:03 crc kubenswrapper[4772]: I1122 12:23:03.304169 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-7q8jn" event={"ID":"f91531fa-27a1-4cf1-8ad5-1ebeec0dbd65","Type":"ContainerStarted","Data":"ce10a8ed902fef7685fd59e725fb4f402a78a9b2cae51ce1984b40f1e402438a"} Nov 22 12:23:03 crc kubenswrapper[4772]: I1122 12:23:03.311444 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" event={"ID":"2386c238-461f-4956-940f-ac3c26eb052e","Type":"ContainerStarted","Data":"4061fbfbe12c0c8ed9afc045fe35fe5a4572628dd2f2f5a83b54d78da8972014"} Nov 22 12:23:03 crc kubenswrapper[4772]: I1122 12:23:03.355605 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-7q8jn" podStartSLOduration=2.93194293 podStartE2EDuration="7.355558379s" podCreationTimestamp="2025-11-22 12:22:56 +0000 UTC" firstStartedPulling="2025-11-22 12:22:57.548394632 +0000 UTC m=+6297.787839136" lastFinishedPulling="2025-11-22 12:23:01.972010051 +0000 UTC m=+6302.211454585" observedRunningTime="2025-11-22 12:23:03.337371546 +0000 UTC m=+6303.576816040" watchObservedRunningTime="2025-11-22 12:23:03.355558379 +0000 UTC m=+6303.595002913" Nov 22 12:23:05 crc kubenswrapper[4772]: I1122 12:23:05.341439 4772 generic.go:334] "Generic (PLEG): container finished" podID="f91531fa-27a1-4cf1-8ad5-1ebeec0dbd65" containerID="ce10a8ed902fef7685fd59e725fb4f402a78a9b2cae51ce1984b40f1e402438a" exitCode=0 Nov 22 12:23:05 crc 
kubenswrapper[4772]: I1122 12:23:05.341528 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-7q8jn" event={"ID":"f91531fa-27a1-4cf1-8ad5-1ebeec0dbd65","Type":"ContainerDied","Data":"ce10a8ed902fef7685fd59e725fb4f402a78a9b2cae51ce1984b40f1e402438a"} Nov 22 12:23:06 crc kubenswrapper[4772]: I1122 12:23:06.831339 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-7q8jn" Nov 22 12:23:06 crc kubenswrapper[4772]: I1122 12:23:06.886682 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f91531fa-27a1-4cf1-8ad5-1ebeec0dbd65-scripts\") pod \"f91531fa-27a1-4cf1-8ad5-1ebeec0dbd65\" (UID: \"f91531fa-27a1-4cf1-8ad5-1ebeec0dbd65\") " Nov 22 12:23:06 crc kubenswrapper[4772]: I1122 12:23:06.886851 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f91531fa-27a1-4cf1-8ad5-1ebeec0dbd65-combined-ca-bundle\") pod \"f91531fa-27a1-4cf1-8ad5-1ebeec0dbd65\" (UID: \"f91531fa-27a1-4cf1-8ad5-1ebeec0dbd65\") " Nov 22 12:23:06 crc kubenswrapper[4772]: I1122 12:23:06.886884 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f91531fa-27a1-4cf1-8ad5-1ebeec0dbd65-config-data\") pod \"f91531fa-27a1-4cf1-8ad5-1ebeec0dbd65\" (UID: \"f91531fa-27a1-4cf1-8ad5-1ebeec0dbd65\") " Nov 22 12:23:06 crc kubenswrapper[4772]: I1122 12:23:06.886932 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzplj\" (UniqueName: \"kubernetes.io/projected/f91531fa-27a1-4cf1-8ad5-1ebeec0dbd65-kube-api-access-tzplj\") pod \"f91531fa-27a1-4cf1-8ad5-1ebeec0dbd65\" (UID: \"f91531fa-27a1-4cf1-8ad5-1ebeec0dbd65\") " Nov 22 12:23:06 crc kubenswrapper[4772]: I1122 12:23:06.896942 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f91531fa-27a1-4cf1-8ad5-1ebeec0dbd65-scripts" (OuterVolumeSpecName: "scripts") pod "f91531fa-27a1-4cf1-8ad5-1ebeec0dbd65" (UID: "f91531fa-27a1-4cf1-8ad5-1ebeec0dbd65"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:23:06 crc kubenswrapper[4772]: I1122 12:23:06.897115 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f91531fa-27a1-4cf1-8ad5-1ebeec0dbd65-kube-api-access-tzplj" (OuterVolumeSpecName: "kube-api-access-tzplj") pod "f91531fa-27a1-4cf1-8ad5-1ebeec0dbd65" (UID: "f91531fa-27a1-4cf1-8ad5-1ebeec0dbd65"). InnerVolumeSpecName "kube-api-access-tzplj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:23:06 crc kubenswrapper[4772]: I1122 12:23:06.931771 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f91531fa-27a1-4cf1-8ad5-1ebeec0dbd65-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f91531fa-27a1-4cf1-8ad5-1ebeec0dbd65" (UID: "f91531fa-27a1-4cf1-8ad5-1ebeec0dbd65"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:23:06 crc kubenswrapper[4772]: I1122 12:23:06.945147 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f91531fa-27a1-4cf1-8ad5-1ebeec0dbd65-config-data" (OuterVolumeSpecName: "config-data") pod "f91531fa-27a1-4cf1-8ad5-1ebeec0dbd65" (UID: "f91531fa-27a1-4cf1-8ad5-1ebeec0dbd65"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:23:06 crc kubenswrapper[4772]: I1122 12:23:06.990709 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f91531fa-27a1-4cf1-8ad5-1ebeec0dbd65-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 12:23:06 crc kubenswrapper[4772]: I1122 12:23:06.990752 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f91531fa-27a1-4cf1-8ad5-1ebeec0dbd65-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 12:23:06 crc kubenswrapper[4772]: I1122 12:23:06.990765 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzplj\" (UniqueName: \"kubernetes.io/projected/f91531fa-27a1-4cf1-8ad5-1ebeec0dbd65-kube-api-access-tzplj\") on node \"crc\" DevicePath \"\"" Nov 22 12:23:06 crc kubenswrapper[4772]: I1122 12:23:06.990781 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f91531fa-27a1-4cf1-8ad5-1ebeec0dbd65-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 12:23:07 crc kubenswrapper[4772]: I1122 12:23:07.370471 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-7q8jn" event={"ID":"f91531fa-27a1-4cf1-8ad5-1ebeec0dbd65","Type":"ContainerDied","Data":"845bff18cc21b9c6eed797617e50d211474b3482570e1c4086bb7ac51c888356"} Nov 22 12:23:07 crc kubenswrapper[4772]: I1122 12:23:07.370522 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-7q8jn" Nov 22 12:23:07 crc kubenswrapper[4772]: I1122 12:23:07.370526 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="845bff18cc21b9c6eed797617e50d211474b3482570e1c4086bb7ac51c888356" Nov 22 12:23:09 crc kubenswrapper[4772]: I1122 12:23:09.040738 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-hwv2m"] Nov 22 12:23:09 crc kubenswrapper[4772]: I1122 12:23:09.050630 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-hwv2m"] Nov 22 12:23:09 crc kubenswrapper[4772]: I1122 12:23:09.428421 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7b03d07-82d5-41b8-852c-79757a1f76c2" path="/var/lib/kubelet/pods/f7b03d07-82d5-41b8-852c-79757a1f76c2/volumes" Nov 22 12:23:11 crc kubenswrapper[4772]: I1122 12:23:11.760318 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Nov 22 12:23:11 crc kubenswrapper[4772]: E1122 12:23:11.762150 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f91531fa-27a1-4cf1-8ad5-1ebeec0dbd65" containerName="aodh-db-sync" Nov 22 12:23:11 crc kubenswrapper[4772]: I1122 12:23:11.762205 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="f91531fa-27a1-4cf1-8ad5-1ebeec0dbd65" containerName="aodh-db-sync" Nov 22 12:23:11 crc kubenswrapper[4772]: I1122 12:23:11.762706 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="f91531fa-27a1-4cf1-8ad5-1ebeec0dbd65" containerName="aodh-db-sync" Nov 22 12:23:11 crc kubenswrapper[4772]: I1122 12:23:11.775590 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 22 12:23:11 crc kubenswrapper[4772]: I1122 12:23:11.780305 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Nov 22 12:23:11 crc kubenswrapper[4772]: I1122 12:23:11.780383 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-w48dp" Nov 22 12:23:11 crc kubenswrapper[4772]: I1122 12:23:11.780579 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Nov 22 12:23:11 crc kubenswrapper[4772]: I1122 12:23:11.790262 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 22 12:23:11 crc kubenswrapper[4772]: I1122 12:23:11.843757 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baab6010-9072-4e34-8a19-afd2bf22ebce-combined-ca-bundle\") pod \"aodh-0\" (UID: \"baab6010-9072-4e34-8a19-afd2bf22ebce\") " pod="openstack/aodh-0" Nov 22 12:23:11 crc kubenswrapper[4772]: I1122 12:23:11.843902 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baab6010-9072-4e34-8a19-afd2bf22ebce-config-data\") pod \"aodh-0\" (UID: \"baab6010-9072-4e34-8a19-afd2bf22ebce\") " pod="openstack/aodh-0" Nov 22 12:23:11 crc kubenswrapper[4772]: I1122 12:23:11.843943 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/baab6010-9072-4e34-8a19-afd2bf22ebce-scripts\") pod \"aodh-0\" (UID: \"baab6010-9072-4e34-8a19-afd2bf22ebce\") " pod="openstack/aodh-0" Nov 22 12:23:11 crc kubenswrapper[4772]: I1122 12:23:11.844257 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwwvs\" (UniqueName: \"kubernetes.io/projected/baab6010-9072-4e34-8a19-afd2bf22ebce-kube-api-access-nwwvs\") pod \"aodh-0\" (UID: \"baab6010-9072-4e34-8a19-afd2bf22ebce\") " pod="openstack/aodh-0" Nov 22 12:23:11 crc kubenswrapper[4772]: I1122 12:23:11.946572 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwwvs\" (UniqueName: \"kubernetes.io/projected/baab6010-9072-4e34-8a19-afd2bf22ebce-kube-api-access-nwwvs\") pod \"aodh-0\" (UID: \"baab6010-9072-4e34-8a19-afd2bf22ebce\") " pod="openstack/aodh-0" Nov 22 12:23:11 crc kubenswrapper[4772]: I1122 12:23:11.946647 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baab6010-9072-4e34-8a19-afd2bf22ebce-combined-ca-bundle\") pod \"aodh-0\" (UID: \"baab6010-9072-4e34-8a19-afd2bf22ebce\") " pod="openstack/aodh-0" Nov 22 12:23:11 crc kubenswrapper[4772]: I1122 12:23:11.946718 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baab6010-9072-4e34-8a19-afd2bf22ebce-config-data\") pod \"aodh-0\" (UID: \"baab6010-9072-4e34-8a19-afd2bf22ebce\") " pod="openstack/aodh-0" Nov 22 12:23:11 crc kubenswrapper[4772]: I1122 12:23:11.946747 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/baab6010-9072-4e34-8a19-afd2bf22ebce-scripts\") pod \"aodh-0\" (UID: \"baab6010-9072-4e34-8a19-afd2bf22ebce\") " pod="openstack/aodh-0" Nov 22 12:23:11 crc kubenswrapper[4772]: 
I1122 12:23:11.954938 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/baab6010-9072-4e34-8a19-afd2bf22ebce-scripts\") pod \"aodh-0\" (UID: \"baab6010-9072-4e34-8a19-afd2bf22ebce\") " pod="openstack/aodh-0" Nov 22 12:23:11 crc kubenswrapper[4772]: I1122 12:23:11.955600 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baab6010-9072-4e34-8a19-afd2bf22ebce-combined-ca-bundle\") pod \"aodh-0\" (UID: \"baab6010-9072-4e34-8a19-afd2bf22ebce\") " pod="openstack/aodh-0" Nov 22 12:23:11 crc kubenswrapper[4772]: I1122 12:23:11.956951 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baab6010-9072-4e34-8a19-afd2bf22ebce-config-data\") pod \"aodh-0\" (UID: \"baab6010-9072-4e34-8a19-afd2bf22ebce\") " pod="openstack/aodh-0" Nov 22 12:23:11 crc kubenswrapper[4772]: I1122 12:23:11.974270 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwwvs\" (UniqueName: \"kubernetes.io/projected/baab6010-9072-4e34-8a19-afd2bf22ebce-kube-api-access-nwwvs\") pod \"aodh-0\" (UID: \"baab6010-9072-4e34-8a19-afd2bf22ebce\") " pod="openstack/aodh-0" Nov 22 12:23:12 crc kubenswrapper[4772]: I1122 12:23:12.119491 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 22 12:23:12 crc kubenswrapper[4772]: I1122 12:23:12.706591 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 22 12:23:13 crc kubenswrapper[4772]: I1122 12:23:13.458005 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"baab6010-9072-4e34-8a19-afd2bf22ebce","Type":"ContainerStarted","Data":"cad37f7d455f49804c00e9d9c394d038f57316b646805d9867ff43087daacd64"} Nov 22 12:23:13 crc kubenswrapper[4772]: I1122 12:23:13.458569 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"baab6010-9072-4e34-8a19-afd2bf22ebce","Type":"ContainerStarted","Data":"4b00dff9b5a65b488efe230ab5bc6e09720546cc8bfdedb6d2e16130656daf0a"} Nov 22 12:23:13 crc kubenswrapper[4772]: I1122 12:23:13.818933 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 12:23:13 crc kubenswrapper[4772]: I1122 12:23:13.819672 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="58ba43f9-5062-4b50-b5c3-14bcf3742218" containerName="ceilometer-central-agent" containerID="cri-o://aeae5b4ff8f79df9bbaad86debef6b2b05a9ab900d8fa3091645fdcdf58bed8f" gracePeriod=30 Nov 22 12:23:13 crc kubenswrapper[4772]: I1122 12:23:13.819801 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="58ba43f9-5062-4b50-b5c3-14bcf3742218" containerName="proxy-httpd" containerID="cri-o://17f752857092ce6dd4c5992e178cdf58b1e75b4ef4d64f3b0ef90a1c4246341f" gracePeriod=30 Nov 22 12:23:13 crc kubenswrapper[4772]: I1122 12:23:13.819853 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="58ba43f9-5062-4b50-b5c3-14bcf3742218" containerName="sg-core" containerID="cri-o://22ad61f673cf191a0445b79fa4372173bea536cebf50c2c42aa258045bb2ce14" gracePeriod=30 Nov 22 12:23:13 crc kubenswrapper[4772]: I1122 12:23:13.819999 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="58ba43f9-5062-4b50-b5c3-14bcf3742218" containerName="ceilometer-notification-agent" containerID="cri-o://0f25e1836fd456df8a691fe701520ed962d92b5599a2cc5ae2e3407550e7707a" gracePeriod=30 Nov 22 12:23:14 crc kubenswrapper[4772]: I1122 12:23:14.491785 4772 generic.go:334] "Generic (PLEG): container finished" podID="58ba43f9-5062-4b50-b5c3-14bcf3742218" containerID="17f752857092ce6dd4c5992e178cdf58b1e75b4ef4d64f3b0ef90a1c4246341f" exitCode=0 Nov 22 12:23:14 crc kubenswrapper[4772]: I1122 12:23:14.492279 4772 generic.go:334] "Generic (PLEG): container finished" podID="58ba43f9-5062-4b50-b5c3-14bcf3742218" containerID="22ad61f673cf191a0445b79fa4372173bea536cebf50c2c42aa258045bb2ce14" exitCode=2 Nov 22 12:23:14 crc kubenswrapper[4772]: I1122 12:23:14.492288 4772 generic.go:334] "Generic (PLEG): container finished" podID="58ba43f9-5062-4b50-b5c3-14bcf3742218" containerID="aeae5b4ff8f79df9bbaad86debef6b2b05a9ab900d8fa3091645fdcdf58bed8f" exitCode=0 Nov 22 12:23:14 crc kubenswrapper[4772]: I1122 12:23:14.492311 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58ba43f9-5062-4b50-b5c3-14bcf3742218","Type":"ContainerDied","Data":"17f752857092ce6dd4c5992e178cdf58b1e75b4ef4d64f3b0ef90a1c4246341f"} Nov 22 12:23:14 crc kubenswrapper[4772]: I1122 12:23:14.492338 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58ba43f9-5062-4b50-b5c3-14bcf3742218","Type":"ContainerDied","Data":"22ad61f673cf191a0445b79fa4372173bea536cebf50c2c42aa258045bb2ce14"} Nov 22 12:23:14 crc kubenswrapper[4772]: I1122 12:23:14.492350 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58ba43f9-5062-4b50-b5c3-14bcf3742218","Type":"ContainerDied","Data":"aeae5b4ff8f79df9bbaad86debef6b2b05a9ab900d8fa3091645fdcdf58bed8f"} Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.162129 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.242147 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58ba43f9-5062-4b50-b5c3-14bcf3742218-scripts\") pod \"58ba43f9-5062-4b50-b5c3-14bcf3742218\" (UID: \"58ba43f9-5062-4b50-b5c3-14bcf3742218\") " Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.242293 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58ba43f9-5062-4b50-b5c3-14bcf3742218-config-data\") pod \"58ba43f9-5062-4b50-b5c3-14bcf3742218\" (UID: \"58ba43f9-5062-4b50-b5c3-14bcf3742218\") " Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.242486 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/58ba43f9-5062-4b50-b5c3-14bcf3742218-sg-core-conf-yaml\") pod \"58ba43f9-5062-4b50-b5c3-14bcf3742218\" (UID: \"58ba43f9-5062-4b50-b5c3-14bcf3742218\") " Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.242531 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58ba43f9-5062-4b50-b5c3-14bcf3742218-run-httpd\") pod \"58ba43f9-5062-4b50-b5c3-14bcf3742218\" (UID: \"58ba43f9-5062-4b50-b5c3-14bcf3742218\") " Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.242610 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxb9c\" (UniqueName: \"kubernetes.io/projected/58ba43f9-5062-4b50-b5c3-14bcf3742218-kube-api-access-dxb9c\") pod \"58ba43f9-5062-4b50-b5c3-14bcf3742218\" (UID: \"58ba43f9-5062-4b50-b5c3-14bcf3742218\") " Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.242703 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58ba43f9-5062-4b50-b5c3-14bcf3742218-combined-ca-bundle\") pod \"58ba43f9-5062-4b50-b5c3-14bcf3742218\" (UID: \"58ba43f9-5062-4b50-b5c3-14bcf3742218\") " Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.242812 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58ba43f9-5062-4b50-b5c3-14bcf3742218-log-httpd\") pod \"58ba43f9-5062-4b50-b5c3-14bcf3742218\" (UID: \"58ba43f9-5062-4b50-b5c3-14bcf3742218\") " Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.244864 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58ba43f9-5062-4b50-b5c3-14bcf3742218-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "58ba43f9-5062-4b50-b5c3-14bcf3742218" (UID: "58ba43f9-5062-4b50-b5c3-14bcf3742218"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.251175 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58ba43f9-5062-4b50-b5c3-14bcf3742218-scripts" (OuterVolumeSpecName: "scripts") pod "58ba43f9-5062-4b50-b5c3-14bcf3742218" (UID: "58ba43f9-5062-4b50-b5c3-14bcf3742218"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.252419 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58ba43f9-5062-4b50-b5c3-14bcf3742218-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "58ba43f9-5062-4b50-b5c3-14bcf3742218" (UID: "58ba43f9-5062-4b50-b5c3-14bcf3742218"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.267982 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58ba43f9-5062-4b50-b5c3-14bcf3742218-kube-api-access-dxb9c" (OuterVolumeSpecName: "kube-api-access-dxb9c") pod "58ba43f9-5062-4b50-b5c3-14bcf3742218" (UID: "58ba43f9-5062-4b50-b5c3-14bcf3742218"). InnerVolumeSpecName "kube-api-access-dxb9c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.319789 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58ba43f9-5062-4b50-b5c3-14bcf3742218-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "58ba43f9-5062-4b50-b5c3-14bcf3742218" (UID: "58ba43f9-5062-4b50-b5c3-14bcf3742218"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.345640 4772 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/58ba43f9-5062-4b50-b5c3-14bcf3742218-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.346034 4772 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58ba43f9-5062-4b50-b5c3-14bcf3742218-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.346079 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxb9c\" (UniqueName: \"kubernetes.io/projected/58ba43f9-5062-4b50-b5c3-14bcf3742218-kube-api-access-dxb9c\") on node \"crc\" DevicePath \"\"" Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.346093 4772 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58ba43f9-5062-4b50-b5c3-14bcf3742218-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.346101 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58ba43f9-5062-4b50-b5c3-14bcf3742218-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.347564 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58ba43f9-5062-4b50-b5c3-14bcf3742218-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "58ba43f9-5062-4b50-b5c3-14bcf3742218" (UID: "58ba43f9-5062-4b50-b5c3-14bcf3742218"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.371191 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58ba43f9-5062-4b50-b5c3-14bcf3742218-config-data" (OuterVolumeSpecName: "config-data") pod "58ba43f9-5062-4b50-b5c3-14bcf3742218" (UID: "58ba43f9-5062-4b50-b5c3-14bcf3742218"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.447992 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58ba43f9-5062-4b50-b5c3-14bcf3742218-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.448034 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58ba43f9-5062-4b50-b5c3-14bcf3742218-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.506281 4772 scope.go:117] "RemoveContainer" containerID="12c7e2698d79fbef5b57f5a9d40db2c7289cf3c1463271b0a5d1bb124b68a498" Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.508999 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"baab6010-9072-4e34-8a19-afd2bf22ebce","Type":"ContainerStarted","Data":"44ec7e4fcacd5d45a2d1710b88693b0f905e4dc9a319d30b84a27cdbb2ce197b"} Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.513348 4772 generic.go:334] "Generic (PLEG): container finished" podID="58ba43f9-5062-4b50-b5c3-14bcf3742218" containerID="0f25e1836fd456df8a691fe701520ed962d92b5599a2cc5ae2e3407550e7707a" exitCode=0 Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.513391 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58ba43f9-5062-4b50-b5c3-14bcf3742218","Type":"ContainerDied","Data":"0f25e1836fd456df8a691fe701520ed962d92b5599a2cc5ae2e3407550e7707a"} Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.513415 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58ba43f9-5062-4b50-b5c3-14bcf3742218","Type":"ContainerDied","Data":"66a9bc074b8c3a35d500162ea20aa2da891e233f351cb66eaa46bfddb86020e6"} Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.513433 4772 scope.go:117] "RemoveContainer" containerID="17f752857092ce6dd4c5992e178cdf58b1e75b4ef4d64f3b0ef90a1c4246341f" Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.513628 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.543651 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.555423 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.577415 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 22 12:23:15 crc kubenswrapper[4772]: E1122 12:23:15.578043 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58ba43f9-5062-4b50-b5c3-14bcf3742218" containerName="sg-core" Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.578089 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="58ba43f9-5062-4b50-b5c3-14bcf3742218" containerName="sg-core" Nov 22 12:23:15 crc kubenswrapper[4772]: E1122 12:23:15.578103 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58ba43f9-5062-4b50-b5c3-14bcf3742218" containerName="proxy-httpd" Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.578111 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="58ba43f9-5062-4b50-b5c3-14bcf3742218" containerName="proxy-httpd" Nov 22 12:23:15 crc kubenswrapper[4772]: E1122 12:23:15.578127 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58ba43f9-5062-4b50-b5c3-14bcf3742218" containerName="ceilometer-central-agent" Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.578135 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="58ba43f9-5062-4b50-b5c3-14bcf3742218" containerName="ceilometer-central-agent" Nov 22 12:23:15 crc kubenswrapper[4772]: E1122 12:23:15.578170 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58ba43f9-5062-4b50-b5c3-14bcf3742218" containerName="ceilometer-notification-agent" Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.578177 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="58ba43f9-5062-4b50-b5c3-14bcf3742218" containerName="ceilometer-notification-agent" Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.578433 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="58ba43f9-5062-4b50-b5c3-14bcf3742218" containerName="sg-core" Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.578464 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="58ba43f9-5062-4b50-b5c3-14bcf3742218" containerName="ceilometer-central-agent" Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.578478 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="58ba43f9-5062-4b50-b5c3-14bcf3742218" containerName="proxy-httpd" Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.578498 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="58ba43f9-5062-4b50-b5c3-14bcf3742218" containerName="ceilometer-notification-agent" Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.581924 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.589228 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.589675 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.607270 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.615442 4772 scope.go:117] "RemoveContainer" containerID="894d23048ff0a132aa4eca626b6a829f8777ad840d9d30544c27f62b5e810994" Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.639339 4772 scope.go:117] "RemoveContainer" containerID="22ad61f673cf191a0445b79fa4372173bea536cebf50c2c42aa258045bb2ce14" Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.652796 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3b447da-3a96-4fb6-a7e7-13c32dbb793c-log-httpd\") pod \"ceilometer-0\" (UID: \"e3b447da-3a96-4fb6-a7e7-13c32dbb793c\") " pod="openstack/ceilometer-0" Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.652871 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3b447da-3a96-4fb6-a7e7-13c32dbb793c-scripts\") pod \"ceilometer-0\" (UID: \"e3b447da-3a96-4fb6-a7e7-13c32dbb793c\") " pod="openstack/ceilometer-0" Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.652950 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2rrx\" (UniqueName: \"kubernetes.io/projected/e3b447da-3a96-4fb6-a7e7-13c32dbb793c-kube-api-access-m2rrx\") pod \"ceilometer-0\" (UID: \"e3b447da-3a96-4fb6-a7e7-13c32dbb793c\") " pod="openstack/ceilometer-0" Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.652981 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3b447da-3a96-4fb6-a7e7-13c32dbb793c-run-httpd\") pod \"ceilometer-0\" (UID: \"e3b447da-3a96-4fb6-a7e7-13c32dbb793c\") " pod="openstack/ceilometer-0" Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.653028 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e3b447da-3a96-4fb6-a7e7-13c32dbb793c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e3b447da-3a96-4fb6-a7e7-13c32dbb793c\") " pod="openstack/ceilometer-0" Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.653143 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3b447da-3a96-4fb6-a7e7-13c32dbb793c-config-data\") pod \"ceilometer-0\" (UID: \"e3b447da-3a96-4fb6-a7e7-13c32dbb793c\") " pod="openstack/ceilometer-0" Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.653169 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3b447da-3a96-4fb6-a7e7-13c32dbb793c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e3b447da-3a96-4fb6-a7e7-13c32dbb793c\") " pod="openstack/ceilometer-0" Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 
12:23:15.659393 4772 scope.go:117] "RemoveContainer" containerID="c7908ab27ab2525cd394455e4e8089e30bd929a25cfe430274143a57b4c8c36f" Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.677840 4772 scope.go:117] "RemoveContainer" containerID="0f25e1836fd456df8a691fe701520ed962d92b5599a2cc5ae2e3407550e7707a" Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.708168 4772 scope.go:117] "RemoveContainer" containerID="3bfebb8c105b234595fdd402c4f043a886a0ce4db74468851391059f2c9a7e4b" Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.726676 4772 scope.go:117] "RemoveContainer" containerID="aeae5b4ff8f79df9bbaad86debef6b2b05a9ab900d8fa3091645fdcdf58bed8f" Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.754913 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2rrx\" (UniqueName: \"kubernetes.io/projected/e3b447da-3a96-4fb6-a7e7-13c32dbb793c-kube-api-access-m2rrx\") pod \"ceilometer-0\" (UID: \"e3b447da-3a96-4fb6-a7e7-13c32dbb793c\") " pod="openstack/ceilometer-0" Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.754980 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3b447da-3a96-4fb6-a7e7-13c32dbb793c-run-httpd\") pod \"ceilometer-0\" (UID: \"e3b447da-3a96-4fb6-a7e7-13c32dbb793c\") " pod="openstack/ceilometer-0" Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.755036 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e3b447da-3a96-4fb6-a7e7-13c32dbb793c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e3b447da-3a96-4fb6-a7e7-13c32dbb793c\") " pod="openstack/ceilometer-0" Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.755203 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3b447da-3a96-4fb6-a7e7-13c32dbb793c-config-data\") pod \"ceilometer-0\" (UID: \"e3b447da-3a96-4fb6-a7e7-13c32dbb793c\") " pod="openstack/ceilometer-0" Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.755228 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3b447da-3a96-4fb6-a7e7-13c32dbb793c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e3b447da-3a96-4fb6-a7e7-13c32dbb793c\") " pod="openstack/ceilometer-0" Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.755297 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3b447da-3a96-4fb6-a7e7-13c32dbb793c-log-httpd\") pod \"ceilometer-0\" (UID: \"e3b447da-3a96-4fb6-a7e7-13c32dbb793c\") " pod="openstack/ceilometer-0" Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.755364 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3b447da-3a96-4fb6-a7e7-13c32dbb793c-scripts\") pod \"ceilometer-0\" (UID: \"e3b447da-3a96-4fb6-a7e7-13c32dbb793c\") " pod="openstack/ceilometer-0" Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.756183 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3b447da-3a96-4fb6-a7e7-13c32dbb793c-log-httpd\") pod \"ceilometer-0\" (UID: \"e3b447da-3a96-4fb6-a7e7-13c32dbb793c\") " pod="openstack/ceilometer-0" Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.757204 4772 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3b447da-3a96-4fb6-a7e7-13c32dbb793c-run-httpd\") pod \"ceilometer-0\" (UID: \"e3b447da-3a96-4fb6-a7e7-13c32dbb793c\") " pod="openstack/ceilometer-0" Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.762065 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3b447da-3a96-4fb6-a7e7-13c32dbb793c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e3b447da-3a96-4fb6-a7e7-13c32dbb793c\") " pod="openstack/ceilometer-0" Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.762124 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e3b447da-3a96-4fb6-a7e7-13c32dbb793c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e3b447da-3a96-4fb6-a7e7-13c32dbb793c\") " pod="openstack/ceilometer-0" Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.764100 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3b447da-3a96-4fb6-a7e7-13c32dbb793c-scripts\") pod \"ceilometer-0\" (UID: \"e3b447da-3a96-4fb6-a7e7-13c32dbb793c\") " pod="openstack/ceilometer-0" Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.764696 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3b447da-3a96-4fb6-a7e7-13c32dbb793c-config-data\") pod \"ceilometer-0\" (UID: \"e3b447da-3a96-4fb6-a7e7-13c32dbb793c\") " pod="openstack/ceilometer-0" Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.772383 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2rrx\" (UniqueName: \"kubernetes.io/projected/e3b447da-3a96-4fb6-a7e7-13c32dbb793c-kube-api-access-m2rrx\") pod \"ceilometer-0\" (UID: \"e3b447da-3a96-4fb6-a7e7-13c32dbb793c\") " pod="openstack/ceilometer-0" Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.785619 4772 scope.go:117] "RemoveContainer" containerID="17f752857092ce6dd4c5992e178cdf58b1e75b4ef4d64f3b0ef90a1c4246341f" Nov 22 12:23:15 crc kubenswrapper[4772]: E1122 12:23:15.786024 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17f752857092ce6dd4c5992e178cdf58b1e75b4ef4d64f3b0ef90a1c4246341f\": container with ID starting with 17f752857092ce6dd4c5992e178cdf58b1e75b4ef4d64f3b0ef90a1c4246341f not found: ID does not exist" containerID="17f752857092ce6dd4c5992e178cdf58b1e75b4ef4d64f3b0ef90a1c4246341f" Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.786077 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17f752857092ce6dd4c5992e178cdf58b1e75b4ef4d64f3b0ef90a1c4246341f"} err="failed to get container status \"17f752857092ce6dd4c5992e178cdf58b1e75b4ef4d64f3b0ef90a1c4246341f\": rpc error: code = NotFound desc = could not find container \"17f752857092ce6dd4c5992e178cdf58b1e75b4ef4d64f3b0ef90a1c4246341f\": container with ID starting with 17f752857092ce6dd4c5992e178cdf58b1e75b4ef4d64f3b0ef90a1c4246341f not found: ID does not exist" Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.786104 4772 scope.go:117] "RemoveContainer" containerID="22ad61f673cf191a0445b79fa4372173bea536cebf50c2c42aa258045bb2ce14" Nov 22 12:23:15 crc kubenswrapper[4772]: E1122 12:23:15.786752 4772 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"22ad61f673cf191a0445b79fa4372173bea536cebf50c2c42aa258045bb2ce14\": container with ID starting with 22ad61f673cf191a0445b79fa4372173bea536cebf50c2c42aa258045bb2ce14 not found: ID does not exist" containerID="22ad61f673cf191a0445b79fa4372173bea536cebf50c2c42aa258045bb2ce14" Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.786810 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22ad61f673cf191a0445b79fa4372173bea536cebf50c2c42aa258045bb2ce14"} err="failed to get container status \"22ad61f673cf191a0445b79fa4372173bea536cebf50c2c42aa258045bb2ce14\": rpc error: code = NotFound desc = could not find container \"22ad61f673cf191a0445b79fa4372173bea536cebf50c2c42aa258045bb2ce14\": container with ID starting with 22ad61f673cf191a0445b79fa4372173bea536cebf50c2c42aa258045bb2ce14 not found: ID does not exist" Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.786839 4772 scope.go:117] "RemoveContainer" containerID="0f25e1836fd456df8a691fe701520ed962d92b5599a2cc5ae2e3407550e7707a" Nov 22 12:23:15 crc kubenswrapper[4772]: E1122 12:23:15.787381 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f25e1836fd456df8a691fe701520ed962d92b5599a2cc5ae2e3407550e7707a\": container with ID starting with 0f25e1836fd456df8a691fe701520ed962d92b5599a2cc5ae2e3407550e7707a not found: ID does not exist" containerID="0f25e1836fd456df8a691fe701520ed962d92b5599a2cc5ae2e3407550e7707a" Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.787409 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f25e1836fd456df8a691fe701520ed962d92b5599a2cc5ae2e3407550e7707a"} err="failed to get container status \"0f25e1836fd456df8a691fe701520ed962d92b5599a2cc5ae2e3407550e7707a\": rpc error: code = NotFound desc = could not find container \"0f25e1836fd456df8a691fe701520ed962d92b5599a2cc5ae2e3407550e7707a\": container with ID starting with 0f25e1836fd456df8a691fe701520ed962d92b5599a2cc5ae2e3407550e7707a not found: ID does not exist" Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.787425 4772 scope.go:117] "RemoveContainer" containerID="aeae5b4ff8f79df9bbaad86debef6b2b05a9ab900d8fa3091645fdcdf58bed8f" Nov 22 12:23:15 crc kubenswrapper[4772]: E1122 12:23:15.787930 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aeae5b4ff8f79df9bbaad86debef6b2b05a9ab900d8fa3091645fdcdf58bed8f\": container with ID starting with aeae5b4ff8f79df9bbaad86debef6b2b05a9ab900d8fa3091645fdcdf58bed8f not found: ID does not exist" containerID="aeae5b4ff8f79df9bbaad86debef6b2b05a9ab900d8fa3091645fdcdf58bed8f" Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.787956 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aeae5b4ff8f79df9bbaad86debef6b2b05a9ab900d8fa3091645fdcdf58bed8f"} err="failed to get container status \"aeae5b4ff8f79df9bbaad86debef6b2b05a9ab900d8fa3091645fdcdf58bed8f\": rpc error: code = NotFound desc = could not find container \"aeae5b4ff8f79df9bbaad86debef6b2b05a9ab900d8fa3091645fdcdf58bed8f\": container with ID starting with aeae5b4ff8f79df9bbaad86debef6b2b05a9ab900d8fa3091645fdcdf58bed8f not found: ID does not exist" Nov 22 12:23:15 crc kubenswrapper[4772]: I1122 12:23:15.922407 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 12:23:16 crc kubenswrapper[4772]: I1122 12:23:16.508189 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 12:23:17 crc kubenswrapper[4772]: I1122 12:23:17.426182 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58ba43f9-5062-4b50-b5c3-14bcf3742218" path="/var/lib/kubelet/pods/58ba43f9-5062-4b50-b5c3-14bcf3742218/volumes" Nov 22 12:23:17 crc kubenswrapper[4772]: I1122 12:23:17.538344 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"baab6010-9072-4e34-8a19-afd2bf22ebce","Type":"ContainerStarted","Data":"368297c65a0f2472d1611d8c21bb0d85d34fbcd875af1dbb2a76969664123b6d"} Nov 22 12:23:17 crc kubenswrapper[4772]: I1122 12:23:17.541525 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3b447da-3a96-4fb6-a7e7-13c32dbb793c","Type":"ContainerStarted","Data":"4e2cec3d6fcd133eeac3c5d6baf3590752fd530fe48017fe46e44f7fc46c2982"} Nov 22 12:23:17 crc kubenswrapper[4772]: I1122 12:23:17.541614 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3b447da-3a96-4fb6-a7e7-13c32dbb793c","Type":"ContainerStarted","Data":"846b7d02ee5a112c02e7e410c5881d4a22164d29093b3bd2cd6741a322f8b43b"} Nov 22 12:23:19 crc kubenswrapper[4772]: I1122 12:23:19.586292 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3b447da-3a96-4fb6-a7e7-13c32dbb793c","Type":"ContainerStarted","Data":"15ea414b6053bb97fae6a1abd4c900fa48dfac99b44c59844f75b8093d821f0d"} Nov 22 12:23:19 crc kubenswrapper[4772]: I1122 12:23:19.587662 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3b447da-3a96-4fb6-a7e7-13c32dbb793c","Type":"ContainerStarted","Data":"3d9ebbd3ff5b88c4be8c3b0661bc35b1062f1adc7237698fa1195deebec86b26"} Nov 22 12:23:19 crc kubenswrapper[4772]: I1122 12:23:19.594751 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"baab6010-9072-4e34-8a19-afd2bf22ebce","Type":"ContainerStarted","Data":"f059bbc9ae8e8d6e264174d3e42c2b738debf669723d2faf35cb3722b28a3fc6"} Nov 22 12:23:19 crc kubenswrapper[4772]: I1122 12:23:19.624876 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.957197177 podStartE2EDuration="8.62485666s" podCreationTimestamp="2025-11-22 12:23:11 +0000 UTC" firstStartedPulling="2025-11-22 12:23:12.721152392 +0000 UTC m=+6312.960596886" lastFinishedPulling="2025-11-22 12:23:18.388811875 +0000 UTC m=+6318.628256369" observedRunningTime="2025-11-22 12:23:19.61883002 +0000 UTC m=+6319.858274514" watchObservedRunningTime="2025-11-22 12:23:19.62485666 +0000 UTC m=+6319.864301144" Nov 22 12:23:22 crc kubenswrapper[4772]: I1122 12:23:22.646145 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3b447da-3a96-4fb6-a7e7-13c32dbb793c","Type":"ContainerStarted","Data":"b5dfc1a2fb1e6b2d9115753d45ed22fd21439d02f0a4438be6cce74033eb23ab"} Nov 22 12:23:22 crc kubenswrapper[4772]: I1122 12:23:22.649012 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 22 12:23:22 crc kubenswrapper[4772]: I1122 12:23:22.696430 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.9447345179999997 podStartE2EDuration="7.696368834s" 
podCreationTimestamp="2025-11-22 12:23:15 +0000 UTC" firstStartedPulling="2025-11-22 12:23:16.579517278 +0000 UTC m=+6316.818961772" lastFinishedPulling="2025-11-22 12:23:21.331151554 +0000 UTC m=+6321.570596088" observedRunningTime="2025-11-22 12:23:22.682724924 +0000 UTC m=+6322.922169428" watchObservedRunningTime="2025-11-22 12:23:22.696368834 +0000 UTC m=+6322.935813338" Nov 22 12:23:26 crc kubenswrapper[4772]: I1122 12:23:26.697991 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-create-n5877"] Nov 22 12:23:26 crc kubenswrapper[4772]: I1122 12:23:26.701858 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-n5877" Nov 22 12:23:26 crc kubenswrapper[4772]: I1122 12:23:26.718801 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-n5877"] Nov 22 12:23:26 crc kubenswrapper[4772]: I1122 12:23:26.876497 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsh6g\" (UniqueName: \"kubernetes.io/projected/785e25f1-796b-4d4d-af36-cc9c0e3ea31c-kube-api-access-rsh6g\") pod \"manila-db-create-n5877\" (UID: \"785e25f1-796b-4d4d-af36-cc9c0e3ea31c\") " pod="openstack/manila-db-create-n5877" Nov 22 12:23:26 crc kubenswrapper[4772]: I1122 12:23:26.978929 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsh6g\" (UniqueName: \"kubernetes.io/projected/785e25f1-796b-4d4d-af36-cc9c0e3ea31c-kube-api-access-rsh6g\") pod \"manila-db-create-n5877\" (UID: \"785e25f1-796b-4d4d-af36-cc9c0e3ea31c\") " pod="openstack/manila-db-create-n5877" Nov 22 12:23:27 crc kubenswrapper[4772]: I1122 12:23:27.004528 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsh6g\" (UniqueName: \"kubernetes.io/projected/785e25f1-796b-4d4d-af36-cc9c0e3ea31c-kube-api-access-rsh6g\") pod \"manila-db-create-n5877\" (UID: \"785e25f1-796b-4d4d-af36-cc9c0e3ea31c\") " pod="openstack/manila-db-create-n5877" Nov 22 12:23:27 crc kubenswrapper[4772]: I1122 12:23:27.030931 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-n5877" Nov 22 12:23:27 crc kubenswrapper[4772]: I1122 12:23:27.735884 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-n5877"] Nov 22 12:23:28 crc kubenswrapper[4772]: I1122 12:23:28.735291 4772 generic.go:334] "Generic (PLEG): container finished" podID="785e25f1-796b-4d4d-af36-cc9c0e3ea31c" containerID="4a88317ae96db92ca84848ad077241aed98bc221b0b9635fc1c594b791e88024" exitCode=0 Nov 22 12:23:28 crc kubenswrapper[4772]: I1122 12:23:28.735368 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-n5877" event={"ID":"785e25f1-796b-4d4d-af36-cc9c0e3ea31c","Type":"ContainerDied","Data":"4a88317ae96db92ca84848ad077241aed98bc221b0b9635fc1c594b791e88024"} Nov 22 12:23:28 crc kubenswrapper[4772]: I1122 12:23:28.735756 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-n5877" event={"ID":"785e25f1-796b-4d4d-af36-cc9c0e3ea31c","Type":"ContainerStarted","Data":"cd0c56cfa405a5ba64e421940643889aeb908a441532b44b5f6b157d522f9265"} Nov 22 12:23:30 crc kubenswrapper[4772]: I1122 12:23:30.206814 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-n5877" Nov 22 12:23:30 crc kubenswrapper[4772]: I1122 12:23:30.261609 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rsh6g\" (UniqueName: \"kubernetes.io/projected/785e25f1-796b-4d4d-af36-cc9c0e3ea31c-kube-api-access-rsh6g\") pod \"785e25f1-796b-4d4d-af36-cc9c0e3ea31c\" (UID: \"785e25f1-796b-4d4d-af36-cc9c0e3ea31c\") " Nov 22 12:23:30 crc kubenswrapper[4772]: I1122 12:23:30.269622 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/785e25f1-796b-4d4d-af36-cc9c0e3ea31c-kube-api-access-rsh6g" (OuterVolumeSpecName: "kube-api-access-rsh6g") pod "785e25f1-796b-4d4d-af36-cc9c0e3ea31c" (UID: "785e25f1-796b-4d4d-af36-cc9c0e3ea31c"). InnerVolumeSpecName "kube-api-access-rsh6g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:23:30 crc kubenswrapper[4772]: I1122 12:23:30.363635 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rsh6g\" (UniqueName: \"kubernetes.io/projected/785e25f1-796b-4d4d-af36-cc9c0e3ea31c-kube-api-access-rsh6g\") on node \"crc\" DevicePath \"\"" Nov 22 12:23:30 crc kubenswrapper[4772]: I1122 12:23:30.769677 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-n5877" event={"ID":"785e25f1-796b-4d4d-af36-cc9c0e3ea31c","Type":"ContainerDied","Data":"cd0c56cfa405a5ba64e421940643889aeb908a441532b44b5f6b157d522f9265"} Nov 22 12:23:30 crc kubenswrapper[4772]: I1122 12:23:30.769716 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd0c56cfa405a5ba64e421940643889aeb908a441532b44b5f6b157d522f9265" Nov 22 12:23:30 crc kubenswrapper[4772]: I1122 12:23:30.769795 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-n5877" Nov 22 12:23:36 crc kubenswrapper[4772]: I1122 12:23:36.808334 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-d683-account-create-t8jwr"] Nov 22 12:23:36 crc kubenswrapper[4772]: E1122 12:23:36.809916 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="785e25f1-796b-4d4d-af36-cc9c0e3ea31c" containerName="mariadb-database-create" Nov 22 12:23:36 crc kubenswrapper[4772]: I1122 12:23:36.809935 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="785e25f1-796b-4d4d-af36-cc9c0e3ea31c" containerName="mariadb-database-create" Nov 22 12:23:36 crc kubenswrapper[4772]: I1122 12:23:36.811537 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="785e25f1-796b-4d4d-af36-cc9c0e3ea31c" containerName="mariadb-database-create" Nov 22 12:23:36 crc kubenswrapper[4772]: I1122 12:23:36.812986 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-d683-account-create-t8jwr" Nov 22 12:23:36 crc kubenswrapper[4772]: I1122 12:23:36.816424 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-db-secret" Nov 22 12:23:36 crc kubenswrapper[4772]: I1122 12:23:36.840698 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-d683-account-create-t8jwr"] Nov 22 12:23:36 crc kubenswrapper[4772]: I1122 12:23:36.854802 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2m5zb\" (UniqueName: \"kubernetes.io/projected/a5660a09-2828-4375-9257-39c260306ef4-kube-api-access-2m5zb\") pod \"manila-d683-account-create-t8jwr\" (UID: \"a5660a09-2828-4375-9257-39c260306ef4\") " pod="openstack/manila-d683-account-create-t8jwr" Nov 22 12:23:36 crc kubenswrapper[4772]: I1122 12:23:36.957805 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2m5zb\" (UniqueName: \"kubernetes.io/projected/a5660a09-2828-4375-9257-39c260306ef4-kube-api-access-2m5zb\") pod \"manila-d683-account-create-t8jwr\" (UID: \"a5660a09-2828-4375-9257-39c260306ef4\") " pod="openstack/manila-d683-account-create-t8jwr" Nov 22 12:23:37 crc kubenswrapper[4772]: I1122 12:23:37.001187 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2m5zb\" (UniqueName: \"kubernetes.io/projected/a5660a09-2828-4375-9257-39c260306ef4-kube-api-access-2m5zb\") pod \"manila-d683-account-create-t8jwr\" (UID: \"a5660a09-2828-4375-9257-39c260306ef4\") " pod="openstack/manila-d683-account-create-t8jwr" Nov 22 12:23:37 crc kubenswrapper[4772]: I1122 12:23:37.155407 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-d683-account-create-t8jwr" Nov 22 12:23:37 crc kubenswrapper[4772]: I1122 12:23:37.722872 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-d683-account-create-t8jwr"] Nov 22 12:23:37 crc kubenswrapper[4772]: W1122 12:23:37.732515 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda5660a09_2828_4375_9257_39c260306ef4.slice/crio-7a54a78d00c57c129d9b0fad9acfb85c50567c0f3e65050512b2627bd3945a93 WatchSource:0}: Error finding container 7a54a78d00c57c129d9b0fad9acfb85c50567c0f3e65050512b2627bd3945a93: Status 404 returned error can't find the container with id 7a54a78d00c57c129d9b0fad9acfb85c50567c0f3e65050512b2627bd3945a93 Nov 22 12:23:37 crc kubenswrapper[4772]: I1122 12:23:37.848745 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-d683-account-create-t8jwr" event={"ID":"a5660a09-2828-4375-9257-39c260306ef4","Type":"ContainerStarted","Data":"7a54a78d00c57c129d9b0fad9acfb85c50567c0f3e65050512b2627bd3945a93"} Nov 22 12:23:38 crc kubenswrapper[4772]: I1122 12:23:38.868432 4772 generic.go:334] "Generic (PLEG): container finished" podID="a5660a09-2828-4375-9257-39c260306ef4" containerID="2c99d0074e4f1e4bcc7a47092f770ec1294698d0778e6bfe394cd98fce6e6df6" exitCode=0 Nov 22 12:23:38 crc kubenswrapper[4772]: I1122 12:23:38.868501 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-d683-account-create-t8jwr" event={"ID":"a5660a09-2828-4375-9257-39c260306ef4","Type":"ContainerDied","Data":"2c99d0074e4f1e4bcc7a47092f770ec1294698d0778e6bfe394cd98fce6e6df6"} Nov 22 12:23:40 crc kubenswrapper[4772]: I1122 12:23:40.361497 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-d683-account-create-t8jwr" Nov 22 12:23:40 crc kubenswrapper[4772]: I1122 12:23:40.468188 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2m5zb\" (UniqueName: \"kubernetes.io/projected/a5660a09-2828-4375-9257-39c260306ef4-kube-api-access-2m5zb\") pod \"a5660a09-2828-4375-9257-39c260306ef4\" (UID: \"a5660a09-2828-4375-9257-39c260306ef4\") " Nov 22 12:23:40 crc kubenswrapper[4772]: I1122 12:23:40.478403 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5660a09-2828-4375-9257-39c260306ef4-kube-api-access-2m5zb" (OuterVolumeSpecName: "kube-api-access-2m5zb") pod "a5660a09-2828-4375-9257-39c260306ef4" (UID: "a5660a09-2828-4375-9257-39c260306ef4"). InnerVolumeSpecName "kube-api-access-2m5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:23:40 crc kubenswrapper[4772]: I1122 12:23:40.572991 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2m5zb\" (UniqueName: \"kubernetes.io/projected/a5660a09-2828-4375-9257-39c260306ef4-kube-api-access-2m5zb\") on node \"crc\" DevicePath \"\"" Nov 22 12:23:40 crc kubenswrapper[4772]: I1122 12:23:40.896433 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-d683-account-create-t8jwr" event={"ID":"a5660a09-2828-4375-9257-39c260306ef4","Type":"ContainerDied","Data":"7a54a78d00c57c129d9b0fad9acfb85c50567c0f3e65050512b2627bd3945a93"} Nov 22 12:23:40 crc kubenswrapper[4772]: I1122 12:23:40.896494 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a54a78d00c57c129d9b0fad9acfb85c50567c0f3e65050512b2627bd3945a93" Nov 22 12:23:40 crc kubenswrapper[4772]: I1122 12:23:40.896548 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-d683-account-create-t8jwr" Nov 22 12:23:42 crc kubenswrapper[4772]: I1122 12:23:42.190148 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-sync-m229v"] Nov 22 12:23:42 crc kubenswrapper[4772]: E1122 12:23:42.191422 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5660a09-2828-4375-9257-39c260306ef4" containerName="mariadb-account-create" Nov 22 12:23:42 crc kubenswrapper[4772]: I1122 12:23:42.191445 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5660a09-2828-4375-9257-39c260306ef4" containerName="mariadb-account-create" Nov 22 12:23:42 crc kubenswrapper[4772]: I1122 12:23:42.191750 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5660a09-2828-4375-9257-39c260306ef4" containerName="mariadb-account-create" Nov 22 12:23:42 crc kubenswrapper[4772]: I1122 12:23:42.192854 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-m229v" Nov 22 12:23:42 crc kubenswrapper[4772]: I1122 12:23:42.195833 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Nov 22 12:23:42 crc kubenswrapper[4772]: I1122 12:23:42.201997 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-zs8m7" Nov 22 12:23:42 crc kubenswrapper[4772]: I1122 12:23:42.216073 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08576e88-9d75-4ec1-9388-64ed8e991e48-combined-ca-bundle\") pod \"manila-db-sync-m229v\" (UID: \"08576e88-9d75-4ec1-9388-64ed8e991e48\") " pod="openstack/manila-db-sync-m229v" Nov 22 12:23:42 crc kubenswrapper[4772]: I1122 12:23:42.216152 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/08576e88-9d75-4ec1-9388-64ed8e991e48-job-config-data\") pod \"manila-db-sync-m229v\" (UID: \"08576e88-9d75-4ec1-9388-64ed8e991e48\") " pod="openstack/manila-db-sync-m229v" Nov 22 12:23:42 crc kubenswrapper[4772]: I1122 12:23:42.216233 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08576e88-9d75-4ec1-9388-64ed8e991e48-config-data\") pod \"manila-db-sync-m229v\" (UID: \"08576e88-9d75-4ec1-9388-64ed8e991e48\") " pod="openstack/manila-db-sync-m229v" Nov 22 12:23:42 crc kubenswrapper[4772]: I1122 12:23:42.216323 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlzjm\" (UniqueName: \"kubernetes.io/projected/08576e88-9d75-4ec1-9388-64ed8e991e48-kube-api-access-vlzjm\") pod \"manila-db-sync-m229v\" (UID: \"08576e88-9d75-4ec1-9388-64ed8e991e48\") " pod="openstack/manila-db-sync-m229v" Nov 22 12:23:42 crc kubenswrapper[4772]: I1122 12:23:42.220841 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-m229v"] Nov 22 12:23:42 crc kubenswrapper[4772]: I1122 12:23:42.318635 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08576e88-9d75-4ec1-9388-64ed8e991e48-combined-ca-bundle\") pod \"manila-db-sync-m229v\" (UID: \"08576e88-9d75-4ec1-9388-64ed8e991e48\") " pod="openstack/manila-db-sync-m229v" Nov 22 12:23:42 crc kubenswrapper[4772]: I1122 12:23:42.318697 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/08576e88-9d75-4ec1-9388-64ed8e991e48-job-config-data\") pod \"manila-db-sync-m229v\" (UID: \"08576e88-9d75-4ec1-9388-64ed8e991e48\") " pod="openstack/manila-db-sync-m229v" Nov 22 12:23:42 crc kubenswrapper[4772]: I1122 12:23:42.318758 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08576e88-9d75-4ec1-9388-64ed8e991e48-config-data\") pod \"manila-db-sync-m229v\" (UID: \"08576e88-9d75-4ec1-9388-64ed8e991e48\") " pod="openstack/manila-db-sync-m229v" Nov 22 12:23:42 crc kubenswrapper[4772]: I1122 12:23:42.318820 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlzjm\" (UniqueName: \"kubernetes.io/projected/08576e88-9d75-4ec1-9388-64ed8e991e48-kube-api-access-vlzjm\") pod \"manila-db-sync-m229v\" (UID: 
\"08576e88-9d75-4ec1-9388-64ed8e991e48\") " pod="openstack/manila-db-sync-m229v" Nov 22 12:23:42 crc kubenswrapper[4772]: I1122 12:23:42.324601 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08576e88-9d75-4ec1-9388-64ed8e991e48-combined-ca-bundle\") pod \"manila-db-sync-m229v\" (UID: \"08576e88-9d75-4ec1-9388-64ed8e991e48\") " pod="openstack/manila-db-sync-m229v" Nov 22 12:23:42 crc kubenswrapper[4772]: I1122 12:23:42.326071 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/08576e88-9d75-4ec1-9388-64ed8e991e48-job-config-data\") pod \"manila-db-sync-m229v\" (UID: \"08576e88-9d75-4ec1-9388-64ed8e991e48\") " pod="openstack/manila-db-sync-m229v" Nov 22 12:23:42 crc kubenswrapper[4772]: I1122 12:23:42.326713 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08576e88-9d75-4ec1-9388-64ed8e991e48-config-data\") pod \"manila-db-sync-m229v\" (UID: \"08576e88-9d75-4ec1-9388-64ed8e991e48\") " pod="openstack/manila-db-sync-m229v" Nov 22 12:23:42 crc kubenswrapper[4772]: I1122 12:23:42.348741 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlzjm\" (UniqueName: \"kubernetes.io/projected/08576e88-9d75-4ec1-9388-64ed8e991e48-kube-api-access-vlzjm\") pod \"manila-db-sync-m229v\" (UID: \"08576e88-9d75-4ec1-9388-64ed8e991e48\") " pod="openstack/manila-db-sync-m229v" Nov 22 12:23:42 crc kubenswrapper[4772]: I1122 12:23:42.520827 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-m229v" Nov 22 12:23:43 crc kubenswrapper[4772]: I1122 12:23:43.334103 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-m229v"] Nov 22 12:23:43 crc kubenswrapper[4772]: I1122 12:23:43.962790 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-m229v" event={"ID":"08576e88-9d75-4ec1-9388-64ed8e991e48","Type":"ContainerStarted","Data":"e221a0e26a4da6e839c5ca91ca3052c09c15edd941029137d5e9a3d57919ea26"} Nov 22 12:23:45 crc kubenswrapper[4772]: I1122 12:23:45.944404 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 22 12:23:51 crc kubenswrapper[4772]: I1122 12:23:51.093037 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-m229v" event={"ID":"08576e88-9d75-4ec1-9388-64ed8e991e48","Type":"ContainerStarted","Data":"5c9450175cae686c1ae0c954a73b8ef458b6a8f65d8f09ea29c3add4c79f0d57"} Nov 22 12:23:51 crc kubenswrapper[4772]: I1122 12:23:51.117001 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-db-sync-m229v" podStartSLOduration=2.793620746 podStartE2EDuration="9.116979636s" podCreationTimestamp="2025-11-22 12:23:42 +0000 UTC" firstStartedPulling="2025-11-22 12:23:43.343203786 +0000 UTC m=+6343.582648280" lastFinishedPulling="2025-11-22 12:23:49.666562676 +0000 UTC m=+6349.906007170" observedRunningTime="2025-11-22 12:23:51.115252263 +0000 UTC m=+6351.354696787" watchObservedRunningTime="2025-11-22 12:23:51.116979636 +0000 UTC m=+6351.356424140" Nov 22 12:23:52 crc kubenswrapper[4772]: I1122 12:23:52.108559 4772 generic.go:334] "Generic (PLEG): container finished" podID="08576e88-9d75-4ec1-9388-64ed8e991e48" containerID="5c9450175cae686c1ae0c954a73b8ef458b6a8f65d8f09ea29c3add4c79f0d57" exitCode=0 Nov 22 12:23:52 
crc kubenswrapper[4772]: I1122 12:23:52.108663 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-m229v" event={"ID":"08576e88-9d75-4ec1-9388-64ed8e991e48","Type":"ContainerDied","Data":"5c9450175cae686c1ae0c954a73b8ef458b6a8f65d8f09ea29c3add4c79f0d57"} Nov 22 12:23:53 crc kubenswrapper[4772]: I1122 12:23:53.654018 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-m229v" Nov 22 12:23:53 crc kubenswrapper[4772]: I1122 12:23:53.839503 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/08576e88-9d75-4ec1-9388-64ed8e991e48-job-config-data\") pod \"08576e88-9d75-4ec1-9388-64ed8e991e48\" (UID: \"08576e88-9d75-4ec1-9388-64ed8e991e48\") " Nov 22 12:23:53 crc kubenswrapper[4772]: I1122 12:23:53.839836 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vlzjm\" (UniqueName: \"kubernetes.io/projected/08576e88-9d75-4ec1-9388-64ed8e991e48-kube-api-access-vlzjm\") pod \"08576e88-9d75-4ec1-9388-64ed8e991e48\" (UID: \"08576e88-9d75-4ec1-9388-64ed8e991e48\") " Nov 22 12:23:53 crc kubenswrapper[4772]: I1122 12:23:53.839913 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08576e88-9d75-4ec1-9388-64ed8e991e48-config-data\") pod \"08576e88-9d75-4ec1-9388-64ed8e991e48\" (UID: \"08576e88-9d75-4ec1-9388-64ed8e991e48\") " Nov 22 12:23:53 crc kubenswrapper[4772]: I1122 12:23:53.840023 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08576e88-9d75-4ec1-9388-64ed8e991e48-combined-ca-bundle\") pod \"08576e88-9d75-4ec1-9388-64ed8e991e48\" (UID: \"08576e88-9d75-4ec1-9388-64ed8e991e48\") " Nov 22 12:23:53 crc kubenswrapper[4772]: I1122 12:23:53.853400 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08576e88-9d75-4ec1-9388-64ed8e991e48-job-config-data" (OuterVolumeSpecName: "job-config-data") pod "08576e88-9d75-4ec1-9388-64ed8e991e48" (UID: "08576e88-9d75-4ec1-9388-64ed8e991e48"). InnerVolumeSpecName "job-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:23:53 crc kubenswrapper[4772]: I1122 12:23:53.853419 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08576e88-9d75-4ec1-9388-64ed8e991e48-kube-api-access-vlzjm" (OuterVolumeSpecName: "kube-api-access-vlzjm") pod "08576e88-9d75-4ec1-9388-64ed8e991e48" (UID: "08576e88-9d75-4ec1-9388-64ed8e991e48"). InnerVolumeSpecName "kube-api-access-vlzjm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:23:53 crc kubenswrapper[4772]: I1122 12:23:53.856745 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08576e88-9d75-4ec1-9388-64ed8e991e48-config-data" (OuterVolumeSpecName: "config-data") pod "08576e88-9d75-4ec1-9388-64ed8e991e48" (UID: "08576e88-9d75-4ec1-9388-64ed8e991e48"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:23:53 crc kubenswrapper[4772]: I1122 12:23:53.890305 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08576e88-9d75-4ec1-9388-64ed8e991e48-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "08576e88-9d75-4ec1-9388-64ed8e991e48" (UID: "08576e88-9d75-4ec1-9388-64ed8e991e48"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:23:53 crc kubenswrapper[4772]: I1122 12:23:53.943092 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vlzjm\" (UniqueName: \"kubernetes.io/projected/08576e88-9d75-4ec1-9388-64ed8e991e48-kube-api-access-vlzjm\") on node \"crc\" DevicePath \"\"" Nov 22 12:23:53 crc kubenswrapper[4772]: I1122 12:23:53.943145 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08576e88-9d75-4ec1-9388-64ed8e991e48-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 12:23:53 crc kubenswrapper[4772]: I1122 12:23:53.943159 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08576e88-9d75-4ec1-9388-64ed8e991e48-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 12:23:53 crc kubenswrapper[4772]: I1122 12:23:53.943175 4772 reconciler_common.go:293] "Volume detached for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/08576e88-9d75-4ec1-9388-64ed8e991e48-job-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 12:23:54 crc kubenswrapper[4772]: I1122 12:23:54.136520 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-m229v" event={"ID":"08576e88-9d75-4ec1-9388-64ed8e991e48","Type":"ContainerDied","Data":"e221a0e26a4da6e839c5ca91ca3052c09c15edd941029137d5e9a3d57919ea26"} Nov 22 12:23:54 crc kubenswrapper[4772]: I1122 12:23:54.136561 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e221a0e26a4da6e839c5ca91ca3052c09c15edd941029137d5e9a3d57919ea26" Nov 22 12:23:54 crc kubenswrapper[4772]: I1122 12:23:54.136623 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-m229v" Nov 22 12:23:54 crc kubenswrapper[4772]: I1122 12:23:54.633612 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Nov 22 12:23:54 crc kubenswrapper[4772]: E1122 12:23:54.634575 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08576e88-9d75-4ec1-9388-64ed8e991e48" containerName="manila-db-sync" Nov 22 12:23:54 crc kubenswrapper[4772]: I1122 12:23:54.634597 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="08576e88-9d75-4ec1-9388-64ed8e991e48" containerName="manila-db-sync" Nov 22 12:23:54 crc kubenswrapper[4772]: I1122 12:23:54.634923 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="08576e88-9d75-4ec1-9388-64ed8e991e48" containerName="manila-db-sync" Nov 22 12:23:54 crc kubenswrapper[4772]: I1122 12:23:54.637198 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Nov 22 12:23:54 crc kubenswrapper[4772]: I1122 12:23:54.642715 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Nov 22 12:23:54 crc kubenswrapper[4772]: I1122 12:23:54.642870 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-zs8m7" Nov 22 12:23:54 crc kubenswrapper[4772]: I1122 12:23:54.650426 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Nov 22 12:23:54 crc kubenswrapper[4772]: I1122 12:23:54.650652 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scripts" Nov 22 12:23:54 crc kubenswrapper[4772]: I1122 12:23:54.652143 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Nov 22 12:23:54 crc kubenswrapper[4772]: I1122 12:23:54.655091 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Nov 22 12:23:54 crc kubenswrapper[4772]: I1122 12:23:54.657010 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Nov 22 12:23:54 crc kubenswrapper[4772]: I1122 12:23:54.679156 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Nov 22 12:23:54 crc kubenswrapper[4772]: I1122 12:23:54.767975 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3274781d-cba6-440d-a492-5d9e7cfdb23a-config-data\") pod \"manila-scheduler-0\" (UID: \"3274781d-cba6-440d-a492-5d9e7cfdb23a\") " pod="openstack/manila-scheduler-0" Nov 22 12:23:54 crc kubenswrapper[4772]: I1122 12:23:54.768059 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3274781d-cba6-440d-a492-5d9e7cfdb23a-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"3274781d-cba6-440d-a492-5d9e7cfdb23a\") " pod="openstack/manila-scheduler-0" Nov 22 12:23:54 crc kubenswrapper[4772]: I1122 12:23:54.768122 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3274781d-cba6-440d-a492-5d9e7cfdb23a-scripts\") pod \"manila-scheduler-0\" (UID: \"3274781d-cba6-440d-a492-5d9e7cfdb23a\") " pod="openstack/manila-scheduler-0" Nov 22 12:23:54 crc kubenswrapper[4772]: I1122 12:23:54.768458 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a69df5d-3116-4547-b106-d11e052b96d9-scripts\") pod \"manila-share-share1-0\" (UID: \"1a69df5d-3116-4547-b106-d11e052b96d9\") " pod="openstack/manila-share-share1-0" Nov 22 12:23:54 crc kubenswrapper[4772]: I1122 12:23:54.768987 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1a69df5d-3116-4547-b106-d11e052b96d9-ceph\") pod \"manila-share-share1-0\" (UID: \"1a69df5d-3116-4547-b106-d11e052b96d9\") " pod="openstack/manila-share-share1-0" Nov 22 12:23:54 crc kubenswrapper[4772]: I1122 12:23:54.769210 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5s7t\" (UniqueName: 
\"kubernetes.io/projected/1a69df5d-3116-4547-b106-d11e052b96d9-kube-api-access-g5s7t\") pod \"manila-share-share1-0\" (UID: \"1a69df5d-3116-4547-b106-d11e052b96d9\") " pod="openstack/manila-share-share1-0" Nov 22 12:23:54 crc kubenswrapper[4772]: I1122 12:23:54.769293 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3274781d-cba6-440d-a492-5d9e7cfdb23a-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"3274781d-cba6-440d-a492-5d9e7cfdb23a\") " pod="openstack/manila-scheduler-0" Nov 22 12:23:54 crc kubenswrapper[4772]: I1122 12:23:54.769378 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a69df5d-3116-4547-b106-d11e052b96d9-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"1a69df5d-3116-4547-b106-d11e052b96d9\") " pod="openstack/manila-share-share1-0" Nov 22 12:23:54 crc kubenswrapper[4772]: I1122 12:23:54.769438 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1a69df5d-3116-4547-b106-d11e052b96d9-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"1a69df5d-3116-4547-b106-d11e052b96d9\") " pod="openstack/manila-share-share1-0" Nov 22 12:23:54 crc kubenswrapper[4772]: I1122 12:23:54.769471 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a69df5d-3116-4547-b106-d11e052b96d9-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"1a69df5d-3116-4547-b106-d11e052b96d9\") " pod="openstack/manila-share-share1-0" Nov 22 12:23:54 crc kubenswrapper[4772]: I1122 12:23:54.769722 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3274781d-cba6-440d-a492-5d9e7cfdb23a-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"3274781d-cba6-440d-a492-5d9e7cfdb23a\") " pod="openstack/manila-scheduler-0" Nov 22 12:23:54 crc kubenswrapper[4772]: I1122 12:23:54.769773 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a69df5d-3116-4547-b106-d11e052b96d9-config-data\") pod \"manila-share-share1-0\" (UID: \"1a69df5d-3116-4547-b106-d11e052b96d9\") " pod="openstack/manila-share-share1-0" Nov 22 12:23:54 crc kubenswrapper[4772]: I1122 12:23:54.769793 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/1a69df5d-3116-4547-b106-d11e052b96d9-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"1a69df5d-3116-4547-b106-d11e052b96d9\") " pod="openstack/manila-share-share1-0" Nov 22 12:23:54 crc kubenswrapper[4772]: I1122 12:23:54.769863 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdq4j\" (UniqueName: \"kubernetes.io/projected/3274781d-cba6-440d-a492-5d9e7cfdb23a-kube-api-access-qdq4j\") pod \"manila-scheduler-0\" (UID: \"3274781d-cba6-440d-a492-5d9e7cfdb23a\") " pod="openstack/manila-scheduler-0" Nov 22 12:23:54 crc kubenswrapper[4772]: I1122 12:23:54.827745 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Nov 22 12:23:54 crc kubenswrapper[4772]: 
I1122 12:23:54.846412 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bf684ddc5-wksps"] Nov 22 12:23:54 crc kubenswrapper[4772]: I1122 12:23:54.854877 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bf684ddc5-wksps" Nov 22 12:23:54 crc kubenswrapper[4772]: I1122 12:23:54.874426 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5s7t\" (UniqueName: \"kubernetes.io/projected/1a69df5d-3116-4547-b106-d11e052b96d9-kube-api-access-g5s7t\") pod \"manila-share-share1-0\" (UID: \"1a69df5d-3116-4547-b106-d11e052b96d9\") " pod="openstack/manila-share-share1-0" Nov 22 12:23:54 crc kubenswrapper[4772]: I1122 12:23:54.874526 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3274781d-cba6-440d-a492-5d9e7cfdb23a-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"3274781d-cba6-440d-a492-5d9e7cfdb23a\") " pod="openstack/manila-scheduler-0" Nov 22 12:23:54 crc kubenswrapper[4772]: I1122 12:23:54.874584 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a69df5d-3116-4547-b106-d11e052b96d9-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"1a69df5d-3116-4547-b106-d11e052b96d9\") " pod="openstack/manila-share-share1-0" Nov 22 12:23:54 crc kubenswrapper[4772]: I1122 12:23:54.874637 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1a69df5d-3116-4547-b106-d11e052b96d9-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"1a69df5d-3116-4547-b106-d11e052b96d9\") " pod="openstack/manila-share-share1-0" Nov 22 12:23:54 crc kubenswrapper[4772]: I1122 12:23:54.874673 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a69df5d-3116-4547-b106-d11e052b96d9-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"1a69df5d-3116-4547-b106-d11e052b96d9\") " pod="openstack/manila-share-share1-0" Nov 22 12:23:54 crc kubenswrapper[4772]: I1122 12:23:54.874707 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3274781d-cba6-440d-a492-5d9e7cfdb23a-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"3274781d-cba6-440d-a492-5d9e7cfdb23a\") " pod="openstack/manila-scheduler-0" Nov 22 12:23:54 crc kubenswrapper[4772]: I1122 12:23:54.874734 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a69df5d-3116-4547-b106-d11e052b96d9-config-data\") pod \"manila-share-share1-0\" (UID: \"1a69df5d-3116-4547-b106-d11e052b96d9\") " pod="openstack/manila-share-share1-0" Nov 22 12:23:54 crc kubenswrapper[4772]: I1122 12:23:54.874760 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/1a69df5d-3116-4547-b106-d11e052b96d9-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"1a69df5d-3116-4547-b106-d11e052b96d9\") " pod="openstack/manila-share-share1-0" Nov 22 12:23:54 crc kubenswrapper[4772]: I1122 12:23:54.874798 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/1a69df5d-3116-4547-b106-d11e052b96d9-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"1a69df5d-3116-4547-b106-d11e052b96d9\") " pod="openstack/manila-share-share1-0" Nov 22 12:23:54 crc kubenswrapper[4772]: I1122 12:23:54.874817 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdq4j\" (UniqueName: \"kubernetes.io/projected/3274781d-cba6-440d-a492-5d9e7cfdb23a-kube-api-access-qdq4j\") pod \"manila-scheduler-0\" (UID: \"3274781d-cba6-440d-a492-5d9e7cfdb23a\") " pod="openstack/manila-scheduler-0" Nov 22 12:23:54 crc kubenswrapper[4772]: I1122 12:23:54.874851 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3274781d-cba6-440d-a492-5d9e7cfdb23a-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"3274781d-cba6-440d-a492-5d9e7cfdb23a\") " pod="openstack/manila-scheduler-0" Nov 22 12:23:54 crc kubenswrapper[4772]: I1122 12:23:54.874877 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3274781d-cba6-440d-a492-5d9e7cfdb23a-config-data\") pod \"manila-scheduler-0\" (UID: \"3274781d-cba6-440d-a492-5d9e7cfdb23a\") " pod="openstack/manila-scheduler-0" Nov 22 12:23:54 crc kubenswrapper[4772]: I1122 12:23:54.874927 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3274781d-cba6-440d-a492-5d9e7cfdb23a-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"3274781d-cba6-440d-a492-5d9e7cfdb23a\") " pod="openstack/manila-scheduler-0" Nov 22 12:23:54 crc kubenswrapper[4772]: I1122 12:23:54.875794 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3274781d-cba6-440d-a492-5d9e7cfdb23a-scripts\") pod \"manila-scheduler-0\" (UID: \"3274781d-cba6-440d-a492-5d9e7cfdb23a\") " pod="openstack/manila-scheduler-0" Nov 22 12:23:54 crc kubenswrapper[4772]: I1122 12:23:54.876613 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a69df5d-3116-4547-b106-d11e052b96d9-scripts\") pod \"manila-share-share1-0\" (UID: \"1a69df5d-3116-4547-b106-d11e052b96d9\") " pod="openstack/manila-share-share1-0" Nov 22 12:23:54 crc kubenswrapper[4772]: I1122 12:23:54.876662 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1a69df5d-3116-4547-b106-d11e052b96d9-ceph\") pod \"manila-share-share1-0\" (UID: \"1a69df5d-3116-4547-b106-d11e052b96d9\") " pod="openstack/manila-share-share1-0" Nov 22 12:23:54 crc kubenswrapper[4772]: I1122 12:23:54.879220 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/1a69df5d-3116-4547-b106-d11e052b96d9-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"1a69df5d-3116-4547-b106-d11e052b96d9\") " pod="openstack/manila-share-share1-0" Nov 22 12:23:54 crc kubenswrapper[4772]: I1122 12:23:54.898568 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3274781d-cba6-440d-a492-5d9e7cfdb23a-scripts\") pod \"manila-scheduler-0\" (UID: \"3274781d-cba6-440d-a492-5d9e7cfdb23a\") " pod="openstack/manila-scheduler-0" Nov 22 12:23:54 crc kubenswrapper[4772]: I1122 12:23:54.898925 4772 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a69df5d-3116-4547-b106-d11e052b96d9-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"1a69df5d-3116-4547-b106-d11e052b96d9\") " pod="openstack/manila-share-share1-0" Nov 22 12:23:54 crc kubenswrapper[4772]: I1122 12:23:54.898628 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1a69df5d-3116-4547-b106-d11e052b96d9-ceph\") pod \"manila-share-share1-0\" (UID: \"1a69df5d-3116-4547-b106-d11e052b96d9\") " pod="openstack/manila-share-share1-0" Nov 22 12:23:54 crc kubenswrapper[4772]: I1122 12:23:54.898672 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bf684ddc5-wksps"] Nov 22 12:23:54 crc kubenswrapper[4772]: I1122 12:23:54.898780 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a69df5d-3116-4547-b106-d11e052b96d9-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"1a69df5d-3116-4547-b106-d11e052b96d9\") " pod="openstack/manila-share-share1-0" Nov 22 12:23:54 crc kubenswrapper[4772]: I1122 12:23:54.899762 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a69df5d-3116-4547-b106-d11e052b96d9-config-data\") pod \"manila-share-share1-0\" (UID: \"1a69df5d-3116-4547-b106-d11e052b96d9\") " pod="openstack/manila-share-share1-0" Nov 22 12:23:54 crc kubenswrapper[4772]: I1122 12:23:54.905243 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdq4j\" (UniqueName: \"kubernetes.io/projected/3274781d-cba6-440d-a492-5d9e7cfdb23a-kube-api-access-qdq4j\") pod \"manila-scheduler-0\" (UID: \"3274781d-cba6-440d-a492-5d9e7cfdb23a\") " pod="openstack/manila-scheduler-0" Nov 22 12:23:54 crc kubenswrapper[4772]: I1122 12:23:54.906030 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3274781d-cba6-440d-a492-5d9e7cfdb23a-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"3274781d-cba6-440d-a492-5d9e7cfdb23a\") " pod="openstack/manila-scheduler-0" Nov 22 12:23:54 crc kubenswrapper[4772]: I1122 12:23:54.906554 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5s7t\" (UniqueName: \"kubernetes.io/projected/1a69df5d-3116-4547-b106-d11e052b96d9-kube-api-access-g5s7t\") pod \"manila-share-share1-0\" (UID: \"1a69df5d-3116-4547-b106-d11e052b96d9\") " pod="openstack/manila-share-share1-0" Nov 22 12:23:54 crc kubenswrapper[4772]: I1122 12:23:54.916190 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3274781d-cba6-440d-a492-5d9e7cfdb23a-config-data\") pod \"manila-scheduler-0\" (UID: \"3274781d-cba6-440d-a492-5d9e7cfdb23a\") " pod="openstack/manila-scheduler-0" Nov 22 12:23:54 crc kubenswrapper[4772]: I1122 12:23:54.926088 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3274781d-cba6-440d-a492-5d9e7cfdb23a-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"3274781d-cba6-440d-a492-5d9e7cfdb23a\") " pod="openstack/manila-scheduler-0" Nov 22 12:23:54 crc kubenswrapper[4772]: I1122 12:23:54.939496 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/1a69df5d-3116-4547-b106-d11e052b96d9-scripts\") pod \"manila-share-share1-0\" (UID: \"1a69df5d-3116-4547-b106-d11e052b96d9\") " pod="openstack/manila-share-share1-0" Nov 22 12:23:54 crc kubenswrapper[4772]: I1122 12:23:54.981523 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Nov 22 12:23:54 crc kubenswrapper[4772]: I1122 12:23:54.982965 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aa5165a2-1876-4f1e-bfbe-91198cc15ac5-ovsdbserver-nb\") pod \"dnsmasq-dns-6bf684ddc5-wksps\" (UID: \"aa5165a2-1876-4f1e-bfbe-91198cc15ac5\") " pod="openstack/dnsmasq-dns-6bf684ddc5-wksps" Nov 22 12:23:54 crc kubenswrapper[4772]: I1122 12:23:54.983381 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mvf2\" (UniqueName: \"kubernetes.io/projected/aa5165a2-1876-4f1e-bfbe-91198cc15ac5-kube-api-access-5mvf2\") pod \"dnsmasq-dns-6bf684ddc5-wksps\" (UID: \"aa5165a2-1876-4f1e-bfbe-91198cc15ac5\") " pod="openstack/dnsmasq-dns-6bf684ddc5-wksps" Nov 22 12:23:54 crc kubenswrapper[4772]: I1122 12:23:54.983521 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Nov 22 12:23:54 crc kubenswrapper[4772]: I1122 12:23:54.983547 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa5165a2-1876-4f1e-bfbe-91198cc15ac5-config\") pod \"dnsmasq-dns-6bf684ddc5-wksps\" (UID: \"aa5165a2-1876-4f1e-bfbe-91198cc15ac5\") " pod="openstack/dnsmasq-dns-6bf684ddc5-wksps" Nov 22 12:23:54 crc kubenswrapper[4772]: I1122 12:23:54.983616 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aa5165a2-1876-4f1e-bfbe-91198cc15ac5-ovsdbserver-sb\") pod \"dnsmasq-dns-6bf684ddc5-wksps\" (UID: \"aa5165a2-1876-4f1e-bfbe-91198cc15ac5\") " pod="openstack/dnsmasq-dns-6bf684ddc5-wksps" Nov 22 12:23:54 crc kubenswrapper[4772]: I1122 12:23:54.983687 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aa5165a2-1876-4f1e-bfbe-91198cc15ac5-dns-svc\") pod \"dnsmasq-dns-6bf684ddc5-wksps\" (UID: \"aa5165a2-1876-4f1e-bfbe-91198cc15ac5\") " pod="openstack/dnsmasq-dns-6bf684ddc5-wksps" Nov 22 12:23:54 crc kubenswrapper[4772]: I1122 12:23:54.987033 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Nov 22 12:23:55 crc kubenswrapper[4772]: I1122 12:23:54.991277 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Nov 22 12:23:55 crc kubenswrapper[4772]: I1122 12:23:55.007213 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Nov 22 12:23:55 crc kubenswrapper[4772]: I1122 12:23:55.015429 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Nov 22 12:23:55 crc kubenswrapper[4772]: I1122 12:23:55.089307 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aa5165a2-1876-4f1e-bfbe-91198cc15ac5-ovsdbserver-sb\") pod \"dnsmasq-dns-6bf684ddc5-wksps\" (UID: \"aa5165a2-1876-4f1e-bfbe-91198cc15ac5\") " pod="openstack/dnsmasq-dns-6bf684ddc5-wksps" Nov 22 12:23:55 crc kubenswrapper[4772]: I1122 12:23:55.089371 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aa5165a2-1876-4f1e-bfbe-91198cc15ac5-dns-svc\") pod \"dnsmasq-dns-6bf684ddc5-wksps\" (UID: \"aa5165a2-1876-4f1e-bfbe-91198cc15ac5\") " pod="openstack/dnsmasq-dns-6bf684ddc5-wksps" Nov 22 12:23:55 crc kubenswrapper[4772]: I1122 12:23:55.089413 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nc4fd\" (UniqueName: \"kubernetes.io/projected/e98833bc-72ed-444d-a7f3-e1c886658154-kube-api-access-nc4fd\") pod \"manila-api-0\" (UID: \"e98833bc-72ed-444d-a7f3-e1c886658154\") " pod="openstack/manila-api-0" Nov 22 12:23:55 crc kubenswrapper[4772]: I1122 12:23:55.089469 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e98833bc-72ed-444d-a7f3-e1c886658154-config-data-custom\") pod \"manila-api-0\" (UID: \"e98833bc-72ed-444d-a7f3-e1c886658154\") " pod="openstack/manila-api-0" Nov 22 12:23:55 crc kubenswrapper[4772]: I1122 12:23:55.089511 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aa5165a2-1876-4f1e-bfbe-91198cc15ac5-ovsdbserver-nb\") pod \"dnsmasq-dns-6bf684ddc5-wksps\" (UID: \"aa5165a2-1876-4f1e-bfbe-91198cc15ac5\") " pod="openstack/dnsmasq-dns-6bf684ddc5-wksps" Nov 22 12:23:55 crc kubenswrapper[4772]: I1122 12:23:55.089536 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e98833bc-72ed-444d-a7f3-e1c886658154-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"e98833bc-72ed-444d-a7f3-e1c886658154\") " pod="openstack/manila-api-0" Nov 22 12:23:55 crc kubenswrapper[4772]: I1122 12:23:55.089605 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e98833bc-72ed-444d-a7f3-e1c886658154-scripts\") pod \"manila-api-0\" (UID: \"e98833bc-72ed-444d-a7f3-e1c886658154\") " pod="openstack/manila-api-0" Nov 22 12:23:55 crc kubenswrapper[4772]: I1122 12:23:55.089625 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e98833bc-72ed-444d-a7f3-e1c886658154-config-data\") pod \"manila-api-0\" (UID: \"e98833bc-72ed-444d-a7f3-e1c886658154\") " pod="openstack/manila-api-0" Nov 22 12:23:55 crc kubenswrapper[4772]: I1122 12:23:55.089647 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e98833bc-72ed-444d-a7f3-e1c886658154-logs\") pod \"manila-api-0\" (UID: \"e98833bc-72ed-444d-a7f3-e1c886658154\") " pod="openstack/manila-api-0" Nov 22 12:23:55 crc kubenswrapper[4772]: I1122 12:23:55.089674 4772 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-5mvf2\" (UniqueName: \"kubernetes.io/projected/aa5165a2-1876-4f1e-bfbe-91198cc15ac5-kube-api-access-5mvf2\") pod \"dnsmasq-dns-6bf684ddc5-wksps\" (UID: \"aa5165a2-1876-4f1e-bfbe-91198cc15ac5\") " pod="openstack/dnsmasq-dns-6bf684ddc5-wksps" Nov 22 12:23:55 crc kubenswrapper[4772]: I1122 12:23:55.089703 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e98833bc-72ed-444d-a7f3-e1c886658154-etc-machine-id\") pod \"manila-api-0\" (UID: \"e98833bc-72ed-444d-a7f3-e1c886658154\") " pod="openstack/manila-api-0" Nov 22 12:23:55 crc kubenswrapper[4772]: I1122 12:23:55.089737 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa5165a2-1876-4f1e-bfbe-91198cc15ac5-config\") pod \"dnsmasq-dns-6bf684ddc5-wksps\" (UID: \"aa5165a2-1876-4f1e-bfbe-91198cc15ac5\") " pod="openstack/dnsmasq-dns-6bf684ddc5-wksps" Nov 22 12:23:55 crc kubenswrapper[4772]: I1122 12:23:55.090640 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aa5165a2-1876-4f1e-bfbe-91198cc15ac5-ovsdbserver-sb\") pod \"dnsmasq-dns-6bf684ddc5-wksps\" (UID: \"aa5165a2-1876-4f1e-bfbe-91198cc15ac5\") " pod="openstack/dnsmasq-dns-6bf684ddc5-wksps" Nov 22 12:23:55 crc kubenswrapper[4772]: I1122 12:23:55.091263 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aa5165a2-1876-4f1e-bfbe-91198cc15ac5-dns-svc\") pod \"dnsmasq-dns-6bf684ddc5-wksps\" (UID: \"aa5165a2-1876-4f1e-bfbe-91198cc15ac5\") " pod="openstack/dnsmasq-dns-6bf684ddc5-wksps" Nov 22 12:23:55 crc kubenswrapper[4772]: I1122 12:23:55.091357 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa5165a2-1876-4f1e-bfbe-91198cc15ac5-config\") pod \"dnsmasq-dns-6bf684ddc5-wksps\" (UID: \"aa5165a2-1876-4f1e-bfbe-91198cc15ac5\") " pod="openstack/dnsmasq-dns-6bf684ddc5-wksps" Nov 22 12:23:55 crc kubenswrapper[4772]: I1122 12:23:55.091795 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aa5165a2-1876-4f1e-bfbe-91198cc15ac5-ovsdbserver-nb\") pod \"dnsmasq-dns-6bf684ddc5-wksps\" (UID: \"aa5165a2-1876-4f1e-bfbe-91198cc15ac5\") " pod="openstack/dnsmasq-dns-6bf684ddc5-wksps" Nov 22 12:23:55 crc kubenswrapper[4772]: I1122 12:23:55.115269 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mvf2\" (UniqueName: \"kubernetes.io/projected/aa5165a2-1876-4f1e-bfbe-91198cc15ac5-kube-api-access-5mvf2\") pod \"dnsmasq-dns-6bf684ddc5-wksps\" (UID: \"aa5165a2-1876-4f1e-bfbe-91198cc15ac5\") " pod="openstack/dnsmasq-dns-6bf684ddc5-wksps" Nov 22 12:23:55 crc kubenswrapper[4772]: I1122 12:23:55.191503 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e98833bc-72ed-444d-a7f3-e1c886658154-etc-machine-id\") pod \"manila-api-0\" (UID: \"e98833bc-72ed-444d-a7f3-e1c886658154\") " pod="openstack/manila-api-0" Nov 22 12:23:55 crc kubenswrapper[4772]: I1122 12:23:55.191640 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nc4fd\" (UniqueName: 
\"kubernetes.io/projected/e98833bc-72ed-444d-a7f3-e1c886658154-kube-api-access-nc4fd\") pod \"manila-api-0\" (UID: \"e98833bc-72ed-444d-a7f3-e1c886658154\") " pod="openstack/manila-api-0" Nov 22 12:23:55 crc kubenswrapper[4772]: I1122 12:23:55.191680 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e98833bc-72ed-444d-a7f3-e1c886658154-config-data-custom\") pod \"manila-api-0\" (UID: \"e98833bc-72ed-444d-a7f3-e1c886658154\") " pod="openstack/manila-api-0" Nov 22 12:23:55 crc kubenswrapper[4772]: I1122 12:23:55.191726 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e98833bc-72ed-444d-a7f3-e1c886658154-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"e98833bc-72ed-444d-a7f3-e1c886658154\") " pod="openstack/manila-api-0" Nov 22 12:23:55 crc kubenswrapper[4772]: I1122 12:23:55.191798 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e98833bc-72ed-444d-a7f3-e1c886658154-scripts\") pod \"manila-api-0\" (UID: \"e98833bc-72ed-444d-a7f3-e1c886658154\") " pod="openstack/manila-api-0" Nov 22 12:23:55 crc kubenswrapper[4772]: I1122 12:23:55.191814 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e98833bc-72ed-444d-a7f3-e1c886658154-config-data\") pod \"manila-api-0\" (UID: \"e98833bc-72ed-444d-a7f3-e1c886658154\") " pod="openstack/manila-api-0" Nov 22 12:23:55 crc kubenswrapper[4772]: I1122 12:23:55.191837 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e98833bc-72ed-444d-a7f3-e1c886658154-logs\") pod \"manila-api-0\" (UID: \"e98833bc-72ed-444d-a7f3-e1c886658154\") " pod="openstack/manila-api-0" Nov 22 12:23:55 crc kubenswrapper[4772]: I1122 12:23:55.192375 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e98833bc-72ed-444d-a7f3-e1c886658154-logs\") pod \"manila-api-0\" (UID: \"e98833bc-72ed-444d-a7f3-e1c886658154\") " pod="openstack/manila-api-0" Nov 22 12:23:55 crc kubenswrapper[4772]: I1122 12:23:55.192540 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e98833bc-72ed-444d-a7f3-e1c886658154-etc-machine-id\") pod \"manila-api-0\" (UID: \"e98833bc-72ed-444d-a7f3-e1c886658154\") " pod="openstack/manila-api-0" Nov 22 12:23:55 crc kubenswrapper[4772]: I1122 12:23:55.198262 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e98833bc-72ed-444d-a7f3-e1c886658154-config-data-custom\") pod \"manila-api-0\" (UID: \"e98833bc-72ed-444d-a7f3-e1c886658154\") " pod="openstack/manila-api-0" Nov 22 12:23:55 crc kubenswrapper[4772]: I1122 12:23:55.199152 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e98833bc-72ed-444d-a7f3-e1c886658154-config-data\") pod \"manila-api-0\" (UID: \"e98833bc-72ed-444d-a7f3-e1c886658154\") " pod="openstack/manila-api-0" Nov 22 12:23:55 crc kubenswrapper[4772]: I1122 12:23:55.200873 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e98833bc-72ed-444d-a7f3-e1c886658154-scripts\") pod 
\"manila-api-0\" (UID: \"e98833bc-72ed-444d-a7f3-e1c886658154\") " pod="openstack/manila-api-0" Nov 22 12:23:55 crc kubenswrapper[4772]: I1122 12:23:55.203291 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e98833bc-72ed-444d-a7f3-e1c886658154-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"e98833bc-72ed-444d-a7f3-e1c886658154\") " pod="openstack/manila-api-0" Nov 22 12:23:55 crc kubenswrapper[4772]: I1122 12:23:55.215618 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nc4fd\" (UniqueName: \"kubernetes.io/projected/e98833bc-72ed-444d-a7f3-e1c886658154-kube-api-access-nc4fd\") pod \"manila-api-0\" (UID: \"e98833bc-72ed-444d-a7f3-e1c886658154\") " pod="openstack/manila-api-0" Nov 22 12:23:55 crc kubenswrapper[4772]: I1122 12:23:55.334257 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bf684ddc5-wksps" Nov 22 12:23:55 crc kubenswrapper[4772]: I1122 12:23:55.501091 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Nov 22 12:23:55 crc kubenswrapper[4772]: I1122 12:23:55.788639 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Nov 22 12:23:55 crc kubenswrapper[4772]: I1122 12:23:55.812876 4772 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 12:23:55 crc kubenswrapper[4772]: I1122 12:23:55.955759 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Nov 22 12:23:56 crc kubenswrapper[4772]: I1122 12:23:56.119205 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bf684ddc5-wksps"] Nov 22 12:23:56 crc kubenswrapper[4772]: I1122 12:23:56.166658 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"3274781d-cba6-440d-a492-5d9e7cfdb23a","Type":"ContainerStarted","Data":"3d230747b5a05bf420a28acaa95d1ca83e1c2e96a22c208b4e4ec14e64cd71ce"} Nov 22 12:23:56 crc kubenswrapper[4772]: I1122 12:23:56.170634 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bf684ddc5-wksps" event={"ID":"aa5165a2-1876-4f1e-bfbe-91198cc15ac5","Type":"ContainerStarted","Data":"b64556cdce9ee8163baeaf2134557dbb719e0185d1e622924161b318fba0750b"} Nov 22 12:23:56 crc kubenswrapper[4772]: I1122 12:23:56.172107 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"1a69df5d-3116-4547-b106-d11e052b96d9","Type":"ContainerStarted","Data":"dd311d1e0cbdc71895daed4efd516e1de355b04702ba59b7791dcac52e7a6fd0"} Nov 22 12:23:56 crc kubenswrapper[4772]: I1122 12:23:56.451196 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Nov 22 12:23:56 crc kubenswrapper[4772]: W1122 12:23:56.459281 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode98833bc_72ed_444d_a7f3_e1c886658154.slice/crio-0e9b0ead9e4cdee988c7f69a079f910b7def890f86a0d93366678eb7bf344b75 WatchSource:0}: Error finding container 0e9b0ead9e4cdee988c7f69a079f910b7def890f86a0d93366678eb7bf344b75: Status 404 returned error can't find the container with id 0e9b0ead9e4cdee988c7f69a079f910b7def890f86a0d93366678eb7bf344b75 Nov 22 12:23:57 crc kubenswrapper[4772]: I1122 12:23:57.221437 4772 generic.go:334] "Generic (PLEG): container finished" 
podID="aa5165a2-1876-4f1e-bfbe-91198cc15ac5" containerID="908c23612303573bcabd5f4b393a52973b93b9df7200258a43e7b8117b319418" exitCode=0 Nov 22 12:23:57 crc kubenswrapper[4772]: I1122 12:23:57.222866 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bf684ddc5-wksps" event={"ID":"aa5165a2-1876-4f1e-bfbe-91198cc15ac5","Type":"ContainerDied","Data":"908c23612303573bcabd5f4b393a52973b93b9df7200258a43e7b8117b319418"} Nov 22 12:23:57 crc kubenswrapper[4772]: I1122 12:23:57.244004 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"3274781d-cba6-440d-a492-5d9e7cfdb23a","Type":"ContainerStarted","Data":"6683f78458c477347c244427487988c9c9ecf598e502efa940992d04572232ee"} Nov 22 12:23:57 crc kubenswrapper[4772]: I1122 12:23:57.263316 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"e98833bc-72ed-444d-a7f3-e1c886658154","Type":"ContainerStarted","Data":"05842ecdc05f939c38292f081425e5210d4840a245964cf8c39fff5bd248d669"} Nov 22 12:23:57 crc kubenswrapper[4772]: I1122 12:23:57.263367 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"e98833bc-72ed-444d-a7f3-e1c886658154","Type":"ContainerStarted","Data":"0e9b0ead9e4cdee988c7f69a079f910b7def890f86a0d93366678eb7bf344b75"} Nov 22 12:23:58 crc kubenswrapper[4772]: I1122 12:23:58.292872 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"3274781d-cba6-440d-a492-5d9e7cfdb23a","Type":"ContainerStarted","Data":"c24b19897c48d1e7e7e9a9660f2f1f611ada56bf9bb2ebb57b6e9d6669b66c20"} Nov 22 12:23:58 crc kubenswrapper[4772]: I1122 12:23:58.307322 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"e98833bc-72ed-444d-a7f3-e1c886658154","Type":"ContainerStarted","Data":"8a53137465c220bc5c2bd0bb77202eefc67b1df71e3542f3a5a5a53e178f9554"} Nov 22 12:23:58 crc kubenswrapper[4772]: I1122 12:23:58.308381 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Nov 22 12:23:58 crc kubenswrapper[4772]: I1122 12:23:58.312375 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bf684ddc5-wksps" event={"ID":"aa5165a2-1876-4f1e-bfbe-91198cc15ac5","Type":"ContainerStarted","Data":"ab131b4b907e209b86358e354cefaddb4361007fbf6a9c45b89ebe5262ce55dc"} Nov 22 12:23:58 crc kubenswrapper[4772]: I1122 12:23:58.313332 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bf684ddc5-wksps" Nov 22 12:23:58 crc kubenswrapper[4772]: I1122 12:23:58.344268 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=3.650079062 podStartE2EDuration="4.344244046s" podCreationTimestamp="2025-11-22 12:23:54 +0000 UTC" firstStartedPulling="2025-11-22 12:23:55.81268642 +0000 UTC m=+6356.052130914" lastFinishedPulling="2025-11-22 12:23:56.506851404 +0000 UTC m=+6356.746295898" observedRunningTime="2025-11-22 12:23:58.324886808 +0000 UTC m=+6358.564331312" watchObservedRunningTime="2025-11-22 12:23:58.344244046 +0000 UTC m=+6358.583688540" Nov 22 12:23:58 crc kubenswrapper[4772]: I1122 12:23:58.379420 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bf684ddc5-wksps" podStartSLOduration=4.379397963 podStartE2EDuration="4.379397963s" podCreationTimestamp="2025-11-22 12:23:54 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:23:58.34928689 +0000 UTC m=+6358.588731394" watchObservedRunningTime="2025-11-22 12:23:58.379397963 +0000 UTC m=+6358.618842457" Nov 22 12:23:58 crc kubenswrapper[4772]: I1122 12:23:58.395790 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=4.395766517 podStartE2EDuration="4.395766517s" podCreationTimestamp="2025-11-22 12:23:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:23:58.36913493 +0000 UTC m=+6358.608579434" watchObservedRunningTime="2025-11-22 12:23:58.395766517 +0000 UTC m=+6358.635211011" Nov 22 12:24:04 crc kubenswrapper[4772]: I1122 12:24:04.383203 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"1a69df5d-3116-4547-b106-d11e052b96d9","Type":"ContainerStarted","Data":"55d5d6bbffb82001997980110e201bb5eacf691bacacb9400341886cdf177c58"} Nov 22 12:24:04 crc kubenswrapper[4772]: I1122 12:24:04.992556 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Nov 22 12:24:05 crc kubenswrapper[4772]: I1122 12:24:05.336262 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6bf684ddc5-wksps" Nov 22 12:24:05 crc kubenswrapper[4772]: I1122 12:24:05.477258 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f75b897c-6g6hs"] Nov 22 12:24:05 crc kubenswrapper[4772]: I1122 12:24:05.477356 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"1a69df5d-3116-4547-b106-d11e052b96d9","Type":"ContainerStarted","Data":"488c72799e31c803029486e61d7bb62b389b5f76a19132edd19284e3606d10c9"} Nov 22 12:24:05 crc kubenswrapper[4772]: I1122 12:24:05.477675 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5f75b897c-6g6hs" podUID="759669a7-3328-4d4f-b4bb-661824149475" containerName="dnsmasq-dns" containerID="cri-o://67fce81a7e820fcdff71214a30b7a20c77aeda7bc9448955423e69e4aed99c41" gracePeriod=10 Nov 22 12:24:05 crc kubenswrapper[4772]: I1122 12:24:05.485308 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=4.09000599 podStartE2EDuration="11.485281647s" podCreationTimestamp="2025-11-22 12:23:54 +0000 UTC" firstStartedPulling="2025-11-22 12:23:55.977229591 +0000 UTC m=+6356.216674085" lastFinishedPulling="2025-11-22 12:24:03.372505228 +0000 UTC m=+6363.611949742" observedRunningTime="2025-11-22 12:24:05.456987329 +0000 UTC m=+6365.696431833" watchObservedRunningTime="2025-11-22 12:24:05.485281647 +0000 UTC m=+6365.724726141" Nov 22 12:24:05 crc kubenswrapper[4772]: I1122 12:24:05.611921 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f75b897c-6g6hs" podUID="759669a7-3328-4d4f-b4bb-661824149475" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.1.79:5353: connect: connection refused" Nov 22 12:24:06 crc kubenswrapper[4772]: I1122 12:24:06.070023 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f75b897c-6g6hs" Nov 22 12:24:06 crc kubenswrapper[4772]: I1122 12:24:06.099043 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/759669a7-3328-4d4f-b4bb-661824149475-ovsdbserver-sb\") pod \"759669a7-3328-4d4f-b4bb-661824149475\" (UID: \"759669a7-3328-4d4f-b4bb-661824149475\") " Nov 22 12:24:06 crc kubenswrapper[4772]: I1122 12:24:06.099124 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/759669a7-3328-4d4f-b4bb-661824149475-config\") pod \"759669a7-3328-4d4f-b4bb-661824149475\" (UID: \"759669a7-3328-4d4f-b4bb-661824149475\") " Nov 22 12:24:06 crc kubenswrapper[4772]: I1122 12:24:06.099147 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/759669a7-3328-4d4f-b4bb-661824149475-dns-svc\") pod \"759669a7-3328-4d4f-b4bb-661824149475\" (UID: \"759669a7-3328-4d4f-b4bb-661824149475\") " Nov 22 12:24:06 crc kubenswrapper[4772]: I1122 12:24:06.099287 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mn7q6\" (UniqueName: \"kubernetes.io/projected/759669a7-3328-4d4f-b4bb-661824149475-kube-api-access-mn7q6\") pod \"759669a7-3328-4d4f-b4bb-661824149475\" (UID: \"759669a7-3328-4d4f-b4bb-661824149475\") " Nov 22 12:24:06 crc kubenswrapper[4772]: I1122 12:24:06.099316 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/759669a7-3328-4d4f-b4bb-661824149475-ovsdbserver-nb\") pod \"759669a7-3328-4d4f-b4bb-661824149475\" (UID: \"759669a7-3328-4d4f-b4bb-661824149475\") " Nov 22 12:24:06 crc kubenswrapper[4772]: I1122 12:24:06.115289 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/759669a7-3328-4d4f-b4bb-661824149475-kube-api-access-mn7q6" (OuterVolumeSpecName: "kube-api-access-mn7q6") pod "759669a7-3328-4d4f-b4bb-661824149475" (UID: "759669a7-3328-4d4f-b4bb-661824149475"). InnerVolumeSpecName "kube-api-access-mn7q6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:24:06 crc kubenswrapper[4772]: I1122 12:24:06.162830 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/759669a7-3328-4d4f-b4bb-661824149475-config" (OuterVolumeSpecName: "config") pod "759669a7-3328-4d4f-b4bb-661824149475" (UID: "759669a7-3328-4d4f-b4bb-661824149475"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 12:24:06 crc kubenswrapper[4772]: I1122 12:24:06.182604 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/759669a7-3328-4d4f-b4bb-661824149475-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "759669a7-3328-4d4f-b4bb-661824149475" (UID: "759669a7-3328-4d4f-b4bb-661824149475"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 12:24:06 crc kubenswrapper[4772]: I1122 12:24:06.186637 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/759669a7-3328-4d4f-b4bb-661824149475-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "759669a7-3328-4d4f-b4bb-661824149475" (UID: "759669a7-3328-4d4f-b4bb-661824149475"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 12:24:06 crc kubenswrapper[4772]: I1122 12:24:06.186974 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/759669a7-3328-4d4f-b4bb-661824149475-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "759669a7-3328-4d4f-b4bb-661824149475" (UID: "759669a7-3328-4d4f-b4bb-661824149475"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 12:24:06 crc kubenswrapper[4772]: I1122 12:24:06.202898 4772 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/759669a7-3328-4d4f-b4bb-661824149475-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 22 12:24:06 crc kubenswrapper[4772]: I1122 12:24:06.202934 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/759669a7-3328-4d4f-b4bb-661824149475-config\") on node \"crc\" DevicePath \"\"" Nov 22 12:24:06 crc kubenswrapper[4772]: I1122 12:24:06.202944 4772 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/759669a7-3328-4d4f-b4bb-661824149475-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 12:24:06 crc kubenswrapper[4772]: I1122 12:24:06.202954 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mn7q6\" (UniqueName: \"kubernetes.io/projected/759669a7-3328-4d4f-b4bb-661824149475-kube-api-access-mn7q6\") on node \"crc\" DevicePath \"\"" Nov 22 12:24:06 crc kubenswrapper[4772]: I1122 12:24:06.202967 4772 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/759669a7-3328-4d4f-b4bb-661824149475-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 12:24:06 crc kubenswrapper[4772]: I1122 12:24:06.438535 4772 generic.go:334] "Generic (PLEG): container finished" podID="759669a7-3328-4d4f-b4bb-661824149475" containerID="67fce81a7e820fcdff71214a30b7a20c77aeda7bc9448955423e69e4aed99c41" exitCode=0 Nov 22 12:24:06 crc kubenswrapper[4772]: I1122 12:24:06.438588 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f75b897c-6g6hs" event={"ID":"759669a7-3328-4d4f-b4bb-661824149475","Type":"ContainerDied","Data":"67fce81a7e820fcdff71214a30b7a20c77aeda7bc9448955423e69e4aed99c41"} Nov 22 12:24:06 crc kubenswrapper[4772]: I1122 12:24:06.438619 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f75b897c-6g6hs" Nov 22 12:24:06 crc kubenswrapper[4772]: I1122 12:24:06.438644 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f75b897c-6g6hs" event={"ID":"759669a7-3328-4d4f-b4bb-661824149475","Type":"ContainerDied","Data":"3f24ff394403b7f3c0276461ac01cf07415db9c2d922cdc004730582da93a53f"} Nov 22 12:24:06 crc kubenswrapper[4772]: I1122 12:24:06.438665 4772 scope.go:117] "RemoveContainer" containerID="67fce81a7e820fcdff71214a30b7a20c77aeda7bc9448955423e69e4aed99c41" Nov 22 12:24:06 crc kubenswrapper[4772]: I1122 12:24:06.463103 4772 scope.go:117] "RemoveContainer" containerID="7144d70742ec6ec629bbc86272ffc69510339c81df202a18dfa826251d72a378" Nov 22 12:24:06 crc kubenswrapper[4772]: I1122 12:24:06.480397 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f75b897c-6g6hs"] Nov 22 12:24:06 crc kubenswrapper[4772]: I1122 12:24:06.489646 4772 scope.go:117] "RemoveContainer" containerID="67fce81a7e820fcdff71214a30b7a20c77aeda7bc9448955423e69e4aed99c41" Nov 22 12:24:06 crc kubenswrapper[4772]: I1122 12:24:06.489947 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f75b897c-6g6hs"] Nov 22 12:24:06 crc kubenswrapper[4772]: E1122 12:24:06.490085 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67fce81a7e820fcdff71214a30b7a20c77aeda7bc9448955423e69e4aed99c41\": container with ID starting with 67fce81a7e820fcdff71214a30b7a20c77aeda7bc9448955423e69e4aed99c41 not found: ID does not exist" containerID="67fce81a7e820fcdff71214a30b7a20c77aeda7bc9448955423e69e4aed99c41" Nov 22 12:24:06 crc kubenswrapper[4772]: I1122 12:24:06.490138 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67fce81a7e820fcdff71214a30b7a20c77aeda7bc9448955423e69e4aed99c41"} err="failed to get container status \"67fce81a7e820fcdff71214a30b7a20c77aeda7bc9448955423e69e4aed99c41\": rpc error: code = NotFound desc = could not find container \"67fce81a7e820fcdff71214a30b7a20c77aeda7bc9448955423e69e4aed99c41\": container with ID starting with 67fce81a7e820fcdff71214a30b7a20c77aeda7bc9448955423e69e4aed99c41 not found: ID does not exist" Nov 22 12:24:06 crc kubenswrapper[4772]: I1122 12:24:06.490171 4772 scope.go:117] "RemoveContainer" containerID="7144d70742ec6ec629bbc86272ffc69510339c81df202a18dfa826251d72a378" Nov 22 12:24:06 crc kubenswrapper[4772]: E1122 12:24:06.490666 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7144d70742ec6ec629bbc86272ffc69510339c81df202a18dfa826251d72a378\": container with ID starting with 7144d70742ec6ec629bbc86272ffc69510339c81df202a18dfa826251d72a378 not found: ID does not exist" containerID="7144d70742ec6ec629bbc86272ffc69510339c81df202a18dfa826251d72a378" Nov 22 12:24:06 crc kubenswrapper[4772]: I1122 12:24:06.490693 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7144d70742ec6ec629bbc86272ffc69510339c81df202a18dfa826251d72a378"} err="failed to get container status \"7144d70742ec6ec629bbc86272ffc69510339c81df202a18dfa826251d72a378\": rpc error: code = NotFound desc = could not find container \"7144d70742ec6ec629bbc86272ffc69510339c81df202a18dfa826251d72a378\": container with ID starting with 7144d70742ec6ec629bbc86272ffc69510339c81df202a18dfa826251d72a378 not found: ID does not exist" Nov 22 12:24:07 
crc kubenswrapper[4772]: I1122 12:24:07.434434 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="759669a7-3328-4d4f-b4bb-661824149475" path="/var/lib/kubelet/pods/759669a7-3328-4d4f-b4bb-661824149475/volumes" Nov 22 12:24:08 crc kubenswrapper[4772]: I1122 12:24:08.317368 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 12:24:08 crc kubenswrapper[4772]: I1122 12:24:08.317733 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e3b447da-3a96-4fb6-a7e7-13c32dbb793c" containerName="ceilometer-central-agent" containerID="cri-o://4e2cec3d6fcd133eeac3c5d6baf3590752fd530fe48017fe46e44f7fc46c2982" gracePeriod=30 Nov 22 12:24:08 crc kubenswrapper[4772]: I1122 12:24:08.317884 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e3b447da-3a96-4fb6-a7e7-13c32dbb793c" containerName="sg-core" containerID="cri-o://15ea414b6053bb97fae6a1abd4c900fa48dfac99b44c59844f75b8093d821f0d" gracePeriod=30 Nov 22 12:24:08 crc kubenswrapper[4772]: I1122 12:24:08.317941 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e3b447da-3a96-4fb6-a7e7-13c32dbb793c" containerName="ceilometer-notification-agent" containerID="cri-o://3d9ebbd3ff5b88c4be8c3b0661bc35b1062f1adc7237698fa1195deebec86b26" gracePeriod=30 Nov 22 12:24:08 crc kubenswrapper[4772]: I1122 12:24:08.317904 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e3b447da-3a96-4fb6-a7e7-13c32dbb793c" containerName="proxy-httpd" containerID="cri-o://b5dfc1a2fb1e6b2d9115753d45ed22fd21439d02f0a4438be6cce74033eb23ab" gracePeriod=30 Nov 22 12:24:08 crc kubenswrapper[4772]: I1122 12:24:08.483590 4772 generic.go:334] "Generic (PLEG): container finished" podID="e3b447da-3a96-4fb6-a7e7-13c32dbb793c" containerID="15ea414b6053bb97fae6a1abd4c900fa48dfac99b44c59844f75b8093d821f0d" exitCode=2 Nov 22 12:24:08 crc kubenswrapper[4772]: I1122 12:24:08.483662 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3b447da-3a96-4fb6-a7e7-13c32dbb793c","Type":"ContainerDied","Data":"15ea414b6053bb97fae6a1abd4c900fa48dfac99b44c59844f75b8093d821f0d"} Nov 22 12:24:09 crc kubenswrapper[4772]: I1122 12:24:09.503031 4772 generic.go:334] "Generic (PLEG): container finished" podID="e3b447da-3a96-4fb6-a7e7-13c32dbb793c" containerID="b5dfc1a2fb1e6b2d9115753d45ed22fd21439d02f0a4438be6cce74033eb23ab" exitCode=0 Nov 22 12:24:09 crc kubenswrapper[4772]: I1122 12:24:09.503583 4772 generic.go:334] "Generic (PLEG): container finished" podID="e3b447da-3a96-4fb6-a7e7-13c32dbb793c" containerID="4e2cec3d6fcd133eeac3c5d6baf3590752fd530fe48017fe46e44f7fc46c2982" exitCode=0 Nov 22 12:24:09 crc kubenswrapper[4772]: I1122 12:24:09.503144 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3b447da-3a96-4fb6-a7e7-13c32dbb793c","Type":"ContainerDied","Data":"b5dfc1a2fb1e6b2d9115753d45ed22fd21439d02f0a4438be6cce74033eb23ab"} Nov 22 12:24:09 crc kubenswrapper[4772]: I1122 12:24:09.503661 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3b447da-3a96-4fb6-a7e7-13c32dbb793c","Type":"ContainerDied","Data":"4e2cec3d6fcd133eeac3c5d6baf3590752fd530fe48017fe46e44f7fc46c2982"} Nov 22 12:24:13 crc kubenswrapper[4772]: I1122 12:24:13.116971 4772 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 12:24:13 crc kubenswrapper[4772]: I1122 12:24:13.192394 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e3b447da-3a96-4fb6-a7e7-13c32dbb793c-sg-core-conf-yaml\") pod \"e3b447da-3a96-4fb6-a7e7-13c32dbb793c\" (UID: \"e3b447da-3a96-4fb6-a7e7-13c32dbb793c\") " Nov 22 12:24:13 crc kubenswrapper[4772]: I1122 12:24:13.192598 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3b447da-3a96-4fb6-a7e7-13c32dbb793c-config-data\") pod \"e3b447da-3a96-4fb6-a7e7-13c32dbb793c\" (UID: \"e3b447da-3a96-4fb6-a7e7-13c32dbb793c\") " Nov 22 12:24:13 crc kubenswrapper[4772]: I1122 12:24:13.192701 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2rrx\" (UniqueName: \"kubernetes.io/projected/e3b447da-3a96-4fb6-a7e7-13c32dbb793c-kube-api-access-m2rrx\") pod \"e3b447da-3a96-4fb6-a7e7-13c32dbb793c\" (UID: \"e3b447da-3a96-4fb6-a7e7-13c32dbb793c\") " Nov 22 12:24:13 crc kubenswrapper[4772]: I1122 12:24:13.192906 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3b447da-3a96-4fb6-a7e7-13c32dbb793c-scripts\") pod \"e3b447da-3a96-4fb6-a7e7-13c32dbb793c\" (UID: \"e3b447da-3a96-4fb6-a7e7-13c32dbb793c\") " Nov 22 12:24:13 crc kubenswrapper[4772]: I1122 12:24:13.193210 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3b447da-3a96-4fb6-a7e7-13c32dbb793c-combined-ca-bundle\") pod \"e3b447da-3a96-4fb6-a7e7-13c32dbb793c\" (UID: \"e3b447da-3a96-4fb6-a7e7-13c32dbb793c\") " Nov 22 12:24:13 crc kubenswrapper[4772]: I1122 12:24:13.193253 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3b447da-3a96-4fb6-a7e7-13c32dbb793c-run-httpd\") pod \"e3b447da-3a96-4fb6-a7e7-13c32dbb793c\" (UID: \"e3b447da-3a96-4fb6-a7e7-13c32dbb793c\") " Nov 22 12:24:13 crc kubenswrapper[4772]: I1122 12:24:13.193346 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3b447da-3a96-4fb6-a7e7-13c32dbb793c-log-httpd\") pod \"e3b447da-3a96-4fb6-a7e7-13c32dbb793c\" (UID: \"e3b447da-3a96-4fb6-a7e7-13c32dbb793c\") " Nov 22 12:24:13 crc kubenswrapper[4772]: I1122 12:24:13.193589 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3b447da-3a96-4fb6-a7e7-13c32dbb793c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e3b447da-3a96-4fb6-a7e7-13c32dbb793c" (UID: "e3b447da-3a96-4fb6-a7e7-13c32dbb793c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 12:24:13 crc kubenswrapper[4772]: I1122 12:24:13.194134 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3b447da-3a96-4fb6-a7e7-13c32dbb793c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e3b447da-3a96-4fb6-a7e7-13c32dbb793c" (UID: "e3b447da-3a96-4fb6-a7e7-13c32dbb793c"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 12:24:13 crc kubenswrapper[4772]: I1122 12:24:13.194892 4772 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3b447da-3a96-4fb6-a7e7-13c32dbb793c-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 12:24:13 crc kubenswrapper[4772]: I1122 12:24:13.194928 4772 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3b447da-3a96-4fb6-a7e7-13c32dbb793c-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 12:24:13 crc kubenswrapper[4772]: I1122 12:24:13.199897 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3b447da-3a96-4fb6-a7e7-13c32dbb793c-scripts" (OuterVolumeSpecName: "scripts") pod "e3b447da-3a96-4fb6-a7e7-13c32dbb793c" (UID: "e3b447da-3a96-4fb6-a7e7-13c32dbb793c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:24:13 crc kubenswrapper[4772]: I1122 12:24:13.201162 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3b447da-3a96-4fb6-a7e7-13c32dbb793c-kube-api-access-m2rrx" (OuterVolumeSpecName: "kube-api-access-m2rrx") pod "e3b447da-3a96-4fb6-a7e7-13c32dbb793c" (UID: "e3b447da-3a96-4fb6-a7e7-13c32dbb793c"). InnerVolumeSpecName "kube-api-access-m2rrx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:24:13 crc kubenswrapper[4772]: I1122 12:24:13.259142 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3b447da-3a96-4fb6-a7e7-13c32dbb793c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e3b447da-3a96-4fb6-a7e7-13c32dbb793c" (UID: "e3b447da-3a96-4fb6-a7e7-13c32dbb793c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:24:13 crc kubenswrapper[4772]: I1122 12:24:13.297221 4772 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e3b447da-3a96-4fb6-a7e7-13c32dbb793c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 22 12:24:13 crc kubenswrapper[4772]: I1122 12:24:13.297259 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2rrx\" (UniqueName: \"kubernetes.io/projected/e3b447da-3a96-4fb6-a7e7-13c32dbb793c-kube-api-access-m2rrx\") on node \"crc\" DevicePath \"\"" Nov 22 12:24:13 crc kubenswrapper[4772]: I1122 12:24:13.297275 4772 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3b447da-3a96-4fb6-a7e7-13c32dbb793c-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 12:24:13 crc kubenswrapper[4772]: I1122 12:24:13.306601 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3b447da-3a96-4fb6-a7e7-13c32dbb793c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e3b447da-3a96-4fb6-a7e7-13c32dbb793c" (UID: "e3b447da-3a96-4fb6-a7e7-13c32dbb793c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:24:13 crc kubenswrapper[4772]: I1122 12:24:13.343283 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3b447da-3a96-4fb6-a7e7-13c32dbb793c-config-data" (OuterVolumeSpecName: "config-data") pod "e3b447da-3a96-4fb6-a7e7-13c32dbb793c" (UID: "e3b447da-3a96-4fb6-a7e7-13c32dbb793c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:24:13 crc kubenswrapper[4772]: I1122 12:24:13.399719 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3b447da-3a96-4fb6-a7e7-13c32dbb793c-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 12:24:13 crc kubenswrapper[4772]: I1122 12:24:13.399768 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3b447da-3a96-4fb6-a7e7-13c32dbb793c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 12:24:13 crc kubenswrapper[4772]: I1122 12:24:13.551348 4772 generic.go:334] "Generic (PLEG): container finished" podID="e3b447da-3a96-4fb6-a7e7-13c32dbb793c" containerID="3d9ebbd3ff5b88c4be8c3b0661bc35b1062f1adc7237698fa1195deebec86b26" exitCode=0 Nov 22 12:24:13 crc kubenswrapper[4772]: I1122 12:24:13.551390 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3b447da-3a96-4fb6-a7e7-13c32dbb793c","Type":"ContainerDied","Data":"3d9ebbd3ff5b88c4be8c3b0661bc35b1062f1adc7237698fa1195deebec86b26"} Nov 22 12:24:13 crc kubenswrapper[4772]: I1122 12:24:13.551413 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 12:24:13 crc kubenswrapper[4772]: I1122 12:24:13.551419 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3b447da-3a96-4fb6-a7e7-13c32dbb793c","Type":"ContainerDied","Data":"846b7d02ee5a112c02e7e410c5881d4a22164d29093b3bd2cd6741a322f8b43b"} Nov 22 12:24:13 crc kubenswrapper[4772]: I1122 12:24:13.551487 4772 scope.go:117] "RemoveContainer" containerID="b5dfc1a2fb1e6b2d9115753d45ed22fd21439d02f0a4438be6cce74033eb23ab" Nov 22 12:24:13 crc kubenswrapper[4772]: I1122 12:24:13.579904 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 12:24:13 crc kubenswrapper[4772]: I1122 12:24:13.587617 4772 scope.go:117] "RemoveContainer" containerID="15ea414b6053bb97fae6a1abd4c900fa48dfac99b44c59844f75b8093d821f0d" Nov 22 12:24:13 crc kubenswrapper[4772]: I1122 12:24:13.589144 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 22 12:24:13 crc kubenswrapper[4772]: I1122 12:24:13.610560 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 22 12:24:13 crc kubenswrapper[4772]: E1122 12:24:13.611368 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="759669a7-3328-4d4f-b4bb-661824149475" containerName="init" Nov 22 12:24:13 crc kubenswrapper[4772]: I1122 12:24:13.611486 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="759669a7-3328-4d4f-b4bb-661824149475" containerName="init" Nov 22 12:24:13 crc kubenswrapper[4772]: E1122 12:24:13.611574 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3b447da-3a96-4fb6-a7e7-13c32dbb793c" containerName="proxy-httpd" Nov 22 12:24:13 crc kubenswrapper[4772]: I1122 12:24:13.611645 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3b447da-3a96-4fb6-a7e7-13c32dbb793c" containerName="proxy-httpd" Nov 22 12:24:13 crc kubenswrapper[4772]: E1122 12:24:13.611744 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3b447da-3a96-4fb6-a7e7-13c32dbb793c" containerName="ceilometer-central-agent" Nov 22 12:24:13 crc kubenswrapper[4772]: I1122 12:24:13.611814 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3b447da-3a96-4fb6-a7e7-13c32dbb793c" 
containerName="ceilometer-central-agent" Nov 22 12:24:13 crc kubenswrapper[4772]: E1122 12:24:13.611889 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="759669a7-3328-4d4f-b4bb-661824149475" containerName="dnsmasq-dns" Nov 22 12:24:13 crc kubenswrapper[4772]: I1122 12:24:13.611971 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="759669a7-3328-4d4f-b4bb-661824149475" containerName="dnsmasq-dns" Nov 22 12:24:13 crc kubenswrapper[4772]: E1122 12:24:13.612079 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3b447da-3a96-4fb6-a7e7-13c32dbb793c" containerName="ceilometer-notification-agent" Nov 22 12:24:13 crc kubenswrapper[4772]: I1122 12:24:13.612155 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3b447da-3a96-4fb6-a7e7-13c32dbb793c" containerName="ceilometer-notification-agent" Nov 22 12:24:13 crc kubenswrapper[4772]: E1122 12:24:13.612260 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3b447da-3a96-4fb6-a7e7-13c32dbb793c" containerName="sg-core" Nov 22 12:24:13 crc kubenswrapper[4772]: I1122 12:24:13.612329 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3b447da-3a96-4fb6-a7e7-13c32dbb793c" containerName="sg-core" Nov 22 12:24:13 crc kubenswrapper[4772]: I1122 12:24:13.612697 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="759669a7-3328-4d4f-b4bb-661824149475" containerName="dnsmasq-dns" Nov 22 12:24:13 crc kubenswrapper[4772]: I1122 12:24:13.612828 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3b447da-3a96-4fb6-a7e7-13c32dbb793c" containerName="proxy-httpd" Nov 22 12:24:13 crc kubenswrapper[4772]: I1122 12:24:13.613129 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3b447da-3a96-4fb6-a7e7-13c32dbb793c" containerName="ceilometer-central-agent" Nov 22 12:24:13 crc kubenswrapper[4772]: I1122 12:24:13.616458 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3b447da-3a96-4fb6-a7e7-13c32dbb793c" containerName="sg-core" Nov 22 12:24:13 crc kubenswrapper[4772]: I1122 12:24:13.616567 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3b447da-3a96-4fb6-a7e7-13c32dbb793c" containerName="ceilometer-notification-agent" Nov 22 12:24:13 crc kubenswrapper[4772]: I1122 12:24:13.620641 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 12:24:13 crc kubenswrapper[4772]: I1122 12:24:13.623381 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 22 12:24:13 crc kubenswrapper[4772]: I1122 12:24:13.625942 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 12:24:13 crc kubenswrapper[4772]: I1122 12:24:13.626880 4772 scope.go:117] "RemoveContainer" containerID="3d9ebbd3ff5b88c4be8c3b0661bc35b1062f1adc7237698fa1195deebec86b26" Nov 22 12:24:13 crc kubenswrapper[4772]: I1122 12:24:13.640369 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 22 12:24:13 crc kubenswrapper[4772]: I1122 12:24:13.701315 4772 scope.go:117] "RemoveContainer" containerID="4e2cec3d6fcd133eeac3c5d6baf3590752fd530fe48017fe46e44f7fc46c2982" Nov 22 12:24:13 crc kubenswrapper[4772]: I1122 12:24:13.717959 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2fc535b2-2f64-4971-b836-12565953a22f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2fc535b2-2f64-4971-b836-12565953a22f\") " pod="openstack/ceilometer-0" Nov 22 12:24:13 crc kubenswrapper[4772]: I1122 12:24:13.718315 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fc535b2-2f64-4971-b836-12565953a22f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2fc535b2-2f64-4971-b836-12565953a22f\") " pod="openstack/ceilometer-0" Nov 22 12:24:13 crc kubenswrapper[4772]: I1122 12:24:13.718477 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fc535b2-2f64-4971-b836-12565953a22f-scripts\") pod \"ceilometer-0\" (UID: \"2fc535b2-2f64-4971-b836-12565953a22f\") " pod="openstack/ceilometer-0" Nov 22 12:24:13 crc kubenswrapper[4772]: I1122 12:24:13.718542 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2fc535b2-2f64-4971-b836-12565953a22f-run-httpd\") pod \"ceilometer-0\" (UID: \"2fc535b2-2f64-4971-b836-12565953a22f\") " pod="openstack/ceilometer-0" Nov 22 12:24:13 crc kubenswrapper[4772]: I1122 12:24:13.718649 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fc535b2-2f64-4971-b836-12565953a22f-config-data\") pod \"ceilometer-0\" (UID: \"2fc535b2-2f64-4971-b836-12565953a22f\") " pod="openstack/ceilometer-0" Nov 22 12:24:13 crc kubenswrapper[4772]: I1122 12:24:13.719696 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2fc535b2-2f64-4971-b836-12565953a22f-log-httpd\") pod \"ceilometer-0\" (UID: \"2fc535b2-2f64-4971-b836-12565953a22f\") " pod="openstack/ceilometer-0" Nov 22 12:24:13 crc kubenswrapper[4772]: I1122 12:24:13.720071 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtl88\" (UniqueName: \"kubernetes.io/projected/2fc535b2-2f64-4971-b836-12565953a22f-kube-api-access-dtl88\") pod \"ceilometer-0\" (UID: \"2fc535b2-2f64-4971-b836-12565953a22f\") " pod="openstack/ceilometer-0" Nov 22 12:24:13 crc kubenswrapper[4772]: I1122 
12:24:13.759029 4772 scope.go:117] "RemoveContainer" containerID="b5dfc1a2fb1e6b2d9115753d45ed22fd21439d02f0a4438be6cce74033eb23ab" Nov 22 12:24:13 crc kubenswrapper[4772]: E1122 12:24:13.759543 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5dfc1a2fb1e6b2d9115753d45ed22fd21439d02f0a4438be6cce74033eb23ab\": container with ID starting with b5dfc1a2fb1e6b2d9115753d45ed22fd21439d02f0a4438be6cce74033eb23ab not found: ID does not exist" containerID="b5dfc1a2fb1e6b2d9115753d45ed22fd21439d02f0a4438be6cce74033eb23ab" Nov 22 12:24:13 crc kubenswrapper[4772]: I1122 12:24:13.759586 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5dfc1a2fb1e6b2d9115753d45ed22fd21439d02f0a4438be6cce74033eb23ab"} err="failed to get container status \"b5dfc1a2fb1e6b2d9115753d45ed22fd21439d02f0a4438be6cce74033eb23ab\": rpc error: code = NotFound desc = could not find container \"b5dfc1a2fb1e6b2d9115753d45ed22fd21439d02f0a4438be6cce74033eb23ab\": container with ID starting with b5dfc1a2fb1e6b2d9115753d45ed22fd21439d02f0a4438be6cce74033eb23ab not found: ID does not exist" Nov 22 12:24:13 crc kubenswrapper[4772]: I1122 12:24:13.759613 4772 scope.go:117] "RemoveContainer" containerID="15ea414b6053bb97fae6a1abd4c900fa48dfac99b44c59844f75b8093d821f0d" Nov 22 12:24:13 crc kubenswrapper[4772]: E1122 12:24:13.760103 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15ea414b6053bb97fae6a1abd4c900fa48dfac99b44c59844f75b8093d821f0d\": container with ID starting with 15ea414b6053bb97fae6a1abd4c900fa48dfac99b44c59844f75b8093d821f0d not found: ID does not exist" containerID="15ea414b6053bb97fae6a1abd4c900fa48dfac99b44c59844f75b8093d821f0d" Nov 22 12:24:13 crc kubenswrapper[4772]: I1122 12:24:13.760131 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15ea414b6053bb97fae6a1abd4c900fa48dfac99b44c59844f75b8093d821f0d"} err="failed to get container status \"15ea414b6053bb97fae6a1abd4c900fa48dfac99b44c59844f75b8093d821f0d\": rpc error: code = NotFound desc = could not find container \"15ea414b6053bb97fae6a1abd4c900fa48dfac99b44c59844f75b8093d821f0d\": container with ID starting with 15ea414b6053bb97fae6a1abd4c900fa48dfac99b44c59844f75b8093d821f0d not found: ID does not exist" Nov 22 12:24:13 crc kubenswrapper[4772]: I1122 12:24:13.760146 4772 scope.go:117] "RemoveContainer" containerID="3d9ebbd3ff5b88c4be8c3b0661bc35b1062f1adc7237698fa1195deebec86b26" Nov 22 12:24:13 crc kubenswrapper[4772]: E1122 12:24:13.760463 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d9ebbd3ff5b88c4be8c3b0661bc35b1062f1adc7237698fa1195deebec86b26\": container with ID starting with 3d9ebbd3ff5b88c4be8c3b0661bc35b1062f1adc7237698fa1195deebec86b26 not found: ID does not exist" containerID="3d9ebbd3ff5b88c4be8c3b0661bc35b1062f1adc7237698fa1195deebec86b26" Nov 22 12:24:13 crc kubenswrapper[4772]: I1122 12:24:13.760489 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d9ebbd3ff5b88c4be8c3b0661bc35b1062f1adc7237698fa1195deebec86b26"} err="failed to get container status \"3d9ebbd3ff5b88c4be8c3b0661bc35b1062f1adc7237698fa1195deebec86b26\": rpc error: code = NotFound desc = could not find container \"3d9ebbd3ff5b88c4be8c3b0661bc35b1062f1adc7237698fa1195deebec86b26\": container with ID 
starting with 3d9ebbd3ff5b88c4be8c3b0661bc35b1062f1adc7237698fa1195deebec86b26 not found: ID does not exist" Nov 22 12:24:13 crc kubenswrapper[4772]: I1122 12:24:13.760502 4772 scope.go:117] "RemoveContainer" containerID="4e2cec3d6fcd133eeac3c5d6baf3590752fd530fe48017fe46e44f7fc46c2982" Nov 22 12:24:13 crc kubenswrapper[4772]: E1122 12:24:13.760758 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e2cec3d6fcd133eeac3c5d6baf3590752fd530fe48017fe46e44f7fc46c2982\": container with ID starting with 4e2cec3d6fcd133eeac3c5d6baf3590752fd530fe48017fe46e44f7fc46c2982 not found: ID does not exist" containerID="4e2cec3d6fcd133eeac3c5d6baf3590752fd530fe48017fe46e44f7fc46c2982" Nov 22 12:24:13 crc kubenswrapper[4772]: I1122 12:24:13.760808 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e2cec3d6fcd133eeac3c5d6baf3590752fd530fe48017fe46e44f7fc46c2982"} err="failed to get container status \"4e2cec3d6fcd133eeac3c5d6baf3590752fd530fe48017fe46e44f7fc46c2982\": rpc error: code = NotFound desc = could not find container \"4e2cec3d6fcd133eeac3c5d6baf3590752fd530fe48017fe46e44f7fc46c2982\": container with ID starting with 4e2cec3d6fcd133eeac3c5d6baf3590752fd530fe48017fe46e44f7fc46c2982 not found: ID does not exist" Nov 22 12:24:13 crc kubenswrapper[4772]: I1122 12:24:13.823028 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2fc535b2-2f64-4971-b836-12565953a22f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2fc535b2-2f64-4971-b836-12565953a22f\") " pod="openstack/ceilometer-0" Nov 22 12:24:13 crc kubenswrapper[4772]: I1122 12:24:13.823168 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fc535b2-2f64-4971-b836-12565953a22f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2fc535b2-2f64-4971-b836-12565953a22f\") " pod="openstack/ceilometer-0" Nov 22 12:24:13 crc kubenswrapper[4772]: I1122 12:24:13.823214 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fc535b2-2f64-4971-b836-12565953a22f-scripts\") pod \"ceilometer-0\" (UID: \"2fc535b2-2f64-4971-b836-12565953a22f\") " pod="openstack/ceilometer-0" Nov 22 12:24:13 crc kubenswrapper[4772]: I1122 12:24:13.823240 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2fc535b2-2f64-4971-b836-12565953a22f-run-httpd\") pod \"ceilometer-0\" (UID: \"2fc535b2-2f64-4971-b836-12565953a22f\") " pod="openstack/ceilometer-0" Nov 22 12:24:13 crc kubenswrapper[4772]: I1122 12:24:13.823278 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fc535b2-2f64-4971-b836-12565953a22f-config-data\") pod \"ceilometer-0\" (UID: \"2fc535b2-2f64-4971-b836-12565953a22f\") " pod="openstack/ceilometer-0" Nov 22 12:24:13 crc kubenswrapper[4772]: I1122 12:24:13.823368 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2fc535b2-2f64-4971-b836-12565953a22f-log-httpd\") pod \"ceilometer-0\" (UID: \"2fc535b2-2f64-4971-b836-12565953a22f\") " pod="openstack/ceilometer-0" Nov 22 12:24:13 crc kubenswrapper[4772]: I1122 12:24:13.823433 4772 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-dtl88\" (UniqueName: \"kubernetes.io/projected/2fc535b2-2f64-4971-b836-12565953a22f-kube-api-access-dtl88\") pod \"ceilometer-0\" (UID: \"2fc535b2-2f64-4971-b836-12565953a22f\") " pod="openstack/ceilometer-0" Nov 22 12:24:13 crc kubenswrapper[4772]: I1122 12:24:13.824134 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2fc535b2-2f64-4971-b836-12565953a22f-run-httpd\") pod \"ceilometer-0\" (UID: \"2fc535b2-2f64-4971-b836-12565953a22f\") " pod="openstack/ceilometer-0" Nov 22 12:24:13 crc kubenswrapper[4772]: I1122 12:24:13.824439 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2fc535b2-2f64-4971-b836-12565953a22f-log-httpd\") pod \"ceilometer-0\" (UID: \"2fc535b2-2f64-4971-b836-12565953a22f\") " pod="openstack/ceilometer-0" Nov 22 12:24:13 crc kubenswrapper[4772]: I1122 12:24:13.829626 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fc535b2-2f64-4971-b836-12565953a22f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2fc535b2-2f64-4971-b836-12565953a22f\") " pod="openstack/ceilometer-0" Nov 22 12:24:13 crc kubenswrapper[4772]: I1122 12:24:13.831332 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2fc535b2-2f64-4971-b836-12565953a22f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2fc535b2-2f64-4971-b836-12565953a22f\") " pod="openstack/ceilometer-0" Nov 22 12:24:13 crc kubenswrapper[4772]: I1122 12:24:13.831633 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fc535b2-2f64-4971-b836-12565953a22f-config-data\") pod \"ceilometer-0\" (UID: \"2fc535b2-2f64-4971-b836-12565953a22f\") " pod="openstack/ceilometer-0" Nov 22 12:24:13 crc kubenswrapper[4772]: I1122 12:24:13.842014 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fc535b2-2f64-4971-b836-12565953a22f-scripts\") pod \"ceilometer-0\" (UID: \"2fc535b2-2f64-4971-b836-12565953a22f\") " pod="openstack/ceilometer-0" Nov 22 12:24:13 crc kubenswrapper[4772]: I1122 12:24:13.853734 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtl88\" (UniqueName: \"kubernetes.io/projected/2fc535b2-2f64-4971-b836-12565953a22f-kube-api-access-dtl88\") pod \"ceilometer-0\" (UID: \"2fc535b2-2f64-4971-b836-12565953a22f\") " pod="openstack/ceilometer-0" Nov 22 12:24:14 crc kubenswrapper[4772]: I1122 12:24:14.009261 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 12:24:14 crc kubenswrapper[4772]: I1122 12:24:14.506409 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 12:24:14 crc kubenswrapper[4772]: W1122 12:24:14.508834 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2fc535b2_2f64_4971_b836_12565953a22f.slice/crio-4910676ca53cc568930b661b1b13265555a786cc987358a67016654d9828bb76 WatchSource:0}: Error finding container 4910676ca53cc568930b661b1b13265555a786cc987358a67016654d9828bb76: Status 404 returned error can't find the container with id 4910676ca53cc568930b661b1b13265555a786cc987358a67016654d9828bb76 Nov 22 12:24:14 crc kubenswrapper[4772]: I1122 12:24:14.563775 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2fc535b2-2f64-4971-b836-12565953a22f","Type":"ContainerStarted","Data":"4910676ca53cc568930b661b1b13265555a786cc987358a67016654d9828bb76"} Nov 22 12:24:15 crc kubenswrapper[4772]: I1122 12:24:15.017363 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Nov 22 12:24:15 crc kubenswrapper[4772]: I1122 12:24:15.430674 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3b447da-3a96-4fb6-a7e7-13c32dbb793c" path="/var/lib/kubelet/pods/e3b447da-3a96-4fb6-a7e7-13c32dbb793c/volumes" Nov 22 12:24:15 crc kubenswrapper[4772]: I1122 12:24:15.576259 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2fc535b2-2f64-4971-b836-12565953a22f","Type":"ContainerStarted","Data":"67d6f12b2262dd76886924e810427ce8e6f5830c2e883b5afb86fcd62e3803d0"} Nov 22 12:24:16 crc kubenswrapper[4772]: I1122 12:24:16.587145 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2fc535b2-2f64-4971-b836-12565953a22f","Type":"ContainerStarted","Data":"9fa20547b395a44573d5aba0a049f4efe5633bf44c2bbf98a96b1ea37ce77694"} Nov 22 12:24:16 crc kubenswrapper[4772]: I1122 12:24:16.719777 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Nov 22 12:24:16 crc kubenswrapper[4772]: I1122 12:24:16.852519 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Nov 22 12:24:16 crc kubenswrapper[4772]: I1122 12:24:16.986396 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/manila-api-0" Nov 22 12:24:17 crc kubenswrapper[4772]: I1122 12:24:17.599354 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2fc535b2-2f64-4971-b836-12565953a22f","Type":"ContainerStarted","Data":"6e1b3e1e5cbd458ab916b10867b72cd912e867c8d1f072edb0bb1a05d2088c46"} Nov 22 12:24:19 crc kubenswrapper[4772]: I1122 12:24:19.621669 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2fc535b2-2f64-4971-b836-12565953a22f","Type":"ContainerStarted","Data":"a2ca3b975e66768212a2f70201c56e3f6c4361843c17d2a7d95e9233359205a5"} Nov 22 12:24:19 crc kubenswrapper[4772]: I1122 12:24:19.624777 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 22 12:24:19 crc kubenswrapper[4772]: I1122 12:24:19.657790 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.747761405 
podStartE2EDuration="6.657750225s" podCreationTimestamp="2025-11-22 12:24:13 +0000 UTC" firstStartedPulling="2025-11-22 12:24:14.512970087 +0000 UTC m=+6374.752414581" lastFinishedPulling="2025-11-22 12:24:18.422958907 +0000 UTC m=+6378.662403401" observedRunningTime="2025-11-22 12:24:19.647695787 +0000 UTC m=+6379.887140281" watchObservedRunningTime="2025-11-22 12:24:19.657750225 +0000 UTC m=+6379.897194719" Nov 22 12:24:25 crc kubenswrapper[4772]: I1122 12:24:25.013974 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-45jk8"] Nov 22 12:24:25 crc kubenswrapper[4772]: I1122 12:24:25.019360 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-45jk8" Nov 22 12:24:25 crc kubenswrapper[4772]: I1122 12:24:25.029482 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-45jk8"] Nov 22 12:24:25 crc kubenswrapper[4772]: I1122 12:24:25.165858 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7da9605-b12a-49e2-b34f-b52df0a8e81c-utilities\") pod \"redhat-marketplace-45jk8\" (UID: \"a7da9605-b12a-49e2-b34f-b52df0a8e81c\") " pod="openshift-marketplace/redhat-marketplace-45jk8" Nov 22 12:24:25 crc kubenswrapper[4772]: I1122 12:24:25.166200 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b5ph\" (UniqueName: \"kubernetes.io/projected/a7da9605-b12a-49e2-b34f-b52df0a8e81c-kube-api-access-5b5ph\") pod \"redhat-marketplace-45jk8\" (UID: \"a7da9605-b12a-49e2-b34f-b52df0a8e81c\") " pod="openshift-marketplace/redhat-marketplace-45jk8" Nov 22 12:24:25 crc kubenswrapper[4772]: I1122 12:24:25.166556 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7da9605-b12a-49e2-b34f-b52df0a8e81c-catalog-content\") pod \"redhat-marketplace-45jk8\" (UID: \"a7da9605-b12a-49e2-b34f-b52df0a8e81c\") " pod="openshift-marketplace/redhat-marketplace-45jk8" Nov 22 12:24:25 crc kubenswrapper[4772]: I1122 12:24:25.268466 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5b5ph\" (UniqueName: \"kubernetes.io/projected/a7da9605-b12a-49e2-b34f-b52df0a8e81c-kube-api-access-5b5ph\") pod \"redhat-marketplace-45jk8\" (UID: \"a7da9605-b12a-49e2-b34f-b52df0a8e81c\") " pod="openshift-marketplace/redhat-marketplace-45jk8" Nov 22 12:24:25 crc kubenswrapper[4772]: I1122 12:24:25.268637 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7da9605-b12a-49e2-b34f-b52df0a8e81c-catalog-content\") pod \"redhat-marketplace-45jk8\" (UID: \"a7da9605-b12a-49e2-b34f-b52df0a8e81c\") " pod="openshift-marketplace/redhat-marketplace-45jk8" Nov 22 12:24:25 crc kubenswrapper[4772]: I1122 12:24:25.268737 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7da9605-b12a-49e2-b34f-b52df0a8e81c-utilities\") pod \"redhat-marketplace-45jk8\" (UID: \"a7da9605-b12a-49e2-b34f-b52df0a8e81c\") " pod="openshift-marketplace/redhat-marketplace-45jk8" Nov 22 12:24:25 crc kubenswrapper[4772]: I1122 12:24:25.269306 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/a7da9605-b12a-49e2-b34f-b52df0a8e81c-utilities\") pod \"redhat-marketplace-45jk8\" (UID: \"a7da9605-b12a-49e2-b34f-b52df0a8e81c\") " pod="openshift-marketplace/redhat-marketplace-45jk8" Nov 22 12:24:25 crc kubenswrapper[4772]: I1122 12:24:25.269337 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7da9605-b12a-49e2-b34f-b52df0a8e81c-catalog-content\") pod \"redhat-marketplace-45jk8\" (UID: \"a7da9605-b12a-49e2-b34f-b52df0a8e81c\") " pod="openshift-marketplace/redhat-marketplace-45jk8" Nov 22 12:24:25 crc kubenswrapper[4772]: I1122 12:24:25.293632 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5b5ph\" (UniqueName: \"kubernetes.io/projected/a7da9605-b12a-49e2-b34f-b52df0a8e81c-kube-api-access-5b5ph\") pod \"redhat-marketplace-45jk8\" (UID: \"a7da9605-b12a-49e2-b34f-b52df0a8e81c\") " pod="openshift-marketplace/redhat-marketplace-45jk8" Nov 22 12:24:25 crc kubenswrapper[4772]: I1122 12:24:25.348339 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-45jk8" Nov 22 12:24:25 crc kubenswrapper[4772]: I1122 12:24:25.879158 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-45jk8"] Nov 22 12:24:26 crc kubenswrapper[4772]: I1122 12:24:26.699147 4772 generic.go:334] "Generic (PLEG): container finished" podID="a7da9605-b12a-49e2-b34f-b52df0a8e81c" containerID="9d9ab7305e7c07a2bef01db8030c405611df23fa161d1c2f78bb02aad666f480" exitCode=0 Nov 22 12:24:26 crc kubenswrapper[4772]: I1122 12:24:26.699600 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-45jk8" event={"ID":"a7da9605-b12a-49e2-b34f-b52df0a8e81c","Type":"ContainerDied","Data":"9d9ab7305e7c07a2bef01db8030c405611df23fa161d1c2f78bb02aad666f480"} Nov 22 12:24:26 crc kubenswrapper[4772]: I1122 12:24:26.699632 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-45jk8" event={"ID":"a7da9605-b12a-49e2-b34f-b52df0a8e81c","Type":"ContainerStarted","Data":"d89129ffc1875cc276dc2c751d03a7f6a5745b8791a091f2626d21519d845ac9"} Nov 22 12:24:27 crc kubenswrapper[4772]: I1122 12:24:27.719743 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-45jk8" event={"ID":"a7da9605-b12a-49e2-b34f-b52df0a8e81c","Type":"ContainerStarted","Data":"61b3e45a2527955be53894ffc86463702eca94377f907c46794aadab714ca282"} Nov 22 12:24:29 crc kubenswrapper[4772]: I1122 12:24:29.743978 4772 generic.go:334] "Generic (PLEG): container finished" podID="a7da9605-b12a-49e2-b34f-b52df0a8e81c" containerID="61b3e45a2527955be53894ffc86463702eca94377f907c46794aadab714ca282" exitCode=0 Nov 22 12:24:29 crc kubenswrapper[4772]: I1122 12:24:29.744095 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-45jk8" event={"ID":"a7da9605-b12a-49e2-b34f-b52df0a8e81c","Type":"ContainerDied","Data":"61b3e45a2527955be53894ffc86463702eca94377f907c46794aadab714ca282"} Nov 22 12:24:30 crc kubenswrapper[4772]: I1122 12:24:30.765843 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-45jk8" event={"ID":"a7da9605-b12a-49e2-b34f-b52df0a8e81c","Type":"ContainerStarted","Data":"1c07bd434d6d259099841835f1f795d9874155d4d29680469d66dae3e5707619"} Nov 22 12:24:30 crc kubenswrapper[4772]: I1122 12:24:30.809765 4772 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-45jk8" podStartSLOduration=3.344825514 podStartE2EDuration="6.809734108s" podCreationTimestamp="2025-11-22 12:24:24 +0000 UTC" firstStartedPulling="2025-11-22 12:24:26.702735176 +0000 UTC m=+6386.942179670" lastFinishedPulling="2025-11-22 12:24:30.16764378 +0000 UTC m=+6390.407088264" observedRunningTime="2025-11-22 12:24:30.793032216 +0000 UTC m=+6391.032476720" watchObservedRunningTime="2025-11-22 12:24:30.809734108 +0000 UTC m=+6391.049178632" Nov 22 12:24:35 crc kubenswrapper[4772]: I1122 12:24:35.349719 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-45jk8" Nov 22 12:24:35 crc kubenswrapper[4772]: I1122 12:24:35.350223 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-45jk8" Nov 22 12:24:35 crc kubenswrapper[4772]: I1122 12:24:35.403410 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-45jk8" Nov 22 12:24:35 crc kubenswrapper[4772]: I1122 12:24:35.903127 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-45jk8" Nov 22 12:24:35 crc kubenswrapper[4772]: I1122 12:24:35.967255 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-45jk8"] Nov 22 12:24:37 crc kubenswrapper[4772]: I1122 12:24:37.849081 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-45jk8" podUID="a7da9605-b12a-49e2-b34f-b52df0a8e81c" containerName="registry-server" containerID="cri-o://1c07bd434d6d259099841835f1f795d9874155d4d29680469d66dae3e5707619" gracePeriod=2 Nov 22 12:24:38 crc kubenswrapper[4772]: I1122 12:24:38.420921 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-45jk8" Nov 22 12:24:38 crc kubenswrapper[4772]: I1122 12:24:38.537710 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7da9605-b12a-49e2-b34f-b52df0a8e81c-utilities\") pod \"a7da9605-b12a-49e2-b34f-b52df0a8e81c\" (UID: \"a7da9605-b12a-49e2-b34f-b52df0a8e81c\") " Nov 22 12:24:38 crc kubenswrapper[4772]: I1122 12:24:38.537974 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5b5ph\" (UniqueName: \"kubernetes.io/projected/a7da9605-b12a-49e2-b34f-b52df0a8e81c-kube-api-access-5b5ph\") pod \"a7da9605-b12a-49e2-b34f-b52df0a8e81c\" (UID: \"a7da9605-b12a-49e2-b34f-b52df0a8e81c\") " Nov 22 12:24:38 crc kubenswrapper[4772]: I1122 12:24:38.538228 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7da9605-b12a-49e2-b34f-b52df0a8e81c-catalog-content\") pod \"a7da9605-b12a-49e2-b34f-b52df0a8e81c\" (UID: \"a7da9605-b12a-49e2-b34f-b52df0a8e81c\") " Nov 22 12:24:38 crc kubenswrapper[4772]: I1122 12:24:38.538612 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7da9605-b12a-49e2-b34f-b52df0a8e81c-utilities" (OuterVolumeSpecName: "utilities") pod "a7da9605-b12a-49e2-b34f-b52df0a8e81c" (UID: "a7da9605-b12a-49e2-b34f-b52df0a8e81c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 12:24:38 crc kubenswrapper[4772]: I1122 12:24:38.540229 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7da9605-b12a-49e2-b34f-b52df0a8e81c-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 12:24:38 crc kubenswrapper[4772]: I1122 12:24:38.571533 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7da9605-b12a-49e2-b34f-b52df0a8e81c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a7da9605-b12a-49e2-b34f-b52df0a8e81c" (UID: "a7da9605-b12a-49e2-b34f-b52df0a8e81c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 12:24:38 crc kubenswrapper[4772]: I1122 12:24:38.642624 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7da9605-b12a-49e2-b34f-b52df0a8e81c-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 12:24:38 crc kubenswrapper[4772]: I1122 12:24:38.772747 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7da9605-b12a-49e2-b34f-b52df0a8e81c-kube-api-access-5b5ph" (OuterVolumeSpecName: "kube-api-access-5b5ph") pod "a7da9605-b12a-49e2-b34f-b52df0a8e81c" (UID: "a7da9605-b12a-49e2-b34f-b52df0a8e81c"). InnerVolumeSpecName "kube-api-access-5b5ph". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:24:38 crc kubenswrapper[4772]: I1122 12:24:38.868585 4772 generic.go:334] "Generic (PLEG): container finished" podID="a7da9605-b12a-49e2-b34f-b52df0a8e81c" containerID="1c07bd434d6d259099841835f1f795d9874155d4d29680469d66dae3e5707619" exitCode=0 Nov 22 12:24:38 crc kubenswrapper[4772]: I1122 12:24:38.868696 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-45jk8" Nov 22 12:24:38 crc kubenswrapper[4772]: I1122 12:24:38.868673 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-45jk8" event={"ID":"a7da9605-b12a-49e2-b34f-b52df0a8e81c","Type":"ContainerDied","Data":"1c07bd434d6d259099841835f1f795d9874155d4d29680469d66dae3e5707619"} Nov 22 12:24:38 crc kubenswrapper[4772]: I1122 12:24:38.870858 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-45jk8" event={"ID":"a7da9605-b12a-49e2-b34f-b52df0a8e81c","Type":"ContainerDied","Data":"d89129ffc1875cc276dc2c751d03a7f6a5745b8791a091f2626d21519d845ac9"} Nov 22 12:24:38 crc kubenswrapper[4772]: I1122 12:24:38.870896 4772 scope.go:117] "RemoveContainer" containerID="1c07bd434d6d259099841835f1f795d9874155d4d29680469d66dae3e5707619" Nov 22 12:24:38 crc kubenswrapper[4772]: I1122 12:24:38.873965 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5b5ph\" (UniqueName: \"kubernetes.io/projected/a7da9605-b12a-49e2-b34f-b52df0a8e81c-kube-api-access-5b5ph\") on node \"crc\" DevicePath \"\"" Nov 22 12:24:38 crc kubenswrapper[4772]: I1122 12:24:38.891397 4772 scope.go:117] "RemoveContainer" containerID="61b3e45a2527955be53894ffc86463702eca94377f907c46794aadab714ca282" Nov 22 12:24:38 crc kubenswrapper[4772]: I1122 12:24:38.917147 4772 scope.go:117] "RemoveContainer" containerID="9d9ab7305e7c07a2bef01db8030c405611df23fa161d1c2f78bb02aad666f480" Nov 22 12:24:38 crc kubenswrapper[4772]: I1122 12:24:38.921903 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-45jk8"] Nov 22 12:24:38 crc kubenswrapper[4772]: I1122 12:24:38.932655 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-45jk8"] Nov 22 12:24:38 crc kubenswrapper[4772]: I1122 12:24:38.981884 4772 scope.go:117] "RemoveContainer" containerID="1c07bd434d6d259099841835f1f795d9874155d4d29680469d66dae3e5707619" Nov 22 12:24:38 crc kubenswrapper[4772]: E1122 12:24:38.987915 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c07bd434d6d259099841835f1f795d9874155d4d29680469d66dae3e5707619\": container with ID starting with 1c07bd434d6d259099841835f1f795d9874155d4d29680469d66dae3e5707619 not found: ID does not exist" containerID="1c07bd434d6d259099841835f1f795d9874155d4d29680469d66dae3e5707619" Nov 22 12:24:38 crc kubenswrapper[4772]: I1122 12:24:38.987979 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c07bd434d6d259099841835f1f795d9874155d4d29680469d66dae3e5707619"} err="failed to get container status \"1c07bd434d6d259099841835f1f795d9874155d4d29680469d66dae3e5707619\": rpc error: code = NotFound desc = could not find container \"1c07bd434d6d259099841835f1f795d9874155d4d29680469d66dae3e5707619\": container with ID starting with 1c07bd434d6d259099841835f1f795d9874155d4d29680469d66dae3e5707619 not found: ID does not exist" Nov 22 12:24:38 crc kubenswrapper[4772]: I1122 12:24:38.988015 4772 scope.go:117] "RemoveContainer" containerID="61b3e45a2527955be53894ffc86463702eca94377f907c46794aadab714ca282" Nov 22 12:24:38 crc kubenswrapper[4772]: E1122 12:24:38.991748 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"61b3e45a2527955be53894ffc86463702eca94377f907c46794aadab714ca282\": container with ID starting with 61b3e45a2527955be53894ffc86463702eca94377f907c46794aadab714ca282 not found: ID does not exist" containerID="61b3e45a2527955be53894ffc86463702eca94377f907c46794aadab714ca282" Nov 22 12:24:38 crc kubenswrapper[4772]: I1122 12:24:38.991800 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61b3e45a2527955be53894ffc86463702eca94377f907c46794aadab714ca282"} err="failed to get container status \"61b3e45a2527955be53894ffc86463702eca94377f907c46794aadab714ca282\": rpc error: code = NotFound desc = could not find container \"61b3e45a2527955be53894ffc86463702eca94377f907c46794aadab714ca282\": container with ID starting with 61b3e45a2527955be53894ffc86463702eca94377f907c46794aadab714ca282 not found: ID does not exist" Nov 22 12:24:38 crc kubenswrapper[4772]: I1122 12:24:38.991834 4772 scope.go:117] "RemoveContainer" containerID="9d9ab7305e7c07a2bef01db8030c405611df23fa161d1c2f78bb02aad666f480" Nov 22 12:24:38 crc kubenswrapper[4772]: E1122 12:24:38.992600 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d9ab7305e7c07a2bef01db8030c405611df23fa161d1c2f78bb02aad666f480\": container with ID starting with 9d9ab7305e7c07a2bef01db8030c405611df23fa161d1c2f78bb02aad666f480 not found: ID does not exist" containerID="9d9ab7305e7c07a2bef01db8030c405611df23fa161d1c2f78bb02aad666f480" Nov 22 12:24:38 crc kubenswrapper[4772]: I1122 12:24:38.992641 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d9ab7305e7c07a2bef01db8030c405611df23fa161d1c2f78bb02aad666f480"} err="failed to get container status \"9d9ab7305e7c07a2bef01db8030c405611df23fa161d1c2f78bb02aad666f480\": rpc error: code = NotFound desc = could not find container \"9d9ab7305e7c07a2bef01db8030c405611df23fa161d1c2f78bb02aad666f480\": container with ID starting with 9d9ab7305e7c07a2bef01db8030c405611df23fa161d1c2f78bb02aad666f480 not found: ID does not exist" Nov 22 12:24:39 crc kubenswrapper[4772]: I1122 12:24:39.437915 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7da9605-b12a-49e2-b34f-b52df0a8e81c" path="/var/lib/kubelet/pods/a7da9605-b12a-49e2-b34f-b52df0a8e81c/volumes" Nov 22 12:24:44 crc kubenswrapper[4772]: I1122 12:24:44.023915 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 22 12:24:54 crc kubenswrapper[4772]: I1122 12:24:54.320814 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-g49sv"] Nov 22 12:24:54 crc kubenswrapper[4772]: E1122 12:24:54.322980 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7da9605-b12a-49e2-b34f-b52df0a8e81c" containerName="registry-server" Nov 22 12:24:54 crc kubenswrapper[4772]: I1122 12:24:54.323015 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7da9605-b12a-49e2-b34f-b52df0a8e81c" containerName="registry-server" Nov 22 12:24:54 crc kubenswrapper[4772]: E1122 12:24:54.323098 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7da9605-b12a-49e2-b34f-b52df0a8e81c" containerName="extract-content" Nov 22 12:24:54 crc kubenswrapper[4772]: I1122 12:24:54.323117 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7da9605-b12a-49e2-b34f-b52df0a8e81c" containerName="extract-content" Nov 22 12:24:54 crc kubenswrapper[4772]: E1122 12:24:54.323201 4772 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7da9605-b12a-49e2-b34f-b52df0a8e81c" containerName="extract-utilities" Nov 22 12:24:54 crc kubenswrapper[4772]: I1122 12:24:54.323220 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7da9605-b12a-49e2-b34f-b52df0a8e81c" containerName="extract-utilities" Nov 22 12:24:54 crc kubenswrapper[4772]: I1122 12:24:54.323760 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7da9605-b12a-49e2-b34f-b52df0a8e81c" containerName="registry-server" Nov 22 12:24:54 crc kubenswrapper[4772]: I1122 12:24:54.330162 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g49sv" Nov 22 12:24:54 crc kubenswrapper[4772]: I1122 12:24:54.338624 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g49sv"] Nov 22 12:24:54 crc kubenswrapper[4772]: I1122 12:24:54.463473 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r7t2\" (UniqueName: \"kubernetes.io/projected/654435ef-0185-49d1-8aed-ce2af85916fb-kube-api-access-9r7t2\") pod \"certified-operators-g49sv\" (UID: \"654435ef-0185-49d1-8aed-ce2af85916fb\") " pod="openshift-marketplace/certified-operators-g49sv" Nov 22 12:24:54 crc kubenswrapper[4772]: I1122 12:24:54.463565 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/654435ef-0185-49d1-8aed-ce2af85916fb-utilities\") pod \"certified-operators-g49sv\" (UID: \"654435ef-0185-49d1-8aed-ce2af85916fb\") " pod="openshift-marketplace/certified-operators-g49sv" Nov 22 12:24:54 crc kubenswrapper[4772]: I1122 12:24:54.464029 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/654435ef-0185-49d1-8aed-ce2af85916fb-catalog-content\") pod \"certified-operators-g49sv\" (UID: \"654435ef-0185-49d1-8aed-ce2af85916fb\") " pod="openshift-marketplace/certified-operators-g49sv" Nov 22 12:24:54 crc kubenswrapper[4772]: I1122 12:24:54.566603 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9r7t2\" (UniqueName: \"kubernetes.io/projected/654435ef-0185-49d1-8aed-ce2af85916fb-kube-api-access-9r7t2\") pod \"certified-operators-g49sv\" (UID: \"654435ef-0185-49d1-8aed-ce2af85916fb\") " pod="openshift-marketplace/certified-operators-g49sv" Nov 22 12:24:54 crc kubenswrapper[4772]: I1122 12:24:54.566674 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/654435ef-0185-49d1-8aed-ce2af85916fb-utilities\") pod \"certified-operators-g49sv\" (UID: \"654435ef-0185-49d1-8aed-ce2af85916fb\") " pod="openshift-marketplace/certified-operators-g49sv" Nov 22 12:24:54 crc kubenswrapper[4772]: I1122 12:24:54.566800 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/654435ef-0185-49d1-8aed-ce2af85916fb-catalog-content\") pod \"certified-operators-g49sv\" (UID: \"654435ef-0185-49d1-8aed-ce2af85916fb\") " pod="openshift-marketplace/certified-operators-g49sv" Nov 22 12:24:54 crc kubenswrapper[4772]: I1122 12:24:54.567373 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/654435ef-0185-49d1-8aed-ce2af85916fb-utilities\") pod \"certified-operators-g49sv\" (UID: \"654435ef-0185-49d1-8aed-ce2af85916fb\") " pod="openshift-marketplace/certified-operators-g49sv" Nov 22 12:24:54 crc kubenswrapper[4772]: I1122 12:24:54.567393 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/654435ef-0185-49d1-8aed-ce2af85916fb-catalog-content\") pod \"certified-operators-g49sv\" (UID: \"654435ef-0185-49d1-8aed-ce2af85916fb\") " pod="openshift-marketplace/certified-operators-g49sv" Nov 22 12:24:54 crc kubenswrapper[4772]: I1122 12:24:54.599348 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9r7t2\" (UniqueName: \"kubernetes.io/projected/654435ef-0185-49d1-8aed-ce2af85916fb-kube-api-access-9r7t2\") pod \"certified-operators-g49sv\" (UID: \"654435ef-0185-49d1-8aed-ce2af85916fb\") " pod="openshift-marketplace/certified-operators-g49sv" Nov 22 12:24:54 crc kubenswrapper[4772]: I1122 12:24:54.672680 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g49sv" Nov 22 12:24:55 crc kubenswrapper[4772]: I1122 12:24:55.266835 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g49sv"] Nov 22 12:24:56 crc kubenswrapper[4772]: I1122 12:24:56.126445 4772 generic.go:334] "Generic (PLEG): container finished" podID="654435ef-0185-49d1-8aed-ce2af85916fb" containerID="b5190f03c9c7bd136df96b461d6b009c5ee69c56d867dc0ead1ca862d55e334a" exitCode=0 Nov 22 12:24:56 crc kubenswrapper[4772]: I1122 12:24:56.126561 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g49sv" event={"ID":"654435ef-0185-49d1-8aed-ce2af85916fb","Type":"ContainerDied","Data":"b5190f03c9c7bd136df96b461d6b009c5ee69c56d867dc0ead1ca862d55e334a"} Nov 22 12:24:56 crc kubenswrapper[4772]: I1122 12:24:56.127069 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g49sv" event={"ID":"654435ef-0185-49d1-8aed-ce2af85916fb","Type":"ContainerStarted","Data":"6321f61122740846af25d3293537fe95feb6a2f67695da98450d1215a99f414b"} Nov 22 12:24:57 crc kubenswrapper[4772]: I1122 12:24:57.152928 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g49sv" event={"ID":"654435ef-0185-49d1-8aed-ce2af85916fb","Type":"ContainerStarted","Data":"d5d500d314cb4e28b06b0dbda540b1c557062ba85299c92bace1e8dc192cad95"} Nov 22 12:25:00 crc kubenswrapper[4772]: I1122 12:25:00.192567 4772 generic.go:334] "Generic (PLEG): container finished" podID="654435ef-0185-49d1-8aed-ce2af85916fb" containerID="d5d500d314cb4e28b06b0dbda540b1c557062ba85299c92bace1e8dc192cad95" exitCode=0 Nov 22 12:25:00 crc kubenswrapper[4772]: I1122 12:25:00.192609 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g49sv" event={"ID":"654435ef-0185-49d1-8aed-ce2af85916fb","Type":"ContainerDied","Data":"d5d500d314cb4e28b06b0dbda540b1c557062ba85299c92bace1e8dc192cad95"} Nov 22 12:25:01 crc kubenswrapper[4772]: I1122 12:25:01.206684 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g49sv" event={"ID":"654435ef-0185-49d1-8aed-ce2af85916fb","Type":"ContainerStarted","Data":"03a7bb49b229771dc270e5fd6d6a583db3dc9805b549789329fd5e7a80fc0f85"} Nov 22 12:25:01 crc kubenswrapper[4772]: I1122 
12:25:01.223728 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-g49sv" podStartSLOduration=2.663110796 podStartE2EDuration="7.223711275s" podCreationTimestamp="2025-11-22 12:24:54 +0000 UTC" firstStartedPulling="2025-11-22 12:24:56.129363642 +0000 UTC m=+6416.368808136" lastFinishedPulling="2025-11-22 12:25:00.689964081 +0000 UTC m=+6420.929408615" observedRunningTime="2025-11-22 12:25:01.222775842 +0000 UTC m=+6421.462220336" watchObservedRunningTime="2025-11-22 12:25:01.223711275 +0000 UTC m=+6421.463155769" Nov 22 12:25:04 crc kubenswrapper[4772]: I1122 12:25:04.674210 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-g49sv" Nov 22 12:25:04 crc kubenswrapper[4772]: I1122 12:25:04.675724 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-g49sv" Nov 22 12:25:04 crc kubenswrapper[4772]: I1122 12:25:04.893431 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7bfcfffbc5-t787m"] Nov 22 12:25:04 crc kubenswrapper[4772]: I1122 12:25:04.897382 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bfcfffbc5-t787m" Nov 22 12:25:04 crc kubenswrapper[4772]: I1122 12:25:04.902153 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1" Nov 22 12:25:04 crc kubenswrapper[4772]: I1122 12:25:04.905649 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bfcfffbc5-t787m"] Nov 22 12:25:05 crc kubenswrapper[4772]: I1122 12:25:05.025543 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/26d5c78f-5f76-4b87-8018-033b3f4f630f-dns-svc\") pod \"dnsmasq-dns-7bfcfffbc5-t787m\" (UID: \"26d5c78f-5f76-4b87-8018-033b3f4f630f\") " pod="openstack/dnsmasq-dns-7bfcfffbc5-t787m" Nov 22 12:25:05 crc kubenswrapper[4772]: I1122 12:25:05.025651 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/26d5c78f-5f76-4b87-8018-033b3f4f630f-openstack-cell1\") pod \"dnsmasq-dns-7bfcfffbc5-t787m\" (UID: \"26d5c78f-5f76-4b87-8018-033b3f4f630f\") " pod="openstack/dnsmasq-dns-7bfcfffbc5-t787m" Nov 22 12:25:05 crc kubenswrapper[4772]: I1122 12:25:05.025688 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/26d5c78f-5f76-4b87-8018-033b3f4f630f-ovsdbserver-sb\") pod \"dnsmasq-dns-7bfcfffbc5-t787m\" (UID: \"26d5c78f-5f76-4b87-8018-033b3f4f630f\") " pod="openstack/dnsmasq-dns-7bfcfffbc5-t787m" Nov 22 12:25:05 crc kubenswrapper[4772]: I1122 12:25:05.025714 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/26d5c78f-5f76-4b87-8018-033b3f4f630f-ovsdbserver-nb\") pod \"dnsmasq-dns-7bfcfffbc5-t787m\" (UID: \"26d5c78f-5f76-4b87-8018-033b3f4f630f\") " pod="openstack/dnsmasq-dns-7bfcfffbc5-t787m" Nov 22 12:25:05 crc kubenswrapper[4772]: I1122 12:25:05.025768 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7dm9\" (UniqueName: \"kubernetes.io/projected/26d5c78f-5f76-4b87-8018-033b3f4f630f-kube-api-access-l7dm9\") pod 
\"dnsmasq-dns-7bfcfffbc5-t787m\" (UID: \"26d5c78f-5f76-4b87-8018-033b3f4f630f\") " pod="openstack/dnsmasq-dns-7bfcfffbc5-t787m" Nov 22 12:25:05 crc kubenswrapper[4772]: I1122 12:25:05.025822 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26d5c78f-5f76-4b87-8018-033b3f4f630f-config\") pod \"dnsmasq-dns-7bfcfffbc5-t787m\" (UID: \"26d5c78f-5f76-4b87-8018-033b3f4f630f\") " pod="openstack/dnsmasq-dns-7bfcfffbc5-t787m" Nov 22 12:25:05 crc kubenswrapper[4772]: I1122 12:25:05.127867 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/26d5c78f-5f76-4b87-8018-033b3f4f630f-dns-svc\") pod \"dnsmasq-dns-7bfcfffbc5-t787m\" (UID: \"26d5c78f-5f76-4b87-8018-033b3f4f630f\") " pod="openstack/dnsmasq-dns-7bfcfffbc5-t787m" Nov 22 12:25:05 crc kubenswrapper[4772]: I1122 12:25:05.128832 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/26d5c78f-5f76-4b87-8018-033b3f4f630f-dns-svc\") pod \"dnsmasq-dns-7bfcfffbc5-t787m\" (UID: \"26d5c78f-5f76-4b87-8018-033b3f4f630f\") " pod="openstack/dnsmasq-dns-7bfcfffbc5-t787m" Nov 22 12:25:05 crc kubenswrapper[4772]: I1122 12:25:05.129002 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/26d5c78f-5f76-4b87-8018-033b3f4f630f-openstack-cell1\") pod \"dnsmasq-dns-7bfcfffbc5-t787m\" (UID: \"26d5c78f-5f76-4b87-8018-033b3f4f630f\") " pod="openstack/dnsmasq-dns-7bfcfffbc5-t787m" Nov 22 12:25:05 crc kubenswrapper[4772]: I1122 12:25:05.129041 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/26d5c78f-5f76-4b87-8018-033b3f4f630f-ovsdbserver-sb\") pod \"dnsmasq-dns-7bfcfffbc5-t787m\" (UID: \"26d5c78f-5f76-4b87-8018-033b3f4f630f\") " pod="openstack/dnsmasq-dns-7bfcfffbc5-t787m" Nov 22 12:25:05 crc kubenswrapper[4772]: I1122 12:25:05.129082 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/26d5c78f-5f76-4b87-8018-033b3f4f630f-ovsdbserver-nb\") pod \"dnsmasq-dns-7bfcfffbc5-t787m\" (UID: \"26d5c78f-5f76-4b87-8018-033b3f4f630f\") " pod="openstack/dnsmasq-dns-7bfcfffbc5-t787m" Nov 22 12:25:05 crc kubenswrapper[4772]: I1122 12:25:05.129847 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/26d5c78f-5f76-4b87-8018-033b3f4f630f-ovsdbserver-sb\") pod \"dnsmasq-dns-7bfcfffbc5-t787m\" (UID: \"26d5c78f-5f76-4b87-8018-033b3f4f630f\") " pod="openstack/dnsmasq-dns-7bfcfffbc5-t787m" Nov 22 12:25:05 crc kubenswrapper[4772]: I1122 12:25:05.129505 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7dm9\" (UniqueName: \"kubernetes.io/projected/26d5c78f-5f76-4b87-8018-033b3f4f630f-kube-api-access-l7dm9\") pod \"dnsmasq-dns-7bfcfffbc5-t787m\" (UID: \"26d5c78f-5f76-4b87-8018-033b3f4f630f\") " pod="openstack/dnsmasq-dns-7bfcfffbc5-t787m" Nov 22 12:25:05 crc kubenswrapper[4772]: I1122 12:25:05.129981 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26d5c78f-5f76-4b87-8018-033b3f4f630f-config\") pod \"dnsmasq-dns-7bfcfffbc5-t787m\" (UID: \"26d5c78f-5f76-4b87-8018-033b3f4f630f\") " 
pod="openstack/dnsmasq-dns-7bfcfffbc5-t787m" Nov 22 12:25:05 crc kubenswrapper[4772]: I1122 12:25:05.130319 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/26d5c78f-5f76-4b87-8018-033b3f4f630f-openstack-cell1\") pod \"dnsmasq-dns-7bfcfffbc5-t787m\" (UID: \"26d5c78f-5f76-4b87-8018-033b3f4f630f\") " pod="openstack/dnsmasq-dns-7bfcfffbc5-t787m" Nov 22 12:25:05 crc kubenswrapper[4772]: I1122 12:25:05.130447 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/26d5c78f-5f76-4b87-8018-033b3f4f630f-ovsdbserver-nb\") pod \"dnsmasq-dns-7bfcfffbc5-t787m\" (UID: \"26d5c78f-5f76-4b87-8018-033b3f4f630f\") " pod="openstack/dnsmasq-dns-7bfcfffbc5-t787m" Nov 22 12:25:05 crc kubenswrapper[4772]: I1122 12:25:05.130695 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26d5c78f-5f76-4b87-8018-033b3f4f630f-config\") pod \"dnsmasq-dns-7bfcfffbc5-t787m\" (UID: \"26d5c78f-5f76-4b87-8018-033b3f4f630f\") " pod="openstack/dnsmasq-dns-7bfcfffbc5-t787m" Nov 22 12:25:05 crc kubenswrapper[4772]: I1122 12:25:05.153734 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7dm9\" (UniqueName: \"kubernetes.io/projected/26d5c78f-5f76-4b87-8018-033b3f4f630f-kube-api-access-l7dm9\") pod \"dnsmasq-dns-7bfcfffbc5-t787m\" (UID: \"26d5c78f-5f76-4b87-8018-033b3f4f630f\") " pod="openstack/dnsmasq-dns-7bfcfffbc5-t787m" Nov 22 12:25:05 crc kubenswrapper[4772]: I1122 12:25:05.228662 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bfcfffbc5-t787m" Nov 22 12:25:05 crc kubenswrapper[4772]: I1122 12:25:05.756128 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-g49sv" podUID="654435ef-0185-49d1-8aed-ce2af85916fb" containerName="registry-server" probeResult="failure" output=< Nov 22 12:25:05 crc kubenswrapper[4772]: timeout: failed to connect service ":50051" within 1s Nov 22 12:25:05 crc kubenswrapper[4772]: > Nov 22 12:25:05 crc kubenswrapper[4772]: I1122 12:25:05.960332 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bfcfffbc5-t787m"] Nov 22 12:25:06 crc kubenswrapper[4772]: I1122 12:25:06.299203 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bfcfffbc5-t787m" event={"ID":"26d5c78f-5f76-4b87-8018-033b3f4f630f","Type":"ContainerStarted","Data":"112058255ff1646b5928803c5bfc309fa05683fa708a028cea28b4f1389d2062"} Nov 22 12:25:07 crc kubenswrapper[4772]: I1122 12:25:07.316514 4772 generic.go:334] "Generic (PLEG): container finished" podID="26d5c78f-5f76-4b87-8018-033b3f4f630f" containerID="2fa5f230361934174d16183ace46d29f6b24b75d671dcbf4a6e96caf3bed87d8" exitCode=0 Nov 22 12:25:07 crc kubenswrapper[4772]: I1122 12:25:07.316663 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bfcfffbc5-t787m" event={"ID":"26d5c78f-5f76-4b87-8018-033b3f4f630f","Type":"ContainerDied","Data":"2fa5f230361934174d16183ace46d29f6b24b75d671dcbf4a6e96caf3bed87d8"} Nov 22 12:25:08 crc kubenswrapper[4772]: I1122 12:25:08.347673 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bfcfffbc5-t787m" event={"ID":"26d5c78f-5f76-4b87-8018-033b3f4f630f","Type":"ContainerStarted","Data":"5a8854feb57e1473ea281132aaabfafe5cb12db595a0ea0a4a4b352c0ef7e2c8"} Nov 22 12:25:08 crc 
kubenswrapper[4772]: I1122 12:25:08.350774 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7bfcfffbc5-t787m" Nov 22 12:25:08 crc kubenswrapper[4772]: I1122 12:25:08.390961 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7bfcfffbc5-t787m" podStartSLOduration=4.390930183 podStartE2EDuration="4.390930183s" podCreationTimestamp="2025-11-22 12:25:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:25:08.382818953 +0000 UTC m=+6428.622263487" watchObservedRunningTime="2025-11-22 12:25:08.390930183 +0000 UTC m=+6428.630374687" Nov 22 12:25:14 crc kubenswrapper[4772]: I1122 12:25:14.749899 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-g49sv" Nov 22 12:25:14 crc kubenswrapper[4772]: I1122 12:25:14.818708 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-g49sv" Nov 22 12:25:14 crc kubenswrapper[4772]: I1122 12:25:14.999532 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g49sv"] Nov 22 12:25:15 crc kubenswrapper[4772]: I1122 12:25:15.231345 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7bfcfffbc5-t787m" Nov 22 12:25:15 crc kubenswrapper[4772]: I1122 12:25:15.314023 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bf684ddc5-wksps"] Nov 22 12:25:15 crc kubenswrapper[4772]: I1122 12:25:15.314681 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bf684ddc5-wksps" podUID="aa5165a2-1876-4f1e-bfbe-91198cc15ac5" containerName="dnsmasq-dns" containerID="cri-o://ab131b4b907e209b86358e354cefaddb4361007fbf6a9c45b89ebe5262ce55dc" gracePeriod=10 Nov 22 12:25:15 crc kubenswrapper[4772]: I1122 12:25:15.559735 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57644db565-l5cdf"] Nov 22 12:25:15 crc kubenswrapper[4772]: I1122 12:25:15.567025 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57644db565-l5cdf" Nov 22 12:25:15 crc kubenswrapper[4772]: I1122 12:25:15.579458 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57644db565-l5cdf"] Nov 22 12:25:15 crc kubenswrapper[4772]: I1122 12:25:15.694261 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/ff5945d2-041b-499f-8d6f-74dcc4d34fdb-openstack-cell1\") pod \"dnsmasq-dns-57644db565-l5cdf\" (UID: \"ff5945d2-041b-499f-8d6f-74dcc4d34fdb\") " pod="openstack/dnsmasq-dns-57644db565-l5cdf" Nov 22 12:25:15 crc kubenswrapper[4772]: I1122 12:25:15.694342 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67z4z\" (UniqueName: \"kubernetes.io/projected/ff5945d2-041b-499f-8d6f-74dcc4d34fdb-kube-api-access-67z4z\") pod \"dnsmasq-dns-57644db565-l5cdf\" (UID: \"ff5945d2-041b-499f-8d6f-74dcc4d34fdb\") " pod="openstack/dnsmasq-dns-57644db565-l5cdf" Nov 22 12:25:15 crc kubenswrapper[4772]: I1122 12:25:15.694365 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff5945d2-041b-499f-8d6f-74dcc4d34fdb-ovsdbserver-sb\") pod \"dnsmasq-dns-57644db565-l5cdf\" (UID: \"ff5945d2-041b-499f-8d6f-74dcc4d34fdb\") " pod="openstack/dnsmasq-dns-57644db565-l5cdf" Nov 22 12:25:15 crc kubenswrapper[4772]: I1122 12:25:15.694401 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff5945d2-041b-499f-8d6f-74dcc4d34fdb-ovsdbserver-nb\") pod \"dnsmasq-dns-57644db565-l5cdf\" (UID: \"ff5945d2-041b-499f-8d6f-74dcc4d34fdb\") " pod="openstack/dnsmasq-dns-57644db565-l5cdf" Nov 22 12:25:15 crc kubenswrapper[4772]: I1122 12:25:15.694456 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff5945d2-041b-499f-8d6f-74dcc4d34fdb-dns-svc\") pod \"dnsmasq-dns-57644db565-l5cdf\" (UID: \"ff5945d2-041b-499f-8d6f-74dcc4d34fdb\") " pod="openstack/dnsmasq-dns-57644db565-l5cdf" Nov 22 12:25:15 crc kubenswrapper[4772]: I1122 12:25:15.694513 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff5945d2-041b-499f-8d6f-74dcc4d34fdb-config\") pod \"dnsmasq-dns-57644db565-l5cdf\" (UID: \"ff5945d2-041b-499f-8d6f-74dcc4d34fdb\") " pod="openstack/dnsmasq-dns-57644db565-l5cdf" Nov 22 12:25:15 crc kubenswrapper[4772]: I1122 12:25:15.803097 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff5945d2-041b-499f-8d6f-74dcc4d34fdb-dns-svc\") pod \"dnsmasq-dns-57644db565-l5cdf\" (UID: \"ff5945d2-041b-499f-8d6f-74dcc4d34fdb\") " pod="openstack/dnsmasq-dns-57644db565-l5cdf" Nov 22 12:25:15 crc kubenswrapper[4772]: I1122 12:25:15.803626 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff5945d2-041b-499f-8d6f-74dcc4d34fdb-config\") pod \"dnsmasq-dns-57644db565-l5cdf\" (UID: \"ff5945d2-041b-499f-8d6f-74dcc4d34fdb\") " pod="openstack/dnsmasq-dns-57644db565-l5cdf" Nov 22 12:25:15 crc kubenswrapper[4772]: I1122 12:25:15.803754 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/ff5945d2-041b-499f-8d6f-74dcc4d34fdb-openstack-cell1\") pod \"dnsmasq-dns-57644db565-l5cdf\" (UID: \"ff5945d2-041b-499f-8d6f-74dcc4d34fdb\") " pod="openstack/dnsmasq-dns-57644db565-l5cdf" Nov 22 12:25:15 crc kubenswrapper[4772]: I1122 12:25:15.803803 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67z4z\" (UniqueName: \"kubernetes.io/projected/ff5945d2-041b-499f-8d6f-74dcc4d34fdb-kube-api-access-67z4z\") pod \"dnsmasq-dns-57644db565-l5cdf\" (UID: \"ff5945d2-041b-499f-8d6f-74dcc4d34fdb\") " pod="openstack/dnsmasq-dns-57644db565-l5cdf" Nov 22 12:25:15 crc kubenswrapper[4772]: I1122 12:25:15.803821 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff5945d2-041b-499f-8d6f-74dcc4d34fdb-ovsdbserver-sb\") pod \"dnsmasq-dns-57644db565-l5cdf\" (UID: \"ff5945d2-041b-499f-8d6f-74dcc4d34fdb\") " pod="openstack/dnsmasq-dns-57644db565-l5cdf" Nov 22 12:25:15 crc kubenswrapper[4772]: I1122 12:25:15.803861 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff5945d2-041b-499f-8d6f-74dcc4d34fdb-ovsdbserver-nb\") pod \"dnsmasq-dns-57644db565-l5cdf\" (UID: \"ff5945d2-041b-499f-8d6f-74dcc4d34fdb\") " pod="openstack/dnsmasq-dns-57644db565-l5cdf" Nov 22 12:25:15 crc kubenswrapper[4772]: I1122 12:25:15.804245 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff5945d2-041b-499f-8d6f-74dcc4d34fdb-dns-svc\") pod \"dnsmasq-dns-57644db565-l5cdf\" (UID: \"ff5945d2-041b-499f-8d6f-74dcc4d34fdb\") " pod="openstack/dnsmasq-dns-57644db565-l5cdf" Nov 22 12:25:15 crc kubenswrapper[4772]: I1122 12:25:15.805196 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/ff5945d2-041b-499f-8d6f-74dcc4d34fdb-openstack-cell1\") pod \"dnsmasq-dns-57644db565-l5cdf\" (UID: \"ff5945d2-041b-499f-8d6f-74dcc4d34fdb\") " pod="openstack/dnsmasq-dns-57644db565-l5cdf" Nov 22 12:25:15 crc kubenswrapper[4772]: I1122 12:25:15.805241 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff5945d2-041b-499f-8d6f-74dcc4d34fdb-ovsdbserver-nb\") pod \"dnsmasq-dns-57644db565-l5cdf\" (UID: \"ff5945d2-041b-499f-8d6f-74dcc4d34fdb\") " pod="openstack/dnsmasq-dns-57644db565-l5cdf" Nov 22 12:25:15 crc kubenswrapper[4772]: I1122 12:25:15.805730 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff5945d2-041b-499f-8d6f-74dcc4d34fdb-ovsdbserver-sb\") pod \"dnsmasq-dns-57644db565-l5cdf\" (UID: \"ff5945d2-041b-499f-8d6f-74dcc4d34fdb\") " pod="openstack/dnsmasq-dns-57644db565-l5cdf" Nov 22 12:25:15 crc kubenswrapper[4772]: I1122 12:25:15.805896 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff5945d2-041b-499f-8d6f-74dcc4d34fdb-config\") pod \"dnsmasq-dns-57644db565-l5cdf\" (UID: \"ff5945d2-041b-499f-8d6f-74dcc4d34fdb\") " pod="openstack/dnsmasq-dns-57644db565-l5cdf" Nov 22 12:25:15 crc kubenswrapper[4772]: I1122 12:25:15.830513 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67z4z\" (UniqueName: 
\"kubernetes.io/projected/ff5945d2-041b-499f-8d6f-74dcc4d34fdb-kube-api-access-67z4z\") pod \"dnsmasq-dns-57644db565-l5cdf\" (UID: \"ff5945d2-041b-499f-8d6f-74dcc4d34fdb\") " pod="openstack/dnsmasq-dns-57644db565-l5cdf" Nov 22 12:25:15 crc kubenswrapper[4772]: I1122 12:25:15.914755 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57644db565-l5cdf" Nov 22 12:25:16 crc kubenswrapper[4772]: I1122 12:25:16.045895 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bf684ddc5-wksps" Nov 22 12:25:16 crc kubenswrapper[4772]: I1122 12:25:16.216946 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aa5165a2-1876-4f1e-bfbe-91198cc15ac5-ovsdbserver-nb\") pod \"aa5165a2-1876-4f1e-bfbe-91198cc15ac5\" (UID: \"aa5165a2-1876-4f1e-bfbe-91198cc15ac5\") " Nov 22 12:25:16 crc kubenswrapper[4772]: I1122 12:25:16.217353 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aa5165a2-1876-4f1e-bfbe-91198cc15ac5-dns-svc\") pod \"aa5165a2-1876-4f1e-bfbe-91198cc15ac5\" (UID: \"aa5165a2-1876-4f1e-bfbe-91198cc15ac5\") " Nov 22 12:25:16 crc kubenswrapper[4772]: I1122 12:25:16.217376 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mvf2\" (UniqueName: \"kubernetes.io/projected/aa5165a2-1876-4f1e-bfbe-91198cc15ac5-kube-api-access-5mvf2\") pod \"aa5165a2-1876-4f1e-bfbe-91198cc15ac5\" (UID: \"aa5165a2-1876-4f1e-bfbe-91198cc15ac5\") " Nov 22 12:25:16 crc kubenswrapper[4772]: I1122 12:25:16.217417 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa5165a2-1876-4f1e-bfbe-91198cc15ac5-config\") pod \"aa5165a2-1876-4f1e-bfbe-91198cc15ac5\" (UID: \"aa5165a2-1876-4f1e-bfbe-91198cc15ac5\") " Nov 22 12:25:16 crc kubenswrapper[4772]: I1122 12:25:16.217510 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aa5165a2-1876-4f1e-bfbe-91198cc15ac5-ovsdbserver-sb\") pod \"aa5165a2-1876-4f1e-bfbe-91198cc15ac5\" (UID: \"aa5165a2-1876-4f1e-bfbe-91198cc15ac5\") " Nov 22 12:25:16 crc kubenswrapper[4772]: I1122 12:25:16.224562 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa5165a2-1876-4f1e-bfbe-91198cc15ac5-kube-api-access-5mvf2" (OuterVolumeSpecName: "kube-api-access-5mvf2") pod "aa5165a2-1876-4f1e-bfbe-91198cc15ac5" (UID: "aa5165a2-1876-4f1e-bfbe-91198cc15ac5"). InnerVolumeSpecName "kube-api-access-5mvf2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:25:16 crc kubenswrapper[4772]: I1122 12:25:16.283652 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa5165a2-1876-4f1e-bfbe-91198cc15ac5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "aa5165a2-1876-4f1e-bfbe-91198cc15ac5" (UID: "aa5165a2-1876-4f1e-bfbe-91198cc15ac5"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 12:25:16 crc kubenswrapper[4772]: I1122 12:25:16.320868 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mvf2\" (UniqueName: \"kubernetes.io/projected/aa5165a2-1876-4f1e-bfbe-91198cc15ac5-kube-api-access-5mvf2\") on node \"crc\" DevicePath \"\"" Nov 22 12:25:16 crc kubenswrapper[4772]: I1122 12:25:16.320899 4772 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aa5165a2-1876-4f1e-bfbe-91198cc15ac5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 22 12:25:16 crc kubenswrapper[4772]: I1122 12:25:16.326732 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa5165a2-1876-4f1e-bfbe-91198cc15ac5-config" (OuterVolumeSpecName: "config") pod "aa5165a2-1876-4f1e-bfbe-91198cc15ac5" (UID: "aa5165a2-1876-4f1e-bfbe-91198cc15ac5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 12:25:16 crc kubenswrapper[4772]: I1122 12:25:16.333505 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa5165a2-1876-4f1e-bfbe-91198cc15ac5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "aa5165a2-1876-4f1e-bfbe-91198cc15ac5" (UID: "aa5165a2-1876-4f1e-bfbe-91198cc15ac5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 12:25:16 crc kubenswrapper[4772]: I1122 12:25:16.336109 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa5165a2-1876-4f1e-bfbe-91198cc15ac5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "aa5165a2-1876-4f1e-bfbe-91198cc15ac5" (UID: "aa5165a2-1876-4f1e-bfbe-91198cc15ac5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 12:25:16 crc kubenswrapper[4772]: I1122 12:25:16.423495 4772 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aa5165a2-1876-4f1e-bfbe-91198cc15ac5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 12:25:16 crc kubenswrapper[4772]: I1122 12:25:16.423543 4772 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aa5165a2-1876-4f1e-bfbe-91198cc15ac5-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 12:25:16 crc kubenswrapper[4772]: I1122 12:25:16.423555 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa5165a2-1876-4f1e-bfbe-91198cc15ac5-config\") on node \"crc\" DevicePath \"\"" Nov 22 12:25:16 crc kubenswrapper[4772]: I1122 12:25:16.467141 4772 generic.go:334] "Generic (PLEG): container finished" podID="aa5165a2-1876-4f1e-bfbe-91198cc15ac5" containerID="ab131b4b907e209b86358e354cefaddb4361007fbf6a9c45b89ebe5262ce55dc" exitCode=0 Nov 22 12:25:16 crc kubenswrapper[4772]: I1122 12:25:16.467394 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bf684ddc5-wksps" Nov 22 12:25:16 crc kubenswrapper[4772]: I1122 12:25:16.467529 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-g49sv" podUID="654435ef-0185-49d1-8aed-ce2af85916fb" containerName="registry-server" containerID="cri-o://03a7bb49b229771dc270e5fd6d6a583db3dc9805b549789329fd5e7a80fc0f85" gracePeriod=2 Nov 22 12:25:16 crc kubenswrapper[4772]: I1122 12:25:16.467660 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bf684ddc5-wksps" event={"ID":"aa5165a2-1876-4f1e-bfbe-91198cc15ac5","Type":"ContainerDied","Data":"ab131b4b907e209b86358e354cefaddb4361007fbf6a9c45b89ebe5262ce55dc"} Nov 22 12:25:16 crc kubenswrapper[4772]: I1122 12:25:16.467723 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bf684ddc5-wksps" event={"ID":"aa5165a2-1876-4f1e-bfbe-91198cc15ac5","Type":"ContainerDied","Data":"b64556cdce9ee8163baeaf2134557dbb719e0185d1e622924161b318fba0750b"} Nov 22 12:25:16 crc kubenswrapper[4772]: I1122 12:25:16.467754 4772 scope.go:117] "RemoveContainer" containerID="ab131b4b907e209b86358e354cefaddb4361007fbf6a9c45b89ebe5262ce55dc" Nov 22 12:25:16 crc kubenswrapper[4772]: I1122 12:25:16.661083 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57644db565-l5cdf"] Nov 22 12:25:16 crc kubenswrapper[4772]: I1122 12:25:16.891180 4772 scope.go:117] "RemoveContainer" containerID="908c23612303573bcabd5f4b393a52973b93b9df7200258a43e7b8117b319418" Nov 22 12:25:16 crc kubenswrapper[4772]: I1122 12:25:16.919576 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bf684ddc5-wksps"] Nov 22 12:25:16 crc kubenswrapper[4772]: I1122 12:25:16.931986 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bf684ddc5-wksps"] Nov 22 12:25:16 crc kubenswrapper[4772]: I1122 12:25:16.941946 4772 scope.go:117] "RemoveContainer" containerID="ab131b4b907e209b86358e354cefaddb4361007fbf6a9c45b89ebe5262ce55dc" Nov 22 12:25:16 crc kubenswrapper[4772]: E1122 12:25:16.947419 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab131b4b907e209b86358e354cefaddb4361007fbf6a9c45b89ebe5262ce55dc\": container with ID starting with ab131b4b907e209b86358e354cefaddb4361007fbf6a9c45b89ebe5262ce55dc not found: ID does not exist" containerID="ab131b4b907e209b86358e354cefaddb4361007fbf6a9c45b89ebe5262ce55dc" Nov 22 12:25:16 crc kubenswrapper[4772]: I1122 12:25:16.947471 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab131b4b907e209b86358e354cefaddb4361007fbf6a9c45b89ebe5262ce55dc"} err="failed to get container status \"ab131b4b907e209b86358e354cefaddb4361007fbf6a9c45b89ebe5262ce55dc\": rpc error: code = NotFound desc = could not find container \"ab131b4b907e209b86358e354cefaddb4361007fbf6a9c45b89ebe5262ce55dc\": container with ID starting with ab131b4b907e209b86358e354cefaddb4361007fbf6a9c45b89ebe5262ce55dc not found: ID does not exist" Nov 22 12:25:16 crc kubenswrapper[4772]: I1122 12:25:16.947501 4772 scope.go:117] "RemoveContainer" containerID="908c23612303573bcabd5f4b393a52973b93b9df7200258a43e7b8117b319418" Nov 22 12:25:16 crc kubenswrapper[4772]: E1122 12:25:16.948021 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"908c23612303573bcabd5f4b393a52973b93b9df7200258a43e7b8117b319418\": container with ID starting with 908c23612303573bcabd5f4b393a52973b93b9df7200258a43e7b8117b319418 not found: ID does not exist" containerID="908c23612303573bcabd5f4b393a52973b93b9df7200258a43e7b8117b319418" Nov 22 12:25:16 crc kubenswrapper[4772]: I1122 12:25:16.948039 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"908c23612303573bcabd5f4b393a52973b93b9df7200258a43e7b8117b319418"} err="failed to get container status \"908c23612303573bcabd5f4b393a52973b93b9df7200258a43e7b8117b319418\": rpc error: code = NotFound desc = could not find container \"908c23612303573bcabd5f4b393a52973b93b9df7200258a43e7b8117b319418\": container with ID starting with 908c23612303573bcabd5f4b393a52973b93b9df7200258a43e7b8117b319418 not found: ID does not exist" Nov 22 12:25:17 crc kubenswrapper[4772]: I1122 12:25:17.092738 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g49sv" Nov 22 12:25:17 crc kubenswrapper[4772]: I1122 12:25:17.112542 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/654435ef-0185-49d1-8aed-ce2af85916fb-utilities\") pod \"654435ef-0185-49d1-8aed-ce2af85916fb\" (UID: \"654435ef-0185-49d1-8aed-ce2af85916fb\") " Nov 22 12:25:17 crc kubenswrapper[4772]: I1122 12:25:17.115156 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9r7t2\" (UniqueName: \"kubernetes.io/projected/654435ef-0185-49d1-8aed-ce2af85916fb-kube-api-access-9r7t2\") pod \"654435ef-0185-49d1-8aed-ce2af85916fb\" (UID: \"654435ef-0185-49d1-8aed-ce2af85916fb\") " Nov 22 12:25:17 crc kubenswrapper[4772]: I1122 12:25:17.115377 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/654435ef-0185-49d1-8aed-ce2af85916fb-catalog-content\") pod \"654435ef-0185-49d1-8aed-ce2af85916fb\" (UID: \"654435ef-0185-49d1-8aed-ce2af85916fb\") " Nov 22 12:25:17 crc kubenswrapper[4772]: I1122 12:25:17.115609 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/654435ef-0185-49d1-8aed-ce2af85916fb-utilities" (OuterVolumeSpecName: "utilities") pod "654435ef-0185-49d1-8aed-ce2af85916fb" (UID: "654435ef-0185-49d1-8aed-ce2af85916fb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 12:25:17 crc kubenswrapper[4772]: I1122 12:25:17.116715 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/654435ef-0185-49d1-8aed-ce2af85916fb-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 12:25:17 crc kubenswrapper[4772]: I1122 12:25:17.120189 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/654435ef-0185-49d1-8aed-ce2af85916fb-kube-api-access-9r7t2" (OuterVolumeSpecName: "kube-api-access-9r7t2") pod "654435ef-0185-49d1-8aed-ce2af85916fb" (UID: "654435ef-0185-49d1-8aed-ce2af85916fb"). InnerVolumeSpecName "kube-api-access-9r7t2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:25:17 crc kubenswrapper[4772]: I1122 12:25:17.212388 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/654435ef-0185-49d1-8aed-ce2af85916fb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "654435ef-0185-49d1-8aed-ce2af85916fb" (UID: "654435ef-0185-49d1-8aed-ce2af85916fb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 12:25:17 crc kubenswrapper[4772]: I1122 12:25:17.218275 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9r7t2\" (UniqueName: \"kubernetes.io/projected/654435ef-0185-49d1-8aed-ce2af85916fb-kube-api-access-9r7t2\") on node \"crc\" DevicePath \"\"" Nov 22 12:25:17 crc kubenswrapper[4772]: I1122 12:25:17.218310 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/654435ef-0185-49d1-8aed-ce2af85916fb-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 12:25:17 crc kubenswrapper[4772]: E1122 12:25:17.337262 4772 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff5945d2_041b_499f_8d6f_74dcc4d34fdb.slice/crio-580b1dc98271e6704749dec961f221f977690ebc0f26b5f8cabc0dc07a1a965a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff5945d2_041b_499f_8d6f_74dcc4d34fdb.slice/crio-conmon-580b1dc98271e6704749dec961f221f977690ebc0f26b5f8cabc0dc07a1a965a.scope\": RecentStats: unable to find data in memory cache]" Nov 22 12:25:17 crc kubenswrapper[4772]: I1122 12:25:17.427920 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa5165a2-1876-4f1e-bfbe-91198cc15ac5" path="/var/lib/kubelet/pods/aa5165a2-1876-4f1e-bfbe-91198cc15ac5/volumes" Nov 22 12:25:17 crc kubenswrapper[4772]: I1122 12:25:17.506265 4772 generic.go:334] "Generic (PLEG): container finished" podID="ff5945d2-041b-499f-8d6f-74dcc4d34fdb" containerID="580b1dc98271e6704749dec961f221f977690ebc0f26b5f8cabc0dc07a1a965a" exitCode=0 Nov 22 12:25:17 crc kubenswrapper[4772]: I1122 12:25:17.506375 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57644db565-l5cdf" event={"ID":"ff5945d2-041b-499f-8d6f-74dcc4d34fdb","Type":"ContainerDied","Data":"580b1dc98271e6704749dec961f221f977690ebc0f26b5f8cabc0dc07a1a965a"} Nov 22 12:25:17 crc kubenswrapper[4772]: I1122 12:25:17.506425 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57644db565-l5cdf" event={"ID":"ff5945d2-041b-499f-8d6f-74dcc4d34fdb","Type":"ContainerStarted","Data":"f7823aa8668fc2bcaeb27c8695f6b02ab3c5e50febd631060365437757c35e60"} Nov 22 12:25:17 crc kubenswrapper[4772]: I1122 12:25:17.532316 4772 generic.go:334] "Generic (PLEG): container finished" podID="654435ef-0185-49d1-8aed-ce2af85916fb" containerID="03a7bb49b229771dc270e5fd6d6a583db3dc9805b549789329fd5e7a80fc0f85" exitCode=0 Nov 22 12:25:17 crc kubenswrapper[4772]: I1122 12:25:17.532373 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-g49sv" Nov 22 12:25:17 crc kubenswrapper[4772]: I1122 12:25:17.532485 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g49sv" event={"ID":"654435ef-0185-49d1-8aed-ce2af85916fb","Type":"ContainerDied","Data":"03a7bb49b229771dc270e5fd6d6a583db3dc9805b549789329fd5e7a80fc0f85"} Nov 22 12:25:17 crc kubenswrapper[4772]: I1122 12:25:17.532522 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g49sv" event={"ID":"654435ef-0185-49d1-8aed-ce2af85916fb","Type":"ContainerDied","Data":"6321f61122740846af25d3293537fe95feb6a2f67695da98450d1215a99f414b"} Nov 22 12:25:17 crc kubenswrapper[4772]: I1122 12:25:17.532668 4772 scope.go:117] "RemoveContainer" containerID="03a7bb49b229771dc270e5fd6d6a583db3dc9805b549789329fd5e7a80fc0f85" Nov 22 12:25:17 crc kubenswrapper[4772]: I1122 12:25:17.742272 4772 scope.go:117] "RemoveContainer" containerID="d5d500d314cb4e28b06b0dbda540b1c557062ba85299c92bace1e8dc192cad95" Nov 22 12:25:17 crc kubenswrapper[4772]: I1122 12:25:17.754800 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g49sv"] Nov 22 12:25:17 crc kubenswrapper[4772]: I1122 12:25:17.765907 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-g49sv"] Nov 22 12:25:17 crc kubenswrapper[4772]: I1122 12:25:17.777082 4772 scope.go:117] "RemoveContainer" containerID="b5190f03c9c7bd136df96b461d6b009c5ee69c56d867dc0ead1ca862d55e334a" Nov 22 12:25:17 crc kubenswrapper[4772]: I1122 12:25:17.840262 4772 scope.go:117] "RemoveContainer" containerID="03a7bb49b229771dc270e5fd6d6a583db3dc9805b549789329fd5e7a80fc0f85" Nov 22 12:25:17 crc kubenswrapper[4772]: E1122 12:25:17.841677 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03a7bb49b229771dc270e5fd6d6a583db3dc9805b549789329fd5e7a80fc0f85\": container with ID starting with 03a7bb49b229771dc270e5fd6d6a583db3dc9805b549789329fd5e7a80fc0f85 not found: ID does not exist" containerID="03a7bb49b229771dc270e5fd6d6a583db3dc9805b549789329fd5e7a80fc0f85" Nov 22 12:25:17 crc kubenswrapper[4772]: I1122 12:25:17.841729 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03a7bb49b229771dc270e5fd6d6a583db3dc9805b549789329fd5e7a80fc0f85"} err="failed to get container status \"03a7bb49b229771dc270e5fd6d6a583db3dc9805b549789329fd5e7a80fc0f85\": rpc error: code = NotFound desc = could not find container \"03a7bb49b229771dc270e5fd6d6a583db3dc9805b549789329fd5e7a80fc0f85\": container with ID starting with 03a7bb49b229771dc270e5fd6d6a583db3dc9805b549789329fd5e7a80fc0f85 not found: ID does not exist" Nov 22 12:25:17 crc kubenswrapper[4772]: I1122 12:25:17.841760 4772 scope.go:117] "RemoveContainer" containerID="d5d500d314cb4e28b06b0dbda540b1c557062ba85299c92bace1e8dc192cad95" Nov 22 12:25:17 crc kubenswrapper[4772]: E1122 12:25:17.842787 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5d500d314cb4e28b06b0dbda540b1c557062ba85299c92bace1e8dc192cad95\": container with ID starting with d5d500d314cb4e28b06b0dbda540b1c557062ba85299c92bace1e8dc192cad95 not found: ID does not exist" containerID="d5d500d314cb4e28b06b0dbda540b1c557062ba85299c92bace1e8dc192cad95" Nov 22 12:25:17 crc kubenswrapper[4772]: I1122 12:25:17.842857 4772 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5d500d314cb4e28b06b0dbda540b1c557062ba85299c92bace1e8dc192cad95"} err="failed to get container status \"d5d500d314cb4e28b06b0dbda540b1c557062ba85299c92bace1e8dc192cad95\": rpc error: code = NotFound desc = could not find container \"d5d500d314cb4e28b06b0dbda540b1c557062ba85299c92bace1e8dc192cad95\": container with ID starting with d5d500d314cb4e28b06b0dbda540b1c557062ba85299c92bace1e8dc192cad95 not found: ID does not exist" Nov 22 12:25:17 crc kubenswrapper[4772]: I1122 12:25:17.842904 4772 scope.go:117] "RemoveContainer" containerID="b5190f03c9c7bd136df96b461d6b009c5ee69c56d867dc0ead1ca862d55e334a" Nov 22 12:25:17 crc kubenswrapper[4772]: E1122 12:25:17.843448 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5190f03c9c7bd136df96b461d6b009c5ee69c56d867dc0ead1ca862d55e334a\": container with ID starting with b5190f03c9c7bd136df96b461d6b009c5ee69c56d867dc0ead1ca862d55e334a not found: ID does not exist" containerID="b5190f03c9c7bd136df96b461d6b009c5ee69c56d867dc0ead1ca862d55e334a" Nov 22 12:25:17 crc kubenswrapper[4772]: I1122 12:25:17.843524 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5190f03c9c7bd136df96b461d6b009c5ee69c56d867dc0ead1ca862d55e334a"} err="failed to get container status \"b5190f03c9c7bd136df96b461d6b009c5ee69c56d867dc0ead1ca862d55e334a\": rpc error: code = NotFound desc = could not find container \"b5190f03c9c7bd136df96b461d6b009c5ee69c56d867dc0ead1ca862d55e334a\": container with ID starting with b5190f03c9c7bd136df96b461d6b009c5ee69c56d867dc0ead1ca862d55e334a not found: ID does not exist" Nov 22 12:25:18 crc kubenswrapper[4772]: I1122 12:25:18.550064 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57644db565-l5cdf" event={"ID":"ff5945d2-041b-499f-8d6f-74dcc4d34fdb","Type":"ContainerStarted","Data":"7bd57ceb58e442df8f3d83c8b2668995bf2534ac23298d752cfba52723727732"} Nov 22 12:25:18 crc kubenswrapper[4772]: I1122 12:25:18.550382 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57644db565-l5cdf" Nov 22 12:25:18 crc kubenswrapper[4772]: I1122 12:25:18.591177 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57644db565-l5cdf" podStartSLOduration=3.591141984 podStartE2EDuration="3.591141984s" podCreationTimestamp="2025-11-22 12:25:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:25:18.575518668 +0000 UTC m=+6438.814963162" watchObservedRunningTime="2025-11-22 12:25:18.591141984 +0000 UTC m=+6438.830586478" Nov 22 12:25:19 crc kubenswrapper[4772]: I1122 12:25:19.438669 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="654435ef-0185-49d1-8aed-ce2af85916fb" path="/var/lib/kubelet/pods/654435ef-0185-49d1-8aed-ce2af85916fb/volumes" Nov 22 12:25:22 crc kubenswrapper[4772]: I1122 12:25:22.442182 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cclw5p"] Nov 22 12:25:22 crc kubenswrapper[4772]: E1122 12:25:22.445613 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa5165a2-1876-4f1e-bfbe-91198cc15ac5" containerName="init" Nov 22 12:25:22 crc kubenswrapper[4772]: I1122 12:25:22.445653 4772 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="aa5165a2-1876-4f1e-bfbe-91198cc15ac5" containerName="init" Nov 22 12:25:22 crc kubenswrapper[4772]: E1122 12:25:22.445726 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="654435ef-0185-49d1-8aed-ce2af85916fb" containerName="extract-content" Nov 22 12:25:22 crc kubenswrapper[4772]: I1122 12:25:22.445746 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="654435ef-0185-49d1-8aed-ce2af85916fb" containerName="extract-content" Nov 22 12:25:22 crc kubenswrapper[4772]: E1122 12:25:22.445793 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="654435ef-0185-49d1-8aed-ce2af85916fb" containerName="extract-utilities" Nov 22 12:25:22 crc kubenswrapper[4772]: I1122 12:25:22.445812 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="654435ef-0185-49d1-8aed-ce2af85916fb" containerName="extract-utilities" Nov 22 12:25:22 crc kubenswrapper[4772]: E1122 12:25:22.445885 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa5165a2-1876-4f1e-bfbe-91198cc15ac5" containerName="dnsmasq-dns" Nov 22 12:25:22 crc kubenswrapper[4772]: I1122 12:25:22.445901 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa5165a2-1876-4f1e-bfbe-91198cc15ac5" containerName="dnsmasq-dns" Nov 22 12:25:22 crc kubenswrapper[4772]: E1122 12:25:22.445987 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="654435ef-0185-49d1-8aed-ce2af85916fb" containerName="registry-server" Nov 22 12:25:22 crc kubenswrapper[4772]: I1122 12:25:22.446003 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="654435ef-0185-49d1-8aed-ce2af85916fb" containerName="registry-server" Nov 22 12:25:22 crc kubenswrapper[4772]: I1122 12:25:22.447312 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="654435ef-0185-49d1-8aed-ce2af85916fb" containerName="registry-server" Nov 22 12:25:22 crc kubenswrapper[4772]: I1122 12:25:22.447349 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa5165a2-1876-4f1e-bfbe-91198cc15ac5" containerName="dnsmasq-dns" Nov 22 12:25:22 crc kubenswrapper[4772]: I1122 12:25:22.463144 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cclw5p" Nov 22 12:25:22 crc kubenswrapper[4772]: I1122 12:25:22.468259 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 22 12:25:22 crc kubenswrapper[4772]: I1122 12:25:22.469000 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-6s2nz" Nov 22 12:25:22 crc kubenswrapper[4772]: I1122 12:25:22.478552 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 22 12:25:22 crc kubenswrapper[4772]: I1122 12:25:22.478967 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 22 12:25:22 crc kubenswrapper[4772]: I1122 12:25:22.502886 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cclw5p"] Nov 22 12:25:22 crc kubenswrapper[4772]: I1122 12:25:22.592559 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d5d839d3-cc8e-4c44-9739-f9f76e9ed1c9-ceph\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cclw5p\" (UID: \"d5d839d3-cc8e-4c44-9739-f9f76e9ed1c9\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cclw5p" Nov 22 12:25:22 crc kubenswrapper[4772]: I1122 12:25:22.592918 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d5d839d3-cc8e-4c44-9739-f9f76e9ed1c9-ssh-key\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cclw5p\" (UID: \"d5d839d3-cc8e-4c44-9739-f9f76e9ed1c9\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cclw5p" Nov 22 12:25:22 crc kubenswrapper[4772]: I1122 12:25:22.592968 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5d839d3-cc8e-4c44-9739-f9f76e9ed1c9-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cclw5p\" (UID: \"d5d839d3-cc8e-4c44-9739-f9f76e9ed1c9\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cclw5p" Nov 22 12:25:22 crc kubenswrapper[4772]: I1122 12:25:22.593073 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d5d839d3-cc8e-4c44-9739-f9f76e9ed1c9-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cclw5p\" (UID: \"d5d839d3-cc8e-4c44-9739-f9f76e9ed1c9\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cclw5p" Nov 22 12:25:22 crc kubenswrapper[4772]: I1122 12:25:22.593173 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7ctd\" (UniqueName: \"kubernetes.io/projected/d5d839d3-cc8e-4c44-9739-f9f76e9ed1c9-kube-api-access-x7ctd\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cclw5p\" (UID: \"d5d839d3-cc8e-4c44-9739-f9f76e9ed1c9\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cclw5p" Nov 22 12:25:22 crc kubenswrapper[4772]: I1122 12:25:22.695409 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ssh-key\" (UniqueName: \"kubernetes.io/secret/d5d839d3-cc8e-4c44-9739-f9f76e9ed1c9-ssh-key\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cclw5p\" (UID: \"d5d839d3-cc8e-4c44-9739-f9f76e9ed1c9\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cclw5p" Nov 22 12:25:22 crc kubenswrapper[4772]: I1122 12:25:22.695474 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5d839d3-cc8e-4c44-9739-f9f76e9ed1c9-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cclw5p\" (UID: \"d5d839d3-cc8e-4c44-9739-f9f76e9ed1c9\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cclw5p" Nov 22 12:25:22 crc kubenswrapper[4772]: I1122 12:25:22.695519 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d5d839d3-cc8e-4c44-9739-f9f76e9ed1c9-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cclw5p\" (UID: \"d5d839d3-cc8e-4c44-9739-f9f76e9ed1c9\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cclw5p" Nov 22 12:25:22 crc kubenswrapper[4772]: I1122 12:25:22.695590 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7ctd\" (UniqueName: \"kubernetes.io/projected/d5d839d3-cc8e-4c44-9739-f9f76e9ed1c9-kube-api-access-x7ctd\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cclw5p\" (UID: \"d5d839d3-cc8e-4c44-9739-f9f76e9ed1c9\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cclw5p" Nov 22 12:25:22 crc kubenswrapper[4772]: I1122 12:25:22.695725 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d5d839d3-cc8e-4c44-9739-f9f76e9ed1c9-ceph\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cclw5p\" (UID: \"d5d839d3-cc8e-4c44-9739-f9f76e9ed1c9\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cclw5p" Nov 22 12:25:22 crc kubenswrapper[4772]: I1122 12:25:22.703132 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d5d839d3-cc8e-4c44-9739-f9f76e9ed1c9-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cclw5p\" (UID: \"d5d839d3-cc8e-4c44-9739-f9f76e9ed1c9\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cclw5p" Nov 22 12:25:22 crc kubenswrapper[4772]: I1122 12:25:22.703302 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5d839d3-cc8e-4c44-9739-f9f76e9ed1c9-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cclw5p\" (UID: \"d5d839d3-cc8e-4c44-9739-f9f76e9ed1c9\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cclw5p" Nov 22 12:25:22 crc kubenswrapper[4772]: I1122 12:25:22.704781 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d5d839d3-cc8e-4c44-9739-f9f76e9ed1c9-ssh-key\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cclw5p\" (UID: \"d5d839d3-cc8e-4c44-9739-f9f76e9ed1c9\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cclw5p" Nov 22 
12:25:22 crc kubenswrapper[4772]: I1122 12:25:22.704926 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d5d839d3-cc8e-4c44-9739-f9f76e9ed1c9-ceph\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cclw5p\" (UID: \"d5d839d3-cc8e-4c44-9739-f9f76e9ed1c9\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cclw5p" Nov 22 12:25:22 crc kubenswrapper[4772]: I1122 12:25:22.714255 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7ctd\" (UniqueName: \"kubernetes.io/projected/d5d839d3-cc8e-4c44-9739-f9f76e9ed1c9-kube-api-access-x7ctd\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cclw5p\" (UID: \"d5d839d3-cc8e-4c44-9739-f9f76e9ed1c9\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cclw5p" Nov 22 12:25:22 crc kubenswrapper[4772]: I1122 12:25:22.806271 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cclw5p" Nov 22 12:25:23 crc kubenswrapper[4772]: W1122 12:25:23.396837 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5d839d3_cc8e_4c44_9739_f9f76e9ed1c9.slice/crio-4efe2697ead13be0d5220f8544dfac7f33362e8476d11decb28530ab49f5aad2 WatchSource:0}: Error finding container 4efe2697ead13be0d5220f8544dfac7f33362e8476d11decb28530ab49f5aad2: Status 404 returned error can't find the container with id 4efe2697ead13be0d5220f8544dfac7f33362e8476d11decb28530ab49f5aad2 Nov 22 12:25:23 crc kubenswrapper[4772]: I1122 12:25:23.397190 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cclw5p"] Nov 22 12:25:23 crc kubenswrapper[4772]: I1122 12:25:23.613855 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cclw5p" event={"ID":"d5d839d3-cc8e-4c44-9739-f9f76e9ed1c9","Type":"ContainerStarted","Data":"4efe2697ead13be0d5220f8544dfac7f33362e8476d11decb28530ab49f5aad2"} Nov 22 12:25:25 crc kubenswrapper[4772]: I1122 12:25:25.917186 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57644db565-l5cdf" Nov 22 12:25:25 crc kubenswrapper[4772]: I1122 12:25:25.985849 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bfcfffbc5-t787m"] Nov 22 12:25:25 crc kubenswrapper[4772]: I1122 12:25:25.986796 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7bfcfffbc5-t787m" podUID="26d5c78f-5f76-4b87-8018-033b3f4f630f" containerName="dnsmasq-dns" containerID="cri-o://5a8854feb57e1473ea281132aaabfafe5cb12db595a0ea0a4a4b352c0ef7e2c8" gracePeriod=10 Nov 22 12:25:26 crc kubenswrapper[4772]: I1122 12:25:26.530368 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7bfcfffbc5-t787m" Nov 22 12:25:26 crc kubenswrapper[4772]: I1122 12:25:26.593992 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7dm9\" (UniqueName: \"kubernetes.io/projected/26d5c78f-5f76-4b87-8018-033b3f4f630f-kube-api-access-l7dm9\") pod \"26d5c78f-5f76-4b87-8018-033b3f4f630f\" (UID: \"26d5c78f-5f76-4b87-8018-033b3f4f630f\") " Nov 22 12:25:26 crc kubenswrapper[4772]: I1122 12:25:26.594171 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/26d5c78f-5f76-4b87-8018-033b3f4f630f-dns-svc\") pod \"26d5c78f-5f76-4b87-8018-033b3f4f630f\" (UID: \"26d5c78f-5f76-4b87-8018-033b3f4f630f\") " Nov 22 12:25:26 crc kubenswrapper[4772]: I1122 12:25:26.594472 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/26d5c78f-5f76-4b87-8018-033b3f4f630f-openstack-cell1\") pod \"26d5c78f-5f76-4b87-8018-033b3f4f630f\" (UID: \"26d5c78f-5f76-4b87-8018-033b3f4f630f\") " Nov 22 12:25:26 crc kubenswrapper[4772]: I1122 12:25:26.594580 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/26d5c78f-5f76-4b87-8018-033b3f4f630f-ovsdbserver-sb\") pod \"26d5c78f-5f76-4b87-8018-033b3f4f630f\" (UID: \"26d5c78f-5f76-4b87-8018-033b3f4f630f\") " Nov 22 12:25:26 crc kubenswrapper[4772]: I1122 12:25:26.594704 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/26d5c78f-5f76-4b87-8018-033b3f4f630f-ovsdbserver-nb\") pod \"26d5c78f-5f76-4b87-8018-033b3f4f630f\" (UID: \"26d5c78f-5f76-4b87-8018-033b3f4f630f\") " Nov 22 12:25:26 crc kubenswrapper[4772]: I1122 12:25:26.594761 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26d5c78f-5f76-4b87-8018-033b3f4f630f-config\") pod \"26d5c78f-5f76-4b87-8018-033b3f4f630f\" (UID: \"26d5c78f-5f76-4b87-8018-033b3f4f630f\") " Nov 22 12:25:26 crc kubenswrapper[4772]: I1122 12:25:26.602464 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26d5c78f-5f76-4b87-8018-033b3f4f630f-kube-api-access-l7dm9" (OuterVolumeSpecName: "kube-api-access-l7dm9") pod "26d5c78f-5f76-4b87-8018-033b3f4f630f" (UID: "26d5c78f-5f76-4b87-8018-033b3f4f630f"). InnerVolumeSpecName "kube-api-access-l7dm9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:25:26 crc kubenswrapper[4772]: I1122 12:25:26.651837 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26d5c78f-5f76-4b87-8018-033b3f4f630f-openstack-cell1" (OuterVolumeSpecName: "openstack-cell1") pod "26d5c78f-5f76-4b87-8018-033b3f4f630f" (UID: "26d5c78f-5f76-4b87-8018-033b3f4f630f"). InnerVolumeSpecName "openstack-cell1". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 12:25:26 crc kubenswrapper[4772]: I1122 12:25:26.652134 4772 generic.go:334] "Generic (PLEG): container finished" podID="26d5c78f-5f76-4b87-8018-033b3f4f630f" containerID="5a8854feb57e1473ea281132aaabfafe5cb12db595a0ea0a4a4b352c0ef7e2c8" exitCode=0 Nov 22 12:25:26 crc kubenswrapper[4772]: I1122 12:25:26.652184 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bfcfffbc5-t787m" event={"ID":"26d5c78f-5f76-4b87-8018-033b3f4f630f","Type":"ContainerDied","Data":"5a8854feb57e1473ea281132aaabfafe5cb12db595a0ea0a4a4b352c0ef7e2c8"} Nov 22 12:25:26 crc kubenswrapper[4772]: I1122 12:25:26.652219 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bfcfffbc5-t787m" event={"ID":"26d5c78f-5f76-4b87-8018-033b3f4f630f","Type":"ContainerDied","Data":"112058255ff1646b5928803c5bfc309fa05683fa708a028cea28b4f1389d2062"} Nov 22 12:25:26 crc kubenswrapper[4772]: I1122 12:25:26.652240 4772 scope.go:117] "RemoveContainer" containerID="5a8854feb57e1473ea281132aaabfafe5cb12db595a0ea0a4a4b352c0ef7e2c8" Nov 22 12:25:26 crc kubenswrapper[4772]: I1122 12:25:26.652319 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bfcfffbc5-t787m" Nov 22 12:25:26 crc kubenswrapper[4772]: I1122 12:25:26.668124 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26d5c78f-5f76-4b87-8018-033b3f4f630f-config" (OuterVolumeSpecName: "config") pod "26d5c78f-5f76-4b87-8018-033b3f4f630f" (UID: "26d5c78f-5f76-4b87-8018-033b3f4f630f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 12:25:26 crc kubenswrapper[4772]: I1122 12:25:26.677278 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26d5c78f-5f76-4b87-8018-033b3f4f630f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "26d5c78f-5f76-4b87-8018-033b3f4f630f" (UID: "26d5c78f-5f76-4b87-8018-033b3f4f630f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 12:25:26 crc kubenswrapper[4772]: I1122 12:25:26.679281 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26d5c78f-5f76-4b87-8018-033b3f4f630f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "26d5c78f-5f76-4b87-8018-033b3f4f630f" (UID: "26d5c78f-5f76-4b87-8018-033b3f4f630f"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 12:25:26 crc kubenswrapper[4772]: I1122 12:25:26.698899 4772 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/26d5c78f-5f76-4b87-8018-033b3f4f630f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 12:25:26 crc kubenswrapper[4772]: I1122 12:25:26.698943 4772 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26d5c78f-5f76-4b87-8018-033b3f4f630f-config\") on node \"crc\" DevicePath \"\"" Nov 22 12:25:26 crc kubenswrapper[4772]: I1122 12:25:26.698957 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7dm9\" (UniqueName: \"kubernetes.io/projected/26d5c78f-5f76-4b87-8018-033b3f4f630f-kube-api-access-l7dm9\") on node \"crc\" DevicePath \"\"" Nov 22 12:25:26 crc kubenswrapper[4772]: I1122 12:25:26.698972 4772 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/26d5c78f-5f76-4b87-8018-033b3f4f630f-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 12:25:26 crc kubenswrapper[4772]: I1122 12:25:26.698980 4772 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/26d5c78f-5f76-4b87-8018-033b3f4f630f-openstack-cell1\") on node \"crc\" DevicePath \"\"" Nov 22 12:25:26 crc kubenswrapper[4772]: I1122 12:25:26.702063 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26d5c78f-5f76-4b87-8018-033b3f4f630f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "26d5c78f-5f76-4b87-8018-033b3f4f630f" (UID: "26d5c78f-5f76-4b87-8018-033b3f4f630f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 12:25:26 crc kubenswrapper[4772]: I1122 12:25:26.733399 4772 scope.go:117] "RemoveContainer" containerID="2fa5f230361934174d16183ace46d29f6b24b75d671dcbf4a6e96caf3bed87d8" Nov 22 12:25:26 crc kubenswrapper[4772]: I1122 12:25:26.760520 4772 scope.go:117] "RemoveContainer" containerID="5a8854feb57e1473ea281132aaabfafe5cb12db595a0ea0a4a4b352c0ef7e2c8" Nov 22 12:25:26 crc kubenswrapper[4772]: E1122 12:25:26.761154 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a8854feb57e1473ea281132aaabfafe5cb12db595a0ea0a4a4b352c0ef7e2c8\": container with ID starting with 5a8854feb57e1473ea281132aaabfafe5cb12db595a0ea0a4a4b352c0ef7e2c8 not found: ID does not exist" containerID="5a8854feb57e1473ea281132aaabfafe5cb12db595a0ea0a4a4b352c0ef7e2c8" Nov 22 12:25:26 crc kubenswrapper[4772]: I1122 12:25:26.761196 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a8854feb57e1473ea281132aaabfafe5cb12db595a0ea0a4a4b352c0ef7e2c8"} err="failed to get container status \"5a8854feb57e1473ea281132aaabfafe5cb12db595a0ea0a4a4b352c0ef7e2c8\": rpc error: code = NotFound desc = could not find container \"5a8854feb57e1473ea281132aaabfafe5cb12db595a0ea0a4a4b352c0ef7e2c8\": container with ID starting with 5a8854feb57e1473ea281132aaabfafe5cb12db595a0ea0a4a4b352c0ef7e2c8 not found: ID does not exist" Nov 22 12:25:26 crc kubenswrapper[4772]: I1122 12:25:26.761240 4772 scope.go:117] "RemoveContainer" containerID="2fa5f230361934174d16183ace46d29f6b24b75d671dcbf4a6e96caf3bed87d8" Nov 22 12:25:26 crc kubenswrapper[4772]: E1122 12:25:26.761663 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"2fa5f230361934174d16183ace46d29f6b24b75d671dcbf4a6e96caf3bed87d8\": container with ID starting with 2fa5f230361934174d16183ace46d29f6b24b75d671dcbf4a6e96caf3bed87d8 not found: ID does not exist" containerID="2fa5f230361934174d16183ace46d29f6b24b75d671dcbf4a6e96caf3bed87d8" Nov 22 12:25:26 crc kubenswrapper[4772]: I1122 12:25:26.761711 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fa5f230361934174d16183ace46d29f6b24b75d671dcbf4a6e96caf3bed87d8"} err="failed to get container status \"2fa5f230361934174d16183ace46d29f6b24b75d671dcbf4a6e96caf3bed87d8\": rpc error: code = NotFound desc = could not find container \"2fa5f230361934174d16183ace46d29f6b24b75d671dcbf4a6e96caf3bed87d8\": container with ID starting with 2fa5f230361934174d16183ace46d29f6b24b75d671dcbf4a6e96caf3bed87d8 not found: ID does not exist" Nov 22 12:25:26 crc kubenswrapper[4772]: I1122 12:25:26.800013 4772 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/26d5c78f-5f76-4b87-8018-033b3f4f630f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 22 12:25:26 crc kubenswrapper[4772]: I1122 12:25:26.993411 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bfcfffbc5-t787m"] Nov 22 12:25:27 crc kubenswrapper[4772]: I1122 12:25:27.003312 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7bfcfffbc5-t787m"] Nov 22 12:25:27 crc kubenswrapper[4772]: I1122 12:25:27.436206 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26d5c78f-5f76-4b87-8018-033b3f4f630f" path="/var/lib/kubelet/pods/26d5c78f-5f76-4b87-8018-033b3f4f630f/volumes" Nov 22 12:25:31 crc kubenswrapper[4772]: I1122 12:25:31.532717 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 12:25:31 crc kubenswrapper[4772]: I1122 12:25:31.533663 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 12:25:35 crc kubenswrapper[4772]: I1122 12:25:35.777805 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cclw5p" event={"ID":"d5d839d3-cc8e-4c44-9739-f9f76e9ed1c9","Type":"ContainerStarted","Data":"3ee50b9f8213d20f5bc200f8e1ea2184846a0ada7dfd3681b1daf6e68e7e76e7"} Nov 22 12:25:35 crc kubenswrapper[4772]: I1122 12:25:35.812236 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cclw5p" podStartSLOduration=1.758659474 podStartE2EDuration="13.81220178s" podCreationTimestamp="2025-11-22 12:25:22 +0000 UTC" firstStartedPulling="2025-11-22 12:25:23.399867397 +0000 UTC m=+6443.639311901" lastFinishedPulling="2025-11-22 12:25:35.453409703 +0000 UTC m=+6455.692854207" observedRunningTime="2025-11-22 12:25:35.800269745 +0000 UTC m=+6456.039714249" watchObservedRunningTime="2025-11-22 12:25:35.81220178 +0000 UTC m=+6456.051646274" Nov 22 12:25:48 crc 
kubenswrapper[4772]: I1122 12:25:48.936117 4772 generic.go:334] "Generic (PLEG): container finished" podID="d5d839d3-cc8e-4c44-9739-f9f76e9ed1c9" containerID="3ee50b9f8213d20f5bc200f8e1ea2184846a0ada7dfd3681b1daf6e68e7e76e7" exitCode=0 Nov 22 12:25:48 crc kubenswrapper[4772]: I1122 12:25:48.936263 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cclw5p" event={"ID":"d5d839d3-cc8e-4c44-9739-f9f76e9ed1c9","Type":"ContainerDied","Data":"3ee50b9f8213d20f5bc200f8e1ea2184846a0ada7dfd3681b1daf6e68e7e76e7"} Nov 22 12:25:50 crc kubenswrapper[4772]: I1122 12:25:50.459881 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cclw5p" Nov 22 12:25:50 crc kubenswrapper[4772]: I1122 12:25:50.573990 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7ctd\" (UniqueName: \"kubernetes.io/projected/d5d839d3-cc8e-4c44-9739-f9f76e9ed1c9-kube-api-access-x7ctd\") pod \"d5d839d3-cc8e-4c44-9739-f9f76e9ed1c9\" (UID: \"d5d839d3-cc8e-4c44-9739-f9f76e9ed1c9\") " Nov 22 12:25:50 crc kubenswrapper[4772]: I1122 12:25:50.574146 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d5d839d3-cc8e-4c44-9739-f9f76e9ed1c9-ssh-key\") pod \"d5d839d3-cc8e-4c44-9739-f9f76e9ed1c9\" (UID: \"d5d839d3-cc8e-4c44-9739-f9f76e9ed1c9\") " Nov 22 12:25:50 crc kubenswrapper[4772]: I1122 12:25:50.574179 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d5d839d3-cc8e-4c44-9739-f9f76e9ed1c9-inventory\") pod \"d5d839d3-cc8e-4c44-9739-f9f76e9ed1c9\" (UID: \"d5d839d3-cc8e-4c44-9739-f9f76e9ed1c9\") " Nov 22 12:25:50 crc kubenswrapper[4772]: I1122 12:25:50.574253 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d5d839d3-cc8e-4c44-9739-f9f76e9ed1c9-ceph\") pod \"d5d839d3-cc8e-4c44-9739-f9f76e9ed1c9\" (UID: \"d5d839d3-cc8e-4c44-9739-f9f76e9ed1c9\") " Nov 22 12:25:50 crc kubenswrapper[4772]: I1122 12:25:50.574395 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5d839d3-cc8e-4c44-9739-f9f76e9ed1c9-pre-adoption-validation-combined-ca-bundle\") pod \"d5d839d3-cc8e-4c44-9739-f9f76e9ed1c9\" (UID: \"d5d839d3-cc8e-4c44-9739-f9f76e9ed1c9\") " Nov 22 12:25:50 crc kubenswrapper[4772]: I1122 12:25:50.580583 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5d839d3-cc8e-4c44-9739-f9f76e9ed1c9-pre-adoption-validation-combined-ca-bundle" (OuterVolumeSpecName: "pre-adoption-validation-combined-ca-bundle") pod "d5d839d3-cc8e-4c44-9739-f9f76e9ed1c9" (UID: "d5d839d3-cc8e-4c44-9739-f9f76e9ed1c9"). InnerVolumeSpecName "pre-adoption-validation-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:25:50 crc kubenswrapper[4772]: I1122 12:25:50.580886 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5d839d3-cc8e-4c44-9739-f9f76e9ed1c9-kube-api-access-x7ctd" (OuterVolumeSpecName: "kube-api-access-x7ctd") pod "d5d839d3-cc8e-4c44-9739-f9f76e9ed1c9" (UID: "d5d839d3-cc8e-4c44-9739-f9f76e9ed1c9"). InnerVolumeSpecName "kube-api-access-x7ctd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:25:50 crc kubenswrapper[4772]: I1122 12:25:50.581341 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5d839d3-cc8e-4c44-9739-f9f76e9ed1c9-ceph" (OuterVolumeSpecName: "ceph") pod "d5d839d3-cc8e-4c44-9739-f9f76e9ed1c9" (UID: "d5d839d3-cc8e-4c44-9739-f9f76e9ed1c9"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:25:50 crc kubenswrapper[4772]: I1122 12:25:50.631257 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5d839d3-cc8e-4c44-9739-f9f76e9ed1c9-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d5d839d3-cc8e-4c44-9739-f9f76e9ed1c9" (UID: "d5d839d3-cc8e-4c44-9739-f9f76e9ed1c9"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:25:50 crc kubenswrapper[4772]: I1122 12:25:50.633229 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5d839d3-cc8e-4c44-9739-f9f76e9ed1c9-inventory" (OuterVolumeSpecName: "inventory") pod "d5d839d3-cc8e-4c44-9739-f9f76e9ed1c9" (UID: "d5d839d3-cc8e-4c44-9739-f9f76e9ed1c9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:25:50 crc kubenswrapper[4772]: I1122 12:25:50.677244 4772 reconciler_common.go:293] "Volume detached for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5d839d3-cc8e-4c44-9739-f9f76e9ed1c9-pre-adoption-validation-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 12:25:50 crc kubenswrapper[4772]: I1122 12:25:50.677287 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7ctd\" (UniqueName: \"kubernetes.io/projected/d5d839d3-cc8e-4c44-9739-f9f76e9ed1c9-kube-api-access-x7ctd\") on node \"crc\" DevicePath \"\"" Nov 22 12:25:50 crc kubenswrapper[4772]: I1122 12:25:50.677298 4772 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d5d839d3-cc8e-4c44-9739-f9f76e9ed1c9-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 22 12:25:50 crc kubenswrapper[4772]: I1122 12:25:50.677306 4772 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d5d839d3-cc8e-4c44-9739-f9f76e9ed1c9-inventory\") on node \"crc\" DevicePath \"\"" Nov 22 12:25:50 crc kubenswrapper[4772]: I1122 12:25:50.677317 4772 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d5d839d3-cc8e-4c44-9739-f9f76e9ed1c9-ceph\") on node \"crc\" DevicePath \"\"" Nov 22 12:25:50 crc kubenswrapper[4772]: I1122 12:25:50.958757 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cclw5p" event={"ID":"d5d839d3-cc8e-4c44-9739-f9f76e9ed1c9","Type":"ContainerDied","Data":"4efe2697ead13be0d5220f8544dfac7f33362e8476d11decb28530ab49f5aad2"} Nov 22 12:25:50 crc kubenswrapper[4772]: I1122 12:25:50.958797 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4efe2697ead13be0d5220f8544dfac7f33362e8476d11decb28530ab49f5aad2" Nov 22 12:25:50 crc kubenswrapper[4772]: I1122 12:25:50.958835 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cclw5p" Nov 22 12:25:53 crc kubenswrapper[4772]: I1122 12:25:53.062867 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-db-create-msqn9"] Nov 22 12:25:53 crc kubenswrapper[4772]: I1122 12:25:53.081550 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-db-create-msqn9"] Nov 22 12:25:53 crc kubenswrapper[4772]: I1122 12:25:53.433485 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df4749fd-6d78-4f2e-ae1b-c892f3eb3e14" path="/var/lib/kubelet/pods/df4749fd-6d78-4f2e-ae1b-c892f3eb3e14/volumes" Nov 22 12:26:00 crc kubenswrapper[4772]: I1122 12:26:00.637022 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-zlksf"] Nov 22 12:26:00 crc kubenswrapper[4772]: E1122 12:26:00.638833 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26d5c78f-5f76-4b87-8018-033b3f4f630f" containerName="init" Nov 22 12:26:00 crc kubenswrapper[4772]: I1122 12:26:00.638868 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="26d5c78f-5f76-4b87-8018-033b3f4f630f" containerName="init" Nov 22 12:26:00 crc kubenswrapper[4772]: E1122 12:26:00.638916 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26d5c78f-5f76-4b87-8018-033b3f4f630f" containerName="dnsmasq-dns" Nov 22 12:26:00 crc kubenswrapper[4772]: I1122 12:26:00.638934 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="26d5c78f-5f76-4b87-8018-033b3f4f630f" containerName="dnsmasq-dns" Nov 22 12:26:00 crc kubenswrapper[4772]: E1122 12:26:00.638997 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5d839d3-cc8e-4c44-9739-f9f76e9ed1c9" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Nov 22 12:26:00 crc kubenswrapper[4772]: I1122 12:26:00.639018 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5d839d3-cc8e-4c44-9739-f9f76e9ed1c9" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Nov 22 12:26:00 crc kubenswrapper[4772]: I1122 12:26:00.639583 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5d839d3-cc8e-4c44-9739-f9f76e9ed1c9" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Nov 22 12:26:00 crc kubenswrapper[4772]: I1122 12:26:00.639687 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="26d5c78f-5f76-4b87-8018-033b3f4f630f" containerName="dnsmasq-dns" Nov 22 12:26:00 crc kubenswrapper[4772]: I1122 12:26:00.641522 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-zlksf" Nov 22 12:26:00 crc kubenswrapper[4772]: I1122 12:26:00.644347 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-6s2nz" Nov 22 12:26:00 crc kubenswrapper[4772]: I1122 12:26:00.645369 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 22 12:26:00 crc kubenswrapper[4772]: I1122 12:26:00.645742 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 22 12:26:00 crc kubenswrapper[4772]: I1122 12:26:00.645959 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 22 12:26:00 crc kubenswrapper[4772]: I1122 12:26:00.656322 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-zlksf"] Nov 22 12:26:00 crc kubenswrapper[4772]: I1122 12:26:00.735764 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/28d046d2-0d1c-4187-8d76-14d0004ec8e2-ssh-key\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-zlksf\" (UID: \"28d046d2-0d1c-4187-8d76-14d0004ec8e2\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-zlksf" Nov 22 12:26:00 crc kubenswrapper[4772]: I1122 12:26:00.735867 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9v4mc\" (UniqueName: \"kubernetes.io/projected/28d046d2-0d1c-4187-8d76-14d0004ec8e2-kube-api-access-9v4mc\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-zlksf\" (UID: \"28d046d2-0d1c-4187-8d76-14d0004ec8e2\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-zlksf" Nov 22 12:26:00 crc kubenswrapper[4772]: I1122 12:26:00.735976 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/28d046d2-0d1c-4187-8d76-14d0004ec8e2-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-zlksf\" (UID: \"28d046d2-0d1c-4187-8d76-14d0004ec8e2\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-zlksf" Nov 22 12:26:00 crc kubenswrapper[4772]: I1122 12:26:00.736005 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/28d046d2-0d1c-4187-8d76-14d0004ec8e2-ceph\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-zlksf\" (UID: \"28d046d2-0d1c-4187-8d76-14d0004ec8e2\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-zlksf" Nov 22 12:26:00 crc kubenswrapper[4772]: I1122 12:26:00.736136 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28d046d2-0d1c-4187-8d76-14d0004ec8e2-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-zlksf\" (UID: \"28d046d2-0d1c-4187-8d76-14d0004ec8e2\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-zlksf" Nov 22 12:26:00 crc kubenswrapper[4772]: I1122 12:26:00.842447 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/28d046d2-0d1c-4187-8d76-14d0004ec8e2-inventory\") pod 
\"tripleo-cleanup-tripleo-cleanup-openstack-cell1-zlksf\" (UID: \"28d046d2-0d1c-4187-8d76-14d0004ec8e2\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-zlksf" Nov 22 12:26:00 crc kubenswrapper[4772]: I1122 12:26:00.842598 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/28d046d2-0d1c-4187-8d76-14d0004ec8e2-ceph\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-zlksf\" (UID: \"28d046d2-0d1c-4187-8d76-14d0004ec8e2\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-zlksf" Nov 22 12:26:00 crc kubenswrapper[4772]: I1122 12:26:00.842757 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28d046d2-0d1c-4187-8d76-14d0004ec8e2-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-zlksf\" (UID: \"28d046d2-0d1c-4187-8d76-14d0004ec8e2\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-zlksf" Nov 22 12:26:00 crc kubenswrapper[4772]: I1122 12:26:00.842956 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/28d046d2-0d1c-4187-8d76-14d0004ec8e2-ssh-key\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-zlksf\" (UID: \"28d046d2-0d1c-4187-8d76-14d0004ec8e2\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-zlksf" Nov 22 12:26:00 crc kubenswrapper[4772]: I1122 12:26:00.843606 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9v4mc\" (UniqueName: \"kubernetes.io/projected/28d046d2-0d1c-4187-8d76-14d0004ec8e2-kube-api-access-9v4mc\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-zlksf\" (UID: \"28d046d2-0d1c-4187-8d76-14d0004ec8e2\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-zlksf" Nov 22 12:26:00 crc kubenswrapper[4772]: I1122 12:26:00.848556 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/28d046d2-0d1c-4187-8d76-14d0004ec8e2-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-zlksf\" (UID: \"28d046d2-0d1c-4187-8d76-14d0004ec8e2\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-zlksf" Nov 22 12:26:00 crc kubenswrapper[4772]: I1122 12:26:00.852137 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28d046d2-0d1c-4187-8d76-14d0004ec8e2-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-zlksf\" (UID: \"28d046d2-0d1c-4187-8d76-14d0004ec8e2\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-zlksf" Nov 22 12:26:00 crc kubenswrapper[4772]: I1122 12:26:00.856853 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/28d046d2-0d1c-4187-8d76-14d0004ec8e2-ceph\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-zlksf\" (UID: \"28d046d2-0d1c-4187-8d76-14d0004ec8e2\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-zlksf" Nov 22 12:26:00 crc kubenswrapper[4772]: I1122 12:26:00.858716 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/28d046d2-0d1c-4187-8d76-14d0004ec8e2-ssh-key\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-zlksf\" (UID: 
\"28d046d2-0d1c-4187-8d76-14d0004ec8e2\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-zlksf" Nov 22 12:26:00 crc kubenswrapper[4772]: I1122 12:26:00.860542 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9v4mc\" (UniqueName: \"kubernetes.io/projected/28d046d2-0d1c-4187-8d76-14d0004ec8e2-kube-api-access-9v4mc\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-zlksf\" (UID: \"28d046d2-0d1c-4187-8d76-14d0004ec8e2\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-zlksf" Nov 22 12:26:00 crc kubenswrapper[4772]: I1122 12:26:00.975478 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-zlksf" Nov 22 12:26:01 crc kubenswrapper[4772]: I1122 12:26:01.533537 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 12:26:01 crc kubenswrapper[4772]: I1122 12:26:01.534267 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 12:26:01 crc kubenswrapper[4772]: I1122 12:26:01.614911 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-zlksf"] Nov 22 12:26:02 crc kubenswrapper[4772]: I1122 12:26:02.064379 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 22 12:26:02 crc kubenswrapper[4772]: I1122 12:26:02.093146 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-zlksf" event={"ID":"28d046d2-0d1c-4187-8d76-14d0004ec8e2","Type":"ContainerStarted","Data":"aefbac7944a6b5f605b6efffbe36173ce4543b7dec697d09e333fdc34b8c8a15"} Nov 22 12:26:03 crc kubenswrapper[4772]: I1122 12:26:03.106343 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-zlksf" event={"ID":"28d046d2-0d1c-4187-8d76-14d0004ec8e2","Type":"ContainerStarted","Data":"1f6f9d5308533f45f1a3caf9c1fc4fd621455e685ce60593f38693cfe4b2e770"} Nov 22 12:26:03 crc kubenswrapper[4772]: I1122 12:26:03.124672 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-zlksf" podStartSLOduration=2.679614805 podStartE2EDuration="3.12464995s" podCreationTimestamp="2025-11-22 12:26:00 +0000 UTC" firstStartedPulling="2025-11-22 12:26:01.616467064 +0000 UTC m=+6481.855911558" lastFinishedPulling="2025-11-22 12:26:02.061502209 +0000 UTC m=+6482.300946703" observedRunningTime="2025-11-22 12:26:03.120098458 +0000 UTC m=+6483.359543032" watchObservedRunningTime="2025-11-22 12:26:03.12464995 +0000 UTC m=+6483.364094454" Nov 22 12:26:05 crc kubenswrapper[4772]: I1122 12:26:05.037440 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-b1c8-account-create-ngvvq"] Nov 22 12:26:05 crc kubenswrapper[4772]: I1122 12:26:05.072797 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-b1c8-account-create-ngvvq"] Nov 22 
12:26:05 crc kubenswrapper[4772]: I1122 12:26:05.430479 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42d9ec21-53a1-44f7-a74e-604ea29d6f12" path="/var/lib/kubelet/pods/42d9ec21-53a1-44f7-a74e-604ea29d6f12/volumes" Nov 22 12:26:10 crc kubenswrapper[4772]: I1122 12:26:10.051374 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-persistence-db-create-76ctp"] Nov 22 12:26:10 crc kubenswrapper[4772]: I1122 12:26:10.061129 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-persistence-db-create-76ctp"] Nov 22 12:26:11 crc kubenswrapper[4772]: I1122 12:26:11.435327 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66d983c0-8a49-4d7e-a334-77c163aa2a2c" path="/var/lib/kubelet/pods/66d983c0-8a49-4d7e-a334-77c163aa2a2c/volumes" Nov 22 12:26:16 crc kubenswrapper[4772]: I1122 12:26:16.110855 4772 scope.go:117] "RemoveContainer" containerID="dabcfcaa4b1601352bfe563fc95c72cfbcf6f1ed4c4c17aad63184919de40e69" Nov 22 12:26:16 crc kubenswrapper[4772]: I1122 12:26:16.170173 4772 scope.go:117] "RemoveContainer" containerID="34518a421f0bb3e50aaa011270dfbe6220f4f11edfca82ecf7ef71d7ad7008e0" Nov 22 12:26:16 crc kubenswrapper[4772]: I1122 12:26:16.205205 4772 scope.go:117] "RemoveContainer" containerID="01582daa142b5f09aae31994ef00f53c8cfbf69f6edaa5efffb78d6a56140563" Nov 22 12:26:20 crc kubenswrapper[4772]: I1122 12:26:20.057994 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-a103-account-create-ztwqx"] Nov 22 12:26:20 crc kubenswrapper[4772]: I1122 12:26:20.076382 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-a103-account-create-ztwqx"] Nov 22 12:26:21 crc kubenswrapper[4772]: I1122 12:26:21.427872 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6687721-9ca9-43a9-ac6a-1c0f185ced62" path="/var/lib/kubelet/pods/d6687721-9ca9-43a9-ac6a-1c0f185ced62/volumes" Nov 22 12:26:31 crc kubenswrapper[4772]: I1122 12:26:31.533660 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 12:26:31 crc kubenswrapper[4772]: I1122 12:26:31.534644 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 12:26:31 crc kubenswrapper[4772]: I1122 12:26:31.534728 4772 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" Nov 22 12:26:31 crc kubenswrapper[4772]: I1122 12:26:31.536025 4772 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4061fbfbe12c0c8ed9afc045fe35fe5a4572628dd2f2f5a83b54d78da8972014"} pod="openshift-machine-config-operator/machine-config-daemon-wwshd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 12:26:31 crc kubenswrapper[4772]: I1122 12:26:31.536175 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" 
podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" containerID="cri-o://4061fbfbe12c0c8ed9afc045fe35fe5a4572628dd2f2f5a83b54d78da8972014" gracePeriod=600 Nov 22 12:26:31 crc kubenswrapper[4772]: E1122 12:26:31.674078 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:26:32 crc kubenswrapper[4772]: I1122 12:26:32.428525 4772 generic.go:334] "Generic (PLEG): container finished" podID="2386c238-461f-4956-940f-ac3c26eb052e" containerID="4061fbfbe12c0c8ed9afc045fe35fe5a4572628dd2f2f5a83b54d78da8972014" exitCode=0 Nov 22 12:26:32 crc kubenswrapper[4772]: I1122 12:26:32.428617 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" event={"ID":"2386c238-461f-4956-940f-ac3c26eb052e","Type":"ContainerDied","Data":"4061fbfbe12c0c8ed9afc045fe35fe5a4572628dd2f2f5a83b54d78da8972014"} Nov 22 12:26:32 crc kubenswrapper[4772]: I1122 12:26:32.429160 4772 scope.go:117] "RemoveContainer" containerID="4e00768099367f2f555dffd8edb69c3a4098741a37029cc7349a1e6bb1cbc5a1" Nov 22 12:26:32 crc kubenswrapper[4772]: I1122 12:26:32.429983 4772 scope.go:117] "RemoveContainer" containerID="4061fbfbe12c0c8ed9afc045fe35fe5a4572628dd2f2f5a83b54d78da8972014" Nov 22 12:26:32 crc kubenswrapper[4772]: E1122 12:26:32.430309 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:26:45 crc kubenswrapper[4772]: I1122 12:26:45.415159 4772 scope.go:117] "RemoveContainer" containerID="4061fbfbe12c0c8ed9afc045fe35fe5a4572628dd2f2f5a83b54d78da8972014" Nov 22 12:26:45 crc kubenswrapper[4772]: E1122 12:26:45.416567 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:27:00 crc kubenswrapper[4772]: I1122 12:27:00.416679 4772 scope.go:117] "RemoveContainer" containerID="4061fbfbe12c0c8ed9afc045fe35fe5a4572628dd2f2f5a83b54d78da8972014" Nov 22 12:27:00 crc kubenswrapper[4772]: E1122 12:27:00.417617 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:27:12 crc kubenswrapper[4772]: I1122 12:27:12.064094 4772 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/octavia-db-sync-rxg9k"] Nov 22 12:27:12 crc kubenswrapper[4772]: I1122 12:27:12.076076 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-db-sync-rxg9k"] Nov 22 12:27:13 crc kubenswrapper[4772]: I1122 12:27:13.430182 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81eb4121-4766-421d-9d21-8aa5a8801dc8" path="/var/lib/kubelet/pods/81eb4121-4766-421d-9d21-8aa5a8801dc8/volumes" Nov 22 12:27:15 crc kubenswrapper[4772]: I1122 12:27:15.414474 4772 scope.go:117] "RemoveContainer" containerID="4061fbfbe12c0c8ed9afc045fe35fe5a4572628dd2f2f5a83b54d78da8972014" Nov 22 12:27:15 crc kubenswrapper[4772]: E1122 12:27:15.415081 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:27:16 crc kubenswrapper[4772]: I1122 12:27:16.411982 4772 scope.go:117] "RemoveContainer" containerID="8defb1f89090c0e2ebeb6977b0df5f2356761ddd4bf318252a98151a2143d55d" Nov 22 12:27:16 crc kubenswrapper[4772]: I1122 12:27:16.494833 4772 scope.go:117] "RemoveContainer" containerID="8dd4aa32c049b4473d15bee755251e231848ef358d0fca8576bc90f73db5e6e4" Nov 22 12:27:16 crc kubenswrapper[4772]: I1122 12:27:16.532657 4772 scope.go:117] "RemoveContainer" containerID="3b290fa8bf70de9cef440e3667cce7d32c6a6e2f8f30155f9275db16ce67cd50" Nov 22 12:27:27 crc kubenswrapper[4772]: I1122 12:27:27.417869 4772 scope.go:117] "RemoveContainer" containerID="4061fbfbe12c0c8ed9afc045fe35fe5a4572628dd2f2f5a83b54d78da8972014" Nov 22 12:27:27 crc kubenswrapper[4772]: E1122 12:27:27.421367 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:27:39 crc kubenswrapper[4772]: I1122 12:27:39.417734 4772 scope.go:117] "RemoveContainer" containerID="4061fbfbe12c0c8ed9afc045fe35fe5a4572628dd2f2f5a83b54d78da8972014" Nov 22 12:27:39 crc kubenswrapper[4772]: E1122 12:27:39.419203 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:27:52 crc kubenswrapper[4772]: I1122 12:27:52.414262 4772 scope.go:117] "RemoveContainer" containerID="4061fbfbe12c0c8ed9afc045fe35fe5a4572628dd2f2f5a83b54d78da8972014" Nov 22 12:27:52 crc kubenswrapper[4772]: E1122 12:27:52.415157 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:28:07 crc kubenswrapper[4772]: I1122 12:28:07.397535 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rf8wr"] Nov 22 12:28:07 crc kubenswrapper[4772]: I1122 12:28:07.402921 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rf8wr" Nov 22 12:28:07 crc kubenswrapper[4772]: I1122 12:28:07.409150 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rf8wr"] Nov 22 12:28:07 crc kubenswrapper[4772]: I1122 12:28:07.429466 4772 scope.go:117] "RemoveContainer" containerID="4061fbfbe12c0c8ed9afc045fe35fe5a4572628dd2f2f5a83b54d78da8972014" Nov 22 12:28:07 crc kubenswrapper[4772]: E1122 12:28:07.438220 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:28:07 crc kubenswrapper[4772]: I1122 12:28:07.547203 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e863f509-a065-41ed-8b93-a71694155274-utilities\") pod \"community-operators-rf8wr\" (UID: \"e863f509-a065-41ed-8b93-a71694155274\") " pod="openshift-marketplace/community-operators-rf8wr" Nov 22 12:28:07 crc kubenswrapper[4772]: I1122 12:28:07.547278 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjgdx\" (UniqueName: \"kubernetes.io/projected/e863f509-a065-41ed-8b93-a71694155274-kube-api-access-gjgdx\") pod \"community-operators-rf8wr\" (UID: \"e863f509-a065-41ed-8b93-a71694155274\") " pod="openshift-marketplace/community-operators-rf8wr" Nov 22 12:28:07 crc kubenswrapper[4772]: I1122 12:28:07.547329 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e863f509-a065-41ed-8b93-a71694155274-catalog-content\") pod \"community-operators-rf8wr\" (UID: \"e863f509-a065-41ed-8b93-a71694155274\") " pod="openshift-marketplace/community-operators-rf8wr" Nov 22 12:28:07 crc kubenswrapper[4772]: I1122 12:28:07.650251 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e863f509-a065-41ed-8b93-a71694155274-catalog-content\") pod \"community-operators-rf8wr\" (UID: \"e863f509-a065-41ed-8b93-a71694155274\") " pod="openshift-marketplace/community-operators-rf8wr" Nov 22 12:28:07 crc kubenswrapper[4772]: I1122 12:28:07.650637 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e863f509-a065-41ed-8b93-a71694155274-utilities\") pod \"community-operators-rf8wr\" (UID: \"e863f509-a065-41ed-8b93-a71694155274\") " pod="openshift-marketplace/community-operators-rf8wr" Nov 22 12:28:07 crc kubenswrapper[4772]: I1122 12:28:07.650764 4772 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-gjgdx\" (UniqueName: \"kubernetes.io/projected/e863f509-a065-41ed-8b93-a71694155274-kube-api-access-gjgdx\") pod \"community-operators-rf8wr\" (UID: \"e863f509-a065-41ed-8b93-a71694155274\") " pod="openshift-marketplace/community-operators-rf8wr" Nov 22 12:28:07 crc kubenswrapper[4772]: I1122 12:28:07.650860 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e863f509-a065-41ed-8b93-a71694155274-catalog-content\") pod \"community-operators-rf8wr\" (UID: \"e863f509-a065-41ed-8b93-a71694155274\") " pod="openshift-marketplace/community-operators-rf8wr" Nov 22 12:28:07 crc kubenswrapper[4772]: I1122 12:28:07.650978 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e863f509-a065-41ed-8b93-a71694155274-utilities\") pod \"community-operators-rf8wr\" (UID: \"e863f509-a065-41ed-8b93-a71694155274\") " pod="openshift-marketplace/community-operators-rf8wr" Nov 22 12:28:07 crc kubenswrapper[4772]: I1122 12:28:07.676824 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjgdx\" (UniqueName: \"kubernetes.io/projected/e863f509-a065-41ed-8b93-a71694155274-kube-api-access-gjgdx\") pod \"community-operators-rf8wr\" (UID: \"e863f509-a065-41ed-8b93-a71694155274\") " pod="openshift-marketplace/community-operators-rf8wr" Nov 22 12:28:07 crc kubenswrapper[4772]: I1122 12:28:07.741758 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rf8wr" Nov 22 12:28:08 crc kubenswrapper[4772]: I1122 12:28:08.374440 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rf8wr"] Nov 22 12:28:08 crc kubenswrapper[4772]: I1122 12:28:08.598768 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rf8wr" event={"ID":"e863f509-a065-41ed-8b93-a71694155274","Type":"ContainerStarted","Data":"faeda7ba351c502712c89d26acbbf6842e84de72f4bdbf7ce2bff0b4dad4c017"} Nov 22 12:28:09 crc kubenswrapper[4772]: I1122 12:28:09.612763 4772 generic.go:334] "Generic (PLEG): container finished" podID="e863f509-a065-41ed-8b93-a71694155274" containerID="16a29d558d472dd7a6ebdf763b9c472666cf3b38d43a221d45e389144b3a4184" exitCode=0 Nov 22 12:28:09 crc kubenswrapper[4772]: I1122 12:28:09.613191 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rf8wr" event={"ID":"e863f509-a065-41ed-8b93-a71694155274","Type":"ContainerDied","Data":"16a29d558d472dd7a6ebdf763b9c472666cf3b38d43a221d45e389144b3a4184"} Nov 22 12:28:10 crc kubenswrapper[4772]: I1122 12:28:10.634907 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rf8wr" event={"ID":"e863f509-a065-41ed-8b93-a71694155274","Type":"ContainerStarted","Data":"82344ae3fa994d71497406c841518b03780e60903770293cdddb6520627d3dd9"} Nov 22 12:28:14 crc kubenswrapper[4772]: I1122 12:28:14.681152 4772 generic.go:334] "Generic (PLEG): container finished" podID="e863f509-a065-41ed-8b93-a71694155274" containerID="82344ae3fa994d71497406c841518b03780e60903770293cdddb6520627d3dd9" exitCode=0 Nov 22 12:28:14 crc kubenswrapper[4772]: I1122 12:28:14.681274 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rf8wr" 
event={"ID":"e863f509-a065-41ed-8b93-a71694155274","Type":"ContainerDied","Data":"82344ae3fa994d71497406c841518b03780e60903770293cdddb6520627d3dd9"} Nov 22 12:28:15 crc kubenswrapper[4772]: I1122 12:28:15.696936 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rf8wr" event={"ID":"e863f509-a065-41ed-8b93-a71694155274","Type":"ContainerStarted","Data":"2e22c372128dfbf820bfa8992cc8171f4a80f69185123f1644bc956592d52349"} Nov 22 12:28:15 crc kubenswrapper[4772]: I1122 12:28:15.724789 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rf8wr" podStartSLOduration=3.223728375 podStartE2EDuration="8.724766657s" podCreationTimestamp="2025-11-22 12:28:07 +0000 UTC" firstStartedPulling="2025-11-22 12:28:09.616974789 +0000 UTC m=+6609.856419283" lastFinishedPulling="2025-11-22 12:28:15.118013051 +0000 UTC m=+6615.357457565" observedRunningTime="2025-11-22 12:28:15.723421664 +0000 UTC m=+6615.962866178" watchObservedRunningTime="2025-11-22 12:28:15.724766657 +0000 UTC m=+6615.964211161" Nov 22 12:28:17 crc kubenswrapper[4772]: I1122 12:28:17.742735 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rf8wr" Nov 22 12:28:17 crc kubenswrapper[4772]: I1122 12:28:17.743342 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rf8wr" Nov 22 12:28:18 crc kubenswrapper[4772]: I1122 12:28:18.798107 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-rf8wr" podUID="e863f509-a065-41ed-8b93-a71694155274" containerName="registry-server" probeResult="failure" output=< Nov 22 12:28:18 crc kubenswrapper[4772]: timeout: failed to connect service ":50051" within 1s Nov 22 12:28:18 crc kubenswrapper[4772]: > Nov 22 12:28:19 crc kubenswrapper[4772]: I1122 12:28:19.414225 4772 scope.go:117] "RemoveContainer" containerID="4061fbfbe12c0c8ed9afc045fe35fe5a4572628dd2f2f5a83b54d78da8972014" Nov 22 12:28:19 crc kubenswrapper[4772]: E1122 12:28:19.415315 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:28:27 crc kubenswrapper[4772]: I1122 12:28:27.816817 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rf8wr" Nov 22 12:28:27 crc kubenswrapper[4772]: I1122 12:28:27.886927 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rf8wr" Nov 22 12:28:28 crc kubenswrapper[4772]: I1122 12:28:28.072850 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rf8wr"] Nov 22 12:28:29 crc kubenswrapper[4772]: I1122 12:28:29.860463 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rf8wr" podUID="e863f509-a065-41ed-8b93-a71694155274" containerName="registry-server" containerID="cri-o://2e22c372128dfbf820bfa8992cc8171f4a80f69185123f1644bc956592d52349" gracePeriod=2 Nov 22 12:28:30 crc kubenswrapper[4772]: I1122 
12:28:30.412015 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rf8wr" Nov 22 12:28:30 crc kubenswrapper[4772]: I1122 12:28:30.465499 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e863f509-a065-41ed-8b93-a71694155274-utilities\") pod \"e863f509-a065-41ed-8b93-a71694155274\" (UID: \"e863f509-a065-41ed-8b93-a71694155274\") " Nov 22 12:28:30 crc kubenswrapper[4772]: I1122 12:28:30.465667 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e863f509-a065-41ed-8b93-a71694155274-catalog-content\") pod \"e863f509-a065-41ed-8b93-a71694155274\" (UID: \"e863f509-a065-41ed-8b93-a71694155274\") " Nov 22 12:28:30 crc kubenswrapper[4772]: I1122 12:28:30.465730 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjgdx\" (UniqueName: \"kubernetes.io/projected/e863f509-a065-41ed-8b93-a71694155274-kube-api-access-gjgdx\") pod \"e863f509-a065-41ed-8b93-a71694155274\" (UID: \"e863f509-a065-41ed-8b93-a71694155274\") " Nov 22 12:28:30 crc kubenswrapper[4772]: I1122 12:28:30.467393 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e863f509-a065-41ed-8b93-a71694155274-utilities" (OuterVolumeSpecName: "utilities") pod "e863f509-a065-41ed-8b93-a71694155274" (UID: "e863f509-a065-41ed-8b93-a71694155274"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 12:28:30 crc kubenswrapper[4772]: I1122 12:28:30.474795 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e863f509-a065-41ed-8b93-a71694155274-kube-api-access-gjgdx" (OuterVolumeSpecName: "kube-api-access-gjgdx") pod "e863f509-a065-41ed-8b93-a71694155274" (UID: "e863f509-a065-41ed-8b93-a71694155274"). InnerVolumeSpecName "kube-api-access-gjgdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:28:30 crc kubenswrapper[4772]: I1122 12:28:30.536563 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e863f509-a065-41ed-8b93-a71694155274-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e863f509-a065-41ed-8b93-a71694155274" (UID: "e863f509-a065-41ed-8b93-a71694155274"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 12:28:30 crc kubenswrapper[4772]: I1122 12:28:30.568424 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e863f509-a065-41ed-8b93-a71694155274-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 12:28:30 crc kubenswrapper[4772]: I1122 12:28:30.568919 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjgdx\" (UniqueName: \"kubernetes.io/projected/e863f509-a065-41ed-8b93-a71694155274-kube-api-access-gjgdx\") on node \"crc\" DevicePath \"\"" Nov 22 12:28:30 crc kubenswrapper[4772]: I1122 12:28:30.568931 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e863f509-a065-41ed-8b93-a71694155274-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 12:28:30 crc kubenswrapper[4772]: I1122 12:28:30.879859 4772 generic.go:334] "Generic (PLEG): container finished" podID="e863f509-a065-41ed-8b93-a71694155274" containerID="2e22c372128dfbf820bfa8992cc8171f4a80f69185123f1644bc956592d52349" exitCode=0 Nov 22 12:28:30 crc kubenswrapper[4772]: I1122 12:28:30.879914 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rf8wr" event={"ID":"e863f509-a065-41ed-8b93-a71694155274","Type":"ContainerDied","Data":"2e22c372128dfbf820bfa8992cc8171f4a80f69185123f1644bc956592d52349"} Nov 22 12:28:30 crc kubenswrapper[4772]: I1122 12:28:30.879959 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rf8wr" event={"ID":"e863f509-a065-41ed-8b93-a71694155274","Type":"ContainerDied","Data":"faeda7ba351c502712c89d26acbbf6842e84de72f4bdbf7ce2bff0b4dad4c017"} Nov 22 12:28:30 crc kubenswrapper[4772]: I1122 12:28:30.879987 4772 scope.go:117] "RemoveContainer" containerID="2e22c372128dfbf820bfa8992cc8171f4a80f69185123f1644bc956592d52349" Nov 22 12:28:30 crc kubenswrapper[4772]: I1122 12:28:30.879984 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rf8wr" Nov 22 12:28:30 crc kubenswrapper[4772]: I1122 12:28:30.926181 4772 scope.go:117] "RemoveContainer" containerID="82344ae3fa994d71497406c841518b03780e60903770293cdddb6520627d3dd9" Nov 22 12:28:30 crc kubenswrapper[4772]: I1122 12:28:30.929088 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rf8wr"] Nov 22 12:28:30 crc kubenswrapper[4772]: I1122 12:28:30.937824 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rf8wr"] Nov 22 12:28:30 crc kubenswrapper[4772]: I1122 12:28:30.947501 4772 scope.go:117] "RemoveContainer" containerID="16a29d558d472dd7a6ebdf763b9c472666cf3b38d43a221d45e389144b3a4184" Nov 22 12:28:30 crc kubenswrapper[4772]: I1122 12:28:30.994757 4772 scope.go:117] "RemoveContainer" containerID="2e22c372128dfbf820bfa8992cc8171f4a80f69185123f1644bc956592d52349" Nov 22 12:28:30 crc kubenswrapper[4772]: E1122 12:28:30.995278 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e22c372128dfbf820bfa8992cc8171f4a80f69185123f1644bc956592d52349\": container with ID starting with 2e22c372128dfbf820bfa8992cc8171f4a80f69185123f1644bc956592d52349 not found: ID does not exist" containerID="2e22c372128dfbf820bfa8992cc8171f4a80f69185123f1644bc956592d52349" Nov 22 12:28:30 crc kubenswrapper[4772]: I1122 12:28:30.995310 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e22c372128dfbf820bfa8992cc8171f4a80f69185123f1644bc956592d52349"} err="failed to get container status \"2e22c372128dfbf820bfa8992cc8171f4a80f69185123f1644bc956592d52349\": rpc error: code = NotFound desc = could not find container \"2e22c372128dfbf820bfa8992cc8171f4a80f69185123f1644bc956592d52349\": container with ID starting with 2e22c372128dfbf820bfa8992cc8171f4a80f69185123f1644bc956592d52349 not found: ID does not exist" Nov 22 12:28:30 crc kubenswrapper[4772]: I1122 12:28:30.995333 4772 scope.go:117] "RemoveContainer" containerID="82344ae3fa994d71497406c841518b03780e60903770293cdddb6520627d3dd9" Nov 22 12:28:30 crc kubenswrapper[4772]: E1122 12:28:30.995748 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82344ae3fa994d71497406c841518b03780e60903770293cdddb6520627d3dd9\": container with ID starting with 82344ae3fa994d71497406c841518b03780e60903770293cdddb6520627d3dd9 not found: ID does not exist" containerID="82344ae3fa994d71497406c841518b03780e60903770293cdddb6520627d3dd9" Nov 22 12:28:30 crc kubenswrapper[4772]: I1122 12:28:30.995792 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82344ae3fa994d71497406c841518b03780e60903770293cdddb6520627d3dd9"} err="failed to get container status \"82344ae3fa994d71497406c841518b03780e60903770293cdddb6520627d3dd9\": rpc error: code = NotFound desc = could not find container \"82344ae3fa994d71497406c841518b03780e60903770293cdddb6520627d3dd9\": container with ID starting with 82344ae3fa994d71497406c841518b03780e60903770293cdddb6520627d3dd9 not found: ID does not exist" Nov 22 12:28:30 crc kubenswrapper[4772]: I1122 12:28:30.995819 4772 scope.go:117] "RemoveContainer" containerID="16a29d558d472dd7a6ebdf763b9c472666cf3b38d43a221d45e389144b3a4184" Nov 22 12:28:30 crc kubenswrapper[4772]: E1122 12:28:30.996217 4772 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"16a29d558d472dd7a6ebdf763b9c472666cf3b38d43a221d45e389144b3a4184\": container with ID starting with 16a29d558d472dd7a6ebdf763b9c472666cf3b38d43a221d45e389144b3a4184 not found: ID does not exist" containerID="16a29d558d472dd7a6ebdf763b9c472666cf3b38d43a221d45e389144b3a4184" Nov 22 12:28:30 crc kubenswrapper[4772]: I1122 12:28:30.996259 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16a29d558d472dd7a6ebdf763b9c472666cf3b38d43a221d45e389144b3a4184"} err="failed to get container status \"16a29d558d472dd7a6ebdf763b9c472666cf3b38d43a221d45e389144b3a4184\": rpc error: code = NotFound desc = could not find container \"16a29d558d472dd7a6ebdf763b9c472666cf3b38d43a221d45e389144b3a4184\": container with ID starting with 16a29d558d472dd7a6ebdf763b9c472666cf3b38d43a221d45e389144b3a4184 not found: ID does not exist" Nov 22 12:28:31 crc kubenswrapper[4772]: I1122 12:28:31.441153 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e863f509-a065-41ed-8b93-a71694155274" path="/var/lib/kubelet/pods/e863f509-a065-41ed-8b93-a71694155274/volumes" Nov 22 12:28:32 crc kubenswrapper[4772]: I1122 12:28:32.414728 4772 scope.go:117] "RemoveContainer" containerID="4061fbfbe12c0c8ed9afc045fe35fe5a4572628dd2f2f5a83b54d78da8972014" Nov 22 12:28:32 crc kubenswrapper[4772]: E1122 12:28:32.415555 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:28:44 crc kubenswrapper[4772]: I1122 12:28:44.423118 4772 scope.go:117] "RemoveContainer" containerID="4061fbfbe12c0c8ed9afc045fe35fe5a4572628dd2f2f5a83b54d78da8972014" Nov 22 12:28:44 crc kubenswrapper[4772]: E1122 12:28:44.424025 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:28:56 crc kubenswrapper[4772]: I1122 12:28:56.413594 4772 scope.go:117] "RemoveContainer" containerID="4061fbfbe12c0c8ed9afc045fe35fe5a4572628dd2f2f5a83b54d78da8972014" Nov 22 12:28:56 crc kubenswrapper[4772]: E1122 12:28:56.414624 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:29:00 crc kubenswrapper[4772]: I1122 12:29:00.350657 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-g8md5"] Nov 22 12:29:00 crc kubenswrapper[4772]: E1122 12:29:00.351806 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e863f509-a065-41ed-8b93-a71694155274" 
containerName="registry-server" Nov 22 12:29:00 crc kubenswrapper[4772]: I1122 12:29:00.351824 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="e863f509-a065-41ed-8b93-a71694155274" containerName="registry-server" Nov 22 12:29:00 crc kubenswrapper[4772]: E1122 12:29:00.351840 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e863f509-a065-41ed-8b93-a71694155274" containerName="extract-utilities" Nov 22 12:29:00 crc kubenswrapper[4772]: I1122 12:29:00.351848 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="e863f509-a065-41ed-8b93-a71694155274" containerName="extract-utilities" Nov 22 12:29:00 crc kubenswrapper[4772]: E1122 12:29:00.351891 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e863f509-a065-41ed-8b93-a71694155274" containerName="extract-content" Nov 22 12:29:00 crc kubenswrapper[4772]: I1122 12:29:00.351899 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="e863f509-a065-41ed-8b93-a71694155274" containerName="extract-content" Nov 22 12:29:00 crc kubenswrapper[4772]: I1122 12:29:00.352186 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="e863f509-a065-41ed-8b93-a71694155274" containerName="registry-server" Nov 22 12:29:00 crc kubenswrapper[4772]: I1122 12:29:00.354289 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g8md5" Nov 22 12:29:00 crc kubenswrapper[4772]: I1122 12:29:00.364357 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g8md5"] Nov 22 12:29:00 crc kubenswrapper[4772]: I1122 12:29:00.513338 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8cd1b86-24d1-4ab6-bd7a-4440f674a831-utilities\") pod \"redhat-operators-g8md5\" (UID: \"a8cd1b86-24d1-4ab6-bd7a-4440f674a831\") " pod="openshift-marketplace/redhat-operators-g8md5" Nov 22 12:29:00 crc kubenswrapper[4772]: I1122 12:29:00.513784 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6n777\" (UniqueName: \"kubernetes.io/projected/a8cd1b86-24d1-4ab6-bd7a-4440f674a831-kube-api-access-6n777\") pod \"redhat-operators-g8md5\" (UID: \"a8cd1b86-24d1-4ab6-bd7a-4440f674a831\") " pod="openshift-marketplace/redhat-operators-g8md5" Nov 22 12:29:00 crc kubenswrapper[4772]: I1122 12:29:00.513873 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8cd1b86-24d1-4ab6-bd7a-4440f674a831-catalog-content\") pod \"redhat-operators-g8md5\" (UID: \"a8cd1b86-24d1-4ab6-bd7a-4440f674a831\") " pod="openshift-marketplace/redhat-operators-g8md5" Nov 22 12:29:00 crc kubenswrapper[4772]: I1122 12:29:00.615735 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8cd1b86-24d1-4ab6-bd7a-4440f674a831-catalog-content\") pod \"redhat-operators-g8md5\" (UID: \"a8cd1b86-24d1-4ab6-bd7a-4440f674a831\") " pod="openshift-marketplace/redhat-operators-g8md5" Nov 22 12:29:00 crc kubenswrapper[4772]: I1122 12:29:00.615834 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8cd1b86-24d1-4ab6-bd7a-4440f674a831-utilities\") pod \"redhat-operators-g8md5\" (UID: \"a8cd1b86-24d1-4ab6-bd7a-4440f674a831\") " 
pod="openshift-marketplace/redhat-operators-g8md5" Nov 22 12:29:00 crc kubenswrapper[4772]: I1122 12:29:00.615945 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6n777\" (UniqueName: \"kubernetes.io/projected/a8cd1b86-24d1-4ab6-bd7a-4440f674a831-kube-api-access-6n777\") pod \"redhat-operators-g8md5\" (UID: \"a8cd1b86-24d1-4ab6-bd7a-4440f674a831\") " pod="openshift-marketplace/redhat-operators-g8md5" Nov 22 12:29:00 crc kubenswrapper[4772]: I1122 12:29:00.616828 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8cd1b86-24d1-4ab6-bd7a-4440f674a831-catalog-content\") pod \"redhat-operators-g8md5\" (UID: \"a8cd1b86-24d1-4ab6-bd7a-4440f674a831\") " pod="openshift-marketplace/redhat-operators-g8md5" Nov 22 12:29:00 crc kubenswrapper[4772]: I1122 12:29:00.617139 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8cd1b86-24d1-4ab6-bd7a-4440f674a831-utilities\") pod \"redhat-operators-g8md5\" (UID: \"a8cd1b86-24d1-4ab6-bd7a-4440f674a831\") " pod="openshift-marketplace/redhat-operators-g8md5" Nov 22 12:29:00 crc kubenswrapper[4772]: I1122 12:29:00.636179 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6n777\" (UniqueName: \"kubernetes.io/projected/a8cd1b86-24d1-4ab6-bd7a-4440f674a831-kube-api-access-6n777\") pod \"redhat-operators-g8md5\" (UID: \"a8cd1b86-24d1-4ab6-bd7a-4440f674a831\") " pod="openshift-marketplace/redhat-operators-g8md5" Nov 22 12:29:00 crc kubenswrapper[4772]: I1122 12:29:00.683454 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g8md5" Nov 22 12:29:01 crc kubenswrapper[4772]: I1122 12:29:01.221301 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g8md5"] Nov 22 12:29:01 crc kubenswrapper[4772]: I1122 12:29:01.576492 4772 generic.go:334] "Generic (PLEG): container finished" podID="a8cd1b86-24d1-4ab6-bd7a-4440f674a831" containerID="ae0107814ddadac8825511e3ea6d2fd9908906961418d05552f28adc781f6660" exitCode=0 Nov 22 12:29:01 crc kubenswrapper[4772]: I1122 12:29:01.576828 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g8md5" event={"ID":"a8cd1b86-24d1-4ab6-bd7a-4440f674a831","Type":"ContainerDied","Data":"ae0107814ddadac8825511e3ea6d2fd9908906961418d05552f28adc781f6660"} Nov 22 12:29:01 crc kubenswrapper[4772]: I1122 12:29:01.576855 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g8md5" event={"ID":"a8cd1b86-24d1-4ab6-bd7a-4440f674a831","Type":"ContainerStarted","Data":"6fa3428ec80cf8312ec374bf7834d8afdeda9d50261d34135c6f055e0a842979"} Nov 22 12:29:01 crc kubenswrapper[4772]: I1122 12:29:01.578651 4772 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 12:29:02 crc kubenswrapper[4772]: I1122 12:29:02.590431 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g8md5" event={"ID":"a8cd1b86-24d1-4ab6-bd7a-4440f674a831","Type":"ContainerStarted","Data":"fb6ae3fd9ff2de8fb4849abddc27d3779410eb5768b73ce4c9afbc6e11ae0af5"} Nov 22 12:29:06 crc kubenswrapper[4772]: I1122 12:29:06.659624 4772 generic.go:334] "Generic (PLEG): container finished" podID="a8cd1b86-24d1-4ab6-bd7a-4440f674a831" 
containerID="fb6ae3fd9ff2de8fb4849abddc27d3779410eb5768b73ce4c9afbc6e11ae0af5" exitCode=0 Nov 22 12:29:06 crc kubenswrapper[4772]: I1122 12:29:06.659668 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g8md5" event={"ID":"a8cd1b86-24d1-4ab6-bd7a-4440f674a831","Type":"ContainerDied","Data":"fb6ae3fd9ff2de8fb4849abddc27d3779410eb5768b73ce4c9afbc6e11ae0af5"} Nov 22 12:29:08 crc kubenswrapper[4772]: I1122 12:29:08.691486 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g8md5" event={"ID":"a8cd1b86-24d1-4ab6-bd7a-4440f674a831","Type":"ContainerStarted","Data":"19908557cc52bea71a2c56d6219d6ce03aada14ae1a49bbc43c8000774a33a37"} Nov 22 12:29:08 crc kubenswrapper[4772]: I1122 12:29:08.729999 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-g8md5" podStartSLOduration=2.191056352 podStartE2EDuration="8.729972801s" podCreationTimestamp="2025-11-22 12:29:00 +0000 UTC" firstStartedPulling="2025-11-22 12:29:01.57842073 +0000 UTC m=+6661.817865224" lastFinishedPulling="2025-11-22 12:29:08.117337179 +0000 UTC m=+6668.356781673" observedRunningTime="2025-11-22 12:29:08.7173637 +0000 UTC m=+6668.956808214" watchObservedRunningTime="2025-11-22 12:29:08.729972801 +0000 UTC m=+6668.969417315" Nov 22 12:29:09 crc kubenswrapper[4772]: I1122 12:29:09.413911 4772 scope.go:117] "RemoveContainer" containerID="4061fbfbe12c0c8ed9afc045fe35fe5a4572628dd2f2f5a83b54d78da8972014" Nov 22 12:29:09 crc kubenswrapper[4772]: E1122 12:29:09.414352 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:29:10 crc kubenswrapper[4772]: I1122 12:29:10.683976 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-g8md5" Nov 22 12:29:10 crc kubenswrapper[4772]: I1122 12:29:10.689074 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-g8md5" Nov 22 12:29:11 crc kubenswrapper[4772]: I1122 12:29:11.739460 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-g8md5" podUID="a8cd1b86-24d1-4ab6-bd7a-4440f674a831" containerName="registry-server" probeResult="failure" output=< Nov 22 12:29:11 crc kubenswrapper[4772]: timeout: failed to connect service ":50051" within 1s Nov 22 12:29:11 crc kubenswrapper[4772]: > Nov 22 12:29:20 crc kubenswrapper[4772]: I1122 12:29:20.761842 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-g8md5" Nov 22 12:29:20 crc kubenswrapper[4772]: I1122 12:29:20.867849 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-g8md5" Nov 22 12:29:21 crc kubenswrapper[4772]: I1122 12:29:21.017994 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g8md5"] Nov 22 12:29:21 crc kubenswrapper[4772]: I1122 12:29:21.893351 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-g8md5" 
podUID="a8cd1b86-24d1-4ab6-bd7a-4440f674a831" containerName="registry-server" containerID="cri-o://19908557cc52bea71a2c56d6219d6ce03aada14ae1a49bbc43c8000774a33a37" gracePeriod=2 Nov 22 12:29:22 crc kubenswrapper[4772]: I1122 12:29:22.477860 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g8md5" Nov 22 12:29:22 crc kubenswrapper[4772]: I1122 12:29:22.580329 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8cd1b86-24d1-4ab6-bd7a-4440f674a831-utilities\") pod \"a8cd1b86-24d1-4ab6-bd7a-4440f674a831\" (UID: \"a8cd1b86-24d1-4ab6-bd7a-4440f674a831\") " Nov 22 12:29:22 crc kubenswrapper[4772]: I1122 12:29:22.580470 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8cd1b86-24d1-4ab6-bd7a-4440f674a831-catalog-content\") pod \"a8cd1b86-24d1-4ab6-bd7a-4440f674a831\" (UID: \"a8cd1b86-24d1-4ab6-bd7a-4440f674a831\") " Nov 22 12:29:22 crc kubenswrapper[4772]: I1122 12:29:22.580567 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6n777\" (UniqueName: \"kubernetes.io/projected/a8cd1b86-24d1-4ab6-bd7a-4440f674a831-kube-api-access-6n777\") pod \"a8cd1b86-24d1-4ab6-bd7a-4440f674a831\" (UID: \"a8cd1b86-24d1-4ab6-bd7a-4440f674a831\") " Nov 22 12:29:22 crc kubenswrapper[4772]: I1122 12:29:22.581593 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8cd1b86-24d1-4ab6-bd7a-4440f674a831-utilities" (OuterVolumeSpecName: "utilities") pod "a8cd1b86-24d1-4ab6-bd7a-4440f674a831" (UID: "a8cd1b86-24d1-4ab6-bd7a-4440f674a831"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 12:29:22 crc kubenswrapper[4772]: I1122 12:29:22.581860 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8cd1b86-24d1-4ab6-bd7a-4440f674a831-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 12:29:22 crc kubenswrapper[4772]: I1122 12:29:22.605403 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8cd1b86-24d1-4ab6-bd7a-4440f674a831-kube-api-access-6n777" (OuterVolumeSpecName: "kube-api-access-6n777") pod "a8cd1b86-24d1-4ab6-bd7a-4440f674a831" (UID: "a8cd1b86-24d1-4ab6-bd7a-4440f674a831"). InnerVolumeSpecName "kube-api-access-6n777". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:29:22 crc kubenswrapper[4772]: I1122 12:29:22.679453 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8cd1b86-24d1-4ab6-bd7a-4440f674a831-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a8cd1b86-24d1-4ab6-bd7a-4440f674a831" (UID: "a8cd1b86-24d1-4ab6-bd7a-4440f674a831"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 12:29:22 crc kubenswrapper[4772]: I1122 12:29:22.684075 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6n777\" (UniqueName: \"kubernetes.io/projected/a8cd1b86-24d1-4ab6-bd7a-4440f674a831-kube-api-access-6n777\") on node \"crc\" DevicePath \"\"" Nov 22 12:29:22 crc kubenswrapper[4772]: I1122 12:29:22.684097 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8cd1b86-24d1-4ab6-bd7a-4440f674a831-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 12:29:22 crc kubenswrapper[4772]: I1122 12:29:22.907572 4772 generic.go:334] "Generic (PLEG): container finished" podID="a8cd1b86-24d1-4ab6-bd7a-4440f674a831" containerID="19908557cc52bea71a2c56d6219d6ce03aada14ae1a49bbc43c8000774a33a37" exitCode=0 Nov 22 12:29:22 crc kubenswrapper[4772]: I1122 12:29:22.907646 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g8md5" Nov 22 12:29:22 crc kubenswrapper[4772]: I1122 12:29:22.907669 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g8md5" event={"ID":"a8cd1b86-24d1-4ab6-bd7a-4440f674a831","Type":"ContainerDied","Data":"19908557cc52bea71a2c56d6219d6ce03aada14ae1a49bbc43c8000774a33a37"} Nov 22 12:29:22 crc kubenswrapper[4772]: I1122 12:29:22.907725 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g8md5" event={"ID":"a8cd1b86-24d1-4ab6-bd7a-4440f674a831","Type":"ContainerDied","Data":"6fa3428ec80cf8312ec374bf7834d8afdeda9d50261d34135c6f055e0a842979"} Nov 22 12:29:22 crc kubenswrapper[4772]: I1122 12:29:22.907744 4772 scope.go:117] "RemoveContainer" containerID="19908557cc52bea71a2c56d6219d6ce03aada14ae1a49bbc43c8000774a33a37" Nov 22 12:29:22 crc kubenswrapper[4772]: I1122 12:29:22.933356 4772 scope.go:117] "RemoveContainer" containerID="fb6ae3fd9ff2de8fb4849abddc27d3779410eb5768b73ce4c9afbc6e11ae0af5" Nov 22 12:29:22 crc kubenswrapper[4772]: I1122 12:29:22.950541 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g8md5"] Nov 22 12:29:22 crc kubenswrapper[4772]: I1122 12:29:22.961486 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-g8md5"] Nov 22 12:29:22 crc kubenswrapper[4772]: I1122 12:29:22.979815 4772 scope.go:117] "RemoveContainer" containerID="ae0107814ddadac8825511e3ea6d2fd9908906961418d05552f28adc781f6660" Nov 22 12:29:23 crc kubenswrapper[4772]: I1122 12:29:23.013141 4772 scope.go:117] "RemoveContainer" containerID="19908557cc52bea71a2c56d6219d6ce03aada14ae1a49bbc43c8000774a33a37" Nov 22 12:29:23 crc kubenswrapper[4772]: E1122 12:29:23.013650 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19908557cc52bea71a2c56d6219d6ce03aada14ae1a49bbc43c8000774a33a37\": container with ID starting with 19908557cc52bea71a2c56d6219d6ce03aada14ae1a49bbc43c8000774a33a37 not found: ID does not exist" containerID="19908557cc52bea71a2c56d6219d6ce03aada14ae1a49bbc43c8000774a33a37" Nov 22 12:29:23 crc kubenswrapper[4772]: I1122 12:29:23.013801 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19908557cc52bea71a2c56d6219d6ce03aada14ae1a49bbc43c8000774a33a37"} err="failed to get container status \"19908557cc52bea71a2c56d6219d6ce03aada14ae1a49bbc43c8000774a33a37\": 
rpc error: code = NotFound desc = could not find container \"19908557cc52bea71a2c56d6219d6ce03aada14ae1a49bbc43c8000774a33a37\": container with ID starting with 19908557cc52bea71a2c56d6219d6ce03aada14ae1a49bbc43c8000774a33a37 not found: ID does not exist" Nov 22 12:29:23 crc kubenswrapper[4772]: I1122 12:29:23.013933 4772 scope.go:117] "RemoveContainer" containerID="fb6ae3fd9ff2de8fb4849abddc27d3779410eb5768b73ce4c9afbc6e11ae0af5" Nov 22 12:29:23 crc kubenswrapper[4772]: E1122 12:29:23.014431 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb6ae3fd9ff2de8fb4849abddc27d3779410eb5768b73ce4c9afbc6e11ae0af5\": container with ID starting with fb6ae3fd9ff2de8fb4849abddc27d3779410eb5768b73ce4c9afbc6e11ae0af5 not found: ID does not exist" containerID="fb6ae3fd9ff2de8fb4849abddc27d3779410eb5768b73ce4c9afbc6e11ae0af5" Nov 22 12:29:23 crc kubenswrapper[4772]: I1122 12:29:23.014570 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb6ae3fd9ff2de8fb4849abddc27d3779410eb5768b73ce4c9afbc6e11ae0af5"} err="failed to get container status \"fb6ae3fd9ff2de8fb4849abddc27d3779410eb5768b73ce4c9afbc6e11ae0af5\": rpc error: code = NotFound desc = could not find container \"fb6ae3fd9ff2de8fb4849abddc27d3779410eb5768b73ce4c9afbc6e11ae0af5\": container with ID starting with fb6ae3fd9ff2de8fb4849abddc27d3779410eb5768b73ce4c9afbc6e11ae0af5 not found: ID does not exist" Nov 22 12:29:23 crc kubenswrapper[4772]: I1122 12:29:23.014957 4772 scope.go:117] "RemoveContainer" containerID="ae0107814ddadac8825511e3ea6d2fd9908906961418d05552f28adc781f6660" Nov 22 12:29:23 crc kubenswrapper[4772]: E1122 12:29:23.015382 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae0107814ddadac8825511e3ea6d2fd9908906961418d05552f28adc781f6660\": container with ID starting with ae0107814ddadac8825511e3ea6d2fd9908906961418d05552f28adc781f6660 not found: ID does not exist" containerID="ae0107814ddadac8825511e3ea6d2fd9908906961418d05552f28adc781f6660" Nov 22 12:29:23 crc kubenswrapper[4772]: I1122 12:29:23.015525 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae0107814ddadac8825511e3ea6d2fd9908906961418d05552f28adc781f6660"} err="failed to get container status \"ae0107814ddadac8825511e3ea6d2fd9908906961418d05552f28adc781f6660\": rpc error: code = NotFound desc = could not find container \"ae0107814ddadac8825511e3ea6d2fd9908906961418d05552f28adc781f6660\": container with ID starting with ae0107814ddadac8825511e3ea6d2fd9908906961418d05552f28adc781f6660 not found: ID does not exist" Nov 22 12:29:23 crc kubenswrapper[4772]: I1122 12:29:23.430491 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8cd1b86-24d1-4ab6-bd7a-4440f674a831" path="/var/lib/kubelet/pods/a8cd1b86-24d1-4ab6-bd7a-4440f674a831/volumes" Nov 22 12:29:24 crc kubenswrapper[4772]: I1122 12:29:24.414244 4772 scope.go:117] "RemoveContainer" containerID="4061fbfbe12c0c8ed9afc045fe35fe5a4572628dd2f2f5a83b54d78da8972014" Nov 22 12:29:24 crc kubenswrapper[4772]: E1122 12:29:24.414680 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:29:36 crc kubenswrapper[4772]: I1122 12:29:36.414970 4772 scope.go:117] "RemoveContainer" containerID="4061fbfbe12c0c8ed9afc045fe35fe5a4572628dd2f2f5a83b54d78da8972014" Nov 22 12:29:36 crc kubenswrapper[4772]: E1122 12:29:36.416130 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:29:48 crc kubenswrapper[4772]: I1122 12:29:48.413742 4772 scope.go:117] "RemoveContainer" containerID="4061fbfbe12c0c8ed9afc045fe35fe5a4572628dd2f2f5a83b54d78da8972014" Nov 22 12:29:48 crc kubenswrapper[4772]: E1122 12:29:48.414905 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:30:00 crc kubenswrapper[4772]: I1122 12:30:00.178848 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396910-ww9lh"] Nov 22 12:30:00 crc kubenswrapper[4772]: E1122 12:30:00.181449 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8cd1b86-24d1-4ab6-bd7a-4440f674a831" containerName="registry-server" Nov 22 12:30:00 crc kubenswrapper[4772]: I1122 12:30:00.181573 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8cd1b86-24d1-4ab6-bd7a-4440f674a831" containerName="registry-server" Nov 22 12:30:00 crc kubenswrapper[4772]: E1122 12:30:00.181702 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8cd1b86-24d1-4ab6-bd7a-4440f674a831" containerName="extract-content" Nov 22 12:30:00 crc kubenswrapper[4772]: I1122 12:30:00.181793 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8cd1b86-24d1-4ab6-bd7a-4440f674a831" containerName="extract-content" Nov 22 12:30:00 crc kubenswrapper[4772]: E1122 12:30:00.181967 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8cd1b86-24d1-4ab6-bd7a-4440f674a831" containerName="extract-utilities" Nov 22 12:30:00 crc kubenswrapper[4772]: I1122 12:30:00.182090 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8cd1b86-24d1-4ab6-bd7a-4440f674a831" containerName="extract-utilities" Nov 22 12:30:00 crc kubenswrapper[4772]: I1122 12:30:00.182680 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8cd1b86-24d1-4ab6-bd7a-4440f674a831" containerName="registry-server" Nov 22 12:30:00 crc kubenswrapper[4772]: I1122 12:30:00.183779 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396910-ww9lh" Nov 22 12:30:00 crc kubenswrapper[4772]: I1122 12:30:00.186896 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 22 12:30:00 crc kubenswrapper[4772]: I1122 12:30:00.186975 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 22 12:30:00 crc kubenswrapper[4772]: I1122 12:30:00.190501 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396910-ww9lh"] Nov 22 12:30:00 crc kubenswrapper[4772]: I1122 12:30:00.249850 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jmbf\" (UniqueName: \"kubernetes.io/projected/32bbb8cc-2577-4f70-9a10-682232f0b57b-kube-api-access-6jmbf\") pod \"collect-profiles-29396910-ww9lh\" (UID: \"32bbb8cc-2577-4f70-9a10-682232f0b57b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396910-ww9lh" Nov 22 12:30:00 crc kubenswrapper[4772]: I1122 12:30:00.249953 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/32bbb8cc-2577-4f70-9a10-682232f0b57b-secret-volume\") pod \"collect-profiles-29396910-ww9lh\" (UID: \"32bbb8cc-2577-4f70-9a10-682232f0b57b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396910-ww9lh" Nov 22 12:30:00 crc kubenswrapper[4772]: I1122 12:30:00.250058 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/32bbb8cc-2577-4f70-9a10-682232f0b57b-config-volume\") pod \"collect-profiles-29396910-ww9lh\" (UID: \"32bbb8cc-2577-4f70-9a10-682232f0b57b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396910-ww9lh" Nov 22 12:30:00 crc kubenswrapper[4772]: I1122 12:30:00.351751 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jmbf\" (UniqueName: \"kubernetes.io/projected/32bbb8cc-2577-4f70-9a10-682232f0b57b-kube-api-access-6jmbf\") pod \"collect-profiles-29396910-ww9lh\" (UID: \"32bbb8cc-2577-4f70-9a10-682232f0b57b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396910-ww9lh" Nov 22 12:30:00 crc kubenswrapper[4772]: I1122 12:30:00.351851 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/32bbb8cc-2577-4f70-9a10-682232f0b57b-secret-volume\") pod \"collect-profiles-29396910-ww9lh\" (UID: \"32bbb8cc-2577-4f70-9a10-682232f0b57b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396910-ww9lh" Nov 22 12:30:00 crc kubenswrapper[4772]: I1122 12:30:00.351899 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/32bbb8cc-2577-4f70-9a10-682232f0b57b-config-volume\") pod \"collect-profiles-29396910-ww9lh\" (UID: \"32bbb8cc-2577-4f70-9a10-682232f0b57b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396910-ww9lh" Nov 22 12:30:00 crc kubenswrapper[4772]: I1122 12:30:00.353197 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/32bbb8cc-2577-4f70-9a10-682232f0b57b-config-volume\") pod 
\"collect-profiles-29396910-ww9lh\" (UID: \"32bbb8cc-2577-4f70-9a10-682232f0b57b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396910-ww9lh" Nov 22 12:30:00 crc kubenswrapper[4772]: I1122 12:30:00.365776 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/32bbb8cc-2577-4f70-9a10-682232f0b57b-secret-volume\") pod \"collect-profiles-29396910-ww9lh\" (UID: \"32bbb8cc-2577-4f70-9a10-682232f0b57b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396910-ww9lh" Nov 22 12:30:00 crc kubenswrapper[4772]: I1122 12:30:00.368700 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jmbf\" (UniqueName: \"kubernetes.io/projected/32bbb8cc-2577-4f70-9a10-682232f0b57b-kube-api-access-6jmbf\") pod \"collect-profiles-29396910-ww9lh\" (UID: \"32bbb8cc-2577-4f70-9a10-682232f0b57b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396910-ww9lh" Nov 22 12:30:00 crc kubenswrapper[4772]: I1122 12:30:00.510070 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396910-ww9lh" Nov 22 12:30:01 crc kubenswrapper[4772]: I1122 12:30:01.021705 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396910-ww9lh"] Nov 22 12:30:01 crc kubenswrapper[4772]: W1122 12:30:01.027035 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32bbb8cc_2577_4f70_9a10_682232f0b57b.slice/crio-38f6bd7f6444925d91bf4da3632900434aedead0fdb081627d0b178f8c38d514 WatchSource:0}: Error finding container 38f6bd7f6444925d91bf4da3632900434aedead0fdb081627d0b178f8c38d514: Status 404 returned error can't find the container with id 38f6bd7f6444925d91bf4da3632900434aedead0fdb081627d0b178f8c38d514 Nov 22 12:30:01 crc kubenswrapper[4772]: I1122 12:30:01.368088 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396910-ww9lh" event={"ID":"32bbb8cc-2577-4f70-9a10-682232f0b57b","Type":"ContainerStarted","Data":"94e68aaaf2f221e0f9ef78189cd4909dd7b3d7c85c2c1fa032ac350c1b3086cd"} Nov 22 12:30:01 crc kubenswrapper[4772]: I1122 12:30:01.368140 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396910-ww9lh" event={"ID":"32bbb8cc-2577-4f70-9a10-682232f0b57b","Type":"ContainerStarted","Data":"38f6bd7f6444925d91bf4da3632900434aedead0fdb081627d0b178f8c38d514"} Nov 22 12:30:01 crc kubenswrapper[4772]: I1122 12:30:01.387914 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29396910-ww9lh" podStartSLOduration=1.387890402 podStartE2EDuration="1.387890402s" podCreationTimestamp="2025-11-22 12:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 12:30:01.379884314 +0000 UTC m=+6721.619328818" watchObservedRunningTime="2025-11-22 12:30:01.387890402 +0000 UTC m=+6721.627334906" Nov 22 12:30:02 crc kubenswrapper[4772]: I1122 12:30:02.383709 4772 generic.go:334] "Generic (PLEG): container finished" podID="32bbb8cc-2577-4f70-9a10-682232f0b57b" containerID="94e68aaaf2f221e0f9ef78189cd4909dd7b3d7c85c2c1fa032ac350c1b3086cd" exitCode=0 Nov 22 12:30:02 crc kubenswrapper[4772]: I1122 12:30:02.383811 
4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396910-ww9lh" event={"ID":"32bbb8cc-2577-4f70-9a10-682232f0b57b","Type":"ContainerDied","Data":"94e68aaaf2f221e0f9ef78189cd4909dd7b3d7c85c2c1fa032ac350c1b3086cd"} Nov 22 12:30:02 crc kubenswrapper[4772]: I1122 12:30:02.416524 4772 scope.go:117] "RemoveContainer" containerID="4061fbfbe12c0c8ed9afc045fe35fe5a4572628dd2f2f5a83b54d78da8972014" Nov 22 12:30:02 crc kubenswrapper[4772]: E1122 12:30:02.417704 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:30:03 crc kubenswrapper[4772]: I1122 12:30:03.888762 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396910-ww9lh" Nov 22 12:30:04 crc kubenswrapper[4772]: I1122 12:30:04.049110 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jmbf\" (UniqueName: \"kubernetes.io/projected/32bbb8cc-2577-4f70-9a10-682232f0b57b-kube-api-access-6jmbf\") pod \"32bbb8cc-2577-4f70-9a10-682232f0b57b\" (UID: \"32bbb8cc-2577-4f70-9a10-682232f0b57b\") " Nov 22 12:30:04 crc kubenswrapper[4772]: I1122 12:30:04.049195 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/32bbb8cc-2577-4f70-9a10-682232f0b57b-config-volume\") pod \"32bbb8cc-2577-4f70-9a10-682232f0b57b\" (UID: \"32bbb8cc-2577-4f70-9a10-682232f0b57b\") " Nov 22 12:30:04 crc kubenswrapper[4772]: I1122 12:30:04.049265 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/32bbb8cc-2577-4f70-9a10-682232f0b57b-secret-volume\") pod \"32bbb8cc-2577-4f70-9a10-682232f0b57b\" (UID: \"32bbb8cc-2577-4f70-9a10-682232f0b57b\") " Nov 22 12:30:04 crc kubenswrapper[4772]: I1122 12:30:04.050400 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32bbb8cc-2577-4f70-9a10-682232f0b57b-config-volume" (OuterVolumeSpecName: "config-volume") pod "32bbb8cc-2577-4f70-9a10-682232f0b57b" (UID: "32bbb8cc-2577-4f70-9a10-682232f0b57b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 12:30:04 crc kubenswrapper[4772]: I1122 12:30:04.058209 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32bbb8cc-2577-4f70-9a10-682232f0b57b-kube-api-access-6jmbf" (OuterVolumeSpecName: "kube-api-access-6jmbf") pod "32bbb8cc-2577-4f70-9a10-682232f0b57b" (UID: "32bbb8cc-2577-4f70-9a10-682232f0b57b"). InnerVolumeSpecName "kube-api-access-6jmbf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:30:04 crc kubenswrapper[4772]: I1122 12:30:04.059340 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32bbb8cc-2577-4f70-9a10-682232f0b57b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "32bbb8cc-2577-4f70-9a10-682232f0b57b" (UID: "32bbb8cc-2577-4f70-9a10-682232f0b57b"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:30:04 crc kubenswrapper[4772]: I1122 12:30:04.152790 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jmbf\" (UniqueName: \"kubernetes.io/projected/32bbb8cc-2577-4f70-9a10-682232f0b57b-kube-api-access-6jmbf\") on node \"crc\" DevicePath \"\"" Nov 22 12:30:04 crc kubenswrapper[4772]: I1122 12:30:04.153318 4772 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/32bbb8cc-2577-4f70-9a10-682232f0b57b-config-volume\") on node \"crc\" DevicePath \"\"" Nov 22 12:30:04 crc kubenswrapper[4772]: I1122 12:30:04.153354 4772 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/32bbb8cc-2577-4f70-9a10-682232f0b57b-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 22 12:30:04 crc kubenswrapper[4772]: I1122 12:30:04.434591 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396910-ww9lh" event={"ID":"32bbb8cc-2577-4f70-9a10-682232f0b57b","Type":"ContainerDied","Data":"38f6bd7f6444925d91bf4da3632900434aedead0fdb081627d0b178f8c38d514"} Nov 22 12:30:04 crc kubenswrapper[4772]: I1122 12:30:04.434667 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38f6bd7f6444925d91bf4da3632900434aedead0fdb081627d0b178f8c38d514" Nov 22 12:30:04 crc kubenswrapper[4772]: I1122 12:30:04.434760 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396910-ww9lh" Nov 22 12:30:04 crc kubenswrapper[4772]: I1122 12:30:04.480794 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396865-8fmbw"] Nov 22 12:30:04 crc kubenswrapper[4772]: I1122 12:30:04.495556 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396865-8fmbw"] Nov 22 12:30:05 crc kubenswrapper[4772]: I1122 12:30:05.431269 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58350978-93aa-401c-8930-a3c1b2550917" path="/var/lib/kubelet/pods/58350978-93aa-401c-8930-a3c1b2550917/volumes" Nov 22 12:30:12 crc kubenswrapper[4772]: I1122 12:30:12.042995 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-kv7sj"] Nov 22 12:30:12 crc kubenswrapper[4772]: I1122 12:30:12.056528 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-kv7sj"] Nov 22 12:30:13 crc kubenswrapper[4772]: I1122 12:30:13.424028 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="401a4db8-bda5-4a1c-a4b1-6f054baa0f0e" path="/var/lib/kubelet/pods/401a4db8-bda5-4a1c-a4b1-6f054baa0f0e/volumes" Nov 22 12:30:16 crc kubenswrapper[4772]: I1122 12:30:16.716740 4772 scope.go:117] "RemoveContainer" containerID="d574924aea20dae214b4ce48d79870e2b8fb87fe7bdea2b8b98300c05ede4d3d" Nov 22 12:30:16 crc kubenswrapper[4772]: I1122 12:30:16.750650 4772 scope.go:117] "RemoveContainer" containerID="5df4f8336d4ae5c7179cb4bb886bd4ef68c6f683fb446f7c13985af4b4423647" Nov 22 12:30:17 crc kubenswrapper[4772]: I1122 12:30:17.414446 4772 scope.go:117] "RemoveContainer" containerID="4061fbfbe12c0c8ed9afc045fe35fe5a4572628dd2f2f5a83b54d78da8972014" Nov 22 12:30:17 crc kubenswrapper[4772]: E1122 12:30:17.415241 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:30:22 crc kubenswrapper[4772]: I1122 12:30:22.037102 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-91be-account-create-nj6nz"] Nov 22 12:30:22 crc kubenswrapper[4772]: I1122 12:30:22.049978 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-91be-account-create-nj6nz"] Nov 22 12:30:23 crc kubenswrapper[4772]: I1122 12:30:23.426867 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79510b19-44ef-40ed-8f20-0f8f43673e6b" path="/var/lib/kubelet/pods/79510b19-44ef-40ed-8f20-0f8f43673e6b/volumes" Nov 22 12:30:28 crc kubenswrapper[4772]: I1122 12:30:28.414672 4772 scope.go:117] "RemoveContainer" containerID="4061fbfbe12c0c8ed9afc045fe35fe5a4572628dd2f2f5a83b54d78da8972014" Nov 22 12:30:28 crc kubenswrapper[4772]: E1122 12:30:28.415763 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:30:36 crc kubenswrapper[4772]: I1122 12:30:36.047256 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-8cjvn"] Nov 22 12:30:36 crc kubenswrapper[4772]: I1122 12:30:36.056613 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-8cjvn"] Nov 22 12:30:37 crc kubenswrapper[4772]: I1122 12:30:37.427217 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de9f7e09-5900-441f-b877-40f36f5eaa50" path="/var/lib/kubelet/pods/de9f7e09-5900-441f-b877-40f36f5eaa50/volumes" Nov 22 12:30:43 crc kubenswrapper[4772]: I1122 12:30:43.414137 4772 scope.go:117] "RemoveContainer" containerID="4061fbfbe12c0c8ed9afc045fe35fe5a4572628dd2f2f5a83b54d78da8972014" Nov 22 12:30:43 crc kubenswrapper[4772]: E1122 12:30:43.415559 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:30:54 crc kubenswrapper[4772]: I1122 12:30:54.413904 4772 scope.go:117] "RemoveContainer" containerID="4061fbfbe12c0c8ed9afc045fe35fe5a4572628dd2f2f5a83b54d78da8972014" Nov 22 12:30:54 crc kubenswrapper[4772]: E1122 12:30:54.414680 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:31:06 crc kubenswrapper[4772]: I1122 12:31:06.414075 
4772 scope.go:117] "RemoveContainer" containerID="4061fbfbe12c0c8ed9afc045fe35fe5a4572628dd2f2f5a83b54d78da8972014" Nov 22 12:31:06 crc kubenswrapper[4772]: E1122 12:31:06.415332 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:31:16 crc kubenswrapper[4772]: I1122 12:31:16.954592 4772 scope.go:117] "RemoveContainer" containerID="a13eb4f77eaa899b929e0e319953286ac76c13f5f706b402565653b5501c87fd" Nov 22 12:31:17 crc kubenswrapper[4772]: I1122 12:31:17.013310 4772 scope.go:117] "RemoveContainer" containerID="9a5560179fa9863628d10042c248d514748e422ba4f83d4a355fae87943d3f44" Nov 22 12:31:19 crc kubenswrapper[4772]: I1122 12:31:19.413742 4772 scope.go:117] "RemoveContainer" containerID="4061fbfbe12c0c8ed9afc045fe35fe5a4572628dd2f2f5a83b54d78da8972014" Nov 22 12:31:19 crc kubenswrapper[4772]: E1122 12:31:19.414837 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:31:30 crc kubenswrapper[4772]: I1122 12:31:30.413406 4772 scope.go:117] "RemoveContainer" containerID="4061fbfbe12c0c8ed9afc045fe35fe5a4572628dd2f2f5a83b54d78da8972014" Nov 22 12:31:30 crc kubenswrapper[4772]: E1122 12:31:30.414556 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:31:44 crc kubenswrapper[4772]: I1122 12:31:44.413775 4772 scope.go:117] "RemoveContainer" containerID="4061fbfbe12c0c8ed9afc045fe35fe5a4572628dd2f2f5a83b54d78da8972014" Nov 22 12:31:45 crc kubenswrapper[4772]: I1122 12:31:45.615767 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" event={"ID":"2386c238-461f-4956-940f-ac3c26eb052e","Type":"ContainerStarted","Data":"8158c00a0432c0447fd431172037b67d74a8cf1c2bbd496669c8a8e887acbee1"} Nov 22 12:32:45 crc kubenswrapper[4772]: I1122 12:32:45.068563 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-52rg9"] Nov 22 12:32:45 crc kubenswrapper[4772]: I1122 12:32:45.082978 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-52rg9"] Nov 22 12:32:45 crc kubenswrapper[4772]: I1122 12:32:45.437396 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9109a9d1-b9a6-45a3-a515-21940fedf6b4" path="/var/lib/kubelet/pods/9109a9d1-b9a6-45a3-a515-21940fedf6b4/volumes" Nov 22 12:32:55 crc kubenswrapper[4772]: I1122 12:32:55.029701 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/aodh-6efd-account-create-54tm2"] Nov 22 12:32:55 crc kubenswrapper[4772]: I1122 12:32:55.037722 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-6efd-account-create-54tm2"] Nov 22 12:32:55 crc kubenswrapper[4772]: I1122 12:32:55.453459 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a92c433c-6bc6-4dbe-91a0-69a74202f6ab" path="/var/lib/kubelet/pods/a92c433c-6bc6-4dbe-91a0-69a74202f6ab/volumes" Nov 22 12:33:07 crc kubenswrapper[4772]: I1122 12:33:07.064704 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-7q8jn"] Nov 22 12:33:07 crc kubenswrapper[4772]: I1122 12:33:07.079682 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-7q8jn"] Nov 22 12:33:07 crc kubenswrapper[4772]: I1122 12:33:07.430198 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f91531fa-27a1-4cf1-8ad5-1ebeec0dbd65" path="/var/lib/kubelet/pods/f91531fa-27a1-4cf1-8ad5-1ebeec0dbd65/volumes" Nov 22 12:33:17 crc kubenswrapper[4772]: I1122 12:33:17.158875 4772 scope.go:117] "RemoveContainer" containerID="133804c48f0a5a5dd9af802d337c2895f94fb39efc74b734ecd32426494a5bc5" Nov 22 12:33:17 crc kubenswrapper[4772]: I1122 12:33:17.196964 4772 scope.go:117] "RemoveContainer" containerID="005d2caae5d2920983d56de9a0b7dc5e5e868a1e28342200bef559eeaaccc522" Nov 22 12:33:17 crc kubenswrapper[4772]: I1122 12:33:17.246002 4772 scope.go:117] "RemoveContainer" containerID="ce10a8ed902fef7685fd59e725fb4f402a78a9b2cae51ce1984b40f1e402438a" Nov 22 12:33:31 crc kubenswrapper[4772]: I1122 12:33:31.047734 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-create-n5877"] Nov 22 12:33:31 crc kubenswrapper[4772]: I1122 12:33:31.059386 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-create-n5877"] Nov 22 12:33:31 crc kubenswrapper[4772]: I1122 12:33:31.441466 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="785e25f1-796b-4d4d-af36-cc9c0e3ea31c" path="/var/lib/kubelet/pods/785e25f1-796b-4d4d-af36-cc9c0e3ea31c/volumes" Nov 22 12:33:41 crc kubenswrapper[4772]: I1122 12:33:41.048769 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-d683-account-create-t8jwr"] Nov 22 12:33:41 crc kubenswrapper[4772]: I1122 12:33:41.059954 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-d683-account-create-t8jwr"] Nov 22 12:33:41 crc kubenswrapper[4772]: I1122 12:33:41.428202 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5660a09-2828-4375-9257-39c260306ef4" path="/var/lib/kubelet/pods/a5660a09-2828-4375-9257-39c260306ef4/volumes" Nov 22 12:33:54 crc kubenswrapper[4772]: I1122 12:33:54.033659 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-sync-m229v"] Nov 22 12:33:54 crc kubenswrapper[4772]: I1122 12:33:54.043383 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-sync-m229v"] Nov 22 12:33:55 crc kubenswrapper[4772]: I1122 12:33:55.431917 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08576e88-9d75-4ec1-9388-64ed8e991e48" path="/var/lib/kubelet/pods/08576e88-9d75-4ec1-9388-64ed8e991e48/volumes" Nov 22 12:34:01 crc kubenswrapper[4772]: I1122 12:34:01.533260 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 12:34:01 crc kubenswrapper[4772]: I1122 12:34:01.533687 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 12:34:17 crc kubenswrapper[4772]: I1122 12:34:17.394427 4772 scope.go:117] "RemoveContainer" containerID="5c9450175cae686c1ae0c954a73b8ef458b6a8f65d8f09ea29c3add4c79f0d57" Nov 22 12:34:17 crc kubenswrapper[4772]: I1122 12:34:17.442505 4772 scope.go:117] "RemoveContainer" containerID="2c99d0074e4f1e4bcc7a47092f770ec1294698d0778e6bfe394cd98fce6e6df6" Nov 22 12:34:17 crc kubenswrapper[4772]: I1122 12:34:17.475412 4772 scope.go:117] "RemoveContainer" containerID="4a88317ae96db92ca84848ad077241aed98bc221b0b9635fc1c594b791e88024" Nov 22 12:34:25 crc kubenswrapper[4772]: I1122 12:34:25.787576 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zn8jd"] Nov 22 12:34:25 crc kubenswrapper[4772]: E1122 12:34:25.789095 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32bbb8cc-2577-4f70-9a10-682232f0b57b" containerName="collect-profiles" Nov 22 12:34:25 crc kubenswrapper[4772]: I1122 12:34:25.789121 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="32bbb8cc-2577-4f70-9a10-682232f0b57b" containerName="collect-profiles" Nov 22 12:34:25 crc kubenswrapper[4772]: I1122 12:34:25.789439 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="32bbb8cc-2577-4f70-9a10-682232f0b57b" containerName="collect-profiles" Nov 22 12:34:25 crc kubenswrapper[4772]: I1122 12:34:25.791393 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zn8jd" Nov 22 12:34:25 crc kubenswrapper[4772]: I1122 12:34:25.843188 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zn8jd"] Nov 22 12:34:25 crc kubenswrapper[4772]: I1122 12:34:25.904460 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9396be5-4bb2-424a-a60c-1d1065393eb9-catalog-content\") pod \"redhat-marketplace-zn8jd\" (UID: \"f9396be5-4bb2-424a-a60c-1d1065393eb9\") " pod="openshift-marketplace/redhat-marketplace-zn8jd" Nov 22 12:34:25 crc kubenswrapper[4772]: I1122 12:34:25.904513 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9396be5-4bb2-424a-a60c-1d1065393eb9-utilities\") pod \"redhat-marketplace-zn8jd\" (UID: \"f9396be5-4bb2-424a-a60c-1d1065393eb9\") " pod="openshift-marketplace/redhat-marketplace-zn8jd" Nov 22 12:34:25 crc kubenswrapper[4772]: I1122 12:34:25.904751 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctmhz\" (UniqueName: \"kubernetes.io/projected/f9396be5-4bb2-424a-a60c-1d1065393eb9-kube-api-access-ctmhz\") pod \"redhat-marketplace-zn8jd\" (UID: \"f9396be5-4bb2-424a-a60c-1d1065393eb9\") " pod="openshift-marketplace/redhat-marketplace-zn8jd" Nov 22 12:34:26 crc kubenswrapper[4772]: I1122 12:34:26.007529 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctmhz\" (UniqueName: \"kubernetes.io/projected/f9396be5-4bb2-424a-a60c-1d1065393eb9-kube-api-access-ctmhz\") pod \"redhat-marketplace-zn8jd\" (UID: \"f9396be5-4bb2-424a-a60c-1d1065393eb9\") " pod="openshift-marketplace/redhat-marketplace-zn8jd" Nov 22 12:34:26 crc kubenswrapper[4772]: I1122 12:34:26.007744 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9396be5-4bb2-424a-a60c-1d1065393eb9-catalog-content\") pod \"redhat-marketplace-zn8jd\" (UID: \"f9396be5-4bb2-424a-a60c-1d1065393eb9\") " pod="openshift-marketplace/redhat-marketplace-zn8jd" Nov 22 12:34:26 crc kubenswrapper[4772]: I1122 12:34:26.007779 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9396be5-4bb2-424a-a60c-1d1065393eb9-utilities\") pod \"redhat-marketplace-zn8jd\" (UID: \"f9396be5-4bb2-424a-a60c-1d1065393eb9\") " pod="openshift-marketplace/redhat-marketplace-zn8jd" Nov 22 12:34:26 crc kubenswrapper[4772]: I1122 12:34:26.008375 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9396be5-4bb2-424a-a60c-1d1065393eb9-catalog-content\") pod \"redhat-marketplace-zn8jd\" (UID: \"f9396be5-4bb2-424a-a60c-1d1065393eb9\") " pod="openshift-marketplace/redhat-marketplace-zn8jd" Nov 22 12:34:26 crc kubenswrapper[4772]: I1122 12:34:26.008582 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9396be5-4bb2-424a-a60c-1d1065393eb9-utilities\") pod \"redhat-marketplace-zn8jd\" (UID: \"f9396be5-4bb2-424a-a60c-1d1065393eb9\") " pod="openshift-marketplace/redhat-marketplace-zn8jd" Nov 22 12:34:26 crc kubenswrapper[4772]: I1122 12:34:26.039737 4772 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-ctmhz\" (UniqueName: \"kubernetes.io/projected/f9396be5-4bb2-424a-a60c-1d1065393eb9-kube-api-access-ctmhz\") pod \"redhat-marketplace-zn8jd\" (UID: \"f9396be5-4bb2-424a-a60c-1d1065393eb9\") " pod="openshift-marketplace/redhat-marketplace-zn8jd" Nov 22 12:34:26 crc kubenswrapper[4772]: I1122 12:34:26.117159 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zn8jd" Nov 22 12:34:26 crc kubenswrapper[4772]: I1122 12:34:26.624241 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zn8jd"] Nov 22 12:34:27 crc kubenswrapper[4772]: I1122 12:34:27.555937 4772 generic.go:334] "Generic (PLEG): container finished" podID="f9396be5-4bb2-424a-a60c-1d1065393eb9" containerID="43dc8cb400811caa6ec513bc6646e5c08e3eb52db474a15e037cf33cab6ea8d0" exitCode=0 Nov 22 12:34:27 crc kubenswrapper[4772]: I1122 12:34:27.556008 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zn8jd" event={"ID":"f9396be5-4bb2-424a-a60c-1d1065393eb9","Type":"ContainerDied","Data":"43dc8cb400811caa6ec513bc6646e5c08e3eb52db474a15e037cf33cab6ea8d0"} Nov 22 12:34:27 crc kubenswrapper[4772]: I1122 12:34:27.556642 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zn8jd" event={"ID":"f9396be5-4bb2-424a-a60c-1d1065393eb9","Type":"ContainerStarted","Data":"235a48c586d1ff6fabff9137f58276ee626c5dce7472725df61cfbe71bbf87a7"} Nov 22 12:34:27 crc kubenswrapper[4772]: I1122 12:34:27.559611 4772 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 12:34:28 crc kubenswrapper[4772]: I1122 12:34:28.583521 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zn8jd" event={"ID":"f9396be5-4bb2-424a-a60c-1d1065393eb9","Type":"ContainerStarted","Data":"6cd1540be04fda8574b525093de6c04b108a8f14b61ab685082b6f1e4696c0a8"} Nov 22 12:34:29 crc kubenswrapper[4772]: I1122 12:34:29.596516 4772 generic.go:334] "Generic (PLEG): container finished" podID="f9396be5-4bb2-424a-a60c-1d1065393eb9" containerID="6cd1540be04fda8574b525093de6c04b108a8f14b61ab685082b6f1e4696c0a8" exitCode=0 Nov 22 12:34:29 crc kubenswrapper[4772]: I1122 12:34:29.596611 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zn8jd" event={"ID":"f9396be5-4bb2-424a-a60c-1d1065393eb9","Type":"ContainerDied","Data":"6cd1540be04fda8574b525093de6c04b108a8f14b61ab685082b6f1e4696c0a8"} Nov 22 12:34:30 crc kubenswrapper[4772]: I1122 12:34:30.612558 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zn8jd" event={"ID":"f9396be5-4bb2-424a-a60c-1d1065393eb9","Type":"ContainerStarted","Data":"b5ff0c58bc3c64d625680d6a0c92df913db89c553f381257c429e24ccb57d48c"} Nov 22 12:34:30 crc kubenswrapper[4772]: I1122 12:34:30.634508 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zn8jd" podStartSLOduration=3.180675317 podStartE2EDuration="5.634488764s" podCreationTimestamp="2025-11-22 12:34:25 +0000 UTC" firstStartedPulling="2025-11-22 12:34:27.559360461 +0000 UTC m=+6987.798804955" lastFinishedPulling="2025-11-22 12:34:30.013173908 +0000 UTC m=+6990.252618402" observedRunningTime="2025-11-22 12:34:30.630952977 +0000 UTC m=+6990.870397481" watchObservedRunningTime="2025-11-22 12:34:30.634488764 +0000 UTC 
m=+6990.873933258" Nov 22 12:34:31 crc kubenswrapper[4772]: I1122 12:34:31.533290 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 12:34:31 crc kubenswrapper[4772]: I1122 12:34:31.533400 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 12:34:36 crc kubenswrapper[4772]: I1122 12:34:36.117848 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zn8jd" Nov 22 12:34:36 crc kubenswrapper[4772]: I1122 12:34:36.118713 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zn8jd" Nov 22 12:34:36 crc kubenswrapper[4772]: I1122 12:34:36.182202 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zn8jd" Nov 22 12:34:36 crc kubenswrapper[4772]: I1122 12:34:36.740625 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zn8jd" Nov 22 12:34:36 crc kubenswrapper[4772]: I1122 12:34:36.807183 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zn8jd"] Nov 22 12:34:38 crc kubenswrapper[4772]: I1122 12:34:38.700936 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zn8jd" podUID="f9396be5-4bb2-424a-a60c-1d1065393eb9" containerName="registry-server" containerID="cri-o://b5ff0c58bc3c64d625680d6a0c92df913db89c553f381257c429e24ccb57d48c" gracePeriod=2 Nov 22 12:34:39 crc kubenswrapper[4772]: I1122 12:34:39.266518 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zn8jd" Nov 22 12:34:39 crc kubenswrapper[4772]: I1122 12:34:39.463195 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9396be5-4bb2-424a-a60c-1d1065393eb9-catalog-content\") pod \"f9396be5-4bb2-424a-a60c-1d1065393eb9\" (UID: \"f9396be5-4bb2-424a-a60c-1d1065393eb9\") " Nov 22 12:34:39 crc kubenswrapper[4772]: I1122 12:34:39.463333 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ctmhz\" (UniqueName: \"kubernetes.io/projected/f9396be5-4bb2-424a-a60c-1d1065393eb9-kube-api-access-ctmhz\") pod \"f9396be5-4bb2-424a-a60c-1d1065393eb9\" (UID: \"f9396be5-4bb2-424a-a60c-1d1065393eb9\") " Nov 22 12:34:39 crc kubenswrapper[4772]: I1122 12:34:39.463424 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9396be5-4bb2-424a-a60c-1d1065393eb9-utilities\") pod \"f9396be5-4bb2-424a-a60c-1d1065393eb9\" (UID: \"f9396be5-4bb2-424a-a60c-1d1065393eb9\") " Nov 22 12:34:39 crc kubenswrapper[4772]: I1122 12:34:39.464768 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9396be5-4bb2-424a-a60c-1d1065393eb9-utilities" (OuterVolumeSpecName: "utilities") pod "f9396be5-4bb2-424a-a60c-1d1065393eb9" (UID: "f9396be5-4bb2-424a-a60c-1d1065393eb9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 12:34:39 crc kubenswrapper[4772]: I1122 12:34:39.507895 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9396be5-4bb2-424a-a60c-1d1065393eb9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f9396be5-4bb2-424a-a60c-1d1065393eb9" (UID: "f9396be5-4bb2-424a-a60c-1d1065393eb9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 12:34:39 crc kubenswrapper[4772]: I1122 12:34:39.566576 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9396be5-4bb2-424a-a60c-1d1065393eb9-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 12:34:39 crc kubenswrapper[4772]: I1122 12:34:39.566603 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9396be5-4bb2-424a-a60c-1d1065393eb9-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 12:34:39 crc kubenswrapper[4772]: I1122 12:34:39.693190 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9396be5-4bb2-424a-a60c-1d1065393eb9-kube-api-access-ctmhz" (OuterVolumeSpecName: "kube-api-access-ctmhz") pod "f9396be5-4bb2-424a-a60c-1d1065393eb9" (UID: "f9396be5-4bb2-424a-a60c-1d1065393eb9"). InnerVolumeSpecName "kube-api-access-ctmhz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:34:39 crc kubenswrapper[4772]: I1122 12:34:39.714480 4772 generic.go:334] "Generic (PLEG): container finished" podID="f9396be5-4bb2-424a-a60c-1d1065393eb9" containerID="b5ff0c58bc3c64d625680d6a0c92df913db89c553f381257c429e24ccb57d48c" exitCode=0 Nov 22 12:34:39 crc kubenswrapper[4772]: I1122 12:34:39.714524 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zn8jd" event={"ID":"f9396be5-4bb2-424a-a60c-1d1065393eb9","Type":"ContainerDied","Data":"b5ff0c58bc3c64d625680d6a0c92df913db89c553f381257c429e24ccb57d48c"} Nov 22 12:34:39 crc kubenswrapper[4772]: I1122 12:34:39.714552 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zn8jd" event={"ID":"f9396be5-4bb2-424a-a60c-1d1065393eb9","Type":"ContainerDied","Data":"235a48c586d1ff6fabff9137f58276ee626c5dce7472725df61cfbe71bbf87a7"} Nov 22 12:34:39 crc kubenswrapper[4772]: I1122 12:34:39.714570 4772 scope.go:117] "RemoveContainer" containerID="b5ff0c58bc3c64d625680d6a0c92df913db89c553f381257c429e24ccb57d48c" Nov 22 12:34:39 crc kubenswrapper[4772]: I1122 12:34:39.714580 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zn8jd" Nov 22 12:34:39 crc kubenswrapper[4772]: I1122 12:34:39.743885 4772 scope.go:117] "RemoveContainer" containerID="6cd1540be04fda8574b525093de6c04b108a8f14b61ab685082b6f1e4696c0a8" Nov 22 12:34:39 crc kubenswrapper[4772]: I1122 12:34:39.763909 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zn8jd"] Nov 22 12:34:39 crc kubenswrapper[4772]: I1122 12:34:39.771688 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ctmhz\" (UniqueName: \"kubernetes.io/projected/f9396be5-4bb2-424a-a60c-1d1065393eb9-kube-api-access-ctmhz\") on node \"crc\" DevicePath \"\"" Nov 22 12:34:39 crc kubenswrapper[4772]: I1122 12:34:39.778925 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zn8jd"] Nov 22 12:34:39 crc kubenswrapper[4772]: I1122 12:34:39.780502 4772 scope.go:117] "RemoveContainer" containerID="43dc8cb400811caa6ec513bc6646e5c08e3eb52db474a15e037cf33cab6ea8d0" Nov 22 12:34:39 crc kubenswrapper[4772]: I1122 12:34:39.839370 4772 scope.go:117] "RemoveContainer" containerID="b5ff0c58bc3c64d625680d6a0c92df913db89c553f381257c429e24ccb57d48c" Nov 22 12:34:39 crc kubenswrapper[4772]: E1122 12:34:39.841770 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5ff0c58bc3c64d625680d6a0c92df913db89c553f381257c429e24ccb57d48c\": container with ID starting with b5ff0c58bc3c64d625680d6a0c92df913db89c553f381257c429e24ccb57d48c not found: ID does not exist" containerID="b5ff0c58bc3c64d625680d6a0c92df913db89c553f381257c429e24ccb57d48c" Nov 22 12:34:39 crc kubenswrapper[4772]: I1122 12:34:39.841827 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5ff0c58bc3c64d625680d6a0c92df913db89c553f381257c429e24ccb57d48c"} err="failed to get container status \"b5ff0c58bc3c64d625680d6a0c92df913db89c553f381257c429e24ccb57d48c\": rpc error: code = NotFound desc = could not find container \"b5ff0c58bc3c64d625680d6a0c92df913db89c553f381257c429e24ccb57d48c\": container with ID starting with b5ff0c58bc3c64d625680d6a0c92df913db89c553f381257c429e24ccb57d48c not found: ID does not exist" Nov 
22 12:34:39 crc kubenswrapper[4772]: I1122 12:34:39.841861 4772 scope.go:117] "RemoveContainer" containerID="6cd1540be04fda8574b525093de6c04b108a8f14b61ab685082b6f1e4696c0a8" Nov 22 12:34:39 crc kubenswrapper[4772]: E1122 12:34:39.842363 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6cd1540be04fda8574b525093de6c04b108a8f14b61ab685082b6f1e4696c0a8\": container with ID starting with 6cd1540be04fda8574b525093de6c04b108a8f14b61ab685082b6f1e4696c0a8 not found: ID does not exist" containerID="6cd1540be04fda8574b525093de6c04b108a8f14b61ab685082b6f1e4696c0a8" Nov 22 12:34:39 crc kubenswrapper[4772]: I1122 12:34:39.842417 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cd1540be04fda8574b525093de6c04b108a8f14b61ab685082b6f1e4696c0a8"} err="failed to get container status \"6cd1540be04fda8574b525093de6c04b108a8f14b61ab685082b6f1e4696c0a8\": rpc error: code = NotFound desc = could not find container \"6cd1540be04fda8574b525093de6c04b108a8f14b61ab685082b6f1e4696c0a8\": container with ID starting with 6cd1540be04fda8574b525093de6c04b108a8f14b61ab685082b6f1e4696c0a8 not found: ID does not exist" Nov 22 12:34:39 crc kubenswrapper[4772]: I1122 12:34:39.842456 4772 scope.go:117] "RemoveContainer" containerID="43dc8cb400811caa6ec513bc6646e5c08e3eb52db474a15e037cf33cab6ea8d0" Nov 22 12:34:39 crc kubenswrapper[4772]: E1122 12:34:39.843443 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43dc8cb400811caa6ec513bc6646e5c08e3eb52db474a15e037cf33cab6ea8d0\": container with ID starting with 43dc8cb400811caa6ec513bc6646e5c08e3eb52db474a15e037cf33cab6ea8d0 not found: ID does not exist" containerID="43dc8cb400811caa6ec513bc6646e5c08e3eb52db474a15e037cf33cab6ea8d0" Nov 22 12:34:39 crc kubenswrapper[4772]: I1122 12:34:39.843471 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43dc8cb400811caa6ec513bc6646e5c08e3eb52db474a15e037cf33cab6ea8d0"} err="failed to get container status \"43dc8cb400811caa6ec513bc6646e5c08e3eb52db474a15e037cf33cab6ea8d0\": rpc error: code = NotFound desc = could not find container \"43dc8cb400811caa6ec513bc6646e5c08e3eb52db474a15e037cf33cab6ea8d0\": container with ID starting with 43dc8cb400811caa6ec513bc6646e5c08e3eb52db474a15e037cf33cab6ea8d0 not found: ID does not exist" Nov 22 12:34:41 crc kubenswrapper[4772]: I1122 12:34:41.452894 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9396be5-4bb2-424a-a60c-1d1065393eb9" path="/var/lib/kubelet/pods/f9396be5-4bb2-424a-a60c-1d1065393eb9/volumes" Nov 22 12:35:01 crc kubenswrapper[4772]: I1122 12:35:01.532632 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 12:35:01 crc kubenswrapper[4772]: I1122 12:35:01.533280 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 12:35:01 crc kubenswrapper[4772]: I1122 12:35:01.533641 4772 kubelet.go:2542] 
"SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" Nov 22 12:35:01 crc kubenswrapper[4772]: I1122 12:35:01.534602 4772 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8158c00a0432c0447fd431172037b67d74a8cf1c2bbd496669c8a8e887acbee1"} pod="openshift-machine-config-operator/machine-config-daemon-wwshd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 12:35:01 crc kubenswrapper[4772]: I1122 12:35:01.534665 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" containerID="cri-o://8158c00a0432c0447fd431172037b67d74a8cf1c2bbd496669c8a8e887acbee1" gracePeriod=600 Nov 22 12:35:01 crc kubenswrapper[4772]: I1122 12:35:01.956524 4772 generic.go:334] "Generic (PLEG): container finished" podID="2386c238-461f-4956-940f-ac3c26eb052e" containerID="8158c00a0432c0447fd431172037b67d74a8cf1c2bbd496669c8a8e887acbee1" exitCode=0 Nov 22 12:35:01 crc kubenswrapper[4772]: I1122 12:35:01.956663 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" event={"ID":"2386c238-461f-4956-940f-ac3c26eb052e","Type":"ContainerDied","Data":"8158c00a0432c0447fd431172037b67d74a8cf1c2bbd496669c8a8e887acbee1"} Nov 22 12:35:01 crc kubenswrapper[4772]: I1122 12:35:01.956931 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" event={"ID":"2386c238-461f-4956-940f-ac3c26eb052e","Type":"ContainerStarted","Data":"8a1530730f8f450880c3cb53240d2358c82f70f75fcd089df4a21b648bec0621"} Nov 22 12:35:01 crc kubenswrapper[4772]: I1122 12:35:01.956982 4772 scope.go:117] "RemoveContainer" containerID="4061fbfbe12c0c8ed9afc045fe35fe5a4572628dd2f2f5a83b54d78da8972014" Nov 22 12:35:06 crc kubenswrapper[4772]: I1122 12:35:06.962582 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vqmq8"] Nov 22 12:35:06 crc kubenswrapper[4772]: E1122 12:35:06.964978 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9396be5-4bb2-424a-a60c-1d1065393eb9" containerName="extract-utilities" Nov 22 12:35:06 crc kubenswrapper[4772]: I1122 12:35:06.965090 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9396be5-4bb2-424a-a60c-1d1065393eb9" containerName="extract-utilities" Nov 22 12:35:06 crc kubenswrapper[4772]: E1122 12:35:06.965186 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9396be5-4bb2-424a-a60c-1d1065393eb9" containerName="extract-content" Nov 22 12:35:06 crc kubenswrapper[4772]: I1122 12:35:06.965253 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9396be5-4bb2-424a-a60c-1d1065393eb9" containerName="extract-content" Nov 22 12:35:06 crc kubenswrapper[4772]: E1122 12:35:06.965325 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9396be5-4bb2-424a-a60c-1d1065393eb9" containerName="registry-server" Nov 22 12:35:06 crc kubenswrapper[4772]: I1122 12:35:06.965377 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9396be5-4bb2-424a-a60c-1d1065393eb9" containerName="registry-server" Nov 22 12:35:06 crc kubenswrapper[4772]: I1122 12:35:06.965653 4772 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f9396be5-4bb2-424a-a60c-1d1065393eb9" containerName="registry-server" Nov 22 12:35:06 crc kubenswrapper[4772]: I1122 12:35:06.967310 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vqmq8" Nov 22 12:35:06 crc kubenswrapper[4772]: I1122 12:35:06.988478 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vqmq8"] Nov 22 12:35:06 crc kubenswrapper[4772]: I1122 12:35:06.992002 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/230b5b13-08bb-41c3-8d18-6c7afe42c6e5-catalog-content\") pod \"certified-operators-vqmq8\" (UID: \"230b5b13-08bb-41c3-8d18-6c7afe42c6e5\") " pod="openshift-marketplace/certified-operators-vqmq8" Nov 22 12:35:06 crc kubenswrapper[4772]: I1122 12:35:06.992079 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/230b5b13-08bb-41c3-8d18-6c7afe42c6e5-utilities\") pod \"certified-operators-vqmq8\" (UID: \"230b5b13-08bb-41c3-8d18-6c7afe42c6e5\") " pod="openshift-marketplace/certified-operators-vqmq8" Nov 22 12:35:06 crc kubenswrapper[4772]: I1122 12:35:06.992372 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7cqw\" (UniqueName: \"kubernetes.io/projected/230b5b13-08bb-41c3-8d18-6c7afe42c6e5-kube-api-access-r7cqw\") pod \"certified-operators-vqmq8\" (UID: \"230b5b13-08bb-41c3-8d18-6c7afe42c6e5\") " pod="openshift-marketplace/certified-operators-vqmq8" Nov 22 12:35:07 crc kubenswrapper[4772]: I1122 12:35:07.093327 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/230b5b13-08bb-41c3-8d18-6c7afe42c6e5-utilities\") pod \"certified-operators-vqmq8\" (UID: \"230b5b13-08bb-41c3-8d18-6c7afe42c6e5\") " pod="openshift-marketplace/certified-operators-vqmq8" Nov 22 12:35:07 crc kubenswrapper[4772]: I1122 12:35:07.093462 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7cqw\" (UniqueName: \"kubernetes.io/projected/230b5b13-08bb-41c3-8d18-6c7afe42c6e5-kube-api-access-r7cqw\") pod \"certified-operators-vqmq8\" (UID: \"230b5b13-08bb-41c3-8d18-6c7afe42c6e5\") " pod="openshift-marketplace/certified-operators-vqmq8" Nov 22 12:35:07 crc kubenswrapper[4772]: I1122 12:35:07.093542 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/230b5b13-08bb-41c3-8d18-6c7afe42c6e5-catalog-content\") pod \"certified-operators-vqmq8\" (UID: \"230b5b13-08bb-41c3-8d18-6c7afe42c6e5\") " pod="openshift-marketplace/certified-operators-vqmq8" Nov 22 12:35:07 crc kubenswrapper[4772]: I1122 12:35:07.094142 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/230b5b13-08bb-41c3-8d18-6c7afe42c6e5-catalog-content\") pod \"certified-operators-vqmq8\" (UID: \"230b5b13-08bb-41c3-8d18-6c7afe42c6e5\") " pod="openshift-marketplace/certified-operators-vqmq8" Nov 22 12:35:07 crc kubenswrapper[4772]: I1122 12:35:07.094151 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/230b5b13-08bb-41c3-8d18-6c7afe42c6e5-utilities\") pod \"certified-operators-vqmq8\" (UID: 
\"230b5b13-08bb-41c3-8d18-6c7afe42c6e5\") " pod="openshift-marketplace/certified-operators-vqmq8" Nov 22 12:35:07 crc kubenswrapper[4772]: I1122 12:35:07.116390 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7cqw\" (UniqueName: \"kubernetes.io/projected/230b5b13-08bb-41c3-8d18-6c7afe42c6e5-kube-api-access-r7cqw\") pod \"certified-operators-vqmq8\" (UID: \"230b5b13-08bb-41c3-8d18-6c7afe42c6e5\") " pod="openshift-marketplace/certified-operators-vqmq8" Nov 22 12:35:07 crc kubenswrapper[4772]: I1122 12:35:07.304673 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vqmq8" Nov 22 12:35:07 crc kubenswrapper[4772]: I1122 12:35:07.900780 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vqmq8"] Nov 22 12:35:07 crc kubenswrapper[4772]: W1122 12:35:07.909794 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod230b5b13_08bb_41c3_8d18_6c7afe42c6e5.slice/crio-1692b1a22f3c83ee47c687d4c315ee0ed4035e8e60a3733dfdb297121dffe42f WatchSource:0}: Error finding container 1692b1a22f3c83ee47c687d4c315ee0ed4035e8e60a3733dfdb297121dffe42f: Status 404 returned error can't find the container with id 1692b1a22f3c83ee47c687d4c315ee0ed4035e8e60a3733dfdb297121dffe42f Nov 22 12:35:08 crc kubenswrapper[4772]: I1122 12:35:08.025610 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vqmq8" event={"ID":"230b5b13-08bb-41c3-8d18-6c7afe42c6e5","Type":"ContainerStarted","Data":"1692b1a22f3c83ee47c687d4c315ee0ed4035e8e60a3733dfdb297121dffe42f"} Nov 22 12:35:09 crc kubenswrapper[4772]: I1122 12:35:09.038731 4772 generic.go:334] "Generic (PLEG): container finished" podID="230b5b13-08bb-41c3-8d18-6c7afe42c6e5" containerID="e6504b4edc5bbdc5b20dc620977ccc65b25164ff23245d4f54681b904a04a71a" exitCode=0 Nov 22 12:35:09 crc kubenswrapper[4772]: I1122 12:35:09.038779 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vqmq8" event={"ID":"230b5b13-08bb-41c3-8d18-6c7afe42c6e5","Type":"ContainerDied","Data":"e6504b4edc5bbdc5b20dc620977ccc65b25164ff23245d4f54681b904a04a71a"} Nov 22 12:35:11 crc kubenswrapper[4772]: I1122 12:35:11.057698 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vqmq8" event={"ID":"230b5b13-08bb-41c3-8d18-6c7afe42c6e5","Type":"ContainerStarted","Data":"bb97a436288714a5cbd71407d1f74645db9897733b24fd9b0807cb442e5246f9"} Nov 22 12:35:13 crc kubenswrapper[4772]: I1122 12:35:13.086552 4772 generic.go:334] "Generic (PLEG): container finished" podID="230b5b13-08bb-41c3-8d18-6c7afe42c6e5" containerID="bb97a436288714a5cbd71407d1f74645db9897733b24fd9b0807cb442e5246f9" exitCode=0 Nov 22 12:35:13 crc kubenswrapper[4772]: I1122 12:35:13.086643 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vqmq8" event={"ID":"230b5b13-08bb-41c3-8d18-6c7afe42c6e5","Type":"ContainerDied","Data":"bb97a436288714a5cbd71407d1f74645db9897733b24fd9b0807cb442e5246f9"} Nov 22 12:35:14 crc kubenswrapper[4772]: I1122 12:35:14.100807 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vqmq8" event={"ID":"230b5b13-08bb-41c3-8d18-6c7afe42c6e5","Type":"ContainerStarted","Data":"2b02d5508a46b29894d7261da7353382543c3e533a3f10530b38dc2b467b9fc0"} Nov 22 12:35:14 crc 
kubenswrapper[4772]: I1122 12:35:14.122942 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vqmq8" podStartSLOduration=3.674295659 podStartE2EDuration="8.122919065s" podCreationTimestamp="2025-11-22 12:35:06 +0000 UTC" firstStartedPulling="2025-11-22 12:35:09.042182567 +0000 UTC m=+7029.281627071" lastFinishedPulling="2025-11-22 12:35:13.490805983 +0000 UTC m=+7033.730250477" observedRunningTime="2025-11-22 12:35:14.117889191 +0000 UTC m=+7034.357333725" watchObservedRunningTime="2025-11-22 12:35:14.122919065 +0000 UTC m=+7034.362363559" Nov 22 12:35:17 crc kubenswrapper[4772]: I1122 12:35:17.305192 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vqmq8" Nov 22 12:35:17 crc kubenswrapper[4772]: I1122 12:35:17.305937 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vqmq8" Nov 22 12:35:17 crc kubenswrapper[4772]: I1122 12:35:17.387409 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vqmq8" Nov 22 12:35:18 crc kubenswrapper[4772]: I1122 12:35:18.226414 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vqmq8" Nov 22 12:35:18 crc kubenswrapper[4772]: I1122 12:35:18.277197 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vqmq8"] Nov 22 12:35:20 crc kubenswrapper[4772]: I1122 12:35:20.175498 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vqmq8" podUID="230b5b13-08bb-41c3-8d18-6c7afe42c6e5" containerName="registry-server" containerID="cri-o://2b02d5508a46b29894d7261da7353382543c3e533a3f10530b38dc2b467b9fc0" gracePeriod=2 Nov 22 12:35:20 crc kubenswrapper[4772]: I1122 12:35:20.685659 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vqmq8" Nov 22 12:35:20 crc kubenswrapper[4772]: I1122 12:35:20.842968 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7cqw\" (UniqueName: \"kubernetes.io/projected/230b5b13-08bb-41c3-8d18-6c7afe42c6e5-kube-api-access-r7cqw\") pod \"230b5b13-08bb-41c3-8d18-6c7afe42c6e5\" (UID: \"230b5b13-08bb-41c3-8d18-6c7afe42c6e5\") " Nov 22 12:35:20 crc kubenswrapper[4772]: I1122 12:35:20.843231 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/230b5b13-08bb-41c3-8d18-6c7afe42c6e5-utilities\") pod \"230b5b13-08bb-41c3-8d18-6c7afe42c6e5\" (UID: \"230b5b13-08bb-41c3-8d18-6c7afe42c6e5\") " Nov 22 12:35:20 crc kubenswrapper[4772]: I1122 12:35:20.843583 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/230b5b13-08bb-41c3-8d18-6c7afe42c6e5-catalog-content\") pod \"230b5b13-08bb-41c3-8d18-6c7afe42c6e5\" (UID: \"230b5b13-08bb-41c3-8d18-6c7afe42c6e5\") " Nov 22 12:35:20 crc kubenswrapper[4772]: I1122 12:35:20.844034 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/230b5b13-08bb-41c3-8d18-6c7afe42c6e5-utilities" (OuterVolumeSpecName: "utilities") pod "230b5b13-08bb-41c3-8d18-6c7afe42c6e5" (UID: "230b5b13-08bb-41c3-8d18-6c7afe42c6e5"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 12:35:20 crc kubenswrapper[4772]: I1122 12:35:20.844536 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/230b5b13-08bb-41c3-8d18-6c7afe42c6e5-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 12:35:20 crc kubenswrapper[4772]: I1122 12:35:20.855825 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/230b5b13-08bb-41c3-8d18-6c7afe42c6e5-kube-api-access-r7cqw" (OuterVolumeSpecName: "kube-api-access-r7cqw") pod "230b5b13-08bb-41c3-8d18-6c7afe42c6e5" (UID: "230b5b13-08bb-41c3-8d18-6c7afe42c6e5"). InnerVolumeSpecName "kube-api-access-r7cqw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:35:20 crc kubenswrapper[4772]: I1122 12:35:20.885982 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/230b5b13-08bb-41c3-8d18-6c7afe42c6e5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "230b5b13-08bb-41c3-8d18-6c7afe42c6e5" (UID: "230b5b13-08bb-41c3-8d18-6c7afe42c6e5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 12:35:20 crc kubenswrapper[4772]: I1122 12:35:20.946575 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/230b5b13-08bb-41c3-8d18-6c7afe42c6e5-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 12:35:20 crc kubenswrapper[4772]: I1122 12:35:20.946614 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7cqw\" (UniqueName: \"kubernetes.io/projected/230b5b13-08bb-41c3-8d18-6c7afe42c6e5-kube-api-access-r7cqw\") on node \"crc\" DevicePath \"\"" Nov 22 12:35:21 crc kubenswrapper[4772]: I1122 12:35:21.189978 4772 generic.go:334] "Generic (PLEG): container finished" podID="230b5b13-08bb-41c3-8d18-6c7afe42c6e5" containerID="2b02d5508a46b29894d7261da7353382543c3e533a3f10530b38dc2b467b9fc0" exitCode=0 Nov 22 12:35:21 crc kubenswrapper[4772]: I1122 12:35:21.190073 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vqmq8" event={"ID":"230b5b13-08bb-41c3-8d18-6c7afe42c6e5","Type":"ContainerDied","Data":"2b02d5508a46b29894d7261da7353382543c3e533a3f10530b38dc2b467b9fc0"} Nov 22 12:35:21 crc kubenswrapper[4772]: I1122 12:35:21.190121 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vqmq8" event={"ID":"230b5b13-08bb-41c3-8d18-6c7afe42c6e5","Type":"ContainerDied","Data":"1692b1a22f3c83ee47c687d4c315ee0ed4035e8e60a3733dfdb297121dffe42f"} Nov 22 12:35:21 crc kubenswrapper[4772]: I1122 12:35:21.190151 4772 scope.go:117] "RemoveContainer" containerID="2b02d5508a46b29894d7261da7353382543c3e533a3f10530b38dc2b467b9fc0" Nov 22 12:35:21 crc kubenswrapper[4772]: I1122 12:35:21.190398 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vqmq8" Nov 22 12:35:21 crc kubenswrapper[4772]: I1122 12:35:21.233888 4772 scope.go:117] "RemoveContainer" containerID="bb97a436288714a5cbd71407d1f74645db9897733b24fd9b0807cb442e5246f9" Nov 22 12:35:21 crc kubenswrapper[4772]: I1122 12:35:21.246984 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vqmq8"] Nov 22 12:35:21 crc kubenswrapper[4772]: I1122 12:35:21.270688 4772 scope.go:117] "RemoveContainer" containerID="e6504b4edc5bbdc5b20dc620977ccc65b25164ff23245d4f54681b904a04a71a" Nov 22 12:35:21 crc kubenswrapper[4772]: I1122 12:35:21.274313 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vqmq8"] Nov 22 12:35:21 crc kubenswrapper[4772]: I1122 12:35:21.335781 4772 scope.go:117] "RemoveContainer" containerID="2b02d5508a46b29894d7261da7353382543c3e533a3f10530b38dc2b467b9fc0" Nov 22 12:35:21 crc kubenswrapper[4772]: E1122 12:35:21.336409 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b02d5508a46b29894d7261da7353382543c3e533a3f10530b38dc2b467b9fc0\": container with ID starting with 2b02d5508a46b29894d7261da7353382543c3e533a3f10530b38dc2b467b9fc0 not found: ID does not exist" containerID="2b02d5508a46b29894d7261da7353382543c3e533a3f10530b38dc2b467b9fc0" Nov 22 12:35:21 crc kubenswrapper[4772]: I1122 12:35:21.336464 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b02d5508a46b29894d7261da7353382543c3e533a3f10530b38dc2b467b9fc0"} err="failed to get container status \"2b02d5508a46b29894d7261da7353382543c3e533a3f10530b38dc2b467b9fc0\": rpc error: code = NotFound desc = could not find container \"2b02d5508a46b29894d7261da7353382543c3e533a3f10530b38dc2b467b9fc0\": container with ID starting with 2b02d5508a46b29894d7261da7353382543c3e533a3f10530b38dc2b467b9fc0 not found: ID does not exist" Nov 22 12:35:21 crc kubenswrapper[4772]: I1122 12:35:21.336498 4772 scope.go:117] "RemoveContainer" containerID="bb97a436288714a5cbd71407d1f74645db9897733b24fd9b0807cb442e5246f9" Nov 22 12:35:21 crc kubenswrapper[4772]: E1122 12:35:21.336906 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb97a436288714a5cbd71407d1f74645db9897733b24fd9b0807cb442e5246f9\": container with ID starting with bb97a436288714a5cbd71407d1f74645db9897733b24fd9b0807cb442e5246f9 not found: ID does not exist" containerID="bb97a436288714a5cbd71407d1f74645db9897733b24fd9b0807cb442e5246f9" Nov 22 12:35:21 crc kubenswrapper[4772]: I1122 12:35:21.336953 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb97a436288714a5cbd71407d1f74645db9897733b24fd9b0807cb442e5246f9"} err="failed to get container status \"bb97a436288714a5cbd71407d1f74645db9897733b24fd9b0807cb442e5246f9\": rpc error: code = NotFound desc = could not find container \"bb97a436288714a5cbd71407d1f74645db9897733b24fd9b0807cb442e5246f9\": container with ID starting with bb97a436288714a5cbd71407d1f74645db9897733b24fd9b0807cb442e5246f9 not found: ID does not exist" Nov 22 12:35:21 crc kubenswrapper[4772]: I1122 12:35:21.336989 4772 scope.go:117] "RemoveContainer" containerID="e6504b4edc5bbdc5b20dc620977ccc65b25164ff23245d4f54681b904a04a71a" Nov 22 12:35:21 crc kubenswrapper[4772]: E1122 12:35:21.337377 4772 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"e6504b4edc5bbdc5b20dc620977ccc65b25164ff23245d4f54681b904a04a71a\": container with ID starting with e6504b4edc5bbdc5b20dc620977ccc65b25164ff23245d4f54681b904a04a71a not found: ID does not exist" containerID="e6504b4edc5bbdc5b20dc620977ccc65b25164ff23245d4f54681b904a04a71a" Nov 22 12:35:21 crc kubenswrapper[4772]: I1122 12:35:21.337408 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6504b4edc5bbdc5b20dc620977ccc65b25164ff23245d4f54681b904a04a71a"} err="failed to get container status \"e6504b4edc5bbdc5b20dc620977ccc65b25164ff23245d4f54681b904a04a71a\": rpc error: code = NotFound desc = could not find container \"e6504b4edc5bbdc5b20dc620977ccc65b25164ff23245d4f54681b904a04a71a\": container with ID starting with e6504b4edc5bbdc5b20dc620977ccc65b25164ff23245d4f54681b904a04a71a not found: ID does not exist" Nov 22 12:35:21 crc kubenswrapper[4772]: I1122 12:35:21.429282 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="230b5b13-08bb-41c3-8d18-6c7afe42c6e5" path="/var/lib/kubelet/pods/230b5b13-08bb-41c3-8d18-6c7afe42c6e5/volumes" Nov 22 12:35:52 crc kubenswrapper[4772]: I1122 12:35:52.567472 4772 generic.go:334] "Generic (PLEG): container finished" podID="28d046d2-0d1c-4187-8d76-14d0004ec8e2" containerID="1f6f9d5308533f45f1a3caf9c1fc4fd621455e685ce60593f38693cfe4b2e770" exitCode=0 Nov 22 12:35:52 crc kubenswrapper[4772]: I1122 12:35:52.567588 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-zlksf" event={"ID":"28d046d2-0d1c-4187-8d76-14d0004ec8e2","Type":"ContainerDied","Data":"1f6f9d5308533f45f1a3caf9c1fc4fd621455e685ce60593f38693cfe4b2e770"} Nov 22 12:35:54 crc kubenswrapper[4772]: I1122 12:35:54.129814 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-zlksf" Nov 22 12:35:54 crc kubenswrapper[4772]: I1122 12:35:54.265422 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9v4mc\" (UniqueName: \"kubernetes.io/projected/28d046d2-0d1c-4187-8d76-14d0004ec8e2-kube-api-access-9v4mc\") pod \"28d046d2-0d1c-4187-8d76-14d0004ec8e2\" (UID: \"28d046d2-0d1c-4187-8d76-14d0004ec8e2\") " Nov 22 12:35:54 crc kubenswrapper[4772]: I1122 12:35:54.265603 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/28d046d2-0d1c-4187-8d76-14d0004ec8e2-ssh-key\") pod \"28d046d2-0d1c-4187-8d76-14d0004ec8e2\" (UID: \"28d046d2-0d1c-4187-8d76-14d0004ec8e2\") " Nov 22 12:35:54 crc kubenswrapper[4772]: I1122 12:35:54.265698 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28d046d2-0d1c-4187-8d76-14d0004ec8e2-tripleo-cleanup-combined-ca-bundle\") pod \"28d046d2-0d1c-4187-8d76-14d0004ec8e2\" (UID: \"28d046d2-0d1c-4187-8d76-14d0004ec8e2\") " Nov 22 12:35:54 crc kubenswrapper[4772]: I1122 12:35:54.265719 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/28d046d2-0d1c-4187-8d76-14d0004ec8e2-ceph\") pod \"28d046d2-0d1c-4187-8d76-14d0004ec8e2\" (UID: \"28d046d2-0d1c-4187-8d76-14d0004ec8e2\") " Nov 22 12:35:54 crc kubenswrapper[4772]: I1122 12:35:54.265894 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/28d046d2-0d1c-4187-8d76-14d0004ec8e2-inventory\") pod \"28d046d2-0d1c-4187-8d76-14d0004ec8e2\" (UID: \"28d046d2-0d1c-4187-8d76-14d0004ec8e2\") " Nov 22 12:35:54 crc kubenswrapper[4772]: I1122 12:35:54.275115 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28d046d2-0d1c-4187-8d76-14d0004ec8e2-ceph" (OuterVolumeSpecName: "ceph") pod "28d046d2-0d1c-4187-8d76-14d0004ec8e2" (UID: "28d046d2-0d1c-4187-8d76-14d0004ec8e2"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:35:54 crc kubenswrapper[4772]: I1122 12:35:54.275234 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28d046d2-0d1c-4187-8d76-14d0004ec8e2-kube-api-access-9v4mc" (OuterVolumeSpecName: "kube-api-access-9v4mc") pod "28d046d2-0d1c-4187-8d76-14d0004ec8e2" (UID: "28d046d2-0d1c-4187-8d76-14d0004ec8e2"). InnerVolumeSpecName "kube-api-access-9v4mc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:35:54 crc kubenswrapper[4772]: I1122 12:35:54.278301 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28d046d2-0d1c-4187-8d76-14d0004ec8e2-tripleo-cleanup-combined-ca-bundle" (OuterVolumeSpecName: "tripleo-cleanup-combined-ca-bundle") pod "28d046d2-0d1c-4187-8d76-14d0004ec8e2" (UID: "28d046d2-0d1c-4187-8d76-14d0004ec8e2"). InnerVolumeSpecName "tripleo-cleanup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:35:54 crc kubenswrapper[4772]: I1122 12:35:54.309444 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28d046d2-0d1c-4187-8d76-14d0004ec8e2-inventory" (OuterVolumeSpecName: "inventory") pod "28d046d2-0d1c-4187-8d76-14d0004ec8e2" (UID: "28d046d2-0d1c-4187-8d76-14d0004ec8e2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:35:54 crc kubenswrapper[4772]: I1122 12:35:54.309575 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28d046d2-0d1c-4187-8d76-14d0004ec8e2-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "28d046d2-0d1c-4187-8d76-14d0004ec8e2" (UID: "28d046d2-0d1c-4187-8d76-14d0004ec8e2"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:35:54 crc kubenswrapper[4772]: I1122 12:35:54.369329 4772 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/28d046d2-0d1c-4187-8d76-14d0004ec8e2-inventory\") on node \"crc\" DevicePath \"\"" Nov 22 12:35:54 crc kubenswrapper[4772]: I1122 12:35:54.369388 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9v4mc\" (UniqueName: \"kubernetes.io/projected/28d046d2-0d1c-4187-8d76-14d0004ec8e2-kube-api-access-9v4mc\") on node \"crc\" DevicePath \"\"" Nov 22 12:35:54 crc kubenswrapper[4772]: I1122 12:35:54.369403 4772 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/28d046d2-0d1c-4187-8d76-14d0004ec8e2-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 22 12:35:54 crc kubenswrapper[4772]: I1122 12:35:54.369416 4772 reconciler_common.go:293] "Volume detached for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28d046d2-0d1c-4187-8d76-14d0004ec8e2-tripleo-cleanup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 12:35:54 crc kubenswrapper[4772]: I1122 12:35:54.369427 4772 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/28d046d2-0d1c-4187-8d76-14d0004ec8e2-ceph\") on node \"crc\" DevicePath \"\"" Nov 22 12:35:54 crc kubenswrapper[4772]: I1122 12:35:54.596855 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-zlksf" event={"ID":"28d046d2-0d1c-4187-8d76-14d0004ec8e2","Type":"ContainerDied","Data":"aefbac7944a6b5f605b6efffbe36173ce4543b7dec697d09e333fdc34b8c8a15"} Nov 22 12:35:54 crc kubenswrapper[4772]: I1122 12:35:54.596925 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aefbac7944a6b5f605b6efffbe36173ce4543b7dec697d09e333fdc34b8c8a15" Nov 22 12:35:54 crc kubenswrapper[4772]: I1122 12:35:54.597014 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-zlksf" Nov 22 12:36:00 crc kubenswrapper[4772]: I1122 12:36:00.374796 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-pqq5v"] Nov 22 12:36:00 crc kubenswrapper[4772]: E1122 12:36:00.384989 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28d046d2-0d1c-4187-8d76-14d0004ec8e2" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1" Nov 22 12:36:00 crc kubenswrapper[4772]: I1122 12:36:00.385386 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="28d046d2-0d1c-4187-8d76-14d0004ec8e2" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1" Nov 22 12:36:00 crc kubenswrapper[4772]: E1122 12:36:00.385504 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="230b5b13-08bb-41c3-8d18-6c7afe42c6e5" containerName="registry-server" Nov 22 12:36:00 crc kubenswrapper[4772]: I1122 12:36:00.385526 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="230b5b13-08bb-41c3-8d18-6c7afe42c6e5" containerName="registry-server" Nov 22 12:36:00 crc kubenswrapper[4772]: E1122 12:36:00.385556 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="230b5b13-08bb-41c3-8d18-6c7afe42c6e5" containerName="extract-content" Nov 22 12:36:00 crc kubenswrapper[4772]: I1122 12:36:00.385569 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="230b5b13-08bb-41c3-8d18-6c7afe42c6e5" containerName="extract-content" Nov 22 12:36:00 crc kubenswrapper[4772]: E1122 12:36:00.385616 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="230b5b13-08bb-41c3-8d18-6c7afe42c6e5" containerName="extract-utilities" Nov 22 12:36:00 crc kubenswrapper[4772]: I1122 12:36:00.385626 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="230b5b13-08bb-41c3-8d18-6c7afe42c6e5" containerName="extract-utilities" Nov 22 12:36:00 crc kubenswrapper[4772]: I1122 12:36:00.386771 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="28d046d2-0d1c-4187-8d76-14d0004ec8e2" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1" Nov 22 12:36:00 crc kubenswrapper[4772]: I1122 12:36:00.387695 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="230b5b13-08bb-41c3-8d18-6c7afe42c6e5" containerName="registry-server" Nov 22 12:36:00 crc kubenswrapper[4772]: I1122 12:36:00.390135 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-pqq5v" Nov 22 12:36:00 crc kubenswrapper[4772]: I1122 12:36:00.393585 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 22 12:36:00 crc kubenswrapper[4772]: I1122 12:36:00.393609 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-6s2nz" Nov 22 12:36:00 crc kubenswrapper[4772]: I1122 12:36:00.393627 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 22 12:36:00 crc kubenswrapper[4772]: I1122 12:36:00.393592 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 22 12:36:00 crc kubenswrapper[4772]: I1122 12:36:00.405534 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-pqq5v"] Nov 22 12:36:00 crc kubenswrapper[4772]: I1122 12:36:00.516883 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1933b73e-e00d-490d-b8d0-26eab9d3b9a8-ssh-key\") pod \"bootstrap-openstack-openstack-cell1-pqq5v\" (UID: \"1933b73e-e00d-490d-b8d0-26eab9d3b9a8\") " pod="openstack/bootstrap-openstack-openstack-cell1-pqq5v" Nov 22 12:36:00 crc kubenswrapper[4772]: I1122 12:36:00.516932 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1933b73e-e00d-490d-b8d0-26eab9d3b9a8-ceph\") pod \"bootstrap-openstack-openstack-cell1-pqq5v\" (UID: \"1933b73e-e00d-490d-b8d0-26eab9d3b9a8\") " pod="openstack/bootstrap-openstack-openstack-cell1-pqq5v" Nov 22 12:36:00 crc kubenswrapper[4772]: I1122 12:36:00.517423 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtklg\" (UniqueName: \"kubernetes.io/projected/1933b73e-e00d-490d-b8d0-26eab9d3b9a8-kube-api-access-dtklg\") pod \"bootstrap-openstack-openstack-cell1-pqq5v\" (UID: \"1933b73e-e00d-490d-b8d0-26eab9d3b9a8\") " pod="openstack/bootstrap-openstack-openstack-cell1-pqq5v" Nov 22 12:36:00 crc kubenswrapper[4772]: I1122 12:36:00.518221 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1933b73e-e00d-490d-b8d0-26eab9d3b9a8-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-pqq5v\" (UID: \"1933b73e-e00d-490d-b8d0-26eab9d3b9a8\") " pod="openstack/bootstrap-openstack-openstack-cell1-pqq5v" Nov 22 12:36:00 crc kubenswrapper[4772]: I1122 12:36:00.518397 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1933b73e-e00d-490d-b8d0-26eab9d3b9a8-inventory\") pod \"bootstrap-openstack-openstack-cell1-pqq5v\" (UID: \"1933b73e-e00d-490d-b8d0-26eab9d3b9a8\") " pod="openstack/bootstrap-openstack-openstack-cell1-pqq5v" Nov 22 12:36:00 crc kubenswrapper[4772]: I1122 12:36:00.620744 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1933b73e-e00d-490d-b8d0-26eab9d3b9a8-inventory\") pod \"bootstrap-openstack-openstack-cell1-pqq5v\" (UID: \"1933b73e-e00d-490d-b8d0-26eab9d3b9a8\") " pod="openstack/bootstrap-openstack-openstack-cell1-pqq5v" Nov 22 12:36:00 crc kubenswrapper[4772]: 
I1122 12:36:00.620893 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1933b73e-e00d-490d-b8d0-26eab9d3b9a8-ssh-key\") pod \"bootstrap-openstack-openstack-cell1-pqq5v\" (UID: \"1933b73e-e00d-490d-b8d0-26eab9d3b9a8\") " pod="openstack/bootstrap-openstack-openstack-cell1-pqq5v" Nov 22 12:36:00 crc kubenswrapper[4772]: I1122 12:36:00.620927 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1933b73e-e00d-490d-b8d0-26eab9d3b9a8-ceph\") pod \"bootstrap-openstack-openstack-cell1-pqq5v\" (UID: \"1933b73e-e00d-490d-b8d0-26eab9d3b9a8\") " pod="openstack/bootstrap-openstack-openstack-cell1-pqq5v" Nov 22 12:36:00 crc kubenswrapper[4772]: I1122 12:36:00.620964 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtklg\" (UniqueName: \"kubernetes.io/projected/1933b73e-e00d-490d-b8d0-26eab9d3b9a8-kube-api-access-dtklg\") pod \"bootstrap-openstack-openstack-cell1-pqq5v\" (UID: \"1933b73e-e00d-490d-b8d0-26eab9d3b9a8\") " pod="openstack/bootstrap-openstack-openstack-cell1-pqq5v" Nov 22 12:36:00 crc kubenswrapper[4772]: I1122 12:36:00.621061 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1933b73e-e00d-490d-b8d0-26eab9d3b9a8-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-pqq5v\" (UID: \"1933b73e-e00d-490d-b8d0-26eab9d3b9a8\") " pod="openstack/bootstrap-openstack-openstack-cell1-pqq5v" Nov 22 12:36:00 crc kubenswrapper[4772]: I1122 12:36:00.627154 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1933b73e-e00d-490d-b8d0-26eab9d3b9a8-ssh-key\") pod \"bootstrap-openstack-openstack-cell1-pqq5v\" (UID: \"1933b73e-e00d-490d-b8d0-26eab9d3b9a8\") " pod="openstack/bootstrap-openstack-openstack-cell1-pqq5v" Nov 22 12:36:00 crc kubenswrapper[4772]: I1122 12:36:00.627686 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1933b73e-e00d-490d-b8d0-26eab9d3b9a8-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-pqq5v\" (UID: \"1933b73e-e00d-490d-b8d0-26eab9d3b9a8\") " pod="openstack/bootstrap-openstack-openstack-cell1-pqq5v" Nov 22 12:36:00 crc kubenswrapper[4772]: I1122 12:36:00.633613 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1933b73e-e00d-490d-b8d0-26eab9d3b9a8-ceph\") pod \"bootstrap-openstack-openstack-cell1-pqq5v\" (UID: \"1933b73e-e00d-490d-b8d0-26eab9d3b9a8\") " pod="openstack/bootstrap-openstack-openstack-cell1-pqq5v" Nov 22 12:36:00 crc kubenswrapper[4772]: I1122 12:36:00.635300 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1933b73e-e00d-490d-b8d0-26eab9d3b9a8-inventory\") pod \"bootstrap-openstack-openstack-cell1-pqq5v\" (UID: \"1933b73e-e00d-490d-b8d0-26eab9d3b9a8\") " pod="openstack/bootstrap-openstack-openstack-cell1-pqq5v" Nov 22 12:36:00 crc kubenswrapper[4772]: I1122 12:36:00.641909 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtklg\" (UniqueName: \"kubernetes.io/projected/1933b73e-e00d-490d-b8d0-26eab9d3b9a8-kube-api-access-dtklg\") pod \"bootstrap-openstack-openstack-cell1-pqq5v\" (UID: 
\"1933b73e-e00d-490d-b8d0-26eab9d3b9a8\") " pod="openstack/bootstrap-openstack-openstack-cell1-pqq5v" Nov 22 12:36:00 crc kubenswrapper[4772]: I1122 12:36:00.716379 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-pqq5v" Nov 22 12:36:01 crc kubenswrapper[4772]: I1122 12:36:01.269251 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-pqq5v"] Nov 22 12:36:01 crc kubenswrapper[4772]: I1122 12:36:01.664457 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-pqq5v" event={"ID":"1933b73e-e00d-490d-b8d0-26eab9d3b9a8","Type":"ContainerStarted","Data":"596cf3b1099d6825bce5c1ae67646317c26768e3eebc6f15fb542bf4250f269b"} Nov 22 12:36:01 crc kubenswrapper[4772]: I1122 12:36:01.955164 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 22 12:36:02 crc kubenswrapper[4772]: I1122 12:36:02.677577 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-pqq5v" event={"ID":"1933b73e-e00d-490d-b8d0-26eab9d3b9a8","Type":"ContainerStarted","Data":"952a034862804db5413ec9e6220bdd08c2dbe6d3e5c35bae0c44face58edb76d"} Nov 22 12:36:02 crc kubenswrapper[4772]: I1122 12:36:02.696904 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-openstack-openstack-cell1-pqq5v" podStartSLOduration=2.021237675 podStartE2EDuration="2.696884242s" podCreationTimestamp="2025-11-22 12:36:00 +0000 UTC" firstStartedPulling="2025-11-22 12:36:01.276692977 +0000 UTC m=+7081.516137471" lastFinishedPulling="2025-11-22 12:36:01.952339544 +0000 UTC m=+7082.191784038" observedRunningTime="2025-11-22 12:36:02.693304623 +0000 UTC m=+7082.932749137" watchObservedRunningTime="2025-11-22 12:36:02.696884242 +0000 UTC m=+7082.936328736" Nov 22 12:37:01 crc kubenswrapper[4772]: I1122 12:37:01.532788 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 12:37:01 crc kubenswrapper[4772]: I1122 12:37:01.533350 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 12:37:31 crc kubenswrapper[4772]: I1122 12:37:31.533252 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 12:37:31 crc kubenswrapper[4772]: I1122 12:37:31.533799 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 12:38:01 crc kubenswrapper[4772]: I1122 12:38:01.533778 4772 patch_prober.go:28] interesting 
pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 12:38:01 crc kubenswrapper[4772]: I1122 12:38:01.534455 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 12:38:01 crc kubenswrapper[4772]: I1122 12:38:01.534521 4772 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" Nov 22 12:38:01 crc kubenswrapper[4772]: I1122 12:38:01.535660 4772 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8a1530730f8f450880c3cb53240d2358c82f70f75fcd089df4a21b648bec0621"} pod="openshift-machine-config-operator/machine-config-daemon-wwshd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 12:38:01 crc kubenswrapper[4772]: I1122 12:38:01.535744 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" containerID="cri-o://8a1530730f8f450880c3cb53240d2358c82f70f75fcd089df4a21b648bec0621" gracePeriod=600 Nov 22 12:38:01 crc kubenswrapper[4772]: E1122 12:38:01.662225 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:38:02 crc kubenswrapper[4772]: I1122 12:38:02.024980 4772 generic.go:334] "Generic (PLEG): container finished" podID="2386c238-461f-4956-940f-ac3c26eb052e" containerID="8a1530730f8f450880c3cb53240d2358c82f70f75fcd089df4a21b648bec0621" exitCode=0 Nov 22 12:38:02 crc kubenswrapper[4772]: I1122 12:38:02.025038 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" event={"ID":"2386c238-461f-4956-940f-ac3c26eb052e","Type":"ContainerDied","Data":"8a1530730f8f450880c3cb53240d2358c82f70f75fcd089df4a21b648bec0621"} Nov 22 12:38:02 crc kubenswrapper[4772]: I1122 12:38:02.025160 4772 scope.go:117] "RemoveContainer" containerID="8158c00a0432c0447fd431172037b67d74a8cf1c2bbd496669c8a8e887acbee1" Nov 22 12:38:02 crc kubenswrapper[4772]: I1122 12:38:02.026329 4772 scope.go:117] "RemoveContainer" containerID="8a1530730f8f450880c3cb53240d2358c82f70f75fcd089df4a21b648bec0621" Nov 22 12:38:02 crc kubenswrapper[4772]: E1122 12:38:02.027136 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:38:13 crc kubenswrapper[4772]: I1122 12:38:13.417110 4772 scope.go:117] "RemoveContainer" containerID="8a1530730f8f450880c3cb53240d2358c82f70f75fcd089df4a21b648bec0621" Nov 22 12:38:13 crc kubenswrapper[4772]: E1122 12:38:13.418357 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:38:26 crc kubenswrapper[4772]: I1122 12:38:26.414261 4772 scope.go:117] "RemoveContainer" containerID="8a1530730f8f450880c3cb53240d2358c82f70f75fcd089df4a21b648bec0621" Nov 22 12:38:26 crc kubenswrapper[4772]: E1122 12:38:26.415515 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:38:36 crc kubenswrapper[4772]: I1122 12:38:36.038848 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jx272"] Nov 22 12:38:36 crc kubenswrapper[4772]: I1122 12:38:36.041727 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jx272" Nov 22 12:38:36 crc kubenswrapper[4772]: I1122 12:38:36.052105 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jx272"] Nov 22 12:38:36 crc kubenswrapper[4772]: I1122 12:38:36.200827 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72936df9-6aa1-4bda-89c7-ab0012a1a6a6-utilities\") pod \"community-operators-jx272\" (UID: \"72936df9-6aa1-4bda-89c7-ab0012a1a6a6\") " pod="openshift-marketplace/community-operators-jx272" Nov 22 12:38:36 crc kubenswrapper[4772]: I1122 12:38:36.200981 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9chnx\" (UniqueName: \"kubernetes.io/projected/72936df9-6aa1-4bda-89c7-ab0012a1a6a6-kube-api-access-9chnx\") pod \"community-operators-jx272\" (UID: \"72936df9-6aa1-4bda-89c7-ab0012a1a6a6\") " pod="openshift-marketplace/community-operators-jx272" Nov 22 12:38:36 crc kubenswrapper[4772]: I1122 12:38:36.201007 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72936df9-6aa1-4bda-89c7-ab0012a1a6a6-catalog-content\") pod \"community-operators-jx272\" (UID: \"72936df9-6aa1-4bda-89c7-ab0012a1a6a6\") " pod="openshift-marketplace/community-operators-jx272" Nov 22 12:38:36 crc kubenswrapper[4772]: I1122 12:38:36.302918 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72936df9-6aa1-4bda-89c7-ab0012a1a6a6-utilities\") pod \"community-operators-jx272\" (UID: 
\"72936df9-6aa1-4bda-89c7-ab0012a1a6a6\") " pod="openshift-marketplace/community-operators-jx272" Nov 22 12:38:36 crc kubenswrapper[4772]: I1122 12:38:36.303087 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9chnx\" (UniqueName: \"kubernetes.io/projected/72936df9-6aa1-4bda-89c7-ab0012a1a6a6-kube-api-access-9chnx\") pod \"community-operators-jx272\" (UID: \"72936df9-6aa1-4bda-89c7-ab0012a1a6a6\") " pod="openshift-marketplace/community-operators-jx272" Nov 22 12:38:36 crc kubenswrapper[4772]: I1122 12:38:36.303116 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72936df9-6aa1-4bda-89c7-ab0012a1a6a6-catalog-content\") pod \"community-operators-jx272\" (UID: \"72936df9-6aa1-4bda-89c7-ab0012a1a6a6\") " pod="openshift-marketplace/community-operators-jx272" Nov 22 12:38:36 crc kubenswrapper[4772]: I1122 12:38:36.303581 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72936df9-6aa1-4bda-89c7-ab0012a1a6a6-catalog-content\") pod \"community-operators-jx272\" (UID: \"72936df9-6aa1-4bda-89c7-ab0012a1a6a6\") " pod="openshift-marketplace/community-operators-jx272" Nov 22 12:38:36 crc kubenswrapper[4772]: I1122 12:38:36.303610 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72936df9-6aa1-4bda-89c7-ab0012a1a6a6-utilities\") pod \"community-operators-jx272\" (UID: \"72936df9-6aa1-4bda-89c7-ab0012a1a6a6\") " pod="openshift-marketplace/community-operators-jx272" Nov 22 12:38:36 crc kubenswrapper[4772]: I1122 12:38:36.335474 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9chnx\" (UniqueName: \"kubernetes.io/projected/72936df9-6aa1-4bda-89c7-ab0012a1a6a6-kube-api-access-9chnx\") pod \"community-operators-jx272\" (UID: \"72936df9-6aa1-4bda-89c7-ab0012a1a6a6\") " pod="openshift-marketplace/community-operators-jx272" Nov 22 12:38:36 crc kubenswrapper[4772]: I1122 12:38:36.376415 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jx272" Nov 22 12:38:36 crc kubenswrapper[4772]: I1122 12:38:36.941322 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jx272"] Nov 22 12:38:37 crc kubenswrapper[4772]: I1122 12:38:37.495647 4772 generic.go:334] "Generic (PLEG): container finished" podID="72936df9-6aa1-4bda-89c7-ab0012a1a6a6" containerID="fee64f9b9609270ae2157ba1a83f7fc7db4d5b06eae5e28df66ae713925ccd15" exitCode=0 Nov 22 12:38:37 crc kubenswrapper[4772]: I1122 12:38:37.495839 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jx272" event={"ID":"72936df9-6aa1-4bda-89c7-ab0012a1a6a6","Type":"ContainerDied","Data":"fee64f9b9609270ae2157ba1a83f7fc7db4d5b06eae5e28df66ae713925ccd15"} Nov 22 12:38:37 crc kubenswrapper[4772]: I1122 12:38:37.496334 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jx272" event={"ID":"72936df9-6aa1-4bda-89c7-ab0012a1a6a6","Type":"ContainerStarted","Data":"890070ecd37f7dddc3f0bb6c716a82432d3987488436ba45434fc626402b75c5"} Nov 22 12:38:38 crc kubenswrapper[4772]: I1122 12:38:38.507873 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jx272" event={"ID":"72936df9-6aa1-4bda-89c7-ab0012a1a6a6","Type":"ContainerStarted","Data":"656a429589d4f13e0fbd26a8278cefe9286aeee9f3e58d296e32e78bd225a547"} Nov 22 12:38:40 crc kubenswrapper[4772]: I1122 12:38:40.413888 4772 scope.go:117] "RemoveContainer" containerID="8a1530730f8f450880c3cb53240d2358c82f70f75fcd089df4a21b648bec0621" Nov 22 12:38:40 crc kubenswrapper[4772]: E1122 12:38:40.415711 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:38:41 crc kubenswrapper[4772]: I1122 12:38:41.539784 4772 generic.go:334] "Generic (PLEG): container finished" podID="72936df9-6aa1-4bda-89c7-ab0012a1a6a6" containerID="656a429589d4f13e0fbd26a8278cefe9286aeee9f3e58d296e32e78bd225a547" exitCode=0 Nov 22 12:38:41 crc kubenswrapper[4772]: I1122 12:38:41.539831 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jx272" event={"ID":"72936df9-6aa1-4bda-89c7-ab0012a1a6a6","Type":"ContainerDied","Data":"656a429589d4f13e0fbd26a8278cefe9286aeee9f3e58d296e32e78bd225a547"} Nov 22 12:38:42 crc kubenswrapper[4772]: I1122 12:38:42.554760 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jx272" event={"ID":"72936df9-6aa1-4bda-89c7-ab0012a1a6a6","Type":"ContainerStarted","Data":"ee17bd6b87ec1b0b8ffca8ffec9b9c597938dc02cdff37934b1b26deff637fe1"} Nov 22 12:38:42 crc kubenswrapper[4772]: I1122 12:38:42.579216 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jx272" podStartSLOduration=2.128029535 podStartE2EDuration="6.579186712s" podCreationTimestamp="2025-11-22 12:38:36 +0000 UTC" firstStartedPulling="2025-11-22 12:38:37.498355743 +0000 UTC m=+7237.737800237" lastFinishedPulling="2025-11-22 12:38:41.94951292 +0000 UTC m=+7242.188957414" observedRunningTime="2025-11-22 
12:38:42.571547824 +0000 UTC m=+7242.810992318" watchObservedRunningTime="2025-11-22 12:38:42.579186712 +0000 UTC m=+7242.818631206" Nov 22 12:38:46 crc kubenswrapper[4772]: I1122 12:38:46.376935 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jx272" Nov 22 12:38:46 crc kubenswrapper[4772]: I1122 12:38:46.377680 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jx272" Nov 22 12:38:46 crc kubenswrapper[4772]: I1122 12:38:46.457954 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jx272" Nov 22 12:38:55 crc kubenswrapper[4772]: I1122 12:38:55.414902 4772 scope.go:117] "RemoveContainer" containerID="8a1530730f8f450880c3cb53240d2358c82f70f75fcd089df4a21b648bec0621" Nov 22 12:38:55 crc kubenswrapper[4772]: E1122 12:38:55.415567 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:38:56 crc kubenswrapper[4772]: I1122 12:38:56.428374 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jx272" Nov 22 12:38:56 crc kubenswrapper[4772]: I1122 12:38:56.483228 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jx272"] Nov 22 12:38:56 crc kubenswrapper[4772]: I1122 12:38:56.695944 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jx272" podUID="72936df9-6aa1-4bda-89c7-ab0012a1a6a6" containerName="registry-server" containerID="cri-o://ee17bd6b87ec1b0b8ffca8ffec9b9c597938dc02cdff37934b1b26deff637fe1" gracePeriod=2 Nov 22 12:38:57 crc kubenswrapper[4772]: I1122 12:38:57.225514 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jx272" Nov 22 12:38:57 crc kubenswrapper[4772]: I1122 12:38:57.333788 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9chnx\" (UniqueName: \"kubernetes.io/projected/72936df9-6aa1-4bda-89c7-ab0012a1a6a6-kube-api-access-9chnx\") pod \"72936df9-6aa1-4bda-89c7-ab0012a1a6a6\" (UID: \"72936df9-6aa1-4bda-89c7-ab0012a1a6a6\") " Nov 22 12:38:57 crc kubenswrapper[4772]: I1122 12:38:57.333979 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72936df9-6aa1-4bda-89c7-ab0012a1a6a6-utilities\") pod \"72936df9-6aa1-4bda-89c7-ab0012a1a6a6\" (UID: \"72936df9-6aa1-4bda-89c7-ab0012a1a6a6\") " Nov 22 12:38:57 crc kubenswrapper[4772]: I1122 12:38:57.334014 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72936df9-6aa1-4bda-89c7-ab0012a1a6a6-catalog-content\") pod \"72936df9-6aa1-4bda-89c7-ab0012a1a6a6\" (UID: \"72936df9-6aa1-4bda-89c7-ab0012a1a6a6\") " Nov 22 12:38:57 crc kubenswrapper[4772]: I1122 12:38:57.334883 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72936df9-6aa1-4bda-89c7-ab0012a1a6a6-utilities" (OuterVolumeSpecName: "utilities") pod "72936df9-6aa1-4bda-89c7-ab0012a1a6a6" (UID: "72936df9-6aa1-4bda-89c7-ab0012a1a6a6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 12:38:57 crc kubenswrapper[4772]: I1122 12:38:57.342959 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72936df9-6aa1-4bda-89c7-ab0012a1a6a6-kube-api-access-9chnx" (OuterVolumeSpecName: "kube-api-access-9chnx") pod "72936df9-6aa1-4bda-89c7-ab0012a1a6a6" (UID: "72936df9-6aa1-4bda-89c7-ab0012a1a6a6"). InnerVolumeSpecName "kube-api-access-9chnx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:38:57 crc kubenswrapper[4772]: I1122 12:38:57.402799 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72936df9-6aa1-4bda-89c7-ab0012a1a6a6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "72936df9-6aa1-4bda-89c7-ab0012a1a6a6" (UID: "72936df9-6aa1-4bda-89c7-ab0012a1a6a6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 12:38:57 crc kubenswrapper[4772]: I1122 12:38:57.436903 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9chnx\" (UniqueName: \"kubernetes.io/projected/72936df9-6aa1-4bda-89c7-ab0012a1a6a6-kube-api-access-9chnx\") on node \"crc\" DevicePath \"\"" Nov 22 12:38:57 crc kubenswrapper[4772]: I1122 12:38:57.436948 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72936df9-6aa1-4bda-89c7-ab0012a1a6a6-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 12:38:57 crc kubenswrapper[4772]: I1122 12:38:57.436960 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72936df9-6aa1-4bda-89c7-ab0012a1a6a6-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 12:38:57 crc kubenswrapper[4772]: I1122 12:38:57.709870 4772 generic.go:334] "Generic (PLEG): container finished" podID="72936df9-6aa1-4bda-89c7-ab0012a1a6a6" containerID="ee17bd6b87ec1b0b8ffca8ffec9b9c597938dc02cdff37934b1b26deff637fe1" exitCode=0 Nov 22 12:38:57 crc kubenswrapper[4772]: I1122 12:38:57.709913 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jx272" event={"ID":"72936df9-6aa1-4bda-89c7-ab0012a1a6a6","Type":"ContainerDied","Data":"ee17bd6b87ec1b0b8ffca8ffec9b9c597938dc02cdff37934b1b26deff637fe1"} Nov 22 12:38:57 crc kubenswrapper[4772]: I1122 12:38:57.709941 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jx272" event={"ID":"72936df9-6aa1-4bda-89c7-ab0012a1a6a6","Type":"ContainerDied","Data":"890070ecd37f7dddc3f0bb6c716a82432d3987488436ba45434fc626402b75c5"} Nov 22 12:38:57 crc kubenswrapper[4772]: I1122 12:38:57.709961 4772 scope.go:117] "RemoveContainer" containerID="ee17bd6b87ec1b0b8ffca8ffec9b9c597938dc02cdff37934b1b26deff637fe1" Nov 22 12:38:57 crc kubenswrapper[4772]: I1122 12:38:57.710023 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jx272" Nov 22 12:38:57 crc kubenswrapper[4772]: I1122 12:38:57.732609 4772 scope.go:117] "RemoveContainer" containerID="656a429589d4f13e0fbd26a8278cefe9286aeee9f3e58d296e32e78bd225a547" Nov 22 12:38:57 crc kubenswrapper[4772]: I1122 12:38:57.743897 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jx272"] Nov 22 12:38:57 crc kubenswrapper[4772]: I1122 12:38:57.752737 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jx272"] Nov 22 12:38:57 crc kubenswrapper[4772]: I1122 12:38:57.753221 4772 scope.go:117] "RemoveContainer" containerID="fee64f9b9609270ae2157ba1a83f7fc7db4d5b06eae5e28df66ae713925ccd15" Nov 22 12:38:57 crc kubenswrapper[4772]: I1122 12:38:57.815554 4772 scope.go:117] "RemoveContainer" containerID="ee17bd6b87ec1b0b8ffca8ffec9b9c597938dc02cdff37934b1b26deff637fe1" Nov 22 12:38:57 crc kubenswrapper[4772]: E1122 12:38:57.815976 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee17bd6b87ec1b0b8ffca8ffec9b9c597938dc02cdff37934b1b26deff637fe1\": container with ID starting with ee17bd6b87ec1b0b8ffca8ffec9b9c597938dc02cdff37934b1b26deff637fe1 not found: ID does not exist" containerID="ee17bd6b87ec1b0b8ffca8ffec9b9c597938dc02cdff37934b1b26deff637fe1" Nov 22 12:38:57 crc kubenswrapper[4772]: I1122 12:38:57.816021 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee17bd6b87ec1b0b8ffca8ffec9b9c597938dc02cdff37934b1b26deff637fe1"} err="failed to get container status \"ee17bd6b87ec1b0b8ffca8ffec9b9c597938dc02cdff37934b1b26deff637fe1\": rpc error: code = NotFound desc = could not find container \"ee17bd6b87ec1b0b8ffca8ffec9b9c597938dc02cdff37934b1b26deff637fe1\": container with ID starting with ee17bd6b87ec1b0b8ffca8ffec9b9c597938dc02cdff37934b1b26deff637fe1 not found: ID does not exist" Nov 22 12:38:57 crc kubenswrapper[4772]: I1122 12:38:57.816082 4772 scope.go:117] "RemoveContainer" containerID="656a429589d4f13e0fbd26a8278cefe9286aeee9f3e58d296e32e78bd225a547" Nov 22 12:38:57 crc kubenswrapper[4772]: E1122 12:38:57.816608 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"656a429589d4f13e0fbd26a8278cefe9286aeee9f3e58d296e32e78bd225a547\": container with ID starting with 656a429589d4f13e0fbd26a8278cefe9286aeee9f3e58d296e32e78bd225a547 not found: ID does not exist" containerID="656a429589d4f13e0fbd26a8278cefe9286aeee9f3e58d296e32e78bd225a547" Nov 22 12:38:57 crc kubenswrapper[4772]: I1122 12:38:57.816641 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"656a429589d4f13e0fbd26a8278cefe9286aeee9f3e58d296e32e78bd225a547"} err="failed to get container status \"656a429589d4f13e0fbd26a8278cefe9286aeee9f3e58d296e32e78bd225a547\": rpc error: code = NotFound desc = could not find container \"656a429589d4f13e0fbd26a8278cefe9286aeee9f3e58d296e32e78bd225a547\": container with ID starting with 656a429589d4f13e0fbd26a8278cefe9286aeee9f3e58d296e32e78bd225a547 not found: ID does not exist" Nov 22 12:38:57 crc kubenswrapper[4772]: I1122 12:38:57.816666 4772 scope.go:117] "RemoveContainer" containerID="fee64f9b9609270ae2157ba1a83f7fc7db4d5b06eae5e28df66ae713925ccd15" Nov 22 12:38:57 crc kubenswrapper[4772]: E1122 12:38:57.816988 4772 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"fee64f9b9609270ae2157ba1a83f7fc7db4d5b06eae5e28df66ae713925ccd15\": container with ID starting with fee64f9b9609270ae2157ba1a83f7fc7db4d5b06eae5e28df66ae713925ccd15 not found: ID does not exist" containerID="fee64f9b9609270ae2157ba1a83f7fc7db4d5b06eae5e28df66ae713925ccd15" Nov 22 12:38:57 crc kubenswrapper[4772]: I1122 12:38:57.817039 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fee64f9b9609270ae2157ba1a83f7fc7db4d5b06eae5e28df66ae713925ccd15"} err="failed to get container status \"fee64f9b9609270ae2157ba1a83f7fc7db4d5b06eae5e28df66ae713925ccd15\": rpc error: code = NotFound desc = could not find container \"fee64f9b9609270ae2157ba1a83f7fc7db4d5b06eae5e28df66ae713925ccd15\": container with ID starting with fee64f9b9609270ae2157ba1a83f7fc7db4d5b06eae5e28df66ae713925ccd15 not found: ID does not exist" Nov 22 12:38:59 crc kubenswrapper[4772]: I1122 12:38:59.425612 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72936df9-6aa1-4bda-89c7-ab0012a1a6a6" path="/var/lib/kubelet/pods/72936df9-6aa1-4bda-89c7-ab0012a1a6a6/volumes" Nov 22 12:39:06 crc kubenswrapper[4772]: I1122 12:39:06.416133 4772 scope.go:117] "RemoveContainer" containerID="8a1530730f8f450880c3cb53240d2358c82f70f75fcd089df4a21b648bec0621" Nov 22 12:39:06 crc kubenswrapper[4772]: E1122 12:39:06.418414 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:39:06 crc kubenswrapper[4772]: I1122 12:39:06.821569 4772 generic.go:334] "Generic (PLEG): container finished" podID="1933b73e-e00d-490d-b8d0-26eab9d3b9a8" containerID="952a034862804db5413ec9e6220bdd08c2dbe6d3e5c35bae0c44face58edb76d" exitCode=0 Nov 22 12:39:06 crc kubenswrapper[4772]: I1122 12:39:06.821714 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-pqq5v" event={"ID":"1933b73e-e00d-490d-b8d0-26eab9d3b9a8","Type":"ContainerDied","Data":"952a034862804db5413ec9e6220bdd08c2dbe6d3e5c35bae0c44face58edb76d"} Nov 22 12:39:08 crc kubenswrapper[4772]: I1122 12:39:08.413116 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-pqq5v" Nov 22 12:39:08 crc kubenswrapper[4772]: I1122 12:39:08.526409 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1933b73e-e00d-490d-b8d0-26eab9d3b9a8-bootstrap-combined-ca-bundle\") pod \"1933b73e-e00d-490d-b8d0-26eab9d3b9a8\" (UID: \"1933b73e-e00d-490d-b8d0-26eab9d3b9a8\") " Nov 22 12:39:08 crc kubenswrapper[4772]: I1122 12:39:08.526480 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1933b73e-e00d-490d-b8d0-26eab9d3b9a8-ceph\") pod \"1933b73e-e00d-490d-b8d0-26eab9d3b9a8\" (UID: \"1933b73e-e00d-490d-b8d0-26eab9d3b9a8\") " Nov 22 12:39:08 crc kubenswrapper[4772]: I1122 12:39:08.526660 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1933b73e-e00d-490d-b8d0-26eab9d3b9a8-inventory\") pod \"1933b73e-e00d-490d-b8d0-26eab9d3b9a8\" (UID: \"1933b73e-e00d-490d-b8d0-26eab9d3b9a8\") " Nov 22 12:39:08 crc kubenswrapper[4772]: I1122 12:39:08.526697 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtklg\" (UniqueName: \"kubernetes.io/projected/1933b73e-e00d-490d-b8d0-26eab9d3b9a8-kube-api-access-dtklg\") pod \"1933b73e-e00d-490d-b8d0-26eab9d3b9a8\" (UID: \"1933b73e-e00d-490d-b8d0-26eab9d3b9a8\") " Nov 22 12:39:08 crc kubenswrapper[4772]: I1122 12:39:08.526743 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1933b73e-e00d-490d-b8d0-26eab9d3b9a8-ssh-key\") pod \"1933b73e-e00d-490d-b8d0-26eab9d3b9a8\" (UID: \"1933b73e-e00d-490d-b8d0-26eab9d3b9a8\") " Nov 22 12:39:08 crc kubenswrapper[4772]: I1122 12:39:08.534502 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1933b73e-e00d-490d-b8d0-26eab9d3b9a8-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "1933b73e-e00d-490d-b8d0-26eab9d3b9a8" (UID: "1933b73e-e00d-490d-b8d0-26eab9d3b9a8"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:39:08 crc kubenswrapper[4772]: I1122 12:39:08.535585 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1933b73e-e00d-490d-b8d0-26eab9d3b9a8-ceph" (OuterVolumeSpecName: "ceph") pod "1933b73e-e00d-490d-b8d0-26eab9d3b9a8" (UID: "1933b73e-e00d-490d-b8d0-26eab9d3b9a8"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:39:08 crc kubenswrapper[4772]: I1122 12:39:08.536123 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1933b73e-e00d-490d-b8d0-26eab9d3b9a8-kube-api-access-dtklg" (OuterVolumeSpecName: "kube-api-access-dtklg") pod "1933b73e-e00d-490d-b8d0-26eab9d3b9a8" (UID: "1933b73e-e00d-490d-b8d0-26eab9d3b9a8"). InnerVolumeSpecName "kube-api-access-dtklg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:39:08 crc kubenswrapper[4772]: I1122 12:39:08.560782 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1933b73e-e00d-490d-b8d0-26eab9d3b9a8-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "1933b73e-e00d-490d-b8d0-26eab9d3b9a8" (UID: "1933b73e-e00d-490d-b8d0-26eab9d3b9a8"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:39:08 crc kubenswrapper[4772]: I1122 12:39:08.568354 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1933b73e-e00d-490d-b8d0-26eab9d3b9a8-inventory" (OuterVolumeSpecName: "inventory") pod "1933b73e-e00d-490d-b8d0-26eab9d3b9a8" (UID: "1933b73e-e00d-490d-b8d0-26eab9d3b9a8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:39:08 crc kubenswrapper[4772]: I1122 12:39:08.633219 4772 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1933b73e-e00d-490d-b8d0-26eab9d3b9a8-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 12:39:08 crc kubenswrapper[4772]: I1122 12:39:08.633306 4772 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1933b73e-e00d-490d-b8d0-26eab9d3b9a8-ceph\") on node \"crc\" DevicePath \"\"" Nov 22 12:39:08 crc kubenswrapper[4772]: I1122 12:39:08.633351 4772 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1933b73e-e00d-490d-b8d0-26eab9d3b9a8-inventory\") on node \"crc\" DevicePath \"\"" Nov 22 12:39:08 crc kubenswrapper[4772]: I1122 12:39:08.633368 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtklg\" (UniqueName: \"kubernetes.io/projected/1933b73e-e00d-490d-b8d0-26eab9d3b9a8-kube-api-access-dtklg\") on node \"crc\" DevicePath \"\"" Nov 22 12:39:08 crc kubenswrapper[4772]: I1122 12:39:08.633380 4772 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1933b73e-e00d-490d-b8d0-26eab9d3b9a8-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 22 12:39:08 crc kubenswrapper[4772]: I1122 12:39:08.844383 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-pqq5v" event={"ID":"1933b73e-e00d-490d-b8d0-26eab9d3b9a8","Type":"ContainerDied","Data":"596cf3b1099d6825bce5c1ae67646317c26768e3eebc6f15fb542bf4250f269b"} Nov 22 12:39:08 crc kubenswrapper[4772]: I1122 12:39:08.844449 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-pqq5v" Nov 22 12:39:08 crc kubenswrapper[4772]: I1122 12:39:08.844472 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="596cf3b1099d6825bce5c1ae67646317c26768e3eebc6f15fb542bf4250f269b" Nov 22 12:39:08 crc kubenswrapper[4772]: I1122 12:39:08.946968 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-b9cpv"] Nov 22 12:39:08 crc kubenswrapper[4772]: E1122 12:39:08.947443 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72936df9-6aa1-4bda-89c7-ab0012a1a6a6" containerName="extract-content" Nov 22 12:39:08 crc kubenswrapper[4772]: I1122 12:39:08.947466 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="72936df9-6aa1-4bda-89c7-ab0012a1a6a6" containerName="extract-content" Nov 22 12:39:08 crc kubenswrapper[4772]: E1122 12:39:08.947495 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72936df9-6aa1-4bda-89c7-ab0012a1a6a6" containerName="extract-utilities" Nov 22 12:39:08 crc kubenswrapper[4772]: I1122 12:39:08.947504 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="72936df9-6aa1-4bda-89c7-ab0012a1a6a6" containerName="extract-utilities" Nov 22 12:39:08 crc kubenswrapper[4772]: E1122 12:39:08.947518 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1933b73e-e00d-490d-b8d0-26eab9d3b9a8" containerName="bootstrap-openstack-openstack-cell1" Nov 22 12:39:08 crc kubenswrapper[4772]: I1122 12:39:08.947525 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="1933b73e-e00d-490d-b8d0-26eab9d3b9a8" containerName="bootstrap-openstack-openstack-cell1" Nov 22 12:39:08 crc kubenswrapper[4772]: E1122 12:39:08.947557 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72936df9-6aa1-4bda-89c7-ab0012a1a6a6" containerName="registry-server" Nov 22 12:39:08 crc kubenswrapper[4772]: I1122 12:39:08.947563 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="72936df9-6aa1-4bda-89c7-ab0012a1a6a6" containerName="registry-server" Nov 22 12:39:08 crc kubenswrapper[4772]: I1122 12:39:08.947777 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="1933b73e-e00d-490d-b8d0-26eab9d3b9a8" containerName="bootstrap-openstack-openstack-cell1" Nov 22 12:39:08 crc kubenswrapper[4772]: I1122 12:39:08.947797 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="72936df9-6aa1-4bda-89c7-ab0012a1a6a6" containerName="registry-server" Nov 22 12:39:08 crc kubenswrapper[4772]: I1122 12:39:08.948634 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-b9cpv" Nov 22 12:39:08 crc kubenswrapper[4772]: I1122 12:39:08.951625 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-6s2nz" Nov 22 12:39:08 crc kubenswrapper[4772]: I1122 12:39:08.951969 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 22 12:39:08 crc kubenswrapper[4772]: I1122 12:39:08.952104 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 22 12:39:08 crc kubenswrapper[4772]: I1122 12:39:08.952413 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 22 12:39:08 crc kubenswrapper[4772]: I1122 12:39:08.971609 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-b9cpv"] Nov 22 12:39:09 crc kubenswrapper[4772]: I1122 12:39:09.043978 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st29s\" (UniqueName: \"kubernetes.io/projected/bdd3bebe-c9c8-4f42-8e6d-7f85806cdde9-kube-api-access-st29s\") pod \"download-cache-openstack-openstack-cell1-b9cpv\" (UID: \"bdd3bebe-c9c8-4f42-8e6d-7f85806cdde9\") " pod="openstack/download-cache-openstack-openstack-cell1-b9cpv" Nov 22 12:39:09 crc kubenswrapper[4772]: I1122 12:39:09.044184 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/bdd3bebe-c9c8-4f42-8e6d-7f85806cdde9-ceph\") pod \"download-cache-openstack-openstack-cell1-b9cpv\" (UID: \"bdd3bebe-c9c8-4f42-8e6d-7f85806cdde9\") " pod="openstack/download-cache-openstack-openstack-cell1-b9cpv" Nov 22 12:39:09 crc kubenswrapper[4772]: I1122 12:39:09.044212 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bdd3bebe-c9c8-4f42-8e6d-7f85806cdde9-ssh-key\") pod \"download-cache-openstack-openstack-cell1-b9cpv\" (UID: \"bdd3bebe-c9c8-4f42-8e6d-7f85806cdde9\") " pod="openstack/download-cache-openstack-openstack-cell1-b9cpv" Nov 22 12:39:09 crc kubenswrapper[4772]: I1122 12:39:09.044345 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bdd3bebe-c9c8-4f42-8e6d-7f85806cdde9-inventory\") pod \"download-cache-openstack-openstack-cell1-b9cpv\" (UID: \"bdd3bebe-c9c8-4f42-8e6d-7f85806cdde9\") " pod="openstack/download-cache-openstack-openstack-cell1-b9cpv" Nov 22 12:39:09 crc kubenswrapper[4772]: I1122 12:39:09.146133 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/bdd3bebe-c9c8-4f42-8e6d-7f85806cdde9-ceph\") pod \"download-cache-openstack-openstack-cell1-b9cpv\" (UID: \"bdd3bebe-c9c8-4f42-8e6d-7f85806cdde9\") " pod="openstack/download-cache-openstack-openstack-cell1-b9cpv" Nov 22 12:39:09 crc kubenswrapper[4772]: I1122 12:39:09.146182 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bdd3bebe-c9c8-4f42-8e6d-7f85806cdde9-ssh-key\") pod \"download-cache-openstack-openstack-cell1-b9cpv\" (UID: \"bdd3bebe-c9c8-4f42-8e6d-7f85806cdde9\") " pod="openstack/download-cache-openstack-openstack-cell1-b9cpv" Nov 22 12:39:09 crc kubenswrapper[4772]: 
I1122 12:39:09.146328 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bdd3bebe-c9c8-4f42-8e6d-7f85806cdde9-inventory\") pod \"download-cache-openstack-openstack-cell1-b9cpv\" (UID: \"bdd3bebe-c9c8-4f42-8e6d-7f85806cdde9\") " pod="openstack/download-cache-openstack-openstack-cell1-b9cpv" Nov 22 12:39:09 crc kubenswrapper[4772]: I1122 12:39:09.146374 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-st29s\" (UniqueName: \"kubernetes.io/projected/bdd3bebe-c9c8-4f42-8e6d-7f85806cdde9-kube-api-access-st29s\") pod \"download-cache-openstack-openstack-cell1-b9cpv\" (UID: \"bdd3bebe-c9c8-4f42-8e6d-7f85806cdde9\") " pod="openstack/download-cache-openstack-openstack-cell1-b9cpv" Nov 22 12:39:09 crc kubenswrapper[4772]: I1122 12:39:09.151066 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bdd3bebe-c9c8-4f42-8e6d-7f85806cdde9-inventory\") pod \"download-cache-openstack-openstack-cell1-b9cpv\" (UID: \"bdd3bebe-c9c8-4f42-8e6d-7f85806cdde9\") " pod="openstack/download-cache-openstack-openstack-cell1-b9cpv" Nov 22 12:39:09 crc kubenswrapper[4772]: I1122 12:39:09.151104 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/bdd3bebe-c9c8-4f42-8e6d-7f85806cdde9-ceph\") pod \"download-cache-openstack-openstack-cell1-b9cpv\" (UID: \"bdd3bebe-c9c8-4f42-8e6d-7f85806cdde9\") " pod="openstack/download-cache-openstack-openstack-cell1-b9cpv" Nov 22 12:39:09 crc kubenswrapper[4772]: I1122 12:39:09.151133 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bdd3bebe-c9c8-4f42-8e6d-7f85806cdde9-ssh-key\") pod \"download-cache-openstack-openstack-cell1-b9cpv\" (UID: \"bdd3bebe-c9c8-4f42-8e6d-7f85806cdde9\") " pod="openstack/download-cache-openstack-openstack-cell1-b9cpv" Nov 22 12:39:09 crc kubenswrapper[4772]: I1122 12:39:09.166810 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-st29s\" (UniqueName: \"kubernetes.io/projected/bdd3bebe-c9c8-4f42-8e6d-7f85806cdde9-kube-api-access-st29s\") pod \"download-cache-openstack-openstack-cell1-b9cpv\" (UID: \"bdd3bebe-c9c8-4f42-8e6d-7f85806cdde9\") " pod="openstack/download-cache-openstack-openstack-cell1-b9cpv" Nov 22 12:39:09 crc kubenswrapper[4772]: I1122 12:39:09.276587 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-b9cpv" Nov 22 12:39:09 crc kubenswrapper[4772]: I1122 12:39:09.938410 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-b9cpv"] Nov 22 12:39:09 crc kubenswrapper[4772]: W1122 12:39:09.949309 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbdd3bebe_c9c8_4f42_8e6d_7f85806cdde9.slice/crio-b5603740dde4c3e686e358ff067e015a3a5325758ba12c9084a3c0fd693adbf7 WatchSource:0}: Error finding container b5603740dde4c3e686e358ff067e015a3a5325758ba12c9084a3c0fd693adbf7: Status 404 returned error can't find the container with id b5603740dde4c3e686e358ff067e015a3a5325758ba12c9084a3c0fd693adbf7 Nov 22 12:39:10 crc kubenswrapper[4772]: I1122 12:39:10.877813 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-b9cpv" event={"ID":"bdd3bebe-c9c8-4f42-8e6d-7f85806cdde9","Type":"ContainerStarted","Data":"b5603740dde4c3e686e358ff067e015a3a5325758ba12c9084a3c0fd693adbf7"} Nov 22 12:39:11 crc kubenswrapper[4772]: I1122 12:39:11.891530 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-b9cpv" event={"ID":"bdd3bebe-c9c8-4f42-8e6d-7f85806cdde9","Type":"ContainerStarted","Data":"7424d5ff62a020c6b0d14262d1bbbf0222721ba4bbda94cd0fc953ff6d78e7bf"} Nov 22 12:39:11 crc kubenswrapper[4772]: I1122 12:39:11.918081 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-openstack-openstack-cell1-b9cpv" podStartSLOduration=2.554573877 podStartE2EDuration="3.918062332s" podCreationTimestamp="2025-11-22 12:39:08 +0000 UTC" firstStartedPulling="2025-11-22 12:39:09.954209378 +0000 UTC m=+7270.193653882" lastFinishedPulling="2025-11-22 12:39:11.317697813 +0000 UTC m=+7271.557142337" observedRunningTime="2025-11-22 12:39:11.912482704 +0000 UTC m=+7272.151927208" watchObservedRunningTime="2025-11-22 12:39:11.918062332 +0000 UTC m=+7272.157506826" Nov 22 12:39:17 crc kubenswrapper[4772]: I1122 12:39:17.413355 4772 scope.go:117] "RemoveContainer" containerID="8a1530730f8f450880c3cb53240d2358c82f70f75fcd089df4a21b648bec0621" Nov 22 12:39:17 crc kubenswrapper[4772]: E1122 12:39:17.414108 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:39:29 crc kubenswrapper[4772]: I1122 12:39:29.413781 4772 scope.go:117] "RemoveContainer" containerID="8a1530730f8f450880c3cb53240d2358c82f70f75fcd089df4a21b648bec0621" Nov 22 12:39:29 crc kubenswrapper[4772]: E1122 12:39:29.414623 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:39:44 crc kubenswrapper[4772]: I1122 12:39:44.414388 4772 scope.go:117] 
"RemoveContainer" containerID="8a1530730f8f450880c3cb53240d2358c82f70f75fcd089df4a21b648bec0621" Nov 22 12:39:44 crc kubenswrapper[4772]: E1122 12:39:44.415239 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:39:49 crc kubenswrapper[4772]: I1122 12:39:49.596014 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5wp7v"] Nov 22 12:39:49 crc kubenswrapper[4772]: I1122 12:39:49.602904 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5wp7v" Nov 22 12:39:49 crc kubenswrapper[4772]: I1122 12:39:49.630259 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5wp7v"] Nov 22 12:39:49 crc kubenswrapper[4772]: I1122 12:39:49.720870 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/567c61e1-04d4-4a45-ad93-79b14324099b-catalog-content\") pod \"redhat-operators-5wp7v\" (UID: \"567c61e1-04d4-4a45-ad93-79b14324099b\") " pod="openshift-marketplace/redhat-operators-5wp7v" Nov 22 12:39:49 crc kubenswrapper[4772]: I1122 12:39:49.720958 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/567c61e1-04d4-4a45-ad93-79b14324099b-utilities\") pod \"redhat-operators-5wp7v\" (UID: \"567c61e1-04d4-4a45-ad93-79b14324099b\") " pod="openshift-marketplace/redhat-operators-5wp7v" Nov 22 12:39:49 crc kubenswrapper[4772]: I1122 12:39:49.721027 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9s82l\" (UniqueName: \"kubernetes.io/projected/567c61e1-04d4-4a45-ad93-79b14324099b-kube-api-access-9s82l\") pod \"redhat-operators-5wp7v\" (UID: \"567c61e1-04d4-4a45-ad93-79b14324099b\") " pod="openshift-marketplace/redhat-operators-5wp7v" Nov 22 12:39:49 crc kubenswrapper[4772]: I1122 12:39:49.822857 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/567c61e1-04d4-4a45-ad93-79b14324099b-catalog-content\") pod \"redhat-operators-5wp7v\" (UID: \"567c61e1-04d4-4a45-ad93-79b14324099b\") " pod="openshift-marketplace/redhat-operators-5wp7v" Nov 22 12:39:49 crc kubenswrapper[4772]: I1122 12:39:49.823229 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/567c61e1-04d4-4a45-ad93-79b14324099b-utilities\") pod \"redhat-operators-5wp7v\" (UID: \"567c61e1-04d4-4a45-ad93-79b14324099b\") " pod="openshift-marketplace/redhat-operators-5wp7v" Nov 22 12:39:49 crc kubenswrapper[4772]: I1122 12:39:49.823428 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9s82l\" (UniqueName: \"kubernetes.io/projected/567c61e1-04d4-4a45-ad93-79b14324099b-kube-api-access-9s82l\") pod \"redhat-operators-5wp7v\" (UID: \"567c61e1-04d4-4a45-ad93-79b14324099b\") " pod="openshift-marketplace/redhat-operators-5wp7v" Nov 22 12:39:49 crc 
kubenswrapper[4772]: I1122 12:39:49.823762 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/567c61e1-04d4-4a45-ad93-79b14324099b-catalog-content\") pod \"redhat-operators-5wp7v\" (UID: \"567c61e1-04d4-4a45-ad93-79b14324099b\") " pod="openshift-marketplace/redhat-operators-5wp7v" Nov 22 12:39:49 crc kubenswrapper[4772]: I1122 12:39:49.823799 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/567c61e1-04d4-4a45-ad93-79b14324099b-utilities\") pod \"redhat-operators-5wp7v\" (UID: \"567c61e1-04d4-4a45-ad93-79b14324099b\") " pod="openshift-marketplace/redhat-operators-5wp7v" Nov 22 12:39:49 crc kubenswrapper[4772]: I1122 12:39:49.841200 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9s82l\" (UniqueName: \"kubernetes.io/projected/567c61e1-04d4-4a45-ad93-79b14324099b-kube-api-access-9s82l\") pod \"redhat-operators-5wp7v\" (UID: \"567c61e1-04d4-4a45-ad93-79b14324099b\") " pod="openshift-marketplace/redhat-operators-5wp7v" Nov 22 12:39:49 crc kubenswrapper[4772]: I1122 12:39:49.935546 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5wp7v" Nov 22 12:39:50 crc kubenswrapper[4772]: I1122 12:39:50.546102 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5wp7v"] Nov 22 12:39:51 crc kubenswrapper[4772]: I1122 12:39:51.363457 4772 generic.go:334] "Generic (PLEG): container finished" podID="567c61e1-04d4-4a45-ad93-79b14324099b" containerID="a300151a4ce84676eb6caaecfc9b4c8f340e75f573d36398210df2402231a200" exitCode=0 Nov 22 12:39:51 crc kubenswrapper[4772]: I1122 12:39:51.363538 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5wp7v" event={"ID":"567c61e1-04d4-4a45-ad93-79b14324099b","Type":"ContainerDied","Data":"a300151a4ce84676eb6caaecfc9b4c8f340e75f573d36398210df2402231a200"} Nov 22 12:39:51 crc kubenswrapper[4772]: I1122 12:39:51.364033 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5wp7v" event={"ID":"567c61e1-04d4-4a45-ad93-79b14324099b","Type":"ContainerStarted","Data":"37b7886f4efbe50a24191bf25564dec7dc08f6b7ac88d00f7d01d0029e732656"} Nov 22 12:39:51 crc kubenswrapper[4772]: I1122 12:39:51.367315 4772 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 12:39:52 crc kubenswrapper[4772]: I1122 12:39:52.376555 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5wp7v" event={"ID":"567c61e1-04d4-4a45-ad93-79b14324099b","Type":"ContainerStarted","Data":"97fb9199802b5e87d093275214c797b1b76f83606f2ecabe9d529db895d87aeb"} Nov 22 12:39:55 crc kubenswrapper[4772]: I1122 12:39:55.485695 4772 generic.go:334] "Generic (PLEG): container finished" podID="567c61e1-04d4-4a45-ad93-79b14324099b" containerID="97fb9199802b5e87d093275214c797b1b76f83606f2ecabe9d529db895d87aeb" exitCode=0 Nov 22 12:39:55 crc kubenswrapper[4772]: I1122 12:39:55.487613 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5wp7v" event={"ID":"567c61e1-04d4-4a45-ad93-79b14324099b","Type":"ContainerDied","Data":"97fb9199802b5e87d093275214c797b1b76f83606f2ecabe9d529db895d87aeb"} Nov 22 12:39:57 crc kubenswrapper[4772]: I1122 12:39:57.512729 4772 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/redhat-operators-5wp7v" event={"ID":"567c61e1-04d4-4a45-ad93-79b14324099b","Type":"ContainerStarted","Data":"0824be5774dee2c5b74c54deed5483b02b918e03b455ebc32be993a7ac761987"} Nov 22 12:39:57 crc kubenswrapper[4772]: I1122 12:39:57.533468 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5wp7v" podStartSLOduration=3.966684091 podStartE2EDuration="8.533444192s" podCreationTimestamp="2025-11-22 12:39:49 +0000 UTC" firstStartedPulling="2025-11-22 12:39:51.367006167 +0000 UTC m=+7311.606450661" lastFinishedPulling="2025-11-22 12:39:55.933766258 +0000 UTC m=+7316.173210762" observedRunningTime="2025-11-22 12:39:57.53172403 +0000 UTC m=+7317.771168534" watchObservedRunningTime="2025-11-22 12:39:57.533444192 +0000 UTC m=+7317.772888686" Nov 22 12:39:59 crc kubenswrapper[4772]: I1122 12:39:59.414190 4772 scope.go:117] "RemoveContainer" containerID="8a1530730f8f450880c3cb53240d2358c82f70f75fcd089df4a21b648bec0621" Nov 22 12:39:59 crc kubenswrapper[4772]: E1122 12:39:59.414984 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:39:59 crc kubenswrapper[4772]: I1122 12:39:59.935692 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5wp7v" Nov 22 12:39:59 crc kubenswrapper[4772]: I1122 12:39:59.936102 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5wp7v" Nov 22 12:40:00 crc kubenswrapper[4772]: I1122 12:40:00.981902 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5wp7v" podUID="567c61e1-04d4-4a45-ad93-79b14324099b" containerName="registry-server" probeResult="failure" output=< Nov 22 12:40:00 crc kubenswrapper[4772]: timeout: failed to connect service ":50051" within 1s Nov 22 12:40:00 crc kubenswrapper[4772]: > Nov 22 12:40:10 crc kubenswrapper[4772]: I1122 12:40:10.014708 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5wp7v" Nov 22 12:40:10 crc kubenswrapper[4772]: I1122 12:40:10.067313 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5wp7v" Nov 22 12:40:10 crc kubenswrapper[4772]: I1122 12:40:10.260063 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5wp7v"] Nov 22 12:40:11 crc kubenswrapper[4772]: I1122 12:40:11.648621 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5wp7v" podUID="567c61e1-04d4-4a45-ad93-79b14324099b" containerName="registry-server" containerID="cri-o://0824be5774dee2c5b74c54deed5483b02b918e03b455ebc32be993a7ac761987" gracePeriod=2 Nov 22 12:40:12 crc kubenswrapper[4772]: I1122 12:40:12.239530 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5wp7v" Nov 22 12:40:12 crc kubenswrapper[4772]: I1122 12:40:12.322935 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/567c61e1-04d4-4a45-ad93-79b14324099b-utilities\") pod \"567c61e1-04d4-4a45-ad93-79b14324099b\" (UID: \"567c61e1-04d4-4a45-ad93-79b14324099b\") " Nov 22 12:40:12 crc kubenswrapper[4772]: I1122 12:40:12.322998 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9s82l\" (UniqueName: \"kubernetes.io/projected/567c61e1-04d4-4a45-ad93-79b14324099b-kube-api-access-9s82l\") pod \"567c61e1-04d4-4a45-ad93-79b14324099b\" (UID: \"567c61e1-04d4-4a45-ad93-79b14324099b\") " Nov 22 12:40:12 crc kubenswrapper[4772]: I1122 12:40:12.323033 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/567c61e1-04d4-4a45-ad93-79b14324099b-catalog-content\") pod \"567c61e1-04d4-4a45-ad93-79b14324099b\" (UID: \"567c61e1-04d4-4a45-ad93-79b14324099b\") " Nov 22 12:40:12 crc kubenswrapper[4772]: I1122 12:40:12.324347 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/567c61e1-04d4-4a45-ad93-79b14324099b-utilities" (OuterVolumeSpecName: "utilities") pod "567c61e1-04d4-4a45-ad93-79b14324099b" (UID: "567c61e1-04d4-4a45-ad93-79b14324099b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 12:40:12 crc kubenswrapper[4772]: I1122 12:40:12.328495 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/567c61e1-04d4-4a45-ad93-79b14324099b-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 12:40:12 crc kubenswrapper[4772]: I1122 12:40:12.330326 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/567c61e1-04d4-4a45-ad93-79b14324099b-kube-api-access-9s82l" (OuterVolumeSpecName: "kube-api-access-9s82l") pod "567c61e1-04d4-4a45-ad93-79b14324099b" (UID: "567c61e1-04d4-4a45-ad93-79b14324099b"). InnerVolumeSpecName "kube-api-access-9s82l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:40:12 crc kubenswrapper[4772]: I1122 12:40:12.415997 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/567c61e1-04d4-4a45-ad93-79b14324099b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "567c61e1-04d4-4a45-ad93-79b14324099b" (UID: "567c61e1-04d4-4a45-ad93-79b14324099b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 12:40:12 crc kubenswrapper[4772]: I1122 12:40:12.431612 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9s82l\" (UniqueName: \"kubernetes.io/projected/567c61e1-04d4-4a45-ad93-79b14324099b-kube-api-access-9s82l\") on node \"crc\" DevicePath \"\"" Nov 22 12:40:12 crc kubenswrapper[4772]: I1122 12:40:12.431991 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/567c61e1-04d4-4a45-ad93-79b14324099b-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 12:40:12 crc kubenswrapper[4772]: I1122 12:40:12.673659 4772 generic.go:334] "Generic (PLEG): container finished" podID="567c61e1-04d4-4a45-ad93-79b14324099b" containerID="0824be5774dee2c5b74c54deed5483b02b918e03b455ebc32be993a7ac761987" exitCode=0 Nov 22 12:40:12 crc kubenswrapper[4772]: I1122 12:40:12.673695 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5wp7v" event={"ID":"567c61e1-04d4-4a45-ad93-79b14324099b","Type":"ContainerDied","Data":"0824be5774dee2c5b74c54deed5483b02b918e03b455ebc32be993a7ac761987"} Nov 22 12:40:12 crc kubenswrapper[4772]: I1122 12:40:12.674025 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5wp7v" event={"ID":"567c61e1-04d4-4a45-ad93-79b14324099b","Type":"ContainerDied","Data":"37b7886f4efbe50a24191bf25564dec7dc08f6b7ac88d00f7d01d0029e732656"} Nov 22 12:40:12 crc kubenswrapper[4772]: I1122 12:40:12.673789 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5wp7v" Nov 22 12:40:12 crc kubenswrapper[4772]: I1122 12:40:12.674120 4772 scope.go:117] "RemoveContainer" containerID="0824be5774dee2c5b74c54deed5483b02b918e03b455ebc32be993a7ac761987" Nov 22 12:40:12 crc kubenswrapper[4772]: I1122 12:40:12.702972 4772 scope.go:117] "RemoveContainer" containerID="97fb9199802b5e87d093275214c797b1b76f83606f2ecabe9d529db895d87aeb" Nov 22 12:40:12 crc kubenswrapper[4772]: I1122 12:40:12.709356 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5wp7v"] Nov 22 12:40:12 crc kubenswrapper[4772]: I1122 12:40:12.717771 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5wp7v"] Nov 22 12:40:12 crc kubenswrapper[4772]: I1122 12:40:12.728531 4772 scope.go:117] "RemoveContainer" containerID="a300151a4ce84676eb6caaecfc9b4c8f340e75f573d36398210df2402231a200" Nov 22 12:40:12 crc kubenswrapper[4772]: I1122 12:40:12.774672 4772 scope.go:117] "RemoveContainer" containerID="0824be5774dee2c5b74c54deed5483b02b918e03b455ebc32be993a7ac761987" Nov 22 12:40:12 crc kubenswrapper[4772]: E1122 12:40:12.775380 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0824be5774dee2c5b74c54deed5483b02b918e03b455ebc32be993a7ac761987\": container with ID starting with 0824be5774dee2c5b74c54deed5483b02b918e03b455ebc32be993a7ac761987 not found: ID does not exist" containerID="0824be5774dee2c5b74c54deed5483b02b918e03b455ebc32be993a7ac761987" Nov 22 12:40:12 crc kubenswrapper[4772]: I1122 12:40:12.775431 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0824be5774dee2c5b74c54deed5483b02b918e03b455ebc32be993a7ac761987"} err="failed to get container status \"0824be5774dee2c5b74c54deed5483b02b918e03b455ebc32be993a7ac761987\": 
rpc error: code = NotFound desc = could not find container \"0824be5774dee2c5b74c54deed5483b02b918e03b455ebc32be993a7ac761987\": container with ID starting with 0824be5774dee2c5b74c54deed5483b02b918e03b455ebc32be993a7ac761987 not found: ID does not exist" Nov 22 12:40:12 crc kubenswrapper[4772]: I1122 12:40:12.775460 4772 scope.go:117] "RemoveContainer" containerID="97fb9199802b5e87d093275214c797b1b76f83606f2ecabe9d529db895d87aeb" Nov 22 12:40:12 crc kubenswrapper[4772]: E1122 12:40:12.775832 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97fb9199802b5e87d093275214c797b1b76f83606f2ecabe9d529db895d87aeb\": container with ID starting with 97fb9199802b5e87d093275214c797b1b76f83606f2ecabe9d529db895d87aeb not found: ID does not exist" containerID="97fb9199802b5e87d093275214c797b1b76f83606f2ecabe9d529db895d87aeb" Nov 22 12:40:12 crc kubenswrapper[4772]: I1122 12:40:12.775902 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97fb9199802b5e87d093275214c797b1b76f83606f2ecabe9d529db895d87aeb"} err="failed to get container status \"97fb9199802b5e87d093275214c797b1b76f83606f2ecabe9d529db895d87aeb\": rpc error: code = NotFound desc = could not find container \"97fb9199802b5e87d093275214c797b1b76f83606f2ecabe9d529db895d87aeb\": container with ID starting with 97fb9199802b5e87d093275214c797b1b76f83606f2ecabe9d529db895d87aeb not found: ID does not exist" Nov 22 12:40:12 crc kubenswrapper[4772]: I1122 12:40:12.775941 4772 scope.go:117] "RemoveContainer" containerID="a300151a4ce84676eb6caaecfc9b4c8f340e75f573d36398210df2402231a200" Nov 22 12:40:12 crc kubenswrapper[4772]: E1122 12:40:12.776341 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a300151a4ce84676eb6caaecfc9b4c8f340e75f573d36398210df2402231a200\": container with ID starting with a300151a4ce84676eb6caaecfc9b4c8f340e75f573d36398210df2402231a200 not found: ID does not exist" containerID="a300151a4ce84676eb6caaecfc9b4c8f340e75f573d36398210df2402231a200" Nov 22 12:40:12 crc kubenswrapper[4772]: I1122 12:40:12.776370 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a300151a4ce84676eb6caaecfc9b4c8f340e75f573d36398210df2402231a200"} err="failed to get container status \"a300151a4ce84676eb6caaecfc9b4c8f340e75f573d36398210df2402231a200\": rpc error: code = NotFound desc = could not find container \"a300151a4ce84676eb6caaecfc9b4c8f340e75f573d36398210df2402231a200\": container with ID starting with a300151a4ce84676eb6caaecfc9b4c8f340e75f573d36398210df2402231a200 not found: ID does not exist" Nov 22 12:40:13 crc kubenswrapper[4772]: I1122 12:40:13.414281 4772 scope.go:117] "RemoveContainer" containerID="8a1530730f8f450880c3cb53240d2358c82f70f75fcd089df4a21b648bec0621" Nov 22 12:40:13 crc kubenswrapper[4772]: E1122 12:40:13.414570 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:40:13 crc kubenswrapper[4772]: I1122 12:40:13.432379 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="567c61e1-04d4-4a45-ad93-79b14324099b" path="/var/lib/kubelet/pods/567c61e1-04d4-4a45-ad93-79b14324099b/volumes" Nov 22 12:40:26 crc kubenswrapper[4772]: I1122 12:40:26.415202 4772 scope.go:117] "RemoveContainer" containerID="8a1530730f8f450880c3cb53240d2358c82f70f75fcd089df4a21b648bec0621" Nov 22 12:40:26 crc kubenswrapper[4772]: E1122 12:40:26.416752 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:40:37 crc kubenswrapper[4772]: I1122 12:40:37.416545 4772 scope.go:117] "RemoveContainer" containerID="8a1530730f8f450880c3cb53240d2358c82f70f75fcd089df4a21b648bec0621" Nov 22 12:40:37 crc kubenswrapper[4772]: E1122 12:40:37.418387 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:40:44 crc kubenswrapper[4772]: I1122 12:40:44.020234 4772 generic.go:334] "Generic (PLEG): container finished" podID="bdd3bebe-c9c8-4f42-8e6d-7f85806cdde9" containerID="7424d5ff62a020c6b0d14262d1bbbf0222721ba4bbda94cd0fc953ff6d78e7bf" exitCode=0 Nov 22 12:40:44 crc kubenswrapper[4772]: I1122 12:40:44.020381 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-b9cpv" event={"ID":"bdd3bebe-c9c8-4f42-8e6d-7f85806cdde9","Type":"ContainerDied","Data":"7424d5ff62a020c6b0d14262d1bbbf0222721ba4bbda94cd0fc953ff6d78e7bf"} Nov 22 12:40:45 crc kubenswrapper[4772]: I1122 12:40:45.584030 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-b9cpv" Nov 22 12:40:45 crc kubenswrapper[4772]: I1122 12:40:45.717195 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-st29s\" (UniqueName: \"kubernetes.io/projected/bdd3bebe-c9c8-4f42-8e6d-7f85806cdde9-kube-api-access-st29s\") pod \"bdd3bebe-c9c8-4f42-8e6d-7f85806cdde9\" (UID: \"bdd3bebe-c9c8-4f42-8e6d-7f85806cdde9\") " Nov 22 12:40:45 crc kubenswrapper[4772]: I1122 12:40:45.717407 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/bdd3bebe-c9c8-4f42-8e6d-7f85806cdde9-ceph\") pod \"bdd3bebe-c9c8-4f42-8e6d-7f85806cdde9\" (UID: \"bdd3bebe-c9c8-4f42-8e6d-7f85806cdde9\") " Nov 22 12:40:45 crc kubenswrapper[4772]: I1122 12:40:45.717627 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bdd3bebe-c9c8-4f42-8e6d-7f85806cdde9-ssh-key\") pod \"bdd3bebe-c9c8-4f42-8e6d-7f85806cdde9\" (UID: \"bdd3bebe-c9c8-4f42-8e6d-7f85806cdde9\") " Nov 22 12:40:45 crc kubenswrapper[4772]: I1122 12:40:45.717693 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bdd3bebe-c9c8-4f42-8e6d-7f85806cdde9-inventory\") pod \"bdd3bebe-c9c8-4f42-8e6d-7f85806cdde9\" (UID: \"bdd3bebe-c9c8-4f42-8e6d-7f85806cdde9\") " Nov 22 12:40:45 crc kubenswrapper[4772]: I1122 12:40:45.734357 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdd3bebe-c9c8-4f42-8e6d-7f85806cdde9-ceph" (OuterVolumeSpecName: "ceph") pod "bdd3bebe-c9c8-4f42-8e6d-7f85806cdde9" (UID: "bdd3bebe-c9c8-4f42-8e6d-7f85806cdde9"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:40:45 crc kubenswrapper[4772]: I1122 12:40:45.741628 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdd3bebe-c9c8-4f42-8e6d-7f85806cdde9-kube-api-access-st29s" (OuterVolumeSpecName: "kube-api-access-st29s") pod "bdd3bebe-c9c8-4f42-8e6d-7f85806cdde9" (UID: "bdd3bebe-c9c8-4f42-8e6d-7f85806cdde9"). InnerVolumeSpecName "kube-api-access-st29s". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:40:45 crc kubenswrapper[4772]: I1122 12:40:45.761171 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdd3bebe-c9c8-4f42-8e6d-7f85806cdde9-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "bdd3bebe-c9c8-4f42-8e6d-7f85806cdde9" (UID: "bdd3bebe-c9c8-4f42-8e6d-7f85806cdde9"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:40:45 crc kubenswrapper[4772]: I1122 12:40:45.763272 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdd3bebe-c9c8-4f42-8e6d-7f85806cdde9-inventory" (OuterVolumeSpecName: "inventory") pod "bdd3bebe-c9c8-4f42-8e6d-7f85806cdde9" (UID: "bdd3bebe-c9c8-4f42-8e6d-7f85806cdde9"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:40:45 crc kubenswrapper[4772]: I1122 12:40:45.822408 4772 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bdd3bebe-c9c8-4f42-8e6d-7f85806cdde9-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 22 12:40:45 crc kubenswrapper[4772]: I1122 12:40:45.822827 4772 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bdd3bebe-c9c8-4f42-8e6d-7f85806cdde9-inventory\") on node \"crc\" DevicePath \"\"" Nov 22 12:40:45 crc kubenswrapper[4772]: I1122 12:40:45.823629 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-st29s\" (UniqueName: \"kubernetes.io/projected/bdd3bebe-c9c8-4f42-8e6d-7f85806cdde9-kube-api-access-st29s\") on node \"crc\" DevicePath \"\"" Nov 22 12:40:45 crc kubenswrapper[4772]: I1122 12:40:45.823656 4772 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/bdd3bebe-c9c8-4f42-8e6d-7f85806cdde9-ceph\") on node \"crc\" DevicePath \"\"" Nov 22 12:40:46 crc kubenswrapper[4772]: I1122 12:40:46.047848 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-b9cpv" event={"ID":"bdd3bebe-c9c8-4f42-8e6d-7f85806cdde9","Type":"ContainerDied","Data":"b5603740dde4c3e686e358ff067e015a3a5325758ba12c9084a3c0fd693adbf7"} Nov 22 12:40:46 crc kubenswrapper[4772]: I1122 12:40:46.047888 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5603740dde4c3e686e358ff067e015a3a5325758ba12c9084a3c0fd693adbf7" Nov 22 12:40:46 crc kubenswrapper[4772]: I1122 12:40:46.047961 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-b9cpv" Nov 22 12:40:46 crc kubenswrapper[4772]: I1122 12:40:46.141495 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-swxh9"] Nov 22 12:40:46 crc kubenswrapper[4772]: E1122 12:40:46.141991 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdd3bebe-c9c8-4f42-8e6d-7f85806cdde9" containerName="download-cache-openstack-openstack-cell1" Nov 22 12:40:46 crc kubenswrapper[4772]: I1122 12:40:46.142014 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdd3bebe-c9c8-4f42-8e6d-7f85806cdde9" containerName="download-cache-openstack-openstack-cell1" Nov 22 12:40:46 crc kubenswrapper[4772]: E1122 12:40:46.142030 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="567c61e1-04d4-4a45-ad93-79b14324099b" containerName="registry-server" Nov 22 12:40:46 crc kubenswrapper[4772]: I1122 12:40:46.142039 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="567c61e1-04d4-4a45-ad93-79b14324099b" containerName="registry-server" Nov 22 12:40:46 crc kubenswrapper[4772]: E1122 12:40:46.142235 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="567c61e1-04d4-4a45-ad93-79b14324099b" containerName="extract-content" Nov 22 12:40:46 crc kubenswrapper[4772]: I1122 12:40:46.142243 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="567c61e1-04d4-4a45-ad93-79b14324099b" containerName="extract-content" Nov 22 12:40:46 crc kubenswrapper[4772]: E1122 12:40:46.142267 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="567c61e1-04d4-4a45-ad93-79b14324099b" containerName="extract-utilities" Nov 22 12:40:46 crc kubenswrapper[4772]: I1122 12:40:46.142273 4772 
state_mem.go:107] "Deleted CPUSet assignment" podUID="567c61e1-04d4-4a45-ad93-79b14324099b" containerName="extract-utilities" Nov 22 12:40:46 crc kubenswrapper[4772]: I1122 12:40:46.142524 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdd3bebe-c9c8-4f42-8e6d-7f85806cdde9" containerName="download-cache-openstack-openstack-cell1" Nov 22 12:40:46 crc kubenswrapper[4772]: I1122 12:40:46.142549 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="567c61e1-04d4-4a45-ad93-79b14324099b" containerName="registry-server" Nov 22 12:40:46 crc kubenswrapper[4772]: I1122 12:40:46.143327 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-swxh9" Nov 22 12:40:46 crc kubenswrapper[4772]: I1122 12:40:46.150108 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-6s2nz" Nov 22 12:40:46 crc kubenswrapper[4772]: I1122 12:40:46.150123 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 22 12:40:46 crc kubenswrapper[4772]: I1122 12:40:46.150304 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 22 12:40:46 crc kubenswrapper[4772]: I1122 12:40:46.150453 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 22 12:40:46 crc kubenswrapper[4772]: I1122 12:40:46.196765 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-swxh9"] Nov 22 12:40:46 crc kubenswrapper[4772]: I1122 12:40:46.231638 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bc459ceb-9b5a-42e9-a52d-68970c756a8e-ssh-key\") pod \"configure-network-openstack-openstack-cell1-swxh9\" (UID: \"bc459ceb-9b5a-42e9-a52d-68970c756a8e\") " pod="openstack/configure-network-openstack-openstack-cell1-swxh9" Nov 22 12:40:46 crc kubenswrapper[4772]: I1122 12:40:46.231712 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/bc459ceb-9b5a-42e9-a52d-68970c756a8e-ceph\") pod \"configure-network-openstack-openstack-cell1-swxh9\" (UID: \"bc459ceb-9b5a-42e9-a52d-68970c756a8e\") " pod="openstack/configure-network-openstack-openstack-cell1-swxh9" Nov 22 12:40:46 crc kubenswrapper[4772]: I1122 12:40:46.231825 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bc459ceb-9b5a-42e9-a52d-68970c756a8e-inventory\") pod \"configure-network-openstack-openstack-cell1-swxh9\" (UID: \"bc459ceb-9b5a-42e9-a52d-68970c756a8e\") " pod="openstack/configure-network-openstack-openstack-cell1-swxh9" Nov 22 12:40:46 crc kubenswrapper[4772]: I1122 12:40:46.231892 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5n42\" (UniqueName: \"kubernetes.io/projected/bc459ceb-9b5a-42e9-a52d-68970c756a8e-kube-api-access-z5n42\") pod \"configure-network-openstack-openstack-cell1-swxh9\" (UID: \"bc459ceb-9b5a-42e9-a52d-68970c756a8e\") " pod="openstack/configure-network-openstack-openstack-cell1-swxh9" Nov 22 12:40:46 crc kubenswrapper[4772]: I1122 12:40:46.334258 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-z5n42\" (UniqueName: \"kubernetes.io/projected/bc459ceb-9b5a-42e9-a52d-68970c756a8e-kube-api-access-z5n42\") pod \"configure-network-openstack-openstack-cell1-swxh9\" (UID: \"bc459ceb-9b5a-42e9-a52d-68970c756a8e\") " pod="openstack/configure-network-openstack-openstack-cell1-swxh9" Nov 22 12:40:46 crc kubenswrapper[4772]: I1122 12:40:46.334376 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bc459ceb-9b5a-42e9-a52d-68970c756a8e-ssh-key\") pod \"configure-network-openstack-openstack-cell1-swxh9\" (UID: \"bc459ceb-9b5a-42e9-a52d-68970c756a8e\") " pod="openstack/configure-network-openstack-openstack-cell1-swxh9" Nov 22 12:40:46 crc kubenswrapper[4772]: I1122 12:40:46.334410 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/bc459ceb-9b5a-42e9-a52d-68970c756a8e-ceph\") pod \"configure-network-openstack-openstack-cell1-swxh9\" (UID: \"bc459ceb-9b5a-42e9-a52d-68970c756a8e\") " pod="openstack/configure-network-openstack-openstack-cell1-swxh9" Nov 22 12:40:46 crc kubenswrapper[4772]: I1122 12:40:46.334541 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bc459ceb-9b5a-42e9-a52d-68970c756a8e-inventory\") pod \"configure-network-openstack-openstack-cell1-swxh9\" (UID: \"bc459ceb-9b5a-42e9-a52d-68970c756a8e\") " pod="openstack/configure-network-openstack-openstack-cell1-swxh9" Nov 22 12:40:46 crc kubenswrapper[4772]: I1122 12:40:46.340077 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bc459ceb-9b5a-42e9-a52d-68970c756a8e-ssh-key\") pod \"configure-network-openstack-openstack-cell1-swxh9\" (UID: \"bc459ceb-9b5a-42e9-a52d-68970c756a8e\") " pod="openstack/configure-network-openstack-openstack-cell1-swxh9" Nov 22 12:40:46 crc kubenswrapper[4772]: I1122 12:40:46.340194 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bc459ceb-9b5a-42e9-a52d-68970c756a8e-inventory\") pod \"configure-network-openstack-openstack-cell1-swxh9\" (UID: \"bc459ceb-9b5a-42e9-a52d-68970c756a8e\") " pod="openstack/configure-network-openstack-openstack-cell1-swxh9" Nov 22 12:40:46 crc kubenswrapper[4772]: I1122 12:40:46.340524 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/bc459ceb-9b5a-42e9-a52d-68970c756a8e-ceph\") pod \"configure-network-openstack-openstack-cell1-swxh9\" (UID: \"bc459ceb-9b5a-42e9-a52d-68970c756a8e\") " pod="openstack/configure-network-openstack-openstack-cell1-swxh9" Nov 22 12:40:46 crc kubenswrapper[4772]: I1122 12:40:46.351085 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5n42\" (UniqueName: \"kubernetes.io/projected/bc459ceb-9b5a-42e9-a52d-68970c756a8e-kube-api-access-z5n42\") pod \"configure-network-openstack-openstack-cell1-swxh9\" (UID: \"bc459ceb-9b5a-42e9-a52d-68970c756a8e\") " pod="openstack/configure-network-openstack-openstack-cell1-swxh9" Nov 22 12:40:46 crc kubenswrapper[4772]: I1122 12:40:46.512359 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-swxh9" Nov 22 12:40:47 crc kubenswrapper[4772]: I1122 12:40:47.099550 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-swxh9"] Nov 22 12:40:47 crc kubenswrapper[4772]: W1122 12:40:47.101946 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc459ceb_9b5a_42e9_a52d_68970c756a8e.slice/crio-f1136a0f9fdf76e1df07a961d10fae9a32d44d75abbd0996d6290995238f7665 WatchSource:0}: Error finding container f1136a0f9fdf76e1df07a961d10fae9a32d44d75abbd0996d6290995238f7665: Status 404 returned error can't find the container with id f1136a0f9fdf76e1df07a961d10fae9a32d44d75abbd0996d6290995238f7665 Nov 22 12:40:48 crc kubenswrapper[4772]: I1122 12:40:48.077798 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-swxh9" event={"ID":"bc459ceb-9b5a-42e9-a52d-68970c756a8e","Type":"ContainerStarted","Data":"c78a3f67433aae7309eb122975c5f190c6edd536436b278738ba9c8a8215b70e"} Nov 22 12:40:48 crc kubenswrapper[4772]: I1122 12:40:48.078718 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-swxh9" event={"ID":"bc459ceb-9b5a-42e9-a52d-68970c756a8e","Type":"ContainerStarted","Data":"f1136a0f9fdf76e1df07a961d10fae9a32d44d75abbd0996d6290995238f7665"} Nov 22 12:40:48 crc kubenswrapper[4772]: I1122 12:40:48.098823 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-openstack-openstack-cell1-swxh9" podStartSLOduration=1.460315112 podStartE2EDuration="2.098805662s" podCreationTimestamp="2025-11-22 12:40:46 +0000 UTC" firstStartedPulling="2025-11-22 12:40:47.105007712 +0000 UTC m=+7367.344452206" lastFinishedPulling="2025-11-22 12:40:47.743498262 +0000 UTC m=+7367.982942756" observedRunningTime="2025-11-22 12:40:48.097802488 +0000 UTC m=+7368.337246982" watchObservedRunningTime="2025-11-22 12:40:48.098805662 +0000 UTC m=+7368.338250156" Nov 22 12:40:49 crc kubenswrapper[4772]: I1122 12:40:49.415358 4772 scope.go:117] "RemoveContainer" containerID="8a1530730f8f450880c3cb53240d2358c82f70f75fcd089df4a21b648bec0621" Nov 22 12:40:49 crc kubenswrapper[4772]: E1122 12:40:49.416494 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:41:04 crc kubenswrapper[4772]: I1122 12:41:04.414685 4772 scope.go:117] "RemoveContainer" containerID="8a1530730f8f450880c3cb53240d2358c82f70f75fcd089df4a21b648bec0621" Nov 22 12:41:04 crc kubenswrapper[4772]: E1122 12:41:04.415719 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:41:16 crc kubenswrapper[4772]: I1122 12:41:16.415372 4772 
scope.go:117] "RemoveContainer" containerID="8a1530730f8f450880c3cb53240d2358c82f70f75fcd089df4a21b648bec0621" Nov 22 12:41:16 crc kubenswrapper[4772]: E1122 12:41:16.416983 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:41:28 crc kubenswrapper[4772]: I1122 12:41:28.413866 4772 scope.go:117] "RemoveContainer" containerID="8a1530730f8f450880c3cb53240d2358c82f70f75fcd089df4a21b648bec0621" Nov 22 12:41:28 crc kubenswrapper[4772]: E1122 12:41:28.415656 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:41:43 crc kubenswrapper[4772]: I1122 12:41:43.413927 4772 scope.go:117] "RemoveContainer" containerID="8a1530730f8f450880c3cb53240d2358c82f70f75fcd089df4a21b648bec0621" Nov 22 12:41:43 crc kubenswrapper[4772]: E1122 12:41:43.414933 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:41:55 crc kubenswrapper[4772]: I1122 12:41:55.413903 4772 scope.go:117] "RemoveContainer" containerID="8a1530730f8f450880c3cb53240d2358c82f70f75fcd089df4a21b648bec0621" Nov 22 12:41:55 crc kubenswrapper[4772]: E1122 12:41:55.414737 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:42:07 crc kubenswrapper[4772]: I1122 12:42:07.416137 4772 scope.go:117] "RemoveContainer" containerID="8a1530730f8f450880c3cb53240d2358c82f70f75fcd089df4a21b648bec0621" Nov 22 12:42:07 crc kubenswrapper[4772]: E1122 12:42:07.420258 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:42:11 crc kubenswrapper[4772]: I1122 12:42:11.053899 4772 generic.go:334] "Generic (PLEG): container finished" podID="bc459ceb-9b5a-42e9-a52d-68970c756a8e" containerID="c78a3f67433aae7309eb122975c5f190c6edd536436b278738ba9c8a8215b70e" exitCode=0 Nov 22 
12:42:11 crc kubenswrapper[4772]: I1122 12:42:11.053986 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-swxh9" event={"ID":"bc459ceb-9b5a-42e9-a52d-68970c756a8e","Type":"ContainerDied","Data":"c78a3f67433aae7309eb122975c5f190c6edd536436b278738ba9c8a8215b70e"} Nov 22 12:42:12 crc kubenswrapper[4772]: I1122 12:42:12.589541 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-swxh9" Nov 22 12:42:12 crc kubenswrapper[4772]: I1122 12:42:12.666794 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5n42\" (UniqueName: \"kubernetes.io/projected/bc459ceb-9b5a-42e9-a52d-68970c756a8e-kube-api-access-z5n42\") pod \"bc459ceb-9b5a-42e9-a52d-68970c756a8e\" (UID: \"bc459ceb-9b5a-42e9-a52d-68970c756a8e\") " Nov 22 12:42:12 crc kubenswrapper[4772]: I1122 12:42:12.666960 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/bc459ceb-9b5a-42e9-a52d-68970c756a8e-ceph\") pod \"bc459ceb-9b5a-42e9-a52d-68970c756a8e\" (UID: \"bc459ceb-9b5a-42e9-a52d-68970c756a8e\") " Nov 22 12:42:12 crc kubenswrapper[4772]: I1122 12:42:12.667011 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bc459ceb-9b5a-42e9-a52d-68970c756a8e-inventory\") pod \"bc459ceb-9b5a-42e9-a52d-68970c756a8e\" (UID: \"bc459ceb-9b5a-42e9-a52d-68970c756a8e\") " Nov 22 12:42:12 crc kubenswrapper[4772]: I1122 12:42:12.667077 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bc459ceb-9b5a-42e9-a52d-68970c756a8e-ssh-key\") pod \"bc459ceb-9b5a-42e9-a52d-68970c756a8e\" (UID: \"bc459ceb-9b5a-42e9-a52d-68970c756a8e\") " Nov 22 12:42:12 crc kubenswrapper[4772]: I1122 12:42:12.677298 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc459ceb-9b5a-42e9-a52d-68970c756a8e-ceph" (OuterVolumeSpecName: "ceph") pod "bc459ceb-9b5a-42e9-a52d-68970c756a8e" (UID: "bc459ceb-9b5a-42e9-a52d-68970c756a8e"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:42:12 crc kubenswrapper[4772]: I1122 12:42:12.680311 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc459ceb-9b5a-42e9-a52d-68970c756a8e-kube-api-access-z5n42" (OuterVolumeSpecName: "kube-api-access-z5n42") pod "bc459ceb-9b5a-42e9-a52d-68970c756a8e" (UID: "bc459ceb-9b5a-42e9-a52d-68970c756a8e"). InnerVolumeSpecName "kube-api-access-z5n42". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:42:12 crc kubenswrapper[4772]: I1122 12:42:12.705383 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc459ceb-9b5a-42e9-a52d-68970c756a8e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "bc459ceb-9b5a-42e9-a52d-68970c756a8e" (UID: "bc459ceb-9b5a-42e9-a52d-68970c756a8e"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:42:12 crc kubenswrapper[4772]: I1122 12:42:12.712499 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc459ceb-9b5a-42e9-a52d-68970c756a8e-inventory" (OuterVolumeSpecName: "inventory") pod "bc459ceb-9b5a-42e9-a52d-68970c756a8e" (UID: "bc459ceb-9b5a-42e9-a52d-68970c756a8e"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:42:12 crc kubenswrapper[4772]: I1122 12:42:12.770714 4772 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/bc459ceb-9b5a-42e9-a52d-68970c756a8e-ceph\") on node \"crc\" DevicePath \"\"" Nov 22 12:42:12 crc kubenswrapper[4772]: I1122 12:42:12.770781 4772 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bc459ceb-9b5a-42e9-a52d-68970c756a8e-inventory\") on node \"crc\" DevicePath \"\"" Nov 22 12:42:12 crc kubenswrapper[4772]: I1122 12:42:12.770801 4772 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bc459ceb-9b5a-42e9-a52d-68970c756a8e-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 22 12:42:12 crc kubenswrapper[4772]: I1122 12:42:12.770818 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5n42\" (UniqueName: \"kubernetes.io/projected/bc459ceb-9b5a-42e9-a52d-68970c756a8e-kube-api-access-z5n42\") on node \"crc\" DevicePath \"\"" Nov 22 12:42:13 crc kubenswrapper[4772]: I1122 12:42:13.085951 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-swxh9" event={"ID":"bc459ceb-9b5a-42e9-a52d-68970c756a8e","Type":"ContainerDied","Data":"f1136a0f9fdf76e1df07a961d10fae9a32d44d75abbd0996d6290995238f7665"} Nov 22 12:42:13 crc kubenswrapper[4772]: I1122 12:42:13.086012 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1136a0f9fdf76e1df07a961d10fae9a32d44d75abbd0996d6290995238f7665" Nov 22 12:42:13 crc kubenswrapper[4772]: I1122 12:42:13.086073 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-swxh9" Nov 22 12:42:13 crc kubenswrapper[4772]: I1122 12:42:13.182275 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-qkcpc"] Nov 22 12:42:13 crc kubenswrapper[4772]: E1122 12:42:13.182817 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc459ceb-9b5a-42e9-a52d-68970c756a8e" containerName="configure-network-openstack-openstack-cell1" Nov 22 12:42:13 crc kubenswrapper[4772]: I1122 12:42:13.182837 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc459ceb-9b5a-42e9-a52d-68970c756a8e" containerName="configure-network-openstack-openstack-cell1" Nov 22 12:42:13 crc kubenswrapper[4772]: I1122 12:42:13.183109 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc459ceb-9b5a-42e9-a52d-68970c756a8e" containerName="configure-network-openstack-openstack-cell1" Nov 22 12:42:13 crc kubenswrapper[4772]: I1122 12:42:13.183962 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-qkcpc" Nov 22 12:42:13 crc kubenswrapper[4772]: I1122 12:42:13.186809 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 22 12:42:13 crc kubenswrapper[4772]: I1122 12:42:13.187018 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-6s2nz" Nov 22 12:42:13 crc kubenswrapper[4772]: I1122 12:42:13.187161 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 22 12:42:13 crc kubenswrapper[4772]: I1122 12:42:13.187193 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 22 12:42:13 crc kubenswrapper[4772]: I1122 12:42:13.194948 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-qkcpc"] Nov 22 12:42:13 crc kubenswrapper[4772]: I1122 12:42:13.283283 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj78s\" (UniqueName: \"kubernetes.io/projected/2cfd11cf-ca9f-44fe-90c4-372f6d436285-kube-api-access-bj78s\") pod \"validate-network-openstack-openstack-cell1-qkcpc\" (UID: \"2cfd11cf-ca9f-44fe-90c4-372f6d436285\") " pod="openstack/validate-network-openstack-openstack-cell1-qkcpc" Nov 22 12:42:13 crc kubenswrapper[4772]: I1122 12:42:13.283649 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2cfd11cf-ca9f-44fe-90c4-372f6d436285-inventory\") pod \"validate-network-openstack-openstack-cell1-qkcpc\" (UID: \"2cfd11cf-ca9f-44fe-90c4-372f6d436285\") " pod="openstack/validate-network-openstack-openstack-cell1-qkcpc" Nov 22 12:42:13 crc kubenswrapper[4772]: I1122 12:42:13.283709 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2cfd11cf-ca9f-44fe-90c4-372f6d436285-ssh-key\") pod \"validate-network-openstack-openstack-cell1-qkcpc\" (UID: \"2cfd11cf-ca9f-44fe-90c4-372f6d436285\") " pod="openstack/validate-network-openstack-openstack-cell1-qkcpc" Nov 22 12:42:13 crc kubenswrapper[4772]: I1122 12:42:13.283752 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2cfd11cf-ca9f-44fe-90c4-372f6d436285-ceph\") pod \"validate-network-openstack-openstack-cell1-qkcpc\" (UID: \"2cfd11cf-ca9f-44fe-90c4-372f6d436285\") " pod="openstack/validate-network-openstack-openstack-cell1-qkcpc" Nov 22 12:42:13 crc kubenswrapper[4772]: I1122 12:42:13.386352 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2cfd11cf-ca9f-44fe-90c4-372f6d436285-ssh-key\") pod \"validate-network-openstack-openstack-cell1-qkcpc\" (UID: \"2cfd11cf-ca9f-44fe-90c4-372f6d436285\") " pod="openstack/validate-network-openstack-openstack-cell1-qkcpc" Nov 22 12:42:13 crc kubenswrapper[4772]: I1122 12:42:13.386432 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2cfd11cf-ca9f-44fe-90c4-372f6d436285-ceph\") pod \"validate-network-openstack-openstack-cell1-qkcpc\" (UID: \"2cfd11cf-ca9f-44fe-90c4-372f6d436285\") " pod="openstack/validate-network-openstack-openstack-cell1-qkcpc" Nov 22 
12:42:13 crc kubenswrapper[4772]: I1122 12:42:13.386483 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bj78s\" (UniqueName: \"kubernetes.io/projected/2cfd11cf-ca9f-44fe-90c4-372f6d436285-kube-api-access-bj78s\") pod \"validate-network-openstack-openstack-cell1-qkcpc\" (UID: \"2cfd11cf-ca9f-44fe-90c4-372f6d436285\") " pod="openstack/validate-network-openstack-openstack-cell1-qkcpc" Nov 22 12:42:13 crc kubenswrapper[4772]: I1122 12:42:13.386638 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2cfd11cf-ca9f-44fe-90c4-372f6d436285-inventory\") pod \"validate-network-openstack-openstack-cell1-qkcpc\" (UID: \"2cfd11cf-ca9f-44fe-90c4-372f6d436285\") " pod="openstack/validate-network-openstack-openstack-cell1-qkcpc" Nov 22 12:42:13 crc kubenswrapper[4772]: I1122 12:42:13.392561 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2cfd11cf-ca9f-44fe-90c4-372f6d436285-ssh-key\") pod \"validate-network-openstack-openstack-cell1-qkcpc\" (UID: \"2cfd11cf-ca9f-44fe-90c4-372f6d436285\") " pod="openstack/validate-network-openstack-openstack-cell1-qkcpc" Nov 22 12:42:13 crc kubenswrapper[4772]: I1122 12:42:13.392784 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2cfd11cf-ca9f-44fe-90c4-372f6d436285-inventory\") pod \"validate-network-openstack-openstack-cell1-qkcpc\" (UID: \"2cfd11cf-ca9f-44fe-90c4-372f6d436285\") " pod="openstack/validate-network-openstack-openstack-cell1-qkcpc" Nov 22 12:42:13 crc kubenswrapper[4772]: I1122 12:42:13.399731 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2cfd11cf-ca9f-44fe-90c4-372f6d436285-ceph\") pod \"validate-network-openstack-openstack-cell1-qkcpc\" (UID: \"2cfd11cf-ca9f-44fe-90c4-372f6d436285\") " pod="openstack/validate-network-openstack-openstack-cell1-qkcpc" Nov 22 12:42:13 crc kubenswrapper[4772]: I1122 12:42:13.409486 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bj78s\" (UniqueName: \"kubernetes.io/projected/2cfd11cf-ca9f-44fe-90c4-372f6d436285-kube-api-access-bj78s\") pod \"validate-network-openstack-openstack-cell1-qkcpc\" (UID: \"2cfd11cf-ca9f-44fe-90c4-372f6d436285\") " pod="openstack/validate-network-openstack-openstack-cell1-qkcpc" Nov 22 12:42:13 crc kubenswrapper[4772]: I1122 12:42:13.517675 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-qkcpc" Nov 22 12:42:14 crc kubenswrapper[4772]: I1122 12:42:14.089664 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-qkcpc"] Nov 22 12:42:15 crc kubenswrapper[4772]: I1122 12:42:15.107029 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-qkcpc" event={"ID":"2cfd11cf-ca9f-44fe-90c4-372f6d436285","Type":"ContainerStarted","Data":"b1fb8307f80c6a2340bd3acee922a8d31e3d27092ab2a39a01d3f0c3a835f6bc"} Nov 22 12:42:15 crc kubenswrapper[4772]: I1122 12:42:15.107356 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-qkcpc" event={"ID":"2cfd11cf-ca9f-44fe-90c4-372f6d436285","Type":"ContainerStarted","Data":"d4c14907a89139a8a3f7ebd9b0f81ee52af9de56ec95f1693526d99ffcee8d33"} Nov 22 12:42:15 crc kubenswrapper[4772]: I1122 12:42:15.133396 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-openstack-openstack-cell1-qkcpc" podStartSLOduration=1.721371043 podStartE2EDuration="2.133376112s" podCreationTimestamp="2025-11-22 12:42:13 +0000 UTC" firstStartedPulling="2025-11-22 12:42:14.100022268 +0000 UTC m=+7454.339466782" lastFinishedPulling="2025-11-22 12:42:14.512027357 +0000 UTC m=+7454.751471851" observedRunningTime="2025-11-22 12:42:15.127912157 +0000 UTC m=+7455.367356651" watchObservedRunningTime="2025-11-22 12:42:15.133376112 +0000 UTC m=+7455.372820606" Nov 22 12:42:20 crc kubenswrapper[4772]: I1122 12:42:20.169956 4772 generic.go:334] "Generic (PLEG): container finished" podID="2cfd11cf-ca9f-44fe-90c4-372f6d436285" containerID="b1fb8307f80c6a2340bd3acee922a8d31e3d27092ab2a39a01d3f0c3a835f6bc" exitCode=0 Nov 22 12:42:20 crc kubenswrapper[4772]: I1122 12:42:20.170575 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-qkcpc" event={"ID":"2cfd11cf-ca9f-44fe-90c4-372f6d436285","Type":"ContainerDied","Data":"b1fb8307f80c6a2340bd3acee922a8d31e3d27092ab2a39a01d3f0c3a835f6bc"} Nov 22 12:42:21 crc kubenswrapper[4772]: I1122 12:42:21.623092 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-qkcpc" Nov 22 12:42:21 crc kubenswrapper[4772]: I1122 12:42:21.713923 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bj78s\" (UniqueName: \"kubernetes.io/projected/2cfd11cf-ca9f-44fe-90c4-372f6d436285-kube-api-access-bj78s\") pod \"2cfd11cf-ca9f-44fe-90c4-372f6d436285\" (UID: \"2cfd11cf-ca9f-44fe-90c4-372f6d436285\") " Nov 22 12:42:21 crc kubenswrapper[4772]: I1122 12:42:21.713995 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2cfd11cf-ca9f-44fe-90c4-372f6d436285-ssh-key\") pod \"2cfd11cf-ca9f-44fe-90c4-372f6d436285\" (UID: \"2cfd11cf-ca9f-44fe-90c4-372f6d436285\") " Nov 22 12:42:21 crc kubenswrapper[4772]: I1122 12:42:21.714016 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2cfd11cf-ca9f-44fe-90c4-372f6d436285-inventory\") pod \"2cfd11cf-ca9f-44fe-90c4-372f6d436285\" (UID: \"2cfd11cf-ca9f-44fe-90c4-372f6d436285\") " Nov 22 12:42:21 crc kubenswrapper[4772]: I1122 12:42:21.714454 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2cfd11cf-ca9f-44fe-90c4-372f6d436285-ceph\") pod \"2cfd11cf-ca9f-44fe-90c4-372f6d436285\" (UID: \"2cfd11cf-ca9f-44fe-90c4-372f6d436285\") " Nov 22 12:42:21 crc kubenswrapper[4772]: I1122 12:42:21.720351 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cfd11cf-ca9f-44fe-90c4-372f6d436285-kube-api-access-bj78s" (OuterVolumeSpecName: "kube-api-access-bj78s") pod "2cfd11cf-ca9f-44fe-90c4-372f6d436285" (UID: "2cfd11cf-ca9f-44fe-90c4-372f6d436285"). InnerVolumeSpecName "kube-api-access-bj78s". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:42:21 crc kubenswrapper[4772]: I1122 12:42:21.720438 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cfd11cf-ca9f-44fe-90c4-372f6d436285-ceph" (OuterVolumeSpecName: "ceph") pod "2cfd11cf-ca9f-44fe-90c4-372f6d436285" (UID: "2cfd11cf-ca9f-44fe-90c4-372f6d436285"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:42:21 crc kubenswrapper[4772]: I1122 12:42:21.753205 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cfd11cf-ca9f-44fe-90c4-372f6d436285-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2cfd11cf-ca9f-44fe-90c4-372f6d436285" (UID: "2cfd11cf-ca9f-44fe-90c4-372f6d436285"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:42:21 crc kubenswrapper[4772]: I1122 12:42:21.757225 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cfd11cf-ca9f-44fe-90c4-372f6d436285-inventory" (OuterVolumeSpecName: "inventory") pod "2cfd11cf-ca9f-44fe-90c4-372f6d436285" (UID: "2cfd11cf-ca9f-44fe-90c4-372f6d436285"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:42:21 crc kubenswrapper[4772]: I1122 12:42:21.817381 4772 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2cfd11cf-ca9f-44fe-90c4-372f6d436285-ceph\") on node \"crc\" DevicePath \"\"" Nov 22 12:42:21 crc kubenswrapper[4772]: I1122 12:42:21.817423 4772 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2cfd11cf-ca9f-44fe-90c4-372f6d436285-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 22 12:42:21 crc kubenswrapper[4772]: I1122 12:42:21.817440 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bj78s\" (UniqueName: \"kubernetes.io/projected/2cfd11cf-ca9f-44fe-90c4-372f6d436285-kube-api-access-bj78s\") on node \"crc\" DevicePath \"\"" Nov 22 12:42:21 crc kubenswrapper[4772]: I1122 12:42:21.817455 4772 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2cfd11cf-ca9f-44fe-90c4-372f6d436285-inventory\") on node \"crc\" DevicePath \"\"" Nov 22 12:42:22 crc kubenswrapper[4772]: I1122 12:42:22.192174 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-qkcpc" event={"ID":"2cfd11cf-ca9f-44fe-90c4-372f6d436285","Type":"ContainerDied","Data":"d4c14907a89139a8a3f7ebd9b0f81ee52af9de56ec95f1693526d99ffcee8d33"} Nov 22 12:42:22 crc kubenswrapper[4772]: I1122 12:42:22.192215 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4c14907a89139a8a3f7ebd9b0f81ee52af9de56ec95f1693526d99ffcee8d33" Nov 22 12:42:22 crc kubenswrapper[4772]: I1122 12:42:22.192238 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-qkcpc" Nov 22 12:42:22 crc kubenswrapper[4772]: I1122 12:42:22.265259 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-openstack-openstack-cell1-p42sl"] Nov 22 12:42:22 crc kubenswrapper[4772]: E1122 12:42:22.265760 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cfd11cf-ca9f-44fe-90c4-372f6d436285" containerName="validate-network-openstack-openstack-cell1" Nov 22 12:42:22 crc kubenswrapper[4772]: I1122 12:42:22.265785 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cfd11cf-ca9f-44fe-90c4-372f6d436285" containerName="validate-network-openstack-openstack-cell1" Nov 22 12:42:22 crc kubenswrapper[4772]: I1122 12:42:22.266147 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cfd11cf-ca9f-44fe-90c4-372f6d436285" containerName="validate-network-openstack-openstack-cell1" Nov 22 12:42:22 crc kubenswrapper[4772]: I1122 12:42:22.267430 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-p42sl" Nov 22 12:42:22 crc kubenswrapper[4772]: I1122 12:42:22.269942 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 22 12:42:22 crc kubenswrapper[4772]: I1122 12:42:22.270224 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 22 12:42:22 crc kubenswrapper[4772]: I1122 12:42:22.270242 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-6s2nz" Nov 22 12:42:22 crc kubenswrapper[4772]: I1122 12:42:22.274545 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 22 12:42:22 crc kubenswrapper[4772]: I1122 12:42:22.292330 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-openstack-openstack-cell1-p42sl"] Nov 22 12:42:22 crc kubenswrapper[4772]: I1122 12:42:22.414365 4772 scope.go:117] "RemoveContainer" containerID="8a1530730f8f450880c3cb53240d2358c82f70f75fcd089df4a21b648bec0621" Nov 22 12:42:22 crc kubenswrapper[4772]: E1122 12:42:22.414699 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:42:22 crc kubenswrapper[4772]: I1122 12:42:22.432720 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a543b1b-d97d-48fa-bc61-3ba67778aa3a-inventory\") pod \"install-os-openstack-openstack-cell1-p42sl\" (UID: \"5a543b1b-d97d-48fa-bc61-3ba67778aa3a\") " pod="openstack/install-os-openstack-openstack-cell1-p42sl" Nov 22 12:42:22 crc kubenswrapper[4772]: I1122 12:42:22.433126 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5a543b1b-d97d-48fa-bc61-3ba67778aa3a-ceph\") pod \"install-os-openstack-openstack-cell1-p42sl\" (UID: \"5a543b1b-d97d-48fa-bc61-3ba67778aa3a\") " pod="openstack/install-os-openstack-openstack-cell1-p42sl" Nov 22 12:42:22 crc kubenswrapper[4772]: I1122 12:42:22.433176 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5a543b1b-d97d-48fa-bc61-3ba67778aa3a-ssh-key\") pod \"install-os-openstack-openstack-cell1-p42sl\" (UID: \"5a543b1b-d97d-48fa-bc61-3ba67778aa3a\") " pod="openstack/install-os-openstack-openstack-cell1-p42sl" Nov 22 12:42:22 crc kubenswrapper[4772]: I1122 12:42:22.433221 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvq6x\" (UniqueName: \"kubernetes.io/projected/5a543b1b-d97d-48fa-bc61-3ba67778aa3a-kube-api-access-jvq6x\") pod \"install-os-openstack-openstack-cell1-p42sl\" (UID: \"5a543b1b-d97d-48fa-bc61-3ba67778aa3a\") " pod="openstack/install-os-openstack-openstack-cell1-p42sl" Nov 22 12:42:22 crc kubenswrapper[4772]: I1122 12:42:22.535914 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/5a543b1b-d97d-48fa-bc61-3ba67778aa3a-inventory\") pod \"install-os-openstack-openstack-cell1-p42sl\" (UID: \"5a543b1b-d97d-48fa-bc61-3ba67778aa3a\") " pod="openstack/install-os-openstack-openstack-cell1-p42sl" Nov 22 12:42:22 crc kubenswrapper[4772]: I1122 12:42:22.535979 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5a543b1b-d97d-48fa-bc61-3ba67778aa3a-ceph\") pod \"install-os-openstack-openstack-cell1-p42sl\" (UID: \"5a543b1b-d97d-48fa-bc61-3ba67778aa3a\") " pod="openstack/install-os-openstack-openstack-cell1-p42sl" Nov 22 12:42:22 crc kubenswrapper[4772]: I1122 12:42:22.536073 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5a543b1b-d97d-48fa-bc61-3ba67778aa3a-ssh-key\") pod \"install-os-openstack-openstack-cell1-p42sl\" (UID: \"5a543b1b-d97d-48fa-bc61-3ba67778aa3a\") " pod="openstack/install-os-openstack-openstack-cell1-p42sl" Nov 22 12:42:22 crc kubenswrapper[4772]: I1122 12:42:22.536125 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvq6x\" (UniqueName: \"kubernetes.io/projected/5a543b1b-d97d-48fa-bc61-3ba67778aa3a-kube-api-access-jvq6x\") pod \"install-os-openstack-openstack-cell1-p42sl\" (UID: \"5a543b1b-d97d-48fa-bc61-3ba67778aa3a\") " pod="openstack/install-os-openstack-openstack-cell1-p42sl" Nov 22 12:42:22 crc kubenswrapper[4772]: I1122 12:42:22.539990 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5a543b1b-d97d-48fa-bc61-3ba67778aa3a-ssh-key\") pod \"install-os-openstack-openstack-cell1-p42sl\" (UID: \"5a543b1b-d97d-48fa-bc61-3ba67778aa3a\") " pod="openstack/install-os-openstack-openstack-cell1-p42sl" Nov 22 12:42:22 crc kubenswrapper[4772]: I1122 12:42:22.541276 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a543b1b-d97d-48fa-bc61-3ba67778aa3a-inventory\") pod \"install-os-openstack-openstack-cell1-p42sl\" (UID: \"5a543b1b-d97d-48fa-bc61-3ba67778aa3a\") " pod="openstack/install-os-openstack-openstack-cell1-p42sl" Nov 22 12:42:22 crc kubenswrapper[4772]: I1122 12:42:22.545812 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5a543b1b-d97d-48fa-bc61-3ba67778aa3a-ceph\") pod \"install-os-openstack-openstack-cell1-p42sl\" (UID: \"5a543b1b-d97d-48fa-bc61-3ba67778aa3a\") " pod="openstack/install-os-openstack-openstack-cell1-p42sl" Nov 22 12:42:22 crc kubenswrapper[4772]: I1122 12:42:22.553578 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvq6x\" (UniqueName: \"kubernetes.io/projected/5a543b1b-d97d-48fa-bc61-3ba67778aa3a-kube-api-access-jvq6x\") pod \"install-os-openstack-openstack-cell1-p42sl\" (UID: \"5a543b1b-d97d-48fa-bc61-3ba67778aa3a\") " pod="openstack/install-os-openstack-openstack-cell1-p42sl" Nov 22 12:42:22 crc kubenswrapper[4772]: I1122 12:42:22.592916 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-p42sl" Nov 22 12:42:23 crc kubenswrapper[4772]: I1122 12:42:23.163620 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-openstack-openstack-cell1-p42sl"] Nov 22 12:42:23 crc kubenswrapper[4772]: I1122 12:42:23.204802 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-p42sl" event={"ID":"5a543b1b-d97d-48fa-bc61-3ba67778aa3a","Type":"ContainerStarted","Data":"12be4031138854271813b6cfdf5be192e6f2e5142722795324300917488b5d39"} Nov 22 12:42:24 crc kubenswrapper[4772]: I1122 12:42:24.219973 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-p42sl" event={"ID":"5a543b1b-d97d-48fa-bc61-3ba67778aa3a","Type":"ContainerStarted","Data":"e75187ab24533455a06f2e0e166622b9d3edbd16a017c11e81f2f457c5849a10"} Nov 22 12:42:24 crc kubenswrapper[4772]: I1122 12:42:24.258259 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-openstack-openstack-cell1-p42sl" podStartSLOduration=1.76651885 podStartE2EDuration="2.258211238s" podCreationTimestamp="2025-11-22 12:42:22 +0000 UTC" firstStartedPulling="2025-11-22 12:42:23.177972545 +0000 UTC m=+7463.417417039" lastFinishedPulling="2025-11-22 12:42:23.669664923 +0000 UTC m=+7463.909109427" observedRunningTime="2025-11-22 12:42:24.2405105 +0000 UTC m=+7464.479955004" watchObservedRunningTime="2025-11-22 12:42:24.258211238 +0000 UTC m=+7464.497655772" Nov 22 12:42:36 crc kubenswrapper[4772]: I1122 12:42:36.414499 4772 scope.go:117] "RemoveContainer" containerID="8a1530730f8f450880c3cb53240d2358c82f70f75fcd089df4a21b648bec0621" Nov 22 12:42:36 crc kubenswrapper[4772]: E1122 12:42:36.415253 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:42:48 crc kubenswrapper[4772]: I1122 12:42:48.414538 4772 scope.go:117] "RemoveContainer" containerID="8a1530730f8f450880c3cb53240d2358c82f70f75fcd089df4a21b648bec0621" Nov 22 12:42:48 crc kubenswrapper[4772]: E1122 12:42:48.415823 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:43:02 crc kubenswrapper[4772]: I1122 12:43:02.415539 4772 scope.go:117] "RemoveContainer" containerID="8a1530730f8f450880c3cb53240d2358c82f70f75fcd089df4a21b648bec0621" Nov 22 12:43:02 crc kubenswrapper[4772]: I1122 12:43:02.697500 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" event={"ID":"2386c238-461f-4956-940f-ac3c26eb052e","Type":"ContainerStarted","Data":"4ac6c69657b0d84aba6f5f6730aff3b5b410649909e22ecab0cb77dec0de67e5"} Nov 22 12:43:07 crc kubenswrapper[4772]: I1122 12:43:07.760035 4772 generic.go:334] "Generic (PLEG): container finished" 
podID="5a543b1b-d97d-48fa-bc61-3ba67778aa3a" containerID="e75187ab24533455a06f2e0e166622b9d3edbd16a017c11e81f2f457c5849a10" exitCode=0 Nov 22 12:43:07 crc kubenswrapper[4772]: I1122 12:43:07.760110 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-p42sl" event={"ID":"5a543b1b-d97d-48fa-bc61-3ba67778aa3a","Type":"ContainerDied","Data":"e75187ab24533455a06f2e0e166622b9d3edbd16a017c11e81f2f457c5849a10"} Nov 22 12:43:09 crc kubenswrapper[4772]: I1122 12:43:09.305303 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-p42sl" Nov 22 12:43:09 crc kubenswrapper[4772]: I1122 12:43:09.489762 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a543b1b-d97d-48fa-bc61-3ba67778aa3a-inventory\") pod \"5a543b1b-d97d-48fa-bc61-3ba67778aa3a\" (UID: \"5a543b1b-d97d-48fa-bc61-3ba67778aa3a\") " Nov 22 12:43:09 crc kubenswrapper[4772]: I1122 12:43:09.489904 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5a543b1b-d97d-48fa-bc61-3ba67778aa3a-ceph\") pod \"5a543b1b-d97d-48fa-bc61-3ba67778aa3a\" (UID: \"5a543b1b-d97d-48fa-bc61-3ba67778aa3a\") " Nov 22 12:43:09 crc kubenswrapper[4772]: I1122 12:43:09.490350 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5a543b1b-d97d-48fa-bc61-3ba67778aa3a-ssh-key\") pod \"5a543b1b-d97d-48fa-bc61-3ba67778aa3a\" (UID: \"5a543b1b-d97d-48fa-bc61-3ba67778aa3a\") " Nov 22 12:43:09 crc kubenswrapper[4772]: I1122 12:43:09.490657 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvq6x\" (UniqueName: \"kubernetes.io/projected/5a543b1b-d97d-48fa-bc61-3ba67778aa3a-kube-api-access-jvq6x\") pod \"5a543b1b-d97d-48fa-bc61-3ba67778aa3a\" (UID: \"5a543b1b-d97d-48fa-bc61-3ba67778aa3a\") " Nov 22 12:43:09 crc kubenswrapper[4772]: I1122 12:43:09.496151 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a543b1b-d97d-48fa-bc61-3ba67778aa3a-ceph" (OuterVolumeSpecName: "ceph") pod "5a543b1b-d97d-48fa-bc61-3ba67778aa3a" (UID: "5a543b1b-d97d-48fa-bc61-3ba67778aa3a"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:43:09 crc kubenswrapper[4772]: I1122 12:43:09.496910 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a543b1b-d97d-48fa-bc61-3ba67778aa3a-kube-api-access-jvq6x" (OuterVolumeSpecName: "kube-api-access-jvq6x") pod "5a543b1b-d97d-48fa-bc61-3ba67778aa3a" (UID: "5a543b1b-d97d-48fa-bc61-3ba67778aa3a"). InnerVolumeSpecName "kube-api-access-jvq6x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:43:09 crc kubenswrapper[4772]: I1122 12:43:09.529665 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a543b1b-d97d-48fa-bc61-3ba67778aa3a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "5a543b1b-d97d-48fa-bc61-3ba67778aa3a" (UID: "5a543b1b-d97d-48fa-bc61-3ba67778aa3a"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:43:09 crc kubenswrapper[4772]: I1122 12:43:09.533702 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a543b1b-d97d-48fa-bc61-3ba67778aa3a-inventory" (OuterVolumeSpecName: "inventory") pod "5a543b1b-d97d-48fa-bc61-3ba67778aa3a" (UID: "5a543b1b-d97d-48fa-bc61-3ba67778aa3a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:43:09 crc kubenswrapper[4772]: I1122 12:43:09.600520 4772 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5a543b1b-d97d-48fa-bc61-3ba67778aa3a-ceph\") on node \"crc\" DevicePath \"\"" Nov 22 12:43:09 crc kubenswrapper[4772]: I1122 12:43:09.600577 4772 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5a543b1b-d97d-48fa-bc61-3ba67778aa3a-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 22 12:43:09 crc kubenswrapper[4772]: I1122 12:43:09.600610 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvq6x\" (UniqueName: \"kubernetes.io/projected/5a543b1b-d97d-48fa-bc61-3ba67778aa3a-kube-api-access-jvq6x\") on node \"crc\" DevicePath \"\"" Nov 22 12:43:09 crc kubenswrapper[4772]: I1122 12:43:09.600622 4772 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a543b1b-d97d-48fa-bc61-3ba67778aa3a-inventory\") on node \"crc\" DevicePath \"\"" Nov 22 12:43:09 crc kubenswrapper[4772]: I1122 12:43:09.780515 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-p42sl" event={"ID":"5a543b1b-d97d-48fa-bc61-3ba67778aa3a","Type":"ContainerDied","Data":"12be4031138854271813b6cfdf5be192e6f2e5142722795324300917488b5d39"} Nov 22 12:43:09 crc kubenswrapper[4772]: I1122 12:43:09.780563 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12be4031138854271813b6cfdf5be192e6f2e5142722795324300917488b5d39" Nov 22 12:43:09 crc kubenswrapper[4772]: I1122 12:43:09.780567 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-p42sl" Nov 22 12:43:09 crc kubenswrapper[4772]: I1122 12:43:09.876395 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-t29j2"] Nov 22 12:43:09 crc kubenswrapper[4772]: E1122 12:43:09.876843 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a543b1b-d97d-48fa-bc61-3ba67778aa3a" containerName="install-os-openstack-openstack-cell1" Nov 22 12:43:09 crc kubenswrapper[4772]: I1122 12:43:09.876861 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a543b1b-d97d-48fa-bc61-3ba67778aa3a" containerName="install-os-openstack-openstack-cell1" Nov 22 12:43:09 crc kubenswrapper[4772]: I1122 12:43:09.877143 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a543b1b-d97d-48fa-bc61-3ba67778aa3a" containerName="install-os-openstack-openstack-cell1" Nov 22 12:43:09 crc kubenswrapper[4772]: I1122 12:43:09.877888 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-t29j2" Nov 22 12:43:09 crc kubenswrapper[4772]: I1122 12:43:09.880018 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 22 12:43:09 crc kubenswrapper[4772]: I1122 12:43:09.880428 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 22 12:43:09 crc kubenswrapper[4772]: I1122 12:43:09.880632 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-6s2nz" Nov 22 12:43:09 crc kubenswrapper[4772]: I1122 12:43:09.880962 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 22 12:43:09 crc kubenswrapper[4772]: I1122 12:43:09.887159 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-t29j2"] Nov 22 12:43:10 crc kubenswrapper[4772]: I1122 12:43:10.007910 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x62tq\" (UniqueName: \"kubernetes.io/projected/8470ca02-9bf5-4f87-80ea-55c09de031e8-kube-api-access-x62tq\") pod \"configure-os-openstack-openstack-cell1-t29j2\" (UID: \"8470ca02-9bf5-4f87-80ea-55c09de031e8\") " pod="openstack/configure-os-openstack-openstack-cell1-t29j2" Nov 22 12:43:10 crc kubenswrapper[4772]: I1122 12:43:10.008235 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8470ca02-9bf5-4f87-80ea-55c09de031e8-ceph\") pod \"configure-os-openstack-openstack-cell1-t29j2\" (UID: \"8470ca02-9bf5-4f87-80ea-55c09de031e8\") " pod="openstack/configure-os-openstack-openstack-cell1-t29j2" Nov 22 12:43:10 crc kubenswrapper[4772]: I1122 12:43:10.008626 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8470ca02-9bf5-4f87-80ea-55c09de031e8-inventory\") pod \"configure-os-openstack-openstack-cell1-t29j2\" (UID: \"8470ca02-9bf5-4f87-80ea-55c09de031e8\") " pod="openstack/configure-os-openstack-openstack-cell1-t29j2" Nov 22 12:43:10 crc kubenswrapper[4772]: I1122 12:43:10.008759 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8470ca02-9bf5-4f87-80ea-55c09de031e8-ssh-key\") pod \"configure-os-openstack-openstack-cell1-t29j2\" (UID: \"8470ca02-9bf5-4f87-80ea-55c09de031e8\") " pod="openstack/configure-os-openstack-openstack-cell1-t29j2" Nov 22 12:43:10 crc kubenswrapper[4772]: I1122 12:43:10.110692 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x62tq\" (UniqueName: \"kubernetes.io/projected/8470ca02-9bf5-4f87-80ea-55c09de031e8-kube-api-access-x62tq\") pod \"configure-os-openstack-openstack-cell1-t29j2\" (UID: \"8470ca02-9bf5-4f87-80ea-55c09de031e8\") " pod="openstack/configure-os-openstack-openstack-cell1-t29j2" Nov 22 12:43:10 crc kubenswrapper[4772]: I1122 12:43:10.110756 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8470ca02-9bf5-4f87-80ea-55c09de031e8-ceph\") pod \"configure-os-openstack-openstack-cell1-t29j2\" (UID: \"8470ca02-9bf5-4f87-80ea-55c09de031e8\") " pod="openstack/configure-os-openstack-openstack-cell1-t29j2" Nov 22 12:43:10 crc 
kubenswrapper[4772]: I1122 12:43:10.110955 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8470ca02-9bf5-4f87-80ea-55c09de031e8-inventory\") pod \"configure-os-openstack-openstack-cell1-t29j2\" (UID: \"8470ca02-9bf5-4f87-80ea-55c09de031e8\") " pod="openstack/configure-os-openstack-openstack-cell1-t29j2" Nov 22 12:43:10 crc kubenswrapper[4772]: I1122 12:43:10.111013 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8470ca02-9bf5-4f87-80ea-55c09de031e8-ssh-key\") pod \"configure-os-openstack-openstack-cell1-t29j2\" (UID: \"8470ca02-9bf5-4f87-80ea-55c09de031e8\") " pod="openstack/configure-os-openstack-openstack-cell1-t29j2" Nov 22 12:43:10 crc kubenswrapper[4772]: I1122 12:43:10.116715 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8470ca02-9bf5-4f87-80ea-55c09de031e8-ceph\") pod \"configure-os-openstack-openstack-cell1-t29j2\" (UID: \"8470ca02-9bf5-4f87-80ea-55c09de031e8\") " pod="openstack/configure-os-openstack-openstack-cell1-t29j2" Nov 22 12:43:10 crc kubenswrapper[4772]: I1122 12:43:10.116839 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8470ca02-9bf5-4f87-80ea-55c09de031e8-ssh-key\") pod \"configure-os-openstack-openstack-cell1-t29j2\" (UID: \"8470ca02-9bf5-4f87-80ea-55c09de031e8\") " pod="openstack/configure-os-openstack-openstack-cell1-t29j2" Nov 22 12:43:10 crc kubenswrapper[4772]: I1122 12:43:10.130230 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x62tq\" (UniqueName: \"kubernetes.io/projected/8470ca02-9bf5-4f87-80ea-55c09de031e8-kube-api-access-x62tq\") pod \"configure-os-openstack-openstack-cell1-t29j2\" (UID: \"8470ca02-9bf5-4f87-80ea-55c09de031e8\") " pod="openstack/configure-os-openstack-openstack-cell1-t29j2" Nov 22 12:43:10 crc kubenswrapper[4772]: I1122 12:43:10.134821 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8470ca02-9bf5-4f87-80ea-55c09de031e8-inventory\") pod \"configure-os-openstack-openstack-cell1-t29j2\" (UID: \"8470ca02-9bf5-4f87-80ea-55c09de031e8\") " pod="openstack/configure-os-openstack-openstack-cell1-t29j2" Nov 22 12:43:10 crc kubenswrapper[4772]: I1122 12:43:10.196298 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-t29j2" Nov 22 12:43:10 crc kubenswrapper[4772]: I1122 12:43:10.802846 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-t29j2"] Nov 22 12:43:11 crc kubenswrapper[4772]: I1122 12:43:11.831882 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-t29j2" event={"ID":"8470ca02-9bf5-4f87-80ea-55c09de031e8","Type":"ContainerStarted","Data":"3c13cafb7e51b754a9ec29ac9fdd3c754a5b4da2f9058572a1e837d65de40135"} Nov 22 12:43:11 crc kubenswrapper[4772]: I1122 12:43:11.832669 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-t29j2" event={"ID":"8470ca02-9bf5-4f87-80ea-55c09de031e8","Type":"ContainerStarted","Data":"429a743a600ce9e3941365bbb6c2876ec6230577e88be668658c103a68b2b3e6"} Nov 22 12:43:11 crc kubenswrapper[4772]: I1122 12:43:11.850592 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-openstack-openstack-cell1-t29j2" podStartSLOduration=2.376561628 podStartE2EDuration="2.85057326s" podCreationTimestamp="2025-11-22 12:43:09 +0000 UTC" firstStartedPulling="2025-11-22 12:43:10.809132476 +0000 UTC m=+7511.048577010" lastFinishedPulling="2025-11-22 12:43:11.283144108 +0000 UTC m=+7511.522588642" observedRunningTime="2025-11-22 12:43:11.849271038 +0000 UTC m=+7512.088715562" watchObservedRunningTime="2025-11-22 12:43:11.85057326 +0000 UTC m=+7512.090017754" Nov 22 12:43:57 crc kubenswrapper[4772]: I1122 12:43:57.303892 4772 generic.go:334] "Generic (PLEG): container finished" podID="8470ca02-9bf5-4f87-80ea-55c09de031e8" containerID="3c13cafb7e51b754a9ec29ac9fdd3c754a5b4da2f9058572a1e837d65de40135" exitCode=0 Nov 22 12:43:57 crc kubenswrapper[4772]: I1122 12:43:57.303993 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-t29j2" event={"ID":"8470ca02-9bf5-4f87-80ea-55c09de031e8","Type":"ContainerDied","Data":"3c13cafb7e51b754a9ec29ac9fdd3c754a5b4da2f9058572a1e837d65de40135"} Nov 22 12:43:58 crc kubenswrapper[4772]: I1122 12:43:58.823976 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-t29j2" Nov 22 12:43:58 crc kubenswrapper[4772]: I1122 12:43:58.987344 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8470ca02-9bf5-4f87-80ea-55c09de031e8-ceph\") pod \"8470ca02-9bf5-4f87-80ea-55c09de031e8\" (UID: \"8470ca02-9bf5-4f87-80ea-55c09de031e8\") " Nov 22 12:43:58 crc kubenswrapper[4772]: I1122 12:43:58.987478 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x62tq\" (UniqueName: \"kubernetes.io/projected/8470ca02-9bf5-4f87-80ea-55c09de031e8-kube-api-access-x62tq\") pod \"8470ca02-9bf5-4f87-80ea-55c09de031e8\" (UID: \"8470ca02-9bf5-4f87-80ea-55c09de031e8\") " Nov 22 12:43:58 crc kubenswrapper[4772]: I1122 12:43:58.987551 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8470ca02-9bf5-4f87-80ea-55c09de031e8-ssh-key\") pod \"8470ca02-9bf5-4f87-80ea-55c09de031e8\" (UID: \"8470ca02-9bf5-4f87-80ea-55c09de031e8\") " Nov 22 12:43:58 crc kubenswrapper[4772]: I1122 12:43:58.987824 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8470ca02-9bf5-4f87-80ea-55c09de031e8-inventory\") pod \"8470ca02-9bf5-4f87-80ea-55c09de031e8\" (UID: \"8470ca02-9bf5-4f87-80ea-55c09de031e8\") " Nov 22 12:43:58 crc kubenswrapper[4772]: I1122 12:43:58.993995 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8470ca02-9bf5-4f87-80ea-55c09de031e8-kube-api-access-x62tq" (OuterVolumeSpecName: "kube-api-access-x62tq") pod "8470ca02-9bf5-4f87-80ea-55c09de031e8" (UID: "8470ca02-9bf5-4f87-80ea-55c09de031e8"). InnerVolumeSpecName "kube-api-access-x62tq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:43:58 crc kubenswrapper[4772]: I1122 12:43:58.994072 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8470ca02-9bf5-4f87-80ea-55c09de031e8-ceph" (OuterVolumeSpecName: "ceph") pod "8470ca02-9bf5-4f87-80ea-55c09de031e8" (UID: "8470ca02-9bf5-4f87-80ea-55c09de031e8"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:43:59 crc kubenswrapper[4772]: I1122 12:43:59.028956 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8470ca02-9bf5-4f87-80ea-55c09de031e8-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "8470ca02-9bf5-4f87-80ea-55c09de031e8" (UID: "8470ca02-9bf5-4f87-80ea-55c09de031e8"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:43:59 crc kubenswrapper[4772]: I1122 12:43:59.029288 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8470ca02-9bf5-4f87-80ea-55c09de031e8-inventory" (OuterVolumeSpecName: "inventory") pod "8470ca02-9bf5-4f87-80ea-55c09de031e8" (UID: "8470ca02-9bf5-4f87-80ea-55c09de031e8"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:43:59 crc kubenswrapper[4772]: I1122 12:43:59.090610 4772 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8470ca02-9bf5-4f87-80ea-55c09de031e8-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 22 12:43:59 crc kubenswrapper[4772]: I1122 12:43:59.090648 4772 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8470ca02-9bf5-4f87-80ea-55c09de031e8-inventory\") on node \"crc\" DevicePath \"\"" Nov 22 12:43:59 crc kubenswrapper[4772]: I1122 12:43:59.090661 4772 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8470ca02-9bf5-4f87-80ea-55c09de031e8-ceph\") on node \"crc\" DevicePath \"\"" Nov 22 12:43:59 crc kubenswrapper[4772]: I1122 12:43:59.090673 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x62tq\" (UniqueName: \"kubernetes.io/projected/8470ca02-9bf5-4f87-80ea-55c09de031e8-kube-api-access-x62tq\") on node \"crc\" DevicePath \"\"" Nov 22 12:43:59 crc kubenswrapper[4772]: I1122 12:43:59.328290 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-t29j2" event={"ID":"8470ca02-9bf5-4f87-80ea-55c09de031e8","Type":"ContainerDied","Data":"429a743a600ce9e3941365bbb6c2876ec6230577e88be668658c103a68b2b3e6"} Nov 22 12:43:59 crc kubenswrapper[4772]: I1122 12:43:59.328332 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="429a743a600ce9e3941365bbb6c2876ec6230577e88be668658c103a68b2b3e6" Nov 22 12:43:59 crc kubenswrapper[4772]: I1122 12:43:59.328993 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-t29j2" Nov 22 12:43:59 crc kubenswrapper[4772]: I1122 12:43:59.446435 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-openstack-s46xs"] Nov 22 12:43:59 crc kubenswrapper[4772]: E1122 12:43:59.447507 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8470ca02-9bf5-4f87-80ea-55c09de031e8" containerName="configure-os-openstack-openstack-cell1" Nov 22 12:43:59 crc kubenswrapper[4772]: I1122 12:43:59.447535 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="8470ca02-9bf5-4f87-80ea-55c09de031e8" containerName="configure-os-openstack-openstack-cell1" Nov 22 12:43:59 crc kubenswrapper[4772]: I1122 12:43:59.447896 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="8470ca02-9bf5-4f87-80ea-55c09de031e8" containerName="configure-os-openstack-openstack-cell1" Nov 22 12:43:59 crc kubenswrapper[4772]: I1122 12:43:59.449029 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-openstack-s46xs" Nov 22 12:43:59 crc kubenswrapper[4772]: I1122 12:43:59.452704 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 22 12:43:59 crc kubenswrapper[4772]: I1122 12:43:59.453459 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-6s2nz" Nov 22 12:43:59 crc kubenswrapper[4772]: I1122 12:43:59.453651 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 22 12:43:59 crc kubenswrapper[4772]: I1122 12:43:59.463989 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-openstack-s46xs"] Nov 22 12:43:59 crc kubenswrapper[4772]: I1122 12:43:59.465399 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 22 12:43:59 crc kubenswrapper[4772]: I1122 12:43:59.602948 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t75vx\" (UniqueName: \"kubernetes.io/projected/9d9e9bcb-f195-4aa9-86d4-531cd424d3b6-kube-api-access-t75vx\") pod \"ssh-known-hosts-openstack-s46xs\" (UID: \"9d9e9bcb-f195-4aa9-86d4-531cd424d3b6\") " pod="openstack/ssh-known-hosts-openstack-s46xs" Nov 22 12:43:59 crc kubenswrapper[4772]: I1122 12:43:59.603007 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9d9e9bcb-f195-4aa9-86d4-531cd424d3b6-ceph\") pod \"ssh-known-hosts-openstack-s46xs\" (UID: \"9d9e9bcb-f195-4aa9-86d4-531cd424d3b6\") " pod="openstack/ssh-known-hosts-openstack-s46xs" Nov 22 12:43:59 crc kubenswrapper[4772]: I1122 12:43:59.603072 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/9d9e9bcb-f195-4aa9-86d4-531cd424d3b6-ssh-key-openstack-cell1\") pod \"ssh-known-hosts-openstack-s46xs\" (UID: \"9d9e9bcb-f195-4aa9-86d4-531cd424d3b6\") " pod="openstack/ssh-known-hosts-openstack-s46xs" Nov 22 12:43:59 crc kubenswrapper[4772]: I1122 12:43:59.603533 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9d9e9bcb-f195-4aa9-86d4-531cd424d3b6-inventory-0\") pod \"ssh-known-hosts-openstack-s46xs\" (UID: \"9d9e9bcb-f195-4aa9-86d4-531cd424d3b6\") " pod="openstack/ssh-known-hosts-openstack-s46xs" Nov 22 12:43:59 crc kubenswrapper[4772]: I1122 12:43:59.705882 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9d9e9bcb-f195-4aa9-86d4-531cd424d3b6-inventory-0\") pod \"ssh-known-hosts-openstack-s46xs\" (UID: \"9d9e9bcb-f195-4aa9-86d4-531cd424d3b6\") " pod="openstack/ssh-known-hosts-openstack-s46xs" Nov 22 12:43:59 crc kubenswrapper[4772]: I1122 12:43:59.706018 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t75vx\" (UniqueName: \"kubernetes.io/projected/9d9e9bcb-f195-4aa9-86d4-531cd424d3b6-kube-api-access-t75vx\") pod \"ssh-known-hosts-openstack-s46xs\" (UID: \"9d9e9bcb-f195-4aa9-86d4-531cd424d3b6\") " pod="openstack/ssh-known-hosts-openstack-s46xs" Nov 22 12:43:59 crc kubenswrapper[4772]: I1122 12:43:59.706083 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/9d9e9bcb-f195-4aa9-86d4-531cd424d3b6-ceph\") pod \"ssh-known-hosts-openstack-s46xs\" (UID: \"9d9e9bcb-f195-4aa9-86d4-531cd424d3b6\") " pod="openstack/ssh-known-hosts-openstack-s46xs" Nov 22 12:43:59 crc kubenswrapper[4772]: I1122 12:43:59.706116 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/9d9e9bcb-f195-4aa9-86d4-531cd424d3b6-ssh-key-openstack-cell1\") pod \"ssh-known-hosts-openstack-s46xs\" (UID: \"9d9e9bcb-f195-4aa9-86d4-531cd424d3b6\") " pod="openstack/ssh-known-hosts-openstack-s46xs" Nov 22 12:43:59 crc kubenswrapper[4772]: I1122 12:43:59.713218 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/9d9e9bcb-f195-4aa9-86d4-531cd424d3b6-ssh-key-openstack-cell1\") pod \"ssh-known-hosts-openstack-s46xs\" (UID: \"9d9e9bcb-f195-4aa9-86d4-531cd424d3b6\") " pod="openstack/ssh-known-hosts-openstack-s46xs" Nov 22 12:43:59 crc kubenswrapper[4772]: I1122 12:43:59.713369 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9d9e9bcb-f195-4aa9-86d4-531cd424d3b6-inventory-0\") pod \"ssh-known-hosts-openstack-s46xs\" (UID: \"9d9e9bcb-f195-4aa9-86d4-531cd424d3b6\") " pod="openstack/ssh-known-hosts-openstack-s46xs" Nov 22 12:43:59 crc kubenswrapper[4772]: I1122 12:43:59.715524 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9d9e9bcb-f195-4aa9-86d4-531cd424d3b6-ceph\") pod \"ssh-known-hosts-openstack-s46xs\" (UID: \"9d9e9bcb-f195-4aa9-86d4-531cd424d3b6\") " pod="openstack/ssh-known-hosts-openstack-s46xs" Nov 22 12:43:59 crc kubenswrapper[4772]: I1122 12:43:59.723805 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t75vx\" (UniqueName: \"kubernetes.io/projected/9d9e9bcb-f195-4aa9-86d4-531cd424d3b6-kube-api-access-t75vx\") pod \"ssh-known-hosts-openstack-s46xs\" (UID: \"9d9e9bcb-f195-4aa9-86d4-531cd424d3b6\") " pod="openstack/ssh-known-hosts-openstack-s46xs" Nov 22 12:43:59 crc kubenswrapper[4772]: I1122 12:43:59.770288 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-openstack-s46xs" Nov 22 12:44:00 crc kubenswrapper[4772]: I1122 12:44:00.419333 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-openstack-s46xs"] Nov 22 12:44:00 crc kubenswrapper[4772]: W1122 12:44:00.424684 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d9e9bcb_f195_4aa9_86d4_531cd424d3b6.slice/crio-d6fcac463f1e48cdf60615991e96aa5e1b86df977f1ca27204903e5bf98de5ac WatchSource:0}: Error finding container d6fcac463f1e48cdf60615991e96aa5e1b86df977f1ca27204903e5bf98de5ac: Status 404 returned error can't find the container with id d6fcac463f1e48cdf60615991e96aa5e1b86df977f1ca27204903e5bf98de5ac Nov 22 12:44:01 crc kubenswrapper[4772]: I1122 12:44:01.349561 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-s46xs" event={"ID":"9d9e9bcb-f195-4aa9-86d4-531cd424d3b6","Type":"ContainerStarted","Data":"4ee974c72756568c17065abb5cd8e1127bb9cad1e3bf81a6c2e51363936c4818"} Nov 22 12:44:01 crc kubenswrapper[4772]: I1122 12:44:01.350110 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-s46xs" event={"ID":"9d9e9bcb-f195-4aa9-86d4-531cd424d3b6","Type":"ContainerStarted","Data":"d6fcac463f1e48cdf60615991e96aa5e1b86df977f1ca27204903e5bf98de5ac"} Nov 22 12:44:10 crc kubenswrapper[4772]: I1122 12:44:10.449576 4772 generic.go:334] "Generic (PLEG): container finished" podID="9d9e9bcb-f195-4aa9-86d4-531cd424d3b6" containerID="4ee974c72756568c17065abb5cd8e1127bb9cad1e3bf81a6c2e51363936c4818" exitCode=0 Nov 22 12:44:10 crc kubenswrapper[4772]: I1122 12:44:10.449634 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-s46xs" event={"ID":"9d9e9bcb-f195-4aa9-86d4-531cd424d3b6","Type":"ContainerDied","Data":"4ee974c72756568c17065abb5cd8e1127bb9cad1e3bf81a6c2e51363936c4818"} Nov 22 12:44:12 crc kubenswrapper[4772]: I1122 12:44:12.022354 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-openstack-s46xs" Nov 22 12:44:12 crc kubenswrapper[4772]: I1122 12:44:12.203122 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9d9e9bcb-f195-4aa9-86d4-531cd424d3b6-ceph\") pod \"9d9e9bcb-f195-4aa9-86d4-531cd424d3b6\" (UID: \"9d9e9bcb-f195-4aa9-86d4-531cd424d3b6\") " Nov 22 12:44:12 crc kubenswrapper[4772]: I1122 12:44:12.203341 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/9d9e9bcb-f195-4aa9-86d4-531cd424d3b6-ssh-key-openstack-cell1\") pod \"9d9e9bcb-f195-4aa9-86d4-531cd424d3b6\" (UID: \"9d9e9bcb-f195-4aa9-86d4-531cd424d3b6\") " Nov 22 12:44:12 crc kubenswrapper[4772]: I1122 12:44:12.203505 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9d9e9bcb-f195-4aa9-86d4-531cd424d3b6-inventory-0\") pod \"9d9e9bcb-f195-4aa9-86d4-531cd424d3b6\" (UID: \"9d9e9bcb-f195-4aa9-86d4-531cd424d3b6\") " Nov 22 12:44:12 crc kubenswrapper[4772]: I1122 12:44:12.203636 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t75vx\" (UniqueName: \"kubernetes.io/projected/9d9e9bcb-f195-4aa9-86d4-531cd424d3b6-kube-api-access-t75vx\") pod \"9d9e9bcb-f195-4aa9-86d4-531cd424d3b6\" (UID: \"9d9e9bcb-f195-4aa9-86d4-531cd424d3b6\") " Nov 22 12:44:12 crc kubenswrapper[4772]: I1122 12:44:12.210353 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d9e9bcb-f195-4aa9-86d4-531cd424d3b6-ceph" (OuterVolumeSpecName: "ceph") pod "9d9e9bcb-f195-4aa9-86d4-531cd424d3b6" (UID: "9d9e9bcb-f195-4aa9-86d4-531cd424d3b6"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:44:12 crc kubenswrapper[4772]: I1122 12:44:12.219223 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d9e9bcb-f195-4aa9-86d4-531cd424d3b6-kube-api-access-t75vx" (OuterVolumeSpecName: "kube-api-access-t75vx") pod "9d9e9bcb-f195-4aa9-86d4-531cd424d3b6" (UID: "9d9e9bcb-f195-4aa9-86d4-531cd424d3b6"). InnerVolumeSpecName "kube-api-access-t75vx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:44:12 crc kubenswrapper[4772]: I1122 12:44:12.238928 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d9e9bcb-f195-4aa9-86d4-531cd424d3b6-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "9d9e9bcb-f195-4aa9-86d4-531cd424d3b6" (UID: "9d9e9bcb-f195-4aa9-86d4-531cd424d3b6"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:44:12 crc kubenswrapper[4772]: I1122 12:44:12.248134 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d9e9bcb-f195-4aa9-86d4-531cd424d3b6-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "9d9e9bcb-f195-4aa9-86d4-531cd424d3b6" (UID: "9d9e9bcb-f195-4aa9-86d4-531cd424d3b6"). InnerVolumeSpecName "ssh-key-openstack-cell1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:44:12 crc kubenswrapper[4772]: I1122 12:44:12.306613 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t75vx\" (UniqueName: \"kubernetes.io/projected/9d9e9bcb-f195-4aa9-86d4-531cd424d3b6-kube-api-access-t75vx\") on node \"crc\" DevicePath \"\"" Nov 22 12:44:12 crc kubenswrapper[4772]: I1122 12:44:12.306657 4772 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9d9e9bcb-f195-4aa9-86d4-531cd424d3b6-ceph\") on node \"crc\" DevicePath \"\"" Nov 22 12:44:12 crc kubenswrapper[4772]: I1122 12:44:12.306667 4772 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/9d9e9bcb-f195-4aa9-86d4-531cd424d3b6-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Nov 22 12:44:12 crc kubenswrapper[4772]: I1122 12:44:12.306676 4772 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9d9e9bcb-f195-4aa9-86d4-531cd424d3b6-inventory-0\") on node \"crc\" DevicePath \"\"" Nov 22 12:44:12 crc kubenswrapper[4772]: I1122 12:44:12.499375 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-s46xs" event={"ID":"9d9e9bcb-f195-4aa9-86d4-531cd424d3b6","Type":"ContainerDied","Data":"d6fcac463f1e48cdf60615991e96aa5e1b86df977f1ca27204903e5bf98de5ac"} Nov 22 12:44:12 crc kubenswrapper[4772]: I1122 12:44:12.499417 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6fcac463f1e48cdf60615991e96aa5e1b86df977f1ca27204903e5bf98de5ac" Nov 22 12:44:12 crc kubenswrapper[4772]: I1122 12:44:12.499474 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-openstack-s46xs" Nov 22 12:44:12 crc kubenswrapper[4772]: I1122 12:44:12.563173 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-openstack-openstack-cell1-ss45z"] Nov 22 12:44:12 crc kubenswrapper[4772]: E1122 12:44:12.563621 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d9e9bcb-f195-4aa9-86d4-531cd424d3b6" containerName="ssh-known-hosts-openstack" Nov 22 12:44:12 crc kubenswrapper[4772]: I1122 12:44:12.563636 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d9e9bcb-f195-4aa9-86d4-531cd424d3b6" containerName="ssh-known-hosts-openstack" Nov 22 12:44:12 crc kubenswrapper[4772]: I1122 12:44:12.563896 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d9e9bcb-f195-4aa9-86d4-531cd424d3b6" containerName="ssh-known-hosts-openstack" Nov 22 12:44:12 crc kubenswrapper[4772]: I1122 12:44:12.564783 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-ss45z" Nov 22 12:44:12 crc kubenswrapper[4772]: I1122 12:44:12.569219 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 22 12:44:12 crc kubenswrapper[4772]: I1122 12:44:12.569333 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 22 12:44:12 crc kubenswrapper[4772]: I1122 12:44:12.569371 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 22 12:44:12 crc kubenswrapper[4772]: I1122 12:44:12.570942 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-6s2nz" Nov 22 12:44:12 crc kubenswrapper[4772]: I1122 12:44:12.584259 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-openstack-openstack-cell1-ss45z"] Nov 22 12:44:12 crc kubenswrapper[4772]: I1122 12:44:12.717298 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/11129a3e-cef5-417b-9b7d-1542708ac3bb-ceph\") pod \"run-os-openstack-openstack-cell1-ss45z\" (UID: \"11129a3e-cef5-417b-9b7d-1542708ac3bb\") " pod="openstack/run-os-openstack-openstack-cell1-ss45z" Nov 22 12:44:12 crc kubenswrapper[4772]: I1122 12:44:12.717412 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/11129a3e-cef5-417b-9b7d-1542708ac3bb-inventory\") pod \"run-os-openstack-openstack-cell1-ss45z\" (UID: \"11129a3e-cef5-417b-9b7d-1542708ac3bb\") " pod="openstack/run-os-openstack-openstack-cell1-ss45z" Nov 22 12:44:12 crc kubenswrapper[4772]: I1122 12:44:12.717467 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bdzb\" (UniqueName: \"kubernetes.io/projected/11129a3e-cef5-417b-9b7d-1542708ac3bb-kube-api-access-8bdzb\") pod \"run-os-openstack-openstack-cell1-ss45z\" (UID: \"11129a3e-cef5-417b-9b7d-1542708ac3bb\") " pod="openstack/run-os-openstack-openstack-cell1-ss45z" Nov 22 12:44:12 crc kubenswrapper[4772]: I1122 12:44:12.717756 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/11129a3e-cef5-417b-9b7d-1542708ac3bb-ssh-key\") pod \"run-os-openstack-openstack-cell1-ss45z\" (UID: \"11129a3e-cef5-417b-9b7d-1542708ac3bb\") " pod="openstack/run-os-openstack-openstack-cell1-ss45z" Nov 22 12:44:12 crc kubenswrapper[4772]: I1122 12:44:12.821267 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/11129a3e-cef5-417b-9b7d-1542708ac3bb-ceph\") pod \"run-os-openstack-openstack-cell1-ss45z\" (UID: \"11129a3e-cef5-417b-9b7d-1542708ac3bb\") " pod="openstack/run-os-openstack-openstack-cell1-ss45z" Nov 22 12:44:12 crc kubenswrapper[4772]: I1122 12:44:12.821452 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/11129a3e-cef5-417b-9b7d-1542708ac3bb-inventory\") pod \"run-os-openstack-openstack-cell1-ss45z\" (UID: \"11129a3e-cef5-417b-9b7d-1542708ac3bb\") " pod="openstack/run-os-openstack-openstack-cell1-ss45z" Nov 22 12:44:12 crc kubenswrapper[4772]: I1122 12:44:12.821560 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-8bdzb\" (UniqueName: \"kubernetes.io/projected/11129a3e-cef5-417b-9b7d-1542708ac3bb-kube-api-access-8bdzb\") pod \"run-os-openstack-openstack-cell1-ss45z\" (UID: \"11129a3e-cef5-417b-9b7d-1542708ac3bb\") " pod="openstack/run-os-openstack-openstack-cell1-ss45z" Nov 22 12:44:12 crc kubenswrapper[4772]: I1122 12:44:12.821716 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/11129a3e-cef5-417b-9b7d-1542708ac3bb-ssh-key\") pod \"run-os-openstack-openstack-cell1-ss45z\" (UID: \"11129a3e-cef5-417b-9b7d-1542708ac3bb\") " pod="openstack/run-os-openstack-openstack-cell1-ss45z" Nov 22 12:44:12 crc kubenswrapper[4772]: I1122 12:44:12.827315 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/11129a3e-cef5-417b-9b7d-1542708ac3bb-ceph\") pod \"run-os-openstack-openstack-cell1-ss45z\" (UID: \"11129a3e-cef5-417b-9b7d-1542708ac3bb\") " pod="openstack/run-os-openstack-openstack-cell1-ss45z" Nov 22 12:44:12 crc kubenswrapper[4772]: I1122 12:44:12.830451 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/11129a3e-cef5-417b-9b7d-1542708ac3bb-ssh-key\") pod \"run-os-openstack-openstack-cell1-ss45z\" (UID: \"11129a3e-cef5-417b-9b7d-1542708ac3bb\") " pod="openstack/run-os-openstack-openstack-cell1-ss45z" Nov 22 12:44:12 crc kubenswrapper[4772]: I1122 12:44:12.832438 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/11129a3e-cef5-417b-9b7d-1542708ac3bb-inventory\") pod \"run-os-openstack-openstack-cell1-ss45z\" (UID: \"11129a3e-cef5-417b-9b7d-1542708ac3bb\") " pod="openstack/run-os-openstack-openstack-cell1-ss45z" Nov 22 12:44:12 crc kubenswrapper[4772]: I1122 12:44:12.841734 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bdzb\" (UniqueName: \"kubernetes.io/projected/11129a3e-cef5-417b-9b7d-1542708ac3bb-kube-api-access-8bdzb\") pod \"run-os-openstack-openstack-cell1-ss45z\" (UID: \"11129a3e-cef5-417b-9b7d-1542708ac3bb\") " pod="openstack/run-os-openstack-openstack-cell1-ss45z" Nov 22 12:44:12 crc kubenswrapper[4772]: I1122 12:44:12.897256 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-ss45z" Nov 22 12:44:13 crc kubenswrapper[4772]: I1122 12:44:13.466989 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-openstack-openstack-cell1-ss45z"] Nov 22 12:44:14 crc kubenswrapper[4772]: I1122 12:44:14.529298 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-ss45z" event={"ID":"11129a3e-cef5-417b-9b7d-1542708ac3bb","Type":"ContainerStarted","Data":"8e204502792f68dea94867f4fd2a5d51c0c14594bd607cca038ac87e05bcb036"} Nov 22 12:44:14 crc kubenswrapper[4772]: I1122 12:44:14.529658 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-ss45z" event={"ID":"11129a3e-cef5-417b-9b7d-1542708ac3bb","Type":"ContainerStarted","Data":"d8ed83f4198edecf804664d5c1ecc16ff60a94d2ce83d57a340f79cb2d9ea55a"} Nov 22 12:44:14 crc kubenswrapper[4772]: I1122 12:44:14.556798 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-openstack-openstack-cell1-ss45z" podStartSLOduration=1.883187822 podStartE2EDuration="2.55677814s" podCreationTimestamp="2025-11-22 12:44:12 +0000 UTC" firstStartedPulling="2025-11-22 12:44:13.516146746 +0000 UTC m=+7573.755591240" lastFinishedPulling="2025-11-22 12:44:14.189737064 +0000 UTC m=+7574.429181558" observedRunningTime="2025-11-22 12:44:14.551257494 +0000 UTC m=+7574.790702038" watchObservedRunningTime="2025-11-22 12:44:14.55677814 +0000 UTC m=+7574.796222634" Nov 22 12:44:24 crc kubenswrapper[4772]: I1122 12:44:24.632343 4772 generic.go:334] "Generic (PLEG): container finished" podID="11129a3e-cef5-417b-9b7d-1542708ac3bb" containerID="8e204502792f68dea94867f4fd2a5d51c0c14594bd607cca038ac87e05bcb036" exitCode=0 Nov 22 12:44:24 crc kubenswrapper[4772]: I1122 12:44:24.632415 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-ss45z" event={"ID":"11129a3e-cef5-417b-9b7d-1542708ac3bb","Type":"ContainerDied","Data":"8e204502792f68dea94867f4fd2a5d51c0c14594bd607cca038ac87e05bcb036"} Nov 22 12:44:26 crc kubenswrapper[4772]: I1122 12:44:26.128991 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-ss45z" Nov 22 12:44:26 crc kubenswrapper[4772]: I1122 12:44:26.231530 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/11129a3e-cef5-417b-9b7d-1542708ac3bb-inventory\") pod \"11129a3e-cef5-417b-9b7d-1542708ac3bb\" (UID: \"11129a3e-cef5-417b-9b7d-1542708ac3bb\") " Nov 22 12:44:26 crc kubenswrapper[4772]: I1122 12:44:26.231662 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bdzb\" (UniqueName: \"kubernetes.io/projected/11129a3e-cef5-417b-9b7d-1542708ac3bb-kube-api-access-8bdzb\") pod \"11129a3e-cef5-417b-9b7d-1542708ac3bb\" (UID: \"11129a3e-cef5-417b-9b7d-1542708ac3bb\") " Nov 22 12:44:26 crc kubenswrapper[4772]: I1122 12:44:26.231768 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/11129a3e-cef5-417b-9b7d-1542708ac3bb-ceph\") pod \"11129a3e-cef5-417b-9b7d-1542708ac3bb\" (UID: \"11129a3e-cef5-417b-9b7d-1542708ac3bb\") " Nov 22 12:44:26 crc kubenswrapper[4772]: I1122 12:44:26.231827 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/11129a3e-cef5-417b-9b7d-1542708ac3bb-ssh-key\") pod \"11129a3e-cef5-417b-9b7d-1542708ac3bb\" (UID: \"11129a3e-cef5-417b-9b7d-1542708ac3bb\") " Nov 22 12:44:26 crc kubenswrapper[4772]: I1122 12:44:26.238113 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11129a3e-cef5-417b-9b7d-1542708ac3bb-ceph" (OuterVolumeSpecName: "ceph") pod "11129a3e-cef5-417b-9b7d-1542708ac3bb" (UID: "11129a3e-cef5-417b-9b7d-1542708ac3bb"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:44:26 crc kubenswrapper[4772]: I1122 12:44:26.238138 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11129a3e-cef5-417b-9b7d-1542708ac3bb-kube-api-access-8bdzb" (OuterVolumeSpecName: "kube-api-access-8bdzb") pod "11129a3e-cef5-417b-9b7d-1542708ac3bb" (UID: "11129a3e-cef5-417b-9b7d-1542708ac3bb"). InnerVolumeSpecName "kube-api-access-8bdzb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:44:26 crc kubenswrapper[4772]: I1122 12:44:26.262351 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11129a3e-cef5-417b-9b7d-1542708ac3bb-inventory" (OuterVolumeSpecName: "inventory") pod "11129a3e-cef5-417b-9b7d-1542708ac3bb" (UID: "11129a3e-cef5-417b-9b7d-1542708ac3bb"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:44:26 crc kubenswrapper[4772]: I1122 12:44:26.263422 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11129a3e-cef5-417b-9b7d-1542708ac3bb-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "11129a3e-cef5-417b-9b7d-1542708ac3bb" (UID: "11129a3e-cef5-417b-9b7d-1542708ac3bb"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:44:26 crc kubenswrapper[4772]: I1122 12:44:26.334824 4772 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/11129a3e-cef5-417b-9b7d-1542708ac3bb-ceph\") on node \"crc\" DevicePath \"\"" Nov 22 12:44:26 crc kubenswrapper[4772]: I1122 12:44:26.334857 4772 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/11129a3e-cef5-417b-9b7d-1542708ac3bb-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 22 12:44:26 crc kubenswrapper[4772]: I1122 12:44:26.334867 4772 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/11129a3e-cef5-417b-9b7d-1542708ac3bb-inventory\") on node \"crc\" DevicePath \"\"" Nov 22 12:44:26 crc kubenswrapper[4772]: I1122 12:44:26.334877 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bdzb\" (UniqueName: \"kubernetes.io/projected/11129a3e-cef5-417b-9b7d-1542708ac3bb-kube-api-access-8bdzb\") on node \"crc\" DevicePath \"\"" Nov 22 12:44:26 crc kubenswrapper[4772]: I1122 12:44:26.654731 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-ss45z" event={"ID":"11129a3e-cef5-417b-9b7d-1542708ac3bb","Type":"ContainerDied","Data":"d8ed83f4198edecf804664d5c1ecc16ff60a94d2ce83d57a340f79cb2d9ea55a"} Nov 22 12:44:26 crc kubenswrapper[4772]: I1122 12:44:26.654773 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-ss45z" Nov 22 12:44:26 crc kubenswrapper[4772]: I1122 12:44:26.654777 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8ed83f4198edecf804664d5c1ecc16ff60a94d2ce83d57a340f79cb2d9ea55a" Nov 22 12:44:26 crc kubenswrapper[4772]: I1122 12:44:26.758368 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-openstack-openstack-cell1-wnpwf"] Nov 22 12:44:26 crc kubenswrapper[4772]: E1122 12:44:26.759642 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11129a3e-cef5-417b-9b7d-1542708ac3bb" containerName="run-os-openstack-openstack-cell1" Nov 22 12:44:26 crc kubenswrapper[4772]: I1122 12:44:26.759662 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="11129a3e-cef5-417b-9b7d-1542708ac3bb" containerName="run-os-openstack-openstack-cell1" Nov 22 12:44:26 crc kubenswrapper[4772]: I1122 12:44:26.760316 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="11129a3e-cef5-417b-9b7d-1542708ac3bb" containerName="run-os-openstack-openstack-cell1" Nov 22 12:44:26 crc kubenswrapper[4772]: I1122 12:44:26.761885 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-wnpwf" Nov 22 12:44:26 crc kubenswrapper[4772]: I1122 12:44:26.767929 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 22 12:44:26 crc kubenswrapper[4772]: I1122 12:44:26.768428 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-6s2nz" Nov 22 12:44:26 crc kubenswrapper[4772]: I1122 12:44:26.769583 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 22 12:44:26 crc kubenswrapper[4772]: I1122 12:44:26.769932 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 22 12:44:26 crc kubenswrapper[4772]: I1122 12:44:26.781098 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-openstack-openstack-cell1-wnpwf"] Nov 22 12:44:26 crc kubenswrapper[4772]: I1122 12:44:26.860996 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/49f54305-538d-4280-a76c-1590815fb686-ssh-key\") pod \"reboot-os-openstack-openstack-cell1-wnpwf\" (UID: \"49f54305-538d-4280-a76c-1590815fb686\") " pod="openstack/reboot-os-openstack-openstack-cell1-wnpwf" Nov 22 12:44:26 crc kubenswrapper[4772]: I1122 12:44:26.861091 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gh8dq\" (UniqueName: \"kubernetes.io/projected/49f54305-538d-4280-a76c-1590815fb686-kube-api-access-gh8dq\") pod \"reboot-os-openstack-openstack-cell1-wnpwf\" (UID: \"49f54305-538d-4280-a76c-1590815fb686\") " pod="openstack/reboot-os-openstack-openstack-cell1-wnpwf" Nov 22 12:44:26 crc kubenswrapper[4772]: I1122 12:44:26.861158 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49f54305-538d-4280-a76c-1590815fb686-inventory\") pod \"reboot-os-openstack-openstack-cell1-wnpwf\" (UID: \"49f54305-538d-4280-a76c-1590815fb686\") " pod="openstack/reboot-os-openstack-openstack-cell1-wnpwf" Nov 22 12:44:26 crc kubenswrapper[4772]: I1122 12:44:26.861233 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/49f54305-538d-4280-a76c-1590815fb686-ceph\") pod \"reboot-os-openstack-openstack-cell1-wnpwf\" (UID: \"49f54305-538d-4280-a76c-1590815fb686\") " pod="openstack/reboot-os-openstack-openstack-cell1-wnpwf" Nov 22 12:44:26 crc kubenswrapper[4772]: I1122 12:44:26.963145 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/49f54305-538d-4280-a76c-1590815fb686-ceph\") pod \"reboot-os-openstack-openstack-cell1-wnpwf\" (UID: \"49f54305-538d-4280-a76c-1590815fb686\") " pod="openstack/reboot-os-openstack-openstack-cell1-wnpwf" Nov 22 12:44:26 crc kubenswrapper[4772]: I1122 12:44:26.963358 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/49f54305-538d-4280-a76c-1590815fb686-ssh-key\") pod \"reboot-os-openstack-openstack-cell1-wnpwf\" (UID: \"49f54305-538d-4280-a76c-1590815fb686\") " pod="openstack/reboot-os-openstack-openstack-cell1-wnpwf" Nov 22 12:44:26 crc kubenswrapper[4772]: I1122 12:44:26.963396 4772 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-gh8dq\" (UniqueName: \"kubernetes.io/projected/49f54305-538d-4280-a76c-1590815fb686-kube-api-access-gh8dq\") pod \"reboot-os-openstack-openstack-cell1-wnpwf\" (UID: \"49f54305-538d-4280-a76c-1590815fb686\") " pod="openstack/reboot-os-openstack-openstack-cell1-wnpwf" Nov 22 12:44:26 crc kubenswrapper[4772]: I1122 12:44:26.963486 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49f54305-538d-4280-a76c-1590815fb686-inventory\") pod \"reboot-os-openstack-openstack-cell1-wnpwf\" (UID: \"49f54305-538d-4280-a76c-1590815fb686\") " pod="openstack/reboot-os-openstack-openstack-cell1-wnpwf" Nov 22 12:44:26 crc kubenswrapper[4772]: I1122 12:44:26.969291 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49f54305-538d-4280-a76c-1590815fb686-inventory\") pod \"reboot-os-openstack-openstack-cell1-wnpwf\" (UID: \"49f54305-538d-4280-a76c-1590815fb686\") " pod="openstack/reboot-os-openstack-openstack-cell1-wnpwf" Nov 22 12:44:26 crc kubenswrapper[4772]: I1122 12:44:26.970533 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/49f54305-538d-4280-a76c-1590815fb686-ssh-key\") pod \"reboot-os-openstack-openstack-cell1-wnpwf\" (UID: \"49f54305-538d-4280-a76c-1590815fb686\") " pod="openstack/reboot-os-openstack-openstack-cell1-wnpwf" Nov 22 12:44:26 crc kubenswrapper[4772]: I1122 12:44:26.972159 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/49f54305-538d-4280-a76c-1590815fb686-ceph\") pod \"reboot-os-openstack-openstack-cell1-wnpwf\" (UID: \"49f54305-538d-4280-a76c-1590815fb686\") " pod="openstack/reboot-os-openstack-openstack-cell1-wnpwf" Nov 22 12:44:26 crc kubenswrapper[4772]: I1122 12:44:26.987343 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gh8dq\" (UniqueName: \"kubernetes.io/projected/49f54305-538d-4280-a76c-1590815fb686-kube-api-access-gh8dq\") pod \"reboot-os-openstack-openstack-cell1-wnpwf\" (UID: \"49f54305-538d-4280-a76c-1590815fb686\") " pod="openstack/reboot-os-openstack-openstack-cell1-wnpwf" Nov 22 12:44:27 crc kubenswrapper[4772]: I1122 12:44:27.090755 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-wnpwf" Nov 22 12:44:28 crc kubenswrapper[4772]: I1122 12:44:28.263361 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-openstack-openstack-cell1-wnpwf"] Nov 22 12:44:28 crc kubenswrapper[4772]: I1122 12:44:28.672799 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-wnpwf" event={"ID":"49f54305-538d-4280-a76c-1590815fb686","Type":"ContainerStarted","Data":"e3afae432d68d18f7bd4a6fb06f8b931806ab664aef1d4dbc421b18c60f9f85f"} Nov 22 12:44:29 crc kubenswrapper[4772]: I1122 12:44:29.683260 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-wnpwf" event={"ID":"49f54305-538d-4280-a76c-1590815fb686","Type":"ContainerStarted","Data":"2050906e2e3491191a8004aa543bb79c8af30bf3c82e59548803d40d89dd252c"} Nov 22 12:44:29 crc kubenswrapper[4772]: I1122 12:44:29.708666 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-openstack-openstack-cell1-wnpwf" podStartSLOduration=3.2589720079999998 podStartE2EDuration="3.708634628s" podCreationTimestamp="2025-11-22 12:44:26 +0000 UTC" firstStartedPulling="2025-11-22 12:44:28.281693151 +0000 UTC m=+7588.521137655" lastFinishedPulling="2025-11-22 12:44:28.731355781 +0000 UTC m=+7588.970800275" observedRunningTime="2025-11-22 12:44:29.700097447 +0000 UTC m=+7589.939541941" watchObservedRunningTime="2025-11-22 12:44:29.708634628 +0000 UTC m=+7589.948079122" Nov 22 12:44:44 crc kubenswrapper[4772]: I1122 12:44:44.870362 4772 generic.go:334] "Generic (PLEG): container finished" podID="49f54305-538d-4280-a76c-1590815fb686" containerID="2050906e2e3491191a8004aa543bb79c8af30bf3c82e59548803d40d89dd252c" exitCode=0 Nov 22 12:44:44 crc kubenswrapper[4772]: I1122 12:44:44.870484 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-wnpwf" event={"ID":"49f54305-538d-4280-a76c-1590815fb686","Type":"ContainerDied","Data":"2050906e2e3491191a8004aa543bb79c8af30bf3c82e59548803d40d89dd252c"} Nov 22 12:44:46 crc kubenswrapper[4772]: I1122 12:44:46.325962 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-wnpwf" Nov 22 12:44:46 crc kubenswrapper[4772]: I1122 12:44:46.348410 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/49f54305-538d-4280-a76c-1590815fb686-ssh-key\") pod \"49f54305-538d-4280-a76c-1590815fb686\" (UID: \"49f54305-538d-4280-a76c-1590815fb686\") " Nov 22 12:44:46 crc kubenswrapper[4772]: I1122 12:44:46.348485 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/49f54305-538d-4280-a76c-1590815fb686-ceph\") pod \"49f54305-538d-4280-a76c-1590815fb686\" (UID: \"49f54305-538d-4280-a76c-1590815fb686\") " Nov 22 12:44:46 crc kubenswrapper[4772]: I1122 12:44:46.348503 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49f54305-538d-4280-a76c-1590815fb686-inventory\") pod \"49f54305-538d-4280-a76c-1590815fb686\" (UID: \"49f54305-538d-4280-a76c-1590815fb686\") " Nov 22 12:44:46 crc kubenswrapper[4772]: I1122 12:44:46.348631 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gh8dq\" (UniqueName: \"kubernetes.io/projected/49f54305-538d-4280-a76c-1590815fb686-kube-api-access-gh8dq\") pod \"49f54305-538d-4280-a76c-1590815fb686\" (UID: \"49f54305-538d-4280-a76c-1590815fb686\") " Nov 22 12:44:46 crc kubenswrapper[4772]: I1122 12:44:46.354467 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49f54305-538d-4280-a76c-1590815fb686-kube-api-access-gh8dq" (OuterVolumeSpecName: "kube-api-access-gh8dq") pod "49f54305-538d-4280-a76c-1590815fb686" (UID: "49f54305-538d-4280-a76c-1590815fb686"). InnerVolumeSpecName "kube-api-access-gh8dq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:44:46 crc kubenswrapper[4772]: I1122 12:44:46.355142 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49f54305-538d-4280-a76c-1590815fb686-ceph" (OuterVolumeSpecName: "ceph") pod "49f54305-538d-4280-a76c-1590815fb686" (UID: "49f54305-538d-4280-a76c-1590815fb686"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:44:46 crc kubenswrapper[4772]: I1122 12:44:46.389541 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49f54305-538d-4280-a76c-1590815fb686-inventory" (OuterVolumeSpecName: "inventory") pod "49f54305-538d-4280-a76c-1590815fb686" (UID: "49f54305-538d-4280-a76c-1590815fb686"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:44:46 crc kubenswrapper[4772]: I1122 12:44:46.393903 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49f54305-538d-4280-a76c-1590815fb686-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "49f54305-538d-4280-a76c-1590815fb686" (UID: "49f54305-538d-4280-a76c-1590815fb686"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:44:46 crc kubenswrapper[4772]: I1122 12:44:46.450847 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gh8dq\" (UniqueName: \"kubernetes.io/projected/49f54305-538d-4280-a76c-1590815fb686-kube-api-access-gh8dq\") on node \"crc\" DevicePath \"\"" Nov 22 12:44:46 crc kubenswrapper[4772]: I1122 12:44:46.450886 4772 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/49f54305-538d-4280-a76c-1590815fb686-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 22 12:44:46 crc kubenswrapper[4772]: I1122 12:44:46.450897 4772 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/49f54305-538d-4280-a76c-1590815fb686-ceph\") on node \"crc\" DevicePath \"\"" Nov 22 12:44:46 crc kubenswrapper[4772]: I1122 12:44:46.450905 4772 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49f54305-538d-4280-a76c-1590815fb686-inventory\") on node \"crc\" DevicePath \"\"" Nov 22 12:44:46 crc kubenswrapper[4772]: I1122 12:44:46.891601 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-wnpwf" event={"ID":"49f54305-538d-4280-a76c-1590815fb686","Type":"ContainerDied","Data":"e3afae432d68d18f7bd4a6fb06f8b931806ab664aef1d4dbc421b18c60f9f85f"} Nov 22 12:44:46 crc kubenswrapper[4772]: I1122 12:44:46.891641 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3afae432d68d18f7bd4a6fb06f8b931806ab664aef1d4dbc421b18c60f9f85f" Nov 22 12:44:46 crc kubenswrapper[4772]: I1122 12:44:46.891662 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-wnpwf" Nov 22 12:44:47 crc kubenswrapper[4772]: I1122 12:44:47.017285 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-openstack-openstack-cell1-5zpzf"] Nov 22 12:44:47 crc kubenswrapper[4772]: E1122 12:44:47.017807 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49f54305-538d-4280-a76c-1590815fb686" containerName="reboot-os-openstack-openstack-cell1" Nov 22 12:44:47 crc kubenswrapper[4772]: I1122 12:44:47.017831 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="49f54305-538d-4280-a76c-1590815fb686" containerName="reboot-os-openstack-openstack-cell1" Nov 22 12:44:47 crc kubenswrapper[4772]: I1122 12:44:47.018133 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="49f54305-538d-4280-a76c-1590815fb686" containerName="reboot-os-openstack-openstack-cell1" Nov 22 12:44:47 crc kubenswrapper[4772]: I1122 12:44:47.019025 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-5zpzf" Nov 22 12:44:47 crc kubenswrapper[4772]: I1122 12:44:47.022395 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 22 12:44:47 crc kubenswrapper[4772]: I1122 12:44:47.022674 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 22 12:44:47 crc kubenswrapper[4772]: I1122 12:44:47.022681 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-6s2nz" Nov 22 12:44:47 crc kubenswrapper[4772]: I1122 12:44:47.022889 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 22 12:44:47 crc kubenswrapper[4772]: I1122 12:44:47.030279 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-openstack-openstack-cell1-5zpzf"] Nov 22 12:44:47 crc kubenswrapper[4772]: I1122 12:44:47.164951 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/16477c52-8296-4d66-ad5f-78826cc5bab7-ssh-key\") pod \"install-certs-openstack-openstack-cell1-5zpzf\" (UID: \"16477c52-8296-4d66-ad5f-78826cc5bab7\") " pod="openstack/install-certs-openstack-openstack-cell1-5zpzf" Nov 22 12:44:47 crc kubenswrapper[4772]: I1122 12:44:47.165021 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/16477c52-8296-4d66-ad5f-78826cc5bab7-ceph\") pod \"install-certs-openstack-openstack-cell1-5zpzf\" (UID: \"16477c52-8296-4d66-ad5f-78826cc5bab7\") " pod="openstack/install-certs-openstack-openstack-cell1-5zpzf" Nov 22 12:44:47 crc kubenswrapper[4772]: I1122 12:44:47.165147 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16477c52-8296-4d66-ad5f-78826cc5bab7-nova-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-5zpzf\" (UID: \"16477c52-8296-4d66-ad5f-78826cc5bab7\") " pod="openstack/install-certs-openstack-openstack-cell1-5zpzf" Nov 22 12:44:47 crc kubenswrapper[4772]: I1122 12:44:47.165681 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16477c52-8296-4d66-ad5f-78826cc5bab7-neutron-sriov-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-5zpzf\" (UID: \"16477c52-8296-4d66-ad5f-78826cc5bab7\") " pod="openstack/install-certs-openstack-openstack-cell1-5zpzf" Nov 22 12:44:47 crc kubenswrapper[4772]: I1122 12:44:47.165725 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16477c52-8296-4d66-ad5f-78826cc5bab7-telemetry-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-5zpzf\" (UID: \"16477c52-8296-4d66-ad5f-78826cc5bab7\") " pod="openstack/install-certs-openstack-openstack-cell1-5zpzf" Nov 22 12:44:47 crc kubenswrapper[4772]: I1122 12:44:47.165812 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16477c52-8296-4d66-ad5f-78826cc5bab7-libvirt-combined-ca-bundle\") pod 
\"install-certs-openstack-openstack-cell1-5zpzf\" (UID: \"16477c52-8296-4d66-ad5f-78826cc5bab7\") " pod="openstack/install-certs-openstack-openstack-cell1-5zpzf" Nov 22 12:44:47 crc kubenswrapper[4772]: I1122 12:44:47.165860 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16477c52-8296-4d66-ad5f-78826cc5bab7-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-5zpzf\" (UID: \"16477c52-8296-4d66-ad5f-78826cc5bab7\") " pod="openstack/install-certs-openstack-openstack-cell1-5zpzf" Nov 22 12:44:47 crc kubenswrapper[4772]: I1122 12:44:47.165936 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/16477c52-8296-4d66-ad5f-78826cc5bab7-inventory\") pod \"install-certs-openstack-openstack-cell1-5zpzf\" (UID: \"16477c52-8296-4d66-ad5f-78826cc5bab7\") " pod="openstack/install-certs-openstack-openstack-cell1-5zpzf" Nov 22 12:44:47 crc kubenswrapper[4772]: I1122 12:44:47.166010 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16477c52-8296-4d66-ad5f-78826cc5bab7-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-5zpzf\" (UID: \"16477c52-8296-4d66-ad5f-78826cc5bab7\") " pod="openstack/install-certs-openstack-openstack-cell1-5zpzf" Nov 22 12:44:47 crc kubenswrapper[4772]: I1122 12:44:47.166095 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16477c52-8296-4d66-ad5f-78826cc5bab7-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-5zpzf\" (UID: \"16477c52-8296-4d66-ad5f-78826cc5bab7\") " pod="openstack/install-certs-openstack-openstack-cell1-5zpzf" Nov 22 12:44:47 crc kubenswrapper[4772]: I1122 12:44:47.166139 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzftc\" (UniqueName: \"kubernetes.io/projected/16477c52-8296-4d66-ad5f-78826cc5bab7-kube-api-access-bzftc\") pod \"install-certs-openstack-openstack-cell1-5zpzf\" (UID: \"16477c52-8296-4d66-ad5f-78826cc5bab7\") " pod="openstack/install-certs-openstack-openstack-cell1-5zpzf" Nov 22 12:44:47 crc kubenswrapper[4772]: I1122 12:44:47.166180 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16477c52-8296-4d66-ad5f-78826cc5bab7-neutron-dhcp-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-5zpzf\" (UID: \"16477c52-8296-4d66-ad5f-78826cc5bab7\") " pod="openstack/install-certs-openstack-openstack-cell1-5zpzf" Nov 22 12:44:47 crc kubenswrapper[4772]: I1122 12:44:47.267828 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/16477c52-8296-4d66-ad5f-78826cc5bab7-ssh-key\") pod \"install-certs-openstack-openstack-cell1-5zpzf\" (UID: \"16477c52-8296-4d66-ad5f-78826cc5bab7\") " pod="openstack/install-certs-openstack-openstack-cell1-5zpzf" Nov 22 12:44:47 crc kubenswrapper[4772]: I1122 12:44:47.267889 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/16477c52-8296-4d66-ad5f-78826cc5bab7-ceph\") pod \"install-certs-openstack-openstack-cell1-5zpzf\" (UID: \"16477c52-8296-4d66-ad5f-78826cc5bab7\") " pod="openstack/install-certs-openstack-openstack-cell1-5zpzf" Nov 22 12:44:47 crc kubenswrapper[4772]: I1122 12:44:47.267907 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16477c52-8296-4d66-ad5f-78826cc5bab7-nova-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-5zpzf\" (UID: \"16477c52-8296-4d66-ad5f-78826cc5bab7\") " pod="openstack/install-certs-openstack-openstack-cell1-5zpzf" Nov 22 12:44:47 crc kubenswrapper[4772]: I1122 12:44:47.267951 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16477c52-8296-4d66-ad5f-78826cc5bab7-neutron-sriov-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-5zpzf\" (UID: \"16477c52-8296-4d66-ad5f-78826cc5bab7\") " pod="openstack/install-certs-openstack-openstack-cell1-5zpzf" Nov 22 12:44:47 crc kubenswrapper[4772]: I1122 12:44:47.268004 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16477c52-8296-4d66-ad5f-78826cc5bab7-telemetry-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-5zpzf\" (UID: \"16477c52-8296-4d66-ad5f-78826cc5bab7\") " pod="openstack/install-certs-openstack-openstack-cell1-5zpzf" Nov 22 12:44:47 crc kubenswrapper[4772]: I1122 12:44:47.268080 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16477c52-8296-4d66-ad5f-78826cc5bab7-libvirt-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-5zpzf\" (UID: \"16477c52-8296-4d66-ad5f-78826cc5bab7\") " pod="openstack/install-certs-openstack-openstack-cell1-5zpzf" Nov 22 12:44:47 crc kubenswrapper[4772]: I1122 12:44:47.268108 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16477c52-8296-4d66-ad5f-78826cc5bab7-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-5zpzf\" (UID: \"16477c52-8296-4d66-ad5f-78826cc5bab7\") " pod="openstack/install-certs-openstack-openstack-cell1-5zpzf" Nov 22 12:44:47 crc kubenswrapper[4772]: I1122 12:44:47.268134 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/16477c52-8296-4d66-ad5f-78826cc5bab7-inventory\") pod \"install-certs-openstack-openstack-cell1-5zpzf\" (UID: \"16477c52-8296-4d66-ad5f-78826cc5bab7\") " pod="openstack/install-certs-openstack-openstack-cell1-5zpzf" Nov 22 12:44:47 crc kubenswrapper[4772]: I1122 12:44:47.268190 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16477c52-8296-4d66-ad5f-78826cc5bab7-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-5zpzf\" (UID: \"16477c52-8296-4d66-ad5f-78826cc5bab7\") " pod="openstack/install-certs-openstack-openstack-cell1-5zpzf" Nov 22 12:44:47 crc kubenswrapper[4772]: I1122 12:44:47.268213 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/16477c52-8296-4d66-ad5f-78826cc5bab7-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-5zpzf\" (UID: \"16477c52-8296-4d66-ad5f-78826cc5bab7\") " pod="openstack/install-certs-openstack-openstack-cell1-5zpzf" Nov 22 12:44:47 crc kubenswrapper[4772]: I1122 12:44:47.268238 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzftc\" (UniqueName: \"kubernetes.io/projected/16477c52-8296-4d66-ad5f-78826cc5bab7-kube-api-access-bzftc\") pod \"install-certs-openstack-openstack-cell1-5zpzf\" (UID: \"16477c52-8296-4d66-ad5f-78826cc5bab7\") " pod="openstack/install-certs-openstack-openstack-cell1-5zpzf" Nov 22 12:44:47 crc kubenswrapper[4772]: I1122 12:44:47.268257 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16477c52-8296-4d66-ad5f-78826cc5bab7-neutron-dhcp-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-5zpzf\" (UID: \"16477c52-8296-4d66-ad5f-78826cc5bab7\") " pod="openstack/install-certs-openstack-openstack-cell1-5zpzf" Nov 22 12:44:47 crc kubenswrapper[4772]: I1122 12:44:47.274017 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/16477c52-8296-4d66-ad5f-78826cc5bab7-ceph\") pod \"install-certs-openstack-openstack-cell1-5zpzf\" (UID: \"16477c52-8296-4d66-ad5f-78826cc5bab7\") " pod="openstack/install-certs-openstack-openstack-cell1-5zpzf" Nov 22 12:44:47 crc kubenswrapper[4772]: I1122 12:44:47.274576 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/16477c52-8296-4d66-ad5f-78826cc5bab7-ssh-key\") pod \"install-certs-openstack-openstack-cell1-5zpzf\" (UID: \"16477c52-8296-4d66-ad5f-78826cc5bab7\") " pod="openstack/install-certs-openstack-openstack-cell1-5zpzf" Nov 22 12:44:47 crc kubenswrapper[4772]: I1122 12:44:47.275283 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16477c52-8296-4d66-ad5f-78826cc5bab7-libvirt-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-5zpzf\" (UID: \"16477c52-8296-4d66-ad5f-78826cc5bab7\") " pod="openstack/install-certs-openstack-openstack-cell1-5zpzf" Nov 22 12:44:47 crc kubenswrapper[4772]: I1122 12:44:47.275368 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16477c52-8296-4d66-ad5f-78826cc5bab7-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-5zpzf\" (UID: \"16477c52-8296-4d66-ad5f-78826cc5bab7\") " pod="openstack/install-certs-openstack-openstack-cell1-5zpzf" Nov 22 12:44:47 crc kubenswrapper[4772]: I1122 12:44:47.276337 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16477c52-8296-4d66-ad5f-78826cc5bab7-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-5zpzf\" (UID: \"16477c52-8296-4d66-ad5f-78826cc5bab7\") " pod="openstack/install-certs-openstack-openstack-cell1-5zpzf" Nov 22 12:44:47 crc kubenswrapper[4772]: I1122 12:44:47.276590 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/16477c52-8296-4d66-ad5f-78826cc5bab7-neutron-sriov-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-5zpzf\" (UID: \"16477c52-8296-4d66-ad5f-78826cc5bab7\") " pod="openstack/install-certs-openstack-openstack-cell1-5zpzf" Nov 22 12:44:47 crc kubenswrapper[4772]: I1122 12:44:47.277356 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16477c52-8296-4d66-ad5f-78826cc5bab7-neutron-dhcp-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-5zpzf\" (UID: \"16477c52-8296-4d66-ad5f-78826cc5bab7\") " pod="openstack/install-certs-openstack-openstack-cell1-5zpzf" Nov 22 12:44:47 crc kubenswrapper[4772]: I1122 12:44:47.278264 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/16477c52-8296-4d66-ad5f-78826cc5bab7-inventory\") pod \"install-certs-openstack-openstack-cell1-5zpzf\" (UID: \"16477c52-8296-4d66-ad5f-78826cc5bab7\") " pod="openstack/install-certs-openstack-openstack-cell1-5zpzf" Nov 22 12:44:47 crc kubenswrapper[4772]: I1122 12:44:47.278604 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16477c52-8296-4d66-ad5f-78826cc5bab7-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-5zpzf\" (UID: \"16477c52-8296-4d66-ad5f-78826cc5bab7\") " pod="openstack/install-certs-openstack-openstack-cell1-5zpzf" Nov 22 12:44:47 crc kubenswrapper[4772]: I1122 12:44:47.281835 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16477c52-8296-4d66-ad5f-78826cc5bab7-nova-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-5zpzf\" (UID: \"16477c52-8296-4d66-ad5f-78826cc5bab7\") " pod="openstack/install-certs-openstack-openstack-cell1-5zpzf" Nov 22 12:44:47 crc kubenswrapper[4772]: I1122 12:44:47.284199 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16477c52-8296-4d66-ad5f-78826cc5bab7-telemetry-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-5zpzf\" (UID: \"16477c52-8296-4d66-ad5f-78826cc5bab7\") " pod="openstack/install-certs-openstack-openstack-cell1-5zpzf" Nov 22 12:44:47 crc kubenswrapper[4772]: I1122 12:44:47.302450 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzftc\" (UniqueName: \"kubernetes.io/projected/16477c52-8296-4d66-ad5f-78826cc5bab7-kube-api-access-bzftc\") pod \"install-certs-openstack-openstack-cell1-5zpzf\" (UID: \"16477c52-8296-4d66-ad5f-78826cc5bab7\") " pod="openstack/install-certs-openstack-openstack-cell1-5zpzf" Nov 22 12:44:47 crc kubenswrapper[4772]: I1122 12:44:47.375312 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-5zpzf" Nov 22 12:44:47 crc kubenswrapper[4772]: I1122 12:44:47.934556 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-openstack-openstack-cell1-5zpzf"] Nov 22 12:44:47 crc kubenswrapper[4772]: W1122 12:44:47.942921 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod16477c52_8296_4d66_ad5f_78826cc5bab7.slice/crio-4d8249ed0d165448db3b04e89f11093a65d1b96962cce08fb7109f70fa88611d WatchSource:0}: Error finding container 4d8249ed0d165448db3b04e89f11093a65d1b96962cce08fb7109f70fa88611d: Status 404 returned error can't find the container with id 4d8249ed0d165448db3b04e89f11093a65d1b96962cce08fb7109f70fa88611d Nov 22 12:44:48 crc kubenswrapper[4772]: I1122 12:44:48.529158 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rp29t"] Nov 22 12:44:48 crc kubenswrapper[4772]: I1122 12:44:48.533653 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rp29t" Nov 22 12:44:48 crc kubenswrapper[4772]: I1122 12:44:48.540930 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rp29t"] Nov 22 12:44:48 crc kubenswrapper[4772]: I1122 12:44:48.702658 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bp2c\" (UniqueName: \"kubernetes.io/projected/a2c7efdd-dec7-4533-af2e-df56b4face21-kube-api-access-6bp2c\") pod \"redhat-marketplace-rp29t\" (UID: \"a2c7efdd-dec7-4533-af2e-df56b4face21\") " pod="openshift-marketplace/redhat-marketplace-rp29t" Nov 22 12:44:48 crc kubenswrapper[4772]: I1122 12:44:48.702998 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2c7efdd-dec7-4533-af2e-df56b4face21-catalog-content\") pod \"redhat-marketplace-rp29t\" (UID: \"a2c7efdd-dec7-4533-af2e-df56b4face21\") " pod="openshift-marketplace/redhat-marketplace-rp29t" Nov 22 12:44:48 crc kubenswrapper[4772]: I1122 12:44:48.703037 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2c7efdd-dec7-4533-af2e-df56b4face21-utilities\") pod \"redhat-marketplace-rp29t\" (UID: \"a2c7efdd-dec7-4533-af2e-df56b4face21\") " pod="openshift-marketplace/redhat-marketplace-rp29t" Nov 22 12:44:48 crc kubenswrapper[4772]: I1122 12:44:48.806190 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bp2c\" (UniqueName: \"kubernetes.io/projected/a2c7efdd-dec7-4533-af2e-df56b4face21-kube-api-access-6bp2c\") pod \"redhat-marketplace-rp29t\" (UID: \"a2c7efdd-dec7-4533-af2e-df56b4face21\") " pod="openshift-marketplace/redhat-marketplace-rp29t" Nov 22 12:44:48 crc kubenswrapper[4772]: I1122 12:44:48.806287 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2c7efdd-dec7-4533-af2e-df56b4face21-catalog-content\") pod \"redhat-marketplace-rp29t\" (UID: \"a2c7efdd-dec7-4533-af2e-df56b4face21\") " pod="openshift-marketplace/redhat-marketplace-rp29t" Nov 22 12:44:48 crc kubenswrapper[4772]: I1122 12:44:48.806320 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/a2c7efdd-dec7-4533-af2e-df56b4face21-utilities\") pod \"redhat-marketplace-rp29t\" (UID: \"a2c7efdd-dec7-4533-af2e-df56b4face21\") " pod="openshift-marketplace/redhat-marketplace-rp29t" Nov 22 12:44:48 crc kubenswrapper[4772]: I1122 12:44:48.806834 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2c7efdd-dec7-4533-af2e-df56b4face21-catalog-content\") pod \"redhat-marketplace-rp29t\" (UID: \"a2c7efdd-dec7-4533-af2e-df56b4face21\") " pod="openshift-marketplace/redhat-marketplace-rp29t" Nov 22 12:44:48 crc kubenswrapper[4772]: I1122 12:44:48.806892 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2c7efdd-dec7-4533-af2e-df56b4face21-utilities\") pod \"redhat-marketplace-rp29t\" (UID: \"a2c7efdd-dec7-4533-af2e-df56b4face21\") " pod="openshift-marketplace/redhat-marketplace-rp29t" Nov 22 12:44:48 crc kubenswrapper[4772]: I1122 12:44:48.825991 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bp2c\" (UniqueName: \"kubernetes.io/projected/a2c7efdd-dec7-4533-af2e-df56b4face21-kube-api-access-6bp2c\") pod \"redhat-marketplace-rp29t\" (UID: \"a2c7efdd-dec7-4533-af2e-df56b4face21\") " pod="openshift-marketplace/redhat-marketplace-rp29t" Nov 22 12:44:48 crc kubenswrapper[4772]: I1122 12:44:48.912638 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-5zpzf" event={"ID":"16477c52-8296-4d66-ad5f-78826cc5bab7","Type":"ContainerStarted","Data":"cc20e466465592ff8fd0eb75183d410a439622148bb1255e0ca848bbee4a69c5"} Nov 22 12:44:48 crc kubenswrapper[4772]: I1122 12:44:48.912688 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-5zpzf" event={"ID":"16477c52-8296-4d66-ad5f-78826cc5bab7","Type":"ContainerStarted","Data":"4d8249ed0d165448db3b04e89f11093a65d1b96962cce08fb7109f70fa88611d"} Nov 22 12:44:48 crc kubenswrapper[4772]: I1122 12:44:48.913267 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rp29t" Nov 22 12:44:48 crc kubenswrapper[4772]: I1122 12:44:48.935362 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-openstack-openstack-cell1-5zpzf" podStartSLOduration=2.449739516 podStartE2EDuration="2.935342674s" podCreationTimestamp="2025-11-22 12:44:46 +0000 UTC" firstStartedPulling="2025-11-22 12:44:47.945711181 +0000 UTC m=+7608.185155685" lastFinishedPulling="2025-11-22 12:44:48.431314349 +0000 UTC m=+7608.670758843" observedRunningTime="2025-11-22 12:44:48.931741885 +0000 UTC m=+7609.171186379" watchObservedRunningTime="2025-11-22 12:44:48.935342674 +0000 UTC m=+7609.174787168" Nov 22 12:44:49 crc kubenswrapper[4772]: I1122 12:44:49.445014 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rp29t"] Nov 22 12:44:49 crc kubenswrapper[4772]: W1122 12:44:49.450858 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda2c7efdd_dec7_4533_af2e_df56b4face21.slice/crio-fb5ff71382c5cf920d3b89c8e1c8b5207ba390847fa10a19ac4d46b9bd760661 WatchSource:0}: Error finding container fb5ff71382c5cf920d3b89c8e1c8b5207ba390847fa10a19ac4d46b9bd760661: Status 404 returned error can't find the container with id fb5ff71382c5cf920d3b89c8e1c8b5207ba390847fa10a19ac4d46b9bd760661 Nov 22 12:44:49 crc kubenswrapper[4772]: I1122 12:44:49.925882 4772 generic.go:334] "Generic (PLEG): container finished" podID="a2c7efdd-dec7-4533-af2e-df56b4face21" containerID="c93c8f4e335be1cf1f7bb6b580e0fd9cbb40118e89bcc3330b0ac34954ac022c" exitCode=0 Nov 22 12:44:49 crc kubenswrapper[4772]: I1122 12:44:49.925970 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rp29t" event={"ID":"a2c7efdd-dec7-4533-af2e-df56b4face21","Type":"ContainerDied","Data":"c93c8f4e335be1cf1f7bb6b580e0fd9cbb40118e89bcc3330b0ac34954ac022c"} Nov 22 12:44:49 crc kubenswrapper[4772]: I1122 12:44:49.927289 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rp29t" event={"ID":"a2c7efdd-dec7-4533-af2e-df56b4face21","Type":"ContainerStarted","Data":"fb5ff71382c5cf920d3b89c8e1c8b5207ba390847fa10a19ac4d46b9bd760661"} Nov 22 12:44:50 crc kubenswrapper[4772]: I1122 12:44:50.941970 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rp29t" event={"ID":"a2c7efdd-dec7-4533-af2e-df56b4face21","Type":"ContainerStarted","Data":"5858bd05ae316958f7c3602ecea4f77e9acd6eb5465d2fed31911b21edd870e9"} Nov 22 12:44:51 crc kubenswrapper[4772]: I1122 12:44:51.964396 4772 generic.go:334] "Generic (PLEG): container finished" podID="a2c7efdd-dec7-4533-af2e-df56b4face21" containerID="5858bd05ae316958f7c3602ecea4f77e9acd6eb5465d2fed31911b21edd870e9" exitCode=0 Nov 22 12:44:51 crc kubenswrapper[4772]: I1122 12:44:51.965069 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rp29t" event={"ID":"a2c7efdd-dec7-4533-af2e-df56b4face21","Type":"ContainerDied","Data":"5858bd05ae316958f7c3602ecea4f77e9acd6eb5465d2fed31911b21edd870e9"} Nov 22 12:44:51 crc kubenswrapper[4772]: I1122 12:44:51.985269 4772 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 12:44:52 crc kubenswrapper[4772]: I1122 12:44:52.976844 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-rp29t" event={"ID":"a2c7efdd-dec7-4533-af2e-df56b4face21","Type":"ContainerStarted","Data":"1886db7d1a4eec1f9d899f9d8b2dcde9b70aab13e0e5aa89fb1dc748250a47e9"} Nov 22 12:44:53 crc kubenswrapper[4772]: I1122 12:44:53.002221 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rp29t" podStartSLOduration=2.446939555 podStartE2EDuration="5.002198395s" podCreationTimestamp="2025-11-22 12:44:48 +0000 UTC" firstStartedPulling="2025-11-22 12:44:49.927632933 +0000 UTC m=+7610.167077467" lastFinishedPulling="2025-11-22 12:44:52.482891813 +0000 UTC m=+7612.722336307" observedRunningTime="2025-11-22 12:44:52.996165886 +0000 UTC m=+7613.235610380" watchObservedRunningTime="2025-11-22 12:44:53.002198395 +0000 UTC m=+7613.241642889" Nov 22 12:44:58 crc kubenswrapper[4772]: I1122 12:44:58.913772 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rp29t" Nov 22 12:44:58 crc kubenswrapper[4772]: I1122 12:44:58.914392 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rp29t" Nov 22 12:44:58 crc kubenswrapper[4772]: I1122 12:44:58.962762 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rp29t" Nov 22 12:44:59 crc kubenswrapper[4772]: I1122 12:44:59.103856 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rp29t" Nov 22 12:44:59 crc kubenswrapper[4772]: I1122 12:44:59.203345 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rp29t"] Nov 22 12:45:00 crc kubenswrapper[4772]: I1122 12:45:00.159915 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396925-s7j6w"] Nov 22 12:45:00 crc kubenswrapper[4772]: I1122 12:45:00.161687 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396925-s7j6w" Nov 22 12:45:00 crc kubenswrapper[4772]: I1122 12:45:00.164319 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 22 12:45:00 crc kubenswrapper[4772]: I1122 12:45:00.164381 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 22 12:45:00 crc kubenswrapper[4772]: I1122 12:45:00.181024 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396925-s7j6w"] Nov 22 12:45:00 crc kubenswrapper[4772]: I1122 12:45:00.186991 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a46bb21b-0c5a-418b-8b05-244477414c43-secret-volume\") pod \"collect-profiles-29396925-s7j6w\" (UID: \"a46bb21b-0c5a-418b-8b05-244477414c43\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396925-s7j6w" Nov 22 12:45:00 crc kubenswrapper[4772]: I1122 12:45:00.187233 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a46bb21b-0c5a-418b-8b05-244477414c43-config-volume\") pod \"collect-profiles-29396925-s7j6w\" (UID: \"a46bb21b-0c5a-418b-8b05-244477414c43\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396925-s7j6w" Nov 22 12:45:00 crc kubenswrapper[4772]: I1122 12:45:00.187635 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkgfh\" (UniqueName: \"kubernetes.io/projected/a46bb21b-0c5a-418b-8b05-244477414c43-kube-api-access-mkgfh\") pod \"collect-profiles-29396925-s7j6w\" (UID: \"a46bb21b-0c5a-418b-8b05-244477414c43\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396925-s7j6w" Nov 22 12:45:00 crc kubenswrapper[4772]: I1122 12:45:00.292035 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a46bb21b-0c5a-418b-8b05-244477414c43-config-volume\") pod \"collect-profiles-29396925-s7j6w\" (UID: \"a46bb21b-0c5a-418b-8b05-244477414c43\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396925-s7j6w" Nov 22 12:45:00 crc kubenswrapper[4772]: I1122 12:45:00.292173 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkgfh\" (UniqueName: \"kubernetes.io/projected/a46bb21b-0c5a-418b-8b05-244477414c43-kube-api-access-mkgfh\") pod \"collect-profiles-29396925-s7j6w\" (UID: \"a46bb21b-0c5a-418b-8b05-244477414c43\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396925-s7j6w" Nov 22 12:45:00 crc kubenswrapper[4772]: I1122 12:45:00.292314 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a46bb21b-0c5a-418b-8b05-244477414c43-secret-volume\") pod \"collect-profiles-29396925-s7j6w\" (UID: \"a46bb21b-0c5a-418b-8b05-244477414c43\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396925-s7j6w" Nov 22 12:45:00 crc kubenswrapper[4772]: I1122 12:45:00.293162 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a46bb21b-0c5a-418b-8b05-244477414c43-config-volume\") pod 
\"collect-profiles-29396925-s7j6w\" (UID: \"a46bb21b-0c5a-418b-8b05-244477414c43\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396925-s7j6w" Nov 22 12:45:00 crc kubenswrapper[4772]: I1122 12:45:00.301256 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a46bb21b-0c5a-418b-8b05-244477414c43-secret-volume\") pod \"collect-profiles-29396925-s7j6w\" (UID: \"a46bb21b-0c5a-418b-8b05-244477414c43\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396925-s7j6w" Nov 22 12:45:00 crc kubenswrapper[4772]: I1122 12:45:00.309750 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkgfh\" (UniqueName: \"kubernetes.io/projected/a46bb21b-0c5a-418b-8b05-244477414c43-kube-api-access-mkgfh\") pod \"collect-profiles-29396925-s7j6w\" (UID: \"a46bb21b-0c5a-418b-8b05-244477414c43\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396925-s7j6w" Nov 22 12:45:00 crc kubenswrapper[4772]: I1122 12:45:00.501226 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396925-s7j6w" Nov 22 12:45:01 crc kubenswrapper[4772]: I1122 12:45:01.009755 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396925-s7j6w"] Nov 22 12:45:01 crc kubenswrapper[4772]: I1122 12:45:01.075029 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rp29t" podUID="a2c7efdd-dec7-4533-af2e-df56b4face21" containerName="registry-server" containerID="cri-o://1886db7d1a4eec1f9d899f9d8b2dcde9b70aab13e0e5aa89fb1dc748250a47e9" gracePeriod=2 Nov 22 12:45:01 crc kubenswrapper[4772]: I1122 12:45:01.075460 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396925-s7j6w" event={"ID":"a46bb21b-0c5a-418b-8b05-244477414c43","Type":"ContainerStarted","Data":"87b8a696470d025a09eb8ed5a8a531412ee37326ee1a94cf53d9d6de3091ad0d"} Nov 22 12:45:01 crc kubenswrapper[4772]: I1122 12:45:01.618860 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rp29t" Nov 22 12:45:01 crc kubenswrapper[4772]: I1122 12:45:01.626151 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2c7efdd-dec7-4533-af2e-df56b4face21-utilities\") pod \"a2c7efdd-dec7-4533-af2e-df56b4face21\" (UID: \"a2c7efdd-dec7-4533-af2e-df56b4face21\") " Nov 22 12:45:01 crc kubenswrapper[4772]: I1122 12:45:01.626262 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bp2c\" (UniqueName: \"kubernetes.io/projected/a2c7efdd-dec7-4533-af2e-df56b4face21-kube-api-access-6bp2c\") pod \"a2c7efdd-dec7-4533-af2e-df56b4face21\" (UID: \"a2c7efdd-dec7-4533-af2e-df56b4face21\") " Nov 22 12:45:01 crc kubenswrapper[4772]: I1122 12:45:01.626456 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2c7efdd-dec7-4533-af2e-df56b4face21-catalog-content\") pod \"a2c7efdd-dec7-4533-af2e-df56b4face21\" (UID: \"a2c7efdd-dec7-4533-af2e-df56b4face21\") " Nov 22 12:45:01 crc kubenswrapper[4772]: I1122 12:45:01.626961 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2c7efdd-dec7-4533-af2e-df56b4face21-utilities" (OuterVolumeSpecName: "utilities") pod "a2c7efdd-dec7-4533-af2e-df56b4face21" (UID: "a2c7efdd-dec7-4533-af2e-df56b4face21"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 12:45:01 crc kubenswrapper[4772]: I1122 12:45:01.627650 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2c7efdd-dec7-4533-af2e-df56b4face21-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 12:45:01 crc kubenswrapper[4772]: I1122 12:45:01.634354 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2c7efdd-dec7-4533-af2e-df56b4face21-kube-api-access-6bp2c" (OuterVolumeSpecName: "kube-api-access-6bp2c") pod "a2c7efdd-dec7-4533-af2e-df56b4face21" (UID: "a2c7efdd-dec7-4533-af2e-df56b4face21"). InnerVolumeSpecName "kube-api-access-6bp2c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:45:01 crc kubenswrapper[4772]: I1122 12:45:01.653786 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2c7efdd-dec7-4533-af2e-df56b4face21-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a2c7efdd-dec7-4533-af2e-df56b4face21" (UID: "a2c7efdd-dec7-4533-af2e-df56b4face21"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 12:45:01 crc kubenswrapper[4772]: I1122 12:45:01.733730 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bp2c\" (UniqueName: \"kubernetes.io/projected/a2c7efdd-dec7-4533-af2e-df56b4face21-kube-api-access-6bp2c\") on node \"crc\" DevicePath \"\"" Nov 22 12:45:01 crc kubenswrapper[4772]: I1122 12:45:01.733802 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2c7efdd-dec7-4533-af2e-df56b4face21-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 12:45:02 crc kubenswrapper[4772]: I1122 12:45:02.091485 4772 generic.go:334] "Generic (PLEG): container finished" podID="a46bb21b-0c5a-418b-8b05-244477414c43" containerID="99efa35b3d173aaa59d64b4599bc9dd2574a02e59a7da98f80381f429badfe38" exitCode=0 Nov 22 12:45:02 crc kubenswrapper[4772]: I1122 12:45:02.091600 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396925-s7j6w" event={"ID":"a46bb21b-0c5a-418b-8b05-244477414c43","Type":"ContainerDied","Data":"99efa35b3d173aaa59d64b4599bc9dd2574a02e59a7da98f80381f429badfe38"} Nov 22 12:45:02 crc kubenswrapper[4772]: I1122 12:45:02.094956 4772 generic.go:334] "Generic (PLEG): container finished" podID="a2c7efdd-dec7-4533-af2e-df56b4face21" containerID="1886db7d1a4eec1f9d899f9d8b2dcde9b70aab13e0e5aa89fb1dc748250a47e9" exitCode=0 Nov 22 12:45:02 crc kubenswrapper[4772]: I1122 12:45:02.095004 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rp29t" event={"ID":"a2c7efdd-dec7-4533-af2e-df56b4face21","Type":"ContainerDied","Data":"1886db7d1a4eec1f9d899f9d8b2dcde9b70aab13e0e5aa89fb1dc748250a47e9"} Nov 22 12:45:02 crc kubenswrapper[4772]: I1122 12:45:02.095034 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rp29t" event={"ID":"a2c7efdd-dec7-4533-af2e-df56b4face21","Type":"ContainerDied","Data":"fb5ff71382c5cf920d3b89c8e1c8b5207ba390847fa10a19ac4d46b9bd760661"} Nov 22 12:45:02 crc kubenswrapper[4772]: I1122 12:45:02.095077 4772 scope.go:117] "RemoveContainer" containerID="1886db7d1a4eec1f9d899f9d8b2dcde9b70aab13e0e5aa89fb1dc748250a47e9" Nov 22 12:45:02 crc kubenswrapper[4772]: I1122 12:45:02.095086 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rp29t" Nov 22 12:45:02 crc kubenswrapper[4772]: I1122 12:45:02.156731 4772 scope.go:117] "RemoveContainer" containerID="5858bd05ae316958f7c3602ecea4f77e9acd6eb5465d2fed31911b21edd870e9" Nov 22 12:45:02 crc kubenswrapper[4772]: I1122 12:45:02.161162 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rp29t"] Nov 22 12:45:02 crc kubenswrapper[4772]: I1122 12:45:02.172393 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rp29t"] Nov 22 12:45:02 crc kubenswrapper[4772]: I1122 12:45:02.198493 4772 scope.go:117] "RemoveContainer" containerID="c93c8f4e335be1cf1f7bb6b580e0fd9cbb40118e89bcc3330b0ac34954ac022c" Nov 22 12:45:02 crc kubenswrapper[4772]: I1122 12:45:02.251394 4772 scope.go:117] "RemoveContainer" containerID="1886db7d1a4eec1f9d899f9d8b2dcde9b70aab13e0e5aa89fb1dc748250a47e9" Nov 22 12:45:02 crc kubenswrapper[4772]: E1122 12:45:02.252238 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1886db7d1a4eec1f9d899f9d8b2dcde9b70aab13e0e5aa89fb1dc748250a47e9\": container with ID starting with 1886db7d1a4eec1f9d899f9d8b2dcde9b70aab13e0e5aa89fb1dc748250a47e9 not found: ID does not exist" containerID="1886db7d1a4eec1f9d899f9d8b2dcde9b70aab13e0e5aa89fb1dc748250a47e9" Nov 22 12:45:02 crc kubenswrapper[4772]: I1122 12:45:02.252280 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1886db7d1a4eec1f9d899f9d8b2dcde9b70aab13e0e5aa89fb1dc748250a47e9"} err="failed to get container status \"1886db7d1a4eec1f9d899f9d8b2dcde9b70aab13e0e5aa89fb1dc748250a47e9\": rpc error: code = NotFound desc = could not find container \"1886db7d1a4eec1f9d899f9d8b2dcde9b70aab13e0e5aa89fb1dc748250a47e9\": container with ID starting with 1886db7d1a4eec1f9d899f9d8b2dcde9b70aab13e0e5aa89fb1dc748250a47e9 not found: ID does not exist" Nov 22 12:45:02 crc kubenswrapper[4772]: I1122 12:45:02.252305 4772 scope.go:117] "RemoveContainer" containerID="5858bd05ae316958f7c3602ecea4f77e9acd6eb5465d2fed31911b21edd870e9" Nov 22 12:45:02 crc kubenswrapper[4772]: E1122 12:45:02.253094 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5858bd05ae316958f7c3602ecea4f77e9acd6eb5465d2fed31911b21edd870e9\": container with ID starting with 5858bd05ae316958f7c3602ecea4f77e9acd6eb5465d2fed31911b21edd870e9 not found: ID does not exist" containerID="5858bd05ae316958f7c3602ecea4f77e9acd6eb5465d2fed31911b21edd870e9" Nov 22 12:45:02 crc kubenswrapper[4772]: I1122 12:45:02.253112 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5858bd05ae316958f7c3602ecea4f77e9acd6eb5465d2fed31911b21edd870e9"} err="failed to get container status \"5858bd05ae316958f7c3602ecea4f77e9acd6eb5465d2fed31911b21edd870e9\": rpc error: code = NotFound desc = could not find container \"5858bd05ae316958f7c3602ecea4f77e9acd6eb5465d2fed31911b21edd870e9\": container with ID starting with 5858bd05ae316958f7c3602ecea4f77e9acd6eb5465d2fed31911b21edd870e9 not found: ID does not exist" Nov 22 12:45:02 crc kubenswrapper[4772]: I1122 12:45:02.253124 4772 scope.go:117] "RemoveContainer" containerID="c93c8f4e335be1cf1f7bb6b580e0fd9cbb40118e89bcc3330b0ac34954ac022c" Nov 22 12:45:02 crc kubenswrapper[4772]: E1122 12:45:02.253638 4772 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"c93c8f4e335be1cf1f7bb6b580e0fd9cbb40118e89bcc3330b0ac34954ac022c\": container with ID starting with c93c8f4e335be1cf1f7bb6b580e0fd9cbb40118e89bcc3330b0ac34954ac022c not found: ID does not exist" containerID="c93c8f4e335be1cf1f7bb6b580e0fd9cbb40118e89bcc3330b0ac34954ac022c" Nov 22 12:45:02 crc kubenswrapper[4772]: I1122 12:45:02.253658 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c93c8f4e335be1cf1f7bb6b580e0fd9cbb40118e89bcc3330b0ac34954ac022c"} err="failed to get container status \"c93c8f4e335be1cf1f7bb6b580e0fd9cbb40118e89bcc3330b0ac34954ac022c\": rpc error: code = NotFound desc = could not find container \"c93c8f4e335be1cf1f7bb6b580e0fd9cbb40118e89bcc3330b0ac34954ac022c\": container with ID starting with c93c8f4e335be1cf1f7bb6b580e0fd9cbb40118e89bcc3330b0ac34954ac022c not found: ID does not exist" Nov 22 12:45:03 crc kubenswrapper[4772]: E1122 12:45:03.247153 4772 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda46bb21b_0c5a_418b_8b05_244477414c43.slice/crio-conmon-99efa35b3d173aaa59d64b4599bc9dd2574a02e59a7da98f80381f429badfe38.scope\": RecentStats: unable to find data in memory cache]" Nov 22 12:45:03 crc kubenswrapper[4772]: I1122 12:45:03.450265 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2c7efdd-dec7-4533-af2e-df56b4face21" path="/var/lib/kubelet/pods/a2c7efdd-dec7-4533-af2e-df56b4face21/volumes" Nov 22 12:45:03 crc kubenswrapper[4772]: I1122 12:45:03.508335 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396925-s7j6w" Nov 22 12:45:03 crc kubenswrapper[4772]: I1122 12:45:03.584723 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkgfh\" (UniqueName: \"kubernetes.io/projected/a46bb21b-0c5a-418b-8b05-244477414c43-kube-api-access-mkgfh\") pod \"a46bb21b-0c5a-418b-8b05-244477414c43\" (UID: \"a46bb21b-0c5a-418b-8b05-244477414c43\") " Nov 22 12:45:03 crc kubenswrapper[4772]: I1122 12:45:03.584847 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a46bb21b-0c5a-418b-8b05-244477414c43-secret-volume\") pod \"a46bb21b-0c5a-418b-8b05-244477414c43\" (UID: \"a46bb21b-0c5a-418b-8b05-244477414c43\") " Nov 22 12:45:03 crc kubenswrapper[4772]: I1122 12:45:03.585078 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a46bb21b-0c5a-418b-8b05-244477414c43-config-volume\") pod \"a46bb21b-0c5a-418b-8b05-244477414c43\" (UID: \"a46bb21b-0c5a-418b-8b05-244477414c43\") " Nov 22 12:45:03 crc kubenswrapper[4772]: I1122 12:45:03.586156 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a46bb21b-0c5a-418b-8b05-244477414c43-config-volume" (OuterVolumeSpecName: "config-volume") pod "a46bb21b-0c5a-418b-8b05-244477414c43" (UID: "a46bb21b-0c5a-418b-8b05-244477414c43"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 12:45:03 crc kubenswrapper[4772]: I1122 12:45:03.591395 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a46bb21b-0c5a-418b-8b05-244477414c43-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a46bb21b-0c5a-418b-8b05-244477414c43" (UID: "a46bb21b-0c5a-418b-8b05-244477414c43"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:45:03 crc kubenswrapper[4772]: I1122 12:45:03.597384 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a46bb21b-0c5a-418b-8b05-244477414c43-kube-api-access-mkgfh" (OuterVolumeSpecName: "kube-api-access-mkgfh") pod "a46bb21b-0c5a-418b-8b05-244477414c43" (UID: "a46bb21b-0c5a-418b-8b05-244477414c43"). InnerVolumeSpecName "kube-api-access-mkgfh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:45:03 crc kubenswrapper[4772]: I1122 12:45:03.688186 4772 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a46bb21b-0c5a-418b-8b05-244477414c43-config-volume\") on node \"crc\" DevicePath \"\"" Nov 22 12:45:03 crc kubenswrapper[4772]: I1122 12:45:03.688224 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mkgfh\" (UniqueName: \"kubernetes.io/projected/a46bb21b-0c5a-418b-8b05-244477414c43-kube-api-access-mkgfh\") on node \"crc\" DevicePath \"\"" Nov 22 12:45:03 crc kubenswrapper[4772]: I1122 12:45:03.688235 4772 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a46bb21b-0c5a-418b-8b05-244477414c43-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 22 12:45:04 crc kubenswrapper[4772]: I1122 12:45:04.116955 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396925-s7j6w" event={"ID":"a46bb21b-0c5a-418b-8b05-244477414c43","Type":"ContainerDied","Data":"87b8a696470d025a09eb8ed5a8a531412ee37326ee1a94cf53d9d6de3091ad0d"} Nov 22 12:45:04 crc kubenswrapper[4772]: I1122 12:45:04.117487 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87b8a696470d025a09eb8ed5a8a531412ee37326ee1a94cf53d9d6de3091ad0d" Nov 22 12:45:04 crc kubenswrapper[4772]: I1122 12:45:04.117001 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396925-s7j6w" Nov 22 12:45:04 crc kubenswrapper[4772]: I1122 12:45:04.612662 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396880-h269b"] Nov 22 12:45:04 crc kubenswrapper[4772]: I1122 12:45:04.623116 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396880-h269b"] Nov 22 12:45:05 crc kubenswrapper[4772]: I1122 12:45:05.433085 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13937ca4-2579-4c88-bfac-8cf50aeb2ffe" path="/var/lib/kubelet/pods/13937ca4-2579-4c88-bfac-8cf50aeb2ffe/volumes" Nov 22 12:45:08 crc kubenswrapper[4772]: I1122 12:45:08.164773 4772 generic.go:334] "Generic (PLEG): container finished" podID="16477c52-8296-4d66-ad5f-78826cc5bab7" containerID="cc20e466465592ff8fd0eb75183d410a439622148bb1255e0ca848bbee4a69c5" exitCode=0 Nov 22 12:45:08 crc kubenswrapper[4772]: I1122 12:45:08.164864 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-5zpzf" event={"ID":"16477c52-8296-4d66-ad5f-78826cc5bab7","Type":"ContainerDied","Data":"cc20e466465592ff8fd0eb75183d410a439622148bb1255e0ca848bbee4a69c5"} Nov 22 12:45:09 crc kubenswrapper[4772]: I1122 12:45:09.685996 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-5zpzf" Nov 22 12:45:09 crc kubenswrapper[4772]: I1122 12:45:09.762799 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16477c52-8296-4d66-ad5f-78826cc5bab7-bootstrap-combined-ca-bundle\") pod \"16477c52-8296-4d66-ad5f-78826cc5bab7\" (UID: \"16477c52-8296-4d66-ad5f-78826cc5bab7\") " Nov 22 12:45:09 crc kubenswrapper[4772]: I1122 12:45:09.762890 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16477c52-8296-4d66-ad5f-78826cc5bab7-neutron-dhcp-combined-ca-bundle\") pod \"16477c52-8296-4d66-ad5f-78826cc5bab7\" (UID: \"16477c52-8296-4d66-ad5f-78826cc5bab7\") " Nov 22 12:45:09 crc kubenswrapper[4772]: I1122 12:45:09.762946 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/16477c52-8296-4d66-ad5f-78826cc5bab7-ssh-key\") pod \"16477c52-8296-4d66-ad5f-78826cc5bab7\" (UID: \"16477c52-8296-4d66-ad5f-78826cc5bab7\") " Nov 22 12:45:09 crc kubenswrapper[4772]: I1122 12:45:09.763042 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16477c52-8296-4d66-ad5f-78826cc5bab7-neutron-metadata-combined-ca-bundle\") pod \"16477c52-8296-4d66-ad5f-78826cc5bab7\" (UID: \"16477c52-8296-4d66-ad5f-78826cc5bab7\") " Nov 22 12:45:09 crc kubenswrapper[4772]: I1122 12:45:09.763140 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16477c52-8296-4d66-ad5f-78826cc5bab7-libvirt-combined-ca-bundle\") pod \"16477c52-8296-4d66-ad5f-78826cc5bab7\" (UID: \"16477c52-8296-4d66-ad5f-78826cc5bab7\") " Nov 22 12:45:09 crc kubenswrapper[4772]: I1122 12:45:09.763268 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16477c52-8296-4d66-ad5f-78826cc5bab7-telemetry-combined-ca-bundle\") pod \"16477c52-8296-4d66-ad5f-78826cc5bab7\" (UID: \"16477c52-8296-4d66-ad5f-78826cc5bab7\") " Nov 22 12:45:09 crc kubenswrapper[4772]: I1122 12:45:09.763303 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16477c52-8296-4d66-ad5f-78826cc5bab7-ovn-combined-ca-bundle\") pod \"16477c52-8296-4d66-ad5f-78826cc5bab7\" (UID: \"16477c52-8296-4d66-ad5f-78826cc5bab7\") " Nov 22 12:45:09 crc kubenswrapper[4772]: I1122 12:45:09.763453 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16477c52-8296-4d66-ad5f-78826cc5bab7-nova-combined-ca-bundle\") pod \"16477c52-8296-4d66-ad5f-78826cc5bab7\" (UID: \"16477c52-8296-4d66-ad5f-78826cc5bab7\") " Nov 22 12:45:09 crc kubenswrapper[4772]: I1122 12:45:09.763519 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/16477c52-8296-4d66-ad5f-78826cc5bab7-ceph\") pod \"16477c52-8296-4d66-ad5f-78826cc5bab7\" (UID: \"16477c52-8296-4d66-ad5f-78826cc5bab7\") " Nov 22 12:45:09 crc kubenswrapper[4772]: I1122 12:45:09.763568 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16477c52-8296-4d66-ad5f-78826cc5bab7-neutron-sriov-combined-ca-bundle\") pod \"16477c52-8296-4d66-ad5f-78826cc5bab7\" (UID: \"16477c52-8296-4d66-ad5f-78826cc5bab7\") " Nov 22 12:45:09 crc kubenswrapper[4772]: I1122 12:45:09.763606 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/16477c52-8296-4d66-ad5f-78826cc5bab7-inventory\") pod \"16477c52-8296-4d66-ad5f-78826cc5bab7\" (UID: \"16477c52-8296-4d66-ad5f-78826cc5bab7\") " Nov 22 12:45:09 crc kubenswrapper[4772]: I1122 12:45:09.763656 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzftc\" (UniqueName: \"kubernetes.io/projected/16477c52-8296-4d66-ad5f-78826cc5bab7-kube-api-access-bzftc\") pod \"16477c52-8296-4d66-ad5f-78826cc5bab7\" (UID: \"16477c52-8296-4d66-ad5f-78826cc5bab7\") " Nov 22 12:45:09 crc kubenswrapper[4772]: I1122 12:45:09.770607 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16477c52-8296-4d66-ad5f-78826cc5bab7-ceph" (OuterVolumeSpecName: "ceph") pod "16477c52-8296-4d66-ad5f-78826cc5bab7" (UID: "16477c52-8296-4d66-ad5f-78826cc5bab7"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:45:09 crc kubenswrapper[4772]: I1122 12:45:09.771173 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16477c52-8296-4d66-ad5f-78826cc5bab7-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "16477c52-8296-4d66-ad5f-78826cc5bab7" (UID: "16477c52-8296-4d66-ad5f-78826cc5bab7"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:45:09 crc kubenswrapper[4772]: I1122 12:45:09.771246 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16477c52-8296-4d66-ad5f-78826cc5bab7-neutron-dhcp-combined-ca-bundle" (OuterVolumeSpecName: "neutron-dhcp-combined-ca-bundle") pod "16477c52-8296-4d66-ad5f-78826cc5bab7" (UID: "16477c52-8296-4d66-ad5f-78826cc5bab7"). InnerVolumeSpecName "neutron-dhcp-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:45:09 crc kubenswrapper[4772]: I1122 12:45:09.771284 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16477c52-8296-4d66-ad5f-78826cc5bab7-neutron-sriov-combined-ca-bundle" (OuterVolumeSpecName: "neutron-sriov-combined-ca-bundle") pod "16477c52-8296-4d66-ad5f-78826cc5bab7" (UID: "16477c52-8296-4d66-ad5f-78826cc5bab7"). InnerVolumeSpecName "neutron-sriov-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:45:09 crc kubenswrapper[4772]: I1122 12:45:09.771741 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16477c52-8296-4d66-ad5f-78826cc5bab7-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "16477c52-8296-4d66-ad5f-78826cc5bab7" (UID: "16477c52-8296-4d66-ad5f-78826cc5bab7"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:45:09 crc kubenswrapper[4772]: I1122 12:45:09.772766 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16477c52-8296-4d66-ad5f-78826cc5bab7-kube-api-access-bzftc" (OuterVolumeSpecName: "kube-api-access-bzftc") pod "16477c52-8296-4d66-ad5f-78826cc5bab7" (UID: "16477c52-8296-4d66-ad5f-78826cc5bab7"). InnerVolumeSpecName "kube-api-access-bzftc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:45:09 crc kubenswrapper[4772]: I1122 12:45:09.773980 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16477c52-8296-4d66-ad5f-78826cc5bab7-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "16477c52-8296-4d66-ad5f-78826cc5bab7" (UID: "16477c52-8296-4d66-ad5f-78826cc5bab7"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:45:09 crc kubenswrapper[4772]: I1122 12:45:09.776141 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16477c52-8296-4d66-ad5f-78826cc5bab7-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "16477c52-8296-4d66-ad5f-78826cc5bab7" (UID: "16477c52-8296-4d66-ad5f-78826cc5bab7"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:45:09 crc kubenswrapper[4772]: I1122 12:45:09.781491 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16477c52-8296-4d66-ad5f-78826cc5bab7-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "16477c52-8296-4d66-ad5f-78826cc5bab7" (UID: "16477c52-8296-4d66-ad5f-78826cc5bab7"). InnerVolumeSpecName "nova-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:45:09 crc kubenswrapper[4772]: I1122 12:45:09.799689 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16477c52-8296-4d66-ad5f-78826cc5bab7-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "16477c52-8296-4d66-ad5f-78826cc5bab7" (UID: "16477c52-8296-4d66-ad5f-78826cc5bab7"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:45:09 crc kubenswrapper[4772]: I1122 12:45:09.813231 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16477c52-8296-4d66-ad5f-78826cc5bab7-inventory" (OuterVolumeSpecName: "inventory") pod "16477c52-8296-4d66-ad5f-78826cc5bab7" (UID: "16477c52-8296-4d66-ad5f-78826cc5bab7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:45:09 crc kubenswrapper[4772]: I1122 12:45:09.817479 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16477c52-8296-4d66-ad5f-78826cc5bab7-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "16477c52-8296-4d66-ad5f-78826cc5bab7" (UID: "16477c52-8296-4d66-ad5f-78826cc5bab7"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:45:09 crc kubenswrapper[4772]: I1122 12:45:09.866704 4772 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/16477c52-8296-4d66-ad5f-78826cc5bab7-inventory\") on node \"crc\" DevicePath \"\"" Nov 22 12:45:09 crc kubenswrapper[4772]: I1122 12:45:09.866753 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bzftc\" (UniqueName: \"kubernetes.io/projected/16477c52-8296-4d66-ad5f-78826cc5bab7-kube-api-access-bzftc\") on node \"crc\" DevicePath \"\"" Nov 22 12:45:09 crc kubenswrapper[4772]: I1122 12:45:09.866767 4772 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16477c52-8296-4d66-ad5f-78826cc5bab7-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 12:45:09 crc kubenswrapper[4772]: I1122 12:45:09.866780 4772 reconciler_common.go:293] "Volume detached for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16477c52-8296-4d66-ad5f-78826cc5bab7-neutron-dhcp-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 12:45:09 crc kubenswrapper[4772]: I1122 12:45:09.866792 4772 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/16477c52-8296-4d66-ad5f-78826cc5bab7-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 22 12:45:09 crc kubenswrapper[4772]: I1122 12:45:09.866806 4772 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16477c52-8296-4d66-ad5f-78826cc5bab7-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 12:45:09 crc kubenswrapper[4772]: I1122 12:45:09.866815 4772 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16477c52-8296-4d66-ad5f-78826cc5bab7-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 12:45:09 crc kubenswrapper[4772]: I1122 12:45:09.866826 4772 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/16477c52-8296-4d66-ad5f-78826cc5bab7-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 12:45:09 crc kubenswrapper[4772]: I1122 12:45:09.866838 4772 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16477c52-8296-4d66-ad5f-78826cc5bab7-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 12:45:09 crc kubenswrapper[4772]: I1122 12:45:09.866848 4772 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16477c52-8296-4d66-ad5f-78826cc5bab7-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 12:45:09 crc kubenswrapper[4772]: I1122 12:45:09.866859 4772 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/16477c52-8296-4d66-ad5f-78826cc5bab7-ceph\") on node \"crc\" DevicePath \"\"" Nov 22 12:45:09 crc kubenswrapper[4772]: I1122 12:45:09.866869 4772 reconciler_common.go:293] "Volume detached for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16477c52-8296-4d66-ad5f-78826cc5bab7-neutron-sriov-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 12:45:10 crc kubenswrapper[4772]: I1122 12:45:10.189132 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-5zpzf" event={"ID":"16477c52-8296-4d66-ad5f-78826cc5bab7","Type":"ContainerDied","Data":"4d8249ed0d165448db3b04e89f11093a65d1b96962cce08fb7109f70fa88611d"} Nov 22 12:45:10 crc kubenswrapper[4772]: I1122 12:45:10.189238 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d8249ed0d165448db3b04e89f11093a65d1b96962cce08fb7109f70fa88611d" Nov 22 12:45:10 crc kubenswrapper[4772]: I1122 12:45:10.189154 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-5zpzf" Nov 22 12:45:10 crc kubenswrapper[4772]: I1122 12:45:10.269928 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-client-openstack-openstack-cell1-c2qgd"] Nov 22 12:45:10 crc kubenswrapper[4772]: E1122 12:45:10.277657 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16477c52-8296-4d66-ad5f-78826cc5bab7" containerName="install-certs-openstack-openstack-cell1" Nov 22 12:45:10 crc kubenswrapper[4772]: I1122 12:45:10.277692 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="16477c52-8296-4d66-ad5f-78826cc5bab7" containerName="install-certs-openstack-openstack-cell1" Nov 22 12:45:10 crc kubenswrapper[4772]: E1122 12:45:10.277708 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a46bb21b-0c5a-418b-8b05-244477414c43" containerName="collect-profiles" Nov 22 12:45:10 crc kubenswrapper[4772]: I1122 12:45:10.277715 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="a46bb21b-0c5a-418b-8b05-244477414c43" containerName="collect-profiles" Nov 22 12:45:10 crc kubenswrapper[4772]: E1122 12:45:10.277726 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2c7efdd-dec7-4533-af2e-df56b4face21" containerName="extract-content" Nov 22 12:45:10 crc kubenswrapper[4772]: I1122 12:45:10.277733 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2c7efdd-dec7-4533-af2e-df56b4face21" containerName="extract-content" Nov 22 12:45:10 crc kubenswrapper[4772]: E1122 12:45:10.277754 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2c7efdd-dec7-4533-af2e-df56b4face21" containerName="extract-utilities" Nov 22 12:45:10 crc kubenswrapper[4772]: I1122 12:45:10.277760 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2c7efdd-dec7-4533-af2e-df56b4face21" containerName="extract-utilities" Nov 22 12:45:10 crc kubenswrapper[4772]: E1122 12:45:10.277789 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2c7efdd-dec7-4533-af2e-df56b4face21" containerName="registry-server" Nov 22 12:45:10 crc kubenswrapper[4772]: I1122 12:45:10.277794 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2c7efdd-dec7-4533-af2e-df56b4face21" containerName="registry-server" Nov 22 12:45:10 crc kubenswrapper[4772]: I1122 12:45:10.278005 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2c7efdd-dec7-4533-af2e-df56b4face21" containerName="registry-server" Nov 22 12:45:10 crc kubenswrapper[4772]: I1122 12:45:10.278018 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="a46bb21b-0c5a-418b-8b05-244477414c43" containerName="collect-profiles" Nov 22 12:45:10 crc kubenswrapper[4772]: I1122 12:45:10.278041 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="16477c52-8296-4d66-ad5f-78826cc5bab7" containerName="install-certs-openstack-openstack-cell1" Nov 22 12:45:10 crc kubenswrapper[4772]: I1122 12:45:10.278838 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-openstack-openstack-cell1-c2qgd" Nov 22 12:45:10 crc kubenswrapper[4772]: I1122 12:45:10.283499 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-openstack-openstack-cell1-c2qgd"] Nov 22 12:45:10 crc kubenswrapper[4772]: I1122 12:45:10.284361 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 22 12:45:10 crc kubenswrapper[4772]: I1122 12:45:10.284480 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 22 12:45:10 crc kubenswrapper[4772]: I1122 12:45:10.284642 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 22 12:45:10 crc kubenswrapper[4772]: I1122 12:45:10.284808 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-6s2nz" Nov 22 12:45:10 crc kubenswrapper[4772]: I1122 12:45:10.379943 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcvwl\" (UniqueName: \"kubernetes.io/projected/6938b21e-c1bf-418d-a88e-f39b7a771257-kube-api-access-jcvwl\") pod \"ceph-client-openstack-openstack-cell1-c2qgd\" (UID: \"6938b21e-c1bf-418d-a88e-f39b7a771257\") " pod="openstack/ceph-client-openstack-openstack-cell1-c2qgd" Nov 22 12:45:10 crc kubenswrapper[4772]: I1122 12:45:10.379991 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6938b21e-c1bf-418d-a88e-f39b7a771257-ceph\") pod \"ceph-client-openstack-openstack-cell1-c2qgd\" (UID: \"6938b21e-c1bf-418d-a88e-f39b7a771257\") " pod="openstack/ceph-client-openstack-openstack-cell1-c2qgd" Nov 22 12:45:10 crc kubenswrapper[4772]: I1122 12:45:10.380225 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6938b21e-c1bf-418d-a88e-f39b7a771257-ssh-key\") pod \"ceph-client-openstack-openstack-cell1-c2qgd\" (UID: \"6938b21e-c1bf-418d-a88e-f39b7a771257\") " pod="openstack/ceph-client-openstack-openstack-cell1-c2qgd" Nov 22 12:45:10 crc kubenswrapper[4772]: I1122 12:45:10.380403 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6938b21e-c1bf-418d-a88e-f39b7a771257-inventory\") pod \"ceph-client-openstack-openstack-cell1-c2qgd\" (UID: \"6938b21e-c1bf-418d-a88e-f39b7a771257\") " pod="openstack/ceph-client-openstack-openstack-cell1-c2qgd" Nov 22 12:45:10 crc kubenswrapper[4772]: I1122 12:45:10.482734 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6938b21e-c1bf-418d-a88e-f39b7a771257-ssh-key\") pod \"ceph-client-openstack-openstack-cell1-c2qgd\" (UID: \"6938b21e-c1bf-418d-a88e-f39b7a771257\") " pod="openstack/ceph-client-openstack-openstack-cell1-c2qgd" Nov 22 12:45:10 crc kubenswrapper[4772]: I1122 12:45:10.482821 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6938b21e-c1bf-418d-a88e-f39b7a771257-inventory\") pod \"ceph-client-openstack-openstack-cell1-c2qgd\" (UID: \"6938b21e-c1bf-418d-a88e-f39b7a771257\") " pod="openstack/ceph-client-openstack-openstack-cell1-c2qgd" Nov 22 12:45:10 crc kubenswrapper[4772]: I1122 12:45:10.482941 4772 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcvwl\" (UniqueName: \"kubernetes.io/projected/6938b21e-c1bf-418d-a88e-f39b7a771257-kube-api-access-jcvwl\") pod \"ceph-client-openstack-openstack-cell1-c2qgd\" (UID: \"6938b21e-c1bf-418d-a88e-f39b7a771257\") " pod="openstack/ceph-client-openstack-openstack-cell1-c2qgd" Nov 22 12:45:10 crc kubenswrapper[4772]: I1122 12:45:10.482965 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6938b21e-c1bf-418d-a88e-f39b7a771257-ceph\") pod \"ceph-client-openstack-openstack-cell1-c2qgd\" (UID: \"6938b21e-c1bf-418d-a88e-f39b7a771257\") " pod="openstack/ceph-client-openstack-openstack-cell1-c2qgd" Nov 22 12:45:10 crc kubenswrapper[4772]: I1122 12:45:10.489991 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6938b21e-c1bf-418d-a88e-f39b7a771257-ssh-key\") pod \"ceph-client-openstack-openstack-cell1-c2qgd\" (UID: \"6938b21e-c1bf-418d-a88e-f39b7a771257\") " pod="openstack/ceph-client-openstack-openstack-cell1-c2qgd" Nov 22 12:45:10 crc kubenswrapper[4772]: I1122 12:45:10.490130 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6938b21e-c1bf-418d-a88e-f39b7a771257-inventory\") pod \"ceph-client-openstack-openstack-cell1-c2qgd\" (UID: \"6938b21e-c1bf-418d-a88e-f39b7a771257\") " pod="openstack/ceph-client-openstack-openstack-cell1-c2qgd" Nov 22 12:45:10 crc kubenswrapper[4772]: I1122 12:45:10.493105 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6938b21e-c1bf-418d-a88e-f39b7a771257-ceph\") pod \"ceph-client-openstack-openstack-cell1-c2qgd\" (UID: \"6938b21e-c1bf-418d-a88e-f39b7a771257\") " pod="openstack/ceph-client-openstack-openstack-cell1-c2qgd" Nov 22 12:45:10 crc kubenswrapper[4772]: I1122 12:45:10.508474 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcvwl\" (UniqueName: \"kubernetes.io/projected/6938b21e-c1bf-418d-a88e-f39b7a771257-kube-api-access-jcvwl\") pod \"ceph-client-openstack-openstack-cell1-c2qgd\" (UID: \"6938b21e-c1bf-418d-a88e-f39b7a771257\") " pod="openstack/ceph-client-openstack-openstack-cell1-c2qgd" Nov 22 12:45:10 crc kubenswrapper[4772]: I1122 12:45:10.607192 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-openstack-openstack-cell1-c2qgd" Nov 22 12:45:11 crc kubenswrapper[4772]: I1122 12:45:11.193905 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-openstack-openstack-cell1-c2qgd"] Nov 22 12:45:12 crc kubenswrapper[4772]: I1122 12:45:12.216461 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-openstack-openstack-cell1-c2qgd" event={"ID":"6938b21e-c1bf-418d-a88e-f39b7a771257","Type":"ContainerStarted","Data":"f2a4d1db832aaeb13bd770517791034707d90993afdbbb907c2218361f7f8b19"} Nov 22 12:45:12 crc kubenswrapper[4772]: I1122 12:45:12.216844 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-openstack-openstack-cell1-c2qgd" event={"ID":"6938b21e-c1bf-418d-a88e-f39b7a771257","Type":"ContainerStarted","Data":"6eb87869555b1b8edfeab96adce49a25cc30b253e1393260f05ac7af664bf610"} Nov 22 12:45:13 crc kubenswrapper[4772]: E1122 12:45:13.518692 4772 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda46bb21b_0c5a_418b_8b05_244477414c43.slice/crio-conmon-99efa35b3d173aaa59d64b4599bc9dd2574a02e59a7da98f80381f429badfe38.scope\": RecentStats: unable to find data in memory cache]" Nov 22 12:45:17 crc kubenswrapper[4772]: I1122 12:45:17.269985 4772 generic.go:334] "Generic (PLEG): container finished" podID="6938b21e-c1bf-418d-a88e-f39b7a771257" containerID="f2a4d1db832aaeb13bd770517791034707d90993afdbbb907c2218361f7f8b19" exitCode=0 Nov 22 12:45:17 crc kubenswrapper[4772]: I1122 12:45:17.270138 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-openstack-openstack-cell1-c2qgd" event={"ID":"6938b21e-c1bf-418d-a88e-f39b7a771257","Type":"ContainerDied","Data":"f2a4d1db832aaeb13bd770517791034707d90993afdbbb907c2218361f7f8b19"} Nov 22 12:45:17 crc kubenswrapper[4772]: I1122 12:45:17.935679 4772 scope.go:117] "RemoveContainer" containerID="9fd02037dd8db2e11cc384d47dba4c6171992aeec48e9c9752cc14cec2dce0c1" Nov 22 12:45:18 crc kubenswrapper[4772]: I1122 12:45:18.840011 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-openstack-openstack-cell1-c2qgd" Nov 22 12:45:18 crc kubenswrapper[4772]: I1122 12:45:18.881262 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jcvwl\" (UniqueName: \"kubernetes.io/projected/6938b21e-c1bf-418d-a88e-f39b7a771257-kube-api-access-jcvwl\") pod \"6938b21e-c1bf-418d-a88e-f39b7a771257\" (UID: \"6938b21e-c1bf-418d-a88e-f39b7a771257\") " Nov 22 12:45:18 crc kubenswrapper[4772]: I1122 12:45:18.881412 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6938b21e-c1bf-418d-a88e-f39b7a771257-ceph\") pod \"6938b21e-c1bf-418d-a88e-f39b7a771257\" (UID: \"6938b21e-c1bf-418d-a88e-f39b7a771257\") " Nov 22 12:45:18 crc kubenswrapper[4772]: I1122 12:45:18.881602 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6938b21e-c1bf-418d-a88e-f39b7a771257-inventory\") pod \"6938b21e-c1bf-418d-a88e-f39b7a771257\" (UID: \"6938b21e-c1bf-418d-a88e-f39b7a771257\") " Nov 22 12:45:18 crc kubenswrapper[4772]: I1122 12:45:18.881687 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6938b21e-c1bf-418d-a88e-f39b7a771257-ssh-key\") pod \"6938b21e-c1bf-418d-a88e-f39b7a771257\" (UID: \"6938b21e-c1bf-418d-a88e-f39b7a771257\") " Nov 22 12:45:18 crc kubenswrapper[4772]: I1122 12:45:18.886835 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6938b21e-c1bf-418d-a88e-f39b7a771257-ceph" (OuterVolumeSpecName: "ceph") pod "6938b21e-c1bf-418d-a88e-f39b7a771257" (UID: "6938b21e-c1bf-418d-a88e-f39b7a771257"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:45:18 crc kubenswrapper[4772]: I1122 12:45:18.887238 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6938b21e-c1bf-418d-a88e-f39b7a771257-kube-api-access-jcvwl" (OuterVolumeSpecName: "kube-api-access-jcvwl") pod "6938b21e-c1bf-418d-a88e-f39b7a771257" (UID: "6938b21e-c1bf-418d-a88e-f39b7a771257"). InnerVolumeSpecName "kube-api-access-jcvwl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:45:18 crc kubenswrapper[4772]: I1122 12:45:18.910370 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6938b21e-c1bf-418d-a88e-f39b7a771257-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6938b21e-c1bf-418d-a88e-f39b7a771257" (UID: "6938b21e-c1bf-418d-a88e-f39b7a771257"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:45:18 crc kubenswrapper[4772]: I1122 12:45:18.913214 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6938b21e-c1bf-418d-a88e-f39b7a771257-inventory" (OuterVolumeSpecName: "inventory") pod "6938b21e-c1bf-418d-a88e-f39b7a771257" (UID: "6938b21e-c1bf-418d-a88e-f39b7a771257"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:45:18 crc kubenswrapper[4772]: I1122 12:45:18.983725 4772 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6938b21e-c1bf-418d-a88e-f39b7a771257-inventory\") on node \"crc\" DevicePath \"\"" Nov 22 12:45:18 crc kubenswrapper[4772]: I1122 12:45:18.983752 4772 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6938b21e-c1bf-418d-a88e-f39b7a771257-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 22 12:45:18 crc kubenswrapper[4772]: I1122 12:45:18.983762 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jcvwl\" (UniqueName: \"kubernetes.io/projected/6938b21e-c1bf-418d-a88e-f39b7a771257-kube-api-access-jcvwl\") on node \"crc\" DevicePath \"\"" Nov 22 12:45:18 crc kubenswrapper[4772]: I1122 12:45:18.983772 4772 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6938b21e-c1bf-418d-a88e-f39b7a771257-ceph\") on node \"crc\" DevicePath \"\"" Nov 22 12:45:19 crc kubenswrapper[4772]: I1122 12:45:19.291122 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-openstack-openstack-cell1-c2qgd" event={"ID":"6938b21e-c1bf-418d-a88e-f39b7a771257","Type":"ContainerDied","Data":"6eb87869555b1b8edfeab96adce49a25cc30b253e1393260f05ac7af664bf610"} Nov 22 12:45:19 crc kubenswrapper[4772]: I1122 12:45:19.291160 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6eb87869555b1b8edfeab96adce49a25cc30b253e1393260f05ac7af664bf610" Nov 22 12:45:19 crc kubenswrapper[4772]: I1122 12:45:19.291454 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-openstack-openstack-cell1-c2qgd" Nov 22 12:45:19 crc kubenswrapper[4772]: I1122 12:45:19.368531 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-openstack-openstack-cell1-9xmlp"] Nov 22 12:45:19 crc kubenswrapper[4772]: E1122 12:45:19.369483 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6938b21e-c1bf-418d-a88e-f39b7a771257" containerName="ceph-client-openstack-openstack-cell1" Nov 22 12:45:19 crc kubenswrapper[4772]: I1122 12:45:19.369508 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="6938b21e-c1bf-418d-a88e-f39b7a771257" containerName="ceph-client-openstack-openstack-cell1" Nov 22 12:45:19 crc kubenswrapper[4772]: I1122 12:45:19.369864 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="6938b21e-c1bf-418d-a88e-f39b7a771257" containerName="ceph-client-openstack-openstack-cell1" Nov 22 12:45:19 crc kubenswrapper[4772]: I1122 12:45:19.370943 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-9xmlp" Nov 22 12:45:19 crc kubenswrapper[4772]: I1122 12:45:19.373241 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 22 12:45:19 crc kubenswrapper[4772]: I1122 12:45:19.374099 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 22 12:45:19 crc kubenswrapper[4772]: I1122 12:45:19.374748 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Nov 22 12:45:19 crc kubenswrapper[4772]: I1122 12:45:19.374781 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 22 12:45:19 crc kubenswrapper[4772]: I1122 12:45:19.376317 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-6s2nz" Nov 22 12:45:19 crc kubenswrapper[4772]: I1122 12:45:19.378743 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-openstack-openstack-cell1-9xmlp"] Nov 22 12:45:19 crc kubenswrapper[4772]: I1122 12:45:19.492641 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/59226403-65f4-4187-96bc-5a0fe5da070c-inventory\") pod \"ovn-openstack-openstack-cell1-9xmlp\" (UID: \"59226403-65f4-4187-96bc-5a0fe5da070c\") " pod="openstack/ovn-openstack-openstack-cell1-9xmlp" Nov 22 12:45:19 crc kubenswrapper[4772]: I1122 12:45:19.492759 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/59226403-65f4-4187-96bc-5a0fe5da070c-ceph\") pod \"ovn-openstack-openstack-cell1-9xmlp\" (UID: \"59226403-65f4-4187-96bc-5a0fe5da070c\") " pod="openstack/ovn-openstack-openstack-cell1-9xmlp" Nov 22 12:45:19 crc kubenswrapper[4772]: I1122 12:45:19.492849 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqk5f\" (UniqueName: \"kubernetes.io/projected/59226403-65f4-4187-96bc-5a0fe5da070c-kube-api-access-gqk5f\") pod \"ovn-openstack-openstack-cell1-9xmlp\" (UID: \"59226403-65f4-4187-96bc-5a0fe5da070c\") " pod="openstack/ovn-openstack-openstack-cell1-9xmlp" Nov 22 12:45:19 crc kubenswrapper[4772]: I1122 12:45:19.492877 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59226403-65f4-4187-96bc-5a0fe5da070c-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-cell1-9xmlp\" (UID: \"59226403-65f4-4187-96bc-5a0fe5da070c\") " pod="openstack/ovn-openstack-openstack-cell1-9xmlp" Nov 22 12:45:19 crc kubenswrapper[4772]: I1122 12:45:19.492916 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/59226403-65f4-4187-96bc-5a0fe5da070c-ovncontroller-config-0\") pod \"ovn-openstack-openstack-cell1-9xmlp\" (UID: \"59226403-65f4-4187-96bc-5a0fe5da070c\") " pod="openstack/ovn-openstack-openstack-cell1-9xmlp" Nov 22 12:45:19 crc kubenswrapper[4772]: I1122 12:45:19.492933 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/59226403-65f4-4187-96bc-5a0fe5da070c-ssh-key\") pod \"ovn-openstack-openstack-cell1-9xmlp\" (UID: 
\"59226403-65f4-4187-96bc-5a0fe5da070c\") " pod="openstack/ovn-openstack-openstack-cell1-9xmlp" Nov 22 12:45:19 crc kubenswrapper[4772]: I1122 12:45:19.595669 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/59226403-65f4-4187-96bc-5a0fe5da070c-ceph\") pod \"ovn-openstack-openstack-cell1-9xmlp\" (UID: \"59226403-65f4-4187-96bc-5a0fe5da070c\") " pod="openstack/ovn-openstack-openstack-cell1-9xmlp" Nov 22 12:45:19 crc kubenswrapper[4772]: I1122 12:45:19.597138 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqk5f\" (UniqueName: \"kubernetes.io/projected/59226403-65f4-4187-96bc-5a0fe5da070c-kube-api-access-gqk5f\") pod \"ovn-openstack-openstack-cell1-9xmlp\" (UID: \"59226403-65f4-4187-96bc-5a0fe5da070c\") " pod="openstack/ovn-openstack-openstack-cell1-9xmlp" Nov 22 12:45:19 crc kubenswrapper[4772]: I1122 12:45:19.597220 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59226403-65f4-4187-96bc-5a0fe5da070c-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-cell1-9xmlp\" (UID: \"59226403-65f4-4187-96bc-5a0fe5da070c\") " pod="openstack/ovn-openstack-openstack-cell1-9xmlp" Nov 22 12:45:19 crc kubenswrapper[4772]: I1122 12:45:19.597305 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/59226403-65f4-4187-96bc-5a0fe5da070c-ovncontroller-config-0\") pod \"ovn-openstack-openstack-cell1-9xmlp\" (UID: \"59226403-65f4-4187-96bc-5a0fe5da070c\") " pod="openstack/ovn-openstack-openstack-cell1-9xmlp" Nov 22 12:45:19 crc kubenswrapper[4772]: I1122 12:45:19.597364 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/59226403-65f4-4187-96bc-5a0fe5da070c-ssh-key\") pod \"ovn-openstack-openstack-cell1-9xmlp\" (UID: \"59226403-65f4-4187-96bc-5a0fe5da070c\") " pod="openstack/ovn-openstack-openstack-cell1-9xmlp" Nov 22 12:45:19 crc kubenswrapper[4772]: I1122 12:45:19.597615 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/59226403-65f4-4187-96bc-5a0fe5da070c-inventory\") pod \"ovn-openstack-openstack-cell1-9xmlp\" (UID: \"59226403-65f4-4187-96bc-5a0fe5da070c\") " pod="openstack/ovn-openstack-openstack-cell1-9xmlp" Nov 22 12:45:19 crc kubenswrapper[4772]: I1122 12:45:19.598962 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/59226403-65f4-4187-96bc-5a0fe5da070c-ovncontroller-config-0\") pod \"ovn-openstack-openstack-cell1-9xmlp\" (UID: \"59226403-65f4-4187-96bc-5a0fe5da070c\") " pod="openstack/ovn-openstack-openstack-cell1-9xmlp" Nov 22 12:45:19 crc kubenswrapper[4772]: I1122 12:45:19.599843 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/59226403-65f4-4187-96bc-5a0fe5da070c-ceph\") pod \"ovn-openstack-openstack-cell1-9xmlp\" (UID: \"59226403-65f4-4187-96bc-5a0fe5da070c\") " pod="openstack/ovn-openstack-openstack-cell1-9xmlp" Nov 22 12:45:19 crc kubenswrapper[4772]: I1122 12:45:19.612475 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/59226403-65f4-4187-96bc-5a0fe5da070c-ssh-key\") pod 
\"ovn-openstack-openstack-cell1-9xmlp\" (UID: \"59226403-65f4-4187-96bc-5a0fe5da070c\") " pod="openstack/ovn-openstack-openstack-cell1-9xmlp" Nov 22 12:45:19 crc kubenswrapper[4772]: I1122 12:45:19.614405 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/59226403-65f4-4187-96bc-5a0fe5da070c-inventory\") pod \"ovn-openstack-openstack-cell1-9xmlp\" (UID: \"59226403-65f4-4187-96bc-5a0fe5da070c\") " pod="openstack/ovn-openstack-openstack-cell1-9xmlp" Nov 22 12:45:19 crc kubenswrapper[4772]: I1122 12:45:19.614699 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59226403-65f4-4187-96bc-5a0fe5da070c-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-cell1-9xmlp\" (UID: \"59226403-65f4-4187-96bc-5a0fe5da070c\") " pod="openstack/ovn-openstack-openstack-cell1-9xmlp" Nov 22 12:45:19 crc kubenswrapper[4772]: I1122 12:45:19.617930 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqk5f\" (UniqueName: \"kubernetes.io/projected/59226403-65f4-4187-96bc-5a0fe5da070c-kube-api-access-gqk5f\") pod \"ovn-openstack-openstack-cell1-9xmlp\" (UID: \"59226403-65f4-4187-96bc-5a0fe5da070c\") " pod="openstack/ovn-openstack-openstack-cell1-9xmlp" Nov 22 12:45:19 crc kubenswrapper[4772]: I1122 12:45:19.688695 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-9xmlp" Nov 22 12:45:20 crc kubenswrapper[4772]: I1122 12:45:20.253942 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-openstack-openstack-cell1-9xmlp"] Nov 22 12:45:20 crc kubenswrapper[4772]: I1122 12:45:20.302404 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-9xmlp" event={"ID":"59226403-65f4-4187-96bc-5a0fe5da070c","Type":"ContainerStarted","Data":"708aed5e7ac78e98247ba5405fcbb7bc76ce895b1313d84765fca65df38ad402"} Nov 22 12:45:21 crc kubenswrapper[4772]: I1122 12:45:21.312113 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-9xmlp" event={"ID":"59226403-65f4-4187-96bc-5a0fe5da070c","Type":"ContainerStarted","Data":"f0739c6cba2aca09de498322eeb8565802dd291fa4980997b62bf6d6c983b3c4"} Nov 22 12:45:21 crc kubenswrapper[4772]: I1122 12:45:21.334349 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-openstack-openstack-cell1-9xmlp" podStartSLOduration=1.902431722 podStartE2EDuration="2.334329553s" podCreationTimestamp="2025-11-22 12:45:19 +0000 UTC" firstStartedPulling="2025-11-22 12:45:20.261940283 +0000 UTC m=+7640.501384777" lastFinishedPulling="2025-11-22 12:45:20.693838114 +0000 UTC m=+7640.933282608" observedRunningTime="2025-11-22 12:45:21.331973615 +0000 UTC m=+7641.571418129" watchObservedRunningTime="2025-11-22 12:45:21.334329553 +0000 UTC m=+7641.573774047" Nov 22 12:45:23 crc kubenswrapper[4772]: E1122 12:45:23.807703 4772 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda46bb21b_0c5a_418b_8b05_244477414c43.slice/crio-conmon-99efa35b3d173aaa59d64b4599bc9dd2574a02e59a7da98f80381f429badfe38.scope\": RecentStats: unable to find data in memory cache]" Nov 22 12:45:31 crc kubenswrapper[4772]: I1122 12:45:31.441885 4772 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/certified-operators-mqdsb"] Nov 22 12:45:31 crc kubenswrapper[4772]: I1122 12:45:31.447200 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mqdsb"] Nov 22 12:45:31 crc kubenswrapper[4772]: I1122 12:45:31.447341 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mqdsb" Nov 22 12:45:31 crc kubenswrapper[4772]: I1122 12:45:31.533000 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 12:45:31 crc kubenswrapper[4772]: I1122 12:45:31.533082 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 12:45:31 crc kubenswrapper[4772]: I1122 12:45:31.642221 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76eb4da7-dedc-4bc9-be89-db15b956a9b9-catalog-content\") pod \"certified-operators-mqdsb\" (UID: \"76eb4da7-dedc-4bc9-be89-db15b956a9b9\") " pod="openshift-marketplace/certified-operators-mqdsb" Nov 22 12:45:31 crc kubenswrapper[4772]: I1122 12:45:31.642456 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqw8x\" (UniqueName: \"kubernetes.io/projected/76eb4da7-dedc-4bc9-be89-db15b956a9b9-kube-api-access-fqw8x\") pod \"certified-operators-mqdsb\" (UID: \"76eb4da7-dedc-4bc9-be89-db15b956a9b9\") " pod="openshift-marketplace/certified-operators-mqdsb" Nov 22 12:45:31 crc kubenswrapper[4772]: I1122 12:45:31.642606 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76eb4da7-dedc-4bc9-be89-db15b956a9b9-utilities\") pod \"certified-operators-mqdsb\" (UID: \"76eb4da7-dedc-4bc9-be89-db15b956a9b9\") " pod="openshift-marketplace/certified-operators-mqdsb" Nov 22 12:45:31 crc kubenswrapper[4772]: I1122 12:45:31.744600 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76eb4da7-dedc-4bc9-be89-db15b956a9b9-catalog-content\") pod \"certified-operators-mqdsb\" (UID: \"76eb4da7-dedc-4bc9-be89-db15b956a9b9\") " pod="openshift-marketplace/certified-operators-mqdsb" Nov 22 12:45:31 crc kubenswrapper[4772]: I1122 12:45:31.744680 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqw8x\" (UniqueName: \"kubernetes.io/projected/76eb4da7-dedc-4bc9-be89-db15b956a9b9-kube-api-access-fqw8x\") pod \"certified-operators-mqdsb\" (UID: \"76eb4da7-dedc-4bc9-be89-db15b956a9b9\") " pod="openshift-marketplace/certified-operators-mqdsb" Nov 22 12:45:31 crc kubenswrapper[4772]: I1122 12:45:31.744716 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76eb4da7-dedc-4bc9-be89-db15b956a9b9-utilities\") pod \"certified-operators-mqdsb\" (UID: \"76eb4da7-dedc-4bc9-be89-db15b956a9b9\") " 
pod="openshift-marketplace/certified-operators-mqdsb" Nov 22 12:45:31 crc kubenswrapper[4772]: I1122 12:45:31.745283 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76eb4da7-dedc-4bc9-be89-db15b956a9b9-utilities\") pod \"certified-operators-mqdsb\" (UID: \"76eb4da7-dedc-4bc9-be89-db15b956a9b9\") " pod="openshift-marketplace/certified-operators-mqdsb" Nov 22 12:45:31 crc kubenswrapper[4772]: I1122 12:45:31.745279 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76eb4da7-dedc-4bc9-be89-db15b956a9b9-catalog-content\") pod \"certified-operators-mqdsb\" (UID: \"76eb4da7-dedc-4bc9-be89-db15b956a9b9\") " pod="openshift-marketplace/certified-operators-mqdsb" Nov 22 12:45:31 crc kubenswrapper[4772]: I1122 12:45:31.767212 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqw8x\" (UniqueName: \"kubernetes.io/projected/76eb4da7-dedc-4bc9-be89-db15b956a9b9-kube-api-access-fqw8x\") pod \"certified-operators-mqdsb\" (UID: \"76eb4da7-dedc-4bc9-be89-db15b956a9b9\") " pod="openshift-marketplace/certified-operators-mqdsb" Nov 22 12:45:31 crc kubenswrapper[4772]: I1122 12:45:31.790360 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mqdsb" Nov 22 12:45:32 crc kubenswrapper[4772]: I1122 12:45:32.371660 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mqdsb"] Nov 22 12:45:32 crc kubenswrapper[4772]: I1122 12:45:32.451428 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mqdsb" event={"ID":"76eb4da7-dedc-4bc9-be89-db15b956a9b9","Type":"ContainerStarted","Data":"d82470d3c288de8c5faee4dcb2b1a202b8a9c47f7ab8c31788685598031eb60e"} Nov 22 12:45:33 crc kubenswrapper[4772]: I1122 12:45:33.464751 4772 generic.go:334] "Generic (PLEG): container finished" podID="76eb4da7-dedc-4bc9-be89-db15b956a9b9" containerID="58929bde85fbdcc0128e45bcf4d4ddbf6655dc23947589288e8758c8697ff271" exitCode=0 Nov 22 12:45:33 crc kubenswrapper[4772]: I1122 12:45:33.465308 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mqdsb" event={"ID":"76eb4da7-dedc-4bc9-be89-db15b956a9b9","Type":"ContainerDied","Data":"58929bde85fbdcc0128e45bcf4d4ddbf6655dc23947589288e8758c8697ff271"} Nov 22 12:45:34 crc kubenswrapper[4772]: E1122 12:45:34.094614 4772 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda46bb21b_0c5a_418b_8b05_244477414c43.slice/crio-conmon-99efa35b3d173aaa59d64b4599bc9dd2574a02e59a7da98f80381f429badfe38.scope\": RecentStats: unable to find data in memory cache]" Nov 22 12:45:34 crc kubenswrapper[4772]: I1122 12:45:34.477308 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mqdsb" event={"ID":"76eb4da7-dedc-4bc9-be89-db15b956a9b9","Type":"ContainerStarted","Data":"c04ecc9b8911762d59348828fe3e9e993ea02ed227c01216286f87072887ebf7"} Nov 22 12:45:36 crc kubenswrapper[4772]: I1122 12:45:36.498461 4772 generic.go:334] "Generic (PLEG): container finished" podID="76eb4da7-dedc-4bc9-be89-db15b956a9b9" containerID="c04ecc9b8911762d59348828fe3e9e993ea02ed227c01216286f87072887ebf7" exitCode=0 Nov 22 12:45:36 crc kubenswrapper[4772]: I1122 12:45:36.498619 
4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mqdsb" event={"ID":"76eb4da7-dedc-4bc9-be89-db15b956a9b9","Type":"ContainerDied","Data":"c04ecc9b8911762d59348828fe3e9e993ea02ed227c01216286f87072887ebf7"} Nov 22 12:45:37 crc kubenswrapper[4772]: I1122 12:45:37.513667 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mqdsb" event={"ID":"76eb4da7-dedc-4bc9-be89-db15b956a9b9","Type":"ContainerStarted","Data":"50782a4d7d1e59ee4aef77b6c4fc746b6b8d98bc7412bdef4a504659b4c4d324"} Nov 22 12:45:37 crc kubenswrapper[4772]: I1122 12:45:37.537228 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mqdsb" podStartSLOduration=3.033172798 podStartE2EDuration="6.537210321s" podCreationTimestamp="2025-11-22 12:45:31 +0000 UTC" firstStartedPulling="2025-11-22 12:45:33.468631137 +0000 UTC m=+7653.708075641" lastFinishedPulling="2025-11-22 12:45:36.97266867 +0000 UTC m=+7657.212113164" observedRunningTime="2025-11-22 12:45:37.533745745 +0000 UTC m=+7657.773190239" watchObservedRunningTime="2025-11-22 12:45:37.537210321 +0000 UTC m=+7657.776654815" Nov 22 12:45:41 crc kubenswrapper[4772]: I1122 12:45:41.790522 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mqdsb" Nov 22 12:45:41 crc kubenswrapper[4772]: I1122 12:45:41.792248 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mqdsb" Nov 22 12:45:41 crc kubenswrapper[4772]: I1122 12:45:41.850064 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mqdsb" Nov 22 12:45:42 crc kubenswrapper[4772]: I1122 12:45:42.635809 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mqdsb" Nov 22 12:45:42 crc kubenswrapper[4772]: I1122 12:45:42.683727 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mqdsb"] Nov 22 12:45:44 crc kubenswrapper[4772]: E1122 12:45:44.391618 4772 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda46bb21b_0c5a_418b_8b05_244477414c43.slice/crio-conmon-99efa35b3d173aaa59d64b4599bc9dd2574a02e59a7da98f80381f429badfe38.scope\": RecentStats: unable to find data in memory cache]" Nov 22 12:45:44 crc kubenswrapper[4772]: I1122 12:45:44.581821 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mqdsb" podUID="76eb4da7-dedc-4bc9-be89-db15b956a9b9" containerName="registry-server" containerID="cri-o://50782a4d7d1e59ee4aef77b6c4fc746b6b8d98bc7412bdef4a504659b4c4d324" gracePeriod=2 Nov 22 12:45:45 crc kubenswrapper[4772]: I1122 12:45:45.099163 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mqdsb" Nov 22 12:45:45 crc kubenswrapper[4772]: I1122 12:45:45.156365 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76eb4da7-dedc-4bc9-be89-db15b956a9b9-catalog-content\") pod \"76eb4da7-dedc-4bc9-be89-db15b956a9b9\" (UID: \"76eb4da7-dedc-4bc9-be89-db15b956a9b9\") " Nov 22 12:45:45 crc kubenswrapper[4772]: I1122 12:45:45.156552 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqw8x\" (UniqueName: \"kubernetes.io/projected/76eb4da7-dedc-4bc9-be89-db15b956a9b9-kube-api-access-fqw8x\") pod \"76eb4da7-dedc-4bc9-be89-db15b956a9b9\" (UID: \"76eb4da7-dedc-4bc9-be89-db15b956a9b9\") " Nov 22 12:45:45 crc kubenswrapper[4772]: I1122 12:45:45.156721 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76eb4da7-dedc-4bc9-be89-db15b956a9b9-utilities\") pod \"76eb4da7-dedc-4bc9-be89-db15b956a9b9\" (UID: \"76eb4da7-dedc-4bc9-be89-db15b956a9b9\") " Nov 22 12:45:45 crc kubenswrapper[4772]: I1122 12:45:45.157317 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76eb4da7-dedc-4bc9-be89-db15b956a9b9-utilities" (OuterVolumeSpecName: "utilities") pod "76eb4da7-dedc-4bc9-be89-db15b956a9b9" (UID: "76eb4da7-dedc-4bc9-be89-db15b956a9b9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 12:45:45 crc kubenswrapper[4772]: I1122 12:45:45.165343 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76eb4da7-dedc-4bc9-be89-db15b956a9b9-kube-api-access-fqw8x" (OuterVolumeSpecName: "kube-api-access-fqw8x") pod "76eb4da7-dedc-4bc9-be89-db15b956a9b9" (UID: "76eb4da7-dedc-4bc9-be89-db15b956a9b9"). InnerVolumeSpecName "kube-api-access-fqw8x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:45:45 crc kubenswrapper[4772]: I1122 12:45:45.203690 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76eb4da7-dedc-4bc9-be89-db15b956a9b9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "76eb4da7-dedc-4bc9-be89-db15b956a9b9" (UID: "76eb4da7-dedc-4bc9-be89-db15b956a9b9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 12:45:45 crc kubenswrapper[4772]: I1122 12:45:45.258624 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76eb4da7-dedc-4bc9-be89-db15b956a9b9-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 12:45:45 crc kubenswrapper[4772]: I1122 12:45:45.258661 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqw8x\" (UniqueName: \"kubernetes.io/projected/76eb4da7-dedc-4bc9-be89-db15b956a9b9-kube-api-access-fqw8x\") on node \"crc\" DevicePath \"\"" Nov 22 12:45:45 crc kubenswrapper[4772]: I1122 12:45:45.258671 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76eb4da7-dedc-4bc9-be89-db15b956a9b9-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 12:45:45 crc kubenswrapper[4772]: I1122 12:45:45.596369 4772 generic.go:334] "Generic (PLEG): container finished" podID="76eb4da7-dedc-4bc9-be89-db15b956a9b9" containerID="50782a4d7d1e59ee4aef77b6c4fc746b6b8d98bc7412bdef4a504659b4c4d324" exitCode=0 Nov 22 12:45:45 crc kubenswrapper[4772]: I1122 12:45:45.596463 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mqdsb" Nov 22 12:45:45 crc kubenswrapper[4772]: I1122 12:45:45.596443 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mqdsb" event={"ID":"76eb4da7-dedc-4bc9-be89-db15b956a9b9","Type":"ContainerDied","Data":"50782a4d7d1e59ee4aef77b6c4fc746b6b8d98bc7412bdef4a504659b4c4d324"} Nov 22 12:45:45 crc kubenswrapper[4772]: I1122 12:45:45.596636 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mqdsb" event={"ID":"76eb4da7-dedc-4bc9-be89-db15b956a9b9","Type":"ContainerDied","Data":"d82470d3c288de8c5faee4dcb2b1a202b8a9c47f7ab8c31788685598031eb60e"} Nov 22 12:45:45 crc kubenswrapper[4772]: I1122 12:45:45.596687 4772 scope.go:117] "RemoveContainer" containerID="50782a4d7d1e59ee4aef77b6c4fc746b6b8d98bc7412bdef4a504659b4c4d324" Nov 22 12:45:45 crc kubenswrapper[4772]: I1122 12:45:45.629672 4772 scope.go:117] "RemoveContainer" containerID="c04ecc9b8911762d59348828fe3e9e993ea02ed227c01216286f87072887ebf7" Nov 22 12:45:45 crc kubenswrapper[4772]: I1122 12:45:45.638629 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mqdsb"] Nov 22 12:45:45 crc kubenswrapper[4772]: I1122 12:45:45.658663 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mqdsb"] Nov 22 12:45:45 crc kubenswrapper[4772]: I1122 12:45:45.659007 4772 scope.go:117] "RemoveContainer" containerID="58929bde85fbdcc0128e45bcf4d4ddbf6655dc23947589288e8758c8697ff271" Nov 22 12:45:45 crc kubenswrapper[4772]: I1122 12:45:45.707811 4772 scope.go:117] "RemoveContainer" containerID="50782a4d7d1e59ee4aef77b6c4fc746b6b8d98bc7412bdef4a504659b4c4d324" Nov 22 12:45:45 crc kubenswrapper[4772]: E1122 12:45:45.708137 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50782a4d7d1e59ee4aef77b6c4fc746b6b8d98bc7412bdef4a504659b4c4d324\": container with ID starting with 50782a4d7d1e59ee4aef77b6c4fc746b6b8d98bc7412bdef4a504659b4c4d324 not found: ID does not exist" containerID="50782a4d7d1e59ee4aef77b6c4fc746b6b8d98bc7412bdef4a504659b4c4d324" Nov 22 12:45:45 crc kubenswrapper[4772]: I1122 12:45:45.708181 
4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50782a4d7d1e59ee4aef77b6c4fc746b6b8d98bc7412bdef4a504659b4c4d324"} err="failed to get container status \"50782a4d7d1e59ee4aef77b6c4fc746b6b8d98bc7412bdef4a504659b4c4d324\": rpc error: code = NotFound desc = could not find container \"50782a4d7d1e59ee4aef77b6c4fc746b6b8d98bc7412bdef4a504659b4c4d324\": container with ID starting with 50782a4d7d1e59ee4aef77b6c4fc746b6b8d98bc7412bdef4a504659b4c4d324 not found: ID does not exist" Nov 22 12:45:45 crc kubenswrapper[4772]: I1122 12:45:45.708209 4772 scope.go:117] "RemoveContainer" containerID="c04ecc9b8911762d59348828fe3e9e993ea02ed227c01216286f87072887ebf7" Nov 22 12:45:45 crc kubenswrapper[4772]: E1122 12:45:45.708414 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c04ecc9b8911762d59348828fe3e9e993ea02ed227c01216286f87072887ebf7\": container with ID starting with c04ecc9b8911762d59348828fe3e9e993ea02ed227c01216286f87072887ebf7 not found: ID does not exist" containerID="c04ecc9b8911762d59348828fe3e9e993ea02ed227c01216286f87072887ebf7" Nov 22 12:45:45 crc kubenswrapper[4772]: I1122 12:45:45.708435 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c04ecc9b8911762d59348828fe3e9e993ea02ed227c01216286f87072887ebf7"} err="failed to get container status \"c04ecc9b8911762d59348828fe3e9e993ea02ed227c01216286f87072887ebf7\": rpc error: code = NotFound desc = could not find container \"c04ecc9b8911762d59348828fe3e9e993ea02ed227c01216286f87072887ebf7\": container with ID starting with c04ecc9b8911762d59348828fe3e9e993ea02ed227c01216286f87072887ebf7 not found: ID does not exist" Nov 22 12:45:45 crc kubenswrapper[4772]: I1122 12:45:45.708446 4772 scope.go:117] "RemoveContainer" containerID="58929bde85fbdcc0128e45bcf4d4ddbf6655dc23947589288e8758c8697ff271" Nov 22 12:45:45 crc kubenswrapper[4772]: E1122 12:45:45.708703 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58929bde85fbdcc0128e45bcf4d4ddbf6655dc23947589288e8758c8697ff271\": container with ID starting with 58929bde85fbdcc0128e45bcf4d4ddbf6655dc23947589288e8758c8697ff271 not found: ID does not exist" containerID="58929bde85fbdcc0128e45bcf4d4ddbf6655dc23947589288e8758c8697ff271" Nov 22 12:45:45 crc kubenswrapper[4772]: I1122 12:45:45.708724 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58929bde85fbdcc0128e45bcf4d4ddbf6655dc23947589288e8758c8697ff271"} err="failed to get container status \"58929bde85fbdcc0128e45bcf4d4ddbf6655dc23947589288e8758c8697ff271\": rpc error: code = NotFound desc = could not find container \"58929bde85fbdcc0128e45bcf4d4ddbf6655dc23947589288e8758c8697ff271\": container with ID starting with 58929bde85fbdcc0128e45bcf4d4ddbf6655dc23947589288e8758c8697ff271 not found: ID does not exist" Nov 22 12:45:47 crc kubenswrapper[4772]: I1122 12:45:47.430416 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76eb4da7-dedc-4bc9-be89-db15b956a9b9" path="/var/lib/kubelet/pods/76eb4da7-dedc-4bc9-be89-db15b956a9b9/volumes" Nov 22 12:45:54 crc kubenswrapper[4772]: E1122 12:45:54.709952 4772 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda46bb21b_0c5a_418b_8b05_244477414c43.slice/crio-conmon-99efa35b3d173aaa59d64b4599bc9dd2574a02e59a7da98f80381f429badfe38.scope\": RecentStats: unable to find data in memory cache]" Nov 22 12:46:01 crc kubenswrapper[4772]: I1122 12:46:01.532771 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 12:46:01 crc kubenswrapper[4772]: I1122 12:46:01.533388 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 12:46:27 crc kubenswrapper[4772]: I1122 12:46:27.100278 4772 generic.go:334] "Generic (PLEG): container finished" podID="59226403-65f4-4187-96bc-5a0fe5da070c" containerID="f0739c6cba2aca09de498322eeb8565802dd291fa4980997b62bf6d6c983b3c4" exitCode=0 Nov 22 12:46:27 crc kubenswrapper[4772]: I1122 12:46:27.100424 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-9xmlp" event={"ID":"59226403-65f4-4187-96bc-5a0fe5da070c","Type":"ContainerDied","Data":"f0739c6cba2aca09de498322eeb8565802dd291fa4980997b62bf6d6c983b3c4"} Nov 22 12:46:28 crc kubenswrapper[4772]: I1122 12:46:28.658548 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-9xmlp" Nov 22 12:46:28 crc kubenswrapper[4772]: I1122 12:46:28.722892 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/59226403-65f4-4187-96bc-5a0fe5da070c-inventory\") pod \"59226403-65f4-4187-96bc-5a0fe5da070c\" (UID: \"59226403-65f4-4187-96bc-5a0fe5da070c\") " Nov 22 12:46:28 crc kubenswrapper[4772]: I1122 12:46:28.723028 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/59226403-65f4-4187-96bc-5a0fe5da070c-ovncontroller-config-0\") pod \"59226403-65f4-4187-96bc-5a0fe5da070c\" (UID: \"59226403-65f4-4187-96bc-5a0fe5da070c\") " Nov 22 12:46:28 crc kubenswrapper[4772]: I1122 12:46:28.723125 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59226403-65f4-4187-96bc-5a0fe5da070c-ovn-combined-ca-bundle\") pod \"59226403-65f4-4187-96bc-5a0fe5da070c\" (UID: \"59226403-65f4-4187-96bc-5a0fe5da070c\") " Nov 22 12:46:28 crc kubenswrapper[4772]: I1122 12:46:28.723216 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/59226403-65f4-4187-96bc-5a0fe5da070c-ssh-key\") pod \"59226403-65f4-4187-96bc-5a0fe5da070c\" (UID: \"59226403-65f4-4187-96bc-5a0fe5da070c\") " Nov 22 12:46:28 crc kubenswrapper[4772]: I1122 12:46:28.723392 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqk5f\" (UniqueName: \"kubernetes.io/projected/59226403-65f4-4187-96bc-5a0fe5da070c-kube-api-access-gqk5f\") pod \"59226403-65f4-4187-96bc-5a0fe5da070c\" (UID: \"59226403-65f4-4187-96bc-5a0fe5da070c\") " Nov 22 
12:46:28 crc kubenswrapper[4772]: I1122 12:46:28.723425 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/59226403-65f4-4187-96bc-5a0fe5da070c-ceph\") pod \"59226403-65f4-4187-96bc-5a0fe5da070c\" (UID: \"59226403-65f4-4187-96bc-5a0fe5da070c\") " Nov 22 12:46:28 crc kubenswrapper[4772]: I1122 12:46:28.741575 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59226403-65f4-4187-96bc-5a0fe5da070c-ceph" (OuterVolumeSpecName: "ceph") pod "59226403-65f4-4187-96bc-5a0fe5da070c" (UID: "59226403-65f4-4187-96bc-5a0fe5da070c"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:46:28 crc kubenswrapper[4772]: I1122 12:46:28.741629 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59226403-65f4-4187-96bc-5a0fe5da070c-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "59226403-65f4-4187-96bc-5a0fe5da070c" (UID: "59226403-65f4-4187-96bc-5a0fe5da070c"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:46:28 crc kubenswrapper[4772]: I1122 12:46:28.741664 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59226403-65f4-4187-96bc-5a0fe5da070c-kube-api-access-gqk5f" (OuterVolumeSpecName: "kube-api-access-gqk5f") pod "59226403-65f4-4187-96bc-5a0fe5da070c" (UID: "59226403-65f4-4187-96bc-5a0fe5da070c"). InnerVolumeSpecName "kube-api-access-gqk5f". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:46:28 crc kubenswrapper[4772]: I1122 12:46:28.764196 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59226403-65f4-4187-96bc-5a0fe5da070c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "59226403-65f4-4187-96bc-5a0fe5da070c" (UID: "59226403-65f4-4187-96bc-5a0fe5da070c"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:46:28 crc kubenswrapper[4772]: I1122 12:46:28.766363 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59226403-65f4-4187-96bc-5a0fe5da070c-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "59226403-65f4-4187-96bc-5a0fe5da070c" (UID: "59226403-65f4-4187-96bc-5a0fe5da070c"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 12:46:28 crc kubenswrapper[4772]: I1122 12:46:28.778338 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59226403-65f4-4187-96bc-5a0fe5da070c-inventory" (OuterVolumeSpecName: "inventory") pod "59226403-65f4-4187-96bc-5a0fe5da070c" (UID: "59226403-65f4-4187-96bc-5a0fe5da070c"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:46:28 crc kubenswrapper[4772]: I1122 12:46:28.826489 4772 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/59226403-65f4-4187-96bc-5a0fe5da070c-inventory\") on node \"crc\" DevicePath \"\"" Nov 22 12:46:28 crc kubenswrapper[4772]: I1122 12:46:28.826535 4772 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/59226403-65f4-4187-96bc-5a0fe5da070c-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Nov 22 12:46:28 crc kubenswrapper[4772]: I1122 12:46:28.826547 4772 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59226403-65f4-4187-96bc-5a0fe5da070c-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 12:46:28 crc kubenswrapper[4772]: I1122 12:46:28.826557 4772 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/59226403-65f4-4187-96bc-5a0fe5da070c-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 22 12:46:28 crc kubenswrapper[4772]: I1122 12:46:28.826567 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqk5f\" (UniqueName: \"kubernetes.io/projected/59226403-65f4-4187-96bc-5a0fe5da070c-kube-api-access-gqk5f\") on node \"crc\" DevicePath \"\"" Nov 22 12:46:28 crc kubenswrapper[4772]: I1122 12:46:28.826577 4772 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/59226403-65f4-4187-96bc-5a0fe5da070c-ceph\") on node \"crc\" DevicePath \"\"" Nov 22 12:46:29 crc kubenswrapper[4772]: I1122 12:46:29.125210 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-9xmlp" event={"ID":"59226403-65f4-4187-96bc-5a0fe5da070c","Type":"ContainerDied","Data":"708aed5e7ac78e98247ba5405fcbb7bc76ce895b1313d84765fca65df38ad402"} Nov 22 12:46:29 crc kubenswrapper[4772]: I1122 12:46:29.125245 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="708aed5e7ac78e98247ba5405fcbb7bc76ce895b1313d84765fca65df38ad402" Nov 22 12:46:29 crc kubenswrapper[4772]: I1122 12:46:29.125298 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-9xmlp" Nov 22 12:46:29 crc kubenswrapper[4772]: I1122 12:46:29.286830 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-openstack-openstack-cell1-n6cc8"] Nov 22 12:46:29 crc kubenswrapper[4772]: E1122 12:46:29.287274 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76eb4da7-dedc-4bc9-be89-db15b956a9b9" containerName="extract-utilities" Nov 22 12:46:29 crc kubenswrapper[4772]: I1122 12:46:29.287293 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="76eb4da7-dedc-4bc9-be89-db15b956a9b9" containerName="extract-utilities" Nov 22 12:46:29 crc kubenswrapper[4772]: E1122 12:46:29.287322 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59226403-65f4-4187-96bc-5a0fe5da070c" containerName="ovn-openstack-openstack-cell1" Nov 22 12:46:29 crc kubenswrapper[4772]: I1122 12:46:29.287329 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="59226403-65f4-4187-96bc-5a0fe5da070c" containerName="ovn-openstack-openstack-cell1" Nov 22 12:46:29 crc kubenswrapper[4772]: E1122 12:46:29.287336 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76eb4da7-dedc-4bc9-be89-db15b956a9b9" containerName="extract-content" Nov 22 12:46:29 crc kubenswrapper[4772]: I1122 12:46:29.287343 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="76eb4da7-dedc-4bc9-be89-db15b956a9b9" containerName="extract-content" Nov 22 12:46:29 crc kubenswrapper[4772]: E1122 12:46:29.287369 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76eb4da7-dedc-4bc9-be89-db15b956a9b9" containerName="registry-server" Nov 22 12:46:29 crc kubenswrapper[4772]: I1122 12:46:29.287374 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="76eb4da7-dedc-4bc9-be89-db15b956a9b9" containerName="registry-server" Nov 22 12:46:29 crc kubenswrapper[4772]: I1122 12:46:29.287583 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="59226403-65f4-4187-96bc-5a0fe5da070c" containerName="ovn-openstack-openstack-cell1" Nov 22 12:46:29 crc kubenswrapper[4772]: I1122 12:46:29.287601 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="76eb4da7-dedc-4bc9-be89-db15b956a9b9" containerName="registry-server" Nov 22 12:46:29 crc kubenswrapper[4772]: I1122 12:46:29.288420 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-n6cc8" Nov 22 12:46:29 crc kubenswrapper[4772]: I1122 12:46:29.291729 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 22 12:46:29 crc kubenswrapper[4772]: I1122 12:46:29.291960 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Nov 22 12:46:29 crc kubenswrapper[4772]: I1122 12:46:29.292824 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 22 12:46:29 crc kubenswrapper[4772]: I1122 12:46:29.293023 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Nov 22 12:46:29 crc kubenswrapper[4772]: I1122 12:46:29.294120 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 22 12:46:29 crc kubenswrapper[4772]: I1122 12:46:29.305949 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-6s2nz" Nov 22 12:46:29 crc kubenswrapper[4772]: I1122 12:46:29.314922 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-openstack-openstack-cell1-n6cc8"] Nov 22 12:46:29 crc kubenswrapper[4772]: I1122 12:46:29.339741 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7100bac9-b56a-4d24-9a30-528db4074857-ssh-key\") pod \"neutron-metadata-openstack-openstack-cell1-n6cc8\" (UID: \"7100bac9-b56a-4d24-9a30-528db4074857\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-n6cc8" Nov 22 12:46:29 crc kubenswrapper[4772]: I1122 12:46:29.340153 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7100bac9-b56a-4d24-9a30-528db4074857-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-cell1-n6cc8\" (UID: \"7100bac9-b56a-4d24-9a30-528db4074857\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-n6cc8" Nov 22 12:46:29 crc kubenswrapper[4772]: I1122 12:46:29.340268 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7100bac9-b56a-4d24-9a30-528db4074857-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-n6cc8\" (UID: \"7100bac9-b56a-4d24-9a30-528db4074857\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-n6cc8" Nov 22 12:46:29 crc kubenswrapper[4772]: I1122 12:46:29.340368 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7100bac9-b56a-4d24-9a30-528db4074857-inventory\") pod \"neutron-metadata-openstack-openstack-cell1-n6cc8\" (UID: \"7100bac9-b56a-4d24-9a30-528db4074857\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-n6cc8" Nov 22 12:46:29 crc kubenswrapper[4772]: I1122 12:46:29.340446 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ph9qg\" (UniqueName: \"kubernetes.io/projected/7100bac9-b56a-4d24-9a30-528db4074857-kube-api-access-ph9qg\") pod \"neutron-metadata-openstack-openstack-cell1-n6cc8\" (UID: 
\"7100bac9-b56a-4d24-9a30-528db4074857\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-n6cc8" Nov 22 12:46:29 crc kubenswrapper[4772]: I1122 12:46:29.340566 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7100bac9-b56a-4d24-9a30-528db4074857-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-n6cc8\" (UID: \"7100bac9-b56a-4d24-9a30-528db4074857\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-n6cc8" Nov 22 12:46:29 crc kubenswrapper[4772]: I1122 12:46:29.340595 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7100bac9-b56a-4d24-9a30-528db4074857-ceph\") pod \"neutron-metadata-openstack-openstack-cell1-n6cc8\" (UID: \"7100bac9-b56a-4d24-9a30-528db4074857\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-n6cc8" Nov 22 12:46:29 crc kubenswrapper[4772]: I1122 12:46:29.442840 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7100bac9-b56a-4d24-9a30-528db4074857-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-cell1-n6cc8\" (UID: \"7100bac9-b56a-4d24-9a30-528db4074857\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-n6cc8" Nov 22 12:46:29 crc kubenswrapper[4772]: I1122 12:46:29.442928 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7100bac9-b56a-4d24-9a30-528db4074857-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-n6cc8\" (UID: \"7100bac9-b56a-4d24-9a30-528db4074857\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-n6cc8" Nov 22 12:46:29 crc kubenswrapper[4772]: I1122 12:46:29.442974 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7100bac9-b56a-4d24-9a30-528db4074857-inventory\") pod \"neutron-metadata-openstack-openstack-cell1-n6cc8\" (UID: \"7100bac9-b56a-4d24-9a30-528db4074857\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-n6cc8" Nov 22 12:46:29 crc kubenswrapper[4772]: I1122 12:46:29.443042 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ph9qg\" (UniqueName: \"kubernetes.io/projected/7100bac9-b56a-4d24-9a30-528db4074857-kube-api-access-ph9qg\") pod \"neutron-metadata-openstack-openstack-cell1-n6cc8\" (UID: \"7100bac9-b56a-4d24-9a30-528db4074857\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-n6cc8" Nov 22 12:46:29 crc kubenswrapper[4772]: I1122 12:46:29.443155 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7100bac9-b56a-4d24-9a30-528db4074857-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-n6cc8\" (UID: \"7100bac9-b56a-4d24-9a30-528db4074857\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-n6cc8" Nov 22 12:46:29 crc kubenswrapper[4772]: I1122 12:46:29.443188 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7100bac9-b56a-4d24-9a30-528db4074857-ceph\") pod 
\"neutron-metadata-openstack-openstack-cell1-n6cc8\" (UID: \"7100bac9-b56a-4d24-9a30-528db4074857\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-n6cc8" Nov 22 12:46:29 crc kubenswrapper[4772]: I1122 12:46:29.443235 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7100bac9-b56a-4d24-9a30-528db4074857-ssh-key\") pod \"neutron-metadata-openstack-openstack-cell1-n6cc8\" (UID: \"7100bac9-b56a-4d24-9a30-528db4074857\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-n6cc8" Nov 22 12:46:29 crc kubenswrapper[4772]: I1122 12:46:29.449568 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7100bac9-b56a-4d24-9a30-528db4074857-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-cell1-n6cc8\" (UID: \"7100bac9-b56a-4d24-9a30-528db4074857\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-n6cc8" Nov 22 12:46:29 crc kubenswrapper[4772]: I1122 12:46:29.450359 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7100bac9-b56a-4d24-9a30-528db4074857-inventory\") pod \"neutron-metadata-openstack-openstack-cell1-n6cc8\" (UID: \"7100bac9-b56a-4d24-9a30-528db4074857\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-n6cc8" Nov 22 12:46:29 crc kubenswrapper[4772]: I1122 12:46:29.452509 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7100bac9-b56a-4d24-9a30-528db4074857-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-n6cc8\" (UID: \"7100bac9-b56a-4d24-9a30-528db4074857\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-n6cc8" Nov 22 12:46:29 crc kubenswrapper[4772]: I1122 12:46:29.453652 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7100bac9-b56a-4d24-9a30-528db4074857-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-n6cc8\" (UID: \"7100bac9-b56a-4d24-9a30-528db4074857\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-n6cc8" Nov 22 12:46:29 crc kubenswrapper[4772]: I1122 12:46:29.456639 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7100bac9-b56a-4d24-9a30-528db4074857-ceph\") pod \"neutron-metadata-openstack-openstack-cell1-n6cc8\" (UID: \"7100bac9-b56a-4d24-9a30-528db4074857\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-n6cc8" Nov 22 12:46:29 crc kubenswrapper[4772]: I1122 12:46:29.464660 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7100bac9-b56a-4d24-9a30-528db4074857-ssh-key\") pod \"neutron-metadata-openstack-openstack-cell1-n6cc8\" (UID: \"7100bac9-b56a-4d24-9a30-528db4074857\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-n6cc8" Nov 22 12:46:29 crc kubenswrapper[4772]: I1122 12:46:29.467956 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ph9qg\" (UniqueName: \"kubernetes.io/projected/7100bac9-b56a-4d24-9a30-528db4074857-kube-api-access-ph9qg\") pod \"neutron-metadata-openstack-openstack-cell1-n6cc8\" (UID: \"7100bac9-b56a-4d24-9a30-528db4074857\") " 
pod="openstack/neutron-metadata-openstack-openstack-cell1-n6cc8" Nov 22 12:46:29 crc kubenswrapper[4772]: I1122 12:46:29.644175 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-n6cc8" Nov 22 12:46:30 crc kubenswrapper[4772]: I1122 12:46:30.217626 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-openstack-openstack-cell1-n6cc8"] Nov 22 12:46:31 crc kubenswrapper[4772]: I1122 12:46:31.146980 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-n6cc8" event={"ID":"7100bac9-b56a-4d24-9a30-528db4074857","Type":"ContainerStarted","Data":"52dc8e31b90a53505cc09997b3cda4043e55b7d3e3ed3aee88138b3a9a002e3f"} Nov 22 12:46:31 crc kubenswrapper[4772]: I1122 12:46:31.533038 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 12:46:31 crc kubenswrapper[4772]: I1122 12:46:31.533168 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 12:46:31 crc kubenswrapper[4772]: I1122 12:46:31.533253 4772 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" Nov 22 12:46:31 crc kubenswrapper[4772]: I1122 12:46:31.534735 4772 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4ac6c69657b0d84aba6f5f6730aff3b5b410649909e22ecab0cb77dec0de67e5"} pod="openshift-machine-config-operator/machine-config-daemon-wwshd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 12:46:31 crc kubenswrapper[4772]: I1122 12:46:31.534828 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" containerID="cri-o://4ac6c69657b0d84aba6f5f6730aff3b5b410649909e22ecab0cb77dec0de67e5" gracePeriod=600 Nov 22 12:46:32 crc kubenswrapper[4772]: I1122 12:46:32.159275 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-n6cc8" event={"ID":"7100bac9-b56a-4d24-9a30-528db4074857","Type":"ContainerStarted","Data":"165aa8314c3f451f23befb2b300aea855f6594a3444ead0d494d1ef19d899a8b"} Nov 22 12:46:32 crc kubenswrapper[4772]: I1122 12:46:32.163518 4772 generic.go:334] "Generic (PLEG): container finished" podID="2386c238-461f-4956-940f-ac3c26eb052e" containerID="4ac6c69657b0d84aba6f5f6730aff3b5b410649909e22ecab0cb77dec0de67e5" exitCode=0 Nov 22 12:46:32 crc kubenswrapper[4772]: I1122 12:46:32.163546 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" event={"ID":"2386c238-461f-4956-940f-ac3c26eb052e","Type":"ContainerDied","Data":"4ac6c69657b0d84aba6f5f6730aff3b5b410649909e22ecab0cb77dec0de67e5"} Nov 22 12:46:32 crc kubenswrapper[4772]: I1122 
12:46:32.163566 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" event={"ID":"2386c238-461f-4956-940f-ac3c26eb052e","Type":"ContainerStarted","Data":"0951475fb87f959b566a9e347ebcff9c0640430af1a33fe41eff88eca50bed05"} Nov 22 12:46:32 crc kubenswrapper[4772]: I1122 12:46:32.163582 4772 scope.go:117] "RemoveContainer" containerID="8a1530730f8f450880c3cb53240d2358c82f70f75fcd089df4a21b648bec0621" Nov 22 12:46:32 crc kubenswrapper[4772]: I1122 12:46:32.192249 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-openstack-openstack-cell1-n6cc8" podStartSLOduration=1.89136668 podStartE2EDuration="3.192225718s" podCreationTimestamp="2025-11-22 12:46:29 +0000 UTC" firstStartedPulling="2025-11-22 12:46:30.226946209 +0000 UTC m=+7710.466390723" lastFinishedPulling="2025-11-22 12:46:31.527805267 +0000 UTC m=+7711.767249761" observedRunningTime="2025-11-22 12:46:32.187368728 +0000 UTC m=+7712.426813232" watchObservedRunningTime="2025-11-22 12:46:32.192225718 +0000 UTC m=+7712.431670232" Nov 22 12:47:26 crc kubenswrapper[4772]: I1122 12:47:26.795291 4772 generic.go:334] "Generic (PLEG): container finished" podID="7100bac9-b56a-4d24-9a30-528db4074857" containerID="165aa8314c3f451f23befb2b300aea855f6594a3444ead0d494d1ef19d899a8b" exitCode=0 Nov 22 12:47:26 crc kubenswrapper[4772]: I1122 12:47:26.795349 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-n6cc8" event={"ID":"7100bac9-b56a-4d24-9a30-528db4074857","Type":"ContainerDied","Data":"165aa8314c3f451f23befb2b300aea855f6594a3444ead0d494d1ef19d899a8b"} Nov 22 12:47:28 crc kubenswrapper[4772]: I1122 12:47:28.451129 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-n6cc8" Nov 22 12:47:28 crc kubenswrapper[4772]: I1122 12:47:28.556226 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7100bac9-b56a-4d24-9a30-528db4074857-ssh-key\") pod \"7100bac9-b56a-4d24-9a30-528db4074857\" (UID: \"7100bac9-b56a-4d24-9a30-528db4074857\") " Nov 22 12:47:28 crc kubenswrapper[4772]: I1122 12:47:28.556363 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7100bac9-b56a-4d24-9a30-528db4074857-nova-metadata-neutron-config-0\") pod \"7100bac9-b56a-4d24-9a30-528db4074857\" (UID: \"7100bac9-b56a-4d24-9a30-528db4074857\") " Nov 22 12:47:28 crc kubenswrapper[4772]: I1122 12:47:28.556445 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7100bac9-b56a-4d24-9a30-528db4074857-ceph\") pod \"7100bac9-b56a-4d24-9a30-528db4074857\" (UID: \"7100bac9-b56a-4d24-9a30-528db4074857\") " Nov 22 12:47:28 crc kubenswrapper[4772]: I1122 12:47:28.556605 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7100bac9-b56a-4d24-9a30-528db4074857-neutron-metadata-combined-ca-bundle\") pod \"7100bac9-b56a-4d24-9a30-528db4074857\" (UID: \"7100bac9-b56a-4d24-9a30-528db4074857\") " Nov 22 12:47:28 crc kubenswrapper[4772]: I1122 12:47:28.556679 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ph9qg\" (UniqueName: \"kubernetes.io/projected/7100bac9-b56a-4d24-9a30-528db4074857-kube-api-access-ph9qg\") pod \"7100bac9-b56a-4d24-9a30-528db4074857\" (UID: \"7100bac9-b56a-4d24-9a30-528db4074857\") " Nov 22 12:47:28 crc kubenswrapper[4772]: I1122 12:47:28.556772 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7100bac9-b56a-4d24-9a30-528db4074857-inventory\") pod \"7100bac9-b56a-4d24-9a30-528db4074857\" (UID: \"7100bac9-b56a-4d24-9a30-528db4074857\") " Nov 22 12:47:28 crc kubenswrapper[4772]: I1122 12:47:28.556972 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7100bac9-b56a-4d24-9a30-528db4074857-neutron-ovn-metadata-agent-neutron-config-0\") pod \"7100bac9-b56a-4d24-9a30-528db4074857\" (UID: \"7100bac9-b56a-4d24-9a30-528db4074857\") " Nov 22 12:47:28 crc kubenswrapper[4772]: I1122 12:47:28.564887 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7100bac9-b56a-4d24-9a30-528db4074857-ceph" (OuterVolumeSpecName: "ceph") pod "7100bac9-b56a-4d24-9a30-528db4074857" (UID: "7100bac9-b56a-4d24-9a30-528db4074857"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:47:28 crc kubenswrapper[4772]: I1122 12:47:28.566308 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7100bac9-b56a-4d24-9a30-528db4074857-kube-api-access-ph9qg" (OuterVolumeSpecName: "kube-api-access-ph9qg") pod "7100bac9-b56a-4d24-9a30-528db4074857" (UID: "7100bac9-b56a-4d24-9a30-528db4074857"). InnerVolumeSpecName "kube-api-access-ph9qg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:47:28 crc kubenswrapper[4772]: I1122 12:47:28.566815 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7100bac9-b56a-4d24-9a30-528db4074857-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "7100bac9-b56a-4d24-9a30-528db4074857" (UID: "7100bac9-b56a-4d24-9a30-528db4074857"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:47:28 crc kubenswrapper[4772]: I1122 12:47:28.591451 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7100bac9-b56a-4d24-9a30-528db4074857-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "7100bac9-b56a-4d24-9a30-528db4074857" (UID: "7100bac9-b56a-4d24-9a30-528db4074857"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:47:28 crc kubenswrapper[4772]: I1122 12:47:28.593789 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7100bac9-b56a-4d24-9a30-528db4074857-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "7100bac9-b56a-4d24-9a30-528db4074857" (UID: "7100bac9-b56a-4d24-9a30-528db4074857"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:47:28 crc kubenswrapper[4772]: I1122 12:47:28.594402 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7100bac9-b56a-4d24-9a30-528db4074857-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "7100bac9-b56a-4d24-9a30-528db4074857" (UID: "7100bac9-b56a-4d24-9a30-528db4074857"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:47:28 crc kubenswrapper[4772]: I1122 12:47:28.604871 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7100bac9-b56a-4d24-9a30-528db4074857-inventory" (OuterVolumeSpecName: "inventory") pod "7100bac9-b56a-4d24-9a30-528db4074857" (UID: "7100bac9-b56a-4d24-9a30-528db4074857"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:47:28 crc kubenswrapper[4772]: I1122 12:47:28.662603 4772 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7100bac9-b56a-4d24-9a30-528db4074857-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 22 12:47:28 crc kubenswrapper[4772]: I1122 12:47:28.664245 4772 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7100bac9-b56a-4d24-9a30-528db4074857-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 22 12:47:28 crc kubenswrapper[4772]: I1122 12:47:28.664279 4772 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7100bac9-b56a-4d24-9a30-528db4074857-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 22 12:47:28 crc kubenswrapper[4772]: I1122 12:47:28.664299 4772 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7100bac9-b56a-4d24-9a30-528db4074857-ceph\") on node \"crc\" DevicePath \"\"" Nov 22 12:47:28 crc kubenswrapper[4772]: I1122 12:47:28.664314 4772 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7100bac9-b56a-4d24-9a30-528db4074857-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 12:47:28 crc kubenswrapper[4772]: I1122 12:47:28.664334 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ph9qg\" (UniqueName: \"kubernetes.io/projected/7100bac9-b56a-4d24-9a30-528db4074857-kube-api-access-ph9qg\") on node \"crc\" DevicePath \"\"" Nov 22 12:47:28 crc kubenswrapper[4772]: I1122 12:47:28.664351 4772 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7100bac9-b56a-4d24-9a30-528db4074857-inventory\") on node \"crc\" DevicePath \"\"" Nov 22 12:47:28 crc kubenswrapper[4772]: I1122 12:47:28.826293 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-n6cc8" event={"ID":"7100bac9-b56a-4d24-9a30-528db4074857","Type":"ContainerDied","Data":"52dc8e31b90a53505cc09997b3cda4043e55b7d3e3ed3aee88138b3a9a002e3f"} Nov 22 12:47:28 crc kubenswrapper[4772]: I1122 12:47:28.826366 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52dc8e31b90a53505cc09997b3cda4043e55b7d3e3ed3aee88138b3a9a002e3f" Nov 22 12:47:28 crc kubenswrapper[4772]: I1122 12:47:28.826408 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-n6cc8" Nov 22 12:47:28 crc kubenswrapper[4772]: I1122 12:47:28.938940 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-openstack-openstack-cell1-vf8xp"] Nov 22 12:47:28 crc kubenswrapper[4772]: E1122 12:47:28.939589 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7100bac9-b56a-4d24-9a30-528db4074857" containerName="neutron-metadata-openstack-openstack-cell1" Nov 22 12:47:28 crc kubenswrapper[4772]: I1122 12:47:28.939612 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="7100bac9-b56a-4d24-9a30-528db4074857" containerName="neutron-metadata-openstack-openstack-cell1" Nov 22 12:47:28 crc kubenswrapper[4772]: I1122 12:47:28.939872 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="7100bac9-b56a-4d24-9a30-528db4074857" containerName="neutron-metadata-openstack-openstack-cell1" Nov 22 12:47:28 crc kubenswrapper[4772]: I1122 12:47:28.940717 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-vf8xp" Nov 22 12:47:28 crc kubenswrapper[4772]: I1122 12:47:28.944866 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 22 12:47:28 crc kubenswrapper[4772]: I1122 12:47:28.947923 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 22 12:47:28 crc kubenswrapper[4772]: I1122 12:47:28.948155 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-6s2nz" Nov 22 12:47:28 crc kubenswrapper[4772]: I1122 12:47:28.948337 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 22 12:47:28 crc kubenswrapper[4772]: I1122 12:47:28.948532 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Nov 22 12:47:28 crc kubenswrapper[4772]: I1122 12:47:28.951612 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-openstack-openstack-cell1-vf8xp"] Nov 22 12:47:28 crc kubenswrapper[4772]: I1122 12:47:28.975937 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/aebecc6e-2ba7-423a-b983-f4698c836a86-ssh-key\") pod \"libvirt-openstack-openstack-cell1-vf8xp\" (UID: \"aebecc6e-2ba7-423a-b983-f4698c836a86\") " pod="openstack/libvirt-openstack-openstack-cell1-vf8xp" Nov 22 12:47:28 crc kubenswrapper[4772]: I1122 12:47:28.976019 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aebecc6e-2ba7-423a-b983-f4698c836a86-inventory\") pod \"libvirt-openstack-openstack-cell1-vf8xp\" (UID: \"aebecc6e-2ba7-423a-b983-f4698c836a86\") " pod="openstack/libvirt-openstack-openstack-cell1-vf8xp" Nov 22 12:47:28 crc kubenswrapper[4772]: I1122 12:47:28.976074 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aebecc6e-2ba7-423a-b983-f4698c836a86-libvirt-combined-ca-bundle\") pod \"libvirt-openstack-openstack-cell1-vf8xp\" (UID: \"aebecc6e-2ba7-423a-b983-f4698c836a86\") " pod="openstack/libvirt-openstack-openstack-cell1-vf8xp" Nov 22 12:47:28 crc kubenswrapper[4772]: I1122 12:47:28.976133 4772 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/aebecc6e-2ba7-423a-b983-f4698c836a86-ceph\") pod \"libvirt-openstack-openstack-cell1-vf8xp\" (UID: \"aebecc6e-2ba7-423a-b983-f4698c836a86\") " pod="openstack/libvirt-openstack-openstack-cell1-vf8xp" Nov 22 12:47:28 crc kubenswrapper[4772]: I1122 12:47:28.976216 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvz78\" (UniqueName: \"kubernetes.io/projected/aebecc6e-2ba7-423a-b983-f4698c836a86-kube-api-access-wvz78\") pod \"libvirt-openstack-openstack-cell1-vf8xp\" (UID: \"aebecc6e-2ba7-423a-b983-f4698c836a86\") " pod="openstack/libvirt-openstack-openstack-cell1-vf8xp" Nov 22 12:47:28 crc kubenswrapper[4772]: I1122 12:47:28.976306 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/aebecc6e-2ba7-423a-b983-f4698c836a86-libvirt-secret-0\") pod \"libvirt-openstack-openstack-cell1-vf8xp\" (UID: \"aebecc6e-2ba7-423a-b983-f4698c836a86\") " pod="openstack/libvirt-openstack-openstack-cell1-vf8xp" Nov 22 12:47:29 crc kubenswrapper[4772]: I1122 12:47:29.079441 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/aebecc6e-2ba7-423a-b983-f4698c836a86-libvirt-secret-0\") pod \"libvirt-openstack-openstack-cell1-vf8xp\" (UID: \"aebecc6e-2ba7-423a-b983-f4698c836a86\") " pod="openstack/libvirt-openstack-openstack-cell1-vf8xp" Nov 22 12:47:29 crc kubenswrapper[4772]: I1122 12:47:29.079633 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/aebecc6e-2ba7-423a-b983-f4698c836a86-ssh-key\") pod \"libvirt-openstack-openstack-cell1-vf8xp\" (UID: \"aebecc6e-2ba7-423a-b983-f4698c836a86\") " pod="openstack/libvirt-openstack-openstack-cell1-vf8xp" Nov 22 12:47:29 crc kubenswrapper[4772]: I1122 12:47:29.079681 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aebecc6e-2ba7-423a-b983-f4698c836a86-inventory\") pod \"libvirt-openstack-openstack-cell1-vf8xp\" (UID: \"aebecc6e-2ba7-423a-b983-f4698c836a86\") " pod="openstack/libvirt-openstack-openstack-cell1-vf8xp" Nov 22 12:47:29 crc kubenswrapper[4772]: I1122 12:47:29.079706 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aebecc6e-2ba7-423a-b983-f4698c836a86-libvirt-combined-ca-bundle\") pod \"libvirt-openstack-openstack-cell1-vf8xp\" (UID: \"aebecc6e-2ba7-423a-b983-f4698c836a86\") " pod="openstack/libvirt-openstack-openstack-cell1-vf8xp" Nov 22 12:47:29 crc kubenswrapper[4772]: I1122 12:47:29.079734 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/aebecc6e-2ba7-423a-b983-f4698c836a86-ceph\") pod \"libvirt-openstack-openstack-cell1-vf8xp\" (UID: \"aebecc6e-2ba7-423a-b983-f4698c836a86\") " pod="openstack/libvirt-openstack-openstack-cell1-vf8xp" Nov 22 12:47:29 crc kubenswrapper[4772]: I1122 12:47:29.079784 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvz78\" (UniqueName: \"kubernetes.io/projected/aebecc6e-2ba7-423a-b983-f4698c836a86-kube-api-access-wvz78\") pod \"libvirt-openstack-openstack-cell1-vf8xp\" (UID: 
\"aebecc6e-2ba7-423a-b983-f4698c836a86\") " pod="openstack/libvirt-openstack-openstack-cell1-vf8xp" Nov 22 12:47:29 crc kubenswrapper[4772]: I1122 12:47:29.085371 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/aebecc6e-2ba7-423a-b983-f4698c836a86-ssh-key\") pod \"libvirt-openstack-openstack-cell1-vf8xp\" (UID: \"aebecc6e-2ba7-423a-b983-f4698c836a86\") " pod="openstack/libvirt-openstack-openstack-cell1-vf8xp" Nov 22 12:47:29 crc kubenswrapper[4772]: I1122 12:47:29.086185 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/aebecc6e-2ba7-423a-b983-f4698c836a86-ceph\") pod \"libvirt-openstack-openstack-cell1-vf8xp\" (UID: \"aebecc6e-2ba7-423a-b983-f4698c836a86\") " pod="openstack/libvirt-openstack-openstack-cell1-vf8xp" Nov 22 12:47:29 crc kubenswrapper[4772]: I1122 12:47:29.090194 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aebecc6e-2ba7-423a-b983-f4698c836a86-libvirt-combined-ca-bundle\") pod \"libvirt-openstack-openstack-cell1-vf8xp\" (UID: \"aebecc6e-2ba7-423a-b983-f4698c836a86\") " pod="openstack/libvirt-openstack-openstack-cell1-vf8xp" Nov 22 12:47:29 crc kubenswrapper[4772]: I1122 12:47:29.090842 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/aebecc6e-2ba7-423a-b983-f4698c836a86-libvirt-secret-0\") pod \"libvirt-openstack-openstack-cell1-vf8xp\" (UID: \"aebecc6e-2ba7-423a-b983-f4698c836a86\") " pod="openstack/libvirt-openstack-openstack-cell1-vf8xp" Nov 22 12:47:29 crc kubenswrapper[4772]: I1122 12:47:29.099354 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aebecc6e-2ba7-423a-b983-f4698c836a86-inventory\") pod \"libvirt-openstack-openstack-cell1-vf8xp\" (UID: \"aebecc6e-2ba7-423a-b983-f4698c836a86\") " pod="openstack/libvirt-openstack-openstack-cell1-vf8xp" Nov 22 12:47:29 crc kubenswrapper[4772]: I1122 12:47:29.112545 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvz78\" (UniqueName: \"kubernetes.io/projected/aebecc6e-2ba7-423a-b983-f4698c836a86-kube-api-access-wvz78\") pod \"libvirt-openstack-openstack-cell1-vf8xp\" (UID: \"aebecc6e-2ba7-423a-b983-f4698c836a86\") " pod="openstack/libvirt-openstack-openstack-cell1-vf8xp" Nov 22 12:47:29 crc kubenswrapper[4772]: I1122 12:47:29.288670 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-vf8xp" Nov 22 12:47:29 crc kubenswrapper[4772]: I1122 12:47:29.898153 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-openstack-openstack-cell1-vf8xp"] Nov 22 12:47:30 crc kubenswrapper[4772]: I1122 12:47:30.858906 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-vf8xp" event={"ID":"aebecc6e-2ba7-423a-b983-f4698c836a86","Type":"ContainerStarted","Data":"23315c828ce4ef181f771259fda146d05bef5f613aa1ec56c2cb069599ba576d"} Nov 22 12:47:30 crc kubenswrapper[4772]: I1122 12:47:30.860006 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-vf8xp" event={"ID":"aebecc6e-2ba7-423a-b983-f4698c836a86","Type":"ContainerStarted","Data":"a63cfae1fcc71288f5199fc76ed9b973cd72af107f622d301f06e1ecd6f5a8ca"} Nov 22 12:47:30 crc kubenswrapper[4772]: I1122 12:47:30.882424 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-openstack-openstack-cell1-vf8xp" podStartSLOduration=2.329230107 podStartE2EDuration="2.882399436s" podCreationTimestamp="2025-11-22 12:47:28 +0000 UTC" firstStartedPulling="2025-11-22 12:47:29.890431285 +0000 UTC m=+7770.129875779" lastFinishedPulling="2025-11-22 12:47:30.443600574 +0000 UTC m=+7770.683045108" observedRunningTime="2025-11-22 12:47:30.878099889 +0000 UTC m=+7771.117544393" watchObservedRunningTime="2025-11-22 12:47:30.882399436 +0000 UTC m=+7771.121843930" Nov 22 12:48:31 crc kubenswrapper[4772]: I1122 12:48:31.533812 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 12:48:31 crc kubenswrapper[4772]: I1122 12:48:31.534484 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 12:49:01 crc kubenswrapper[4772]: I1122 12:49:01.533346 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 12:49:01 crc kubenswrapper[4772]: I1122 12:49:01.534080 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 12:49:31 crc kubenswrapper[4772]: I1122 12:49:31.533517 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 12:49:31 crc kubenswrapper[4772]: I1122 12:49:31.534508 4772 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 12:49:31 crc kubenswrapper[4772]: I1122 12:49:31.534594 4772 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" Nov 22 12:49:31 crc kubenswrapper[4772]: I1122 12:49:31.536027 4772 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0951475fb87f959b566a9e347ebcff9c0640430af1a33fe41eff88eca50bed05"} pod="openshift-machine-config-operator/machine-config-daemon-wwshd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 12:49:31 crc kubenswrapper[4772]: I1122 12:49:31.536190 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" containerID="cri-o://0951475fb87f959b566a9e347ebcff9c0640430af1a33fe41eff88eca50bed05" gracePeriod=600 Nov 22 12:49:31 crc kubenswrapper[4772]: E1122 12:49:31.684236 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:49:32 crc kubenswrapper[4772]: I1122 12:49:32.346203 4772 generic.go:334] "Generic (PLEG): container finished" podID="2386c238-461f-4956-940f-ac3c26eb052e" containerID="0951475fb87f959b566a9e347ebcff9c0640430af1a33fe41eff88eca50bed05" exitCode=0 Nov 22 12:49:32 crc kubenswrapper[4772]: I1122 12:49:32.346257 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" event={"ID":"2386c238-461f-4956-940f-ac3c26eb052e","Type":"ContainerDied","Data":"0951475fb87f959b566a9e347ebcff9c0640430af1a33fe41eff88eca50bed05"} Nov 22 12:49:32 crc kubenswrapper[4772]: I1122 12:49:32.346296 4772 scope.go:117] "RemoveContainer" containerID="4ac6c69657b0d84aba6f5f6730aff3b5b410649909e22ecab0cb77dec0de67e5" Nov 22 12:49:32 crc kubenswrapper[4772]: I1122 12:49:32.347163 4772 scope.go:117] "RemoveContainer" containerID="0951475fb87f959b566a9e347ebcff9c0640430af1a33fe41eff88eca50bed05" Nov 22 12:49:32 crc kubenswrapper[4772]: E1122 12:49:32.350548 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:49:43 crc kubenswrapper[4772]: I1122 12:49:43.196261 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jxk96"] Nov 22 12:49:43 crc kubenswrapper[4772]: I1122 12:49:43.199518 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jxk96" Nov 22 12:49:43 crc kubenswrapper[4772]: I1122 12:49:43.224998 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jxk96"] Nov 22 12:49:43 crc kubenswrapper[4772]: I1122 12:49:43.372617 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3438f9dc-9de3-45a6-b54a-1b718b6d46ca-catalog-content\") pod \"community-operators-jxk96\" (UID: \"3438f9dc-9de3-45a6-b54a-1b718b6d46ca\") " pod="openshift-marketplace/community-operators-jxk96" Nov 22 12:49:43 crc kubenswrapper[4772]: I1122 12:49:43.372750 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3438f9dc-9de3-45a6-b54a-1b718b6d46ca-utilities\") pod \"community-operators-jxk96\" (UID: \"3438f9dc-9de3-45a6-b54a-1b718b6d46ca\") " pod="openshift-marketplace/community-operators-jxk96" Nov 22 12:49:43 crc kubenswrapper[4772]: I1122 12:49:43.372790 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfwk5\" (UniqueName: \"kubernetes.io/projected/3438f9dc-9de3-45a6-b54a-1b718b6d46ca-kube-api-access-xfwk5\") pod \"community-operators-jxk96\" (UID: \"3438f9dc-9de3-45a6-b54a-1b718b6d46ca\") " pod="openshift-marketplace/community-operators-jxk96" Nov 22 12:49:43 crc kubenswrapper[4772]: I1122 12:49:43.474436 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3438f9dc-9de3-45a6-b54a-1b718b6d46ca-utilities\") pod \"community-operators-jxk96\" (UID: \"3438f9dc-9de3-45a6-b54a-1b718b6d46ca\") " pod="openshift-marketplace/community-operators-jxk96" Nov 22 12:49:43 crc kubenswrapper[4772]: I1122 12:49:43.474472 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfwk5\" (UniqueName: \"kubernetes.io/projected/3438f9dc-9de3-45a6-b54a-1b718b6d46ca-kube-api-access-xfwk5\") pod \"community-operators-jxk96\" (UID: \"3438f9dc-9de3-45a6-b54a-1b718b6d46ca\") " pod="openshift-marketplace/community-operators-jxk96" Nov 22 12:49:43 crc kubenswrapper[4772]: I1122 12:49:43.474611 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3438f9dc-9de3-45a6-b54a-1b718b6d46ca-catalog-content\") pod \"community-operators-jxk96\" (UID: \"3438f9dc-9de3-45a6-b54a-1b718b6d46ca\") " pod="openshift-marketplace/community-operators-jxk96" Nov 22 12:49:43 crc kubenswrapper[4772]: I1122 12:49:43.474968 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3438f9dc-9de3-45a6-b54a-1b718b6d46ca-utilities\") pod \"community-operators-jxk96\" (UID: \"3438f9dc-9de3-45a6-b54a-1b718b6d46ca\") " pod="openshift-marketplace/community-operators-jxk96" Nov 22 12:49:43 crc kubenswrapper[4772]: I1122 12:49:43.475011 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3438f9dc-9de3-45a6-b54a-1b718b6d46ca-catalog-content\") pod \"community-operators-jxk96\" (UID: \"3438f9dc-9de3-45a6-b54a-1b718b6d46ca\") " pod="openshift-marketplace/community-operators-jxk96" Nov 22 12:49:43 crc kubenswrapper[4772]: I1122 12:49:43.493299 4772 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-xfwk5\" (UniqueName: \"kubernetes.io/projected/3438f9dc-9de3-45a6-b54a-1b718b6d46ca-kube-api-access-xfwk5\") pod \"community-operators-jxk96\" (UID: \"3438f9dc-9de3-45a6-b54a-1b718b6d46ca\") " pod="openshift-marketplace/community-operators-jxk96" Nov 22 12:49:43 crc kubenswrapper[4772]: I1122 12:49:43.538380 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jxk96" Nov 22 12:49:44 crc kubenswrapper[4772]: I1122 12:49:44.093466 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jxk96"] Nov 22 12:49:44 crc kubenswrapper[4772]: I1122 12:49:44.495342 4772 generic.go:334] "Generic (PLEG): container finished" podID="3438f9dc-9de3-45a6-b54a-1b718b6d46ca" containerID="52e1e8486dfdf8e1b9a86c2c5306f46352e31fc280f8a4083dda83916de02544" exitCode=0 Nov 22 12:49:44 crc kubenswrapper[4772]: I1122 12:49:44.495399 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jxk96" event={"ID":"3438f9dc-9de3-45a6-b54a-1b718b6d46ca","Type":"ContainerDied","Data":"52e1e8486dfdf8e1b9a86c2c5306f46352e31fc280f8a4083dda83916de02544"} Nov 22 12:49:44 crc kubenswrapper[4772]: I1122 12:49:44.496386 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jxk96" event={"ID":"3438f9dc-9de3-45a6-b54a-1b718b6d46ca","Type":"ContainerStarted","Data":"e631c6e336a4f80a4459f2560fbe33254e8c458e1b3f1600d656418a0f7baf7f"} Nov 22 12:49:45 crc kubenswrapper[4772]: I1122 12:49:45.415288 4772 scope.go:117] "RemoveContainer" containerID="0951475fb87f959b566a9e347ebcff9c0640430af1a33fe41eff88eca50bed05" Nov 22 12:49:45 crc kubenswrapper[4772]: E1122 12:49:45.416453 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:49:46 crc kubenswrapper[4772]: I1122 12:49:46.522421 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jxk96" event={"ID":"3438f9dc-9de3-45a6-b54a-1b718b6d46ca","Type":"ContainerStarted","Data":"6e7b8b00c3d327079428ecfb7cda635cc1179e6dd4b9080e21eef245803ecf9b"} Nov 22 12:49:47 crc kubenswrapper[4772]: I1122 12:49:47.535223 4772 generic.go:334] "Generic (PLEG): container finished" podID="3438f9dc-9de3-45a6-b54a-1b718b6d46ca" containerID="6e7b8b00c3d327079428ecfb7cda635cc1179e6dd4b9080e21eef245803ecf9b" exitCode=0 Nov 22 12:49:47 crc kubenswrapper[4772]: I1122 12:49:47.535633 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jxk96" event={"ID":"3438f9dc-9de3-45a6-b54a-1b718b6d46ca","Type":"ContainerDied","Data":"6e7b8b00c3d327079428ecfb7cda635cc1179e6dd4b9080e21eef245803ecf9b"} Nov 22 12:49:49 crc kubenswrapper[4772]: I1122 12:49:49.557781 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jxk96" event={"ID":"3438f9dc-9de3-45a6-b54a-1b718b6d46ca","Type":"ContainerStarted","Data":"4d97169a8b264164d2bfa4a221b6116cab109ccb777d78090a6ca8d820c7ed7e"} Nov 22 12:49:53 crc kubenswrapper[4772]: I1122 12:49:53.539127 4772 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jxk96" Nov 22 12:49:53 crc kubenswrapper[4772]: I1122 12:49:53.539765 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jxk96" Nov 22 12:49:53 crc kubenswrapper[4772]: I1122 12:49:53.586710 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jxk96" Nov 22 12:49:53 crc kubenswrapper[4772]: I1122 12:49:53.605311 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jxk96" podStartSLOduration=6.783340531 podStartE2EDuration="10.605295677s" podCreationTimestamp="2025-11-22 12:49:43 +0000 UTC" firstStartedPulling="2025-11-22 12:49:44.496866109 +0000 UTC m=+7904.736310603" lastFinishedPulling="2025-11-22 12:49:48.318821255 +0000 UTC m=+7908.558265749" observedRunningTime="2025-11-22 12:49:49.583548031 +0000 UTC m=+7909.822992535" watchObservedRunningTime="2025-11-22 12:49:53.605295677 +0000 UTC m=+7913.844740171" Nov 22 12:49:53 crc kubenswrapper[4772]: I1122 12:49:53.661559 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jxk96" Nov 22 12:49:53 crc kubenswrapper[4772]: I1122 12:49:53.824094 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jxk96"] Nov 22 12:49:55 crc kubenswrapper[4772]: I1122 12:49:55.637294 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jxk96" podUID="3438f9dc-9de3-45a6-b54a-1b718b6d46ca" containerName="registry-server" containerID="cri-o://4d97169a8b264164d2bfa4a221b6116cab109ccb777d78090a6ca8d820c7ed7e" gracePeriod=2 Nov 22 12:49:56 crc kubenswrapper[4772]: I1122 12:49:56.658678 4772 generic.go:334] "Generic (PLEG): container finished" podID="3438f9dc-9de3-45a6-b54a-1b718b6d46ca" containerID="4d97169a8b264164d2bfa4a221b6116cab109ccb777d78090a6ca8d820c7ed7e" exitCode=0 Nov 22 12:49:56 crc kubenswrapper[4772]: I1122 12:49:56.658912 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jxk96" event={"ID":"3438f9dc-9de3-45a6-b54a-1b718b6d46ca","Type":"ContainerDied","Data":"4d97169a8b264164d2bfa4a221b6116cab109ccb777d78090a6ca8d820c7ed7e"} Nov 22 12:49:56 crc kubenswrapper[4772]: I1122 12:49:56.658941 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jxk96" event={"ID":"3438f9dc-9de3-45a6-b54a-1b718b6d46ca","Type":"ContainerDied","Data":"e631c6e336a4f80a4459f2560fbe33254e8c458e1b3f1600d656418a0f7baf7f"} Nov 22 12:49:56 crc kubenswrapper[4772]: I1122 12:49:56.658951 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e631c6e336a4f80a4459f2560fbe33254e8c458e1b3f1600d656418a0f7baf7f" Nov 22 12:49:56 crc kubenswrapper[4772]: I1122 12:49:56.724773 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jxk96" Nov 22 12:49:56 crc kubenswrapper[4772]: I1122 12:49:56.793665 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfwk5\" (UniqueName: \"kubernetes.io/projected/3438f9dc-9de3-45a6-b54a-1b718b6d46ca-kube-api-access-xfwk5\") pod \"3438f9dc-9de3-45a6-b54a-1b718b6d46ca\" (UID: \"3438f9dc-9de3-45a6-b54a-1b718b6d46ca\") " Nov 22 12:49:56 crc kubenswrapper[4772]: I1122 12:49:56.793729 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3438f9dc-9de3-45a6-b54a-1b718b6d46ca-catalog-content\") pod \"3438f9dc-9de3-45a6-b54a-1b718b6d46ca\" (UID: \"3438f9dc-9de3-45a6-b54a-1b718b6d46ca\") " Nov 22 12:49:56 crc kubenswrapper[4772]: I1122 12:49:56.793771 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3438f9dc-9de3-45a6-b54a-1b718b6d46ca-utilities\") pod \"3438f9dc-9de3-45a6-b54a-1b718b6d46ca\" (UID: \"3438f9dc-9de3-45a6-b54a-1b718b6d46ca\") " Nov 22 12:49:56 crc kubenswrapper[4772]: I1122 12:49:56.794717 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3438f9dc-9de3-45a6-b54a-1b718b6d46ca-utilities" (OuterVolumeSpecName: "utilities") pod "3438f9dc-9de3-45a6-b54a-1b718b6d46ca" (UID: "3438f9dc-9de3-45a6-b54a-1b718b6d46ca"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 12:49:56 crc kubenswrapper[4772]: I1122 12:49:56.802618 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3438f9dc-9de3-45a6-b54a-1b718b6d46ca-kube-api-access-xfwk5" (OuterVolumeSpecName: "kube-api-access-xfwk5") pod "3438f9dc-9de3-45a6-b54a-1b718b6d46ca" (UID: "3438f9dc-9de3-45a6-b54a-1b718b6d46ca"). InnerVolumeSpecName "kube-api-access-xfwk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:49:56 crc kubenswrapper[4772]: I1122 12:49:56.896261 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfwk5\" (UniqueName: \"kubernetes.io/projected/3438f9dc-9de3-45a6-b54a-1b718b6d46ca-kube-api-access-xfwk5\") on node \"crc\" DevicePath \"\"" Nov 22 12:49:56 crc kubenswrapper[4772]: I1122 12:49:56.896335 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3438f9dc-9de3-45a6-b54a-1b718b6d46ca-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 12:49:57 crc kubenswrapper[4772]: I1122 12:49:57.672439 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jxk96" Nov 22 12:49:58 crc kubenswrapper[4772]: I1122 12:49:58.774344 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3438f9dc-9de3-45a6-b54a-1b718b6d46ca-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3438f9dc-9de3-45a6-b54a-1b718b6d46ca" (UID: "3438f9dc-9de3-45a6-b54a-1b718b6d46ca"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 12:49:58 crc kubenswrapper[4772]: I1122 12:49:58.839314 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3438f9dc-9de3-45a6-b54a-1b718b6d46ca-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 12:49:58 crc kubenswrapper[4772]: I1122 12:49:58.918145 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jxk96"] Nov 22 12:49:58 crc kubenswrapper[4772]: I1122 12:49:58.929288 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jxk96"] Nov 22 12:49:59 crc kubenswrapper[4772]: I1122 12:49:59.415636 4772 scope.go:117] "RemoveContainer" containerID="0951475fb87f959b566a9e347ebcff9c0640430af1a33fe41eff88eca50bed05" Nov 22 12:49:59 crc kubenswrapper[4772]: E1122 12:49:59.416794 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:49:59 crc kubenswrapper[4772]: I1122 12:49:59.431846 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3438f9dc-9de3-45a6-b54a-1b718b6d46ca" path="/var/lib/kubelet/pods/3438f9dc-9de3-45a6-b54a-1b718b6d46ca/volumes" Nov 22 12:50:12 crc kubenswrapper[4772]: I1122 12:50:12.415164 4772 scope.go:117] "RemoveContainer" containerID="0951475fb87f959b566a9e347ebcff9c0640430af1a33fe41eff88eca50bed05" Nov 22 12:50:12 crc kubenswrapper[4772]: E1122 12:50:12.417426 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:50:23 crc kubenswrapper[4772]: I1122 12:50:23.413900 4772 scope.go:117] "RemoveContainer" containerID="0951475fb87f959b566a9e347ebcff9c0640430af1a33fe41eff88eca50bed05" Nov 22 12:50:23 crc kubenswrapper[4772]: E1122 12:50:23.414710 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:50:35 crc kubenswrapper[4772]: I1122 12:50:35.420154 4772 scope.go:117] "RemoveContainer" containerID="0951475fb87f959b566a9e347ebcff9c0640430af1a33fe41eff88eca50bed05" Nov 22 12:50:35 crc kubenswrapper[4772]: E1122 12:50:35.420964 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:50:42 crc kubenswrapper[4772]: I1122 12:50:42.204494 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hxjlm"] Nov 22 12:50:42 crc kubenswrapper[4772]: E1122 12:50:42.206590 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3438f9dc-9de3-45a6-b54a-1b718b6d46ca" containerName="extract-content" Nov 22 12:50:42 crc kubenswrapper[4772]: I1122 12:50:42.206612 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="3438f9dc-9de3-45a6-b54a-1b718b6d46ca" containerName="extract-content" Nov 22 12:50:42 crc kubenswrapper[4772]: E1122 12:50:42.206637 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3438f9dc-9de3-45a6-b54a-1b718b6d46ca" containerName="registry-server" Nov 22 12:50:42 crc kubenswrapper[4772]: I1122 12:50:42.206648 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="3438f9dc-9de3-45a6-b54a-1b718b6d46ca" containerName="registry-server" Nov 22 12:50:42 crc kubenswrapper[4772]: E1122 12:50:42.206789 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3438f9dc-9de3-45a6-b54a-1b718b6d46ca" containerName="extract-utilities" Nov 22 12:50:42 crc kubenswrapper[4772]: I1122 12:50:42.206801 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="3438f9dc-9de3-45a6-b54a-1b718b6d46ca" containerName="extract-utilities" Nov 22 12:50:42 crc kubenswrapper[4772]: I1122 12:50:42.207148 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="3438f9dc-9de3-45a6-b54a-1b718b6d46ca" containerName="registry-server" Nov 22 12:50:42 crc kubenswrapper[4772]: I1122 12:50:42.209128 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hxjlm" Nov 22 12:50:42 crc kubenswrapper[4772]: I1122 12:50:42.218712 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hxjlm"] Nov 22 12:50:42 crc kubenswrapper[4772]: I1122 12:50:42.251623 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/055a559e-dac7-4618-ad6c-b1b01967bd78-utilities\") pod \"redhat-operators-hxjlm\" (UID: \"055a559e-dac7-4618-ad6c-b1b01967bd78\") " pod="openshift-marketplace/redhat-operators-hxjlm" Nov 22 12:50:42 crc kubenswrapper[4772]: I1122 12:50:42.251693 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/055a559e-dac7-4618-ad6c-b1b01967bd78-catalog-content\") pod \"redhat-operators-hxjlm\" (UID: \"055a559e-dac7-4618-ad6c-b1b01967bd78\") " pod="openshift-marketplace/redhat-operators-hxjlm" Nov 22 12:50:42 crc kubenswrapper[4772]: I1122 12:50:42.251715 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64jh2\" (UniqueName: \"kubernetes.io/projected/055a559e-dac7-4618-ad6c-b1b01967bd78-kube-api-access-64jh2\") pod \"redhat-operators-hxjlm\" (UID: \"055a559e-dac7-4618-ad6c-b1b01967bd78\") " pod="openshift-marketplace/redhat-operators-hxjlm" Nov 22 12:50:42 crc kubenswrapper[4772]: I1122 12:50:42.354226 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/055a559e-dac7-4618-ad6c-b1b01967bd78-utilities\") pod \"redhat-operators-hxjlm\" (UID: \"055a559e-dac7-4618-ad6c-b1b01967bd78\") " pod="openshift-marketplace/redhat-operators-hxjlm" Nov 22 12:50:42 crc kubenswrapper[4772]: I1122 12:50:42.354310 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/055a559e-dac7-4618-ad6c-b1b01967bd78-catalog-content\") pod \"redhat-operators-hxjlm\" (UID: \"055a559e-dac7-4618-ad6c-b1b01967bd78\") " pod="openshift-marketplace/redhat-operators-hxjlm" Nov 22 12:50:42 crc kubenswrapper[4772]: I1122 12:50:42.354360 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64jh2\" (UniqueName: \"kubernetes.io/projected/055a559e-dac7-4618-ad6c-b1b01967bd78-kube-api-access-64jh2\") pod \"redhat-operators-hxjlm\" (UID: \"055a559e-dac7-4618-ad6c-b1b01967bd78\") " pod="openshift-marketplace/redhat-operators-hxjlm" Nov 22 12:50:42 crc kubenswrapper[4772]: I1122 12:50:42.355114 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/055a559e-dac7-4618-ad6c-b1b01967bd78-utilities\") pod \"redhat-operators-hxjlm\" (UID: \"055a559e-dac7-4618-ad6c-b1b01967bd78\") " pod="openshift-marketplace/redhat-operators-hxjlm" Nov 22 12:50:42 crc kubenswrapper[4772]: I1122 12:50:42.355308 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/055a559e-dac7-4618-ad6c-b1b01967bd78-catalog-content\") pod \"redhat-operators-hxjlm\" (UID: \"055a559e-dac7-4618-ad6c-b1b01967bd78\") " pod="openshift-marketplace/redhat-operators-hxjlm" Nov 22 12:50:42 crc kubenswrapper[4772]: I1122 12:50:42.381222 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-64jh2\" (UniqueName: \"kubernetes.io/projected/055a559e-dac7-4618-ad6c-b1b01967bd78-kube-api-access-64jh2\") pod \"redhat-operators-hxjlm\" (UID: \"055a559e-dac7-4618-ad6c-b1b01967bd78\") " pod="openshift-marketplace/redhat-operators-hxjlm" Nov 22 12:50:42 crc kubenswrapper[4772]: I1122 12:50:42.568868 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hxjlm" Nov 22 12:50:43 crc kubenswrapper[4772]: I1122 12:50:43.063274 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hxjlm"] Nov 22 12:50:43 crc kubenswrapper[4772]: W1122 12:50:43.065420 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod055a559e_dac7_4618_ad6c_b1b01967bd78.slice/crio-deeba73b8758a047cb1bbc139c4cb3c633608a93dd06ab3854d7656571c3bebc WatchSource:0}: Error finding container deeba73b8758a047cb1bbc139c4cb3c633608a93dd06ab3854d7656571c3bebc: Status 404 returned error can't find the container with id deeba73b8758a047cb1bbc139c4cb3c633608a93dd06ab3854d7656571c3bebc Nov 22 12:50:43 crc kubenswrapper[4772]: I1122 12:50:43.185838 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hxjlm" event={"ID":"055a559e-dac7-4618-ad6c-b1b01967bd78","Type":"ContainerStarted","Data":"deeba73b8758a047cb1bbc139c4cb3c633608a93dd06ab3854d7656571c3bebc"} Nov 22 12:50:44 crc kubenswrapper[4772]: I1122 12:50:44.197607 4772 generic.go:334] "Generic (PLEG): container finished" podID="055a559e-dac7-4618-ad6c-b1b01967bd78" containerID="2eae87a2fce3a0654bc6c16fad89293a975557542bddb95d00142d4b9ae27dd8" exitCode=0 Nov 22 12:50:44 crc kubenswrapper[4772]: I1122 12:50:44.197725 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hxjlm" event={"ID":"055a559e-dac7-4618-ad6c-b1b01967bd78","Type":"ContainerDied","Data":"2eae87a2fce3a0654bc6c16fad89293a975557542bddb95d00142d4b9ae27dd8"} Nov 22 12:50:44 crc kubenswrapper[4772]: I1122 12:50:44.200690 4772 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 12:50:45 crc kubenswrapper[4772]: I1122 12:50:45.215694 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hxjlm" event={"ID":"055a559e-dac7-4618-ad6c-b1b01967bd78","Type":"ContainerStarted","Data":"4ded576e6691d0464679d31d709770e52997e834e3f89e6d6cb29b11591513a8"} Nov 22 12:50:47 crc kubenswrapper[4772]: I1122 12:50:47.414316 4772 scope.go:117] "RemoveContainer" containerID="0951475fb87f959b566a9e347ebcff9c0640430af1a33fe41eff88eca50bed05" Nov 22 12:50:47 crc kubenswrapper[4772]: E1122 12:50:47.414792 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:50:50 crc kubenswrapper[4772]: I1122 12:50:50.273251 4772 generic.go:334] "Generic (PLEG): container finished" podID="055a559e-dac7-4618-ad6c-b1b01967bd78" containerID="4ded576e6691d0464679d31d709770e52997e834e3f89e6d6cb29b11591513a8" exitCode=0 Nov 22 12:50:50 crc kubenswrapper[4772]: I1122 12:50:50.273377 4772 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hxjlm" event={"ID":"055a559e-dac7-4618-ad6c-b1b01967bd78","Type":"ContainerDied","Data":"4ded576e6691d0464679d31d709770e52997e834e3f89e6d6cb29b11591513a8"} Nov 22 12:50:51 crc kubenswrapper[4772]: I1122 12:50:51.288578 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hxjlm" event={"ID":"055a559e-dac7-4618-ad6c-b1b01967bd78","Type":"ContainerStarted","Data":"0babf557f6cb15e9ff85100b34ab270817db27a872d7d21d09ea5b071925d636"} Nov 22 12:50:51 crc kubenswrapper[4772]: I1122 12:50:51.313999 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hxjlm" podStartSLOduration=2.7912886 podStartE2EDuration="9.313968552s" podCreationTimestamp="2025-11-22 12:50:42 +0000 UTC" firstStartedPulling="2025-11-22 12:50:44.200326296 +0000 UTC m=+7964.439770790" lastFinishedPulling="2025-11-22 12:50:50.723006248 +0000 UTC m=+7970.962450742" observedRunningTime="2025-11-22 12:50:51.310297031 +0000 UTC m=+7971.549741525" watchObservedRunningTime="2025-11-22 12:50:51.313968552 +0000 UTC m=+7971.553413046" Nov 22 12:50:52 crc kubenswrapper[4772]: I1122 12:50:52.569326 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hxjlm" Nov 22 12:50:52 crc kubenswrapper[4772]: I1122 12:50:52.569654 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hxjlm" Nov 22 12:50:53 crc kubenswrapper[4772]: I1122 12:50:53.629105 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hxjlm" podUID="055a559e-dac7-4618-ad6c-b1b01967bd78" containerName="registry-server" probeResult="failure" output=< Nov 22 12:50:53 crc kubenswrapper[4772]: timeout: failed to connect service ":50051" within 1s Nov 22 12:50:53 crc kubenswrapper[4772]: > Nov 22 12:50:59 crc kubenswrapper[4772]: I1122 12:50:59.414186 4772 scope.go:117] "RemoveContainer" containerID="0951475fb87f959b566a9e347ebcff9c0640430af1a33fe41eff88eca50bed05" Nov 22 12:50:59 crc kubenswrapper[4772]: E1122 12:50:59.415089 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:51:03 crc kubenswrapper[4772]: I1122 12:51:03.625415 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hxjlm" podUID="055a559e-dac7-4618-ad6c-b1b01967bd78" containerName="registry-server" probeResult="failure" output=< Nov 22 12:51:03 crc kubenswrapper[4772]: timeout: failed to connect service ":50051" within 1s Nov 22 12:51:03 crc kubenswrapper[4772]: > Nov 22 12:51:11 crc kubenswrapper[4772]: I1122 12:51:11.431382 4772 scope.go:117] "RemoveContainer" containerID="0951475fb87f959b566a9e347ebcff9c0640430af1a33fe41eff88eca50bed05" Nov 22 12:51:11 crc kubenswrapper[4772]: E1122 12:51:11.432190 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:51:12 crc kubenswrapper[4772]: I1122 12:51:12.644270 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hxjlm" Nov 22 12:51:12 crc kubenswrapper[4772]: I1122 12:51:12.707401 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hxjlm" Nov 22 12:51:13 crc kubenswrapper[4772]: I1122 12:51:13.407233 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hxjlm"] Nov 22 12:51:14 crc kubenswrapper[4772]: I1122 12:51:14.609346 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hxjlm" podUID="055a559e-dac7-4618-ad6c-b1b01967bd78" containerName="registry-server" containerID="cri-o://0babf557f6cb15e9ff85100b34ab270817db27a872d7d21d09ea5b071925d636" gracePeriod=2 Nov 22 12:51:15 crc kubenswrapper[4772]: I1122 12:51:15.278654 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hxjlm" Nov 22 12:51:15 crc kubenswrapper[4772]: I1122 12:51:15.377476 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/055a559e-dac7-4618-ad6c-b1b01967bd78-utilities\") pod \"055a559e-dac7-4618-ad6c-b1b01967bd78\" (UID: \"055a559e-dac7-4618-ad6c-b1b01967bd78\") " Nov 22 12:51:15 crc kubenswrapper[4772]: I1122 12:51:15.377602 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64jh2\" (UniqueName: \"kubernetes.io/projected/055a559e-dac7-4618-ad6c-b1b01967bd78-kube-api-access-64jh2\") pod \"055a559e-dac7-4618-ad6c-b1b01967bd78\" (UID: \"055a559e-dac7-4618-ad6c-b1b01967bd78\") " Nov 22 12:51:15 crc kubenswrapper[4772]: I1122 12:51:15.377804 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/055a559e-dac7-4618-ad6c-b1b01967bd78-catalog-content\") pod \"055a559e-dac7-4618-ad6c-b1b01967bd78\" (UID: \"055a559e-dac7-4618-ad6c-b1b01967bd78\") " Nov 22 12:51:15 crc kubenswrapper[4772]: I1122 12:51:15.379257 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/055a559e-dac7-4618-ad6c-b1b01967bd78-utilities" (OuterVolumeSpecName: "utilities") pod "055a559e-dac7-4618-ad6c-b1b01967bd78" (UID: "055a559e-dac7-4618-ad6c-b1b01967bd78"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 12:51:15 crc kubenswrapper[4772]: I1122 12:51:15.384345 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/055a559e-dac7-4618-ad6c-b1b01967bd78-kube-api-access-64jh2" (OuterVolumeSpecName: "kube-api-access-64jh2") pod "055a559e-dac7-4618-ad6c-b1b01967bd78" (UID: "055a559e-dac7-4618-ad6c-b1b01967bd78"). InnerVolumeSpecName "kube-api-access-64jh2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:51:15 crc kubenswrapper[4772]: I1122 12:51:15.479880 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/055a559e-dac7-4618-ad6c-b1b01967bd78-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 12:51:15 crc kubenswrapper[4772]: I1122 12:51:15.480319 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64jh2\" (UniqueName: \"kubernetes.io/projected/055a559e-dac7-4618-ad6c-b1b01967bd78-kube-api-access-64jh2\") on node \"crc\" DevicePath \"\"" Nov 22 12:51:15 crc kubenswrapper[4772]: I1122 12:51:15.484635 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/055a559e-dac7-4618-ad6c-b1b01967bd78-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "055a559e-dac7-4618-ad6c-b1b01967bd78" (UID: "055a559e-dac7-4618-ad6c-b1b01967bd78"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 12:51:15 crc kubenswrapper[4772]: I1122 12:51:15.581714 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/055a559e-dac7-4618-ad6c-b1b01967bd78-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 12:51:15 crc kubenswrapper[4772]: I1122 12:51:15.618963 4772 generic.go:334] "Generic (PLEG): container finished" podID="055a559e-dac7-4618-ad6c-b1b01967bd78" containerID="0babf557f6cb15e9ff85100b34ab270817db27a872d7d21d09ea5b071925d636" exitCode=0 Nov 22 12:51:15 crc kubenswrapper[4772]: I1122 12:51:15.619015 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hxjlm" event={"ID":"055a559e-dac7-4618-ad6c-b1b01967bd78","Type":"ContainerDied","Data":"0babf557f6cb15e9ff85100b34ab270817db27a872d7d21d09ea5b071925d636"} Nov 22 12:51:15 crc kubenswrapper[4772]: I1122 12:51:15.619086 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hxjlm" event={"ID":"055a559e-dac7-4618-ad6c-b1b01967bd78","Type":"ContainerDied","Data":"deeba73b8758a047cb1bbc139c4cb3c633608a93dd06ab3854d7656571c3bebc"} Nov 22 12:51:15 crc kubenswrapper[4772]: I1122 12:51:15.619111 4772 scope.go:117] "RemoveContainer" containerID="0babf557f6cb15e9ff85100b34ab270817db27a872d7d21d09ea5b071925d636" Nov 22 12:51:15 crc kubenswrapper[4772]: I1122 12:51:15.619309 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hxjlm" Nov 22 12:51:15 crc kubenswrapper[4772]: I1122 12:51:15.663108 4772 scope.go:117] "RemoveContainer" containerID="4ded576e6691d0464679d31d709770e52997e834e3f89e6d6cb29b11591513a8" Nov 22 12:51:15 crc kubenswrapper[4772]: I1122 12:51:15.672067 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hxjlm"] Nov 22 12:51:15 crc kubenswrapper[4772]: I1122 12:51:15.681153 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hxjlm"] Nov 22 12:51:15 crc kubenswrapper[4772]: I1122 12:51:15.687423 4772 scope.go:117] "RemoveContainer" containerID="2eae87a2fce3a0654bc6c16fad89293a975557542bddb95d00142d4b9ae27dd8" Nov 22 12:51:15 crc kubenswrapper[4772]: I1122 12:51:15.743710 4772 scope.go:117] "RemoveContainer" containerID="0babf557f6cb15e9ff85100b34ab270817db27a872d7d21d09ea5b071925d636" Nov 22 12:51:15 crc kubenswrapper[4772]: E1122 12:51:15.744326 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0babf557f6cb15e9ff85100b34ab270817db27a872d7d21d09ea5b071925d636\": container with ID starting with 0babf557f6cb15e9ff85100b34ab270817db27a872d7d21d09ea5b071925d636 not found: ID does not exist" containerID="0babf557f6cb15e9ff85100b34ab270817db27a872d7d21d09ea5b071925d636" Nov 22 12:51:15 crc kubenswrapper[4772]: I1122 12:51:15.744406 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0babf557f6cb15e9ff85100b34ab270817db27a872d7d21d09ea5b071925d636"} err="failed to get container status \"0babf557f6cb15e9ff85100b34ab270817db27a872d7d21d09ea5b071925d636\": rpc error: code = NotFound desc = could not find container \"0babf557f6cb15e9ff85100b34ab270817db27a872d7d21d09ea5b071925d636\": container with ID starting with 0babf557f6cb15e9ff85100b34ab270817db27a872d7d21d09ea5b071925d636 not found: ID does not exist" Nov 22 12:51:15 crc kubenswrapper[4772]: I1122 12:51:15.744440 4772 scope.go:117] "RemoveContainer" containerID="4ded576e6691d0464679d31d709770e52997e834e3f89e6d6cb29b11591513a8" Nov 22 12:51:15 crc kubenswrapper[4772]: E1122 12:51:15.753884 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ded576e6691d0464679d31d709770e52997e834e3f89e6d6cb29b11591513a8\": container with ID starting with 4ded576e6691d0464679d31d709770e52997e834e3f89e6d6cb29b11591513a8 not found: ID does not exist" containerID="4ded576e6691d0464679d31d709770e52997e834e3f89e6d6cb29b11591513a8" Nov 22 12:51:15 crc kubenswrapper[4772]: I1122 12:51:15.753931 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ded576e6691d0464679d31d709770e52997e834e3f89e6d6cb29b11591513a8"} err="failed to get container status \"4ded576e6691d0464679d31d709770e52997e834e3f89e6d6cb29b11591513a8\": rpc error: code = NotFound desc = could not find container \"4ded576e6691d0464679d31d709770e52997e834e3f89e6d6cb29b11591513a8\": container with ID starting with 4ded576e6691d0464679d31d709770e52997e834e3f89e6d6cb29b11591513a8 not found: ID does not exist" Nov 22 12:51:15 crc kubenswrapper[4772]: I1122 12:51:15.753976 4772 scope.go:117] "RemoveContainer" containerID="2eae87a2fce3a0654bc6c16fad89293a975557542bddb95d00142d4b9ae27dd8" Nov 22 12:51:15 crc kubenswrapper[4772]: E1122 12:51:15.754514 4772 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"2eae87a2fce3a0654bc6c16fad89293a975557542bddb95d00142d4b9ae27dd8\": container with ID starting with 2eae87a2fce3a0654bc6c16fad89293a975557542bddb95d00142d4b9ae27dd8 not found: ID does not exist" containerID="2eae87a2fce3a0654bc6c16fad89293a975557542bddb95d00142d4b9ae27dd8" Nov 22 12:51:15 crc kubenswrapper[4772]: I1122 12:51:15.754575 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2eae87a2fce3a0654bc6c16fad89293a975557542bddb95d00142d4b9ae27dd8"} err="failed to get container status \"2eae87a2fce3a0654bc6c16fad89293a975557542bddb95d00142d4b9ae27dd8\": rpc error: code = NotFound desc = could not find container \"2eae87a2fce3a0654bc6c16fad89293a975557542bddb95d00142d4b9ae27dd8\": container with ID starting with 2eae87a2fce3a0654bc6c16fad89293a975557542bddb95d00142d4b9ae27dd8 not found: ID does not exist" Nov 22 12:51:17 crc kubenswrapper[4772]: I1122 12:51:17.429818 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="055a559e-dac7-4618-ad6c-b1b01967bd78" path="/var/lib/kubelet/pods/055a559e-dac7-4618-ad6c-b1b01967bd78/volumes" Nov 22 12:51:26 crc kubenswrapper[4772]: I1122 12:51:26.414695 4772 scope.go:117] "RemoveContainer" containerID="0951475fb87f959b566a9e347ebcff9c0640430af1a33fe41eff88eca50bed05" Nov 22 12:51:26 crc kubenswrapper[4772]: E1122 12:51:26.415756 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:51:40 crc kubenswrapper[4772]: I1122 12:51:40.413860 4772 scope.go:117] "RemoveContainer" containerID="0951475fb87f959b566a9e347ebcff9c0640430af1a33fe41eff88eca50bed05" Nov 22 12:51:40 crc kubenswrapper[4772]: E1122 12:51:40.414991 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:51:51 crc kubenswrapper[4772]: I1122 12:51:51.419741 4772 scope.go:117] "RemoveContainer" containerID="0951475fb87f959b566a9e347ebcff9c0640430af1a33fe41eff88eca50bed05" Nov 22 12:51:51 crc kubenswrapper[4772]: E1122 12:51:51.420464 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:52:03 crc kubenswrapper[4772]: I1122 12:52:03.414292 4772 scope.go:117] "RemoveContainer" containerID="0951475fb87f959b566a9e347ebcff9c0640430af1a33fe41eff88eca50bed05" Nov 22 12:52:03 crc kubenswrapper[4772]: E1122 12:52:03.415125 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:52:14 crc kubenswrapper[4772]: I1122 12:52:14.396904 4772 generic.go:334] "Generic (PLEG): container finished" podID="aebecc6e-2ba7-423a-b983-f4698c836a86" containerID="23315c828ce4ef181f771259fda146d05bef5f613aa1ec56c2cb069599ba576d" exitCode=0 Nov 22 12:52:14 crc kubenswrapper[4772]: I1122 12:52:14.397014 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-vf8xp" event={"ID":"aebecc6e-2ba7-423a-b983-f4698c836a86","Type":"ContainerDied","Data":"23315c828ce4ef181f771259fda146d05bef5f613aa1ec56c2cb069599ba576d"} Nov 22 12:52:15 crc kubenswrapper[4772]: I1122 12:52:15.417354 4772 scope.go:117] "RemoveContainer" containerID="0951475fb87f959b566a9e347ebcff9c0640430af1a33fe41eff88eca50bed05" Nov 22 12:52:15 crc kubenswrapper[4772]: E1122 12:52:15.417838 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:52:15 crc kubenswrapper[4772]: I1122 12:52:15.939845 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-vf8xp" Nov 22 12:52:16 crc kubenswrapper[4772]: I1122 12:52:16.126843 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aebecc6e-2ba7-423a-b983-f4698c836a86-inventory\") pod \"aebecc6e-2ba7-423a-b983-f4698c836a86\" (UID: \"aebecc6e-2ba7-423a-b983-f4698c836a86\") " Nov 22 12:52:16 crc kubenswrapper[4772]: I1122 12:52:16.126959 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aebecc6e-2ba7-423a-b983-f4698c836a86-libvirt-combined-ca-bundle\") pod \"aebecc6e-2ba7-423a-b983-f4698c836a86\" (UID: \"aebecc6e-2ba7-423a-b983-f4698c836a86\") " Nov 22 12:52:16 crc kubenswrapper[4772]: I1122 12:52:16.127032 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/aebecc6e-2ba7-423a-b983-f4698c836a86-ceph\") pod \"aebecc6e-2ba7-423a-b983-f4698c836a86\" (UID: \"aebecc6e-2ba7-423a-b983-f4698c836a86\") " Nov 22 12:52:16 crc kubenswrapper[4772]: I1122 12:52:16.127068 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/aebecc6e-2ba7-423a-b983-f4698c836a86-libvirt-secret-0\") pod \"aebecc6e-2ba7-423a-b983-f4698c836a86\" (UID: \"aebecc6e-2ba7-423a-b983-f4698c836a86\") " Nov 22 12:52:16 crc kubenswrapper[4772]: I1122 12:52:16.127106 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvz78\" (UniqueName: \"kubernetes.io/projected/aebecc6e-2ba7-423a-b983-f4698c836a86-kube-api-access-wvz78\") pod \"aebecc6e-2ba7-423a-b983-f4698c836a86\" (UID: 
\"aebecc6e-2ba7-423a-b983-f4698c836a86\") " Nov 22 12:52:16 crc kubenswrapper[4772]: I1122 12:52:16.127843 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/aebecc6e-2ba7-423a-b983-f4698c836a86-ssh-key\") pod \"aebecc6e-2ba7-423a-b983-f4698c836a86\" (UID: \"aebecc6e-2ba7-423a-b983-f4698c836a86\") " Nov 22 12:52:16 crc kubenswrapper[4772]: I1122 12:52:16.133706 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aebecc6e-2ba7-423a-b983-f4698c836a86-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "aebecc6e-2ba7-423a-b983-f4698c836a86" (UID: "aebecc6e-2ba7-423a-b983-f4698c836a86"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:52:16 crc kubenswrapper[4772]: I1122 12:52:16.135871 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aebecc6e-2ba7-423a-b983-f4698c836a86-ceph" (OuterVolumeSpecName: "ceph") pod "aebecc6e-2ba7-423a-b983-f4698c836a86" (UID: "aebecc6e-2ba7-423a-b983-f4698c836a86"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:52:16 crc kubenswrapper[4772]: I1122 12:52:16.143664 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aebecc6e-2ba7-423a-b983-f4698c836a86-kube-api-access-wvz78" (OuterVolumeSpecName: "kube-api-access-wvz78") pod "aebecc6e-2ba7-423a-b983-f4698c836a86" (UID: "aebecc6e-2ba7-423a-b983-f4698c836a86"). InnerVolumeSpecName "kube-api-access-wvz78". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:52:16 crc kubenswrapper[4772]: I1122 12:52:16.168916 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aebecc6e-2ba7-423a-b983-f4698c836a86-inventory" (OuterVolumeSpecName: "inventory") pod "aebecc6e-2ba7-423a-b983-f4698c836a86" (UID: "aebecc6e-2ba7-423a-b983-f4698c836a86"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:52:16 crc kubenswrapper[4772]: I1122 12:52:16.169200 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aebecc6e-2ba7-423a-b983-f4698c836a86-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "aebecc6e-2ba7-423a-b983-f4698c836a86" (UID: "aebecc6e-2ba7-423a-b983-f4698c836a86"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:52:16 crc kubenswrapper[4772]: I1122 12:52:16.170955 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aebecc6e-2ba7-423a-b983-f4698c836a86-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "aebecc6e-2ba7-423a-b983-f4698c836a86" (UID: "aebecc6e-2ba7-423a-b983-f4698c836a86"). InnerVolumeSpecName "libvirt-secret-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:52:16 crc kubenswrapper[4772]: I1122 12:52:16.230489 4772 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/aebecc6e-2ba7-423a-b983-f4698c836a86-ceph\") on node \"crc\" DevicePath \"\"" Nov 22 12:52:16 crc kubenswrapper[4772]: I1122 12:52:16.230526 4772 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/aebecc6e-2ba7-423a-b983-f4698c836a86-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Nov 22 12:52:16 crc kubenswrapper[4772]: I1122 12:52:16.230539 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvz78\" (UniqueName: \"kubernetes.io/projected/aebecc6e-2ba7-423a-b983-f4698c836a86-kube-api-access-wvz78\") on node \"crc\" DevicePath \"\"" Nov 22 12:52:16 crc kubenswrapper[4772]: I1122 12:52:16.230549 4772 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/aebecc6e-2ba7-423a-b983-f4698c836a86-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 22 12:52:16 crc kubenswrapper[4772]: I1122 12:52:16.230557 4772 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aebecc6e-2ba7-423a-b983-f4698c836a86-inventory\") on node \"crc\" DevicePath \"\"" Nov 22 12:52:16 crc kubenswrapper[4772]: I1122 12:52:16.230567 4772 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aebecc6e-2ba7-423a-b983-f4698c836a86-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 12:52:16 crc kubenswrapper[4772]: I1122 12:52:16.418457 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-vf8xp" event={"ID":"aebecc6e-2ba7-423a-b983-f4698c836a86","Type":"ContainerDied","Data":"a63cfae1fcc71288f5199fc76ed9b973cd72af107f622d301f06e1ecd6f5a8ca"} Nov 22 12:52:16 crc kubenswrapper[4772]: I1122 12:52:16.418882 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a63cfae1fcc71288f5199fc76ed9b973cd72af107f622d301f06e1ecd6f5a8ca" Nov 22 12:52:16 crc kubenswrapper[4772]: I1122 12:52:16.418954 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-vf8xp" Nov 22 12:52:16 crc kubenswrapper[4772]: I1122 12:52:16.528505 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-openstack-openstack-cell1-rfh4k"] Nov 22 12:52:16 crc kubenswrapper[4772]: E1122 12:52:16.528987 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aebecc6e-2ba7-423a-b983-f4698c836a86" containerName="libvirt-openstack-openstack-cell1" Nov 22 12:52:16 crc kubenswrapper[4772]: I1122 12:52:16.529004 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="aebecc6e-2ba7-423a-b983-f4698c836a86" containerName="libvirt-openstack-openstack-cell1" Nov 22 12:52:16 crc kubenswrapper[4772]: E1122 12:52:16.529016 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="055a559e-dac7-4618-ad6c-b1b01967bd78" containerName="registry-server" Nov 22 12:52:16 crc kubenswrapper[4772]: I1122 12:52:16.529022 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="055a559e-dac7-4618-ad6c-b1b01967bd78" containerName="registry-server" Nov 22 12:52:16 crc kubenswrapper[4772]: E1122 12:52:16.529049 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="055a559e-dac7-4618-ad6c-b1b01967bd78" containerName="extract-utilities" Nov 22 12:52:16 crc kubenswrapper[4772]: I1122 12:52:16.529071 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="055a559e-dac7-4618-ad6c-b1b01967bd78" containerName="extract-utilities" Nov 22 12:52:16 crc kubenswrapper[4772]: E1122 12:52:16.529094 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="055a559e-dac7-4618-ad6c-b1b01967bd78" containerName="extract-content" Nov 22 12:52:16 crc kubenswrapper[4772]: I1122 12:52:16.529099 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="055a559e-dac7-4618-ad6c-b1b01967bd78" containerName="extract-content" Nov 22 12:52:16 crc kubenswrapper[4772]: I1122 12:52:16.529304 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="aebecc6e-2ba7-423a-b983-f4698c836a86" containerName="libvirt-openstack-openstack-cell1" Nov 22 12:52:16 crc kubenswrapper[4772]: I1122 12:52:16.529324 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="055a559e-dac7-4618-ad6c-b1b01967bd78" containerName="registry-server" Nov 22 12:52:16 crc kubenswrapper[4772]: I1122 12:52:16.530131 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-rfh4k" Nov 22 12:52:16 crc kubenswrapper[4772]: I1122 12:52:16.533468 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Nov 22 12:52:16 crc kubenswrapper[4772]: I1122 12:52:16.533615 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 22 12:52:16 crc kubenswrapper[4772]: I1122 12:52:16.533850 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-6s2nz" Nov 22 12:52:16 crc kubenswrapper[4772]: I1122 12:52:16.534233 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 22 12:52:16 crc kubenswrapper[4772]: I1122 12:52:16.534435 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-cells-global-config" Nov 22 12:52:16 crc kubenswrapper[4772]: I1122 12:52:16.535191 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Nov 22 12:52:16 crc kubenswrapper[4772]: I1122 12:52:16.535349 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 22 12:52:16 crc kubenswrapper[4772]: I1122 12:52:16.558702 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-openstack-cell1-rfh4k"] Nov 22 12:52:16 crc kubenswrapper[4772]: I1122 12:52:16.638953 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/4ab47679-8af5-427d-bac6-71a380ae0130-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-openstack-cell1-rfh4k\" (UID: \"4ab47679-8af5-427d-bac6-71a380ae0130\") " pod="openstack/nova-cell1-openstack-openstack-cell1-rfh4k" Nov 22 12:52:16 crc kubenswrapper[4772]: I1122 12:52:16.639038 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/4ab47679-8af5-427d-bac6-71a380ae0130-nova-cells-global-config-1\") pod \"nova-cell1-openstack-openstack-cell1-rfh4k\" (UID: \"4ab47679-8af5-427d-bac6-71a380ae0130\") " pod="openstack/nova-cell1-openstack-openstack-cell1-rfh4k" Nov 22 12:52:16 crc kubenswrapper[4772]: I1122 12:52:16.639094 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zv8pn\" (UniqueName: \"kubernetes.io/projected/4ab47679-8af5-427d-bac6-71a380ae0130-kube-api-access-zv8pn\") pod \"nova-cell1-openstack-openstack-cell1-rfh4k\" (UID: \"4ab47679-8af5-427d-bac6-71a380ae0130\") " pod="openstack/nova-cell1-openstack-openstack-cell1-rfh4k" Nov 22 12:52:16 crc kubenswrapper[4772]: I1122 12:52:16.639123 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ab47679-8af5-427d-bac6-71a380ae0130-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-openstack-cell1-rfh4k\" (UID: \"4ab47679-8af5-427d-bac6-71a380ae0130\") " pod="openstack/nova-cell1-openstack-openstack-cell1-rfh4k" Nov 22 12:52:16 crc kubenswrapper[4772]: I1122 12:52:16.639144 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cells-global-config-0\" (UniqueName: 
\"kubernetes.io/configmap/4ab47679-8af5-427d-bac6-71a380ae0130-nova-cells-global-config-0\") pod \"nova-cell1-openstack-openstack-cell1-rfh4k\" (UID: \"4ab47679-8af5-427d-bac6-71a380ae0130\") " pod="openstack/nova-cell1-openstack-openstack-cell1-rfh4k" Nov 22 12:52:16 crc kubenswrapper[4772]: I1122 12:52:16.639217 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/4ab47679-8af5-427d-bac6-71a380ae0130-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-openstack-cell1-rfh4k\" (UID: \"4ab47679-8af5-427d-bac6-71a380ae0130\") " pod="openstack/nova-cell1-openstack-openstack-cell1-rfh4k" Nov 22 12:52:16 crc kubenswrapper[4772]: I1122 12:52:16.639240 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/4ab47679-8af5-427d-bac6-71a380ae0130-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-openstack-cell1-rfh4k\" (UID: \"4ab47679-8af5-427d-bac6-71a380ae0130\") " pod="openstack/nova-cell1-openstack-openstack-cell1-rfh4k" Nov 22 12:52:16 crc kubenswrapper[4772]: I1122 12:52:16.639258 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4ab47679-8af5-427d-bac6-71a380ae0130-ssh-key\") pod \"nova-cell1-openstack-openstack-cell1-rfh4k\" (UID: \"4ab47679-8af5-427d-bac6-71a380ae0130\") " pod="openstack/nova-cell1-openstack-openstack-cell1-rfh4k" Nov 22 12:52:16 crc kubenswrapper[4772]: I1122 12:52:16.639280 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ab47679-8af5-427d-bac6-71a380ae0130-inventory\") pod \"nova-cell1-openstack-openstack-cell1-rfh4k\" (UID: \"4ab47679-8af5-427d-bac6-71a380ae0130\") " pod="openstack/nova-cell1-openstack-openstack-cell1-rfh4k" Nov 22 12:52:16 crc kubenswrapper[4772]: I1122 12:52:16.639324 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/4ab47679-8af5-427d-bac6-71a380ae0130-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-openstack-cell1-rfh4k\" (UID: \"4ab47679-8af5-427d-bac6-71a380ae0130\") " pod="openstack/nova-cell1-openstack-openstack-cell1-rfh4k" Nov 22 12:52:16 crc kubenswrapper[4772]: I1122 12:52:16.639374 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4ab47679-8af5-427d-bac6-71a380ae0130-ceph\") pod \"nova-cell1-openstack-openstack-cell1-rfh4k\" (UID: \"4ab47679-8af5-427d-bac6-71a380ae0130\") " pod="openstack/nova-cell1-openstack-openstack-cell1-rfh4k" Nov 22 12:52:16 crc kubenswrapper[4772]: I1122 12:52:16.741516 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/4ab47679-8af5-427d-bac6-71a380ae0130-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-openstack-cell1-rfh4k\" (UID: \"4ab47679-8af5-427d-bac6-71a380ae0130\") " pod="openstack/nova-cell1-openstack-openstack-cell1-rfh4k" Nov 22 12:52:16 crc kubenswrapper[4772]: I1122 12:52:16.741566 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: 
\"kubernetes.io/secret/4ab47679-8af5-427d-bac6-71a380ae0130-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-openstack-cell1-rfh4k\" (UID: \"4ab47679-8af5-427d-bac6-71a380ae0130\") " pod="openstack/nova-cell1-openstack-openstack-cell1-rfh4k" Nov 22 12:52:16 crc kubenswrapper[4772]: I1122 12:52:16.741584 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4ab47679-8af5-427d-bac6-71a380ae0130-ssh-key\") pod \"nova-cell1-openstack-openstack-cell1-rfh4k\" (UID: \"4ab47679-8af5-427d-bac6-71a380ae0130\") " pod="openstack/nova-cell1-openstack-openstack-cell1-rfh4k" Nov 22 12:52:16 crc kubenswrapper[4772]: I1122 12:52:16.741608 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ab47679-8af5-427d-bac6-71a380ae0130-inventory\") pod \"nova-cell1-openstack-openstack-cell1-rfh4k\" (UID: \"4ab47679-8af5-427d-bac6-71a380ae0130\") " pod="openstack/nova-cell1-openstack-openstack-cell1-rfh4k" Nov 22 12:52:16 crc kubenswrapper[4772]: I1122 12:52:16.741651 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/4ab47679-8af5-427d-bac6-71a380ae0130-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-openstack-cell1-rfh4k\" (UID: \"4ab47679-8af5-427d-bac6-71a380ae0130\") " pod="openstack/nova-cell1-openstack-openstack-cell1-rfh4k" Nov 22 12:52:16 crc kubenswrapper[4772]: I1122 12:52:16.741694 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4ab47679-8af5-427d-bac6-71a380ae0130-ceph\") pod \"nova-cell1-openstack-openstack-cell1-rfh4k\" (UID: \"4ab47679-8af5-427d-bac6-71a380ae0130\") " pod="openstack/nova-cell1-openstack-openstack-cell1-rfh4k" Nov 22 12:52:16 crc kubenswrapper[4772]: I1122 12:52:16.741728 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/4ab47679-8af5-427d-bac6-71a380ae0130-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-openstack-cell1-rfh4k\" (UID: \"4ab47679-8af5-427d-bac6-71a380ae0130\") " pod="openstack/nova-cell1-openstack-openstack-cell1-rfh4k" Nov 22 12:52:16 crc kubenswrapper[4772]: I1122 12:52:16.741763 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/4ab47679-8af5-427d-bac6-71a380ae0130-nova-cells-global-config-1\") pod \"nova-cell1-openstack-openstack-cell1-rfh4k\" (UID: \"4ab47679-8af5-427d-bac6-71a380ae0130\") " pod="openstack/nova-cell1-openstack-openstack-cell1-rfh4k" Nov 22 12:52:16 crc kubenswrapper[4772]: I1122 12:52:16.741791 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zv8pn\" (UniqueName: \"kubernetes.io/projected/4ab47679-8af5-427d-bac6-71a380ae0130-kube-api-access-zv8pn\") pod \"nova-cell1-openstack-openstack-cell1-rfh4k\" (UID: \"4ab47679-8af5-427d-bac6-71a380ae0130\") " pod="openstack/nova-cell1-openstack-openstack-cell1-rfh4k" Nov 22 12:52:16 crc kubenswrapper[4772]: I1122 12:52:16.741816 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ab47679-8af5-427d-bac6-71a380ae0130-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-openstack-cell1-rfh4k\" (UID: 
\"4ab47679-8af5-427d-bac6-71a380ae0130\") " pod="openstack/nova-cell1-openstack-openstack-cell1-rfh4k" Nov 22 12:52:16 crc kubenswrapper[4772]: I1122 12:52:16.741837 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/4ab47679-8af5-427d-bac6-71a380ae0130-nova-cells-global-config-0\") pod \"nova-cell1-openstack-openstack-cell1-rfh4k\" (UID: \"4ab47679-8af5-427d-bac6-71a380ae0130\") " pod="openstack/nova-cell1-openstack-openstack-cell1-rfh4k" Nov 22 12:52:16 crc kubenswrapper[4772]: I1122 12:52:16.743589 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/4ab47679-8af5-427d-bac6-71a380ae0130-nova-cells-global-config-1\") pod \"nova-cell1-openstack-openstack-cell1-rfh4k\" (UID: \"4ab47679-8af5-427d-bac6-71a380ae0130\") " pod="openstack/nova-cell1-openstack-openstack-cell1-rfh4k" Nov 22 12:52:16 crc kubenswrapper[4772]: I1122 12:52:16.743597 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/4ab47679-8af5-427d-bac6-71a380ae0130-nova-cells-global-config-0\") pod \"nova-cell1-openstack-openstack-cell1-rfh4k\" (UID: \"4ab47679-8af5-427d-bac6-71a380ae0130\") " pod="openstack/nova-cell1-openstack-openstack-cell1-rfh4k" Nov 22 12:52:16 crc kubenswrapper[4772]: I1122 12:52:16.748815 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/4ab47679-8af5-427d-bac6-71a380ae0130-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-openstack-cell1-rfh4k\" (UID: \"4ab47679-8af5-427d-bac6-71a380ae0130\") " pod="openstack/nova-cell1-openstack-openstack-cell1-rfh4k" Nov 22 12:52:16 crc kubenswrapper[4772]: I1122 12:52:16.750262 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ab47679-8af5-427d-bac6-71a380ae0130-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-openstack-cell1-rfh4k\" (UID: \"4ab47679-8af5-427d-bac6-71a380ae0130\") " pod="openstack/nova-cell1-openstack-openstack-cell1-rfh4k" Nov 22 12:52:16 crc kubenswrapper[4772]: I1122 12:52:16.751077 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/4ab47679-8af5-427d-bac6-71a380ae0130-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-openstack-cell1-rfh4k\" (UID: \"4ab47679-8af5-427d-bac6-71a380ae0130\") " pod="openstack/nova-cell1-openstack-openstack-cell1-rfh4k" Nov 22 12:52:16 crc kubenswrapper[4772]: I1122 12:52:16.753744 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4ab47679-8af5-427d-bac6-71a380ae0130-ceph\") pod \"nova-cell1-openstack-openstack-cell1-rfh4k\" (UID: \"4ab47679-8af5-427d-bac6-71a380ae0130\") " pod="openstack/nova-cell1-openstack-openstack-cell1-rfh4k" Nov 22 12:52:16 crc kubenswrapper[4772]: I1122 12:52:16.755716 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ab47679-8af5-427d-bac6-71a380ae0130-inventory\") pod \"nova-cell1-openstack-openstack-cell1-rfh4k\" (UID: \"4ab47679-8af5-427d-bac6-71a380ae0130\") " pod="openstack/nova-cell1-openstack-openstack-cell1-rfh4k" Nov 22 12:52:16 crc kubenswrapper[4772]: I1122 12:52:16.756495 
4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/4ab47679-8af5-427d-bac6-71a380ae0130-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-openstack-cell1-rfh4k\" (UID: \"4ab47679-8af5-427d-bac6-71a380ae0130\") " pod="openstack/nova-cell1-openstack-openstack-cell1-rfh4k" Nov 22 12:52:16 crc kubenswrapper[4772]: I1122 12:52:16.758312 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/4ab47679-8af5-427d-bac6-71a380ae0130-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-openstack-cell1-rfh4k\" (UID: \"4ab47679-8af5-427d-bac6-71a380ae0130\") " pod="openstack/nova-cell1-openstack-openstack-cell1-rfh4k" Nov 22 12:52:16 crc kubenswrapper[4772]: I1122 12:52:16.759640 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4ab47679-8af5-427d-bac6-71a380ae0130-ssh-key\") pod \"nova-cell1-openstack-openstack-cell1-rfh4k\" (UID: \"4ab47679-8af5-427d-bac6-71a380ae0130\") " pod="openstack/nova-cell1-openstack-openstack-cell1-rfh4k" Nov 22 12:52:16 crc kubenswrapper[4772]: I1122 12:52:16.791713 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zv8pn\" (UniqueName: \"kubernetes.io/projected/4ab47679-8af5-427d-bac6-71a380ae0130-kube-api-access-zv8pn\") pod \"nova-cell1-openstack-openstack-cell1-rfh4k\" (UID: \"4ab47679-8af5-427d-bac6-71a380ae0130\") " pod="openstack/nova-cell1-openstack-openstack-cell1-rfh4k" Nov 22 12:52:16 crc kubenswrapper[4772]: I1122 12:52:16.859679 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-rfh4k" Nov 22 12:52:17 crc kubenswrapper[4772]: I1122 12:52:17.452119 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-openstack-cell1-rfh4k"] Nov 22 12:52:18 crc kubenswrapper[4772]: I1122 12:52:18.441980 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-rfh4k" event={"ID":"4ab47679-8af5-427d-bac6-71a380ae0130","Type":"ContainerStarted","Data":"6e7443d4e02f27a7e1c54b70a21ff27080d426f9c5bc6b1513a9043d612fd17c"} Nov 22 12:52:18 crc kubenswrapper[4772]: I1122 12:52:18.442565 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-rfh4k" event={"ID":"4ab47679-8af5-427d-bac6-71a380ae0130","Type":"ContainerStarted","Data":"4e540d942a7c40c3923393b8dd52b4ae9ce2ce543a8e8b8201edf6d1d593727d"} Nov 22 12:52:18 crc kubenswrapper[4772]: I1122 12:52:18.466835 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-openstack-openstack-cell1-rfh4k" podStartSLOduration=1.9744683699999999 podStartE2EDuration="2.466819045s" podCreationTimestamp="2025-11-22 12:52:16 +0000 UTC" firstStartedPulling="2025-11-22 12:52:17.515348466 +0000 UTC m=+8057.754792960" lastFinishedPulling="2025-11-22 12:52:18.007699141 +0000 UTC m=+8058.247143635" observedRunningTime="2025-11-22 12:52:18.4569058 +0000 UTC m=+8058.696350294" watchObservedRunningTime="2025-11-22 12:52:18.466819045 +0000 UTC m=+8058.706263539" Nov 22 12:52:27 crc kubenswrapper[4772]: I1122 12:52:27.414377 4772 scope.go:117] "RemoveContainer" containerID="0951475fb87f959b566a9e347ebcff9c0640430af1a33fe41eff88eca50bed05" Nov 22 12:52:27 crc kubenswrapper[4772]: E1122 12:52:27.415104 4772 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:52:41 crc kubenswrapper[4772]: I1122 12:52:41.424368 4772 scope.go:117] "RemoveContainer" containerID="0951475fb87f959b566a9e347ebcff9c0640430af1a33fe41eff88eca50bed05" Nov 22 12:52:41 crc kubenswrapper[4772]: E1122 12:52:41.425234 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:52:56 crc kubenswrapper[4772]: I1122 12:52:56.414790 4772 scope.go:117] "RemoveContainer" containerID="0951475fb87f959b566a9e347ebcff9c0640430af1a33fe41eff88eca50bed05" Nov 22 12:52:56 crc kubenswrapper[4772]: E1122 12:52:56.416307 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:53:10 crc kubenswrapper[4772]: I1122 12:53:10.414827 4772 scope.go:117] "RemoveContainer" containerID="0951475fb87f959b566a9e347ebcff9c0640430af1a33fe41eff88eca50bed05" Nov 22 12:53:10 crc kubenswrapper[4772]: E1122 12:53:10.417896 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:53:23 crc kubenswrapper[4772]: I1122 12:53:23.414794 4772 scope.go:117] "RemoveContainer" containerID="0951475fb87f959b566a9e347ebcff9c0640430af1a33fe41eff88eca50bed05" Nov 22 12:53:23 crc kubenswrapper[4772]: E1122 12:53:23.416120 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:53:38 crc kubenswrapper[4772]: I1122 12:53:38.417432 4772 scope.go:117] "RemoveContainer" containerID="0951475fb87f959b566a9e347ebcff9c0640430af1a33fe41eff88eca50bed05" Nov 22 12:53:38 crc kubenswrapper[4772]: E1122 12:53:38.418833 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:53:52 crc kubenswrapper[4772]: I1122 12:53:52.414374 4772 scope.go:117] "RemoveContainer" containerID="0951475fb87f959b566a9e347ebcff9c0640430af1a33fe41eff88eca50bed05" Nov 22 12:53:52 crc kubenswrapper[4772]: E1122 12:53:52.415887 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:54:04 crc kubenswrapper[4772]: I1122 12:54:04.413988 4772 scope.go:117] "RemoveContainer" containerID="0951475fb87f959b566a9e347ebcff9c0640430af1a33fe41eff88eca50bed05" Nov 22 12:54:04 crc kubenswrapper[4772]: E1122 12:54:04.414851 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:54:16 crc kubenswrapper[4772]: I1122 12:54:16.417478 4772 scope.go:117] "RemoveContainer" containerID="0951475fb87f959b566a9e347ebcff9c0640430af1a33fe41eff88eca50bed05" Nov 22 12:54:16 crc kubenswrapper[4772]: E1122 12:54:16.420235 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:54:27 crc kubenswrapper[4772]: I1122 12:54:27.415628 4772 scope.go:117] "RemoveContainer" containerID="0951475fb87f959b566a9e347ebcff9c0640430af1a33fe41eff88eca50bed05" Nov 22 12:54:27 crc kubenswrapper[4772]: E1122 12:54:27.416600 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 12:54:38 crc kubenswrapper[4772]: I1122 12:54:38.425115 4772 scope.go:117] "RemoveContainer" containerID="0951475fb87f959b566a9e347ebcff9c0640430af1a33fe41eff88eca50bed05" Nov 22 12:54:39 crc kubenswrapper[4772]: I1122 12:54:39.081626 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" event={"ID":"2386c238-461f-4956-940f-ac3c26eb052e","Type":"ContainerStarted","Data":"3628acab72a50b4358fc44a519d84d0c10d69eb2dbf2ce2890f222362bc6e085"} Nov 22 12:54:53 crc kubenswrapper[4772]: I1122 12:54:53.877574 4772 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-marketplace-h8rch"] Nov 22 12:54:53 crc kubenswrapper[4772]: I1122 12:54:53.883358 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h8rch" Nov 22 12:54:53 crc kubenswrapper[4772]: I1122 12:54:53.894304 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-h8rch"] Nov 22 12:54:53 crc kubenswrapper[4772]: I1122 12:54:53.915905 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27259\" (UniqueName: \"kubernetes.io/projected/4c6ad566-1c4c-4ebe-8509-dd3f67ea2041-kube-api-access-27259\") pod \"redhat-marketplace-h8rch\" (UID: \"4c6ad566-1c4c-4ebe-8509-dd3f67ea2041\") " pod="openshift-marketplace/redhat-marketplace-h8rch" Nov 22 12:54:53 crc kubenswrapper[4772]: I1122 12:54:53.916517 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c6ad566-1c4c-4ebe-8509-dd3f67ea2041-catalog-content\") pod \"redhat-marketplace-h8rch\" (UID: \"4c6ad566-1c4c-4ebe-8509-dd3f67ea2041\") " pod="openshift-marketplace/redhat-marketplace-h8rch" Nov 22 12:54:53 crc kubenswrapper[4772]: I1122 12:54:53.917272 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c6ad566-1c4c-4ebe-8509-dd3f67ea2041-utilities\") pod \"redhat-marketplace-h8rch\" (UID: \"4c6ad566-1c4c-4ebe-8509-dd3f67ea2041\") " pod="openshift-marketplace/redhat-marketplace-h8rch" Nov 22 12:54:54 crc kubenswrapper[4772]: I1122 12:54:54.019328 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c6ad566-1c4c-4ebe-8509-dd3f67ea2041-utilities\") pod \"redhat-marketplace-h8rch\" (UID: \"4c6ad566-1c4c-4ebe-8509-dd3f67ea2041\") " pod="openshift-marketplace/redhat-marketplace-h8rch" Nov 22 12:54:54 crc kubenswrapper[4772]: I1122 12:54:54.019479 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27259\" (UniqueName: \"kubernetes.io/projected/4c6ad566-1c4c-4ebe-8509-dd3f67ea2041-kube-api-access-27259\") pod \"redhat-marketplace-h8rch\" (UID: \"4c6ad566-1c4c-4ebe-8509-dd3f67ea2041\") " pod="openshift-marketplace/redhat-marketplace-h8rch" Nov 22 12:54:54 crc kubenswrapper[4772]: I1122 12:54:54.019521 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c6ad566-1c4c-4ebe-8509-dd3f67ea2041-catalog-content\") pod \"redhat-marketplace-h8rch\" (UID: \"4c6ad566-1c4c-4ebe-8509-dd3f67ea2041\") " pod="openshift-marketplace/redhat-marketplace-h8rch" Nov 22 12:54:54 crc kubenswrapper[4772]: I1122 12:54:54.019836 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c6ad566-1c4c-4ebe-8509-dd3f67ea2041-utilities\") pod \"redhat-marketplace-h8rch\" (UID: \"4c6ad566-1c4c-4ebe-8509-dd3f67ea2041\") " pod="openshift-marketplace/redhat-marketplace-h8rch" Nov 22 12:54:54 crc kubenswrapper[4772]: I1122 12:54:54.020077 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c6ad566-1c4c-4ebe-8509-dd3f67ea2041-catalog-content\") pod \"redhat-marketplace-h8rch\" (UID: \"4c6ad566-1c4c-4ebe-8509-dd3f67ea2041\") 
" pod="openshift-marketplace/redhat-marketplace-h8rch" Nov 22 12:54:54 crc kubenswrapper[4772]: I1122 12:54:54.043695 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27259\" (UniqueName: \"kubernetes.io/projected/4c6ad566-1c4c-4ebe-8509-dd3f67ea2041-kube-api-access-27259\") pod \"redhat-marketplace-h8rch\" (UID: \"4c6ad566-1c4c-4ebe-8509-dd3f67ea2041\") " pod="openshift-marketplace/redhat-marketplace-h8rch" Nov 22 12:54:54 crc kubenswrapper[4772]: I1122 12:54:54.210391 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h8rch" Nov 22 12:54:54 crc kubenswrapper[4772]: I1122 12:54:54.756342 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-h8rch"] Nov 22 12:54:54 crc kubenswrapper[4772]: W1122 12:54:54.761509 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c6ad566_1c4c_4ebe_8509_dd3f67ea2041.slice/crio-47f4e49a2b0f8b4b1639f1917a758b74c0a17fc33ac8c59e04e3fb4e1fb38d8e WatchSource:0}: Error finding container 47f4e49a2b0f8b4b1639f1917a758b74c0a17fc33ac8c59e04e3fb4e1fb38d8e: Status 404 returned error can't find the container with id 47f4e49a2b0f8b4b1639f1917a758b74c0a17fc33ac8c59e04e3fb4e1fb38d8e Nov 22 12:54:55 crc kubenswrapper[4772]: I1122 12:54:55.255090 4772 generic.go:334] "Generic (PLEG): container finished" podID="4c6ad566-1c4c-4ebe-8509-dd3f67ea2041" containerID="e2d6ee32d65c38d3bf495b194beef2c793b490b85ee66c421bb7c9c043864dea" exitCode=0 Nov 22 12:54:55 crc kubenswrapper[4772]: I1122 12:54:55.255182 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h8rch" event={"ID":"4c6ad566-1c4c-4ebe-8509-dd3f67ea2041","Type":"ContainerDied","Data":"e2d6ee32d65c38d3bf495b194beef2c793b490b85ee66c421bb7c9c043864dea"} Nov 22 12:54:55 crc kubenswrapper[4772]: I1122 12:54:55.255441 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h8rch" event={"ID":"4c6ad566-1c4c-4ebe-8509-dd3f67ea2041","Type":"ContainerStarted","Data":"47f4e49a2b0f8b4b1639f1917a758b74c0a17fc33ac8c59e04e3fb4e1fb38d8e"} Nov 22 12:54:57 crc kubenswrapper[4772]: I1122 12:54:57.278259 4772 generic.go:334] "Generic (PLEG): container finished" podID="4c6ad566-1c4c-4ebe-8509-dd3f67ea2041" containerID="2dd63e2543fe67f6d452ac805354ff5b673985a4fbfee3d22863310fe136c7c1" exitCode=0 Nov 22 12:54:57 crc kubenswrapper[4772]: I1122 12:54:57.278345 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h8rch" event={"ID":"4c6ad566-1c4c-4ebe-8509-dd3f67ea2041","Type":"ContainerDied","Data":"2dd63e2543fe67f6d452ac805354ff5b673985a4fbfee3d22863310fe136c7c1"} Nov 22 12:54:58 crc kubenswrapper[4772]: I1122 12:54:58.301881 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h8rch" event={"ID":"4c6ad566-1c4c-4ebe-8509-dd3f67ea2041","Type":"ContainerStarted","Data":"111e584e67f7c1f46675782ddc45e7ccfaef99c3883d4c371bd6ed44660e0b2a"} Nov 22 12:54:58 crc kubenswrapper[4772]: I1122 12:54:58.337016 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-h8rch" podStartSLOduration=2.908627553 podStartE2EDuration="5.336992535s" podCreationTimestamp="2025-11-22 12:54:53 +0000 UTC" firstStartedPulling="2025-11-22 12:54:55.257596333 +0000 UTC m=+8215.497040827" 
lastFinishedPulling="2025-11-22 12:54:57.685961315 +0000 UTC m=+8217.925405809" observedRunningTime="2025-11-22 12:54:58.331215532 +0000 UTC m=+8218.570660026" watchObservedRunningTime="2025-11-22 12:54:58.336992535 +0000 UTC m=+8218.576437029" Nov 22 12:55:04 crc kubenswrapper[4772]: I1122 12:55:04.211292 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-h8rch" Nov 22 12:55:04 crc kubenswrapper[4772]: I1122 12:55:04.211862 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-h8rch" Nov 22 12:55:04 crc kubenswrapper[4772]: I1122 12:55:04.269448 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-h8rch" Nov 22 12:55:04 crc kubenswrapper[4772]: I1122 12:55:04.472146 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-h8rch" Nov 22 12:55:04 crc kubenswrapper[4772]: I1122 12:55:04.527956 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-h8rch"] Nov 22 12:55:06 crc kubenswrapper[4772]: I1122 12:55:06.421091 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-h8rch" podUID="4c6ad566-1c4c-4ebe-8509-dd3f67ea2041" containerName="registry-server" containerID="cri-o://111e584e67f7c1f46675782ddc45e7ccfaef99c3883d4c371bd6ed44660e0b2a" gracePeriod=2 Nov 22 12:55:06 crc kubenswrapper[4772]: I1122 12:55:06.933297 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h8rch" Nov 22 12:55:06 crc kubenswrapper[4772]: I1122 12:55:06.977217 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c6ad566-1c4c-4ebe-8509-dd3f67ea2041-catalog-content\") pod \"4c6ad566-1c4c-4ebe-8509-dd3f67ea2041\" (UID: \"4c6ad566-1c4c-4ebe-8509-dd3f67ea2041\") " Nov 22 12:55:06 crc kubenswrapper[4772]: I1122 12:55:06.977307 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27259\" (UniqueName: \"kubernetes.io/projected/4c6ad566-1c4c-4ebe-8509-dd3f67ea2041-kube-api-access-27259\") pod \"4c6ad566-1c4c-4ebe-8509-dd3f67ea2041\" (UID: \"4c6ad566-1c4c-4ebe-8509-dd3f67ea2041\") " Nov 22 12:55:06 crc kubenswrapper[4772]: I1122 12:55:06.977338 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c6ad566-1c4c-4ebe-8509-dd3f67ea2041-utilities\") pod \"4c6ad566-1c4c-4ebe-8509-dd3f67ea2041\" (UID: \"4c6ad566-1c4c-4ebe-8509-dd3f67ea2041\") " Nov 22 12:55:06 crc kubenswrapper[4772]: I1122 12:55:06.978760 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c6ad566-1c4c-4ebe-8509-dd3f67ea2041-utilities" (OuterVolumeSpecName: "utilities") pod "4c6ad566-1c4c-4ebe-8509-dd3f67ea2041" (UID: "4c6ad566-1c4c-4ebe-8509-dd3f67ea2041"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 12:55:06 crc kubenswrapper[4772]: I1122 12:55:06.984729 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c6ad566-1c4c-4ebe-8509-dd3f67ea2041-kube-api-access-27259" (OuterVolumeSpecName: "kube-api-access-27259") pod "4c6ad566-1c4c-4ebe-8509-dd3f67ea2041" (UID: "4c6ad566-1c4c-4ebe-8509-dd3f67ea2041"). InnerVolumeSpecName "kube-api-access-27259". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:55:06 crc kubenswrapper[4772]: I1122 12:55:06.997188 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c6ad566-1c4c-4ebe-8509-dd3f67ea2041-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4c6ad566-1c4c-4ebe-8509-dd3f67ea2041" (UID: "4c6ad566-1c4c-4ebe-8509-dd3f67ea2041"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 12:55:07 crc kubenswrapper[4772]: I1122 12:55:07.079585 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c6ad566-1c4c-4ebe-8509-dd3f67ea2041-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 12:55:07 crc kubenswrapper[4772]: I1122 12:55:07.079622 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27259\" (UniqueName: \"kubernetes.io/projected/4c6ad566-1c4c-4ebe-8509-dd3f67ea2041-kube-api-access-27259\") on node \"crc\" DevicePath \"\"" Nov 22 12:55:07 crc kubenswrapper[4772]: I1122 12:55:07.079634 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c6ad566-1c4c-4ebe-8509-dd3f67ea2041-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 12:55:07 crc kubenswrapper[4772]: I1122 12:55:07.434012 4772 generic.go:334] "Generic (PLEG): container finished" podID="4c6ad566-1c4c-4ebe-8509-dd3f67ea2041" containerID="111e584e67f7c1f46675782ddc45e7ccfaef99c3883d4c371bd6ed44660e0b2a" exitCode=0 Nov 22 12:55:07 crc kubenswrapper[4772]: I1122 12:55:07.434090 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h8rch" event={"ID":"4c6ad566-1c4c-4ebe-8509-dd3f67ea2041","Type":"ContainerDied","Data":"111e584e67f7c1f46675782ddc45e7ccfaef99c3883d4c371bd6ed44660e0b2a"} Nov 22 12:55:07 crc kubenswrapper[4772]: I1122 12:55:07.434120 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h8rch" event={"ID":"4c6ad566-1c4c-4ebe-8509-dd3f67ea2041","Type":"ContainerDied","Data":"47f4e49a2b0f8b4b1639f1917a758b74c0a17fc33ac8c59e04e3fb4e1fb38d8e"} Nov 22 12:55:07 crc kubenswrapper[4772]: I1122 12:55:07.434141 4772 scope.go:117] "RemoveContainer" containerID="111e584e67f7c1f46675782ddc45e7ccfaef99c3883d4c371bd6ed44660e0b2a" Nov 22 12:55:07 crc kubenswrapper[4772]: I1122 12:55:07.434302 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h8rch" Nov 22 12:55:07 crc kubenswrapper[4772]: I1122 12:55:07.499825 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-h8rch"] Nov 22 12:55:07 crc kubenswrapper[4772]: I1122 12:55:07.507505 4772 scope.go:117] "RemoveContainer" containerID="2dd63e2543fe67f6d452ac805354ff5b673985a4fbfee3d22863310fe136c7c1" Nov 22 12:55:07 crc kubenswrapper[4772]: I1122 12:55:07.510367 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-h8rch"] Nov 22 12:55:07 crc kubenswrapper[4772]: I1122 12:55:07.549440 4772 scope.go:117] "RemoveContainer" containerID="e2d6ee32d65c38d3bf495b194beef2c793b490b85ee66c421bb7c9c043864dea" Nov 22 12:55:07 crc kubenswrapper[4772]: I1122 12:55:07.598788 4772 scope.go:117] "RemoveContainer" containerID="111e584e67f7c1f46675782ddc45e7ccfaef99c3883d4c371bd6ed44660e0b2a" Nov 22 12:55:07 crc kubenswrapper[4772]: E1122 12:55:07.599184 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"111e584e67f7c1f46675782ddc45e7ccfaef99c3883d4c371bd6ed44660e0b2a\": container with ID starting with 111e584e67f7c1f46675782ddc45e7ccfaef99c3883d4c371bd6ed44660e0b2a not found: ID does not exist" containerID="111e584e67f7c1f46675782ddc45e7ccfaef99c3883d4c371bd6ed44660e0b2a" Nov 22 12:55:07 crc kubenswrapper[4772]: I1122 12:55:07.599211 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"111e584e67f7c1f46675782ddc45e7ccfaef99c3883d4c371bd6ed44660e0b2a"} err="failed to get container status \"111e584e67f7c1f46675782ddc45e7ccfaef99c3883d4c371bd6ed44660e0b2a\": rpc error: code = NotFound desc = could not find container \"111e584e67f7c1f46675782ddc45e7ccfaef99c3883d4c371bd6ed44660e0b2a\": container with ID starting with 111e584e67f7c1f46675782ddc45e7ccfaef99c3883d4c371bd6ed44660e0b2a not found: ID does not exist" Nov 22 12:55:07 crc kubenswrapper[4772]: I1122 12:55:07.599233 4772 scope.go:117] "RemoveContainer" containerID="2dd63e2543fe67f6d452ac805354ff5b673985a4fbfee3d22863310fe136c7c1" Nov 22 12:55:07 crc kubenswrapper[4772]: E1122 12:55:07.599528 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2dd63e2543fe67f6d452ac805354ff5b673985a4fbfee3d22863310fe136c7c1\": container with ID starting with 2dd63e2543fe67f6d452ac805354ff5b673985a4fbfee3d22863310fe136c7c1 not found: ID does not exist" containerID="2dd63e2543fe67f6d452ac805354ff5b673985a4fbfee3d22863310fe136c7c1" Nov 22 12:55:07 crc kubenswrapper[4772]: I1122 12:55:07.599569 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2dd63e2543fe67f6d452ac805354ff5b673985a4fbfee3d22863310fe136c7c1"} err="failed to get container status \"2dd63e2543fe67f6d452ac805354ff5b673985a4fbfee3d22863310fe136c7c1\": rpc error: code = NotFound desc = could not find container \"2dd63e2543fe67f6d452ac805354ff5b673985a4fbfee3d22863310fe136c7c1\": container with ID starting with 2dd63e2543fe67f6d452ac805354ff5b673985a4fbfee3d22863310fe136c7c1 not found: ID does not exist" Nov 22 12:55:07 crc kubenswrapper[4772]: I1122 12:55:07.599597 4772 scope.go:117] "RemoveContainer" containerID="e2d6ee32d65c38d3bf495b194beef2c793b490b85ee66c421bb7c9c043864dea" Nov 22 12:55:07 crc kubenswrapper[4772]: E1122 12:55:07.599857 4772 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"e2d6ee32d65c38d3bf495b194beef2c793b490b85ee66c421bb7c9c043864dea\": container with ID starting with e2d6ee32d65c38d3bf495b194beef2c793b490b85ee66c421bb7c9c043864dea not found: ID does not exist" containerID="e2d6ee32d65c38d3bf495b194beef2c793b490b85ee66c421bb7c9c043864dea" Nov 22 12:55:07 crc kubenswrapper[4772]: I1122 12:55:07.599896 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2d6ee32d65c38d3bf495b194beef2c793b490b85ee66c421bb7c9c043864dea"} err="failed to get container status \"e2d6ee32d65c38d3bf495b194beef2c793b490b85ee66c421bb7c9c043864dea\": rpc error: code = NotFound desc = could not find container \"e2d6ee32d65c38d3bf495b194beef2c793b490b85ee66c421bb7c9c043864dea\": container with ID starting with e2d6ee32d65c38d3bf495b194beef2c793b490b85ee66c421bb7c9c043864dea not found: ID does not exist" Nov 22 12:55:09 crc kubenswrapper[4772]: I1122 12:55:09.428014 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c6ad566-1c4c-4ebe-8509-dd3f67ea2041" path="/var/lib/kubelet/pods/4c6ad566-1c4c-4ebe-8509-dd3f67ea2041/volumes" Nov 22 12:55:19 crc kubenswrapper[4772]: I1122 12:55:19.601639 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-rfh4k" event={"ID":"4ab47679-8af5-427d-bac6-71a380ae0130","Type":"ContainerDied","Data":"6e7443d4e02f27a7e1c54b70a21ff27080d426f9c5bc6b1513a9043d612fd17c"} Nov 22 12:55:19 crc kubenswrapper[4772]: I1122 12:55:19.601544 4772 generic.go:334] "Generic (PLEG): container finished" podID="4ab47679-8af5-427d-bac6-71a380ae0130" containerID="6e7443d4e02f27a7e1c54b70a21ff27080d426f9c5bc6b1513a9043d612fd17c" exitCode=0 Nov 22 12:55:21 crc kubenswrapper[4772]: I1122 12:55:21.123202 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-rfh4k" Nov 22 12:55:21 crc kubenswrapper[4772]: I1122 12:55:21.321419 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/4ab47679-8af5-427d-bac6-71a380ae0130-nova-cells-global-config-0\") pod \"4ab47679-8af5-427d-bac6-71a380ae0130\" (UID: \"4ab47679-8af5-427d-bac6-71a380ae0130\") " Nov 22 12:55:21 crc kubenswrapper[4772]: I1122 12:55:21.321871 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zv8pn\" (UniqueName: \"kubernetes.io/projected/4ab47679-8af5-427d-bac6-71a380ae0130-kube-api-access-zv8pn\") pod \"4ab47679-8af5-427d-bac6-71a380ae0130\" (UID: \"4ab47679-8af5-427d-bac6-71a380ae0130\") " Nov 22 12:55:21 crc kubenswrapper[4772]: I1122 12:55:21.321959 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ab47679-8af5-427d-bac6-71a380ae0130-inventory\") pod \"4ab47679-8af5-427d-bac6-71a380ae0130\" (UID: \"4ab47679-8af5-427d-bac6-71a380ae0130\") " Nov 22 12:55:21 crc kubenswrapper[4772]: I1122 12:55:21.322006 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/4ab47679-8af5-427d-bac6-71a380ae0130-nova-cell1-compute-config-1\") pod \"4ab47679-8af5-427d-bac6-71a380ae0130\" (UID: \"4ab47679-8af5-427d-bac6-71a380ae0130\") " Nov 22 12:55:21 crc kubenswrapper[4772]: I1122 12:55:21.322065 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/4ab47679-8af5-427d-bac6-71a380ae0130-nova-cell1-compute-config-0\") pod \"4ab47679-8af5-427d-bac6-71a380ae0130\" (UID: \"4ab47679-8af5-427d-bac6-71a380ae0130\") " Nov 22 12:55:21 crc kubenswrapper[4772]: I1122 12:55:21.322092 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4ab47679-8af5-427d-bac6-71a380ae0130-ssh-key\") pod \"4ab47679-8af5-427d-bac6-71a380ae0130\" (UID: \"4ab47679-8af5-427d-bac6-71a380ae0130\") " Nov 22 12:55:21 crc kubenswrapper[4772]: I1122 12:55:21.322154 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/4ab47679-8af5-427d-bac6-71a380ae0130-nova-migration-ssh-key-1\") pod \"4ab47679-8af5-427d-bac6-71a380ae0130\" (UID: \"4ab47679-8af5-427d-bac6-71a380ae0130\") " Nov 22 12:55:21 crc kubenswrapper[4772]: I1122 12:55:21.322187 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4ab47679-8af5-427d-bac6-71a380ae0130-ceph\") pod \"4ab47679-8af5-427d-bac6-71a380ae0130\" (UID: \"4ab47679-8af5-427d-bac6-71a380ae0130\") " Nov 22 12:55:21 crc kubenswrapper[4772]: I1122 12:55:21.322225 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ab47679-8af5-427d-bac6-71a380ae0130-nova-cell1-combined-ca-bundle\") pod \"4ab47679-8af5-427d-bac6-71a380ae0130\" (UID: \"4ab47679-8af5-427d-bac6-71a380ae0130\") " Nov 22 12:55:21 crc kubenswrapper[4772]: I1122 12:55:21.322252 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: 
\"kubernetes.io/secret/4ab47679-8af5-427d-bac6-71a380ae0130-nova-migration-ssh-key-0\") pod \"4ab47679-8af5-427d-bac6-71a380ae0130\" (UID: \"4ab47679-8af5-427d-bac6-71a380ae0130\") " Nov 22 12:55:21 crc kubenswrapper[4772]: I1122 12:55:21.322299 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/4ab47679-8af5-427d-bac6-71a380ae0130-nova-cells-global-config-1\") pod \"4ab47679-8af5-427d-bac6-71a380ae0130\" (UID: \"4ab47679-8af5-427d-bac6-71a380ae0130\") " Nov 22 12:55:21 crc kubenswrapper[4772]: I1122 12:55:21.329541 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ab47679-8af5-427d-bac6-71a380ae0130-kube-api-access-zv8pn" (OuterVolumeSpecName: "kube-api-access-zv8pn") pod "4ab47679-8af5-427d-bac6-71a380ae0130" (UID: "4ab47679-8af5-427d-bac6-71a380ae0130"). InnerVolumeSpecName "kube-api-access-zv8pn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:55:21 crc kubenswrapper[4772]: I1122 12:55:21.340598 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ab47679-8af5-427d-bac6-71a380ae0130-ceph" (OuterVolumeSpecName: "ceph") pod "4ab47679-8af5-427d-bac6-71a380ae0130" (UID: "4ab47679-8af5-427d-bac6-71a380ae0130"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:55:21 crc kubenswrapper[4772]: I1122 12:55:21.344152 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ab47679-8af5-427d-bac6-71a380ae0130-nova-cell1-combined-ca-bundle" (OuterVolumeSpecName: "nova-cell1-combined-ca-bundle") pod "4ab47679-8af5-427d-bac6-71a380ae0130" (UID: "4ab47679-8af5-427d-bac6-71a380ae0130"). InnerVolumeSpecName "nova-cell1-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:55:21 crc kubenswrapper[4772]: I1122 12:55:21.356832 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ab47679-8af5-427d-bac6-71a380ae0130-nova-cells-global-config-0" (OuterVolumeSpecName: "nova-cells-global-config-0") pod "4ab47679-8af5-427d-bac6-71a380ae0130" (UID: "4ab47679-8af5-427d-bac6-71a380ae0130"). InnerVolumeSpecName "nova-cells-global-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 12:55:21 crc kubenswrapper[4772]: I1122 12:55:21.362008 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ab47679-8af5-427d-bac6-71a380ae0130-nova-cells-global-config-1" (OuterVolumeSpecName: "nova-cells-global-config-1") pod "4ab47679-8af5-427d-bac6-71a380ae0130" (UID: "4ab47679-8af5-427d-bac6-71a380ae0130"). InnerVolumeSpecName "nova-cells-global-config-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 12:55:21 crc kubenswrapper[4772]: I1122 12:55:21.365809 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ab47679-8af5-427d-bac6-71a380ae0130-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "4ab47679-8af5-427d-bac6-71a380ae0130" (UID: "4ab47679-8af5-427d-bac6-71a380ae0130"). InnerVolumeSpecName "nova-cell1-compute-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:55:21 crc kubenswrapper[4772]: I1122 12:55:21.366157 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ab47679-8af5-427d-bac6-71a380ae0130-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "4ab47679-8af5-427d-bac6-71a380ae0130" (UID: "4ab47679-8af5-427d-bac6-71a380ae0130"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:55:21 crc kubenswrapper[4772]: I1122 12:55:21.376709 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ab47679-8af5-427d-bac6-71a380ae0130-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "4ab47679-8af5-427d-bac6-71a380ae0130" (UID: "4ab47679-8af5-427d-bac6-71a380ae0130"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:55:21 crc kubenswrapper[4772]: I1122 12:55:21.379839 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ab47679-8af5-427d-bac6-71a380ae0130-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "4ab47679-8af5-427d-bac6-71a380ae0130" (UID: "4ab47679-8af5-427d-bac6-71a380ae0130"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:55:21 crc kubenswrapper[4772]: I1122 12:55:21.380451 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ab47679-8af5-427d-bac6-71a380ae0130-inventory" (OuterVolumeSpecName: "inventory") pod "4ab47679-8af5-427d-bac6-71a380ae0130" (UID: "4ab47679-8af5-427d-bac6-71a380ae0130"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:55:21 crc kubenswrapper[4772]: I1122 12:55:21.387924 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ab47679-8af5-427d-bac6-71a380ae0130-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "4ab47679-8af5-427d-bac6-71a380ae0130" (UID: "4ab47679-8af5-427d-bac6-71a380ae0130"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:55:21 crc kubenswrapper[4772]: I1122 12:55:21.425655 4772 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/4ab47679-8af5-427d-bac6-71a380ae0130-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Nov 22 12:55:21 crc kubenswrapper[4772]: I1122 12:55:21.425699 4772 reconciler_common.go:293] "Volume detached for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/4ab47679-8af5-427d-bac6-71a380ae0130-nova-cells-global-config-1\") on node \"crc\" DevicePath \"\"" Nov 22 12:55:21 crc kubenswrapper[4772]: I1122 12:55:21.425712 4772 reconciler_common.go:293] "Volume detached for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/4ab47679-8af5-427d-bac6-71a380ae0130-nova-cells-global-config-0\") on node \"crc\" DevicePath \"\"" Nov 22 12:55:21 crc kubenswrapper[4772]: I1122 12:55:21.425725 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zv8pn\" (UniqueName: \"kubernetes.io/projected/4ab47679-8af5-427d-bac6-71a380ae0130-kube-api-access-zv8pn\") on node \"crc\" DevicePath \"\"" Nov 22 12:55:21 crc kubenswrapper[4772]: I1122 12:55:21.425739 4772 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ab47679-8af5-427d-bac6-71a380ae0130-inventory\") on node \"crc\" DevicePath \"\"" Nov 22 12:55:21 crc kubenswrapper[4772]: I1122 12:55:21.425753 4772 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/4ab47679-8af5-427d-bac6-71a380ae0130-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Nov 22 12:55:21 crc kubenswrapper[4772]: I1122 12:55:21.425765 4772 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/4ab47679-8af5-427d-bac6-71a380ae0130-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Nov 22 12:55:21 crc kubenswrapper[4772]: I1122 12:55:21.425775 4772 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4ab47679-8af5-427d-bac6-71a380ae0130-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 22 12:55:21 crc kubenswrapper[4772]: I1122 12:55:21.425787 4772 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/4ab47679-8af5-427d-bac6-71a380ae0130-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Nov 22 12:55:21 crc kubenswrapper[4772]: I1122 12:55:21.425798 4772 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4ab47679-8af5-427d-bac6-71a380ae0130-ceph\") on node \"crc\" DevicePath \"\"" Nov 22 12:55:21 crc kubenswrapper[4772]: I1122 12:55:21.425810 4772 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ab47679-8af5-427d-bac6-71a380ae0130-nova-cell1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 12:55:21 crc kubenswrapper[4772]: I1122 12:55:21.629388 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-rfh4k" event={"ID":"4ab47679-8af5-427d-bac6-71a380ae0130","Type":"ContainerDied","Data":"4e540d942a7c40c3923393b8dd52b4ae9ce2ce543a8e8b8201edf6d1d593727d"} Nov 22 12:55:21 crc kubenswrapper[4772]: I1122 12:55:21.629434 4772 pod_container_deletor.go:80] "Container not 
found in pod's containers" containerID="4e540d942a7c40c3923393b8dd52b4ae9ce2ce543a8e8b8201edf6d1d593727d" Nov 22 12:55:21 crc kubenswrapper[4772]: I1122 12:55:21.629510 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-rfh4k" Nov 22 12:55:21 crc kubenswrapper[4772]: E1122 12:55:21.656795 4772 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ab47679_8af5_427d_bac6_71a380ae0130.slice\": RecentStats: unable to find data in memory cache]" Nov 22 12:55:21 crc kubenswrapper[4772]: I1122 12:55:21.753414 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-openstack-openstack-cell1-hbpc7"] Nov 22 12:55:21 crc kubenswrapper[4772]: E1122 12:55:21.754027 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c6ad566-1c4c-4ebe-8509-dd3f67ea2041" containerName="registry-server" Nov 22 12:55:21 crc kubenswrapper[4772]: I1122 12:55:21.754057 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c6ad566-1c4c-4ebe-8509-dd3f67ea2041" containerName="registry-server" Nov 22 12:55:21 crc kubenswrapper[4772]: E1122 12:55:21.754102 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ab47679-8af5-427d-bac6-71a380ae0130" containerName="nova-cell1-openstack-openstack-cell1" Nov 22 12:55:21 crc kubenswrapper[4772]: I1122 12:55:21.754109 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ab47679-8af5-427d-bac6-71a380ae0130" containerName="nova-cell1-openstack-openstack-cell1" Nov 22 12:55:21 crc kubenswrapper[4772]: E1122 12:55:21.754130 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c6ad566-1c4c-4ebe-8509-dd3f67ea2041" containerName="extract-content" Nov 22 12:55:21 crc kubenswrapper[4772]: I1122 12:55:21.754136 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c6ad566-1c4c-4ebe-8509-dd3f67ea2041" containerName="extract-content" Nov 22 12:55:21 crc kubenswrapper[4772]: E1122 12:55:21.754155 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c6ad566-1c4c-4ebe-8509-dd3f67ea2041" containerName="extract-utilities" Nov 22 12:55:21 crc kubenswrapper[4772]: I1122 12:55:21.754162 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c6ad566-1c4c-4ebe-8509-dd3f67ea2041" containerName="extract-utilities" Nov 22 12:55:21 crc kubenswrapper[4772]: I1122 12:55:21.754402 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c6ad566-1c4c-4ebe-8509-dd3f67ea2041" containerName="registry-server" Nov 22 12:55:21 crc kubenswrapper[4772]: I1122 12:55:21.754422 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ab47679-8af5-427d-bac6-71a380ae0130" containerName="nova-cell1-openstack-openstack-cell1" Nov 22 12:55:21 crc kubenswrapper[4772]: I1122 12:55:21.755288 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-hbpc7" Nov 22 12:55:21 crc kubenswrapper[4772]: I1122 12:55:21.759840 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 22 12:55:21 crc kubenswrapper[4772]: I1122 12:55:21.760170 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-6s2nz" Nov 22 12:55:21 crc kubenswrapper[4772]: I1122 12:55:21.760185 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Nov 22 12:55:21 crc kubenswrapper[4772]: I1122 12:55:21.760372 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 22 12:55:21 crc kubenswrapper[4772]: I1122 12:55:21.760648 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 22 12:55:21 crc kubenswrapper[4772]: I1122 12:55:21.775741 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-openstack-openstack-cell1-hbpc7"] Nov 22 12:55:21 crc kubenswrapper[4772]: I1122 12:55:21.936008 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e9a56275-d25a-4d0b-9d8c-20c46f7b200e-ceph\") pod \"telemetry-openstack-openstack-cell1-hbpc7\" (UID: \"e9a56275-d25a-4d0b-9d8c-20c46f7b200e\") " pod="openstack/telemetry-openstack-openstack-cell1-hbpc7" Nov 22 12:55:21 crc kubenswrapper[4772]: I1122 12:55:21.936442 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e9a56275-d25a-4d0b-9d8c-20c46f7b200e-inventory\") pod \"telemetry-openstack-openstack-cell1-hbpc7\" (UID: \"e9a56275-d25a-4d0b-9d8c-20c46f7b200e\") " pod="openstack/telemetry-openstack-openstack-cell1-hbpc7" Nov 22 12:55:21 crc kubenswrapper[4772]: I1122 12:55:21.936484 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gc7c5\" (UniqueName: \"kubernetes.io/projected/e9a56275-d25a-4d0b-9d8c-20c46f7b200e-kube-api-access-gc7c5\") pod \"telemetry-openstack-openstack-cell1-hbpc7\" (UID: \"e9a56275-d25a-4d0b-9d8c-20c46f7b200e\") " pod="openstack/telemetry-openstack-openstack-cell1-hbpc7" Nov 22 12:55:21 crc kubenswrapper[4772]: I1122 12:55:21.936510 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9a56275-d25a-4d0b-9d8c-20c46f7b200e-telemetry-combined-ca-bundle\") pod \"telemetry-openstack-openstack-cell1-hbpc7\" (UID: \"e9a56275-d25a-4d0b-9d8c-20c46f7b200e\") " pod="openstack/telemetry-openstack-openstack-cell1-hbpc7" Nov 22 12:55:21 crc kubenswrapper[4772]: I1122 12:55:21.936566 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/e9a56275-d25a-4d0b-9d8c-20c46f7b200e-ceilometer-compute-config-data-1\") pod \"telemetry-openstack-openstack-cell1-hbpc7\" (UID: \"e9a56275-d25a-4d0b-9d8c-20c46f7b200e\") " pod="openstack/telemetry-openstack-openstack-cell1-hbpc7" Nov 22 12:55:21 crc kubenswrapper[4772]: I1122 12:55:21.936604 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: 
\"kubernetes.io/secret/e9a56275-d25a-4d0b-9d8c-20c46f7b200e-ceilometer-compute-config-data-0\") pod \"telemetry-openstack-openstack-cell1-hbpc7\" (UID: \"e9a56275-d25a-4d0b-9d8c-20c46f7b200e\") " pod="openstack/telemetry-openstack-openstack-cell1-hbpc7" Nov 22 12:55:21 crc kubenswrapper[4772]: I1122 12:55:21.937289 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/e9a56275-d25a-4d0b-9d8c-20c46f7b200e-ceilometer-compute-config-data-2\") pod \"telemetry-openstack-openstack-cell1-hbpc7\" (UID: \"e9a56275-d25a-4d0b-9d8c-20c46f7b200e\") " pod="openstack/telemetry-openstack-openstack-cell1-hbpc7" Nov 22 12:55:21 crc kubenswrapper[4772]: I1122 12:55:21.937482 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e9a56275-d25a-4d0b-9d8c-20c46f7b200e-ssh-key\") pod \"telemetry-openstack-openstack-cell1-hbpc7\" (UID: \"e9a56275-d25a-4d0b-9d8c-20c46f7b200e\") " pod="openstack/telemetry-openstack-openstack-cell1-hbpc7" Nov 22 12:55:22 crc kubenswrapper[4772]: I1122 12:55:22.039659 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e9a56275-d25a-4d0b-9d8c-20c46f7b200e-ceph\") pod \"telemetry-openstack-openstack-cell1-hbpc7\" (UID: \"e9a56275-d25a-4d0b-9d8c-20c46f7b200e\") " pod="openstack/telemetry-openstack-openstack-cell1-hbpc7" Nov 22 12:55:22 crc kubenswrapper[4772]: I1122 12:55:22.039738 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e9a56275-d25a-4d0b-9d8c-20c46f7b200e-inventory\") pod \"telemetry-openstack-openstack-cell1-hbpc7\" (UID: \"e9a56275-d25a-4d0b-9d8c-20c46f7b200e\") " pod="openstack/telemetry-openstack-openstack-cell1-hbpc7" Nov 22 12:55:22 crc kubenswrapper[4772]: I1122 12:55:22.039783 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gc7c5\" (UniqueName: \"kubernetes.io/projected/e9a56275-d25a-4d0b-9d8c-20c46f7b200e-kube-api-access-gc7c5\") pod \"telemetry-openstack-openstack-cell1-hbpc7\" (UID: \"e9a56275-d25a-4d0b-9d8c-20c46f7b200e\") " pod="openstack/telemetry-openstack-openstack-cell1-hbpc7" Nov 22 12:55:22 crc kubenswrapper[4772]: I1122 12:55:22.039822 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9a56275-d25a-4d0b-9d8c-20c46f7b200e-telemetry-combined-ca-bundle\") pod \"telemetry-openstack-openstack-cell1-hbpc7\" (UID: \"e9a56275-d25a-4d0b-9d8c-20c46f7b200e\") " pod="openstack/telemetry-openstack-openstack-cell1-hbpc7" Nov 22 12:55:22 crc kubenswrapper[4772]: I1122 12:55:22.039895 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/e9a56275-d25a-4d0b-9d8c-20c46f7b200e-ceilometer-compute-config-data-1\") pod \"telemetry-openstack-openstack-cell1-hbpc7\" (UID: \"e9a56275-d25a-4d0b-9d8c-20c46f7b200e\") " pod="openstack/telemetry-openstack-openstack-cell1-hbpc7" Nov 22 12:55:22 crc kubenswrapper[4772]: I1122 12:55:22.039929 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/e9a56275-d25a-4d0b-9d8c-20c46f7b200e-ceilometer-compute-config-data-0\") pod 
\"telemetry-openstack-openstack-cell1-hbpc7\" (UID: \"e9a56275-d25a-4d0b-9d8c-20c46f7b200e\") " pod="openstack/telemetry-openstack-openstack-cell1-hbpc7" Nov 22 12:55:22 crc kubenswrapper[4772]: I1122 12:55:22.040008 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/e9a56275-d25a-4d0b-9d8c-20c46f7b200e-ceilometer-compute-config-data-2\") pod \"telemetry-openstack-openstack-cell1-hbpc7\" (UID: \"e9a56275-d25a-4d0b-9d8c-20c46f7b200e\") " pod="openstack/telemetry-openstack-openstack-cell1-hbpc7" Nov 22 12:55:22 crc kubenswrapper[4772]: I1122 12:55:22.040064 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e9a56275-d25a-4d0b-9d8c-20c46f7b200e-ssh-key\") pod \"telemetry-openstack-openstack-cell1-hbpc7\" (UID: \"e9a56275-d25a-4d0b-9d8c-20c46f7b200e\") " pod="openstack/telemetry-openstack-openstack-cell1-hbpc7" Nov 22 12:55:22 crc kubenswrapper[4772]: I1122 12:55:22.047422 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/e9a56275-d25a-4d0b-9d8c-20c46f7b200e-ceilometer-compute-config-data-1\") pod \"telemetry-openstack-openstack-cell1-hbpc7\" (UID: \"e9a56275-d25a-4d0b-9d8c-20c46f7b200e\") " pod="openstack/telemetry-openstack-openstack-cell1-hbpc7" Nov 22 12:55:22 crc kubenswrapper[4772]: I1122 12:55:22.048542 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e9a56275-d25a-4d0b-9d8c-20c46f7b200e-inventory\") pod \"telemetry-openstack-openstack-cell1-hbpc7\" (UID: \"e9a56275-d25a-4d0b-9d8c-20c46f7b200e\") " pod="openstack/telemetry-openstack-openstack-cell1-hbpc7" Nov 22 12:55:22 crc kubenswrapper[4772]: I1122 12:55:22.048760 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e9a56275-d25a-4d0b-9d8c-20c46f7b200e-ceph\") pod \"telemetry-openstack-openstack-cell1-hbpc7\" (UID: \"e9a56275-d25a-4d0b-9d8c-20c46f7b200e\") " pod="openstack/telemetry-openstack-openstack-cell1-hbpc7" Nov 22 12:55:22 crc kubenswrapper[4772]: I1122 12:55:22.048982 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/e9a56275-d25a-4d0b-9d8c-20c46f7b200e-ceilometer-compute-config-data-0\") pod \"telemetry-openstack-openstack-cell1-hbpc7\" (UID: \"e9a56275-d25a-4d0b-9d8c-20c46f7b200e\") " pod="openstack/telemetry-openstack-openstack-cell1-hbpc7" Nov 22 12:55:22 crc kubenswrapper[4772]: I1122 12:55:22.050567 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/e9a56275-d25a-4d0b-9d8c-20c46f7b200e-ceilometer-compute-config-data-2\") pod \"telemetry-openstack-openstack-cell1-hbpc7\" (UID: \"e9a56275-d25a-4d0b-9d8c-20c46f7b200e\") " pod="openstack/telemetry-openstack-openstack-cell1-hbpc7" Nov 22 12:55:22 crc kubenswrapper[4772]: I1122 12:55:22.054100 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e9a56275-d25a-4d0b-9d8c-20c46f7b200e-ssh-key\") pod \"telemetry-openstack-openstack-cell1-hbpc7\" (UID: \"e9a56275-d25a-4d0b-9d8c-20c46f7b200e\") " pod="openstack/telemetry-openstack-openstack-cell1-hbpc7" Nov 22 12:55:22 crc kubenswrapper[4772]: I1122 12:55:22.061553 
4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9a56275-d25a-4d0b-9d8c-20c46f7b200e-telemetry-combined-ca-bundle\") pod \"telemetry-openstack-openstack-cell1-hbpc7\" (UID: \"e9a56275-d25a-4d0b-9d8c-20c46f7b200e\") " pod="openstack/telemetry-openstack-openstack-cell1-hbpc7" Nov 22 12:55:22 crc kubenswrapper[4772]: I1122 12:55:22.082884 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gc7c5\" (UniqueName: \"kubernetes.io/projected/e9a56275-d25a-4d0b-9d8c-20c46f7b200e-kube-api-access-gc7c5\") pod \"telemetry-openstack-openstack-cell1-hbpc7\" (UID: \"e9a56275-d25a-4d0b-9d8c-20c46f7b200e\") " pod="openstack/telemetry-openstack-openstack-cell1-hbpc7" Nov 22 12:55:22 crc kubenswrapper[4772]: I1122 12:55:22.084834 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-hbpc7" Nov 22 12:55:22 crc kubenswrapper[4772]: I1122 12:55:22.689720 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-openstack-openstack-cell1-hbpc7"] Nov 22 12:55:23 crc kubenswrapper[4772]: I1122 12:55:23.654797 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-hbpc7" event={"ID":"e9a56275-d25a-4d0b-9d8c-20c46f7b200e","Type":"ContainerStarted","Data":"32d9d1720772ac48f14b599cfcfb077c807de36ff9aad417485b8852db207b5c"} Nov 22 12:55:23 crc kubenswrapper[4772]: I1122 12:55:23.655304 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-hbpc7" event={"ID":"e9a56275-d25a-4d0b-9d8c-20c46f7b200e","Type":"ContainerStarted","Data":"6f4af44ac250d3299433417dc365f9e1d28984155e2533bf8f4ed90acf343b23"} Nov 22 12:55:23 crc kubenswrapper[4772]: I1122 12:55:23.683484 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-openstack-openstack-cell1-hbpc7" podStartSLOduration=2.211956339 podStartE2EDuration="2.68346386s" podCreationTimestamp="2025-11-22 12:55:21 +0000 UTC" firstStartedPulling="2025-11-22 12:55:22.68345467 +0000 UTC m=+8242.922899154" lastFinishedPulling="2025-11-22 12:55:23.154962181 +0000 UTC m=+8243.394406675" observedRunningTime="2025-11-22 12:55:23.672262903 +0000 UTC m=+8243.911707437" watchObservedRunningTime="2025-11-22 12:55:23.68346386 +0000 UTC m=+8243.922908364" Nov 22 12:56:18 crc kubenswrapper[4772]: I1122 12:56:18.324093 4772 scope.go:117] "RemoveContainer" containerID="4d97169a8b264164d2bfa4a221b6116cab109ccb777d78090a6ca8d820c7ed7e" Nov 22 12:56:18 crc kubenswrapper[4772]: I1122 12:56:18.369323 4772 scope.go:117] "RemoveContainer" containerID="52e1e8486dfdf8e1b9a86c2c5306f46352e31fc280f8a4083dda83916de02544" Nov 22 12:56:18 crc kubenswrapper[4772]: I1122 12:56:18.399548 4772 scope.go:117] "RemoveContainer" containerID="6e7b8b00c3d327079428ecfb7cda635cc1179e6dd4b9080e21eef245803ecf9b" Nov 22 12:57:01 crc kubenswrapper[4772]: I1122 12:57:01.532645 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 12:57:01 crc kubenswrapper[4772]: I1122 12:57:01.533277 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" 
podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 12:57:31 crc kubenswrapper[4772]: I1122 12:57:31.533001 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 12:57:31 crc kubenswrapper[4772]: I1122 12:57:31.533764 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 12:58:01 crc kubenswrapper[4772]: I1122 12:58:01.532890 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 12:58:01 crc kubenswrapper[4772]: I1122 12:58:01.533587 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 12:58:01 crc kubenswrapper[4772]: I1122 12:58:01.533653 4772 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" Nov 22 12:58:01 crc kubenswrapper[4772]: I1122 12:58:01.534645 4772 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3628acab72a50b4358fc44a519d84d0c10d69eb2dbf2ce2890f222362bc6e085"} pod="openshift-machine-config-operator/machine-config-daemon-wwshd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 12:58:01 crc kubenswrapper[4772]: I1122 12:58:01.534722 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" containerID="cri-o://3628acab72a50b4358fc44a519d84d0c10d69eb2dbf2ce2890f222362bc6e085" gracePeriod=600 Nov 22 12:58:02 crc kubenswrapper[4772]: I1122 12:58:02.517887 4772 generic.go:334] "Generic (PLEG): container finished" podID="2386c238-461f-4956-940f-ac3c26eb052e" containerID="3628acab72a50b4358fc44a519d84d0c10d69eb2dbf2ce2890f222362bc6e085" exitCode=0 Nov 22 12:58:02 crc kubenswrapper[4772]: I1122 12:58:02.517974 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" event={"ID":"2386c238-461f-4956-940f-ac3c26eb052e","Type":"ContainerDied","Data":"3628acab72a50b4358fc44a519d84d0c10d69eb2dbf2ce2890f222362bc6e085"} Nov 22 12:58:02 crc kubenswrapper[4772]: I1122 12:58:02.518512 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" 
event={"ID":"2386c238-461f-4956-940f-ac3c26eb052e","Type":"ContainerStarted","Data":"5a427682510f2025a1a305f31d041abd640d4c92247ada31a7f887bd361d20ff"} Nov 22 12:58:02 crc kubenswrapper[4772]: I1122 12:58:02.518537 4772 scope.go:117] "RemoveContainer" containerID="0951475fb87f959b566a9e347ebcff9c0640430af1a33fe41eff88eca50bed05" Nov 22 12:59:25 crc kubenswrapper[4772]: I1122 12:59:25.480090 4772 generic.go:334] "Generic (PLEG): container finished" podID="e9a56275-d25a-4d0b-9d8c-20c46f7b200e" containerID="32d9d1720772ac48f14b599cfcfb077c807de36ff9aad417485b8852db207b5c" exitCode=0 Nov 22 12:59:25 crc kubenswrapper[4772]: I1122 12:59:25.480169 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-hbpc7" event={"ID":"e9a56275-d25a-4d0b-9d8c-20c46f7b200e","Type":"ContainerDied","Data":"32d9d1720772ac48f14b599cfcfb077c807de36ff9aad417485b8852db207b5c"} Nov 22 12:59:27 crc kubenswrapper[4772]: I1122 12:59:26.999657 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-hbpc7" Nov 22 12:59:27 crc kubenswrapper[4772]: I1122 12:59:27.030542 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e9a56275-d25a-4d0b-9d8c-20c46f7b200e-inventory\") pod \"e9a56275-d25a-4d0b-9d8c-20c46f7b200e\" (UID: \"e9a56275-d25a-4d0b-9d8c-20c46f7b200e\") " Nov 22 12:59:27 crc kubenswrapper[4772]: I1122 12:59:27.030843 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e9a56275-d25a-4d0b-9d8c-20c46f7b200e-ceph\") pod \"e9a56275-d25a-4d0b-9d8c-20c46f7b200e\" (UID: \"e9a56275-d25a-4d0b-9d8c-20c46f7b200e\") " Nov 22 12:59:27 crc kubenswrapper[4772]: I1122 12:59:27.031042 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/e9a56275-d25a-4d0b-9d8c-20c46f7b200e-ceilometer-compute-config-data-1\") pod \"e9a56275-d25a-4d0b-9d8c-20c46f7b200e\" (UID: \"e9a56275-d25a-4d0b-9d8c-20c46f7b200e\") " Nov 22 12:59:27 crc kubenswrapper[4772]: I1122 12:59:27.031170 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9a56275-d25a-4d0b-9d8c-20c46f7b200e-telemetry-combined-ca-bundle\") pod \"e9a56275-d25a-4d0b-9d8c-20c46f7b200e\" (UID: \"e9a56275-d25a-4d0b-9d8c-20c46f7b200e\") " Nov 22 12:59:27 crc kubenswrapper[4772]: I1122 12:59:27.031269 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/e9a56275-d25a-4d0b-9d8c-20c46f7b200e-ceilometer-compute-config-data-2\") pod \"e9a56275-d25a-4d0b-9d8c-20c46f7b200e\" (UID: \"e9a56275-d25a-4d0b-9d8c-20c46f7b200e\") " Nov 22 12:59:27 crc kubenswrapper[4772]: I1122 12:59:27.031377 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gc7c5\" (UniqueName: \"kubernetes.io/projected/e9a56275-d25a-4d0b-9d8c-20c46f7b200e-kube-api-access-gc7c5\") pod \"e9a56275-d25a-4d0b-9d8c-20c46f7b200e\" (UID: \"e9a56275-d25a-4d0b-9d8c-20c46f7b200e\") " Nov 22 12:59:27 crc kubenswrapper[4772]: I1122 12:59:27.031447 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: 
\"kubernetes.io/secret/e9a56275-d25a-4d0b-9d8c-20c46f7b200e-ceilometer-compute-config-data-0\") pod \"e9a56275-d25a-4d0b-9d8c-20c46f7b200e\" (UID: \"e9a56275-d25a-4d0b-9d8c-20c46f7b200e\") " Nov 22 12:59:27 crc kubenswrapper[4772]: I1122 12:59:27.031535 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e9a56275-d25a-4d0b-9d8c-20c46f7b200e-ssh-key\") pod \"e9a56275-d25a-4d0b-9d8c-20c46f7b200e\" (UID: \"e9a56275-d25a-4d0b-9d8c-20c46f7b200e\") " Nov 22 12:59:27 crc kubenswrapper[4772]: I1122 12:59:27.066746 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9a56275-d25a-4d0b-9d8c-20c46f7b200e-kube-api-access-gc7c5" (OuterVolumeSpecName: "kube-api-access-gc7c5") pod "e9a56275-d25a-4d0b-9d8c-20c46f7b200e" (UID: "e9a56275-d25a-4d0b-9d8c-20c46f7b200e"). InnerVolumeSpecName "kube-api-access-gc7c5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 12:59:27 crc kubenswrapper[4772]: I1122 12:59:27.069641 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9a56275-d25a-4d0b-9d8c-20c46f7b200e-ceph" (OuterVolumeSpecName: "ceph") pod "e9a56275-d25a-4d0b-9d8c-20c46f7b200e" (UID: "e9a56275-d25a-4d0b-9d8c-20c46f7b200e"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:59:27 crc kubenswrapper[4772]: I1122 12:59:27.071274 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9a56275-d25a-4d0b-9d8c-20c46f7b200e-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "e9a56275-d25a-4d0b-9d8c-20c46f7b200e" (UID: "e9a56275-d25a-4d0b-9d8c-20c46f7b200e"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:59:27 crc kubenswrapper[4772]: I1122 12:59:27.100654 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9a56275-d25a-4d0b-9d8c-20c46f7b200e-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "e9a56275-d25a-4d0b-9d8c-20c46f7b200e" (UID: "e9a56275-d25a-4d0b-9d8c-20c46f7b200e"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:59:27 crc kubenswrapper[4772]: I1122 12:59:27.105452 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9a56275-d25a-4d0b-9d8c-20c46f7b200e-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "e9a56275-d25a-4d0b-9d8c-20c46f7b200e" (UID: "e9a56275-d25a-4d0b-9d8c-20c46f7b200e"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:59:27 crc kubenswrapper[4772]: I1122 12:59:27.120518 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9a56275-d25a-4d0b-9d8c-20c46f7b200e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e9a56275-d25a-4d0b-9d8c-20c46f7b200e" (UID: "e9a56275-d25a-4d0b-9d8c-20c46f7b200e"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:59:27 crc kubenswrapper[4772]: I1122 12:59:27.124421 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9a56275-d25a-4d0b-9d8c-20c46f7b200e-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "e9a56275-d25a-4d0b-9d8c-20c46f7b200e" (UID: "e9a56275-d25a-4d0b-9d8c-20c46f7b200e"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:59:27 crc kubenswrapper[4772]: I1122 12:59:27.124439 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9a56275-d25a-4d0b-9d8c-20c46f7b200e-inventory" (OuterVolumeSpecName: "inventory") pod "e9a56275-d25a-4d0b-9d8c-20c46f7b200e" (UID: "e9a56275-d25a-4d0b-9d8c-20c46f7b200e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 12:59:27 crc kubenswrapper[4772]: I1122 12:59:27.136585 4772 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e9a56275-d25a-4d0b-9d8c-20c46f7b200e-inventory\") on node \"crc\" DevicePath \"\"" Nov 22 12:59:27 crc kubenswrapper[4772]: I1122 12:59:27.136617 4772 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e9a56275-d25a-4d0b-9d8c-20c46f7b200e-ceph\") on node \"crc\" DevicePath \"\"" Nov 22 12:59:27 crc kubenswrapper[4772]: I1122 12:59:27.136630 4772 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/e9a56275-d25a-4d0b-9d8c-20c46f7b200e-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Nov 22 12:59:27 crc kubenswrapper[4772]: I1122 12:59:27.136642 4772 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9a56275-d25a-4d0b-9d8c-20c46f7b200e-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 12:59:27 crc kubenswrapper[4772]: I1122 12:59:27.136655 4772 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/e9a56275-d25a-4d0b-9d8c-20c46f7b200e-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Nov 22 12:59:27 crc kubenswrapper[4772]: I1122 12:59:27.136665 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gc7c5\" (UniqueName: \"kubernetes.io/projected/e9a56275-d25a-4d0b-9d8c-20c46f7b200e-kube-api-access-gc7c5\") on node \"crc\" DevicePath \"\"" Nov 22 12:59:27 crc kubenswrapper[4772]: I1122 12:59:27.136674 4772 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/e9a56275-d25a-4d0b-9d8c-20c46f7b200e-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Nov 22 12:59:27 crc kubenswrapper[4772]: I1122 12:59:27.136682 4772 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e9a56275-d25a-4d0b-9d8c-20c46f7b200e-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 22 12:59:27 crc kubenswrapper[4772]: I1122 12:59:27.509381 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-hbpc7" event={"ID":"e9a56275-d25a-4d0b-9d8c-20c46f7b200e","Type":"ContainerDied","Data":"6f4af44ac250d3299433417dc365f9e1d28984155e2533bf8f4ed90acf343b23"} Nov 22 12:59:27 crc 
kubenswrapper[4772]: I1122 12:59:27.509468 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f4af44ac250d3299433417dc365f9e1d28984155e2533bf8f4ed90acf343b23" Nov 22 12:59:27 crc kubenswrapper[4772]: I1122 12:59:27.509586 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-hbpc7" Nov 22 12:59:27 crc kubenswrapper[4772]: I1122 12:59:27.620339 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-sriov-openstack-openstack-cell1-kmwfl"] Nov 22 12:59:27 crc kubenswrapper[4772]: E1122 12:59:27.621805 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9a56275-d25a-4d0b-9d8c-20c46f7b200e" containerName="telemetry-openstack-openstack-cell1" Nov 22 12:59:27 crc kubenswrapper[4772]: I1122 12:59:27.621833 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9a56275-d25a-4d0b-9d8c-20c46f7b200e" containerName="telemetry-openstack-openstack-cell1" Nov 22 12:59:27 crc kubenswrapper[4772]: I1122 12:59:27.622178 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9a56275-d25a-4d0b-9d8c-20c46f7b200e" containerName="telemetry-openstack-openstack-cell1" Nov 22 12:59:27 crc kubenswrapper[4772]: I1122 12:59:27.623471 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-kmwfl" Nov 22 12:59:27 crc kubenswrapper[4772]: I1122 12:59:27.625837 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-sriov-agent-neutron-config" Nov 22 12:59:27 crc kubenswrapper[4772]: I1122 12:59:27.626016 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-6s2nz" Nov 22 12:59:27 crc kubenswrapper[4772]: I1122 12:59:27.626262 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 22 12:59:27 crc kubenswrapper[4772]: I1122 12:59:27.626400 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 22 12:59:27 crc kubenswrapper[4772]: I1122 12:59:27.626800 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 22 12:59:27 crc kubenswrapper[4772]: I1122 12:59:27.632836 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-sriov-openstack-openstack-cell1-kmwfl"] Nov 22 12:59:27 crc kubenswrapper[4772]: I1122 12:59:27.650938 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5f0aff0b-9692-4d9c-a0be-12694f7e71f8-ceph\") pod \"neutron-sriov-openstack-openstack-cell1-kmwfl\" (UID: \"5f0aff0b-9692-4d9c-a0be-12694f7e71f8\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-kmwfl" Nov 22 12:59:27 crc kubenswrapper[4772]: I1122 12:59:27.651119 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hpls\" (UniqueName: \"kubernetes.io/projected/5f0aff0b-9692-4d9c-a0be-12694f7e71f8-kube-api-access-7hpls\") pod \"neutron-sriov-openstack-openstack-cell1-kmwfl\" (UID: \"5f0aff0b-9692-4d9c-a0be-12694f7e71f8\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-kmwfl" Nov 22 12:59:27 crc kubenswrapper[4772]: I1122 12:59:27.651255 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" 
(UniqueName: \"kubernetes.io/secret/5f0aff0b-9692-4d9c-a0be-12694f7e71f8-ssh-key\") pod \"neutron-sriov-openstack-openstack-cell1-kmwfl\" (UID: \"5f0aff0b-9692-4d9c-a0be-12694f7e71f8\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-kmwfl" Nov 22 12:59:27 crc kubenswrapper[4772]: I1122 12:59:27.651285 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5f0aff0b-9692-4d9c-a0be-12694f7e71f8-neutron-sriov-agent-neutron-config-0\") pod \"neutron-sriov-openstack-openstack-cell1-kmwfl\" (UID: \"5f0aff0b-9692-4d9c-a0be-12694f7e71f8\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-kmwfl" Nov 22 12:59:27 crc kubenswrapper[4772]: I1122 12:59:27.651340 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5f0aff0b-9692-4d9c-a0be-12694f7e71f8-inventory\") pod \"neutron-sriov-openstack-openstack-cell1-kmwfl\" (UID: \"5f0aff0b-9692-4d9c-a0be-12694f7e71f8\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-kmwfl" Nov 22 12:59:27 crc kubenswrapper[4772]: I1122 12:59:27.651371 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f0aff0b-9692-4d9c-a0be-12694f7e71f8-neutron-sriov-combined-ca-bundle\") pod \"neutron-sriov-openstack-openstack-cell1-kmwfl\" (UID: \"5f0aff0b-9692-4d9c-a0be-12694f7e71f8\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-kmwfl" Nov 22 12:59:27 crc kubenswrapper[4772]: I1122 12:59:27.756675 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5f0aff0b-9692-4d9c-a0be-12694f7e71f8-ssh-key\") pod \"neutron-sriov-openstack-openstack-cell1-kmwfl\" (UID: \"5f0aff0b-9692-4d9c-a0be-12694f7e71f8\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-kmwfl" Nov 22 12:59:27 crc kubenswrapper[4772]: I1122 12:59:27.756744 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5f0aff0b-9692-4d9c-a0be-12694f7e71f8-neutron-sriov-agent-neutron-config-0\") pod \"neutron-sriov-openstack-openstack-cell1-kmwfl\" (UID: \"5f0aff0b-9692-4d9c-a0be-12694f7e71f8\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-kmwfl" Nov 22 12:59:27 crc kubenswrapper[4772]: I1122 12:59:27.756826 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5f0aff0b-9692-4d9c-a0be-12694f7e71f8-inventory\") pod \"neutron-sriov-openstack-openstack-cell1-kmwfl\" (UID: \"5f0aff0b-9692-4d9c-a0be-12694f7e71f8\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-kmwfl" Nov 22 12:59:27 crc kubenswrapper[4772]: I1122 12:59:27.756860 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f0aff0b-9692-4d9c-a0be-12694f7e71f8-neutron-sriov-combined-ca-bundle\") pod \"neutron-sriov-openstack-openstack-cell1-kmwfl\" (UID: \"5f0aff0b-9692-4d9c-a0be-12694f7e71f8\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-kmwfl" Nov 22 12:59:27 crc kubenswrapper[4772]: I1122 12:59:27.756937 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/5f0aff0b-9692-4d9c-a0be-12694f7e71f8-ceph\") pod \"neutron-sriov-openstack-openstack-cell1-kmwfl\" (UID: \"5f0aff0b-9692-4d9c-a0be-12694f7e71f8\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-kmwfl" Nov 22 12:59:27 crc kubenswrapper[4772]: I1122 12:59:27.757103 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hpls\" (UniqueName: \"kubernetes.io/projected/5f0aff0b-9692-4d9c-a0be-12694f7e71f8-kube-api-access-7hpls\") pod \"neutron-sriov-openstack-openstack-cell1-kmwfl\" (UID: \"5f0aff0b-9692-4d9c-a0be-12694f7e71f8\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-kmwfl" Nov 22 12:59:27 crc kubenswrapper[4772]: I1122 12:59:27.761981 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5f0aff0b-9692-4d9c-a0be-12694f7e71f8-ssh-key\") pod \"neutron-sriov-openstack-openstack-cell1-kmwfl\" (UID: \"5f0aff0b-9692-4d9c-a0be-12694f7e71f8\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-kmwfl" Nov 22 12:59:27 crc kubenswrapper[4772]: I1122 12:59:27.762254 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5f0aff0b-9692-4d9c-a0be-12694f7e71f8-inventory\") pod \"neutron-sriov-openstack-openstack-cell1-kmwfl\" (UID: \"5f0aff0b-9692-4d9c-a0be-12694f7e71f8\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-kmwfl" Nov 22 12:59:27 crc kubenswrapper[4772]: I1122 12:59:27.763284 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f0aff0b-9692-4d9c-a0be-12694f7e71f8-neutron-sriov-combined-ca-bundle\") pod \"neutron-sriov-openstack-openstack-cell1-kmwfl\" (UID: \"5f0aff0b-9692-4d9c-a0be-12694f7e71f8\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-kmwfl" Nov 22 12:59:27 crc kubenswrapper[4772]: I1122 12:59:27.763908 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5f0aff0b-9692-4d9c-a0be-12694f7e71f8-ceph\") pod \"neutron-sriov-openstack-openstack-cell1-kmwfl\" (UID: \"5f0aff0b-9692-4d9c-a0be-12694f7e71f8\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-kmwfl" Nov 22 12:59:27 crc kubenswrapper[4772]: I1122 12:59:27.764587 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5f0aff0b-9692-4d9c-a0be-12694f7e71f8-neutron-sriov-agent-neutron-config-0\") pod \"neutron-sriov-openstack-openstack-cell1-kmwfl\" (UID: \"5f0aff0b-9692-4d9c-a0be-12694f7e71f8\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-kmwfl" Nov 22 12:59:27 crc kubenswrapper[4772]: I1122 12:59:27.779746 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hpls\" (UniqueName: \"kubernetes.io/projected/5f0aff0b-9692-4d9c-a0be-12694f7e71f8-kube-api-access-7hpls\") pod \"neutron-sriov-openstack-openstack-cell1-kmwfl\" (UID: \"5f0aff0b-9692-4d9c-a0be-12694f7e71f8\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-kmwfl" Nov 22 12:59:27 crc kubenswrapper[4772]: I1122 12:59:27.951000 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-kmwfl" Nov 22 12:59:28 crc kubenswrapper[4772]: I1122 12:59:28.534375 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-sriov-openstack-openstack-cell1-kmwfl"] Nov 22 12:59:28 crc kubenswrapper[4772]: I1122 12:59:28.555145 4772 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 12:59:29 crc kubenswrapper[4772]: I1122 12:59:29.530239 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-kmwfl" event={"ID":"5f0aff0b-9692-4d9c-a0be-12694f7e71f8","Type":"ContainerStarted","Data":"632646a7ea2fda4e4847af67898ac3aca2a2c88c182557ff7b506165f991aa98"} Nov 22 12:59:29 crc kubenswrapper[4772]: I1122 12:59:29.530871 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-kmwfl" event={"ID":"5f0aff0b-9692-4d9c-a0be-12694f7e71f8","Type":"ContainerStarted","Data":"68d0acf48a67439252a8306d159053f53b64c88c33dc7781334a9d28296a5766"} Nov 22 12:59:29 crc kubenswrapper[4772]: I1122 12:59:29.555764 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-sriov-openstack-openstack-cell1-kmwfl" podStartSLOduration=2.069473199 podStartE2EDuration="2.55574301s" podCreationTimestamp="2025-11-22 12:59:27 +0000 UTC" firstStartedPulling="2025-11-22 12:59:28.554873576 +0000 UTC m=+8488.794318070" lastFinishedPulling="2025-11-22 12:59:29.041143347 +0000 UTC m=+8489.280587881" observedRunningTime="2025-11-22 12:59:29.546174622 +0000 UTC m=+8489.785619126" watchObservedRunningTime="2025-11-22 12:59:29.55574301 +0000 UTC m=+8489.795187524" Nov 22 13:00:00 crc kubenswrapper[4772]: I1122 13:00:00.145064 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396940-r7p67"] Nov 22 13:00:00 crc kubenswrapper[4772]: I1122 13:00:00.147184 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396940-r7p67" Nov 22 13:00:00 crc kubenswrapper[4772]: I1122 13:00:00.150136 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 22 13:00:00 crc kubenswrapper[4772]: I1122 13:00:00.153989 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 22 13:00:00 crc kubenswrapper[4772]: I1122 13:00:00.159648 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396940-r7p67"] Nov 22 13:00:00 crc kubenswrapper[4772]: I1122 13:00:00.246755 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3554c1d5-99a7-4350-a1bb-81ddf89c9a9b-config-volume\") pod \"collect-profiles-29396940-r7p67\" (UID: \"3554c1d5-99a7-4350-a1bb-81ddf89c9a9b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396940-r7p67" Nov 22 13:00:00 crc kubenswrapper[4772]: I1122 13:00:00.247083 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drw2g\" (UniqueName: \"kubernetes.io/projected/3554c1d5-99a7-4350-a1bb-81ddf89c9a9b-kube-api-access-drw2g\") pod \"collect-profiles-29396940-r7p67\" (UID: \"3554c1d5-99a7-4350-a1bb-81ddf89c9a9b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396940-r7p67" Nov 22 13:00:00 crc kubenswrapper[4772]: I1122 13:00:00.247334 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3554c1d5-99a7-4350-a1bb-81ddf89c9a9b-secret-volume\") pod \"collect-profiles-29396940-r7p67\" (UID: \"3554c1d5-99a7-4350-a1bb-81ddf89c9a9b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396940-r7p67" Nov 22 13:00:00 crc kubenswrapper[4772]: I1122 13:00:00.349160 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drw2g\" (UniqueName: \"kubernetes.io/projected/3554c1d5-99a7-4350-a1bb-81ddf89c9a9b-kube-api-access-drw2g\") pod \"collect-profiles-29396940-r7p67\" (UID: \"3554c1d5-99a7-4350-a1bb-81ddf89c9a9b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396940-r7p67" Nov 22 13:00:00 crc kubenswrapper[4772]: I1122 13:00:00.349264 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3554c1d5-99a7-4350-a1bb-81ddf89c9a9b-secret-volume\") pod \"collect-profiles-29396940-r7p67\" (UID: \"3554c1d5-99a7-4350-a1bb-81ddf89c9a9b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396940-r7p67" Nov 22 13:00:00 crc kubenswrapper[4772]: I1122 13:00:00.349338 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3554c1d5-99a7-4350-a1bb-81ddf89c9a9b-config-volume\") pod \"collect-profiles-29396940-r7p67\" (UID: \"3554c1d5-99a7-4350-a1bb-81ddf89c9a9b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396940-r7p67" Nov 22 13:00:00 crc kubenswrapper[4772]: I1122 13:00:00.350302 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3554c1d5-99a7-4350-a1bb-81ddf89c9a9b-config-volume\") pod 
\"collect-profiles-29396940-r7p67\" (UID: \"3554c1d5-99a7-4350-a1bb-81ddf89c9a9b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396940-r7p67" Nov 22 13:00:00 crc kubenswrapper[4772]: I1122 13:00:00.364972 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3554c1d5-99a7-4350-a1bb-81ddf89c9a9b-secret-volume\") pod \"collect-profiles-29396940-r7p67\" (UID: \"3554c1d5-99a7-4350-a1bb-81ddf89c9a9b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396940-r7p67" Nov 22 13:00:00 crc kubenswrapper[4772]: I1122 13:00:00.366261 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drw2g\" (UniqueName: \"kubernetes.io/projected/3554c1d5-99a7-4350-a1bb-81ddf89c9a9b-kube-api-access-drw2g\") pod \"collect-profiles-29396940-r7p67\" (UID: \"3554c1d5-99a7-4350-a1bb-81ddf89c9a9b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396940-r7p67" Nov 22 13:00:00 crc kubenswrapper[4772]: I1122 13:00:00.474289 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396940-r7p67" Nov 22 13:00:00 crc kubenswrapper[4772]: I1122 13:00:00.947809 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396940-r7p67"] Nov 22 13:00:00 crc kubenswrapper[4772]: I1122 13:00:00.964620 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396940-r7p67" event={"ID":"3554c1d5-99a7-4350-a1bb-81ddf89c9a9b","Type":"ContainerStarted","Data":"6bb0a76d3e5305a4badc5f6c87c975ead0f9cf53e9ad7caf825ad3784cf4d438"} Nov 22 13:00:01 crc kubenswrapper[4772]: I1122 13:00:01.536102 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 13:00:01 crc kubenswrapper[4772]: I1122 13:00:01.536405 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 13:00:01 crc kubenswrapper[4772]: I1122 13:00:01.975032 4772 generic.go:334] "Generic (PLEG): container finished" podID="3554c1d5-99a7-4350-a1bb-81ddf89c9a9b" containerID="ebf3111f328878520a2189654f3cf98f155876cda8344d38a7a95b52c42cb6f5" exitCode=0 Nov 22 13:00:01 crc kubenswrapper[4772]: I1122 13:00:01.975094 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396940-r7p67" event={"ID":"3554c1d5-99a7-4350-a1bb-81ddf89c9a9b","Type":"ContainerDied","Data":"ebf3111f328878520a2189654f3cf98f155876cda8344d38a7a95b52c42cb6f5"} Nov 22 13:00:03 crc kubenswrapper[4772]: I1122 13:00:03.441091 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396940-r7p67" Nov 22 13:00:03 crc kubenswrapper[4772]: I1122 13:00:03.518878 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3554c1d5-99a7-4350-a1bb-81ddf89c9a9b-config-volume\") pod \"3554c1d5-99a7-4350-a1bb-81ddf89c9a9b\" (UID: \"3554c1d5-99a7-4350-a1bb-81ddf89c9a9b\") " Nov 22 13:00:03 crc kubenswrapper[4772]: I1122 13:00:03.519185 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3554c1d5-99a7-4350-a1bb-81ddf89c9a9b-secret-volume\") pod \"3554c1d5-99a7-4350-a1bb-81ddf89c9a9b\" (UID: \"3554c1d5-99a7-4350-a1bb-81ddf89c9a9b\") " Nov 22 13:00:03 crc kubenswrapper[4772]: I1122 13:00:03.519243 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drw2g\" (UniqueName: \"kubernetes.io/projected/3554c1d5-99a7-4350-a1bb-81ddf89c9a9b-kube-api-access-drw2g\") pod \"3554c1d5-99a7-4350-a1bb-81ddf89c9a9b\" (UID: \"3554c1d5-99a7-4350-a1bb-81ddf89c9a9b\") " Nov 22 13:00:03 crc kubenswrapper[4772]: I1122 13:00:03.519475 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3554c1d5-99a7-4350-a1bb-81ddf89c9a9b-config-volume" (OuterVolumeSpecName: "config-volume") pod "3554c1d5-99a7-4350-a1bb-81ddf89c9a9b" (UID: "3554c1d5-99a7-4350-a1bb-81ddf89c9a9b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 13:00:03 crc kubenswrapper[4772]: I1122 13:00:03.520483 4772 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3554c1d5-99a7-4350-a1bb-81ddf89c9a9b-config-volume\") on node \"crc\" DevicePath \"\"" Nov 22 13:00:03 crc kubenswrapper[4772]: I1122 13:00:03.524926 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3554c1d5-99a7-4350-a1bb-81ddf89c9a9b-kube-api-access-drw2g" (OuterVolumeSpecName: "kube-api-access-drw2g") pod "3554c1d5-99a7-4350-a1bb-81ddf89c9a9b" (UID: "3554c1d5-99a7-4350-a1bb-81ddf89c9a9b"). InnerVolumeSpecName "kube-api-access-drw2g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 13:00:03 crc kubenswrapper[4772]: I1122 13:00:03.531945 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3554c1d5-99a7-4350-a1bb-81ddf89c9a9b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3554c1d5-99a7-4350-a1bb-81ddf89c9a9b" (UID: "3554c1d5-99a7-4350-a1bb-81ddf89c9a9b"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 13:00:03 crc kubenswrapper[4772]: I1122 13:00:03.622630 4772 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3554c1d5-99a7-4350-a1bb-81ddf89c9a9b-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 22 13:00:03 crc kubenswrapper[4772]: I1122 13:00:03.622668 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drw2g\" (UniqueName: \"kubernetes.io/projected/3554c1d5-99a7-4350-a1bb-81ddf89c9a9b-kube-api-access-drw2g\") on node \"crc\" DevicePath \"\"" Nov 22 13:00:03 crc kubenswrapper[4772]: I1122 13:00:03.999094 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396940-r7p67" event={"ID":"3554c1d5-99a7-4350-a1bb-81ddf89c9a9b","Type":"ContainerDied","Data":"6bb0a76d3e5305a4badc5f6c87c975ead0f9cf53e9ad7caf825ad3784cf4d438"} Nov 22 13:00:03 crc kubenswrapper[4772]: I1122 13:00:03.999449 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6bb0a76d3e5305a4badc5f6c87c975ead0f9cf53e9ad7caf825ad3784cf4d438" Nov 22 13:00:03 crc kubenswrapper[4772]: I1122 13:00:03.999236 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396940-r7p67" Nov 22 13:00:04 crc kubenswrapper[4772]: I1122 13:00:04.543703 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396895-pxxr9"] Nov 22 13:00:04 crc kubenswrapper[4772]: I1122 13:00:04.552552 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396895-pxxr9"] Nov 22 13:00:05 crc kubenswrapper[4772]: I1122 13:00:05.433376 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="399bf4ec-b345-4e53-bb0d-7c0a90f3c12a" path="/var/lib/kubelet/pods/399bf4ec-b345-4e53-bb0d-7c0a90f3c12a/volumes" Nov 22 13:00:18 crc kubenswrapper[4772]: I1122 13:00:18.561588 4772 scope.go:117] "RemoveContainer" containerID="91e9a229fc0b51b699c0012cec088748cf6c5f57dcadde7f14ac58c91f760e59" Nov 22 13:00:31 crc kubenswrapper[4772]: I1122 13:00:31.533357 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 13:00:31 crc kubenswrapper[4772]: I1122 13:00:31.535637 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 13:00:50 crc kubenswrapper[4772]: I1122 13:00:50.030339 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-p64zh"] Nov 22 13:00:50 crc kubenswrapper[4772]: E1122 13:00:50.031602 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3554c1d5-99a7-4350-a1bb-81ddf89c9a9b" containerName="collect-profiles" Nov 22 13:00:50 crc kubenswrapper[4772]: I1122 13:00:50.031620 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="3554c1d5-99a7-4350-a1bb-81ddf89c9a9b" containerName="collect-profiles" Nov 22 13:00:50 crc kubenswrapper[4772]: 
I1122 13:00:50.031945 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="3554c1d5-99a7-4350-a1bb-81ddf89c9a9b" containerName="collect-profiles" Nov 22 13:00:50 crc kubenswrapper[4772]: I1122 13:00:50.033874 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p64zh" Nov 22 13:00:50 crc kubenswrapper[4772]: I1122 13:00:50.054446 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p64zh"] Nov 22 13:00:50 crc kubenswrapper[4772]: I1122 13:00:50.163847 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8q2j\" (UniqueName: \"kubernetes.io/projected/030bb6a5-c706-424e-bf13-69bc058e8e10-kube-api-access-g8q2j\") pod \"community-operators-p64zh\" (UID: \"030bb6a5-c706-424e-bf13-69bc058e8e10\") " pod="openshift-marketplace/community-operators-p64zh" Nov 22 13:00:50 crc kubenswrapper[4772]: I1122 13:00:50.164260 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/030bb6a5-c706-424e-bf13-69bc058e8e10-utilities\") pod \"community-operators-p64zh\" (UID: \"030bb6a5-c706-424e-bf13-69bc058e8e10\") " pod="openshift-marketplace/community-operators-p64zh" Nov 22 13:00:50 crc kubenswrapper[4772]: I1122 13:00:50.164392 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/030bb6a5-c706-424e-bf13-69bc058e8e10-catalog-content\") pod \"community-operators-p64zh\" (UID: \"030bb6a5-c706-424e-bf13-69bc058e8e10\") " pod="openshift-marketplace/community-operators-p64zh" Nov 22 13:00:50 crc kubenswrapper[4772]: I1122 13:00:50.266773 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/030bb6a5-c706-424e-bf13-69bc058e8e10-utilities\") pod \"community-operators-p64zh\" (UID: \"030bb6a5-c706-424e-bf13-69bc058e8e10\") " pod="openshift-marketplace/community-operators-p64zh" Nov 22 13:00:50 crc kubenswrapper[4772]: I1122 13:00:50.266861 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/030bb6a5-c706-424e-bf13-69bc058e8e10-catalog-content\") pod \"community-operators-p64zh\" (UID: \"030bb6a5-c706-424e-bf13-69bc058e8e10\") " pod="openshift-marketplace/community-operators-p64zh" Nov 22 13:00:50 crc kubenswrapper[4772]: I1122 13:00:50.266925 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8q2j\" (UniqueName: \"kubernetes.io/projected/030bb6a5-c706-424e-bf13-69bc058e8e10-kube-api-access-g8q2j\") pod \"community-operators-p64zh\" (UID: \"030bb6a5-c706-424e-bf13-69bc058e8e10\") " pod="openshift-marketplace/community-operators-p64zh" Nov 22 13:00:50 crc kubenswrapper[4772]: I1122 13:00:50.267556 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/030bb6a5-c706-424e-bf13-69bc058e8e10-utilities\") pod \"community-operators-p64zh\" (UID: \"030bb6a5-c706-424e-bf13-69bc058e8e10\") " pod="openshift-marketplace/community-operators-p64zh" Nov 22 13:00:50 crc kubenswrapper[4772]: I1122 13:00:50.267568 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/030bb6a5-c706-424e-bf13-69bc058e8e10-catalog-content\") pod \"community-operators-p64zh\" (UID: \"030bb6a5-c706-424e-bf13-69bc058e8e10\") " pod="openshift-marketplace/community-operators-p64zh" Nov 22 13:00:50 crc kubenswrapper[4772]: I1122 13:00:50.297694 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8q2j\" (UniqueName: \"kubernetes.io/projected/030bb6a5-c706-424e-bf13-69bc058e8e10-kube-api-access-g8q2j\") pod \"community-operators-p64zh\" (UID: \"030bb6a5-c706-424e-bf13-69bc058e8e10\") " pod="openshift-marketplace/community-operators-p64zh" Nov 22 13:00:50 crc kubenswrapper[4772]: I1122 13:00:50.394300 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p64zh" Nov 22 13:00:50 crc kubenswrapper[4772]: I1122 13:00:50.992830 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p64zh"] Nov 22 13:00:51 crc kubenswrapper[4772]: I1122 13:00:51.557539 4772 generic.go:334] "Generic (PLEG): container finished" podID="030bb6a5-c706-424e-bf13-69bc058e8e10" containerID="d55234dae80c459926f93f77ccffaab899f7072f1283f6141b14de38c72da001" exitCode=0 Nov 22 13:00:51 crc kubenswrapper[4772]: I1122 13:00:51.557603 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p64zh" event={"ID":"030bb6a5-c706-424e-bf13-69bc058e8e10","Type":"ContainerDied","Data":"d55234dae80c459926f93f77ccffaab899f7072f1283f6141b14de38c72da001"} Nov 22 13:00:51 crc kubenswrapper[4772]: I1122 13:00:51.557899 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p64zh" event={"ID":"030bb6a5-c706-424e-bf13-69bc058e8e10","Type":"ContainerStarted","Data":"f56d31ceabd50c1e0f2d772d12a2a675af2f4303300d9e031988f068dcf7a8dc"} Nov 22 13:00:53 crc kubenswrapper[4772]: I1122 13:00:53.616939 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p64zh" event={"ID":"030bb6a5-c706-424e-bf13-69bc058e8e10","Type":"ContainerStarted","Data":"371019aec85a6b719721b483ca8bd1a5fe0392d4a7aae7627b5f3cc340ebe3d2"} Nov 22 13:00:54 crc kubenswrapper[4772]: I1122 13:00:54.628437 4772 generic.go:334] "Generic (PLEG): container finished" podID="030bb6a5-c706-424e-bf13-69bc058e8e10" containerID="371019aec85a6b719721b483ca8bd1a5fe0392d4a7aae7627b5f3cc340ebe3d2" exitCode=0 Nov 22 13:00:54 crc kubenswrapper[4772]: I1122 13:00:54.628523 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p64zh" event={"ID":"030bb6a5-c706-424e-bf13-69bc058e8e10","Type":"ContainerDied","Data":"371019aec85a6b719721b483ca8bd1a5fe0392d4a7aae7627b5f3cc340ebe3d2"} Nov 22 13:00:55 crc kubenswrapper[4772]: I1122 13:00:55.641656 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p64zh" event={"ID":"030bb6a5-c706-424e-bf13-69bc058e8e10","Type":"ContainerStarted","Data":"af7bb39b05f470cc2be239bf5d4987991d74b694728c3773914512a95cba6448"} Nov 22 13:00:56 crc kubenswrapper[4772]: I1122 13:00:56.674014 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-p64zh" podStartSLOduration=3.2012004530000002 podStartE2EDuration="6.673987076s" podCreationTimestamp="2025-11-22 13:00:50 +0000 UTC" firstStartedPulling="2025-11-22 13:00:51.560334068 +0000 UTC m=+8571.799778562" lastFinishedPulling="2025-11-22 
13:00:55.033120691 +0000 UTC m=+8575.272565185" observedRunningTime="2025-11-22 13:00:56.665348602 +0000 UTC m=+8576.904793096" watchObservedRunningTime="2025-11-22 13:00:56.673987076 +0000 UTC m=+8576.913431560" Nov 22 13:00:59 crc kubenswrapper[4772]: I1122 13:00:59.688201 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-d85fz"] Nov 22 13:00:59 crc kubenswrapper[4772]: I1122 13:00:59.694346 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d85fz" Nov 22 13:00:59 crc kubenswrapper[4772]: I1122 13:00:59.715338 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d85fz"] Nov 22 13:00:59 crc kubenswrapper[4772]: I1122 13:00:59.850223 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7wvr\" (UniqueName: \"kubernetes.io/projected/cd8209ac-fb6e-434d-a295-6298ee8f2dac-kube-api-access-s7wvr\") pod \"redhat-operators-d85fz\" (UID: \"cd8209ac-fb6e-434d-a295-6298ee8f2dac\") " pod="openshift-marketplace/redhat-operators-d85fz" Nov 22 13:00:59 crc kubenswrapper[4772]: I1122 13:00:59.850567 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd8209ac-fb6e-434d-a295-6298ee8f2dac-utilities\") pod \"redhat-operators-d85fz\" (UID: \"cd8209ac-fb6e-434d-a295-6298ee8f2dac\") " pod="openshift-marketplace/redhat-operators-d85fz" Nov 22 13:00:59 crc kubenswrapper[4772]: I1122 13:00:59.850698 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd8209ac-fb6e-434d-a295-6298ee8f2dac-catalog-content\") pod \"redhat-operators-d85fz\" (UID: \"cd8209ac-fb6e-434d-a295-6298ee8f2dac\") " pod="openshift-marketplace/redhat-operators-d85fz" Nov 22 13:00:59 crc kubenswrapper[4772]: I1122 13:00:59.953339 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7wvr\" (UniqueName: \"kubernetes.io/projected/cd8209ac-fb6e-434d-a295-6298ee8f2dac-kube-api-access-s7wvr\") pod \"redhat-operators-d85fz\" (UID: \"cd8209ac-fb6e-434d-a295-6298ee8f2dac\") " pod="openshift-marketplace/redhat-operators-d85fz" Nov 22 13:00:59 crc kubenswrapper[4772]: I1122 13:00:59.953466 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd8209ac-fb6e-434d-a295-6298ee8f2dac-utilities\") pod \"redhat-operators-d85fz\" (UID: \"cd8209ac-fb6e-434d-a295-6298ee8f2dac\") " pod="openshift-marketplace/redhat-operators-d85fz" Nov 22 13:00:59 crc kubenswrapper[4772]: I1122 13:00:59.953507 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd8209ac-fb6e-434d-a295-6298ee8f2dac-catalog-content\") pod \"redhat-operators-d85fz\" (UID: \"cd8209ac-fb6e-434d-a295-6298ee8f2dac\") " pod="openshift-marketplace/redhat-operators-d85fz" Nov 22 13:00:59 crc kubenswrapper[4772]: I1122 13:00:59.954157 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd8209ac-fb6e-434d-a295-6298ee8f2dac-catalog-content\") pod \"redhat-operators-d85fz\" (UID: \"cd8209ac-fb6e-434d-a295-6298ee8f2dac\") " pod="openshift-marketplace/redhat-operators-d85fz" Nov 22 13:00:59 crc 
kubenswrapper[4772]: I1122 13:00:59.954216 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd8209ac-fb6e-434d-a295-6298ee8f2dac-utilities\") pod \"redhat-operators-d85fz\" (UID: \"cd8209ac-fb6e-434d-a295-6298ee8f2dac\") " pod="openshift-marketplace/redhat-operators-d85fz" Nov 22 13:00:59 crc kubenswrapper[4772]: I1122 13:00:59.979499 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7wvr\" (UniqueName: \"kubernetes.io/projected/cd8209ac-fb6e-434d-a295-6298ee8f2dac-kube-api-access-s7wvr\") pod \"redhat-operators-d85fz\" (UID: \"cd8209ac-fb6e-434d-a295-6298ee8f2dac\") " pod="openshift-marketplace/redhat-operators-d85fz" Nov 22 13:01:00 crc kubenswrapper[4772]: I1122 13:01:00.027508 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d85fz" Nov 22 13:01:00 crc kubenswrapper[4772]: I1122 13:01:00.174111 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29396941-tw2c7"] Nov 22 13:01:00 crc kubenswrapper[4772]: I1122 13:01:00.176757 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29396941-tw2c7" Nov 22 13:01:00 crc kubenswrapper[4772]: I1122 13:01:00.186261 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29396941-tw2c7"] Nov 22 13:01:00 crc kubenswrapper[4772]: I1122 13:01:00.262019 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e7dfe28-05c8-4e1a-b1f3-73589d504864-combined-ca-bundle\") pod \"keystone-cron-29396941-tw2c7\" (UID: \"7e7dfe28-05c8-4e1a-b1f3-73589d504864\") " pod="openstack/keystone-cron-29396941-tw2c7" Nov 22 13:01:00 crc kubenswrapper[4772]: I1122 13:01:00.262142 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e7dfe28-05c8-4e1a-b1f3-73589d504864-config-data\") pod \"keystone-cron-29396941-tw2c7\" (UID: \"7e7dfe28-05c8-4e1a-b1f3-73589d504864\") " pod="openstack/keystone-cron-29396941-tw2c7" Nov 22 13:01:00 crc kubenswrapper[4772]: I1122 13:01:00.262242 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4chqh\" (UniqueName: \"kubernetes.io/projected/7e7dfe28-05c8-4e1a-b1f3-73589d504864-kube-api-access-4chqh\") pod \"keystone-cron-29396941-tw2c7\" (UID: \"7e7dfe28-05c8-4e1a-b1f3-73589d504864\") " pod="openstack/keystone-cron-29396941-tw2c7" Nov 22 13:01:00 crc kubenswrapper[4772]: I1122 13:01:00.262279 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7e7dfe28-05c8-4e1a-b1f3-73589d504864-fernet-keys\") pod \"keystone-cron-29396941-tw2c7\" (UID: \"7e7dfe28-05c8-4e1a-b1f3-73589d504864\") " pod="openstack/keystone-cron-29396941-tw2c7" Nov 22 13:01:00 crc kubenswrapper[4772]: I1122 13:01:00.363572 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e7dfe28-05c8-4e1a-b1f3-73589d504864-combined-ca-bundle\") pod \"keystone-cron-29396941-tw2c7\" (UID: \"7e7dfe28-05c8-4e1a-b1f3-73589d504864\") " pod="openstack/keystone-cron-29396941-tw2c7" Nov 22 13:01:00 crc kubenswrapper[4772]: I1122 13:01:00.363639 4772 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e7dfe28-05c8-4e1a-b1f3-73589d504864-config-data\") pod \"keystone-cron-29396941-tw2c7\" (UID: \"7e7dfe28-05c8-4e1a-b1f3-73589d504864\") " pod="openstack/keystone-cron-29396941-tw2c7" Nov 22 13:01:00 crc kubenswrapper[4772]: I1122 13:01:00.363737 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4chqh\" (UniqueName: \"kubernetes.io/projected/7e7dfe28-05c8-4e1a-b1f3-73589d504864-kube-api-access-4chqh\") pod \"keystone-cron-29396941-tw2c7\" (UID: \"7e7dfe28-05c8-4e1a-b1f3-73589d504864\") " pod="openstack/keystone-cron-29396941-tw2c7" Nov 22 13:01:00 crc kubenswrapper[4772]: I1122 13:01:00.363773 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7e7dfe28-05c8-4e1a-b1f3-73589d504864-fernet-keys\") pod \"keystone-cron-29396941-tw2c7\" (UID: \"7e7dfe28-05c8-4e1a-b1f3-73589d504864\") " pod="openstack/keystone-cron-29396941-tw2c7" Nov 22 13:01:00 crc kubenswrapper[4772]: I1122 13:01:00.372076 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7e7dfe28-05c8-4e1a-b1f3-73589d504864-fernet-keys\") pod \"keystone-cron-29396941-tw2c7\" (UID: \"7e7dfe28-05c8-4e1a-b1f3-73589d504864\") " pod="openstack/keystone-cron-29396941-tw2c7" Nov 22 13:01:00 crc kubenswrapper[4772]: I1122 13:01:00.372165 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e7dfe28-05c8-4e1a-b1f3-73589d504864-config-data\") pod \"keystone-cron-29396941-tw2c7\" (UID: \"7e7dfe28-05c8-4e1a-b1f3-73589d504864\") " pod="openstack/keystone-cron-29396941-tw2c7" Nov 22 13:01:00 crc kubenswrapper[4772]: I1122 13:01:00.384757 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e7dfe28-05c8-4e1a-b1f3-73589d504864-combined-ca-bundle\") pod \"keystone-cron-29396941-tw2c7\" (UID: \"7e7dfe28-05c8-4e1a-b1f3-73589d504864\") " pod="openstack/keystone-cron-29396941-tw2c7" Nov 22 13:01:00 crc kubenswrapper[4772]: I1122 13:01:00.385422 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4chqh\" (UniqueName: \"kubernetes.io/projected/7e7dfe28-05c8-4e1a-b1f3-73589d504864-kube-api-access-4chqh\") pod \"keystone-cron-29396941-tw2c7\" (UID: \"7e7dfe28-05c8-4e1a-b1f3-73589d504864\") " pod="openstack/keystone-cron-29396941-tw2c7" Nov 22 13:01:00 crc kubenswrapper[4772]: I1122 13:01:00.394723 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-p64zh" Nov 22 13:01:00 crc kubenswrapper[4772]: I1122 13:01:00.394769 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-p64zh" Nov 22 13:01:00 crc kubenswrapper[4772]: I1122 13:01:00.550426 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29396941-tw2c7" Nov 22 13:01:00 crc kubenswrapper[4772]: I1122 13:01:00.614744 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d85fz"] Nov 22 13:01:00 crc kubenswrapper[4772]: W1122 13:01:00.622798 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd8209ac_fb6e_434d_a295_6298ee8f2dac.slice/crio-1b5febf04dfaff2723c4baad2580eac939f93eee215056b6b184452911304f46 WatchSource:0}: Error finding container 1b5febf04dfaff2723c4baad2580eac939f93eee215056b6b184452911304f46: Status 404 returned error can't find the container with id 1b5febf04dfaff2723c4baad2580eac939f93eee215056b6b184452911304f46 Nov 22 13:01:00 crc kubenswrapper[4772]: I1122 13:01:00.715075 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d85fz" event={"ID":"cd8209ac-fb6e-434d-a295-6298ee8f2dac","Type":"ContainerStarted","Data":"1b5febf04dfaff2723c4baad2580eac939f93eee215056b6b184452911304f46"} Nov 22 13:01:01 crc kubenswrapper[4772]: I1122 13:01:01.065550 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29396941-tw2c7"] Nov 22 13:01:01 crc kubenswrapper[4772]: W1122 13:01:01.068584 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e7dfe28_05c8_4e1a_b1f3_73589d504864.slice/crio-8a117881d8de87ddc495f893964131428b74244a9391aace3f6c005babadfeac WatchSource:0}: Error finding container 8a117881d8de87ddc495f893964131428b74244a9391aace3f6c005babadfeac: Status 404 returned error can't find the container with id 8a117881d8de87ddc495f893964131428b74244a9391aace3f6c005babadfeac Nov 22 13:01:01 crc kubenswrapper[4772]: I1122 13:01:01.478688 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-p64zh" podUID="030bb6a5-c706-424e-bf13-69bc058e8e10" containerName="registry-server" probeResult="failure" output=< Nov 22 13:01:01 crc kubenswrapper[4772]: timeout: failed to connect service ":50051" within 1s Nov 22 13:01:01 crc kubenswrapper[4772]: > Nov 22 13:01:01 crc kubenswrapper[4772]: I1122 13:01:01.533221 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 13:01:01 crc kubenswrapper[4772]: I1122 13:01:01.533302 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 13:01:01 crc kubenswrapper[4772]: I1122 13:01:01.533362 4772 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" Nov 22 13:01:01 crc kubenswrapper[4772]: I1122 13:01:01.534451 4772 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5a427682510f2025a1a305f31d041abd640d4c92247ada31a7f887bd361d20ff"} pod="openshift-machine-config-operator/machine-config-daemon-wwshd" containerMessage="Container 
machine-config-daemon failed liveness probe, will be restarted" Nov 22 13:01:01 crc kubenswrapper[4772]: I1122 13:01:01.534598 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" containerID="cri-o://5a427682510f2025a1a305f31d041abd640d4c92247ada31a7f887bd361d20ff" gracePeriod=600 Nov 22 13:01:01 crc kubenswrapper[4772]: E1122 13:01:01.666103 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 13:01:01 crc kubenswrapper[4772]: I1122 13:01:01.728253 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29396941-tw2c7" event={"ID":"7e7dfe28-05c8-4e1a-b1f3-73589d504864","Type":"ContainerStarted","Data":"308079739ff33d7db9956b4c83f0b5596cca65c46091c015d4d64f5304c74d0b"} Nov 22 13:01:01 crc kubenswrapper[4772]: I1122 13:01:01.728331 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29396941-tw2c7" event={"ID":"7e7dfe28-05c8-4e1a-b1f3-73589d504864","Type":"ContainerStarted","Data":"8a117881d8de87ddc495f893964131428b74244a9391aace3f6c005babadfeac"} Nov 22 13:01:01 crc kubenswrapper[4772]: I1122 13:01:01.731040 4772 generic.go:334] "Generic (PLEG): container finished" podID="cd8209ac-fb6e-434d-a295-6298ee8f2dac" containerID="c75d18a00141d6126d52ff2dfaccaceb6a7f0a44caf491c243c505e5600aad48" exitCode=0 Nov 22 13:01:01 crc kubenswrapper[4772]: I1122 13:01:01.731127 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d85fz" event={"ID":"cd8209ac-fb6e-434d-a295-6298ee8f2dac","Type":"ContainerDied","Data":"c75d18a00141d6126d52ff2dfaccaceb6a7f0a44caf491c243c505e5600aad48"} Nov 22 13:01:01 crc kubenswrapper[4772]: I1122 13:01:01.739331 4772 generic.go:334] "Generic (PLEG): container finished" podID="2386c238-461f-4956-940f-ac3c26eb052e" containerID="5a427682510f2025a1a305f31d041abd640d4c92247ada31a7f887bd361d20ff" exitCode=0 Nov 22 13:01:01 crc kubenswrapper[4772]: I1122 13:01:01.739461 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" event={"ID":"2386c238-461f-4956-940f-ac3c26eb052e","Type":"ContainerDied","Data":"5a427682510f2025a1a305f31d041abd640d4c92247ada31a7f887bd361d20ff"} Nov 22 13:01:01 crc kubenswrapper[4772]: I1122 13:01:01.739556 4772 scope.go:117] "RemoveContainer" containerID="3628acab72a50b4358fc44a519d84d0c10d69eb2dbf2ce2890f222362bc6e085" Nov 22 13:01:01 crc kubenswrapper[4772]: I1122 13:01:01.740528 4772 scope.go:117] "RemoveContainer" containerID="5a427682510f2025a1a305f31d041abd640d4c92247ada31a7f887bd361d20ff" Nov 22 13:01:01 crc kubenswrapper[4772]: E1122 13:01:01.740964 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" 
podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 13:01:01 crc kubenswrapper[4772]: I1122 13:01:01.759292 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29396941-tw2c7" podStartSLOduration=1.759266989 podStartE2EDuration="1.759266989s" podCreationTimestamp="2025-11-22 13:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 13:01:01.748183144 +0000 UTC m=+8581.987627648" watchObservedRunningTime="2025-11-22 13:01:01.759266989 +0000 UTC m=+8581.998711493" Nov 22 13:01:01 crc kubenswrapper[4772]: E1122 13:01:01.819982 4772 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2386c238_461f_4956_940f_ac3c26eb052e.slice/crio-conmon-5a427682510f2025a1a305f31d041abd640d4c92247ada31a7f887bd361d20ff.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2386c238_461f_4956_940f_ac3c26eb052e.slice/crio-5a427682510f2025a1a305f31d041abd640d4c92247ada31a7f887bd361d20ff.scope\": RecentStats: unable to find data in memory cache]" Nov 22 13:01:02 crc kubenswrapper[4772]: I1122 13:01:02.758958 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d85fz" event={"ID":"cd8209ac-fb6e-434d-a295-6298ee8f2dac","Type":"ContainerStarted","Data":"51d878d829cd05a1011fee8dbd2e699a7a8a7ff6747bdc59069c47c7828c0530"} Nov 22 13:01:06 crc kubenswrapper[4772]: I1122 13:01:06.844110 4772 generic.go:334] "Generic (PLEG): container finished" podID="7e7dfe28-05c8-4e1a-b1f3-73589d504864" containerID="308079739ff33d7db9956b4c83f0b5596cca65c46091c015d4d64f5304c74d0b" exitCode=0 Nov 22 13:01:06 crc kubenswrapper[4772]: I1122 13:01:06.844242 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29396941-tw2c7" event={"ID":"7e7dfe28-05c8-4e1a-b1f3-73589d504864","Type":"ContainerDied","Data":"308079739ff33d7db9956b4c83f0b5596cca65c46091c015d4d64f5304c74d0b"} Nov 22 13:01:08 crc kubenswrapper[4772]: I1122 13:01:08.248941 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29396941-tw2c7" Nov 22 13:01:08 crc kubenswrapper[4772]: I1122 13:01:08.392652 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7e7dfe28-05c8-4e1a-b1f3-73589d504864-fernet-keys\") pod \"7e7dfe28-05c8-4e1a-b1f3-73589d504864\" (UID: \"7e7dfe28-05c8-4e1a-b1f3-73589d504864\") " Nov 22 13:01:08 crc kubenswrapper[4772]: I1122 13:01:08.392854 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4chqh\" (UniqueName: \"kubernetes.io/projected/7e7dfe28-05c8-4e1a-b1f3-73589d504864-kube-api-access-4chqh\") pod \"7e7dfe28-05c8-4e1a-b1f3-73589d504864\" (UID: \"7e7dfe28-05c8-4e1a-b1f3-73589d504864\") " Nov 22 13:01:08 crc kubenswrapper[4772]: I1122 13:01:08.392995 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e7dfe28-05c8-4e1a-b1f3-73589d504864-combined-ca-bundle\") pod \"7e7dfe28-05c8-4e1a-b1f3-73589d504864\" (UID: \"7e7dfe28-05c8-4e1a-b1f3-73589d504864\") " Nov 22 13:01:08 crc kubenswrapper[4772]: I1122 13:01:08.393039 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e7dfe28-05c8-4e1a-b1f3-73589d504864-config-data\") pod \"7e7dfe28-05c8-4e1a-b1f3-73589d504864\" (UID: \"7e7dfe28-05c8-4e1a-b1f3-73589d504864\") " Nov 22 13:01:08 crc kubenswrapper[4772]: I1122 13:01:08.398222 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e7dfe28-05c8-4e1a-b1f3-73589d504864-kube-api-access-4chqh" (OuterVolumeSpecName: "kube-api-access-4chqh") pod "7e7dfe28-05c8-4e1a-b1f3-73589d504864" (UID: "7e7dfe28-05c8-4e1a-b1f3-73589d504864"). InnerVolumeSpecName "kube-api-access-4chqh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 13:01:08 crc kubenswrapper[4772]: I1122 13:01:08.399223 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e7dfe28-05c8-4e1a-b1f3-73589d504864-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "7e7dfe28-05c8-4e1a-b1f3-73589d504864" (UID: "7e7dfe28-05c8-4e1a-b1f3-73589d504864"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 13:01:08 crc kubenswrapper[4772]: I1122 13:01:08.430206 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e7dfe28-05c8-4e1a-b1f3-73589d504864-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7e7dfe28-05c8-4e1a-b1f3-73589d504864" (UID: "7e7dfe28-05c8-4e1a-b1f3-73589d504864"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 13:01:08 crc kubenswrapper[4772]: I1122 13:01:08.458860 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e7dfe28-05c8-4e1a-b1f3-73589d504864-config-data" (OuterVolumeSpecName: "config-data") pod "7e7dfe28-05c8-4e1a-b1f3-73589d504864" (UID: "7e7dfe28-05c8-4e1a-b1f3-73589d504864"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 13:01:08 crc kubenswrapper[4772]: I1122 13:01:08.496871 4772 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7e7dfe28-05c8-4e1a-b1f3-73589d504864-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 22 13:01:08 crc kubenswrapper[4772]: I1122 13:01:08.496929 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4chqh\" (UniqueName: \"kubernetes.io/projected/7e7dfe28-05c8-4e1a-b1f3-73589d504864-kube-api-access-4chqh\") on node \"crc\" DevicePath \"\"" Nov 22 13:01:08 crc kubenswrapper[4772]: I1122 13:01:08.496942 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e7dfe28-05c8-4e1a-b1f3-73589d504864-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 13:01:08 crc kubenswrapper[4772]: I1122 13:01:08.496954 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e7dfe28-05c8-4e1a-b1f3-73589d504864-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 13:01:08 crc kubenswrapper[4772]: I1122 13:01:08.873675 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29396941-tw2c7" event={"ID":"7e7dfe28-05c8-4e1a-b1f3-73589d504864","Type":"ContainerDied","Data":"8a117881d8de87ddc495f893964131428b74244a9391aace3f6c005babadfeac"} Nov 22 13:01:08 crc kubenswrapper[4772]: I1122 13:01:08.873714 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a117881d8de87ddc495f893964131428b74244a9391aace3f6c005babadfeac" Nov 22 13:01:08 crc kubenswrapper[4772]: I1122 13:01:08.873817 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29396941-tw2c7" Nov 22 13:01:09 crc kubenswrapper[4772]: I1122 13:01:09.887649 4772 generic.go:334] "Generic (PLEG): container finished" podID="cd8209ac-fb6e-434d-a295-6298ee8f2dac" containerID="51d878d829cd05a1011fee8dbd2e699a7a8a7ff6747bdc59069c47c7828c0530" exitCode=0 Nov 22 13:01:09 crc kubenswrapper[4772]: I1122 13:01:09.887736 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d85fz" event={"ID":"cd8209ac-fb6e-434d-a295-6298ee8f2dac","Type":"ContainerDied","Data":"51d878d829cd05a1011fee8dbd2e699a7a8a7ff6747bdc59069c47c7828c0530"} Nov 22 13:01:10 crc kubenswrapper[4772]: I1122 13:01:10.452753 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-p64zh" Nov 22 13:01:10 crc kubenswrapper[4772]: I1122 13:01:10.552674 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-p64zh" Nov 22 13:01:10 crc kubenswrapper[4772]: I1122 13:01:10.906843 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d85fz" event={"ID":"cd8209ac-fb6e-434d-a295-6298ee8f2dac","Type":"ContainerStarted","Data":"b4ef1ec69573161f64b501119e1a7850be2ec2476e09f9b4c08e0bf2b0a8d50c"} Nov 22 13:01:10 crc kubenswrapper[4772]: I1122 13:01:10.927630 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-d85fz" podStartSLOduration=3.353064725 podStartE2EDuration="11.927612409s" podCreationTimestamp="2025-11-22 13:00:59 +0000 UTC" firstStartedPulling="2025-11-22 13:01:01.733035389 +0000 UTC m=+8581.972479883" lastFinishedPulling="2025-11-22 
13:01:10.307583073 +0000 UTC m=+8590.547027567" observedRunningTime="2025-11-22 13:01:10.926853991 +0000 UTC m=+8591.166298485" watchObservedRunningTime="2025-11-22 13:01:10.927612409 +0000 UTC m=+8591.167056903" Nov 22 13:01:11 crc kubenswrapper[4772]: I1122 13:01:11.130606 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p64zh"] Nov 22 13:01:11 crc kubenswrapper[4772]: I1122 13:01:11.916754 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-p64zh" podUID="030bb6a5-c706-424e-bf13-69bc058e8e10" containerName="registry-server" containerID="cri-o://af7bb39b05f470cc2be239bf5d4987991d74b694728c3773914512a95cba6448" gracePeriod=2 Nov 22 13:01:12 crc kubenswrapper[4772]: E1122 13:01:12.161810 4772 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod030bb6a5_c706_424e_bf13_69bc058e8e10.slice/crio-conmon-af7bb39b05f470cc2be239bf5d4987991d74b694728c3773914512a95cba6448.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2386c238_461f_4956_940f_ac3c26eb052e.slice/crio-5a427682510f2025a1a305f31d041abd640d4c92247ada31a7f887bd361d20ff.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod030bb6a5_c706_424e_bf13_69bc058e8e10.slice/crio-af7bb39b05f470cc2be239bf5d4987991d74b694728c3773914512a95cba6448.scope\": RecentStats: unable to find data in memory cache]" Nov 22 13:01:12 crc kubenswrapper[4772]: I1122 13:01:12.459107 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p64zh" Nov 22 13:01:12 crc kubenswrapper[4772]: I1122 13:01:12.593512 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/030bb6a5-c706-424e-bf13-69bc058e8e10-utilities\") pod \"030bb6a5-c706-424e-bf13-69bc058e8e10\" (UID: \"030bb6a5-c706-424e-bf13-69bc058e8e10\") " Nov 22 13:01:12 crc kubenswrapper[4772]: I1122 13:01:12.593664 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8q2j\" (UniqueName: \"kubernetes.io/projected/030bb6a5-c706-424e-bf13-69bc058e8e10-kube-api-access-g8q2j\") pod \"030bb6a5-c706-424e-bf13-69bc058e8e10\" (UID: \"030bb6a5-c706-424e-bf13-69bc058e8e10\") " Nov 22 13:01:12 crc kubenswrapper[4772]: I1122 13:01:12.593800 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/030bb6a5-c706-424e-bf13-69bc058e8e10-catalog-content\") pod \"030bb6a5-c706-424e-bf13-69bc058e8e10\" (UID: \"030bb6a5-c706-424e-bf13-69bc058e8e10\") " Nov 22 13:01:12 crc kubenswrapper[4772]: I1122 13:01:12.595167 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/030bb6a5-c706-424e-bf13-69bc058e8e10-utilities" (OuterVolumeSpecName: "utilities") pod "030bb6a5-c706-424e-bf13-69bc058e8e10" (UID: "030bb6a5-c706-424e-bf13-69bc058e8e10"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 13:01:12 crc kubenswrapper[4772]: I1122 13:01:12.595597 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/030bb6a5-c706-424e-bf13-69bc058e8e10-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 13:01:12 crc kubenswrapper[4772]: I1122 13:01:12.605203 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/030bb6a5-c706-424e-bf13-69bc058e8e10-kube-api-access-g8q2j" (OuterVolumeSpecName: "kube-api-access-g8q2j") pod "030bb6a5-c706-424e-bf13-69bc058e8e10" (UID: "030bb6a5-c706-424e-bf13-69bc058e8e10"). InnerVolumeSpecName "kube-api-access-g8q2j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 13:01:12 crc kubenswrapper[4772]: I1122 13:01:12.646273 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/030bb6a5-c706-424e-bf13-69bc058e8e10-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "030bb6a5-c706-424e-bf13-69bc058e8e10" (UID: "030bb6a5-c706-424e-bf13-69bc058e8e10"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 13:01:12 crc kubenswrapper[4772]: I1122 13:01:12.697803 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/030bb6a5-c706-424e-bf13-69bc058e8e10-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 13:01:12 crc kubenswrapper[4772]: I1122 13:01:12.697846 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8q2j\" (UniqueName: \"kubernetes.io/projected/030bb6a5-c706-424e-bf13-69bc058e8e10-kube-api-access-g8q2j\") on node \"crc\" DevicePath \"\"" Nov 22 13:01:12 crc kubenswrapper[4772]: I1122 13:01:12.928063 4772 generic.go:334] "Generic (PLEG): container finished" podID="030bb6a5-c706-424e-bf13-69bc058e8e10" containerID="af7bb39b05f470cc2be239bf5d4987991d74b694728c3773914512a95cba6448" exitCode=0 Nov 22 13:01:12 crc kubenswrapper[4772]: I1122 13:01:12.928110 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p64zh" event={"ID":"030bb6a5-c706-424e-bf13-69bc058e8e10","Type":"ContainerDied","Data":"af7bb39b05f470cc2be239bf5d4987991d74b694728c3773914512a95cba6448"} Nov 22 13:01:12 crc kubenswrapper[4772]: I1122 13:01:12.928139 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p64zh" event={"ID":"030bb6a5-c706-424e-bf13-69bc058e8e10","Type":"ContainerDied","Data":"f56d31ceabd50c1e0f2d772d12a2a675af2f4303300d9e031988f068dcf7a8dc"} Nov 22 13:01:12 crc kubenswrapper[4772]: I1122 13:01:12.928150 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-p64zh" Nov 22 13:01:12 crc kubenswrapper[4772]: I1122 13:01:12.928157 4772 scope.go:117] "RemoveContainer" containerID="af7bb39b05f470cc2be239bf5d4987991d74b694728c3773914512a95cba6448" Nov 22 13:01:12 crc kubenswrapper[4772]: I1122 13:01:12.970192 4772 scope.go:117] "RemoveContainer" containerID="371019aec85a6b719721b483ca8bd1a5fe0392d4a7aae7627b5f3cc340ebe3d2" Nov 22 13:01:12 crc kubenswrapper[4772]: I1122 13:01:12.978486 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p64zh"] Nov 22 13:01:12 crc kubenswrapper[4772]: I1122 13:01:12.992405 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-p64zh"] Nov 22 13:01:12 crc kubenswrapper[4772]: I1122 13:01:12.999313 4772 scope.go:117] "RemoveContainer" containerID="d55234dae80c459926f93f77ccffaab899f7072f1283f6141b14de38c72da001" Nov 22 13:01:13 crc kubenswrapper[4772]: I1122 13:01:13.047982 4772 scope.go:117] "RemoveContainer" containerID="af7bb39b05f470cc2be239bf5d4987991d74b694728c3773914512a95cba6448" Nov 22 13:01:13 crc kubenswrapper[4772]: E1122 13:01:13.048571 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af7bb39b05f470cc2be239bf5d4987991d74b694728c3773914512a95cba6448\": container with ID starting with af7bb39b05f470cc2be239bf5d4987991d74b694728c3773914512a95cba6448 not found: ID does not exist" containerID="af7bb39b05f470cc2be239bf5d4987991d74b694728c3773914512a95cba6448" Nov 22 13:01:13 crc kubenswrapper[4772]: I1122 13:01:13.048615 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af7bb39b05f470cc2be239bf5d4987991d74b694728c3773914512a95cba6448"} err="failed to get container status \"af7bb39b05f470cc2be239bf5d4987991d74b694728c3773914512a95cba6448\": rpc error: code = NotFound desc = could not find container \"af7bb39b05f470cc2be239bf5d4987991d74b694728c3773914512a95cba6448\": container with ID starting with af7bb39b05f470cc2be239bf5d4987991d74b694728c3773914512a95cba6448 not found: ID does not exist" Nov 22 13:01:13 crc kubenswrapper[4772]: I1122 13:01:13.048643 4772 scope.go:117] "RemoveContainer" containerID="371019aec85a6b719721b483ca8bd1a5fe0392d4a7aae7627b5f3cc340ebe3d2" Nov 22 13:01:13 crc kubenswrapper[4772]: E1122 13:01:13.049188 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"371019aec85a6b719721b483ca8bd1a5fe0392d4a7aae7627b5f3cc340ebe3d2\": container with ID starting with 371019aec85a6b719721b483ca8bd1a5fe0392d4a7aae7627b5f3cc340ebe3d2 not found: ID does not exist" containerID="371019aec85a6b719721b483ca8bd1a5fe0392d4a7aae7627b5f3cc340ebe3d2" Nov 22 13:01:13 crc kubenswrapper[4772]: I1122 13:01:13.049237 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"371019aec85a6b719721b483ca8bd1a5fe0392d4a7aae7627b5f3cc340ebe3d2"} err="failed to get container status \"371019aec85a6b719721b483ca8bd1a5fe0392d4a7aae7627b5f3cc340ebe3d2\": rpc error: code = NotFound desc = could not find container \"371019aec85a6b719721b483ca8bd1a5fe0392d4a7aae7627b5f3cc340ebe3d2\": container with ID starting with 371019aec85a6b719721b483ca8bd1a5fe0392d4a7aae7627b5f3cc340ebe3d2 not found: ID does not exist" Nov 22 13:01:13 crc kubenswrapper[4772]: I1122 13:01:13.049272 4772 scope.go:117] "RemoveContainer" 
containerID="d55234dae80c459926f93f77ccffaab899f7072f1283f6141b14de38c72da001" Nov 22 13:01:13 crc kubenswrapper[4772]: E1122 13:01:13.049634 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d55234dae80c459926f93f77ccffaab899f7072f1283f6141b14de38c72da001\": container with ID starting with d55234dae80c459926f93f77ccffaab899f7072f1283f6141b14de38c72da001 not found: ID does not exist" containerID="d55234dae80c459926f93f77ccffaab899f7072f1283f6141b14de38c72da001" Nov 22 13:01:13 crc kubenswrapper[4772]: I1122 13:01:13.049676 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d55234dae80c459926f93f77ccffaab899f7072f1283f6141b14de38c72da001"} err="failed to get container status \"d55234dae80c459926f93f77ccffaab899f7072f1283f6141b14de38c72da001\": rpc error: code = NotFound desc = could not find container \"d55234dae80c459926f93f77ccffaab899f7072f1283f6141b14de38c72da001\": container with ID starting with d55234dae80c459926f93f77ccffaab899f7072f1283f6141b14de38c72da001 not found: ID does not exist" Nov 22 13:01:13 crc kubenswrapper[4772]: I1122 13:01:13.429146 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="030bb6a5-c706-424e-bf13-69bc058e8e10" path="/var/lib/kubelet/pods/030bb6a5-c706-424e-bf13-69bc058e8e10/volumes" Nov 22 13:01:16 crc kubenswrapper[4772]: I1122 13:01:16.414354 4772 scope.go:117] "RemoveContainer" containerID="5a427682510f2025a1a305f31d041abd640d4c92247ada31a7f887bd361d20ff" Nov 22 13:01:16 crc kubenswrapper[4772]: E1122 13:01:16.415780 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 13:01:20 crc kubenswrapper[4772]: I1122 13:01:20.028462 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-d85fz" Nov 22 13:01:20 crc kubenswrapper[4772]: I1122 13:01:20.029858 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-d85fz" Nov 22 13:01:21 crc kubenswrapper[4772]: I1122 13:01:21.091130 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-d85fz" podUID="cd8209ac-fb6e-434d-a295-6298ee8f2dac" containerName="registry-server" probeResult="failure" output=< Nov 22 13:01:21 crc kubenswrapper[4772]: timeout: failed to connect service ":50051" within 1s Nov 22 13:01:21 crc kubenswrapper[4772]: > Nov 22 13:01:22 crc kubenswrapper[4772]: E1122 13:01:22.479014 4772 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2386c238_461f_4956_940f_ac3c26eb052e.slice/crio-5a427682510f2025a1a305f31d041abd640d4c92247ada31a7f887bd361d20ff.scope\": RecentStats: unable to find data in memory cache]" Nov 22 13:01:29 crc kubenswrapper[4772]: I1122 13:01:29.415127 4772 scope.go:117] "RemoveContainer" containerID="5a427682510f2025a1a305f31d041abd640d4c92247ada31a7f887bd361d20ff" Nov 22 13:01:29 crc kubenswrapper[4772]: E1122 13:01:29.416955 4772 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 13:01:31 crc kubenswrapper[4772]: I1122 13:01:31.087665 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-d85fz" podUID="cd8209ac-fb6e-434d-a295-6298ee8f2dac" containerName="registry-server" probeResult="failure" output=< Nov 22 13:01:31 crc kubenswrapper[4772]: timeout: failed to connect service ":50051" within 1s Nov 22 13:01:31 crc kubenswrapper[4772]: > Nov 22 13:01:32 crc kubenswrapper[4772]: E1122 13:01:32.870979 4772 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2386c238_461f_4956_940f_ac3c26eb052e.slice/crio-5a427682510f2025a1a305f31d041abd640d4c92247ada31a7f887bd361d20ff.scope\": RecentStats: unable to find data in memory cache]" Nov 22 13:01:40 crc kubenswrapper[4772]: I1122 13:01:40.096307 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-d85fz" Nov 22 13:01:40 crc kubenswrapper[4772]: I1122 13:01:40.163845 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-d85fz" Nov 22 13:01:40 crc kubenswrapper[4772]: I1122 13:01:40.360777 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d85fz"] Nov 22 13:01:40 crc kubenswrapper[4772]: I1122 13:01:40.415268 4772 scope.go:117] "RemoveContainer" containerID="5a427682510f2025a1a305f31d041abd640d4c92247ada31a7f887bd361d20ff" Nov 22 13:01:40 crc kubenswrapper[4772]: E1122 13:01:40.415545 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 13:01:41 crc kubenswrapper[4772]: I1122 13:01:41.364974 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-d85fz" podUID="cd8209ac-fb6e-434d-a295-6298ee8f2dac" containerName="registry-server" containerID="cri-o://b4ef1ec69573161f64b501119e1a7850be2ec2476e09f9b4c08e0bf2b0a8d50c" gracePeriod=2 Nov 22 13:01:42 crc kubenswrapper[4772]: I1122 13:01:42.385581 4772 generic.go:334] "Generic (PLEG): container finished" podID="cd8209ac-fb6e-434d-a295-6298ee8f2dac" containerID="b4ef1ec69573161f64b501119e1a7850be2ec2476e09f9b4c08e0bf2b0a8d50c" exitCode=0 Nov 22 13:01:42 crc kubenswrapper[4772]: I1122 13:01:42.385669 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d85fz" event={"ID":"cd8209ac-fb6e-434d-a295-6298ee8f2dac","Type":"ContainerDied","Data":"b4ef1ec69573161f64b501119e1a7850be2ec2476e09f9b4c08e0bf2b0a8d50c"} Nov 22 13:01:42 crc kubenswrapper[4772]: I1122 13:01:42.386074 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d85fz" 
event={"ID":"cd8209ac-fb6e-434d-a295-6298ee8f2dac","Type":"ContainerDied","Data":"1b5febf04dfaff2723c4baad2580eac939f93eee215056b6b184452911304f46"} Nov 22 13:01:42 crc kubenswrapper[4772]: I1122 13:01:42.386094 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b5febf04dfaff2723c4baad2580eac939f93eee215056b6b184452911304f46" Nov 22 13:01:42 crc kubenswrapper[4772]: I1122 13:01:42.463431 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d85fz" Nov 22 13:01:42 crc kubenswrapper[4772]: I1122 13:01:42.548933 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd8209ac-fb6e-434d-a295-6298ee8f2dac-catalog-content\") pod \"cd8209ac-fb6e-434d-a295-6298ee8f2dac\" (UID: \"cd8209ac-fb6e-434d-a295-6298ee8f2dac\") " Nov 22 13:01:42 crc kubenswrapper[4772]: I1122 13:01:42.549234 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd8209ac-fb6e-434d-a295-6298ee8f2dac-utilities\") pod \"cd8209ac-fb6e-434d-a295-6298ee8f2dac\" (UID: \"cd8209ac-fb6e-434d-a295-6298ee8f2dac\") " Nov 22 13:01:42 crc kubenswrapper[4772]: I1122 13:01:42.550290 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd8209ac-fb6e-434d-a295-6298ee8f2dac-utilities" (OuterVolumeSpecName: "utilities") pod "cd8209ac-fb6e-434d-a295-6298ee8f2dac" (UID: "cd8209ac-fb6e-434d-a295-6298ee8f2dac"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 13:01:42 crc kubenswrapper[4772]: I1122 13:01:42.550559 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7wvr\" (UniqueName: \"kubernetes.io/projected/cd8209ac-fb6e-434d-a295-6298ee8f2dac-kube-api-access-s7wvr\") pod \"cd8209ac-fb6e-434d-a295-6298ee8f2dac\" (UID: \"cd8209ac-fb6e-434d-a295-6298ee8f2dac\") " Nov 22 13:01:42 crc kubenswrapper[4772]: I1122 13:01:42.552287 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd8209ac-fb6e-434d-a295-6298ee8f2dac-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 13:01:42 crc kubenswrapper[4772]: I1122 13:01:42.562295 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd8209ac-fb6e-434d-a295-6298ee8f2dac-kube-api-access-s7wvr" (OuterVolumeSpecName: "kube-api-access-s7wvr") pod "cd8209ac-fb6e-434d-a295-6298ee8f2dac" (UID: "cd8209ac-fb6e-434d-a295-6298ee8f2dac"). InnerVolumeSpecName "kube-api-access-s7wvr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 13:01:42 crc kubenswrapper[4772]: I1122 13:01:42.655120 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7wvr\" (UniqueName: \"kubernetes.io/projected/cd8209ac-fb6e-434d-a295-6298ee8f2dac-kube-api-access-s7wvr\") on node \"crc\" DevicePath \"\"" Nov 22 13:01:42 crc kubenswrapper[4772]: I1122 13:01:42.669017 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd8209ac-fb6e-434d-a295-6298ee8f2dac-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cd8209ac-fb6e-434d-a295-6298ee8f2dac" (UID: "cd8209ac-fb6e-434d-a295-6298ee8f2dac"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 13:01:42 crc kubenswrapper[4772]: I1122 13:01:42.762218 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd8209ac-fb6e-434d-a295-6298ee8f2dac-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 13:01:43 crc kubenswrapper[4772]: E1122 13:01:43.177481 4772 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2386c238_461f_4956_940f_ac3c26eb052e.slice/crio-5a427682510f2025a1a305f31d041abd640d4c92247ada31a7f887bd361d20ff.scope\": RecentStats: unable to find data in memory cache]" Nov 22 13:01:43 crc kubenswrapper[4772]: I1122 13:01:43.395940 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d85fz" Nov 22 13:01:43 crc kubenswrapper[4772]: I1122 13:01:43.439442 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d85fz"] Nov 22 13:01:43 crc kubenswrapper[4772]: I1122 13:01:43.449821 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-d85fz"] Nov 22 13:01:45 crc kubenswrapper[4772]: I1122 13:01:45.438012 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd8209ac-fb6e-434d-a295-6298ee8f2dac" path="/var/lib/kubelet/pods/cd8209ac-fb6e-434d-a295-6298ee8f2dac/volumes" Nov 22 13:01:51 crc kubenswrapper[4772]: I1122 13:01:51.422756 4772 scope.go:117] "RemoveContainer" containerID="5a427682510f2025a1a305f31d041abd640d4c92247ada31a7f887bd361d20ff" Nov 22 13:01:51 crc kubenswrapper[4772]: E1122 13:01:51.427635 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 13:01:53 crc kubenswrapper[4772]: E1122 13:01:53.456404 4772 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2386c238_461f_4956_940f_ac3c26eb052e.slice/crio-5a427682510f2025a1a305f31d041abd640d4c92247ada31a7f887bd361d20ff.scope\": RecentStats: unable to find data in memory cache]" Nov 22 13:02:05 crc kubenswrapper[4772]: I1122 13:02:05.414955 4772 scope.go:117] "RemoveContainer" containerID="5a427682510f2025a1a305f31d041abd640d4c92247ada31a7f887bd361d20ff" Nov 22 13:02:05 crc kubenswrapper[4772]: E1122 13:02:05.416207 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 13:02:19 crc kubenswrapper[4772]: I1122 13:02:19.419470 4772 scope.go:117] "RemoveContainer" containerID="5a427682510f2025a1a305f31d041abd640d4c92247ada31a7f887bd361d20ff" Nov 22 13:02:19 crc kubenswrapper[4772]: E1122 13:02:19.420717 4772 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 13:02:32 crc kubenswrapper[4772]: I1122 13:02:32.414221 4772 scope.go:117] "RemoveContainer" containerID="5a427682510f2025a1a305f31d041abd640d4c92247ada31a7f887bd361d20ff" Nov 22 13:02:32 crc kubenswrapper[4772]: E1122 13:02:32.415843 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 13:02:43 crc kubenswrapper[4772]: I1122 13:02:43.413844 4772 scope.go:117] "RemoveContainer" containerID="5a427682510f2025a1a305f31d041abd640d4c92247ada31a7f887bd361d20ff" Nov 22 13:02:43 crc kubenswrapper[4772]: E1122 13:02:43.414779 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 13:02:52 crc kubenswrapper[4772]: I1122 13:02:52.275249 4772 generic.go:334] "Generic (PLEG): container finished" podID="5f0aff0b-9692-4d9c-a0be-12694f7e71f8" containerID="632646a7ea2fda4e4847af67898ac3aca2a2c88c182557ff7b506165f991aa98" exitCode=0 Nov 22 13:02:52 crc kubenswrapper[4772]: I1122 13:02:52.275891 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-kmwfl" event={"ID":"5f0aff0b-9692-4d9c-a0be-12694f7e71f8","Type":"ContainerDied","Data":"632646a7ea2fda4e4847af67898ac3aca2a2c88c182557ff7b506165f991aa98"} Nov 22 13:02:53 crc kubenswrapper[4772]: I1122 13:02:53.786313 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-kmwfl" Nov 22 13:02:53 crc kubenswrapper[4772]: I1122 13:02:53.912250 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5f0aff0b-9692-4d9c-a0be-12694f7e71f8-inventory\") pod \"5f0aff0b-9692-4d9c-a0be-12694f7e71f8\" (UID: \"5f0aff0b-9692-4d9c-a0be-12694f7e71f8\") " Nov 22 13:02:53 crc kubenswrapper[4772]: I1122 13:02:53.912323 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5f0aff0b-9692-4d9c-a0be-12694f7e71f8-ssh-key\") pod \"5f0aff0b-9692-4d9c-a0be-12694f7e71f8\" (UID: \"5f0aff0b-9692-4d9c-a0be-12694f7e71f8\") " Nov 22 13:02:53 crc kubenswrapper[4772]: I1122 13:02:53.912675 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hpls\" (UniqueName: \"kubernetes.io/projected/5f0aff0b-9692-4d9c-a0be-12694f7e71f8-kube-api-access-7hpls\") pod \"5f0aff0b-9692-4d9c-a0be-12694f7e71f8\" (UID: \"5f0aff0b-9692-4d9c-a0be-12694f7e71f8\") " Nov 22 13:02:53 crc kubenswrapper[4772]: I1122 13:02:53.912762 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f0aff0b-9692-4d9c-a0be-12694f7e71f8-neutron-sriov-combined-ca-bundle\") pod \"5f0aff0b-9692-4d9c-a0be-12694f7e71f8\" (UID: \"5f0aff0b-9692-4d9c-a0be-12694f7e71f8\") " Nov 22 13:02:53 crc kubenswrapper[4772]: I1122 13:02:53.912843 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5f0aff0b-9692-4d9c-a0be-12694f7e71f8-ceph\") pod \"5f0aff0b-9692-4d9c-a0be-12694f7e71f8\" (UID: \"5f0aff0b-9692-4d9c-a0be-12694f7e71f8\") " Nov 22 13:02:53 crc kubenswrapper[4772]: I1122 13:02:53.912893 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5f0aff0b-9692-4d9c-a0be-12694f7e71f8-neutron-sriov-agent-neutron-config-0\") pod \"5f0aff0b-9692-4d9c-a0be-12694f7e71f8\" (UID: \"5f0aff0b-9692-4d9c-a0be-12694f7e71f8\") " Nov 22 13:02:53 crc kubenswrapper[4772]: I1122 13:02:53.918298 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f0aff0b-9692-4d9c-a0be-12694f7e71f8-kube-api-access-7hpls" (OuterVolumeSpecName: "kube-api-access-7hpls") pod "5f0aff0b-9692-4d9c-a0be-12694f7e71f8" (UID: "5f0aff0b-9692-4d9c-a0be-12694f7e71f8"). InnerVolumeSpecName "kube-api-access-7hpls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 13:02:53 crc kubenswrapper[4772]: I1122 13:02:53.918630 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f0aff0b-9692-4d9c-a0be-12694f7e71f8-neutron-sriov-combined-ca-bundle" (OuterVolumeSpecName: "neutron-sriov-combined-ca-bundle") pod "5f0aff0b-9692-4d9c-a0be-12694f7e71f8" (UID: "5f0aff0b-9692-4d9c-a0be-12694f7e71f8"). InnerVolumeSpecName "neutron-sriov-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 13:02:53 crc kubenswrapper[4772]: I1122 13:02:53.918835 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f0aff0b-9692-4d9c-a0be-12694f7e71f8-ceph" (OuterVolumeSpecName: "ceph") pod "5f0aff0b-9692-4d9c-a0be-12694f7e71f8" (UID: "5f0aff0b-9692-4d9c-a0be-12694f7e71f8"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 13:02:53 crc kubenswrapper[4772]: I1122 13:02:53.944530 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f0aff0b-9692-4d9c-a0be-12694f7e71f8-inventory" (OuterVolumeSpecName: "inventory") pod "5f0aff0b-9692-4d9c-a0be-12694f7e71f8" (UID: "5f0aff0b-9692-4d9c-a0be-12694f7e71f8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 13:02:53 crc kubenswrapper[4772]: I1122 13:02:53.953466 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f0aff0b-9692-4d9c-a0be-12694f7e71f8-neutron-sriov-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-sriov-agent-neutron-config-0") pod "5f0aff0b-9692-4d9c-a0be-12694f7e71f8" (UID: "5f0aff0b-9692-4d9c-a0be-12694f7e71f8"). InnerVolumeSpecName "neutron-sriov-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 13:02:53 crc kubenswrapper[4772]: I1122 13:02:53.955792 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f0aff0b-9692-4d9c-a0be-12694f7e71f8-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "5f0aff0b-9692-4d9c-a0be-12694f7e71f8" (UID: "5f0aff0b-9692-4d9c-a0be-12694f7e71f8"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 13:02:54 crc kubenswrapper[4772]: I1122 13:02:54.016370 4772 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5f0aff0b-9692-4d9c-a0be-12694f7e71f8-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 22 13:02:54 crc kubenswrapper[4772]: I1122 13:02:54.016410 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hpls\" (UniqueName: \"kubernetes.io/projected/5f0aff0b-9692-4d9c-a0be-12694f7e71f8-kube-api-access-7hpls\") on node \"crc\" DevicePath \"\"" Nov 22 13:02:54 crc kubenswrapper[4772]: I1122 13:02:54.016427 4772 reconciler_common.go:293] "Volume detached for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f0aff0b-9692-4d9c-a0be-12694f7e71f8-neutron-sriov-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 13:02:54 crc kubenswrapper[4772]: I1122 13:02:54.016441 4772 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5f0aff0b-9692-4d9c-a0be-12694f7e71f8-ceph\") on node \"crc\" DevicePath \"\"" Nov 22 13:02:54 crc kubenswrapper[4772]: I1122 13:02:54.016453 4772 reconciler_common.go:293] "Volume detached for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5f0aff0b-9692-4d9c-a0be-12694f7e71f8-neutron-sriov-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 22 13:02:54 crc kubenswrapper[4772]: I1122 13:02:54.016466 4772 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5f0aff0b-9692-4d9c-a0be-12694f7e71f8-inventory\") on node \"crc\" DevicePath \"\"" Nov 22 13:02:54 crc kubenswrapper[4772]: I1122 13:02:54.302860 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-kmwfl" event={"ID":"5f0aff0b-9692-4d9c-a0be-12694f7e71f8","Type":"ContainerDied","Data":"68d0acf48a67439252a8306d159053f53b64c88c33dc7781334a9d28296a5766"} Nov 22 13:02:54 crc kubenswrapper[4772]: I1122 13:02:54.302902 4772 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="68d0acf48a67439252a8306d159053f53b64c88c33dc7781334a9d28296a5766" Nov 22 13:02:54 crc kubenswrapper[4772]: I1122 13:02:54.302926 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-kmwfl" Nov 22 13:02:54 crc kubenswrapper[4772]: I1122 13:02:54.412117 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-dhcp-openstack-openstack-cell1-jwgtt"] Nov 22 13:02:54 crc kubenswrapper[4772]: E1122 13:02:54.412554 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e7dfe28-05c8-4e1a-b1f3-73589d504864" containerName="keystone-cron" Nov 22 13:02:54 crc kubenswrapper[4772]: I1122 13:02:54.412586 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e7dfe28-05c8-4e1a-b1f3-73589d504864" containerName="keystone-cron" Nov 22 13:02:54 crc kubenswrapper[4772]: E1122 13:02:54.412611 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="030bb6a5-c706-424e-bf13-69bc058e8e10" containerName="registry-server" Nov 22 13:02:54 crc kubenswrapper[4772]: I1122 13:02:54.412617 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="030bb6a5-c706-424e-bf13-69bc058e8e10" containerName="registry-server" Nov 22 13:02:54 crc kubenswrapper[4772]: E1122 13:02:54.412630 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f0aff0b-9692-4d9c-a0be-12694f7e71f8" containerName="neutron-sriov-openstack-openstack-cell1" Nov 22 13:02:54 crc kubenswrapper[4772]: I1122 13:02:54.412638 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f0aff0b-9692-4d9c-a0be-12694f7e71f8" containerName="neutron-sriov-openstack-openstack-cell1" Nov 22 13:02:54 crc kubenswrapper[4772]: E1122 13:02:54.412648 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd8209ac-fb6e-434d-a295-6298ee8f2dac" containerName="extract-content" Nov 22 13:02:54 crc kubenswrapper[4772]: I1122 13:02:54.412654 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd8209ac-fb6e-434d-a295-6298ee8f2dac" containerName="extract-content" Nov 22 13:02:54 crc kubenswrapper[4772]: E1122 13:02:54.412673 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd8209ac-fb6e-434d-a295-6298ee8f2dac" containerName="registry-server" Nov 22 13:02:54 crc kubenswrapper[4772]: I1122 13:02:54.412679 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd8209ac-fb6e-434d-a295-6298ee8f2dac" containerName="registry-server" Nov 22 13:02:54 crc kubenswrapper[4772]: E1122 13:02:54.412698 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="030bb6a5-c706-424e-bf13-69bc058e8e10" containerName="extract-content" Nov 22 13:02:54 crc kubenswrapper[4772]: I1122 13:02:54.412703 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="030bb6a5-c706-424e-bf13-69bc058e8e10" containerName="extract-content" Nov 22 13:02:54 crc kubenswrapper[4772]: E1122 13:02:54.412713 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd8209ac-fb6e-434d-a295-6298ee8f2dac" containerName="extract-utilities" Nov 22 13:02:54 crc kubenswrapper[4772]: I1122 13:02:54.412719 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd8209ac-fb6e-434d-a295-6298ee8f2dac" containerName="extract-utilities" Nov 22 13:02:54 crc kubenswrapper[4772]: E1122 13:02:54.412738 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="030bb6a5-c706-424e-bf13-69bc058e8e10" containerName="extract-utilities" Nov 22 13:02:54 crc kubenswrapper[4772]: I1122 
13:02:54.412744 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="030bb6a5-c706-424e-bf13-69bc058e8e10" containerName="extract-utilities" Nov 22 13:02:54 crc kubenswrapper[4772]: I1122 13:02:54.412940 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e7dfe28-05c8-4e1a-b1f3-73589d504864" containerName="keystone-cron" Nov 22 13:02:54 crc kubenswrapper[4772]: I1122 13:02:54.412961 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f0aff0b-9692-4d9c-a0be-12694f7e71f8" containerName="neutron-sriov-openstack-openstack-cell1" Nov 22 13:02:54 crc kubenswrapper[4772]: I1122 13:02:54.412975 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd8209ac-fb6e-434d-a295-6298ee8f2dac" containerName="registry-server" Nov 22 13:02:54 crc kubenswrapper[4772]: I1122 13:02:54.412987 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="030bb6a5-c706-424e-bf13-69bc058e8e10" containerName="registry-server" Nov 22 13:02:54 crc kubenswrapper[4772]: I1122 13:02:54.414022 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-jwgtt" Nov 22 13:02:54 crc kubenswrapper[4772]: I1122 13:02:54.418639 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 22 13:02:54 crc kubenswrapper[4772]: I1122 13:02:54.419137 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 22 13:02:54 crc kubenswrapper[4772]: I1122 13:02:54.419740 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-6s2nz" Nov 22 13:02:54 crc kubenswrapper[4772]: I1122 13:02:54.420359 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-dhcp-agent-neutron-config" Nov 22 13:02:54 crc kubenswrapper[4772]: I1122 13:02:54.427500 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 22 13:02:54 crc kubenswrapper[4772]: I1122 13:02:54.436892 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dhcp-openstack-openstack-cell1-jwgtt"] Nov 22 13:02:54 crc kubenswrapper[4772]: I1122 13:02:54.528591 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fjsr\" (UniqueName: \"kubernetes.io/projected/de658c90-11c4-4861-b1df-5dfb7da0bdf0-kube-api-access-4fjsr\") pod \"neutron-dhcp-openstack-openstack-cell1-jwgtt\" (UID: \"de658c90-11c4-4861-b1df-5dfb7da0bdf0\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-jwgtt" Nov 22 13:02:54 crc kubenswrapper[4772]: I1122 13:02:54.529280 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de658c90-11c4-4861-b1df-5dfb7da0bdf0-neutron-dhcp-combined-ca-bundle\") pod \"neutron-dhcp-openstack-openstack-cell1-jwgtt\" (UID: \"de658c90-11c4-4861-b1df-5dfb7da0bdf0\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-jwgtt" Nov 22 13:02:54 crc kubenswrapper[4772]: I1122 13:02:54.529976 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/de658c90-11c4-4861-b1df-5dfb7da0bdf0-ssh-key\") pod \"neutron-dhcp-openstack-openstack-cell1-jwgtt\" (UID: \"de658c90-11c4-4861-b1df-5dfb7da0bdf0\") " 
pod="openstack/neutron-dhcp-openstack-openstack-cell1-jwgtt" Nov 22 13:02:54 crc kubenswrapper[4772]: I1122 13:02:54.530090 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de658c90-11c4-4861-b1df-5dfb7da0bdf0-inventory\") pod \"neutron-dhcp-openstack-openstack-cell1-jwgtt\" (UID: \"de658c90-11c4-4861-b1df-5dfb7da0bdf0\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-jwgtt" Nov 22 13:02:54 crc kubenswrapper[4772]: I1122 13:02:54.530225 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/de658c90-11c4-4861-b1df-5dfb7da0bdf0-neutron-dhcp-agent-neutron-config-0\") pod \"neutron-dhcp-openstack-openstack-cell1-jwgtt\" (UID: \"de658c90-11c4-4861-b1df-5dfb7da0bdf0\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-jwgtt" Nov 22 13:02:54 crc kubenswrapper[4772]: I1122 13:02:54.530556 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/de658c90-11c4-4861-b1df-5dfb7da0bdf0-ceph\") pod \"neutron-dhcp-openstack-openstack-cell1-jwgtt\" (UID: \"de658c90-11c4-4861-b1df-5dfb7da0bdf0\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-jwgtt" Nov 22 13:02:54 crc kubenswrapper[4772]: I1122 13:02:54.632201 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/de658c90-11c4-4861-b1df-5dfb7da0bdf0-ssh-key\") pod \"neutron-dhcp-openstack-openstack-cell1-jwgtt\" (UID: \"de658c90-11c4-4861-b1df-5dfb7da0bdf0\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-jwgtt" Nov 22 13:02:54 crc kubenswrapper[4772]: I1122 13:02:54.632242 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de658c90-11c4-4861-b1df-5dfb7da0bdf0-inventory\") pod \"neutron-dhcp-openstack-openstack-cell1-jwgtt\" (UID: \"de658c90-11c4-4861-b1df-5dfb7da0bdf0\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-jwgtt" Nov 22 13:02:54 crc kubenswrapper[4772]: I1122 13:02:54.632274 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/de658c90-11c4-4861-b1df-5dfb7da0bdf0-neutron-dhcp-agent-neutron-config-0\") pod \"neutron-dhcp-openstack-openstack-cell1-jwgtt\" (UID: \"de658c90-11c4-4861-b1df-5dfb7da0bdf0\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-jwgtt" Nov 22 13:02:54 crc kubenswrapper[4772]: I1122 13:02:54.632329 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/de658c90-11c4-4861-b1df-5dfb7da0bdf0-ceph\") pod \"neutron-dhcp-openstack-openstack-cell1-jwgtt\" (UID: \"de658c90-11c4-4861-b1df-5dfb7da0bdf0\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-jwgtt" Nov 22 13:02:54 crc kubenswrapper[4772]: I1122 13:02:54.632365 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fjsr\" (UniqueName: \"kubernetes.io/projected/de658c90-11c4-4861-b1df-5dfb7da0bdf0-kube-api-access-4fjsr\") pod \"neutron-dhcp-openstack-openstack-cell1-jwgtt\" (UID: \"de658c90-11c4-4861-b1df-5dfb7da0bdf0\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-jwgtt" Nov 22 13:02:54 crc kubenswrapper[4772]: I1122 13:02:54.632388 
4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de658c90-11c4-4861-b1df-5dfb7da0bdf0-neutron-dhcp-combined-ca-bundle\") pod \"neutron-dhcp-openstack-openstack-cell1-jwgtt\" (UID: \"de658c90-11c4-4861-b1df-5dfb7da0bdf0\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-jwgtt" Nov 22 13:02:54 crc kubenswrapper[4772]: I1122 13:02:54.649338 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/de658c90-11c4-4861-b1df-5dfb7da0bdf0-ssh-key\") pod \"neutron-dhcp-openstack-openstack-cell1-jwgtt\" (UID: \"de658c90-11c4-4861-b1df-5dfb7da0bdf0\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-jwgtt" Nov 22 13:02:54 crc kubenswrapper[4772]: I1122 13:02:54.649338 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de658c90-11c4-4861-b1df-5dfb7da0bdf0-neutron-dhcp-combined-ca-bundle\") pod \"neutron-dhcp-openstack-openstack-cell1-jwgtt\" (UID: \"de658c90-11c4-4861-b1df-5dfb7da0bdf0\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-jwgtt" Nov 22 13:02:54 crc kubenswrapper[4772]: I1122 13:02:54.649916 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de658c90-11c4-4861-b1df-5dfb7da0bdf0-inventory\") pod \"neutron-dhcp-openstack-openstack-cell1-jwgtt\" (UID: \"de658c90-11c4-4861-b1df-5dfb7da0bdf0\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-jwgtt" Nov 22 13:02:54 crc kubenswrapper[4772]: I1122 13:02:54.650304 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/de658c90-11c4-4861-b1df-5dfb7da0bdf0-ceph\") pod \"neutron-dhcp-openstack-openstack-cell1-jwgtt\" (UID: \"de658c90-11c4-4861-b1df-5dfb7da0bdf0\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-jwgtt" Nov 22 13:02:54 crc kubenswrapper[4772]: I1122 13:02:54.651512 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fjsr\" (UniqueName: \"kubernetes.io/projected/de658c90-11c4-4861-b1df-5dfb7da0bdf0-kube-api-access-4fjsr\") pod \"neutron-dhcp-openstack-openstack-cell1-jwgtt\" (UID: \"de658c90-11c4-4861-b1df-5dfb7da0bdf0\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-jwgtt" Nov 22 13:02:54 crc kubenswrapper[4772]: I1122 13:02:54.656468 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/de658c90-11c4-4861-b1df-5dfb7da0bdf0-neutron-dhcp-agent-neutron-config-0\") pod \"neutron-dhcp-openstack-openstack-cell1-jwgtt\" (UID: \"de658c90-11c4-4861-b1df-5dfb7da0bdf0\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-jwgtt" Nov 22 13:02:54 crc kubenswrapper[4772]: I1122 13:02:54.734544 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-jwgtt" Nov 22 13:02:55 crc kubenswrapper[4772]: I1122 13:02:55.307427 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dhcp-openstack-openstack-cell1-jwgtt"] Nov 22 13:02:56 crc kubenswrapper[4772]: I1122 13:02:56.333997 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-jwgtt" event={"ID":"de658c90-11c4-4861-b1df-5dfb7da0bdf0","Type":"ContainerStarted","Data":"d09010202b300f1ff85313a66d41a3bb385cf2b4e41dee93d35c02bd5556a51c"} Nov 22 13:02:56 crc kubenswrapper[4772]: I1122 13:02:56.334481 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-jwgtt" event={"ID":"de658c90-11c4-4861-b1df-5dfb7da0bdf0","Type":"ContainerStarted","Data":"c26055bc5bd6c1135718231d72069b12770c93f05f414b2e558f9a5b2c54fd9b"} Nov 22 13:02:56 crc kubenswrapper[4772]: I1122 13:02:56.367322 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-dhcp-openstack-openstack-cell1-jwgtt" podStartSLOduration=1.736391699 podStartE2EDuration="2.367304485s" podCreationTimestamp="2025-11-22 13:02:54 +0000 UTC" firstStartedPulling="2025-11-22 13:02:55.315513709 +0000 UTC m=+8695.554958203" lastFinishedPulling="2025-11-22 13:02:55.946426495 +0000 UTC m=+8696.185870989" observedRunningTime="2025-11-22 13:02:56.359304497 +0000 UTC m=+8696.598748991" watchObservedRunningTime="2025-11-22 13:02:56.367304485 +0000 UTC m=+8696.606748979" Nov 22 13:02:56 crc kubenswrapper[4772]: I1122 13:02:56.413005 4772 scope.go:117] "RemoveContainer" containerID="5a427682510f2025a1a305f31d041abd640d4c92247ada31a7f887bd361d20ff" Nov 22 13:02:56 crc kubenswrapper[4772]: E1122 13:02:56.413484 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 13:03:09 crc kubenswrapper[4772]: I1122 13:03:09.414528 4772 scope.go:117] "RemoveContainer" containerID="5a427682510f2025a1a305f31d041abd640d4c92247ada31a7f887bd361d20ff" Nov 22 13:03:09 crc kubenswrapper[4772]: E1122 13:03:09.415389 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 13:03:24 crc kubenswrapper[4772]: I1122 13:03:24.414414 4772 scope.go:117] "RemoveContainer" containerID="5a427682510f2025a1a305f31d041abd640d4c92247ada31a7f887bd361d20ff" Nov 22 13:03:24 crc kubenswrapper[4772]: E1122 13:03:24.415504 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" 
podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 13:03:39 crc kubenswrapper[4772]: I1122 13:03:39.413605 4772 scope.go:117] "RemoveContainer" containerID="5a427682510f2025a1a305f31d041abd640d4c92247ada31a7f887bd361d20ff" Nov 22 13:03:39 crc kubenswrapper[4772]: E1122 13:03:39.414388 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 13:03:52 crc kubenswrapper[4772]: I1122 13:03:52.414924 4772 scope.go:117] "RemoveContainer" containerID="5a427682510f2025a1a305f31d041abd640d4c92247ada31a7f887bd361d20ff" Nov 22 13:03:52 crc kubenswrapper[4772]: E1122 13:03:52.416449 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 13:04:06 crc kubenswrapper[4772]: I1122 13:04:06.414360 4772 scope.go:117] "RemoveContainer" containerID="5a427682510f2025a1a305f31d041abd640d4c92247ada31a7f887bd361d20ff" Nov 22 13:04:06 crc kubenswrapper[4772]: E1122 13:04:06.415507 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 13:04:18 crc kubenswrapper[4772]: I1122 13:04:18.414755 4772 scope.go:117] "RemoveContainer" containerID="5a427682510f2025a1a305f31d041abd640d4c92247ada31a7f887bd361d20ff" Nov 22 13:04:18 crc kubenswrapper[4772]: E1122 13:04:18.415858 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 13:04:31 crc kubenswrapper[4772]: I1122 13:04:31.421131 4772 scope.go:117] "RemoveContainer" containerID="5a427682510f2025a1a305f31d041abd640d4c92247ada31a7f887bd361d20ff" Nov 22 13:04:31 crc kubenswrapper[4772]: E1122 13:04:31.422008 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 13:04:46 crc kubenswrapper[4772]: I1122 13:04:46.417695 4772 scope.go:117] "RemoveContainer" 
containerID="5a427682510f2025a1a305f31d041abd640d4c92247ada31a7f887bd361d20ff" Nov 22 13:04:46 crc kubenswrapper[4772]: E1122 13:04:46.418771 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 13:05:01 crc kubenswrapper[4772]: I1122 13:05:01.447691 4772 scope.go:117] "RemoveContainer" containerID="5a427682510f2025a1a305f31d041abd640d4c92247ada31a7f887bd361d20ff" Nov 22 13:05:01 crc kubenswrapper[4772]: E1122 13:05:01.448869 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 13:05:07 crc kubenswrapper[4772]: I1122 13:05:07.666252 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-m9z42"] Nov 22 13:05:07 crc kubenswrapper[4772]: I1122 13:05:07.670018 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m9z42" Nov 22 13:05:07 crc kubenswrapper[4772]: I1122 13:05:07.695844 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m9z42"] Nov 22 13:05:07 crc kubenswrapper[4772]: I1122 13:05:07.871491 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wct9\" (UniqueName: \"kubernetes.io/projected/f55b45d0-52f9-4272-a15e-b87d0211c161-kube-api-access-2wct9\") pod \"redhat-marketplace-m9z42\" (UID: \"f55b45d0-52f9-4272-a15e-b87d0211c161\") " pod="openshift-marketplace/redhat-marketplace-m9z42" Nov 22 13:05:07 crc kubenswrapper[4772]: I1122 13:05:07.871563 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f55b45d0-52f9-4272-a15e-b87d0211c161-utilities\") pod \"redhat-marketplace-m9z42\" (UID: \"f55b45d0-52f9-4272-a15e-b87d0211c161\") " pod="openshift-marketplace/redhat-marketplace-m9z42" Nov 22 13:05:07 crc kubenswrapper[4772]: I1122 13:05:07.872462 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f55b45d0-52f9-4272-a15e-b87d0211c161-catalog-content\") pod \"redhat-marketplace-m9z42\" (UID: \"f55b45d0-52f9-4272-a15e-b87d0211c161\") " pod="openshift-marketplace/redhat-marketplace-m9z42" Nov 22 13:05:07 crc kubenswrapper[4772]: I1122 13:05:07.976608 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wct9\" (UniqueName: \"kubernetes.io/projected/f55b45d0-52f9-4272-a15e-b87d0211c161-kube-api-access-2wct9\") pod \"redhat-marketplace-m9z42\" (UID: \"f55b45d0-52f9-4272-a15e-b87d0211c161\") " pod="openshift-marketplace/redhat-marketplace-m9z42" Nov 22 13:05:07 crc kubenswrapper[4772]: I1122 13:05:07.976733 4772 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f55b45d0-52f9-4272-a15e-b87d0211c161-utilities\") pod \"redhat-marketplace-m9z42\" (UID: \"f55b45d0-52f9-4272-a15e-b87d0211c161\") " pod="openshift-marketplace/redhat-marketplace-m9z42" Nov 22 13:05:07 crc kubenswrapper[4772]: I1122 13:05:07.976796 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f55b45d0-52f9-4272-a15e-b87d0211c161-catalog-content\") pod \"redhat-marketplace-m9z42\" (UID: \"f55b45d0-52f9-4272-a15e-b87d0211c161\") " pod="openshift-marketplace/redhat-marketplace-m9z42" Nov 22 13:05:07 crc kubenswrapper[4772]: I1122 13:05:07.977546 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f55b45d0-52f9-4272-a15e-b87d0211c161-utilities\") pod \"redhat-marketplace-m9z42\" (UID: \"f55b45d0-52f9-4272-a15e-b87d0211c161\") " pod="openshift-marketplace/redhat-marketplace-m9z42" Nov 22 13:05:07 crc kubenswrapper[4772]: I1122 13:05:07.977864 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f55b45d0-52f9-4272-a15e-b87d0211c161-catalog-content\") pod \"redhat-marketplace-m9z42\" (UID: \"f55b45d0-52f9-4272-a15e-b87d0211c161\") " pod="openshift-marketplace/redhat-marketplace-m9z42" Nov 22 13:05:08 crc kubenswrapper[4772]: I1122 13:05:08.012411 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wct9\" (UniqueName: \"kubernetes.io/projected/f55b45d0-52f9-4272-a15e-b87d0211c161-kube-api-access-2wct9\") pod \"redhat-marketplace-m9z42\" (UID: \"f55b45d0-52f9-4272-a15e-b87d0211c161\") " pod="openshift-marketplace/redhat-marketplace-m9z42" Nov 22 13:05:08 crc kubenswrapper[4772]: I1122 13:05:08.309814 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m9z42" Nov 22 13:05:08 crc kubenswrapper[4772]: I1122 13:05:08.856839 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m9z42"] Nov 22 13:05:09 crc kubenswrapper[4772]: I1122 13:05:09.201517 4772 generic.go:334] "Generic (PLEG): container finished" podID="f55b45d0-52f9-4272-a15e-b87d0211c161" containerID="120678ff9b41c631b6baa8aa2e1eaf868148c954602ce43e84e0e745fb17cd90" exitCode=0 Nov 22 13:05:09 crc kubenswrapper[4772]: I1122 13:05:09.201614 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m9z42" event={"ID":"f55b45d0-52f9-4272-a15e-b87d0211c161","Type":"ContainerDied","Data":"120678ff9b41c631b6baa8aa2e1eaf868148c954602ce43e84e0e745fb17cd90"} Nov 22 13:05:09 crc kubenswrapper[4772]: I1122 13:05:09.202140 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m9z42" event={"ID":"f55b45d0-52f9-4272-a15e-b87d0211c161","Type":"ContainerStarted","Data":"add303d48a30f14e7e6836b063fe64990f03ccf8e8de86f49f7ed899ab5d672c"} Nov 22 13:05:09 crc kubenswrapper[4772]: I1122 13:05:09.204458 4772 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 13:05:10 crc kubenswrapper[4772]: I1122 13:05:10.226347 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m9z42" event={"ID":"f55b45d0-52f9-4272-a15e-b87d0211c161","Type":"ContainerStarted","Data":"55fc6745927281c4c16a19f988a3e523a47636e9e986a837e6a8a5223dbdac29"} Nov 22 13:05:11 crc kubenswrapper[4772]: I1122 13:05:11.033087 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lqmbg"] Nov 22 13:05:11 crc kubenswrapper[4772]: I1122 13:05:11.041324 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lqmbg" Nov 22 13:05:11 crc kubenswrapper[4772]: I1122 13:05:11.048509 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lqmbg"] Nov 22 13:05:11 crc kubenswrapper[4772]: I1122 13:05:11.189210 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45bf1e2b-2ae0-406e-b72d-9f059a68ecab-catalog-content\") pod \"certified-operators-lqmbg\" (UID: \"45bf1e2b-2ae0-406e-b72d-9f059a68ecab\") " pod="openshift-marketplace/certified-operators-lqmbg" Nov 22 13:05:11 crc kubenswrapper[4772]: I1122 13:05:11.189494 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g76tz\" (UniqueName: \"kubernetes.io/projected/45bf1e2b-2ae0-406e-b72d-9f059a68ecab-kube-api-access-g76tz\") pod \"certified-operators-lqmbg\" (UID: \"45bf1e2b-2ae0-406e-b72d-9f059a68ecab\") " pod="openshift-marketplace/certified-operators-lqmbg" Nov 22 13:05:11 crc kubenswrapper[4772]: I1122 13:05:11.189703 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45bf1e2b-2ae0-406e-b72d-9f059a68ecab-utilities\") pod \"certified-operators-lqmbg\" (UID: \"45bf1e2b-2ae0-406e-b72d-9f059a68ecab\") " pod="openshift-marketplace/certified-operators-lqmbg" Nov 22 13:05:11 crc kubenswrapper[4772]: I1122 13:05:11.236930 4772 generic.go:334] "Generic (PLEG): container finished" podID="f55b45d0-52f9-4272-a15e-b87d0211c161" containerID="55fc6745927281c4c16a19f988a3e523a47636e9e986a837e6a8a5223dbdac29" exitCode=0 Nov 22 13:05:11 crc kubenswrapper[4772]: I1122 13:05:11.237020 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m9z42" event={"ID":"f55b45d0-52f9-4272-a15e-b87d0211c161","Type":"ContainerDied","Data":"55fc6745927281c4c16a19f988a3e523a47636e9e986a837e6a8a5223dbdac29"} Nov 22 13:05:11 crc kubenswrapper[4772]: I1122 13:05:11.292133 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45bf1e2b-2ae0-406e-b72d-9f059a68ecab-catalog-content\") pod \"certified-operators-lqmbg\" (UID: \"45bf1e2b-2ae0-406e-b72d-9f059a68ecab\") " pod="openshift-marketplace/certified-operators-lqmbg" Nov 22 13:05:11 crc kubenswrapper[4772]: I1122 13:05:11.292226 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g76tz\" (UniqueName: \"kubernetes.io/projected/45bf1e2b-2ae0-406e-b72d-9f059a68ecab-kube-api-access-g76tz\") pod \"certified-operators-lqmbg\" (UID: \"45bf1e2b-2ae0-406e-b72d-9f059a68ecab\") " pod="openshift-marketplace/certified-operators-lqmbg" Nov 22 13:05:11 crc kubenswrapper[4772]: I1122 13:05:11.292329 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45bf1e2b-2ae0-406e-b72d-9f059a68ecab-utilities\") pod \"certified-operators-lqmbg\" (UID: \"45bf1e2b-2ae0-406e-b72d-9f059a68ecab\") " pod="openshift-marketplace/certified-operators-lqmbg" Nov 22 13:05:11 crc kubenswrapper[4772]: I1122 13:05:11.292904 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45bf1e2b-2ae0-406e-b72d-9f059a68ecab-catalog-content\") pod \"certified-operators-lqmbg\" 
(UID: \"45bf1e2b-2ae0-406e-b72d-9f059a68ecab\") " pod="openshift-marketplace/certified-operators-lqmbg" Nov 22 13:05:11 crc kubenswrapper[4772]: I1122 13:05:11.292963 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45bf1e2b-2ae0-406e-b72d-9f059a68ecab-utilities\") pod \"certified-operators-lqmbg\" (UID: \"45bf1e2b-2ae0-406e-b72d-9f059a68ecab\") " pod="openshift-marketplace/certified-operators-lqmbg" Nov 22 13:05:11 crc kubenswrapper[4772]: I1122 13:05:11.326481 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g76tz\" (UniqueName: \"kubernetes.io/projected/45bf1e2b-2ae0-406e-b72d-9f059a68ecab-kube-api-access-g76tz\") pod \"certified-operators-lqmbg\" (UID: \"45bf1e2b-2ae0-406e-b72d-9f059a68ecab\") " pod="openshift-marketplace/certified-operators-lqmbg" Nov 22 13:05:11 crc kubenswrapper[4772]: I1122 13:05:11.368034 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lqmbg" Nov 22 13:05:11 crc kubenswrapper[4772]: I1122 13:05:11.900458 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lqmbg"] Nov 22 13:05:11 crc kubenswrapper[4772]: W1122 13:05:11.907674 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45bf1e2b_2ae0_406e_b72d_9f059a68ecab.slice/crio-4d501081b13380418e0c617b90868aa25bb15c207a267f6287b56e0b53a912d1 WatchSource:0}: Error finding container 4d501081b13380418e0c617b90868aa25bb15c207a267f6287b56e0b53a912d1: Status 404 returned error can't find the container with id 4d501081b13380418e0c617b90868aa25bb15c207a267f6287b56e0b53a912d1 Nov 22 13:05:12 crc kubenswrapper[4772]: I1122 13:05:12.252754 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m9z42" event={"ID":"f55b45d0-52f9-4272-a15e-b87d0211c161","Type":"ContainerStarted","Data":"9119e84be201c33ee378bf129110c7a9d104133f16782f268d392965630a4dfa"} Nov 22 13:05:12 crc kubenswrapper[4772]: I1122 13:05:12.255833 4772 generic.go:334] "Generic (PLEG): container finished" podID="45bf1e2b-2ae0-406e-b72d-9f059a68ecab" containerID="263dccab41ee2657d4e4408c51d6bf7ad269cc6cc2637c085e221844b545437e" exitCode=0 Nov 22 13:05:12 crc kubenswrapper[4772]: I1122 13:05:12.255872 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lqmbg" event={"ID":"45bf1e2b-2ae0-406e-b72d-9f059a68ecab","Type":"ContainerDied","Data":"263dccab41ee2657d4e4408c51d6bf7ad269cc6cc2637c085e221844b545437e"} Nov 22 13:05:12 crc kubenswrapper[4772]: I1122 13:05:12.255894 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lqmbg" event={"ID":"45bf1e2b-2ae0-406e-b72d-9f059a68ecab","Type":"ContainerStarted","Data":"4d501081b13380418e0c617b90868aa25bb15c207a267f6287b56e0b53a912d1"} Nov 22 13:05:12 crc kubenswrapper[4772]: I1122 13:05:12.286237 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-m9z42" podStartSLOduration=2.731469898 podStartE2EDuration="5.286203329s" podCreationTimestamp="2025-11-22 13:05:07 +0000 UTC" firstStartedPulling="2025-11-22 13:05:09.2041492 +0000 UTC m=+8829.443593694" lastFinishedPulling="2025-11-22 13:05:11.758882631 +0000 UTC m=+8831.998327125" observedRunningTime="2025-11-22 13:05:12.270711865 +0000 UTC 
m=+8832.510156369" watchObservedRunningTime="2025-11-22 13:05:12.286203329 +0000 UTC m=+8832.525647833" Nov 22 13:05:13 crc kubenswrapper[4772]: I1122 13:05:13.278321 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lqmbg" event={"ID":"45bf1e2b-2ae0-406e-b72d-9f059a68ecab","Type":"ContainerStarted","Data":"a093872ce640d9cef28e87bbb6971aa8b3bde6cf0c496f3ddf1343d51b6d1fc3"} Nov 22 13:05:14 crc kubenswrapper[4772]: I1122 13:05:14.413567 4772 scope.go:117] "RemoveContainer" containerID="5a427682510f2025a1a305f31d041abd640d4c92247ada31a7f887bd361d20ff" Nov 22 13:05:14 crc kubenswrapper[4772]: E1122 13:05:14.414279 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 13:05:15 crc kubenswrapper[4772]: I1122 13:05:15.306460 4772 generic.go:334] "Generic (PLEG): container finished" podID="45bf1e2b-2ae0-406e-b72d-9f059a68ecab" containerID="a093872ce640d9cef28e87bbb6971aa8b3bde6cf0c496f3ddf1343d51b6d1fc3" exitCode=0 Nov 22 13:05:15 crc kubenswrapper[4772]: I1122 13:05:15.306514 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lqmbg" event={"ID":"45bf1e2b-2ae0-406e-b72d-9f059a68ecab","Type":"ContainerDied","Data":"a093872ce640d9cef28e87bbb6971aa8b3bde6cf0c496f3ddf1343d51b6d1fc3"} Nov 22 13:05:16 crc kubenswrapper[4772]: I1122 13:05:16.320815 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lqmbg" event={"ID":"45bf1e2b-2ae0-406e-b72d-9f059a68ecab","Type":"ContainerStarted","Data":"d2a5efadc8c91a2961b9efd592d3c442d6664a154e81ced43bcc8b9404ccf022"} Nov 22 13:05:16 crc kubenswrapper[4772]: I1122 13:05:16.353994 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lqmbg" podStartSLOduration=1.7720148390000001 podStartE2EDuration="5.353961217s" podCreationTimestamp="2025-11-22 13:05:11 +0000 UTC" firstStartedPulling="2025-11-22 13:05:12.258706898 +0000 UTC m=+8832.498151392" lastFinishedPulling="2025-11-22 13:05:15.840653276 +0000 UTC m=+8836.080097770" observedRunningTime="2025-11-22 13:05:16.349132087 +0000 UTC m=+8836.588576581" watchObservedRunningTime="2025-11-22 13:05:16.353961217 +0000 UTC m=+8836.593405701" Nov 22 13:05:18 crc kubenswrapper[4772]: I1122 13:05:18.310994 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-m9z42" Nov 22 13:05:18 crc kubenswrapper[4772]: I1122 13:05:18.311700 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-m9z42" Nov 22 13:05:18 crc kubenswrapper[4772]: I1122 13:05:18.382850 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-m9z42" Nov 22 13:05:18 crc kubenswrapper[4772]: I1122 13:05:18.450331 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-m9z42" Nov 22 13:05:19 crc kubenswrapper[4772]: I1122 13:05:19.622224 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-m9z42"] Nov 22 13:05:20 crc kubenswrapper[4772]: I1122 13:05:20.368111 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-m9z42" podUID="f55b45d0-52f9-4272-a15e-b87d0211c161" containerName="registry-server" containerID="cri-o://9119e84be201c33ee378bf129110c7a9d104133f16782f268d392965630a4dfa" gracePeriod=2 Nov 22 13:05:20 crc kubenswrapper[4772]: I1122 13:05:20.875965 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m9z42" Nov 22 13:05:21 crc kubenswrapper[4772]: I1122 13:05:21.058592 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wct9\" (UniqueName: \"kubernetes.io/projected/f55b45d0-52f9-4272-a15e-b87d0211c161-kube-api-access-2wct9\") pod \"f55b45d0-52f9-4272-a15e-b87d0211c161\" (UID: \"f55b45d0-52f9-4272-a15e-b87d0211c161\") " Nov 22 13:05:21 crc kubenswrapper[4772]: I1122 13:05:21.058740 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f55b45d0-52f9-4272-a15e-b87d0211c161-utilities\") pod \"f55b45d0-52f9-4272-a15e-b87d0211c161\" (UID: \"f55b45d0-52f9-4272-a15e-b87d0211c161\") " Nov 22 13:05:21 crc kubenswrapper[4772]: I1122 13:05:21.058903 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f55b45d0-52f9-4272-a15e-b87d0211c161-catalog-content\") pod \"f55b45d0-52f9-4272-a15e-b87d0211c161\" (UID: \"f55b45d0-52f9-4272-a15e-b87d0211c161\") " Nov 22 13:05:21 crc kubenswrapper[4772]: I1122 13:05:21.061983 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f55b45d0-52f9-4272-a15e-b87d0211c161-utilities" (OuterVolumeSpecName: "utilities") pod "f55b45d0-52f9-4272-a15e-b87d0211c161" (UID: "f55b45d0-52f9-4272-a15e-b87d0211c161"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 13:05:21 crc kubenswrapper[4772]: I1122 13:05:21.066628 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f55b45d0-52f9-4272-a15e-b87d0211c161-kube-api-access-2wct9" (OuterVolumeSpecName: "kube-api-access-2wct9") pod "f55b45d0-52f9-4272-a15e-b87d0211c161" (UID: "f55b45d0-52f9-4272-a15e-b87d0211c161"). InnerVolumeSpecName "kube-api-access-2wct9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 13:05:21 crc kubenswrapper[4772]: I1122 13:05:21.077272 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f55b45d0-52f9-4272-a15e-b87d0211c161-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f55b45d0-52f9-4272-a15e-b87d0211c161" (UID: "f55b45d0-52f9-4272-a15e-b87d0211c161"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 13:05:21 crc kubenswrapper[4772]: I1122 13:05:21.162388 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wct9\" (UniqueName: \"kubernetes.io/projected/f55b45d0-52f9-4272-a15e-b87d0211c161-kube-api-access-2wct9\") on node \"crc\" DevicePath \"\"" Nov 22 13:05:21 crc kubenswrapper[4772]: I1122 13:05:21.162686 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f55b45d0-52f9-4272-a15e-b87d0211c161-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 13:05:21 crc kubenswrapper[4772]: I1122 13:05:21.162795 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f55b45d0-52f9-4272-a15e-b87d0211c161-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 13:05:21 crc kubenswrapper[4772]: I1122 13:05:21.368295 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lqmbg" Nov 22 13:05:21 crc kubenswrapper[4772]: I1122 13:05:21.368829 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lqmbg" Nov 22 13:05:21 crc kubenswrapper[4772]: I1122 13:05:21.387645 4772 generic.go:334] "Generic (PLEG): container finished" podID="f55b45d0-52f9-4272-a15e-b87d0211c161" containerID="9119e84be201c33ee378bf129110c7a9d104133f16782f268d392965630a4dfa" exitCode=0 Nov 22 13:05:21 crc kubenswrapper[4772]: I1122 13:05:21.387710 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m9z42" event={"ID":"f55b45d0-52f9-4272-a15e-b87d0211c161","Type":"ContainerDied","Data":"9119e84be201c33ee378bf129110c7a9d104133f16782f268d392965630a4dfa"} Nov 22 13:05:21 crc kubenswrapper[4772]: I1122 13:05:21.387783 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m9z42" event={"ID":"f55b45d0-52f9-4272-a15e-b87d0211c161","Type":"ContainerDied","Data":"add303d48a30f14e7e6836b063fe64990f03ccf8e8de86f49f7ed899ab5d672c"} Nov 22 13:05:21 crc kubenswrapper[4772]: I1122 13:05:21.388158 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m9z42" Nov 22 13:05:21 crc kubenswrapper[4772]: I1122 13:05:21.388228 4772 scope.go:117] "RemoveContainer" containerID="9119e84be201c33ee378bf129110c7a9d104133f16782f268d392965630a4dfa" Nov 22 13:05:21 crc kubenswrapper[4772]: I1122 13:05:21.449688 4772 scope.go:117] "RemoveContainer" containerID="55fc6745927281c4c16a19f988a3e523a47636e9e986a837e6a8a5223dbdac29" Nov 22 13:05:21 crc kubenswrapper[4772]: I1122 13:05:21.458120 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lqmbg" Nov 22 13:05:21 crc kubenswrapper[4772]: I1122 13:05:21.464232 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m9z42"] Nov 22 13:05:21 crc kubenswrapper[4772]: I1122 13:05:21.478599 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-m9z42"] Nov 22 13:05:21 crc kubenswrapper[4772]: I1122 13:05:21.479577 4772 scope.go:117] "RemoveContainer" containerID="120678ff9b41c631b6baa8aa2e1eaf868148c954602ce43e84e0e745fb17cd90" Nov 22 13:05:21 crc kubenswrapper[4772]: I1122 13:05:21.550399 4772 scope.go:117] "RemoveContainer" containerID="9119e84be201c33ee378bf129110c7a9d104133f16782f268d392965630a4dfa" Nov 22 13:05:21 crc kubenswrapper[4772]: E1122 13:05:21.560262 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9119e84be201c33ee378bf129110c7a9d104133f16782f268d392965630a4dfa\": container with ID starting with 9119e84be201c33ee378bf129110c7a9d104133f16782f268d392965630a4dfa not found: ID does not exist" containerID="9119e84be201c33ee378bf129110c7a9d104133f16782f268d392965630a4dfa" Nov 22 13:05:21 crc kubenswrapper[4772]: I1122 13:05:21.560309 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9119e84be201c33ee378bf129110c7a9d104133f16782f268d392965630a4dfa"} err="failed to get container status \"9119e84be201c33ee378bf129110c7a9d104133f16782f268d392965630a4dfa\": rpc error: code = NotFound desc = could not find container \"9119e84be201c33ee378bf129110c7a9d104133f16782f268d392965630a4dfa\": container with ID starting with 9119e84be201c33ee378bf129110c7a9d104133f16782f268d392965630a4dfa not found: ID does not exist" Nov 22 13:05:21 crc kubenswrapper[4772]: I1122 13:05:21.560339 4772 scope.go:117] "RemoveContainer" containerID="55fc6745927281c4c16a19f988a3e523a47636e9e986a837e6a8a5223dbdac29" Nov 22 13:05:21 crc kubenswrapper[4772]: E1122 13:05:21.560871 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55fc6745927281c4c16a19f988a3e523a47636e9e986a837e6a8a5223dbdac29\": container with ID starting with 55fc6745927281c4c16a19f988a3e523a47636e9e986a837e6a8a5223dbdac29 not found: ID does not exist" containerID="55fc6745927281c4c16a19f988a3e523a47636e9e986a837e6a8a5223dbdac29" Nov 22 13:05:21 crc kubenswrapper[4772]: I1122 13:05:21.560949 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55fc6745927281c4c16a19f988a3e523a47636e9e986a837e6a8a5223dbdac29"} err="failed to get container status \"55fc6745927281c4c16a19f988a3e523a47636e9e986a837e6a8a5223dbdac29\": rpc error: code = NotFound desc = could not find container \"55fc6745927281c4c16a19f988a3e523a47636e9e986a837e6a8a5223dbdac29\": container with ID starting with 
55fc6745927281c4c16a19f988a3e523a47636e9e986a837e6a8a5223dbdac29 not found: ID does not exist" Nov 22 13:05:21 crc kubenswrapper[4772]: I1122 13:05:21.560985 4772 scope.go:117] "RemoveContainer" containerID="120678ff9b41c631b6baa8aa2e1eaf868148c954602ce43e84e0e745fb17cd90" Nov 22 13:05:21 crc kubenswrapper[4772]: E1122 13:05:21.563399 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"120678ff9b41c631b6baa8aa2e1eaf868148c954602ce43e84e0e745fb17cd90\": container with ID starting with 120678ff9b41c631b6baa8aa2e1eaf868148c954602ce43e84e0e745fb17cd90 not found: ID does not exist" containerID="120678ff9b41c631b6baa8aa2e1eaf868148c954602ce43e84e0e745fb17cd90" Nov 22 13:05:21 crc kubenswrapper[4772]: I1122 13:05:21.563429 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"120678ff9b41c631b6baa8aa2e1eaf868148c954602ce43e84e0e745fb17cd90"} err="failed to get container status \"120678ff9b41c631b6baa8aa2e1eaf868148c954602ce43e84e0e745fb17cd90\": rpc error: code = NotFound desc = could not find container \"120678ff9b41c631b6baa8aa2e1eaf868148c954602ce43e84e0e745fb17cd90\": container with ID starting with 120678ff9b41c631b6baa8aa2e1eaf868148c954602ce43e84e0e745fb17cd90 not found: ID does not exist" Nov 22 13:05:22 crc kubenswrapper[4772]: I1122 13:05:22.447572 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lqmbg" Nov 22 13:05:23 crc kubenswrapper[4772]: I1122 13:05:23.427608 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f55b45d0-52f9-4272-a15e-b87d0211c161" path="/var/lib/kubelet/pods/f55b45d0-52f9-4272-a15e-b87d0211c161/volumes" Nov 22 13:05:23 crc kubenswrapper[4772]: I1122 13:05:23.827990 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lqmbg"] Nov 22 13:05:25 crc kubenswrapper[4772]: I1122 13:05:25.443625 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lqmbg" podUID="45bf1e2b-2ae0-406e-b72d-9f059a68ecab" containerName="registry-server" containerID="cri-o://d2a5efadc8c91a2961b9efd592d3c442d6664a154e81ced43bcc8b9404ccf022" gracePeriod=2 Nov 22 13:05:26 crc kubenswrapper[4772]: I1122 13:05:26.032129 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lqmbg" Nov 22 13:05:26 crc kubenswrapper[4772]: I1122 13:05:26.094686 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45bf1e2b-2ae0-406e-b72d-9f059a68ecab-utilities\") pod \"45bf1e2b-2ae0-406e-b72d-9f059a68ecab\" (UID: \"45bf1e2b-2ae0-406e-b72d-9f059a68ecab\") " Nov 22 13:05:26 crc kubenswrapper[4772]: I1122 13:05:26.094811 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g76tz\" (UniqueName: \"kubernetes.io/projected/45bf1e2b-2ae0-406e-b72d-9f059a68ecab-kube-api-access-g76tz\") pod \"45bf1e2b-2ae0-406e-b72d-9f059a68ecab\" (UID: \"45bf1e2b-2ae0-406e-b72d-9f059a68ecab\") " Nov 22 13:05:26 crc kubenswrapper[4772]: I1122 13:05:26.096831 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45bf1e2b-2ae0-406e-b72d-9f059a68ecab-utilities" (OuterVolumeSpecName: "utilities") pod "45bf1e2b-2ae0-406e-b72d-9f059a68ecab" (UID: "45bf1e2b-2ae0-406e-b72d-9f059a68ecab"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 13:05:26 crc kubenswrapper[4772]: I1122 13:05:26.102378 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45bf1e2b-2ae0-406e-b72d-9f059a68ecab-kube-api-access-g76tz" (OuterVolumeSpecName: "kube-api-access-g76tz") pod "45bf1e2b-2ae0-406e-b72d-9f059a68ecab" (UID: "45bf1e2b-2ae0-406e-b72d-9f059a68ecab"). InnerVolumeSpecName "kube-api-access-g76tz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 13:05:26 crc kubenswrapper[4772]: I1122 13:05:26.196248 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45bf1e2b-2ae0-406e-b72d-9f059a68ecab-catalog-content\") pod \"45bf1e2b-2ae0-406e-b72d-9f059a68ecab\" (UID: \"45bf1e2b-2ae0-406e-b72d-9f059a68ecab\") " Nov 22 13:05:26 crc kubenswrapper[4772]: I1122 13:05:26.197471 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g76tz\" (UniqueName: \"kubernetes.io/projected/45bf1e2b-2ae0-406e-b72d-9f059a68ecab-kube-api-access-g76tz\") on node \"crc\" DevicePath \"\"" Nov 22 13:05:26 crc kubenswrapper[4772]: I1122 13:05:26.197495 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45bf1e2b-2ae0-406e-b72d-9f059a68ecab-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 13:05:26 crc kubenswrapper[4772]: I1122 13:05:26.244786 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45bf1e2b-2ae0-406e-b72d-9f059a68ecab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "45bf1e2b-2ae0-406e-b72d-9f059a68ecab" (UID: "45bf1e2b-2ae0-406e-b72d-9f059a68ecab"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 13:05:26 crc kubenswrapper[4772]: I1122 13:05:26.298983 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45bf1e2b-2ae0-406e-b72d-9f059a68ecab-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 13:05:26 crc kubenswrapper[4772]: I1122 13:05:26.456115 4772 generic.go:334] "Generic (PLEG): container finished" podID="45bf1e2b-2ae0-406e-b72d-9f059a68ecab" containerID="d2a5efadc8c91a2961b9efd592d3c442d6664a154e81ced43bcc8b9404ccf022" exitCode=0 Nov 22 13:05:26 crc kubenswrapper[4772]: I1122 13:05:26.456173 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lqmbg" event={"ID":"45bf1e2b-2ae0-406e-b72d-9f059a68ecab","Type":"ContainerDied","Data":"d2a5efadc8c91a2961b9efd592d3c442d6664a154e81ced43bcc8b9404ccf022"} Nov 22 13:05:26 crc kubenswrapper[4772]: I1122 13:05:26.456208 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lqmbg" event={"ID":"45bf1e2b-2ae0-406e-b72d-9f059a68ecab","Type":"ContainerDied","Data":"4d501081b13380418e0c617b90868aa25bb15c207a267f6287b56e0b53a912d1"} Nov 22 13:05:26 crc kubenswrapper[4772]: I1122 13:05:26.456233 4772 scope.go:117] "RemoveContainer" containerID="d2a5efadc8c91a2961b9efd592d3c442d6664a154e81ced43bcc8b9404ccf022" Nov 22 13:05:26 crc kubenswrapper[4772]: I1122 13:05:26.456418 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lqmbg" Nov 22 13:05:26 crc kubenswrapper[4772]: I1122 13:05:26.490784 4772 scope.go:117] "RemoveContainer" containerID="a093872ce640d9cef28e87bbb6971aa8b3bde6cf0c496f3ddf1343d51b6d1fc3" Nov 22 13:05:26 crc kubenswrapper[4772]: I1122 13:05:26.502251 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lqmbg"] Nov 22 13:05:26 crc kubenswrapper[4772]: I1122 13:05:26.513098 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lqmbg"] Nov 22 13:05:26 crc kubenswrapper[4772]: I1122 13:05:26.516172 4772 scope.go:117] "RemoveContainer" containerID="263dccab41ee2657d4e4408c51d6bf7ad269cc6cc2637c085e221844b545437e" Nov 22 13:05:26 crc kubenswrapper[4772]: I1122 13:05:26.566623 4772 scope.go:117] "RemoveContainer" containerID="d2a5efadc8c91a2961b9efd592d3c442d6664a154e81ced43bcc8b9404ccf022" Nov 22 13:05:26 crc kubenswrapper[4772]: E1122 13:05:26.567085 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2a5efadc8c91a2961b9efd592d3c442d6664a154e81ced43bcc8b9404ccf022\": container with ID starting with d2a5efadc8c91a2961b9efd592d3c442d6664a154e81ced43bcc8b9404ccf022 not found: ID does not exist" containerID="d2a5efadc8c91a2961b9efd592d3c442d6664a154e81ced43bcc8b9404ccf022" Nov 22 13:05:26 crc kubenswrapper[4772]: I1122 13:05:26.567130 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2a5efadc8c91a2961b9efd592d3c442d6664a154e81ced43bcc8b9404ccf022"} err="failed to get container status \"d2a5efadc8c91a2961b9efd592d3c442d6664a154e81ced43bcc8b9404ccf022\": rpc error: code = NotFound desc = could not find container \"d2a5efadc8c91a2961b9efd592d3c442d6664a154e81ced43bcc8b9404ccf022\": container with ID starting with d2a5efadc8c91a2961b9efd592d3c442d6664a154e81ced43bcc8b9404ccf022 not found: ID does not exist" Nov 22 
13:05:26 crc kubenswrapper[4772]: I1122 13:05:26.567156 4772 scope.go:117] "RemoveContainer" containerID="a093872ce640d9cef28e87bbb6971aa8b3bde6cf0c496f3ddf1343d51b6d1fc3" Nov 22 13:05:26 crc kubenswrapper[4772]: E1122 13:05:26.567388 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a093872ce640d9cef28e87bbb6971aa8b3bde6cf0c496f3ddf1343d51b6d1fc3\": container with ID starting with a093872ce640d9cef28e87bbb6971aa8b3bde6cf0c496f3ddf1343d51b6d1fc3 not found: ID does not exist" containerID="a093872ce640d9cef28e87bbb6971aa8b3bde6cf0c496f3ddf1343d51b6d1fc3" Nov 22 13:05:26 crc kubenswrapper[4772]: I1122 13:05:26.567421 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a093872ce640d9cef28e87bbb6971aa8b3bde6cf0c496f3ddf1343d51b6d1fc3"} err="failed to get container status \"a093872ce640d9cef28e87bbb6971aa8b3bde6cf0c496f3ddf1343d51b6d1fc3\": rpc error: code = NotFound desc = could not find container \"a093872ce640d9cef28e87bbb6971aa8b3bde6cf0c496f3ddf1343d51b6d1fc3\": container with ID starting with a093872ce640d9cef28e87bbb6971aa8b3bde6cf0c496f3ddf1343d51b6d1fc3 not found: ID does not exist" Nov 22 13:05:26 crc kubenswrapper[4772]: I1122 13:05:26.567436 4772 scope.go:117] "RemoveContainer" containerID="263dccab41ee2657d4e4408c51d6bf7ad269cc6cc2637c085e221844b545437e" Nov 22 13:05:26 crc kubenswrapper[4772]: E1122 13:05:26.567731 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"263dccab41ee2657d4e4408c51d6bf7ad269cc6cc2637c085e221844b545437e\": container with ID starting with 263dccab41ee2657d4e4408c51d6bf7ad269cc6cc2637c085e221844b545437e not found: ID does not exist" containerID="263dccab41ee2657d4e4408c51d6bf7ad269cc6cc2637c085e221844b545437e" Nov 22 13:05:26 crc kubenswrapper[4772]: I1122 13:05:26.567771 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"263dccab41ee2657d4e4408c51d6bf7ad269cc6cc2637c085e221844b545437e"} err="failed to get container status \"263dccab41ee2657d4e4408c51d6bf7ad269cc6cc2637c085e221844b545437e\": rpc error: code = NotFound desc = could not find container \"263dccab41ee2657d4e4408c51d6bf7ad269cc6cc2637c085e221844b545437e\": container with ID starting with 263dccab41ee2657d4e4408c51d6bf7ad269cc6cc2637c085e221844b545437e not found: ID does not exist" Nov 22 13:05:27 crc kubenswrapper[4772]: I1122 13:05:27.432604 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45bf1e2b-2ae0-406e-b72d-9f059a68ecab" path="/var/lib/kubelet/pods/45bf1e2b-2ae0-406e-b72d-9f059a68ecab/volumes" Nov 22 13:05:29 crc kubenswrapper[4772]: I1122 13:05:29.413439 4772 scope.go:117] "RemoveContainer" containerID="5a427682510f2025a1a305f31d041abd640d4c92247ada31a7f887bd361d20ff" Nov 22 13:05:29 crc kubenswrapper[4772]: E1122 13:05:29.414095 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 13:05:44 crc kubenswrapper[4772]: I1122 13:05:44.413471 4772 scope.go:117] "RemoveContainer" 
containerID="5a427682510f2025a1a305f31d041abd640d4c92247ada31a7f887bd361d20ff" Nov 22 13:05:44 crc kubenswrapper[4772]: E1122 13:05:44.414529 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 13:05:59 crc kubenswrapper[4772]: I1122 13:05:59.413722 4772 scope.go:117] "RemoveContainer" containerID="5a427682510f2025a1a305f31d041abd640d4c92247ada31a7f887bd361d20ff" Nov 22 13:05:59 crc kubenswrapper[4772]: E1122 13:05:59.414414 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 13:06:10 crc kubenswrapper[4772]: I1122 13:06:10.414203 4772 scope.go:117] "RemoveContainer" containerID="5a427682510f2025a1a305f31d041abd640d4c92247ada31a7f887bd361d20ff" Nov 22 13:06:10 crc kubenswrapper[4772]: I1122 13:06:10.943519 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" event={"ID":"2386c238-461f-4956-940f-ac3c26eb052e","Type":"ContainerStarted","Data":"2ea72bc241df07d653f70fa21c2180bbaa140e9970b64a91f86789fd5c03b741"} Nov 22 13:06:24 crc kubenswrapper[4772]: I1122 13:06:24.100264 4772 generic.go:334] "Generic (PLEG): container finished" podID="de658c90-11c4-4861-b1df-5dfb7da0bdf0" containerID="d09010202b300f1ff85313a66d41a3bb385cf2b4e41dee93d35c02bd5556a51c" exitCode=0 Nov 22 13:06:24 crc kubenswrapper[4772]: I1122 13:06:24.100470 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-jwgtt" event={"ID":"de658c90-11c4-4861-b1df-5dfb7da0bdf0","Type":"ContainerDied","Data":"d09010202b300f1ff85313a66d41a3bb385cf2b4e41dee93d35c02bd5556a51c"} Nov 22 13:06:26 crc kubenswrapper[4772]: I1122 13:06:26.471835 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-jwgtt" Nov 22 13:06:26 crc kubenswrapper[4772]: I1122 13:06:26.593406 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/de658c90-11c4-4861-b1df-5dfb7da0bdf0-neutron-dhcp-agent-neutron-config-0\") pod \"de658c90-11c4-4861-b1df-5dfb7da0bdf0\" (UID: \"de658c90-11c4-4861-b1df-5dfb7da0bdf0\") " Nov 22 13:06:26 crc kubenswrapper[4772]: I1122 13:06:26.593480 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/de658c90-11c4-4861-b1df-5dfb7da0bdf0-ssh-key\") pod \"de658c90-11c4-4861-b1df-5dfb7da0bdf0\" (UID: \"de658c90-11c4-4861-b1df-5dfb7da0bdf0\") " Nov 22 13:06:26 crc kubenswrapper[4772]: I1122 13:06:26.593527 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de658c90-11c4-4861-b1df-5dfb7da0bdf0-inventory\") pod \"de658c90-11c4-4861-b1df-5dfb7da0bdf0\" (UID: \"de658c90-11c4-4861-b1df-5dfb7da0bdf0\") " Nov 22 13:06:26 crc kubenswrapper[4772]: I1122 13:06:26.593570 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de658c90-11c4-4861-b1df-5dfb7da0bdf0-neutron-dhcp-combined-ca-bundle\") pod \"de658c90-11c4-4861-b1df-5dfb7da0bdf0\" (UID: \"de658c90-11c4-4861-b1df-5dfb7da0bdf0\") " Nov 22 13:06:26 crc kubenswrapper[4772]: I1122 13:06:26.593853 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fjsr\" (UniqueName: \"kubernetes.io/projected/de658c90-11c4-4861-b1df-5dfb7da0bdf0-kube-api-access-4fjsr\") pod \"de658c90-11c4-4861-b1df-5dfb7da0bdf0\" (UID: \"de658c90-11c4-4861-b1df-5dfb7da0bdf0\") " Nov 22 13:06:26 crc kubenswrapper[4772]: I1122 13:06:26.593878 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/de658c90-11c4-4861-b1df-5dfb7da0bdf0-ceph\") pod \"de658c90-11c4-4861-b1df-5dfb7da0bdf0\" (UID: \"de658c90-11c4-4861-b1df-5dfb7da0bdf0\") " Nov 22 13:06:26 crc kubenswrapper[4772]: I1122 13:06:26.600537 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de658c90-11c4-4861-b1df-5dfb7da0bdf0-neutron-dhcp-combined-ca-bundle" (OuterVolumeSpecName: "neutron-dhcp-combined-ca-bundle") pod "de658c90-11c4-4861-b1df-5dfb7da0bdf0" (UID: "de658c90-11c4-4861-b1df-5dfb7da0bdf0"). InnerVolumeSpecName "neutron-dhcp-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 13:06:26 crc kubenswrapper[4772]: I1122 13:06:26.602578 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de658c90-11c4-4861-b1df-5dfb7da0bdf0-ceph" (OuterVolumeSpecName: "ceph") pod "de658c90-11c4-4861-b1df-5dfb7da0bdf0" (UID: "de658c90-11c4-4861-b1df-5dfb7da0bdf0"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 13:06:26 crc kubenswrapper[4772]: I1122 13:06:26.604441 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de658c90-11c4-4861-b1df-5dfb7da0bdf0-kube-api-access-4fjsr" (OuterVolumeSpecName: "kube-api-access-4fjsr") pod "de658c90-11c4-4861-b1df-5dfb7da0bdf0" (UID: "de658c90-11c4-4861-b1df-5dfb7da0bdf0"). InnerVolumeSpecName "kube-api-access-4fjsr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 13:06:26 crc kubenswrapper[4772]: I1122 13:06:26.648614 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de658c90-11c4-4861-b1df-5dfb7da0bdf0-neutron-dhcp-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-dhcp-agent-neutron-config-0") pod "de658c90-11c4-4861-b1df-5dfb7da0bdf0" (UID: "de658c90-11c4-4861-b1df-5dfb7da0bdf0"). InnerVolumeSpecName "neutron-dhcp-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 13:06:26 crc kubenswrapper[4772]: I1122 13:06:26.651760 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de658c90-11c4-4861-b1df-5dfb7da0bdf0-inventory" (OuterVolumeSpecName: "inventory") pod "de658c90-11c4-4861-b1df-5dfb7da0bdf0" (UID: "de658c90-11c4-4861-b1df-5dfb7da0bdf0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 13:06:26 crc kubenswrapper[4772]: I1122 13:06:26.665777 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de658c90-11c4-4861-b1df-5dfb7da0bdf0-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "de658c90-11c4-4861-b1df-5dfb7da0bdf0" (UID: "de658c90-11c4-4861-b1df-5dfb7da0bdf0"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 13:06:26 crc kubenswrapper[4772]: I1122 13:06:26.696812 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4fjsr\" (UniqueName: \"kubernetes.io/projected/de658c90-11c4-4861-b1df-5dfb7da0bdf0-kube-api-access-4fjsr\") on node \"crc\" DevicePath \"\"" Nov 22 13:06:26 crc kubenswrapper[4772]: I1122 13:06:26.696844 4772 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/de658c90-11c4-4861-b1df-5dfb7da0bdf0-ceph\") on node \"crc\" DevicePath \"\"" Nov 22 13:06:26 crc kubenswrapper[4772]: I1122 13:06:26.696856 4772 reconciler_common.go:293] "Volume detached for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/de658c90-11c4-4861-b1df-5dfb7da0bdf0-neutron-dhcp-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 22 13:06:26 crc kubenswrapper[4772]: I1122 13:06:26.696883 4772 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/de658c90-11c4-4861-b1df-5dfb7da0bdf0-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 22 13:06:26 crc kubenswrapper[4772]: I1122 13:06:26.696891 4772 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de658c90-11c4-4861-b1df-5dfb7da0bdf0-inventory\") on node \"crc\" DevicePath \"\"" Nov 22 13:06:26 crc kubenswrapper[4772]: I1122 13:06:26.696902 4772 reconciler_common.go:293] "Volume detached for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de658c90-11c4-4861-b1df-5dfb7da0bdf0-neutron-dhcp-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 13:06:27 crc kubenswrapper[4772]: I1122 13:06:27.135698 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-jwgtt" event={"ID":"de658c90-11c4-4861-b1df-5dfb7da0bdf0","Type":"ContainerDied","Data":"c26055bc5bd6c1135718231d72069b12770c93f05f414b2e558f9a5b2c54fd9b"} Nov 22 13:06:27 crc kubenswrapper[4772]: I1122 13:06:27.135993 4772 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="c26055bc5bd6c1135718231d72069b12770c93f05f414b2e558f9a5b2c54fd9b" Nov 22 13:06:27 crc kubenswrapper[4772]: I1122 13:06:27.135960 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-jwgtt" Nov 22 13:06:53 crc kubenswrapper[4772]: I1122 13:06:53.536801 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 22 13:06:53 crc kubenswrapper[4772]: I1122 13:06:53.537476 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="1b309be3-8476-41bd-a801-d93e986b8f8e" containerName="nova-cell0-conductor-conductor" containerID="cri-o://41128fabedcca658a5e1688648bc72c79ff52802feac833d39e78763ef1092a3" gracePeriod=30 Nov 22 13:06:53 crc kubenswrapper[4772]: I1122 13:06:53.571941 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 22 13:06:53 crc kubenswrapper[4772]: I1122 13:06:53.572163 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="b6e24976-7404-46e3-8cbf-e71d378c8bff" containerName="nova-cell1-conductor-conductor" containerID="cri-o://65893173ceacbe538d1c862823e11c7300e7b68e68c20733fabb4002e997abbe" gracePeriod=30 Nov 22 13:06:53 crc kubenswrapper[4772]: E1122 13:06:53.813268 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="65893173ceacbe538d1c862823e11c7300e7b68e68c20733fabb4002e997abbe" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 22 13:06:53 crc kubenswrapper[4772]: E1122 13:06:53.815282 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="65893173ceacbe538d1c862823e11c7300e7b68e68c20733fabb4002e997abbe" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 22 13:06:53 crc kubenswrapper[4772]: E1122 13:06:53.817381 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="65893173ceacbe538d1c862823e11c7300e7b68e68c20733fabb4002e997abbe" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 22 13:06:53 crc kubenswrapper[4772]: E1122 13:06:53.817468 4772 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="b6e24976-7404-46e3-8cbf-e71d378c8bff" containerName="nova-cell1-conductor-conductor" Nov 22 13:06:54 crc kubenswrapper[4772]: I1122 13:06:54.116129 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs"] Nov 22 13:06:54 crc kubenswrapper[4772]: E1122 13:06:54.116931 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f55b45d0-52f9-4272-a15e-b87d0211c161" containerName="extract-utilities" Nov 22 13:06:54 crc kubenswrapper[4772]: I1122 13:06:54.116961 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="f55b45d0-52f9-4272-a15e-b87d0211c161" containerName="extract-utilities" Nov 22 13:06:54 crc kubenswrapper[4772]: 
E1122 13:06:54.116982 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f55b45d0-52f9-4272-a15e-b87d0211c161" containerName="extract-content" Nov 22 13:06:54 crc kubenswrapper[4772]: I1122 13:06:54.116990 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="f55b45d0-52f9-4272-a15e-b87d0211c161" containerName="extract-content" Nov 22 13:06:54 crc kubenswrapper[4772]: E1122 13:06:54.117018 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45bf1e2b-2ae0-406e-b72d-9f059a68ecab" containerName="registry-server" Nov 22 13:06:54 crc kubenswrapper[4772]: I1122 13:06:54.117024 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="45bf1e2b-2ae0-406e-b72d-9f059a68ecab" containerName="registry-server" Nov 22 13:06:54 crc kubenswrapper[4772]: E1122 13:06:54.117071 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45bf1e2b-2ae0-406e-b72d-9f059a68ecab" containerName="extract-content" Nov 22 13:06:54 crc kubenswrapper[4772]: I1122 13:06:54.117077 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="45bf1e2b-2ae0-406e-b72d-9f059a68ecab" containerName="extract-content" Nov 22 13:06:54 crc kubenswrapper[4772]: E1122 13:06:54.117100 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f55b45d0-52f9-4272-a15e-b87d0211c161" containerName="registry-server" Nov 22 13:06:54 crc kubenswrapper[4772]: I1122 13:06:54.117107 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="f55b45d0-52f9-4272-a15e-b87d0211c161" containerName="registry-server" Nov 22 13:06:54 crc kubenswrapper[4772]: E1122 13:06:54.117124 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de658c90-11c4-4861-b1df-5dfb7da0bdf0" containerName="neutron-dhcp-openstack-openstack-cell1" Nov 22 13:06:54 crc kubenswrapper[4772]: I1122 13:06:54.117132 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="de658c90-11c4-4861-b1df-5dfb7da0bdf0" containerName="neutron-dhcp-openstack-openstack-cell1" Nov 22 13:06:54 crc kubenswrapper[4772]: E1122 13:06:54.117147 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45bf1e2b-2ae0-406e-b72d-9f059a68ecab" containerName="extract-utilities" Nov 22 13:06:54 crc kubenswrapper[4772]: I1122 13:06:54.117153 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="45bf1e2b-2ae0-406e-b72d-9f059a68ecab" containerName="extract-utilities" Nov 22 13:06:54 crc kubenswrapper[4772]: I1122 13:06:54.117387 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="f55b45d0-52f9-4272-a15e-b87d0211c161" containerName="registry-server" Nov 22 13:06:54 crc kubenswrapper[4772]: I1122 13:06:54.117409 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="de658c90-11c4-4861-b1df-5dfb7da0bdf0" containerName="neutron-dhcp-openstack-openstack-cell1" Nov 22 13:06:54 crc kubenswrapper[4772]: I1122 13:06:54.117430 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="45bf1e2b-2ae0-406e-b72d-9f059a68ecab" containerName="registry-server" Nov 22 13:06:54 crc kubenswrapper[4772]: I1122 13:06:54.118634 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs" Nov 22 13:06:54 crc kubenswrapper[4772]: I1122 13:06:54.123056 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 22 13:06:54 crc kubenswrapper[4772]: I1122 13:06:54.123147 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-cells-global-config" Nov 22 13:06:54 crc kubenswrapper[4772]: I1122 13:06:54.123233 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 22 13:06:54 crc kubenswrapper[4772]: I1122 13:06:54.123576 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-6s2nz" Nov 22 13:06:54 crc kubenswrapper[4772]: I1122 13:06:54.123624 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Nov 22 13:06:54 crc kubenswrapper[4772]: I1122 13:06:54.123630 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 22 13:06:54 crc kubenswrapper[4772]: I1122 13:06:54.126127 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Nov 22 13:06:54 crc kubenswrapper[4772]: I1122 13:06:54.131240 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs"] Nov 22 13:06:54 crc kubenswrapper[4772]: I1122 13:06:54.239623 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmmbc\" (UniqueName: \"kubernetes.io/projected/819a5030-8de8-4772-86bf-9d6fb6f8de4e-kube-api-access-tmmbc\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs\" (UID: \"819a5030-8de8-4772-86bf-9d6fb6f8de4e\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs" Nov 22 13:06:54 crc kubenswrapper[4772]: I1122 13:06:54.240111 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/819a5030-8de8-4772-86bf-9d6fb6f8de4e-nova-cells-global-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs\" (UID: \"819a5030-8de8-4772-86bf-9d6fb6f8de4e\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs" Nov 22 13:06:54 crc kubenswrapper[4772]: I1122 13:06:54.240168 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/819a5030-8de8-4772-86bf-9d6fb6f8de4e-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs\" (UID: \"819a5030-8de8-4772-86bf-9d6fb6f8de4e\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs" Nov 22 13:06:54 crc kubenswrapper[4772]: I1122 13:06:54.240301 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/819a5030-8de8-4772-86bf-9d6fb6f8de4e-ceph\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs\" (UID: \"819a5030-8de8-4772-86bf-9d6fb6f8de4e\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs" Nov 22 13:06:54 crc kubenswrapper[4772]: I1122 13:06:54.240363 4772 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/819a5030-8de8-4772-86bf-9d6fb6f8de4e-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs\" (UID: \"819a5030-8de8-4772-86bf-9d6fb6f8de4e\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs" Nov 22 13:06:54 crc kubenswrapper[4772]: I1122 13:06:54.240648 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/819a5030-8de8-4772-86bf-9d6fb6f8de4e-inventory\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs\" (UID: \"819a5030-8de8-4772-86bf-9d6fb6f8de4e\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs" Nov 22 13:06:54 crc kubenswrapper[4772]: I1122 13:06:54.240776 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/819a5030-8de8-4772-86bf-9d6fb6f8de4e-ssh-key\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs\" (UID: \"819a5030-8de8-4772-86bf-9d6fb6f8de4e\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs" Nov 22 13:06:54 crc kubenswrapper[4772]: I1122 13:06:54.240942 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/819a5030-8de8-4772-86bf-9d6fb6f8de4e-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs\" (UID: \"819a5030-8de8-4772-86bf-9d6fb6f8de4e\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs" Nov 22 13:06:54 crc kubenswrapper[4772]: I1122 13:06:54.240986 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/819a5030-8de8-4772-86bf-9d6fb6f8de4e-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs\" (UID: \"819a5030-8de8-4772-86bf-9d6fb6f8de4e\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs" Nov 22 13:06:54 crc kubenswrapper[4772]: I1122 13:06:54.241165 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/819a5030-8de8-4772-86bf-9d6fb6f8de4e-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs\" (UID: \"819a5030-8de8-4772-86bf-9d6fb6f8de4e\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs" Nov 22 13:06:54 crc kubenswrapper[4772]: I1122 13:06:54.241253 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/819a5030-8de8-4772-86bf-9d6fb6f8de4e-nova-cells-global-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs\" (UID: \"819a5030-8de8-4772-86bf-9d6fb6f8de4e\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs" Nov 22 13:06:54 crc kubenswrapper[4772]: I1122 13:06:54.343732 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/819a5030-8de8-4772-86bf-9d6fb6f8de4e-inventory\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs\" (UID: \"819a5030-8de8-4772-86bf-9d6fb6f8de4e\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs" Nov 22 13:06:54 crc kubenswrapper[4772]: I1122 13:06:54.343824 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/819a5030-8de8-4772-86bf-9d6fb6f8de4e-ssh-key\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs\" (UID: \"819a5030-8de8-4772-86bf-9d6fb6f8de4e\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs" Nov 22 13:06:54 crc kubenswrapper[4772]: I1122 13:06:54.343912 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/819a5030-8de8-4772-86bf-9d6fb6f8de4e-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs\" (UID: \"819a5030-8de8-4772-86bf-9d6fb6f8de4e\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs" Nov 22 13:06:54 crc kubenswrapper[4772]: I1122 13:06:54.343947 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/819a5030-8de8-4772-86bf-9d6fb6f8de4e-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs\" (UID: \"819a5030-8de8-4772-86bf-9d6fb6f8de4e\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs" Nov 22 13:06:54 crc kubenswrapper[4772]: I1122 13:06:54.344168 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/819a5030-8de8-4772-86bf-9d6fb6f8de4e-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs\" (UID: \"819a5030-8de8-4772-86bf-9d6fb6f8de4e\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs" Nov 22 13:06:54 crc kubenswrapper[4772]: I1122 13:06:54.344240 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/819a5030-8de8-4772-86bf-9d6fb6f8de4e-nova-cells-global-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs\" (UID: \"819a5030-8de8-4772-86bf-9d6fb6f8de4e\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs" Nov 22 13:06:54 crc kubenswrapper[4772]: I1122 13:06:54.344340 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmmbc\" (UniqueName: \"kubernetes.io/projected/819a5030-8de8-4772-86bf-9d6fb6f8de4e-kube-api-access-tmmbc\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs\" (UID: \"819a5030-8de8-4772-86bf-9d6fb6f8de4e\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs" Nov 22 13:06:54 crc kubenswrapper[4772]: I1122 13:06:54.344606 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/819a5030-8de8-4772-86bf-9d6fb6f8de4e-nova-cells-global-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs\" (UID: \"819a5030-8de8-4772-86bf-9d6fb6f8de4e\") " 
pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs" Nov 22 13:06:54 crc kubenswrapper[4772]: I1122 13:06:54.345147 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/819a5030-8de8-4772-86bf-9d6fb6f8de4e-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs\" (UID: \"819a5030-8de8-4772-86bf-9d6fb6f8de4e\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs" Nov 22 13:06:54 crc kubenswrapper[4772]: I1122 13:06:54.345240 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/819a5030-8de8-4772-86bf-9d6fb6f8de4e-ceph\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs\" (UID: \"819a5030-8de8-4772-86bf-9d6fb6f8de4e\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs" Nov 22 13:06:54 crc kubenswrapper[4772]: I1122 13:06:54.345341 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/819a5030-8de8-4772-86bf-9d6fb6f8de4e-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs\" (UID: \"819a5030-8de8-4772-86bf-9d6fb6f8de4e\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs" Nov 22 13:06:54 crc kubenswrapper[4772]: I1122 13:06:54.345566 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/819a5030-8de8-4772-86bf-9d6fb6f8de4e-nova-cells-global-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs\" (UID: \"819a5030-8de8-4772-86bf-9d6fb6f8de4e\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs" Nov 22 13:06:54 crc kubenswrapper[4772]: I1122 13:06:54.345584 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/819a5030-8de8-4772-86bf-9d6fb6f8de4e-nova-cells-global-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs\" (UID: \"819a5030-8de8-4772-86bf-9d6fb6f8de4e\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs" Nov 22 13:06:54 crc kubenswrapper[4772]: I1122 13:06:54.353001 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/819a5030-8de8-4772-86bf-9d6fb6f8de4e-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs\" (UID: \"819a5030-8de8-4772-86bf-9d6fb6f8de4e\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs" Nov 22 13:06:54 crc kubenswrapper[4772]: I1122 13:06:54.354533 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/819a5030-8de8-4772-86bf-9d6fb6f8de4e-ceph\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs\" (UID: \"819a5030-8de8-4772-86bf-9d6fb6f8de4e\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs" Nov 22 13:06:54 crc kubenswrapper[4772]: I1122 13:06:54.354759 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: 
\"kubernetes.io/secret/819a5030-8de8-4772-86bf-9d6fb6f8de4e-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs\" (UID: \"819a5030-8de8-4772-86bf-9d6fb6f8de4e\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs" Nov 22 13:06:54 crc kubenswrapper[4772]: I1122 13:06:54.354817 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/819a5030-8de8-4772-86bf-9d6fb6f8de4e-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs\" (UID: \"819a5030-8de8-4772-86bf-9d6fb6f8de4e\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs" Nov 22 13:06:54 crc kubenswrapper[4772]: I1122 13:06:54.355068 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/819a5030-8de8-4772-86bf-9d6fb6f8de4e-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs\" (UID: \"819a5030-8de8-4772-86bf-9d6fb6f8de4e\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs" Nov 22 13:06:54 crc kubenswrapper[4772]: I1122 13:06:54.359128 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/819a5030-8de8-4772-86bf-9d6fb6f8de4e-ssh-key\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs\" (UID: \"819a5030-8de8-4772-86bf-9d6fb6f8de4e\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs" Nov 22 13:06:54 crc kubenswrapper[4772]: I1122 13:06:54.360576 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/819a5030-8de8-4772-86bf-9d6fb6f8de4e-inventory\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs\" (UID: \"819a5030-8de8-4772-86bf-9d6fb6f8de4e\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs" Nov 22 13:06:54 crc kubenswrapper[4772]: I1122 13:06:54.379038 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/819a5030-8de8-4772-86bf-9d6fb6f8de4e-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs\" (UID: \"819a5030-8de8-4772-86bf-9d6fb6f8de4e\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs" Nov 22 13:06:54 crc kubenswrapper[4772]: I1122 13:06:54.397985 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 22 13:06:54 crc kubenswrapper[4772]: I1122 13:06:54.402938 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d014cf6c-d393-49b4-9fad-fd21919ea793" containerName="nova-api-log" containerID="cri-o://21270f35d0f4538264131ba651a27a1d785dff4e01d31983047f26756dd100e7" gracePeriod=30 Nov 22 13:06:54 crc kubenswrapper[4772]: I1122 13:06:54.403827 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d014cf6c-d393-49b4-9fad-fd21919ea793" containerName="nova-api-api" containerID="cri-o://603d33a5120b6e93b5b1d488645348a16746b46bb8c4c8210ec7665648a457b4" gracePeriod=30 Nov 22 13:06:54 crc kubenswrapper[4772]: I1122 13:06:54.404832 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-tmmbc\" (UniqueName: \"kubernetes.io/projected/819a5030-8de8-4772-86bf-9d6fb6f8de4e-kube-api-access-tmmbc\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs\" (UID: \"819a5030-8de8-4772-86bf-9d6fb6f8de4e\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs" Nov 22 13:06:54 crc kubenswrapper[4772]: I1122 13:06:54.443520 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 13:06:54 crc kubenswrapper[4772]: I1122 13:06:54.443779 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="9be96cca-27a5-448b-b4bb-b3ef4100f9ac" containerName="nova-scheduler-scheduler" containerID="cri-o://e335096989f4023536085730b2683fc6bc5dd959f5c3558ed49222e29ac0b0f8" gracePeriod=30 Nov 22 13:06:54 crc kubenswrapper[4772]: I1122 13:06:54.458234 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs" Nov 22 13:06:54 crc kubenswrapper[4772]: I1122 13:06:54.467431 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 13:06:54 crc kubenswrapper[4772]: I1122 13:06:54.467658 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="fa54ea86-921f-4d80-86ac-312754960a02" containerName="nova-metadata-log" containerID="cri-o://783e9062d588765b104803aeb2baea67fb263ff935e2a1ab5793fbb7960694ec" gracePeriod=30 Nov 22 13:06:54 crc kubenswrapper[4772]: I1122 13:06:54.468095 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="fa54ea86-921f-4d80-86ac-312754960a02" containerName="nova-metadata-metadata" containerID="cri-o://0ce0af66b3b6604b57d14136f070af56fe99057588fed90908df96ce1b319ab3" gracePeriod=30 Nov 22 13:06:55 crc kubenswrapper[4772]: I1122 13:06:55.142308 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs"] Nov 22 13:06:55 crc kubenswrapper[4772]: I1122 13:06:55.544487 4772 generic.go:334] "Generic (PLEG): container finished" podID="d014cf6c-d393-49b4-9fad-fd21919ea793" containerID="21270f35d0f4538264131ba651a27a1d785dff4e01d31983047f26756dd100e7" exitCode=143 Nov 22 13:06:55 crc kubenswrapper[4772]: I1122 13:06:55.544596 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d014cf6c-d393-49b4-9fad-fd21919ea793","Type":"ContainerDied","Data":"21270f35d0f4538264131ba651a27a1d785dff4e01d31983047f26756dd100e7"} Nov 22 13:06:55 crc kubenswrapper[4772]: I1122 13:06:55.550425 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs" event={"ID":"819a5030-8de8-4772-86bf-9d6fb6f8de4e","Type":"ContainerStarted","Data":"0df48a54e2c3027bcfa75e8d56866cbd4c7b0e97f3aa0231fb3dbe897c130912"} Nov 22 13:06:55 crc kubenswrapper[4772]: I1122 13:06:55.567813 4772 generic.go:334] "Generic (PLEG): container finished" podID="fa54ea86-921f-4d80-86ac-312754960a02" containerID="783e9062d588765b104803aeb2baea67fb263ff935e2a1ab5793fbb7960694ec" exitCode=143 Nov 22 13:06:55 crc kubenswrapper[4772]: I1122 13:06:55.567865 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"fa54ea86-921f-4d80-86ac-312754960a02","Type":"ContainerDied","Data":"783e9062d588765b104803aeb2baea67fb263ff935e2a1ab5793fbb7960694ec"} Nov 22 13:06:55 crc kubenswrapper[4772]: E1122 13:06:55.665524 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="41128fabedcca658a5e1688648bc72c79ff52802feac833d39e78763ef1092a3" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 22 13:06:55 crc kubenswrapper[4772]: E1122 13:06:55.669201 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="41128fabedcca658a5e1688648bc72c79ff52802feac833d39e78763ef1092a3" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 22 13:06:55 crc kubenswrapper[4772]: E1122 13:06:55.670771 4772 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="41128fabedcca658a5e1688648bc72c79ff52802feac833d39e78763ef1092a3" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 22 13:06:55 crc kubenswrapper[4772]: E1122 13:06:55.670843 4772 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="1b309be3-8476-41bd-a801-d93e986b8f8e" containerName="nova-cell0-conductor-conductor" Nov 22 13:06:56 crc kubenswrapper[4772]: I1122 13:06:56.211291 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 22 13:06:56 crc kubenswrapper[4772]: I1122 13:06:56.303810 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b309be3-8476-41bd-a801-d93e986b8f8e-config-data\") pod \"1b309be3-8476-41bd-a801-d93e986b8f8e\" (UID: \"1b309be3-8476-41bd-a801-d93e986b8f8e\") " Nov 22 13:06:56 crc kubenswrapper[4772]: I1122 13:06:56.303996 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4x6hl\" (UniqueName: \"kubernetes.io/projected/1b309be3-8476-41bd-a801-d93e986b8f8e-kube-api-access-4x6hl\") pod \"1b309be3-8476-41bd-a801-d93e986b8f8e\" (UID: \"1b309be3-8476-41bd-a801-d93e986b8f8e\") " Nov 22 13:06:56 crc kubenswrapper[4772]: I1122 13:06:56.304145 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b309be3-8476-41bd-a801-d93e986b8f8e-combined-ca-bundle\") pod \"1b309be3-8476-41bd-a801-d93e986b8f8e\" (UID: \"1b309be3-8476-41bd-a801-d93e986b8f8e\") " Nov 22 13:06:56 crc kubenswrapper[4772]: I1122 13:06:56.312069 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b309be3-8476-41bd-a801-d93e986b8f8e-kube-api-access-4x6hl" (OuterVolumeSpecName: "kube-api-access-4x6hl") pod "1b309be3-8476-41bd-a801-d93e986b8f8e" (UID: "1b309be3-8476-41bd-a801-d93e986b8f8e"). InnerVolumeSpecName "kube-api-access-4x6hl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 13:06:56 crc kubenswrapper[4772]: I1122 13:06:56.339821 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b309be3-8476-41bd-a801-d93e986b8f8e-config-data" (OuterVolumeSpecName: "config-data") pod "1b309be3-8476-41bd-a801-d93e986b8f8e" (UID: "1b309be3-8476-41bd-a801-d93e986b8f8e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 13:06:56 crc kubenswrapper[4772]: I1122 13:06:56.347454 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b309be3-8476-41bd-a801-d93e986b8f8e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1b309be3-8476-41bd-a801-d93e986b8f8e" (UID: "1b309be3-8476-41bd-a801-d93e986b8f8e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 13:06:56 crc kubenswrapper[4772]: I1122 13:06:56.406447 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4x6hl\" (UniqueName: \"kubernetes.io/projected/1b309be3-8476-41bd-a801-d93e986b8f8e-kube-api-access-4x6hl\") on node \"crc\" DevicePath \"\"" Nov 22 13:06:56 crc kubenswrapper[4772]: I1122 13:06:56.406486 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b309be3-8476-41bd-a801-d93e986b8f8e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 13:06:56 crc kubenswrapper[4772]: I1122 13:06:56.406496 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b309be3-8476-41bd-a801-d93e986b8f8e-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 13:06:56 crc kubenswrapper[4772]: I1122 13:06:56.583882 4772 generic.go:334] "Generic (PLEG): container finished" podID="1b309be3-8476-41bd-a801-d93e986b8f8e" containerID="41128fabedcca658a5e1688648bc72c79ff52802feac833d39e78763ef1092a3" exitCode=0 Nov 22 13:06:56 crc kubenswrapper[4772]: I1122 13:06:56.583961 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"1b309be3-8476-41bd-a801-d93e986b8f8e","Type":"ContainerDied","Data":"41128fabedcca658a5e1688648bc72c79ff52802feac833d39e78763ef1092a3"} Nov 22 13:06:56 crc kubenswrapper[4772]: I1122 13:06:56.584006 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"1b309be3-8476-41bd-a801-d93e986b8f8e","Type":"ContainerDied","Data":"9a04e8ee60144497bdb4f21fb1566597c9ec19b14c97ac855f30b5917af320f0"} Nov 22 13:06:56 crc kubenswrapper[4772]: I1122 13:06:56.584028 4772 scope.go:117] "RemoveContainer" containerID="41128fabedcca658a5e1688648bc72c79ff52802feac833d39e78763ef1092a3" Nov 22 13:06:56 crc kubenswrapper[4772]: I1122 13:06:56.584807 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 22 13:06:56 crc kubenswrapper[4772]: I1122 13:06:56.588599 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs" event={"ID":"819a5030-8de8-4772-86bf-9d6fb6f8de4e","Type":"ContainerStarted","Data":"5218a190bdb1ffbd7385cf04dd6cf52b7c51599a50a56fe4d91182993cab5583"} Nov 22 13:06:56 crc kubenswrapper[4772]: I1122 13:06:56.656793 4772 scope.go:117] "RemoveContainer" containerID="41128fabedcca658a5e1688648bc72c79ff52802feac833d39e78763ef1092a3" Nov 22 13:06:56 crc kubenswrapper[4772]: E1122 13:06:56.658319 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41128fabedcca658a5e1688648bc72c79ff52802feac833d39e78763ef1092a3\": container with ID starting with 41128fabedcca658a5e1688648bc72c79ff52802feac833d39e78763ef1092a3 not found: ID does not exist" containerID="41128fabedcca658a5e1688648bc72c79ff52802feac833d39e78763ef1092a3" Nov 22 13:06:56 crc kubenswrapper[4772]: I1122 13:06:56.658355 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41128fabedcca658a5e1688648bc72c79ff52802feac833d39e78763ef1092a3"} err="failed to get container status \"41128fabedcca658a5e1688648bc72c79ff52802feac833d39e78763ef1092a3\": rpc error: code = NotFound desc = could not find container \"41128fabedcca658a5e1688648bc72c79ff52802feac833d39e78763ef1092a3\": container with ID starting with 41128fabedcca658a5e1688648bc72c79ff52802feac833d39e78763ef1092a3 not found: ID does not exist" Nov 22 13:06:56 crc kubenswrapper[4772]: I1122 13:06:56.667956 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs" podStartSLOduration=2.250334518 podStartE2EDuration="2.667921537s" podCreationTimestamp="2025-11-22 13:06:54 +0000 UTC" firstStartedPulling="2025-11-22 13:06:55.167437461 +0000 UTC m=+8935.406881955" lastFinishedPulling="2025-11-22 13:06:55.58502447 +0000 UTC m=+8935.824468974" observedRunningTime="2025-11-22 13:06:56.630208992 +0000 UTC m=+8936.869653486" watchObservedRunningTime="2025-11-22 13:06:56.667921537 +0000 UTC m=+8936.907366021" Nov 22 13:06:56 crc kubenswrapper[4772]: I1122 13:06:56.692863 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 22 13:06:56 crc kubenswrapper[4772]: I1122 13:06:56.718070 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 22 13:06:56 crc kubenswrapper[4772]: I1122 13:06:56.730002 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 22 13:06:56 crc kubenswrapper[4772]: E1122 13:06:56.731123 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b309be3-8476-41bd-a801-d93e986b8f8e" containerName="nova-cell0-conductor-conductor" Nov 22 13:06:56 crc kubenswrapper[4772]: I1122 13:06:56.731148 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b309be3-8476-41bd-a801-d93e986b8f8e" containerName="nova-cell0-conductor-conductor" Nov 22 13:06:56 crc kubenswrapper[4772]: I1122 13:06:56.731417 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b309be3-8476-41bd-a801-d93e986b8f8e" containerName="nova-cell0-conductor-conductor" Nov 22 13:06:56 crc kubenswrapper[4772]: I1122 13:06:56.732468 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 22 13:06:56 crc kubenswrapper[4772]: I1122 13:06:56.736866 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 22 13:06:56 crc kubenswrapper[4772]: I1122 13:06:56.742488 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 22 13:06:56 crc kubenswrapper[4772]: I1122 13:06:56.818253 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47aa8385-dcce-4adf-b113-79460b95e145-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"47aa8385-dcce-4adf-b113-79460b95e145\") " pod="openstack/nova-cell0-conductor-0" Nov 22 13:06:56 crc kubenswrapper[4772]: I1122 13:06:56.818316 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfxnq\" (UniqueName: \"kubernetes.io/projected/47aa8385-dcce-4adf-b113-79460b95e145-kube-api-access-rfxnq\") pod \"nova-cell0-conductor-0\" (UID: \"47aa8385-dcce-4adf-b113-79460b95e145\") " pod="openstack/nova-cell0-conductor-0" Nov 22 13:06:56 crc kubenswrapper[4772]: I1122 13:06:56.818342 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47aa8385-dcce-4adf-b113-79460b95e145-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"47aa8385-dcce-4adf-b113-79460b95e145\") " pod="openstack/nova-cell0-conductor-0" Nov 22 13:06:56 crc kubenswrapper[4772]: I1122 13:06:56.928800 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47aa8385-dcce-4adf-b113-79460b95e145-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"47aa8385-dcce-4adf-b113-79460b95e145\") " pod="openstack/nova-cell0-conductor-0" Nov 22 13:06:56 crc kubenswrapper[4772]: I1122 13:06:56.928997 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfxnq\" (UniqueName: \"kubernetes.io/projected/47aa8385-dcce-4adf-b113-79460b95e145-kube-api-access-rfxnq\") pod \"nova-cell0-conductor-0\" (UID: \"47aa8385-dcce-4adf-b113-79460b95e145\") " pod="openstack/nova-cell0-conductor-0" Nov 22 13:06:56 crc kubenswrapper[4772]: I1122 13:06:56.929109 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47aa8385-dcce-4adf-b113-79460b95e145-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"47aa8385-dcce-4adf-b113-79460b95e145\") " pod="openstack/nova-cell0-conductor-0" Nov 22 13:06:56 crc kubenswrapper[4772]: I1122 13:06:56.935100 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47aa8385-dcce-4adf-b113-79460b95e145-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"47aa8385-dcce-4adf-b113-79460b95e145\") " pod="openstack/nova-cell0-conductor-0" Nov 22 13:06:56 crc kubenswrapper[4772]: I1122 13:06:56.938999 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47aa8385-dcce-4adf-b113-79460b95e145-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"47aa8385-dcce-4adf-b113-79460b95e145\") " pod="openstack/nova-cell0-conductor-0" Nov 22 13:06:56 crc kubenswrapper[4772]: I1122 13:06:56.962522 4772 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfxnq\" (UniqueName: \"kubernetes.io/projected/47aa8385-dcce-4adf-b113-79460b95e145-kube-api-access-rfxnq\") pod \"nova-cell0-conductor-0\" (UID: \"47aa8385-dcce-4adf-b113-79460b95e145\") " pod="openstack/nova-cell0-conductor-0" Nov 22 13:06:57 crc kubenswrapper[4772]: I1122 13:06:57.060335 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 22 13:06:57 crc kubenswrapper[4772]: I1122 13:06:57.225801 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 22 13:06:57 crc kubenswrapper[4772]: I1122 13:06:57.339549 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6e24976-7404-46e3-8cbf-e71d378c8bff-combined-ca-bundle\") pod \"b6e24976-7404-46e3-8cbf-e71d378c8bff\" (UID: \"b6e24976-7404-46e3-8cbf-e71d378c8bff\") " Nov 22 13:06:57 crc kubenswrapper[4772]: I1122 13:06:57.339615 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-844q9\" (UniqueName: \"kubernetes.io/projected/b6e24976-7404-46e3-8cbf-e71d378c8bff-kube-api-access-844q9\") pod \"b6e24976-7404-46e3-8cbf-e71d378c8bff\" (UID: \"b6e24976-7404-46e3-8cbf-e71d378c8bff\") " Nov 22 13:06:57 crc kubenswrapper[4772]: I1122 13:06:57.339709 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6e24976-7404-46e3-8cbf-e71d378c8bff-config-data\") pod \"b6e24976-7404-46e3-8cbf-e71d378c8bff\" (UID: \"b6e24976-7404-46e3-8cbf-e71d378c8bff\") " Nov 22 13:06:57 crc kubenswrapper[4772]: I1122 13:06:57.385985 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6e24976-7404-46e3-8cbf-e71d378c8bff-kube-api-access-844q9" (OuterVolumeSpecName: "kube-api-access-844q9") pod "b6e24976-7404-46e3-8cbf-e71d378c8bff" (UID: "b6e24976-7404-46e3-8cbf-e71d378c8bff"). InnerVolumeSpecName "kube-api-access-844q9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 13:06:57 crc kubenswrapper[4772]: I1122 13:06:57.400754 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6e24976-7404-46e3-8cbf-e71d378c8bff-config-data" (OuterVolumeSpecName: "config-data") pod "b6e24976-7404-46e3-8cbf-e71d378c8bff" (UID: "b6e24976-7404-46e3-8cbf-e71d378c8bff"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 13:06:57 crc kubenswrapper[4772]: I1122 13:06:57.434666 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6e24976-7404-46e3-8cbf-e71d378c8bff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b6e24976-7404-46e3-8cbf-e71d378c8bff" (UID: "b6e24976-7404-46e3-8cbf-e71d378c8bff"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 13:06:57 crc kubenswrapper[4772]: I1122 13:06:57.436077 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b309be3-8476-41bd-a801-d93e986b8f8e" path="/var/lib/kubelet/pods/1b309be3-8476-41bd-a801-d93e986b8f8e/volumes" Nov 22 13:06:57 crc kubenswrapper[4772]: I1122 13:06:57.443726 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6e24976-7404-46e3-8cbf-e71d378c8bff-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 13:06:57 crc kubenswrapper[4772]: I1122 13:06:57.443770 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-844q9\" (UniqueName: \"kubernetes.io/projected/b6e24976-7404-46e3-8cbf-e71d378c8bff-kube-api-access-844q9\") on node \"crc\" DevicePath \"\"" Nov 22 13:06:57 crc kubenswrapper[4772]: I1122 13:06:57.443792 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6e24976-7404-46e3-8cbf-e71d378c8bff-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 13:06:57 crc kubenswrapper[4772]: I1122 13:06:57.577441 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 22 13:06:57 crc kubenswrapper[4772]: W1122 13:06:57.582240 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod47aa8385_dcce_4adf_b113_79460b95e145.slice/crio-4d38c10a0b53623b819d19d6f62b8fd422cbbe7b0723c5babe50f96617089a89 WatchSource:0}: Error finding container 4d38c10a0b53623b819d19d6f62b8fd422cbbe7b0723c5babe50f96617089a89: Status 404 returned error can't find the container with id 4d38c10a0b53623b819d19d6f62b8fd422cbbe7b0723c5babe50f96617089a89 Nov 22 13:06:57 crc kubenswrapper[4772]: I1122 13:06:57.606189 4772 generic.go:334] "Generic (PLEG): container finished" podID="b6e24976-7404-46e3-8cbf-e71d378c8bff" containerID="65893173ceacbe538d1c862823e11c7300e7b68e68c20733fabb4002e997abbe" exitCode=0 Nov 22 13:06:57 crc kubenswrapper[4772]: I1122 13:06:57.606283 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"b6e24976-7404-46e3-8cbf-e71d378c8bff","Type":"ContainerDied","Data":"65893173ceacbe538d1c862823e11c7300e7b68e68c20733fabb4002e997abbe"} Nov 22 13:06:57 crc kubenswrapper[4772]: I1122 13:06:57.606312 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"b6e24976-7404-46e3-8cbf-e71d378c8bff","Type":"ContainerDied","Data":"1fe99e24482c9e657c1ed2ceb21e2d6fc5ecc17c478618dd9e260d9c9fe5bc61"} Nov 22 13:06:57 crc kubenswrapper[4772]: I1122 13:06:57.606378 4772 scope.go:117] "RemoveContainer" containerID="65893173ceacbe538d1c862823e11c7300e7b68e68c20733fabb4002e997abbe" Nov 22 13:06:57 crc kubenswrapper[4772]: I1122 13:06:57.606545 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 22 13:06:57 crc kubenswrapper[4772]: I1122 13:06:57.614501 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"47aa8385-dcce-4adf-b113-79460b95e145","Type":"ContainerStarted","Data":"4d38c10a0b53623b819d19d6f62b8fd422cbbe7b0723c5babe50f96617089a89"} Nov 22 13:06:57 crc kubenswrapper[4772]: I1122 13:06:57.726557 4772 scope.go:117] "RemoveContainer" containerID="65893173ceacbe538d1c862823e11c7300e7b68e68c20733fabb4002e997abbe" Nov 22 13:06:57 crc kubenswrapper[4772]: E1122 13:06:57.736237 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65893173ceacbe538d1c862823e11c7300e7b68e68c20733fabb4002e997abbe\": container with ID starting with 65893173ceacbe538d1c862823e11c7300e7b68e68c20733fabb4002e997abbe not found: ID does not exist" containerID="65893173ceacbe538d1c862823e11c7300e7b68e68c20733fabb4002e997abbe" Nov 22 13:06:57 crc kubenswrapper[4772]: I1122 13:06:57.736291 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65893173ceacbe538d1c862823e11c7300e7b68e68c20733fabb4002e997abbe"} err="failed to get container status \"65893173ceacbe538d1c862823e11c7300e7b68e68c20733fabb4002e997abbe\": rpc error: code = NotFound desc = could not find container \"65893173ceacbe538d1c862823e11c7300e7b68e68c20733fabb4002e997abbe\": container with ID starting with 65893173ceacbe538d1c862823e11c7300e7b68e68c20733fabb4002e997abbe not found: ID does not exist" Nov 22 13:06:57 crc kubenswrapper[4772]: I1122 13:06:57.764342 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 22 13:06:57 crc kubenswrapper[4772]: I1122 13:06:57.779788 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 22 13:06:57 crc kubenswrapper[4772]: I1122 13:06:57.790381 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 22 13:06:57 crc kubenswrapper[4772]: E1122 13:06:57.791088 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6e24976-7404-46e3-8cbf-e71d378c8bff" containerName="nova-cell1-conductor-conductor" Nov 22 13:06:57 crc kubenswrapper[4772]: I1122 13:06:57.791110 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6e24976-7404-46e3-8cbf-e71d378c8bff" containerName="nova-cell1-conductor-conductor" Nov 22 13:06:57 crc kubenswrapper[4772]: I1122 13:06:57.791370 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6e24976-7404-46e3-8cbf-e71d378c8bff" containerName="nova-cell1-conductor-conductor" Nov 22 13:06:57 crc kubenswrapper[4772]: I1122 13:06:57.792208 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 22 13:06:57 crc kubenswrapper[4772]: I1122 13:06:57.794607 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 22 13:06:57 crc kubenswrapper[4772]: I1122 13:06:57.822465 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 22 13:06:57 crc kubenswrapper[4772]: I1122 13:06:57.840536 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-0" podUID="d014cf6c-d393-49b4-9fad-fd21919ea793" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.84:8774/\": read tcp 10.217.0.2:53266->10.217.1.84:8774: read: connection reset by peer" Nov 22 13:06:57 crc kubenswrapper[4772]: I1122 13:06:57.840538 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-0" podUID="d014cf6c-d393-49b4-9fad-fd21919ea793" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.84:8774/\": read tcp 10.217.0.2:53262->10.217.1.84:8774: read: connection reset by peer" Nov 22 13:06:57 crc kubenswrapper[4772]: I1122 13:06:57.857762 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpldp\" (UniqueName: \"kubernetes.io/projected/86c2a53a-f52b-49b3-9bc3-105cf5918b7f-kube-api-access-vpldp\") pod \"nova-cell1-conductor-0\" (UID: \"86c2a53a-f52b-49b3-9bc3-105cf5918b7f\") " pod="openstack/nova-cell1-conductor-0" Nov 22 13:06:57 crc kubenswrapper[4772]: I1122 13:06:57.857937 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86c2a53a-f52b-49b3-9bc3-105cf5918b7f-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"86c2a53a-f52b-49b3-9bc3-105cf5918b7f\") " pod="openstack/nova-cell1-conductor-0" Nov 22 13:06:57 crc kubenswrapper[4772]: I1122 13:06:57.858205 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86c2a53a-f52b-49b3-9bc3-105cf5918b7f-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"86c2a53a-f52b-49b3-9bc3-105cf5918b7f\") " pod="openstack/nova-cell1-conductor-0" Nov 22 13:06:57 crc kubenswrapper[4772]: I1122 13:06:57.900225 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="fa54ea86-921f-4d80-86ac-312754960a02" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.83:8775/\": read tcp 10.217.0.2:43964->10.217.1.83:8775: read: connection reset by peer" Nov 22 13:06:57 crc kubenswrapper[4772]: I1122 13:06:57.900703 4772 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="fa54ea86-921f-4d80-86ac-312754960a02" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.83:8775/\": read tcp 10.217.0.2:43960->10.217.1.83:8775: read: connection reset by peer" Nov 22 13:06:57 crc kubenswrapper[4772]: I1122 13:06:57.961140 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpldp\" (UniqueName: \"kubernetes.io/projected/86c2a53a-f52b-49b3-9bc3-105cf5918b7f-kube-api-access-vpldp\") pod \"nova-cell1-conductor-0\" (UID: \"86c2a53a-f52b-49b3-9bc3-105cf5918b7f\") " pod="openstack/nova-cell1-conductor-0" Nov 22 13:06:57 crc kubenswrapper[4772]: I1122 13:06:57.962363 4772 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86c2a53a-f52b-49b3-9bc3-105cf5918b7f-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"86c2a53a-f52b-49b3-9bc3-105cf5918b7f\") " pod="openstack/nova-cell1-conductor-0" Nov 22 13:06:57 crc kubenswrapper[4772]: I1122 13:06:57.962787 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86c2a53a-f52b-49b3-9bc3-105cf5918b7f-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"86c2a53a-f52b-49b3-9bc3-105cf5918b7f\") " pod="openstack/nova-cell1-conductor-0" Nov 22 13:06:57 crc kubenswrapper[4772]: I1122 13:06:57.970137 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86c2a53a-f52b-49b3-9bc3-105cf5918b7f-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"86c2a53a-f52b-49b3-9bc3-105cf5918b7f\") " pod="openstack/nova-cell1-conductor-0" Nov 22 13:06:57 crc kubenswrapper[4772]: I1122 13:06:57.995511 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86c2a53a-f52b-49b3-9bc3-105cf5918b7f-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"86c2a53a-f52b-49b3-9bc3-105cf5918b7f\") " pod="openstack/nova-cell1-conductor-0" Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.000951 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpldp\" (UniqueName: \"kubernetes.io/projected/86c2a53a-f52b-49b3-9bc3-105cf5918b7f-kube-api-access-vpldp\") pod \"nova-cell1-conductor-0\" (UID: \"86c2a53a-f52b-49b3-9bc3-105cf5918b7f\") " pod="openstack/nova-cell1-conductor-0" Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.121792 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.296866 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.371932 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d014cf6c-d393-49b4-9fad-fd21919ea793-logs\") pod \"d014cf6c-d393-49b4-9fad-fd21919ea793\" (UID: \"d014cf6c-d393-49b4-9fad-fd21919ea793\") " Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.372230 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d014cf6c-d393-49b4-9fad-fd21919ea793-combined-ca-bundle\") pod \"d014cf6c-d393-49b4-9fad-fd21919ea793\" (UID: \"d014cf6c-d393-49b4-9fad-fd21919ea793\") " Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.372303 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d014cf6c-d393-49b4-9fad-fd21919ea793-config-data\") pod \"d014cf6c-d393-49b4-9fad-fd21919ea793\" (UID: \"d014cf6c-d393-49b4-9fad-fd21919ea793\") " Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.372329 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7d5fq\" (UniqueName: \"kubernetes.io/projected/d014cf6c-d393-49b4-9fad-fd21919ea793-kube-api-access-7d5fq\") pod \"d014cf6c-d393-49b4-9fad-fd21919ea793\" (UID: \"d014cf6c-d393-49b4-9fad-fd21919ea793\") " Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.372683 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d014cf6c-d393-49b4-9fad-fd21919ea793-logs" (OuterVolumeSpecName: "logs") pod "d014cf6c-d393-49b4-9fad-fd21919ea793" (UID: "d014cf6c-d393-49b4-9fad-fd21919ea793"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.372913 4772 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d014cf6c-d393-49b4-9fad-fd21919ea793-logs\") on node \"crc\" DevicePath \"\"" Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.388365 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d014cf6c-d393-49b4-9fad-fd21919ea793-kube-api-access-7d5fq" (OuterVolumeSpecName: "kube-api-access-7d5fq") pod "d014cf6c-d393-49b4-9fad-fd21919ea793" (UID: "d014cf6c-d393-49b4-9fad-fd21919ea793"). InnerVolumeSpecName "kube-api-access-7d5fq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.413353 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d014cf6c-d393-49b4-9fad-fd21919ea793-config-data" (OuterVolumeSpecName: "config-data") pod "d014cf6c-d393-49b4-9fad-fd21919ea793" (UID: "d014cf6c-d393-49b4-9fad-fd21919ea793"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.434356 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d014cf6c-d393-49b4-9fad-fd21919ea793-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d014cf6c-d393-49b4-9fad-fd21919ea793" (UID: "d014cf6c-d393-49b4-9fad-fd21919ea793"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.475268 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d014cf6c-d393-49b4-9fad-fd21919ea793-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.475302 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d014cf6c-d393-49b4-9fad-fd21919ea793-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.475312 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7d5fq\" (UniqueName: \"kubernetes.io/projected/d014cf6c-d393-49b4-9fad-fd21919ea793-kube-api-access-7d5fq\") on node \"crc\" DevicePath \"\"" Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.595867 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.636218 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"47aa8385-dcce-4adf-b113-79460b95e145","Type":"ContainerStarted","Data":"43ea317c91117ad027969d983da058198caee987c3419f3710a63d6d575c7f00"} Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.636328 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.638338 4772 generic.go:334] "Generic (PLEG): container finished" podID="d014cf6c-d393-49b4-9fad-fd21919ea793" containerID="603d33a5120b6e93b5b1d488645348a16746b46bb8c4c8210ec7665648a457b4" exitCode=0 Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.638456 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d014cf6c-d393-49b4-9fad-fd21919ea793","Type":"ContainerDied","Data":"603d33a5120b6e93b5b1d488645348a16746b46bb8c4c8210ec7665648a457b4"} Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.638485 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.638560 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d014cf6c-d393-49b4-9fad-fd21919ea793","Type":"ContainerDied","Data":"d13bda513fc110c4ee7632cb601e0b3416af516e36ef7f2c1dddd598f3e42723"} Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.638630 4772 scope.go:117] "RemoveContainer" containerID="603d33a5120b6e93b5b1d488645348a16746b46bb8c4c8210ec7665648a457b4" Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.642946 4772 generic.go:334] "Generic (PLEG): container finished" podID="fa54ea86-921f-4d80-86ac-312754960a02" containerID="0ce0af66b3b6604b57d14136f070af56fe99057588fed90908df96ce1b319ab3" exitCode=0 Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.643013 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.643021 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fa54ea86-921f-4d80-86ac-312754960a02","Type":"ContainerDied","Data":"0ce0af66b3b6604b57d14136f070af56fe99057588fed90908df96ce1b319ab3"} Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.643071 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fa54ea86-921f-4d80-86ac-312754960a02","Type":"ContainerDied","Data":"66101db6b7aae609b49789256c1de2481a9a4ff7dc2a51c2dc492b289aa1c732"} Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.645697 4772 generic.go:334] "Generic (PLEG): container finished" podID="9be96cca-27a5-448b-b4bb-b3ef4100f9ac" containerID="e335096989f4023536085730b2683fc6bc5dd959f5c3558ed49222e29ac0b0f8" exitCode=0 Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.645785 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9be96cca-27a5-448b-b4bb-b3ef4100f9ac","Type":"ContainerDied","Data":"e335096989f4023536085730b2683fc6bc5dd959f5c3558ed49222e29ac0b0f8"} Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.645818 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9be96cca-27a5-448b-b4bb-b3ef4100f9ac","Type":"ContainerDied","Data":"74015930318d18fb1574d1e3bd8658dd083b159928c6f3e6123f6041e64ba7d2"} Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.645833 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74015930318d18fb1574d1e3bd8658dd083b159928c6f3e6123f6041e64ba7d2" Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.680321 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plcz2\" (UniqueName: \"kubernetes.io/projected/fa54ea86-921f-4d80-86ac-312754960a02-kube-api-access-plcz2\") pod \"fa54ea86-921f-4d80-86ac-312754960a02\" (UID: \"fa54ea86-921f-4d80-86ac-312754960a02\") " Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.680556 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa54ea86-921f-4d80-86ac-312754960a02-logs\") pod \"fa54ea86-921f-4d80-86ac-312754960a02\" (UID: \"fa54ea86-921f-4d80-86ac-312754960a02\") " Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.680731 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa54ea86-921f-4d80-86ac-312754960a02-config-data\") pod \"fa54ea86-921f-4d80-86ac-312754960a02\" (UID: \"fa54ea86-921f-4d80-86ac-312754960a02\") " Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.680783 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa54ea86-921f-4d80-86ac-312754960a02-combined-ca-bundle\") pod \"fa54ea86-921f-4d80-86ac-312754960a02\" (UID: \"fa54ea86-921f-4d80-86ac-312754960a02\") " Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.697760 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa54ea86-921f-4d80-86ac-312754960a02-logs" (OuterVolumeSpecName: "logs") pod "fa54ea86-921f-4d80-86ac-312754960a02" (UID: "fa54ea86-921f-4d80-86ac-312754960a02"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.704494 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa54ea86-921f-4d80-86ac-312754960a02-kube-api-access-plcz2" (OuterVolumeSpecName: "kube-api-access-plcz2") pod "fa54ea86-921f-4d80-86ac-312754960a02" (UID: "fa54ea86-921f-4d80-86ac-312754960a02"). InnerVolumeSpecName "kube-api-access-plcz2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.725097 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.737289 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.737250748 podStartE2EDuration="2.737250748s" podCreationTimestamp="2025-11-22 13:06:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 13:06:58.687754292 +0000 UTC m=+8938.927198816" watchObservedRunningTime="2025-11-22 13:06:58.737250748 +0000 UTC m=+8938.976695242" Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.747276 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa54ea86-921f-4d80-86ac-312754960a02-config-data" (OuterVolumeSpecName: "config-data") pod "fa54ea86-921f-4d80-86ac-312754960a02" (UID: "fa54ea86-921f-4d80-86ac-312754960a02"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.751306 4772 scope.go:117] "RemoveContainer" containerID="21270f35d0f4538264131ba651a27a1d785dff4e01d31983047f26756dd100e7" Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.771088 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plcz2\" (UniqueName: \"kubernetes.io/projected/fa54ea86-921f-4d80-86ac-312754960a02-kube-api-access-plcz2\") on node \"crc\" DevicePath \"\"" Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.771140 4772 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa54ea86-921f-4d80-86ac-312754960a02-logs\") on node \"crc\" DevicePath \"\"" Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.771151 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa54ea86-921f-4d80-86ac-312754960a02-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.802551 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.802652 4772 scope.go:117] "RemoveContainer" containerID="603d33a5120b6e93b5b1d488645348a16746b46bb8c4c8210ec7665648a457b4" Nov 22 13:06:58 crc kubenswrapper[4772]: E1122 13:06:58.803736 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"603d33a5120b6e93b5b1d488645348a16746b46bb8c4c8210ec7665648a457b4\": container with ID starting with 603d33a5120b6e93b5b1d488645348a16746b46bb8c4c8210ec7665648a457b4 not found: ID does not exist" containerID="603d33a5120b6e93b5b1d488645348a16746b46bb8c4c8210ec7665648a457b4" Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.803805 4772 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"603d33a5120b6e93b5b1d488645348a16746b46bb8c4c8210ec7665648a457b4"} err="failed to get container status \"603d33a5120b6e93b5b1d488645348a16746b46bb8c4c8210ec7665648a457b4\": rpc error: code = NotFound desc = could not find container \"603d33a5120b6e93b5b1d488645348a16746b46bb8c4c8210ec7665648a457b4\": container with ID starting with 603d33a5120b6e93b5b1d488645348a16746b46bb8c4c8210ec7665648a457b4 not found: ID does not exist" Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.803834 4772 scope.go:117] "RemoveContainer" containerID="21270f35d0f4538264131ba651a27a1d785dff4e01d31983047f26756dd100e7" Nov 22 13:06:58 crc kubenswrapper[4772]: E1122 13:06:58.811304 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21270f35d0f4538264131ba651a27a1d785dff4e01d31983047f26756dd100e7\": container with ID starting with 21270f35d0f4538264131ba651a27a1d785dff4e01d31983047f26756dd100e7 not found: ID does not exist" containerID="21270f35d0f4538264131ba651a27a1d785dff4e01d31983047f26756dd100e7" Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.811366 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21270f35d0f4538264131ba651a27a1d785dff4e01d31983047f26756dd100e7"} err="failed to get container status \"21270f35d0f4538264131ba651a27a1d785dff4e01d31983047f26756dd100e7\": rpc error: code = NotFound desc = could not find container \"21270f35d0f4538264131ba651a27a1d785dff4e01d31983047f26756dd100e7\": container with ID starting with 21270f35d0f4538264131ba651a27a1d785dff4e01d31983047f26756dd100e7 not found: ID does not exist" Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.811398 4772 scope.go:117] "RemoveContainer" containerID="0ce0af66b3b6604b57d14136f070af56fe99057588fed90908df96ce1b319ab3" Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.821861 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.846903 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 22 13:06:58 crc kubenswrapper[4772]: E1122 13:06:58.847502 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9be96cca-27a5-448b-b4bb-b3ef4100f9ac" containerName="nova-scheduler-scheduler" Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.847521 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="9be96cca-27a5-448b-b4bb-b3ef4100f9ac" containerName="nova-scheduler-scheduler" Nov 22 13:06:58 crc kubenswrapper[4772]: E1122 13:06:58.847534 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d014cf6c-d393-49b4-9fad-fd21919ea793" containerName="nova-api-log" Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.847540 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="d014cf6c-d393-49b4-9fad-fd21919ea793" containerName="nova-api-log" Nov 22 13:06:58 crc kubenswrapper[4772]: E1122 13:06:58.847557 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa54ea86-921f-4d80-86ac-312754960a02" containerName="nova-metadata-metadata" Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.847563 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa54ea86-921f-4d80-86ac-312754960a02" containerName="nova-metadata-metadata" Nov 22 13:06:58 crc kubenswrapper[4772]: E1122 13:06:58.847584 4772 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="fa54ea86-921f-4d80-86ac-312754960a02" containerName="nova-metadata-log" Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.847591 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa54ea86-921f-4d80-86ac-312754960a02" containerName="nova-metadata-log" Nov 22 13:06:58 crc kubenswrapper[4772]: E1122 13:06:58.847606 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d014cf6c-d393-49b4-9fad-fd21919ea793" containerName="nova-api-api" Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.847612 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="d014cf6c-d393-49b4-9fad-fd21919ea793" containerName="nova-api-api" Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.847835 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="d014cf6c-d393-49b4-9fad-fd21919ea793" containerName="nova-api-log" Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.847862 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="9be96cca-27a5-448b-b4bb-b3ef4100f9ac" containerName="nova-scheduler-scheduler" Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.847876 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa54ea86-921f-4d80-86ac-312754960a02" containerName="nova-metadata-log" Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.847887 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa54ea86-921f-4d80-86ac-312754960a02" containerName="nova-metadata-metadata" Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.847900 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="d014cf6c-d393-49b4-9fad-fd21919ea793" containerName="nova-api-api" Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.849762 4772 scope.go:117] "RemoveContainer" containerID="783e9062d588765b104803aeb2baea67fb263ff935e2a1ab5793fbb7960694ec" Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.850015 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.855771 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.858072 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.860624 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa54ea86-921f-4d80-86ac-312754960a02-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fa54ea86-921f-4d80-86ac-312754960a02" (UID: "fa54ea86-921f-4d80-86ac-312754960a02"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.875651 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9be96cca-27a5-448b-b4bb-b3ef4100f9ac-combined-ca-bundle\") pod \"9be96cca-27a5-448b-b4bb-b3ef4100f9ac\" (UID: \"9be96cca-27a5-448b-b4bb-b3ef4100f9ac\") " Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.875840 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fm8vk\" (UniqueName: \"kubernetes.io/projected/9be96cca-27a5-448b-b4bb-b3ef4100f9ac-kube-api-access-fm8vk\") pod \"9be96cca-27a5-448b-b4bb-b3ef4100f9ac\" (UID: \"9be96cca-27a5-448b-b4bb-b3ef4100f9ac\") " Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.876000 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9be96cca-27a5-448b-b4bb-b3ef4100f9ac-config-data\") pod \"9be96cca-27a5-448b-b4bb-b3ef4100f9ac\" (UID: \"9be96cca-27a5-448b-b4bb-b3ef4100f9ac\") " Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.876896 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa54ea86-921f-4d80-86ac-312754960a02-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.895904 4772 scope.go:117] "RemoveContainer" containerID="0ce0af66b3b6604b57d14136f070af56fe99057588fed90908df96ce1b319ab3" Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.895916 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9be96cca-27a5-448b-b4bb-b3ef4100f9ac-kube-api-access-fm8vk" (OuterVolumeSpecName: "kube-api-access-fm8vk") pod "9be96cca-27a5-448b-b4bb-b3ef4100f9ac" (UID: "9be96cca-27a5-448b-b4bb-b3ef4100f9ac"). InnerVolumeSpecName "kube-api-access-fm8vk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 13:06:58 crc kubenswrapper[4772]: E1122 13:06:58.896396 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ce0af66b3b6604b57d14136f070af56fe99057588fed90908df96ce1b319ab3\": container with ID starting with 0ce0af66b3b6604b57d14136f070af56fe99057588fed90908df96ce1b319ab3 not found: ID does not exist" containerID="0ce0af66b3b6604b57d14136f070af56fe99057588fed90908df96ce1b319ab3" Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.896433 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ce0af66b3b6604b57d14136f070af56fe99057588fed90908df96ce1b319ab3"} err="failed to get container status \"0ce0af66b3b6604b57d14136f070af56fe99057588fed90908df96ce1b319ab3\": rpc error: code = NotFound desc = could not find container \"0ce0af66b3b6604b57d14136f070af56fe99057588fed90908df96ce1b319ab3\": container with ID starting with 0ce0af66b3b6604b57d14136f070af56fe99057588fed90908df96ce1b319ab3 not found: ID does not exist" Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.896462 4772 scope.go:117] "RemoveContainer" containerID="783e9062d588765b104803aeb2baea67fb263ff935e2a1ab5793fbb7960694ec" Nov 22 13:06:58 crc kubenswrapper[4772]: E1122 13:06:58.896970 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"783e9062d588765b104803aeb2baea67fb263ff935e2a1ab5793fbb7960694ec\": container with ID starting with 783e9062d588765b104803aeb2baea67fb263ff935e2a1ab5793fbb7960694ec not found: ID does not exist" containerID="783e9062d588765b104803aeb2baea67fb263ff935e2a1ab5793fbb7960694ec" Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.896997 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"783e9062d588765b104803aeb2baea67fb263ff935e2a1ab5793fbb7960694ec"} err="failed to get container status \"783e9062d588765b104803aeb2baea67fb263ff935e2a1ab5793fbb7960694ec\": rpc error: code = NotFound desc = could not find container \"783e9062d588765b104803aeb2baea67fb263ff935e2a1ab5793fbb7960694ec\": container with ID starting with 783e9062d588765b104803aeb2baea67fb263ff935e2a1ab5793fbb7960694ec not found: ID does not exist" Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.913638 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9be96cca-27a5-448b-b4bb-b3ef4100f9ac-config-data" (OuterVolumeSpecName: "config-data") pod "9be96cca-27a5-448b-b4bb-b3ef4100f9ac" (UID: "9be96cca-27a5-448b-b4bb-b3ef4100f9ac"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.934175 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9be96cca-27a5-448b-b4bb-b3ef4100f9ac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9be96cca-27a5-448b-b4bb-b3ef4100f9ac" (UID: "9be96cca-27a5-448b-b4bb-b3ef4100f9ac"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.984578 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29bf3574-4bfe-4b57-90a0-4b76860bfc1c-logs\") pod \"nova-api-0\" (UID: \"29bf3574-4bfe-4b57-90a0-4b76860bfc1c\") " pod="openstack/nova-api-0" Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.984748 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29bf3574-4bfe-4b57-90a0-4b76860bfc1c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"29bf3574-4bfe-4b57-90a0-4b76860bfc1c\") " pod="openstack/nova-api-0" Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.985022 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkp7s\" (UniqueName: \"kubernetes.io/projected/29bf3574-4bfe-4b57-90a0-4b76860bfc1c-kube-api-access-dkp7s\") pod \"nova-api-0\" (UID: \"29bf3574-4bfe-4b57-90a0-4b76860bfc1c\") " pod="openstack/nova-api-0" Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.985192 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29bf3574-4bfe-4b57-90a0-4b76860bfc1c-config-data\") pod \"nova-api-0\" (UID: \"29bf3574-4bfe-4b57-90a0-4b76860bfc1c\") " pod="openstack/nova-api-0" Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.985362 4772 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9be96cca-27a5-448b-b4bb-b3ef4100f9ac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.985389 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fm8vk\" (UniqueName: \"kubernetes.io/projected/9be96cca-27a5-448b-b4bb-b3ef4100f9ac-kube-api-access-fm8vk\") on node \"crc\" DevicePath \"\"" Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.985406 4772 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9be96cca-27a5-448b-b4bb-b3ef4100f9ac-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 13:06:58 crc kubenswrapper[4772]: I1122 13:06:58.988853 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 13:06:59 crc kubenswrapper[4772]: I1122 13:06:59.025026 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 13:06:59 crc kubenswrapper[4772]: W1122 13:06:59.026484 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod86c2a53a_f52b_49b3_9bc3_105cf5918b7f.slice/crio-f151d44b31c142d42e8b35a1518448d5af76ab85f4425fbe0672510e22046351 WatchSource:0}: Error finding container f151d44b31c142d42e8b35a1518448d5af76ab85f4425fbe0672510e22046351: Status 404 returned error can't find the container with id f151d44b31c142d42e8b35a1518448d5af76ab85f4425fbe0672510e22046351 Nov 22 13:06:59 crc kubenswrapper[4772]: I1122 13:06:59.044169 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 22 13:06:59 crc kubenswrapper[4772]: I1122 13:06:59.052171 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 22 13:06:59 crc kubenswrapper[4772]: I1122 
13:06:59.065898 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 13:06:59 crc kubenswrapper[4772]: I1122 13:06:59.066147 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 13:06:59 crc kubenswrapper[4772]: I1122 13:06:59.069437 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 22 13:06:59 crc kubenswrapper[4772]: I1122 13:06:59.089557 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29bf3574-4bfe-4b57-90a0-4b76860bfc1c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"29bf3574-4bfe-4b57-90a0-4b76860bfc1c\") " pod="openstack/nova-api-0" Nov 22 13:06:59 crc kubenswrapper[4772]: I1122 13:06:59.089724 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkp7s\" (UniqueName: \"kubernetes.io/projected/29bf3574-4bfe-4b57-90a0-4b76860bfc1c-kube-api-access-dkp7s\") pod \"nova-api-0\" (UID: \"29bf3574-4bfe-4b57-90a0-4b76860bfc1c\") " pod="openstack/nova-api-0" Nov 22 13:06:59 crc kubenswrapper[4772]: I1122 13:06:59.089765 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29bf3574-4bfe-4b57-90a0-4b76860bfc1c-config-data\") pod \"nova-api-0\" (UID: \"29bf3574-4bfe-4b57-90a0-4b76860bfc1c\") " pod="openstack/nova-api-0" Nov 22 13:06:59 crc kubenswrapper[4772]: I1122 13:06:59.089871 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29bf3574-4bfe-4b57-90a0-4b76860bfc1c-logs\") pod \"nova-api-0\" (UID: \"29bf3574-4bfe-4b57-90a0-4b76860bfc1c\") " pod="openstack/nova-api-0" Nov 22 13:06:59 crc kubenswrapper[4772]: I1122 13:06:59.090503 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29bf3574-4bfe-4b57-90a0-4b76860bfc1c-logs\") pod \"nova-api-0\" (UID: \"29bf3574-4bfe-4b57-90a0-4b76860bfc1c\") " pod="openstack/nova-api-0" Nov 22 13:06:59 crc kubenswrapper[4772]: I1122 13:06:59.095179 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29bf3574-4bfe-4b57-90a0-4b76860bfc1c-config-data\") pod \"nova-api-0\" (UID: \"29bf3574-4bfe-4b57-90a0-4b76860bfc1c\") " pod="openstack/nova-api-0" Nov 22 13:06:59 crc kubenswrapper[4772]: I1122 13:06:59.103600 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29bf3574-4bfe-4b57-90a0-4b76860bfc1c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"29bf3574-4bfe-4b57-90a0-4b76860bfc1c\") " pod="openstack/nova-api-0" Nov 22 13:06:59 crc kubenswrapper[4772]: I1122 13:06:59.116788 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkp7s\" (UniqueName: \"kubernetes.io/projected/29bf3574-4bfe-4b57-90a0-4b76860bfc1c-kube-api-access-dkp7s\") pod \"nova-api-0\" (UID: \"29bf3574-4bfe-4b57-90a0-4b76860bfc1c\") " pod="openstack/nova-api-0" Nov 22 13:06:59 crc kubenswrapper[4772]: I1122 13:06:59.188383 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 22 13:06:59 crc kubenswrapper[4772]: I1122 13:06:59.191607 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a715a14b-bf58-4263-8b67-15d6e90adc77-config-data\") pod \"nova-metadata-0\" (UID: \"a715a14b-bf58-4263-8b67-15d6e90adc77\") " pod="openstack/nova-metadata-0" Nov 22 13:06:59 crc kubenswrapper[4772]: I1122 13:06:59.191718 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a715a14b-bf58-4263-8b67-15d6e90adc77-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a715a14b-bf58-4263-8b67-15d6e90adc77\") " pod="openstack/nova-metadata-0" Nov 22 13:06:59 crc kubenswrapper[4772]: I1122 13:06:59.192162 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a715a14b-bf58-4263-8b67-15d6e90adc77-logs\") pod \"nova-metadata-0\" (UID: \"a715a14b-bf58-4263-8b67-15d6e90adc77\") " pod="openstack/nova-metadata-0" Nov 22 13:06:59 crc kubenswrapper[4772]: I1122 13:06:59.192306 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kkcm\" (UniqueName: \"kubernetes.io/projected/a715a14b-bf58-4263-8b67-15d6e90adc77-kube-api-access-4kkcm\") pod \"nova-metadata-0\" (UID: \"a715a14b-bf58-4263-8b67-15d6e90adc77\") " pod="openstack/nova-metadata-0" Nov 22 13:06:59 crc kubenswrapper[4772]: I1122 13:06:59.294937 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a715a14b-bf58-4263-8b67-15d6e90adc77-config-data\") pod \"nova-metadata-0\" (UID: \"a715a14b-bf58-4263-8b67-15d6e90adc77\") " pod="openstack/nova-metadata-0" Nov 22 13:06:59 crc kubenswrapper[4772]: I1122 13:06:59.295084 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a715a14b-bf58-4263-8b67-15d6e90adc77-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a715a14b-bf58-4263-8b67-15d6e90adc77\") " pod="openstack/nova-metadata-0" Nov 22 13:06:59 crc kubenswrapper[4772]: I1122 13:06:59.295140 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a715a14b-bf58-4263-8b67-15d6e90adc77-logs\") pod \"nova-metadata-0\" (UID: \"a715a14b-bf58-4263-8b67-15d6e90adc77\") " pod="openstack/nova-metadata-0" Nov 22 13:06:59 crc kubenswrapper[4772]: I1122 13:06:59.295226 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kkcm\" (UniqueName: \"kubernetes.io/projected/a715a14b-bf58-4263-8b67-15d6e90adc77-kube-api-access-4kkcm\") pod \"nova-metadata-0\" (UID: \"a715a14b-bf58-4263-8b67-15d6e90adc77\") " pod="openstack/nova-metadata-0" Nov 22 13:06:59 crc kubenswrapper[4772]: I1122 13:06:59.297234 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a715a14b-bf58-4263-8b67-15d6e90adc77-logs\") pod \"nova-metadata-0\" (UID: \"a715a14b-bf58-4263-8b67-15d6e90adc77\") " pod="openstack/nova-metadata-0" Nov 22 13:06:59 crc kubenswrapper[4772]: I1122 13:06:59.301246 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a715a14b-bf58-4263-8b67-15d6e90adc77-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a715a14b-bf58-4263-8b67-15d6e90adc77\") " pod="openstack/nova-metadata-0" Nov 22 13:06:59 crc kubenswrapper[4772]: I1122 13:06:59.302247 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a715a14b-bf58-4263-8b67-15d6e90adc77-config-data\") pod \"nova-metadata-0\" (UID: \"a715a14b-bf58-4263-8b67-15d6e90adc77\") " pod="openstack/nova-metadata-0" Nov 22 13:06:59 crc kubenswrapper[4772]: I1122 13:06:59.323385 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kkcm\" (UniqueName: \"kubernetes.io/projected/a715a14b-bf58-4263-8b67-15d6e90adc77-kube-api-access-4kkcm\") pod \"nova-metadata-0\" (UID: \"a715a14b-bf58-4263-8b67-15d6e90adc77\") " pod="openstack/nova-metadata-0" Nov 22 13:06:59 crc kubenswrapper[4772]: I1122 13:06:59.417720 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 13:06:59 crc kubenswrapper[4772]: I1122 13:06:59.443127 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6e24976-7404-46e3-8cbf-e71d378c8bff" path="/var/lib/kubelet/pods/b6e24976-7404-46e3-8cbf-e71d378c8bff/volumes" Nov 22 13:06:59 crc kubenswrapper[4772]: I1122 13:06:59.446072 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d014cf6c-d393-49b4-9fad-fd21919ea793" path="/var/lib/kubelet/pods/d014cf6c-d393-49b4-9fad-fd21919ea793/volumes" Nov 22 13:06:59 crc kubenswrapper[4772]: I1122 13:06:59.446901 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa54ea86-921f-4d80-86ac-312754960a02" path="/var/lib/kubelet/pods/fa54ea86-921f-4d80-86ac-312754960a02/volumes" Nov 22 13:06:59 crc kubenswrapper[4772]: I1122 13:06:59.689663 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 22 13:06:59 crc kubenswrapper[4772]: I1122 13:06:59.731130 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"86c2a53a-f52b-49b3-9bc3-105cf5918b7f","Type":"ContainerStarted","Data":"64707a36bde263f25577640ff0b32d0708b1dfeb0b1cd124e5d33d5ca92a8e7d"} Nov 22 13:06:59 crc kubenswrapper[4772]: I1122 13:06:59.731636 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Nov 22 13:06:59 crc kubenswrapper[4772]: I1122 13:06:59.731656 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"86c2a53a-f52b-49b3-9bc3-105cf5918b7f","Type":"ContainerStarted","Data":"f151d44b31c142d42e8b35a1518448d5af76ab85f4425fbe0672510e22046351"} Nov 22 13:06:59 crc kubenswrapper[4772]: I1122 13:06:59.731484 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 13:06:59 crc kubenswrapper[4772]: I1122 13:06:59.760455 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.760432615 podStartE2EDuration="2.760432615s" podCreationTimestamp="2025-11-22 13:06:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 13:06:59.747823883 +0000 UTC m=+8939.987268387" watchObservedRunningTime="2025-11-22 13:06:59.760432615 +0000 UTC m=+8939.999877109" Nov 22 13:06:59 crc kubenswrapper[4772]: I1122 13:06:59.786915 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 13:06:59 crc kubenswrapper[4772]: I1122 13:06:59.811634 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 13:06:59 crc kubenswrapper[4772]: I1122 13:06:59.822941 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 13:06:59 crc kubenswrapper[4772]: I1122 13:06:59.824552 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 13:06:59 crc kubenswrapper[4772]: I1122 13:06:59.832719 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 13:06:59 crc kubenswrapper[4772]: I1122 13:06:59.833356 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 22 13:06:59 crc kubenswrapper[4772]: I1122 13:06:59.895304 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 13:06:59 crc kubenswrapper[4772]: W1122 13:06:59.916858 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda715a14b_bf58_4263_8b67_15d6e90adc77.slice/crio-54a5329e798984ff6028af7ae7a6430f91f0322fd903905fc8ed9869d932c793 WatchSource:0}: Error finding container 54a5329e798984ff6028af7ae7a6430f91f0322fd903905fc8ed9869d932c793: Status 404 returned error can't find the container with id 54a5329e798984ff6028af7ae7a6430f91f0322fd903905fc8ed9869d932c793 Nov 22 13:06:59 crc kubenswrapper[4772]: I1122 13:06:59.927592 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znbwh\" (UniqueName: \"kubernetes.io/projected/3366e4a4-1c0a-4314-805b-72e70cd70289-kube-api-access-znbwh\") pod \"nova-scheduler-0\" (UID: \"3366e4a4-1c0a-4314-805b-72e70cd70289\") " pod="openstack/nova-scheduler-0" Nov 22 13:06:59 crc kubenswrapper[4772]: I1122 13:06:59.927918 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3366e4a4-1c0a-4314-805b-72e70cd70289-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3366e4a4-1c0a-4314-805b-72e70cd70289\") " pod="openstack/nova-scheduler-0" Nov 22 13:06:59 crc kubenswrapper[4772]: I1122 13:06:59.927955 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3366e4a4-1c0a-4314-805b-72e70cd70289-config-data\") pod \"nova-scheduler-0\" (UID: \"3366e4a4-1c0a-4314-805b-72e70cd70289\") " pod="openstack/nova-scheduler-0" Nov 22 13:07:00 crc kubenswrapper[4772]: I1122 13:07:00.031316 4772 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-znbwh\" (UniqueName: \"kubernetes.io/projected/3366e4a4-1c0a-4314-805b-72e70cd70289-kube-api-access-znbwh\") pod \"nova-scheduler-0\" (UID: \"3366e4a4-1c0a-4314-805b-72e70cd70289\") " pod="openstack/nova-scheduler-0" Nov 22 13:07:00 crc kubenswrapper[4772]: I1122 13:07:00.031932 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3366e4a4-1c0a-4314-805b-72e70cd70289-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3366e4a4-1c0a-4314-805b-72e70cd70289\") " pod="openstack/nova-scheduler-0" Nov 22 13:07:00 crc kubenswrapper[4772]: I1122 13:07:00.031967 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3366e4a4-1c0a-4314-805b-72e70cd70289-config-data\") pod \"nova-scheduler-0\" (UID: \"3366e4a4-1c0a-4314-805b-72e70cd70289\") " pod="openstack/nova-scheduler-0" Nov 22 13:07:00 crc kubenswrapper[4772]: I1122 13:07:00.057270 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znbwh\" (UniqueName: \"kubernetes.io/projected/3366e4a4-1c0a-4314-805b-72e70cd70289-kube-api-access-znbwh\") pod \"nova-scheduler-0\" (UID: \"3366e4a4-1c0a-4314-805b-72e70cd70289\") " pod="openstack/nova-scheduler-0" Nov 22 13:07:00 crc kubenswrapper[4772]: I1122 13:07:00.057637 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3366e4a4-1c0a-4314-805b-72e70cd70289-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3366e4a4-1c0a-4314-805b-72e70cd70289\") " pod="openstack/nova-scheduler-0" Nov 22 13:07:00 crc kubenswrapper[4772]: I1122 13:07:00.058233 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3366e4a4-1c0a-4314-805b-72e70cd70289-config-data\") pod \"nova-scheduler-0\" (UID: \"3366e4a4-1c0a-4314-805b-72e70cd70289\") " pod="openstack/nova-scheduler-0" Nov 22 13:07:00 crc kubenswrapper[4772]: I1122 13:07:00.195192 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 13:07:00 crc kubenswrapper[4772]: I1122 13:07:00.724487 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 13:07:00 crc kubenswrapper[4772]: W1122 13:07:00.727695 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3366e4a4_1c0a_4314_805b_72e70cd70289.slice/crio-07d4a1c2d73aa8ac2f18ebb7f59b28c8856e5151a4e0e9c6d599fb624f6830da WatchSource:0}: Error finding container 07d4a1c2d73aa8ac2f18ebb7f59b28c8856e5151a4e0e9c6d599fb624f6830da: Status 404 returned error can't find the container with id 07d4a1c2d73aa8ac2f18ebb7f59b28c8856e5151a4e0e9c6d599fb624f6830da Nov 22 13:07:00 crc kubenswrapper[4772]: I1122 13:07:00.748444 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"29bf3574-4bfe-4b57-90a0-4b76860bfc1c","Type":"ContainerStarted","Data":"923204cbc95ae2324797f3fb39ace7ad3ed404a36f9cdd3b8aa83e23c5b07bd3"} Nov 22 13:07:00 crc kubenswrapper[4772]: I1122 13:07:00.748496 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"29bf3574-4bfe-4b57-90a0-4b76860bfc1c","Type":"ContainerStarted","Data":"63d4fbef179972991e55e1f91f5654c32f9f56df4b1662c31982e2fccc45e683"} Nov 22 13:07:00 crc kubenswrapper[4772]: I1122 13:07:00.748510 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"29bf3574-4bfe-4b57-90a0-4b76860bfc1c","Type":"ContainerStarted","Data":"b450ebc2695afc66e2decf1b5ae577642a719d4def0437f4523013f60dc7f0d7"} Nov 22 13:07:00 crc kubenswrapper[4772]: I1122 13:07:00.753291 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a715a14b-bf58-4263-8b67-15d6e90adc77","Type":"ContainerStarted","Data":"045a2dc9c719fd6fbd4f63b4a65ffba0d2726f48bba3a1db6f9ac32f11922f74"} Nov 22 13:07:00 crc kubenswrapper[4772]: I1122 13:07:00.753364 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a715a14b-bf58-4263-8b67-15d6e90adc77","Type":"ContainerStarted","Data":"10aa6fadc088a8139b64538c7a3b58b165c7b5909bb81a4e4fbd35b9a328f93f"} Nov 22 13:07:00 crc kubenswrapper[4772]: I1122 13:07:00.753385 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a715a14b-bf58-4263-8b67-15d6e90adc77","Type":"ContainerStarted","Data":"54a5329e798984ff6028af7ae7a6430f91f0322fd903905fc8ed9869d932c793"} Nov 22 13:07:00 crc kubenswrapper[4772]: I1122 13:07:00.767575 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3366e4a4-1c0a-4314-805b-72e70cd70289","Type":"ContainerStarted","Data":"07d4a1c2d73aa8ac2f18ebb7f59b28c8856e5151a4e0e9c6d599fb624f6830da"} Nov 22 13:07:00 crc kubenswrapper[4772]: I1122 13:07:00.772643 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.772620379 podStartE2EDuration="2.772620379s" podCreationTimestamp="2025-11-22 13:06:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 13:07:00.766740424 +0000 UTC m=+8941.006184928" watchObservedRunningTime="2025-11-22 13:07:00.772620379 +0000 UTC m=+8941.012064873" Nov 22 13:07:00 crc kubenswrapper[4772]: I1122 13:07:00.812991 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-metadata-0" podStartSLOduration=2.812972449 podStartE2EDuration="2.812972449s" podCreationTimestamp="2025-11-22 13:06:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 13:07:00.802536571 +0000 UTC m=+8941.041981065" watchObservedRunningTime="2025-11-22 13:07:00.812972449 +0000 UTC m=+8941.052416943" Nov 22 13:07:01 crc kubenswrapper[4772]: I1122 13:07:01.461742 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9be96cca-27a5-448b-b4bb-b3ef4100f9ac" path="/var/lib/kubelet/pods/9be96cca-27a5-448b-b4bb-b3ef4100f9ac/volumes" Nov 22 13:07:01 crc kubenswrapper[4772]: I1122 13:07:01.782020 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3366e4a4-1c0a-4314-805b-72e70cd70289","Type":"ContainerStarted","Data":"73a19fbde053189ca105fc2b6c3eadc814c5b8777096c85f496357ab5c9b73df"} Nov 22 13:07:01 crc kubenswrapper[4772]: I1122 13:07:01.811709 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.81168006 podStartE2EDuration="2.81168006s" podCreationTimestamp="2025-11-22 13:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 13:07:01.798288898 +0000 UTC m=+8942.037733412" watchObservedRunningTime="2025-11-22 13:07:01.81168006 +0000 UTC m=+8942.051124594" Nov 22 13:07:02 crc kubenswrapper[4772]: I1122 13:07:02.092575 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Nov 22 13:07:04 crc kubenswrapper[4772]: I1122 13:07:04.418521 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 22 13:07:04 crc kubenswrapper[4772]: I1122 13:07:04.418933 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 22 13:07:05 crc kubenswrapper[4772]: I1122 13:07:05.195995 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 22 13:07:08 crc kubenswrapper[4772]: I1122 13:07:08.161794 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Nov 22 13:07:09 crc kubenswrapper[4772]: I1122 13:07:09.189277 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 22 13:07:09 crc kubenswrapper[4772]: I1122 13:07:09.190172 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 22 13:07:09 crc kubenswrapper[4772]: I1122 13:07:09.439870 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 22 13:07:09 crc kubenswrapper[4772]: I1122 13:07:09.439930 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 22 13:07:10 crc kubenswrapper[4772]: I1122 13:07:10.189302 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="29bf3574-4bfe-4b57-90a0-4b76860bfc1c" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.196:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 13:07:10 crc kubenswrapper[4772]: I1122 13:07:10.196353 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/nova-scheduler-0" Nov 22 13:07:10 crc kubenswrapper[4772]: I1122 13:07:10.230338 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="29bf3574-4bfe-4b57-90a0-4b76860bfc1c" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.196:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 13:07:10 crc kubenswrapper[4772]: I1122 13:07:10.237204 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 22 13:07:10 crc kubenswrapper[4772]: I1122 13:07:10.500249 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="a715a14b-bf58-4263-8b67-15d6e90adc77" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.197:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 13:07:10 crc kubenswrapper[4772]: I1122 13:07:10.500276 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="a715a14b-bf58-4263-8b67-15d6e90adc77" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.197:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 13:07:10 crc kubenswrapper[4772]: I1122 13:07:10.913205 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 22 13:07:18 crc kubenswrapper[4772]: I1122 13:07:18.836858 4772 scope.go:117] "RemoveContainer" containerID="e335096989f4023536085730b2683fc6bc5dd959f5c3558ed49222e29ac0b0f8" Nov 22 13:07:18 crc kubenswrapper[4772]: I1122 13:07:18.869551 4772 scope.go:117] "RemoveContainer" containerID="51d878d829cd05a1011fee8dbd2e699a7a8a7ff6747bdc59069c47c7828c0530" Nov 22 13:07:18 crc kubenswrapper[4772]: I1122 13:07:18.895639 4772 scope.go:117] "RemoveContainer" containerID="c75d18a00141d6126d52ff2dfaccaceb6a7f0a44caf491c243c505e5600aad48" Nov 22 13:07:18 crc kubenswrapper[4772]: I1122 13:07:18.980090 4772 scope.go:117] "RemoveContainer" containerID="b4ef1ec69573161f64b501119e1a7850be2ec2476e09f9b4c08e0bf2b0a8d50c" Nov 22 13:07:19 crc kubenswrapper[4772]: I1122 13:07:19.193374 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 22 13:07:19 crc kubenswrapper[4772]: I1122 13:07:19.193867 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 22 13:07:19 crc kubenswrapper[4772]: I1122 13:07:19.194903 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 22 13:07:19 crc kubenswrapper[4772]: I1122 13:07:19.197107 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 22 13:07:19 crc kubenswrapper[4772]: I1122 13:07:19.427695 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 22 13:07:19 crc kubenswrapper[4772]: I1122 13:07:19.428096 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 22 13:07:19 crc kubenswrapper[4772]: I1122 13:07:19.431218 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 22 13:07:19 crc kubenswrapper[4772]: I1122 13:07:19.431361 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 22 13:07:20 crc 
kubenswrapper[4772]: I1122 13:07:20.003892 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 22 13:07:20 crc kubenswrapper[4772]: I1122 13:07:20.011309 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 22 13:08:31 crc kubenswrapper[4772]: I1122 13:08:31.533455 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 13:08:31 crc kubenswrapper[4772]: I1122 13:08:31.534309 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 13:09:01 crc kubenswrapper[4772]: I1122 13:09:01.532927 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 13:09:01 crc kubenswrapper[4772]: I1122 13:09:01.533538 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 13:09:31 crc kubenswrapper[4772]: I1122 13:09:31.533462 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 13:09:31 crc kubenswrapper[4772]: I1122 13:09:31.534071 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 13:09:31 crc kubenswrapper[4772]: I1122 13:09:31.534130 4772 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" Nov 22 13:09:31 crc kubenswrapper[4772]: I1122 13:09:31.535017 4772 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2ea72bc241df07d653f70fa21c2180bbaa140e9970b64a91f86789fd5c03b741"} pod="openshift-machine-config-operator/machine-config-daemon-wwshd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 13:09:31 crc kubenswrapper[4772]: I1122 13:09:31.535088 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" 
containerID="cri-o://2ea72bc241df07d653f70fa21c2180bbaa140e9970b64a91f86789fd5c03b741" gracePeriod=600 Nov 22 13:09:32 crc kubenswrapper[4772]: I1122 13:09:32.606062 4772 generic.go:334] "Generic (PLEG): container finished" podID="2386c238-461f-4956-940f-ac3c26eb052e" containerID="2ea72bc241df07d653f70fa21c2180bbaa140e9970b64a91f86789fd5c03b741" exitCode=0 Nov 22 13:09:32 crc kubenswrapper[4772]: I1122 13:09:32.606181 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" event={"ID":"2386c238-461f-4956-940f-ac3c26eb052e","Type":"ContainerDied","Data":"2ea72bc241df07d653f70fa21c2180bbaa140e9970b64a91f86789fd5c03b741"} Nov 22 13:09:32 crc kubenswrapper[4772]: I1122 13:09:32.606829 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" event={"ID":"2386c238-461f-4956-940f-ac3c26eb052e","Type":"ContainerStarted","Data":"0f0ceb8d50c48253b7b22a7a8a271db084d9899785bb62159676dccb6db6f1ba"} Nov 22 13:09:32 crc kubenswrapper[4772]: I1122 13:09:32.606861 4772 scope.go:117] "RemoveContainer" containerID="5a427682510f2025a1a305f31d041abd640d4c92247ada31a7f887bd361d20ff" Nov 22 13:11:28 crc kubenswrapper[4772]: I1122 13:11:28.002416 4772 generic.go:334] "Generic (PLEG): container finished" podID="819a5030-8de8-4772-86bf-9d6fb6f8de4e" containerID="5218a190bdb1ffbd7385cf04dd6cf52b7c51599a50a56fe4d91182993cab5583" exitCode=0 Nov 22 13:11:28 crc kubenswrapper[4772]: I1122 13:11:28.002494 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs" event={"ID":"819a5030-8de8-4772-86bf-9d6fb6f8de4e","Type":"ContainerDied","Data":"5218a190bdb1ffbd7385cf04dd6cf52b7c51599a50a56fe4d91182993cab5583"} Nov 22 13:11:29 crc kubenswrapper[4772]: I1122 13:11:29.512145 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs" Nov 22 13:11:29 crc kubenswrapper[4772]: I1122 13:11:29.679284 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/819a5030-8de8-4772-86bf-9d6fb6f8de4e-nova-cell1-compute-config-0\") pod \"819a5030-8de8-4772-86bf-9d6fb6f8de4e\" (UID: \"819a5030-8de8-4772-86bf-9d6fb6f8de4e\") " Nov 22 13:11:29 crc kubenswrapper[4772]: I1122 13:11:29.679350 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/819a5030-8de8-4772-86bf-9d6fb6f8de4e-nova-migration-ssh-key-1\") pod \"819a5030-8de8-4772-86bf-9d6fb6f8de4e\" (UID: \"819a5030-8de8-4772-86bf-9d6fb6f8de4e\") " Nov 22 13:11:29 crc kubenswrapper[4772]: I1122 13:11:29.679371 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/819a5030-8de8-4772-86bf-9d6fb6f8de4e-nova-cells-global-config-0\") pod \"819a5030-8de8-4772-86bf-9d6fb6f8de4e\" (UID: \"819a5030-8de8-4772-86bf-9d6fb6f8de4e\") " Nov 22 13:11:29 crc kubenswrapper[4772]: I1122 13:11:29.679391 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/819a5030-8de8-4772-86bf-9d6fb6f8de4e-ceph\") pod \"819a5030-8de8-4772-86bf-9d6fb6f8de4e\" (UID: \"819a5030-8de8-4772-86bf-9d6fb6f8de4e\") " Nov 22 13:11:29 crc kubenswrapper[4772]: I1122 13:11:29.679412 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/819a5030-8de8-4772-86bf-9d6fb6f8de4e-ssh-key\") pod \"819a5030-8de8-4772-86bf-9d6fb6f8de4e\" (UID: \"819a5030-8de8-4772-86bf-9d6fb6f8de4e\") " Nov 22 13:11:29 crc kubenswrapper[4772]: I1122 13:11:29.679443 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/819a5030-8de8-4772-86bf-9d6fb6f8de4e-nova-cell1-compute-config-1\") pod \"819a5030-8de8-4772-86bf-9d6fb6f8de4e\" (UID: \"819a5030-8de8-4772-86bf-9d6fb6f8de4e\") " Nov 22 13:11:29 crc kubenswrapper[4772]: I1122 13:11:29.679461 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/819a5030-8de8-4772-86bf-9d6fb6f8de4e-inventory\") pod \"819a5030-8de8-4772-86bf-9d6fb6f8de4e\" (UID: \"819a5030-8de8-4772-86bf-9d6fb6f8de4e\") " Nov 22 13:11:29 crc kubenswrapper[4772]: I1122 13:11:29.679480 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmmbc\" (UniqueName: \"kubernetes.io/projected/819a5030-8de8-4772-86bf-9d6fb6f8de4e-kube-api-access-tmmbc\") pod \"819a5030-8de8-4772-86bf-9d6fb6f8de4e\" (UID: \"819a5030-8de8-4772-86bf-9d6fb6f8de4e\") " Nov 22 13:11:29 crc kubenswrapper[4772]: I1122 13:11:29.679514 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/819a5030-8de8-4772-86bf-9d6fb6f8de4e-nova-cells-global-config-1\") pod \"819a5030-8de8-4772-86bf-9d6fb6f8de4e\" (UID: \"819a5030-8de8-4772-86bf-9d6fb6f8de4e\") " Nov 22 13:11:29 crc kubenswrapper[4772]: I1122 13:11:29.679566 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/819a5030-8de8-4772-86bf-9d6fb6f8de4e-nova-cell1-combined-ca-bundle\") pod \"819a5030-8de8-4772-86bf-9d6fb6f8de4e\" (UID: \"819a5030-8de8-4772-86bf-9d6fb6f8de4e\") " Nov 22 13:11:29 crc kubenswrapper[4772]: I1122 13:11:29.679599 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/819a5030-8de8-4772-86bf-9d6fb6f8de4e-nova-migration-ssh-key-0\") pod \"819a5030-8de8-4772-86bf-9d6fb6f8de4e\" (UID: \"819a5030-8de8-4772-86bf-9d6fb6f8de4e\") " Nov 22 13:11:29 crc kubenswrapper[4772]: I1122 13:11:29.686336 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/819a5030-8de8-4772-86bf-9d6fb6f8de4e-ceph" (OuterVolumeSpecName: "ceph") pod "819a5030-8de8-4772-86bf-9d6fb6f8de4e" (UID: "819a5030-8de8-4772-86bf-9d6fb6f8de4e"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 13:11:29 crc kubenswrapper[4772]: I1122 13:11:29.695341 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/819a5030-8de8-4772-86bf-9d6fb6f8de4e-kube-api-access-tmmbc" (OuterVolumeSpecName: "kube-api-access-tmmbc") pod "819a5030-8de8-4772-86bf-9d6fb6f8de4e" (UID: "819a5030-8de8-4772-86bf-9d6fb6f8de4e"). InnerVolumeSpecName "kube-api-access-tmmbc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 13:11:29 crc kubenswrapper[4772]: I1122 13:11:29.699879 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/819a5030-8de8-4772-86bf-9d6fb6f8de4e-nova-cell1-combined-ca-bundle" (OuterVolumeSpecName: "nova-cell1-combined-ca-bundle") pod "819a5030-8de8-4772-86bf-9d6fb6f8de4e" (UID: "819a5030-8de8-4772-86bf-9d6fb6f8de4e"). InnerVolumeSpecName "nova-cell1-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 13:11:29 crc kubenswrapper[4772]: I1122 13:11:29.708632 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/819a5030-8de8-4772-86bf-9d6fb6f8de4e-inventory" (OuterVolumeSpecName: "inventory") pod "819a5030-8de8-4772-86bf-9d6fb6f8de4e" (UID: "819a5030-8de8-4772-86bf-9d6fb6f8de4e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 13:11:29 crc kubenswrapper[4772]: I1122 13:11:29.712902 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/819a5030-8de8-4772-86bf-9d6fb6f8de4e-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "819a5030-8de8-4772-86bf-9d6fb6f8de4e" (UID: "819a5030-8de8-4772-86bf-9d6fb6f8de4e"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 13:11:29 crc kubenswrapper[4772]: I1122 13:11:29.715611 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/819a5030-8de8-4772-86bf-9d6fb6f8de4e-nova-cells-global-config-0" (OuterVolumeSpecName: "nova-cells-global-config-0") pod "819a5030-8de8-4772-86bf-9d6fb6f8de4e" (UID: "819a5030-8de8-4772-86bf-9d6fb6f8de4e"). InnerVolumeSpecName "nova-cells-global-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 13:11:29 crc kubenswrapper[4772]: I1122 13:11:29.716899 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/819a5030-8de8-4772-86bf-9d6fb6f8de4e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "819a5030-8de8-4772-86bf-9d6fb6f8de4e" (UID: "819a5030-8de8-4772-86bf-9d6fb6f8de4e"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 13:11:29 crc kubenswrapper[4772]: I1122 13:11:29.716932 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/819a5030-8de8-4772-86bf-9d6fb6f8de4e-nova-cells-global-config-1" (OuterVolumeSpecName: "nova-cells-global-config-1") pod "819a5030-8de8-4772-86bf-9d6fb6f8de4e" (UID: "819a5030-8de8-4772-86bf-9d6fb6f8de4e"). InnerVolumeSpecName "nova-cells-global-config-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 13:11:29 crc kubenswrapper[4772]: I1122 13:11:29.717911 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/819a5030-8de8-4772-86bf-9d6fb6f8de4e-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "819a5030-8de8-4772-86bf-9d6fb6f8de4e" (UID: "819a5030-8de8-4772-86bf-9d6fb6f8de4e"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 13:11:29 crc kubenswrapper[4772]: I1122 13:11:29.719180 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/819a5030-8de8-4772-86bf-9d6fb6f8de4e-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "819a5030-8de8-4772-86bf-9d6fb6f8de4e" (UID: "819a5030-8de8-4772-86bf-9d6fb6f8de4e"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 13:11:29 crc kubenswrapper[4772]: I1122 13:11:29.721379 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/819a5030-8de8-4772-86bf-9d6fb6f8de4e-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "819a5030-8de8-4772-86bf-9d6fb6f8de4e" (UID: "819a5030-8de8-4772-86bf-9d6fb6f8de4e"). InnerVolumeSpecName "nova-migration-ssh-key-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 13:11:29 crc kubenswrapper[4772]: I1122 13:11:29.783489 4772 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/819a5030-8de8-4772-86bf-9d6fb6f8de4e-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 22 13:11:29 crc kubenswrapper[4772]: I1122 13:11:29.783547 4772 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/819a5030-8de8-4772-86bf-9d6fb6f8de4e-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Nov 22 13:11:29 crc kubenswrapper[4772]: I1122 13:11:29.783572 4772 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/819a5030-8de8-4772-86bf-9d6fb6f8de4e-inventory\") on node \"crc\" DevicePath \"\"" Nov 22 13:11:29 crc kubenswrapper[4772]: I1122 13:11:29.783588 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmmbc\" (UniqueName: \"kubernetes.io/projected/819a5030-8de8-4772-86bf-9d6fb6f8de4e-kube-api-access-tmmbc\") on node \"crc\" DevicePath \"\"" Nov 22 13:11:29 crc kubenswrapper[4772]: I1122 13:11:29.783602 4772 reconciler_common.go:293] "Volume detached for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/819a5030-8de8-4772-86bf-9d6fb6f8de4e-nova-cells-global-config-1\") on node \"crc\" DevicePath \"\"" Nov 22 13:11:29 crc kubenswrapper[4772]: I1122 13:11:29.783614 4772 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/819a5030-8de8-4772-86bf-9d6fb6f8de4e-nova-cell1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 13:11:29 crc kubenswrapper[4772]: I1122 13:11:29.783627 4772 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/819a5030-8de8-4772-86bf-9d6fb6f8de4e-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Nov 22 13:11:29 crc kubenswrapper[4772]: I1122 13:11:29.783645 4772 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/819a5030-8de8-4772-86bf-9d6fb6f8de4e-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Nov 22 13:11:29 crc kubenswrapper[4772]: I1122 13:11:29.783658 4772 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/819a5030-8de8-4772-86bf-9d6fb6f8de4e-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Nov 22 13:11:29 crc kubenswrapper[4772]: I1122 13:11:29.783670 4772 reconciler_common.go:293] "Volume detached for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/819a5030-8de8-4772-86bf-9d6fb6f8de4e-nova-cells-global-config-0\") on node \"crc\" DevicePath \"\"" Nov 22 13:11:29 crc kubenswrapper[4772]: I1122 13:11:29.783684 4772 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/819a5030-8de8-4772-86bf-9d6fb6f8de4e-ceph\") on node \"crc\" DevicePath \"\"" Nov 22 13:11:30 crc kubenswrapper[4772]: I1122 13:11:30.035246 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs" event={"ID":"819a5030-8de8-4772-86bf-9d6fb6f8de4e","Type":"ContainerDied","Data":"0df48a54e2c3027bcfa75e8d56866cbd4c7b0e97f3aa0231fb3dbe897c130912"} Nov 22 13:11:30 crc kubenswrapper[4772]: I1122 13:11:30.035358 4772 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0df48a54e2c3027bcfa75e8d56866cbd4c7b0e97f3aa0231fb3dbe897c130912" Nov 22 13:11:30 crc kubenswrapper[4772]: I1122 13:11:30.035375 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs" Nov 22 13:11:30 crc kubenswrapper[4772]: E1122 13:11:30.281816 4772 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod819a5030_8de8_4772_86bf_9d6fb6f8de4e.slice/crio-0df48a54e2c3027bcfa75e8d56866cbd4c7b0e97f3aa0231fb3dbe897c130912\": RecentStats: unable to find data in memory cache]" Nov 22 13:11:31 crc kubenswrapper[4772]: I1122 13:11:31.533229 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 13:11:31 crc kubenswrapper[4772]: I1122 13:11:31.533597 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 13:11:57 crc kubenswrapper[4772]: E1122 13:11:57.739745 4772 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.150:33978->38.102.83.150:44525: write tcp 38.102.83.150:33978->38.102.83.150:44525: write: broken pipe Nov 22 13:12:01 crc kubenswrapper[4772]: I1122 13:12:01.533482 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 13:12:01 crc kubenswrapper[4772]: I1122 13:12:01.536698 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 13:12:31 crc kubenswrapper[4772]: I1122 13:12:31.533040 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 13:12:31 crc kubenswrapper[4772]: I1122 13:12:31.533535 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 13:12:31 crc kubenswrapper[4772]: I1122 13:12:31.533580 4772 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" Nov 22 13:12:31 crc kubenswrapper[4772]: I1122 13:12:31.534265 
4772 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0f0ceb8d50c48253b7b22a7a8a271db084d9899785bb62159676dccb6db6f1ba"} pod="openshift-machine-config-operator/machine-config-daemon-wwshd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 13:12:31 crc kubenswrapper[4772]: I1122 13:12:31.534386 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" containerID="cri-o://0f0ceb8d50c48253b7b22a7a8a271db084d9899785bb62159676dccb6db6f1ba" gracePeriod=600 Nov 22 13:12:31 crc kubenswrapper[4772]: E1122 13:12:31.665333 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 13:12:31 crc kubenswrapper[4772]: I1122 13:12:31.800329 4772 generic.go:334] "Generic (PLEG): container finished" podID="2386c238-461f-4956-940f-ac3c26eb052e" containerID="0f0ceb8d50c48253b7b22a7a8a271db084d9899785bb62159676dccb6db6f1ba" exitCode=0 Nov 22 13:12:31 crc kubenswrapper[4772]: I1122 13:12:31.800391 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" event={"ID":"2386c238-461f-4956-940f-ac3c26eb052e","Type":"ContainerDied","Data":"0f0ceb8d50c48253b7b22a7a8a271db084d9899785bb62159676dccb6db6f1ba"} Nov 22 13:12:31 crc kubenswrapper[4772]: I1122 13:12:31.801005 4772 scope.go:117] "RemoveContainer" containerID="2ea72bc241df07d653f70fa21c2180bbaa140e9970b64a91f86789fd5c03b741" Nov 22 13:12:31 crc kubenswrapper[4772]: I1122 13:12:31.801675 4772 scope.go:117] "RemoveContainer" containerID="0f0ceb8d50c48253b7b22a7a8a271db084d9899785bb62159676dccb6db6f1ba" Nov 22 13:12:31 crc kubenswrapper[4772]: E1122 13:12:31.801934 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 13:12:46 crc kubenswrapper[4772]: I1122 13:12:46.413789 4772 scope.go:117] "RemoveContainer" containerID="0f0ceb8d50c48253b7b22a7a8a271db084d9899785bb62159676dccb6db6f1ba" Nov 22 13:12:46 crc kubenswrapper[4772]: E1122 13:12:46.415113 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 13:12:55 crc kubenswrapper[4772]: I1122 13:12:55.522476 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5n9qq"] Nov 22 13:12:55 
crc kubenswrapper[4772]: E1122 13:12:55.523942 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="819a5030-8de8-4772-86bf-9d6fb6f8de4e" containerName="nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1" Nov 22 13:12:55 crc kubenswrapper[4772]: I1122 13:12:55.523960 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="819a5030-8de8-4772-86bf-9d6fb6f8de4e" containerName="nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1" Nov 22 13:12:55 crc kubenswrapper[4772]: I1122 13:12:55.524233 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="819a5030-8de8-4772-86bf-9d6fb6f8de4e" containerName="nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1" Nov 22 13:12:55 crc kubenswrapper[4772]: I1122 13:12:55.525762 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5n9qq" Nov 22 13:12:55 crc kubenswrapper[4772]: I1122 13:12:55.558770 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5n9qq"] Nov 22 13:12:55 crc kubenswrapper[4772]: I1122 13:12:55.661373 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e2f5e06-f970-4bb4-90e8-3f29fe25c976-utilities\") pod \"redhat-operators-5n9qq\" (UID: \"0e2f5e06-f970-4bb4-90e8-3f29fe25c976\") " pod="openshift-marketplace/redhat-operators-5n9qq" Nov 22 13:12:55 crc kubenswrapper[4772]: I1122 13:12:55.661498 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7b9lc\" (UniqueName: \"kubernetes.io/projected/0e2f5e06-f970-4bb4-90e8-3f29fe25c976-kube-api-access-7b9lc\") pod \"redhat-operators-5n9qq\" (UID: \"0e2f5e06-f970-4bb4-90e8-3f29fe25c976\") " pod="openshift-marketplace/redhat-operators-5n9qq" Nov 22 13:12:55 crc kubenswrapper[4772]: I1122 13:12:55.661578 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e2f5e06-f970-4bb4-90e8-3f29fe25c976-catalog-content\") pod \"redhat-operators-5n9qq\" (UID: \"0e2f5e06-f970-4bb4-90e8-3f29fe25c976\") " pod="openshift-marketplace/redhat-operators-5n9qq" Nov 22 13:12:55 crc kubenswrapper[4772]: I1122 13:12:55.763412 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e2f5e06-f970-4bb4-90e8-3f29fe25c976-catalog-content\") pod \"redhat-operators-5n9qq\" (UID: \"0e2f5e06-f970-4bb4-90e8-3f29fe25c976\") " pod="openshift-marketplace/redhat-operators-5n9qq" Nov 22 13:12:55 crc kubenswrapper[4772]: I1122 13:12:55.763892 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e2f5e06-f970-4bb4-90e8-3f29fe25c976-utilities\") pod \"redhat-operators-5n9qq\" (UID: \"0e2f5e06-f970-4bb4-90e8-3f29fe25c976\") " pod="openshift-marketplace/redhat-operators-5n9qq" Nov 22 13:12:55 crc kubenswrapper[4772]: I1122 13:12:55.764206 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7b9lc\" (UniqueName: \"kubernetes.io/projected/0e2f5e06-f970-4bb4-90e8-3f29fe25c976-kube-api-access-7b9lc\") pod \"redhat-operators-5n9qq\" (UID: \"0e2f5e06-f970-4bb4-90e8-3f29fe25c976\") " pod="openshift-marketplace/redhat-operators-5n9qq" Nov 22 13:12:55 crc kubenswrapper[4772]: I1122 13:12:55.764288 4772 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e2f5e06-f970-4bb4-90e8-3f29fe25c976-catalog-content\") pod \"redhat-operators-5n9qq\" (UID: \"0e2f5e06-f970-4bb4-90e8-3f29fe25c976\") " pod="openshift-marketplace/redhat-operators-5n9qq" Nov 22 13:12:55 crc kubenswrapper[4772]: I1122 13:12:55.764374 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e2f5e06-f970-4bb4-90e8-3f29fe25c976-utilities\") pod \"redhat-operators-5n9qq\" (UID: \"0e2f5e06-f970-4bb4-90e8-3f29fe25c976\") " pod="openshift-marketplace/redhat-operators-5n9qq" Nov 22 13:12:55 crc kubenswrapper[4772]: I1122 13:12:55.792038 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7b9lc\" (UniqueName: \"kubernetes.io/projected/0e2f5e06-f970-4bb4-90e8-3f29fe25c976-kube-api-access-7b9lc\") pod \"redhat-operators-5n9qq\" (UID: \"0e2f5e06-f970-4bb4-90e8-3f29fe25c976\") " pod="openshift-marketplace/redhat-operators-5n9qq" Nov 22 13:12:55 crc kubenswrapper[4772]: I1122 13:12:55.855528 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5n9qq" Nov 22 13:12:56 crc kubenswrapper[4772]: I1122 13:12:56.389586 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5n9qq"] Nov 22 13:12:57 crc kubenswrapper[4772]: I1122 13:12:57.110797 4772 generic.go:334] "Generic (PLEG): container finished" podID="0e2f5e06-f970-4bb4-90e8-3f29fe25c976" containerID="a92cea6d52f67d1f18b923d31aa792f2e6fa17c1755d54c91f6b92eda26f17c0" exitCode=0 Nov 22 13:12:57 crc kubenswrapper[4772]: I1122 13:12:57.110914 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5n9qq" event={"ID":"0e2f5e06-f970-4bb4-90e8-3f29fe25c976","Type":"ContainerDied","Data":"a92cea6d52f67d1f18b923d31aa792f2e6fa17c1755d54c91f6b92eda26f17c0"} Nov 22 13:12:57 crc kubenswrapper[4772]: I1122 13:12:57.112321 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5n9qq" event={"ID":"0e2f5e06-f970-4bb4-90e8-3f29fe25c976","Type":"ContainerStarted","Data":"5dd0996e4655c0450161adf1b6905bf972ae10b139653537742f6a484c87a0c8"} Nov 22 13:12:57 crc kubenswrapper[4772]: I1122 13:12:57.113466 4772 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 13:12:57 crc kubenswrapper[4772]: I1122 13:12:57.413809 4772 scope.go:117] "RemoveContainer" containerID="0f0ceb8d50c48253b7b22a7a8a271db084d9899785bb62159676dccb6db6f1ba" Nov 22 13:12:57 crc kubenswrapper[4772]: E1122 13:12:57.414472 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 13:12:58 crc kubenswrapper[4772]: I1122 13:12:58.125110 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5n9qq" event={"ID":"0e2f5e06-f970-4bb4-90e8-3f29fe25c976","Type":"ContainerStarted","Data":"1a76d0c33d6ac027936a1214f6cfe1f6ebbac600925e8aadc4d996c5c92027d4"} Nov 22 13:13:04 crc kubenswrapper[4772]: I1122 
13:13:04.212767 4772 generic.go:334] "Generic (PLEG): container finished" podID="0e2f5e06-f970-4bb4-90e8-3f29fe25c976" containerID="1a76d0c33d6ac027936a1214f6cfe1f6ebbac600925e8aadc4d996c5c92027d4" exitCode=0 Nov 22 13:13:04 crc kubenswrapper[4772]: I1122 13:13:04.212909 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5n9qq" event={"ID":"0e2f5e06-f970-4bb4-90e8-3f29fe25c976","Type":"ContainerDied","Data":"1a76d0c33d6ac027936a1214f6cfe1f6ebbac600925e8aadc4d996c5c92027d4"} Nov 22 13:13:05 crc kubenswrapper[4772]: I1122 13:13:05.234062 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5n9qq" event={"ID":"0e2f5e06-f970-4bb4-90e8-3f29fe25c976","Type":"ContainerStarted","Data":"1011e950daf5d0032e0f0188ae886c331fb43b6fc827d373c96796bb1ebf0418"} Nov 22 13:13:05 crc kubenswrapper[4772]: I1122 13:13:05.261972 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5n9qq" podStartSLOduration=2.7385563790000003 podStartE2EDuration="10.261916133s" podCreationTimestamp="2025-11-22 13:12:55 +0000 UTC" firstStartedPulling="2025-11-22 13:12:57.113200302 +0000 UTC m=+9297.352644796" lastFinishedPulling="2025-11-22 13:13:04.636560046 +0000 UTC m=+9304.876004550" observedRunningTime="2025-11-22 13:13:05.258585121 +0000 UTC m=+9305.498029645" watchObservedRunningTime="2025-11-22 13:13:05.261916133 +0000 UTC m=+9305.501360617" Nov 22 13:13:05 crc kubenswrapper[4772]: I1122 13:13:05.856949 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5n9qq" Nov 22 13:13:05 crc kubenswrapper[4772]: I1122 13:13:05.857013 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5n9qq" Nov 22 13:13:06 crc kubenswrapper[4772]: I1122 13:13:06.914345 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5n9qq" podUID="0e2f5e06-f970-4bb4-90e8-3f29fe25c976" containerName="registry-server" probeResult="failure" output=< Nov 22 13:13:06 crc kubenswrapper[4772]: timeout: failed to connect service ":50051" within 1s Nov 22 13:13:06 crc kubenswrapper[4772]: > Nov 22 13:13:10 crc kubenswrapper[4772]: I1122 13:13:10.413536 4772 scope.go:117] "RemoveContainer" containerID="0f0ceb8d50c48253b7b22a7a8a271db084d9899785bb62159676dccb6db6f1ba" Nov 22 13:13:10 crc kubenswrapper[4772]: E1122 13:13:10.414375 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 13:13:16 crc kubenswrapper[4772]: I1122 13:13:16.989212 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5n9qq" podUID="0e2f5e06-f970-4bb4-90e8-3f29fe25c976" containerName="registry-server" probeResult="failure" output=< Nov 22 13:13:16 crc kubenswrapper[4772]: timeout: failed to connect service ":50051" within 1s Nov 22 13:13:16 crc kubenswrapper[4772]: > Nov 22 13:13:21 crc kubenswrapper[4772]: I1122 13:13:21.424203 4772 scope.go:117] "RemoveContainer" containerID="0f0ceb8d50c48253b7b22a7a8a271db084d9899785bb62159676dccb6db6f1ba" Nov 22 
13:13:21 crc kubenswrapper[4772]: E1122 13:13:21.425117 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 13:13:26 crc kubenswrapper[4772]: I1122 13:13:26.917688 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5n9qq" podUID="0e2f5e06-f970-4bb4-90e8-3f29fe25c976" containerName="registry-server" probeResult="failure" output=< Nov 22 13:13:26 crc kubenswrapper[4772]: timeout: failed to connect service ":50051" within 1s Nov 22 13:13:26 crc kubenswrapper[4772]: > Nov 22 13:13:32 crc kubenswrapper[4772]: I1122 13:13:32.437936 4772 scope.go:117] "RemoveContainer" containerID="0f0ceb8d50c48253b7b22a7a8a271db084d9899785bb62159676dccb6db6f1ba" Nov 22 13:13:32 crc kubenswrapper[4772]: E1122 13:13:32.438911 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 13:13:35 crc kubenswrapper[4772]: I1122 13:13:35.916595 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5n9qq" Nov 22 13:13:35 crc kubenswrapper[4772]: I1122 13:13:35.991018 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5n9qq" Nov 22 13:13:36 crc kubenswrapper[4772]: I1122 13:13:36.160252 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5n9qq"] Nov 22 13:13:37 crc kubenswrapper[4772]: I1122 13:13:37.653636 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5n9qq" podUID="0e2f5e06-f970-4bb4-90e8-3f29fe25c976" containerName="registry-server" containerID="cri-o://1011e950daf5d0032e0f0188ae886c331fb43b6fc827d373c96796bb1ebf0418" gracePeriod=2 Nov 22 13:13:38 crc kubenswrapper[4772]: I1122 13:13:38.227281 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5n9qq" Nov 22 13:13:38 crc kubenswrapper[4772]: I1122 13:13:38.255144 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7b9lc\" (UniqueName: \"kubernetes.io/projected/0e2f5e06-f970-4bb4-90e8-3f29fe25c976-kube-api-access-7b9lc\") pod \"0e2f5e06-f970-4bb4-90e8-3f29fe25c976\" (UID: \"0e2f5e06-f970-4bb4-90e8-3f29fe25c976\") " Nov 22 13:13:38 crc kubenswrapper[4772]: I1122 13:13:38.255395 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e2f5e06-f970-4bb4-90e8-3f29fe25c976-utilities\") pod \"0e2f5e06-f970-4bb4-90e8-3f29fe25c976\" (UID: \"0e2f5e06-f970-4bb4-90e8-3f29fe25c976\") " Nov 22 13:13:38 crc kubenswrapper[4772]: I1122 13:13:38.255564 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e2f5e06-f970-4bb4-90e8-3f29fe25c976-catalog-content\") pod \"0e2f5e06-f970-4bb4-90e8-3f29fe25c976\" (UID: \"0e2f5e06-f970-4bb4-90e8-3f29fe25c976\") " Nov 22 13:13:38 crc kubenswrapper[4772]: I1122 13:13:38.256567 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e2f5e06-f970-4bb4-90e8-3f29fe25c976-utilities" (OuterVolumeSpecName: "utilities") pod "0e2f5e06-f970-4bb4-90e8-3f29fe25c976" (UID: "0e2f5e06-f970-4bb4-90e8-3f29fe25c976"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 13:13:38 crc kubenswrapper[4772]: I1122 13:13:38.263222 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e2f5e06-f970-4bb4-90e8-3f29fe25c976-kube-api-access-7b9lc" (OuterVolumeSpecName: "kube-api-access-7b9lc") pod "0e2f5e06-f970-4bb4-90e8-3f29fe25c976" (UID: "0e2f5e06-f970-4bb4-90e8-3f29fe25c976"). InnerVolumeSpecName "kube-api-access-7b9lc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 13:13:38 crc kubenswrapper[4772]: I1122 13:13:38.357969 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7b9lc\" (UniqueName: \"kubernetes.io/projected/0e2f5e06-f970-4bb4-90e8-3f29fe25c976-kube-api-access-7b9lc\") on node \"crc\" DevicePath \"\"" Nov 22 13:13:38 crc kubenswrapper[4772]: I1122 13:13:38.357999 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e2f5e06-f970-4bb4-90e8-3f29fe25c976-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 13:13:38 crc kubenswrapper[4772]: I1122 13:13:38.364330 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e2f5e06-f970-4bb4-90e8-3f29fe25c976-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0e2f5e06-f970-4bb4-90e8-3f29fe25c976" (UID: "0e2f5e06-f970-4bb4-90e8-3f29fe25c976"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 13:13:38 crc kubenswrapper[4772]: I1122 13:13:38.460435 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e2f5e06-f970-4bb4-90e8-3f29fe25c976-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 13:13:38 crc kubenswrapper[4772]: I1122 13:13:38.670452 4772 generic.go:334] "Generic (PLEG): container finished" podID="0e2f5e06-f970-4bb4-90e8-3f29fe25c976" containerID="1011e950daf5d0032e0f0188ae886c331fb43b6fc827d373c96796bb1ebf0418" exitCode=0 Nov 22 13:13:38 crc kubenswrapper[4772]: I1122 13:13:38.670496 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5n9qq" event={"ID":"0e2f5e06-f970-4bb4-90e8-3f29fe25c976","Type":"ContainerDied","Data":"1011e950daf5d0032e0f0188ae886c331fb43b6fc827d373c96796bb1ebf0418"} Nov 22 13:13:38 crc kubenswrapper[4772]: I1122 13:13:38.670529 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5n9qq" event={"ID":"0e2f5e06-f970-4bb4-90e8-3f29fe25c976","Type":"ContainerDied","Data":"5dd0996e4655c0450161adf1b6905bf972ae10b139653537742f6a484c87a0c8"} Nov 22 13:13:38 crc kubenswrapper[4772]: I1122 13:13:38.670555 4772 scope.go:117] "RemoveContainer" containerID="1011e950daf5d0032e0f0188ae886c331fb43b6fc827d373c96796bb1ebf0418" Nov 22 13:13:38 crc kubenswrapper[4772]: I1122 13:13:38.671942 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5n9qq" Nov 22 13:13:38 crc kubenswrapper[4772]: I1122 13:13:38.706693 4772 scope.go:117] "RemoveContainer" containerID="1a76d0c33d6ac027936a1214f6cfe1f6ebbac600925e8aadc4d996c5c92027d4" Nov 22 13:13:38 crc kubenswrapper[4772]: I1122 13:13:38.725981 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5n9qq"] Nov 22 13:13:38 crc kubenswrapper[4772]: I1122 13:13:38.733739 4772 scope.go:117] "RemoveContainer" containerID="a92cea6d52f67d1f18b923d31aa792f2e6fa17c1755d54c91f6b92eda26f17c0" Nov 22 13:13:38 crc kubenswrapper[4772]: I1122 13:13:38.738983 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5n9qq"] Nov 22 13:13:38 crc kubenswrapper[4772]: I1122 13:13:38.807402 4772 scope.go:117] "RemoveContainer" containerID="1011e950daf5d0032e0f0188ae886c331fb43b6fc827d373c96796bb1ebf0418" Nov 22 13:13:38 crc kubenswrapper[4772]: E1122 13:13:38.807868 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1011e950daf5d0032e0f0188ae886c331fb43b6fc827d373c96796bb1ebf0418\": container with ID starting with 1011e950daf5d0032e0f0188ae886c331fb43b6fc827d373c96796bb1ebf0418 not found: ID does not exist" containerID="1011e950daf5d0032e0f0188ae886c331fb43b6fc827d373c96796bb1ebf0418" Nov 22 13:13:38 crc kubenswrapper[4772]: I1122 13:13:38.807899 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1011e950daf5d0032e0f0188ae886c331fb43b6fc827d373c96796bb1ebf0418"} err="failed to get container status \"1011e950daf5d0032e0f0188ae886c331fb43b6fc827d373c96796bb1ebf0418\": rpc error: code = NotFound desc = could not find container \"1011e950daf5d0032e0f0188ae886c331fb43b6fc827d373c96796bb1ebf0418\": container with ID starting with 1011e950daf5d0032e0f0188ae886c331fb43b6fc827d373c96796bb1ebf0418 not found: ID does not exist" Nov 22 13:13:38 crc 
kubenswrapper[4772]: I1122 13:13:38.807924 4772 scope.go:117] "RemoveContainer" containerID="1a76d0c33d6ac027936a1214f6cfe1f6ebbac600925e8aadc4d996c5c92027d4" Nov 22 13:13:38 crc kubenswrapper[4772]: E1122 13:13:38.808342 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a76d0c33d6ac027936a1214f6cfe1f6ebbac600925e8aadc4d996c5c92027d4\": container with ID starting with 1a76d0c33d6ac027936a1214f6cfe1f6ebbac600925e8aadc4d996c5c92027d4 not found: ID does not exist" containerID="1a76d0c33d6ac027936a1214f6cfe1f6ebbac600925e8aadc4d996c5c92027d4" Nov 22 13:13:38 crc kubenswrapper[4772]: I1122 13:13:38.808364 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a76d0c33d6ac027936a1214f6cfe1f6ebbac600925e8aadc4d996c5c92027d4"} err="failed to get container status \"1a76d0c33d6ac027936a1214f6cfe1f6ebbac600925e8aadc4d996c5c92027d4\": rpc error: code = NotFound desc = could not find container \"1a76d0c33d6ac027936a1214f6cfe1f6ebbac600925e8aadc4d996c5c92027d4\": container with ID starting with 1a76d0c33d6ac027936a1214f6cfe1f6ebbac600925e8aadc4d996c5c92027d4 not found: ID does not exist" Nov 22 13:13:38 crc kubenswrapper[4772]: I1122 13:13:38.808378 4772 scope.go:117] "RemoveContainer" containerID="a92cea6d52f67d1f18b923d31aa792f2e6fa17c1755d54c91f6b92eda26f17c0" Nov 22 13:13:38 crc kubenswrapper[4772]: E1122 13:13:38.808596 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a92cea6d52f67d1f18b923d31aa792f2e6fa17c1755d54c91f6b92eda26f17c0\": container with ID starting with a92cea6d52f67d1f18b923d31aa792f2e6fa17c1755d54c91f6b92eda26f17c0 not found: ID does not exist" containerID="a92cea6d52f67d1f18b923d31aa792f2e6fa17c1755d54c91f6b92eda26f17c0" Nov 22 13:13:38 crc kubenswrapper[4772]: I1122 13:13:38.808623 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a92cea6d52f67d1f18b923d31aa792f2e6fa17c1755d54c91f6b92eda26f17c0"} err="failed to get container status \"a92cea6d52f67d1f18b923d31aa792f2e6fa17c1755d54c91f6b92eda26f17c0\": rpc error: code = NotFound desc = could not find container \"a92cea6d52f67d1f18b923d31aa792f2e6fa17c1755d54c91f6b92eda26f17c0\": container with ID starting with a92cea6d52f67d1f18b923d31aa792f2e6fa17c1755d54c91f6b92eda26f17c0 not found: ID does not exist" Nov 22 13:13:39 crc kubenswrapper[4772]: I1122 13:13:39.428292 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e2f5e06-f970-4bb4-90e8-3f29fe25c976" path="/var/lib/kubelet/pods/0e2f5e06-f970-4bb4-90e8-3f29fe25c976/volumes" Nov 22 13:13:44 crc kubenswrapper[4772]: I1122 13:13:44.435174 4772 scope.go:117] "RemoveContainer" containerID="0f0ceb8d50c48253b7b22a7a8a271db084d9899785bb62159676dccb6db6f1ba" Nov 22 13:13:44 crc kubenswrapper[4772]: E1122 13:13:44.436404 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 13:13:55 crc kubenswrapper[4772]: I1122 13:13:55.415793 4772 scope.go:117] "RemoveContainer" containerID="0f0ceb8d50c48253b7b22a7a8a271db084d9899785bb62159676dccb6db6f1ba" 
Nov 22 13:13:55 crc kubenswrapper[4772]: E1122 13:13:55.418218 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 13:13:58 crc kubenswrapper[4772]: I1122 13:13:58.228947 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-27gls/must-gather-4xhjb"] Nov 22 13:13:58 crc kubenswrapper[4772]: E1122 13:13:58.230191 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e2f5e06-f970-4bb4-90e8-3f29fe25c976" containerName="registry-server" Nov 22 13:13:58 crc kubenswrapper[4772]: I1122 13:13:58.230205 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e2f5e06-f970-4bb4-90e8-3f29fe25c976" containerName="registry-server" Nov 22 13:13:58 crc kubenswrapper[4772]: E1122 13:13:58.230217 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e2f5e06-f970-4bb4-90e8-3f29fe25c976" containerName="extract-utilities" Nov 22 13:13:58 crc kubenswrapper[4772]: I1122 13:13:58.230226 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e2f5e06-f970-4bb4-90e8-3f29fe25c976" containerName="extract-utilities" Nov 22 13:13:58 crc kubenswrapper[4772]: E1122 13:13:58.230253 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e2f5e06-f970-4bb4-90e8-3f29fe25c976" containerName="extract-content" Nov 22 13:13:58 crc kubenswrapper[4772]: I1122 13:13:58.230261 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e2f5e06-f970-4bb4-90e8-3f29fe25c976" containerName="extract-content" Nov 22 13:13:58 crc kubenswrapper[4772]: I1122 13:13:58.230482 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e2f5e06-f970-4bb4-90e8-3f29fe25c976" containerName="registry-server" Nov 22 13:13:58 crc kubenswrapper[4772]: I1122 13:13:58.231635 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-27gls/must-gather-4xhjb" Nov 22 13:13:58 crc kubenswrapper[4772]: I1122 13:13:58.234728 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-27gls"/"openshift-service-ca.crt" Nov 22 13:13:58 crc kubenswrapper[4772]: I1122 13:13:58.240508 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-27gls"/"kube-root-ca.crt" Nov 22 13:13:58 crc kubenswrapper[4772]: I1122 13:13:58.248738 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-27gls"/"default-dockercfg-2vx2r" Nov 22 13:13:58 crc kubenswrapper[4772]: I1122 13:13:58.250187 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-27gls/must-gather-4xhjb"] Nov 22 13:13:58 crc kubenswrapper[4772]: I1122 13:13:58.254932 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46gvl\" (UniqueName: \"kubernetes.io/projected/3eaf5b5d-ff81-49f8-accc-2cce8543916a-kube-api-access-46gvl\") pod \"must-gather-4xhjb\" (UID: \"3eaf5b5d-ff81-49f8-accc-2cce8543916a\") " pod="openshift-must-gather-27gls/must-gather-4xhjb" Nov 22 13:13:58 crc kubenswrapper[4772]: I1122 13:13:58.255174 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3eaf5b5d-ff81-49f8-accc-2cce8543916a-must-gather-output\") pod \"must-gather-4xhjb\" (UID: \"3eaf5b5d-ff81-49f8-accc-2cce8543916a\") " pod="openshift-must-gather-27gls/must-gather-4xhjb" Nov 22 13:13:58 crc kubenswrapper[4772]: I1122 13:13:58.357471 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3eaf5b5d-ff81-49f8-accc-2cce8543916a-must-gather-output\") pod \"must-gather-4xhjb\" (UID: \"3eaf5b5d-ff81-49f8-accc-2cce8543916a\") " pod="openshift-must-gather-27gls/must-gather-4xhjb" Nov 22 13:13:58 crc kubenswrapper[4772]: I1122 13:13:58.357598 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46gvl\" (UniqueName: \"kubernetes.io/projected/3eaf5b5d-ff81-49f8-accc-2cce8543916a-kube-api-access-46gvl\") pod \"must-gather-4xhjb\" (UID: \"3eaf5b5d-ff81-49f8-accc-2cce8543916a\") " pod="openshift-must-gather-27gls/must-gather-4xhjb" Nov 22 13:13:58 crc kubenswrapper[4772]: I1122 13:13:58.357986 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3eaf5b5d-ff81-49f8-accc-2cce8543916a-must-gather-output\") pod \"must-gather-4xhjb\" (UID: \"3eaf5b5d-ff81-49f8-accc-2cce8543916a\") " pod="openshift-must-gather-27gls/must-gather-4xhjb" Nov 22 13:13:58 crc kubenswrapper[4772]: I1122 13:13:58.379124 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46gvl\" (UniqueName: \"kubernetes.io/projected/3eaf5b5d-ff81-49f8-accc-2cce8543916a-kube-api-access-46gvl\") pod \"must-gather-4xhjb\" (UID: \"3eaf5b5d-ff81-49f8-accc-2cce8543916a\") " pod="openshift-must-gather-27gls/must-gather-4xhjb" Nov 22 13:13:58 crc kubenswrapper[4772]: I1122 13:13:58.551082 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-27gls/must-gather-4xhjb" Nov 22 13:13:59 crc kubenswrapper[4772]: I1122 13:13:59.070398 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-27gls/must-gather-4xhjb"] Nov 22 13:13:59 crc kubenswrapper[4772]: I1122 13:13:59.953266 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-27gls/must-gather-4xhjb" event={"ID":"3eaf5b5d-ff81-49f8-accc-2cce8543916a","Type":"ContainerStarted","Data":"85e5fac665ebcc12a87e6672ef85efb08e5de73fb2438edf35ba342bb3e33ed2"} Nov 22 13:14:07 crc kubenswrapper[4772]: I1122 13:14:07.036684 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-27gls/must-gather-4xhjb" event={"ID":"3eaf5b5d-ff81-49f8-accc-2cce8543916a","Type":"ContainerStarted","Data":"ef525f756473a5c187f385f77b43956dba968bd231a787f9e3b9b047b53cbe36"} Nov 22 13:14:07 crc kubenswrapper[4772]: I1122 13:14:07.037687 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-27gls/must-gather-4xhjb" event={"ID":"3eaf5b5d-ff81-49f8-accc-2cce8543916a","Type":"ContainerStarted","Data":"96e1f97afdb45103510a6a1f9987f0aa0dd48f741b7c0d1a351a69343782df81"} Nov 22 13:14:07 crc kubenswrapper[4772]: I1122 13:14:07.054307 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-27gls/must-gather-4xhjb" podStartSLOduration=1.800967051 podStartE2EDuration="9.054283953s" podCreationTimestamp="2025-11-22 13:13:58 +0000 UTC" firstStartedPulling="2025-11-22 13:13:59.066722835 +0000 UTC m=+9359.306167329" lastFinishedPulling="2025-11-22 13:14:06.320039737 +0000 UTC m=+9366.559484231" observedRunningTime="2025-11-22 13:14:07.050955141 +0000 UTC m=+9367.290399665" watchObservedRunningTime="2025-11-22 13:14:07.054283953 +0000 UTC m=+9367.293728447" Nov 22 13:14:10 crc kubenswrapper[4772]: I1122 13:14:10.414997 4772 scope.go:117] "RemoveContainer" containerID="0f0ceb8d50c48253b7b22a7a8a271db084d9899785bb62159676dccb6db6f1ba" Nov 22 13:14:10 crc kubenswrapper[4772]: E1122 13:14:10.416913 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 13:14:11 crc kubenswrapper[4772]: I1122 13:14:11.855214 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-27gls/crc-debug-xqk9v"] Nov 22 13:14:11 crc kubenswrapper[4772]: I1122 13:14:11.857291 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-27gls/crc-debug-xqk9v" Nov 22 13:14:11 crc kubenswrapper[4772]: I1122 13:14:11.936627 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ce98e5cb-61a5-470f-8d88-e49f44abcb1b-host\") pod \"crc-debug-xqk9v\" (UID: \"ce98e5cb-61a5-470f-8d88-e49f44abcb1b\") " pod="openshift-must-gather-27gls/crc-debug-xqk9v" Nov 22 13:14:11 crc kubenswrapper[4772]: I1122 13:14:11.936699 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nd7gt\" (UniqueName: \"kubernetes.io/projected/ce98e5cb-61a5-470f-8d88-e49f44abcb1b-kube-api-access-nd7gt\") pod \"crc-debug-xqk9v\" (UID: \"ce98e5cb-61a5-470f-8d88-e49f44abcb1b\") " pod="openshift-must-gather-27gls/crc-debug-xqk9v" Nov 22 13:14:12 crc kubenswrapper[4772]: I1122 13:14:12.039677 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ce98e5cb-61a5-470f-8d88-e49f44abcb1b-host\") pod \"crc-debug-xqk9v\" (UID: \"ce98e5cb-61a5-470f-8d88-e49f44abcb1b\") " pod="openshift-must-gather-27gls/crc-debug-xqk9v" Nov 22 13:14:12 crc kubenswrapper[4772]: I1122 13:14:12.039875 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ce98e5cb-61a5-470f-8d88-e49f44abcb1b-host\") pod \"crc-debug-xqk9v\" (UID: \"ce98e5cb-61a5-470f-8d88-e49f44abcb1b\") " pod="openshift-must-gather-27gls/crc-debug-xqk9v" Nov 22 13:14:12 crc kubenswrapper[4772]: I1122 13:14:12.040231 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nd7gt\" (UniqueName: \"kubernetes.io/projected/ce98e5cb-61a5-470f-8d88-e49f44abcb1b-kube-api-access-nd7gt\") pod \"crc-debug-xqk9v\" (UID: \"ce98e5cb-61a5-470f-8d88-e49f44abcb1b\") " pod="openshift-must-gather-27gls/crc-debug-xqk9v" Nov 22 13:14:12 crc kubenswrapper[4772]: I1122 13:14:12.074817 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nd7gt\" (UniqueName: \"kubernetes.io/projected/ce98e5cb-61a5-470f-8d88-e49f44abcb1b-kube-api-access-nd7gt\") pod \"crc-debug-xqk9v\" (UID: \"ce98e5cb-61a5-470f-8d88-e49f44abcb1b\") " pod="openshift-must-gather-27gls/crc-debug-xqk9v" Nov 22 13:14:12 crc kubenswrapper[4772]: I1122 13:14:12.177432 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-27gls/crc-debug-xqk9v" Nov 22 13:14:12 crc kubenswrapper[4772]: W1122 13:14:12.229624 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce98e5cb_61a5_470f_8d88_e49f44abcb1b.slice/crio-18375d3ca075bcd5bb1ebcff8ee6d40e7d20f0782769704a0ec62f48dd04dd22 WatchSource:0}: Error finding container 18375d3ca075bcd5bb1ebcff8ee6d40e7d20f0782769704a0ec62f48dd04dd22: Status 404 returned error can't find the container with id 18375d3ca075bcd5bb1ebcff8ee6d40e7d20f0782769704a0ec62f48dd04dd22 Nov 22 13:14:13 crc kubenswrapper[4772]: I1122 13:14:13.122473 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-27gls/crc-debug-xqk9v" event={"ID":"ce98e5cb-61a5-470f-8d88-e49f44abcb1b","Type":"ContainerStarted","Data":"18375d3ca075bcd5bb1ebcff8ee6d40e7d20f0782769704a0ec62f48dd04dd22"} Nov 22 13:14:22 crc kubenswrapper[4772]: I1122 13:14:22.415576 4772 scope.go:117] "RemoveContainer" containerID="0f0ceb8d50c48253b7b22a7a8a271db084d9899785bb62159676dccb6db6f1ba" Nov 22 13:14:22 crc kubenswrapper[4772]: E1122 13:14:22.416931 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 13:14:27 crc kubenswrapper[4772]: I1122 13:14:27.321802 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-27gls/crc-debug-xqk9v" event={"ID":"ce98e5cb-61a5-470f-8d88-e49f44abcb1b","Type":"ContainerStarted","Data":"48a178050cd9ebf6ea5cc08e49745a20fd7f82cd43fe8053a7655ef4cbd3f684"} Nov 22 13:14:27 crc kubenswrapper[4772]: I1122 13:14:27.345780 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-27gls/crc-debug-xqk9v" podStartSLOduration=1.642551792 podStartE2EDuration="16.345758467s" podCreationTimestamp="2025-11-22 13:14:11 +0000 UTC" firstStartedPulling="2025-11-22 13:14:12.232677004 +0000 UTC m=+9372.472121498" lastFinishedPulling="2025-11-22 13:14:26.935883669 +0000 UTC m=+9387.175328173" observedRunningTime="2025-11-22 13:14:27.342751272 +0000 UTC m=+9387.582195766" watchObservedRunningTime="2025-11-22 13:14:27.345758467 +0000 UTC m=+9387.585202961" Nov 22 13:14:34 crc kubenswrapper[4772]: I1122 13:14:34.413766 4772 scope.go:117] "RemoveContainer" containerID="0f0ceb8d50c48253b7b22a7a8a271db084d9899785bb62159676dccb6db6f1ba" Nov 22 13:14:34 crc kubenswrapper[4772]: E1122 13:14:34.414561 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 13:14:45 crc kubenswrapper[4772]: I1122 13:14:45.414165 4772 scope.go:117] "RemoveContainer" containerID="0f0ceb8d50c48253b7b22a7a8a271db084d9899785bb62159676dccb6db6f1ba" Nov 22 13:14:45 crc kubenswrapper[4772]: E1122 13:14:45.415097 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 13:14:57 crc kubenswrapper[4772]: I1122 13:14:57.708954 4772 generic.go:334] "Generic (PLEG): container finished" podID="ce98e5cb-61a5-470f-8d88-e49f44abcb1b" containerID="48a178050cd9ebf6ea5cc08e49745a20fd7f82cd43fe8053a7655ef4cbd3f684" exitCode=0 Nov 22 13:14:57 crc kubenswrapper[4772]: I1122 13:14:57.709210 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-27gls/crc-debug-xqk9v" event={"ID":"ce98e5cb-61a5-470f-8d88-e49f44abcb1b","Type":"ContainerDied","Data":"48a178050cd9ebf6ea5cc08e49745a20fd7f82cd43fe8053a7655ef4cbd3f684"} Nov 22 13:14:58 crc kubenswrapper[4772]: I1122 13:14:58.873271 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-27gls/crc-debug-xqk9v" Nov 22 13:14:58 crc kubenswrapper[4772]: I1122 13:14:58.911906 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-27gls/crc-debug-xqk9v"] Nov 22 13:14:58 crc kubenswrapper[4772]: I1122 13:14:58.924380 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-27gls/crc-debug-xqk9v"] Nov 22 13:14:58 crc kubenswrapper[4772]: I1122 13:14:58.994161 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nd7gt\" (UniqueName: \"kubernetes.io/projected/ce98e5cb-61a5-470f-8d88-e49f44abcb1b-kube-api-access-nd7gt\") pod \"ce98e5cb-61a5-470f-8d88-e49f44abcb1b\" (UID: \"ce98e5cb-61a5-470f-8d88-e49f44abcb1b\") " Nov 22 13:14:58 crc kubenswrapper[4772]: I1122 13:14:58.994276 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ce98e5cb-61a5-470f-8d88-e49f44abcb1b-host\") pod \"ce98e5cb-61a5-470f-8d88-e49f44abcb1b\" (UID: \"ce98e5cb-61a5-470f-8d88-e49f44abcb1b\") " Nov 22 13:14:58 crc kubenswrapper[4772]: I1122 13:14:58.994442 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce98e5cb-61a5-470f-8d88-e49f44abcb1b-host" (OuterVolumeSpecName: "host") pod "ce98e5cb-61a5-470f-8d88-e49f44abcb1b" (UID: "ce98e5cb-61a5-470f-8d88-e49f44abcb1b"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 13:14:58 crc kubenswrapper[4772]: I1122 13:14:58.995110 4772 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ce98e5cb-61a5-470f-8d88-e49f44abcb1b-host\") on node \"crc\" DevicePath \"\"" Nov 22 13:14:59 crc kubenswrapper[4772]: I1122 13:14:59.019529 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce98e5cb-61a5-470f-8d88-e49f44abcb1b-kube-api-access-nd7gt" (OuterVolumeSpecName: "kube-api-access-nd7gt") pod "ce98e5cb-61a5-470f-8d88-e49f44abcb1b" (UID: "ce98e5cb-61a5-470f-8d88-e49f44abcb1b"). InnerVolumeSpecName "kube-api-access-nd7gt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 13:14:59 crc kubenswrapper[4772]: I1122 13:14:59.097761 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nd7gt\" (UniqueName: \"kubernetes.io/projected/ce98e5cb-61a5-470f-8d88-e49f44abcb1b-kube-api-access-nd7gt\") on node \"crc\" DevicePath \"\"" Nov 22 13:14:59 crc kubenswrapper[4772]: I1122 13:14:59.413765 4772 scope.go:117] "RemoveContainer" containerID="0f0ceb8d50c48253b7b22a7a8a271db084d9899785bb62159676dccb6db6f1ba" Nov 22 13:14:59 crc kubenswrapper[4772]: E1122 13:14:59.414314 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 13:14:59 crc kubenswrapper[4772]: I1122 13:14:59.428096 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce98e5cb-61a5-470f-8d88-e49f44abcb1b" path="/var/lib/kubelet/pods/ce98e5cb-61a5-470f-8d88-e49f44abcb1b/volumes" Nov 22 13:14:59 crc kubenswrapper[4772]: I1122 13:14:59.733565 4772 scope.go:117] "RemoveContainer" containerID="48a178050cd9ebf6ea5cc08e49745a20fd7f82cd43fe8053a7655ef4cbd3f684" Nov 22 13:14:59 crc kubenswrapper[4772]: I1122 13:14:59.733617 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-27gls/crc-debug-xqk9v" Nov 22 13:15:00 crc kubenswrapper[4772]: I1122 13:15:00.167005 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396955-lpk2m"] Nov 22 13:15:00 crc kubenswrapper[4772]: E1122 13:15:00.171644 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce98e5cb-61a5-470f-8d88-e49f44abcb1b" containerName="container-00" Nov 22 13:15:00 crc kubenswrapper[4772]: I1122 13:15:00.171694 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce98e5cb-61a5-470f-8d88-e49f44abcb1b" containerName="container-00" Nov 22 13:15:00 crc kubenswrapper[4772]: I1122 13:15:00.172174 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce98e5cb-61a5-470f-8d88-e49f44abcb1b" containerName="container-00" Nov 22 13:15:00 crc kubenswrapper[4772]: I1122 13:15:00.173679 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396955-lpk2m" Nov 22 13:15:00 crc kubenswrapper[4772]: I1122 13:15:00.183005 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 22 13:15:00 crc kubenswrapper[4772]: I1122 13:15:00.183024 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 22 13:15:00 crc kubenswrapper[4772]: I1122 13:15:00.183411 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396955-lpk2m"] Nov 22 13:15:00 crc kubenswrapper[4772]: I1122 13:15:00.232452 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/611bca37-b5cf-422b-82ae-0fde686493ba-secret-volume\") pod \"collect-profiles-29396955-lpk2m\" (UID: \"611bca37-b5cf-422b-82ae-0fde686493ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396955-lpk2m" Nov 22 13:15:00 crc kubenswrapper[4772]: I1122 13:15:00.232733 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxbl7\" (UniqueName: \"kubernetes.io/projected/611bca37-b5cf-422b-82ae-0fde686493ba-kube-api-access-pxbl7\") pod \"collect-profiles-29396955-lpk2m\" (UID: \"611bca37-b5cf-422b-82ae-0fde686493ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396955-lpk2m" Nov 22 13:15:00 crc kubenswrapper[4772]: I1122 13:15:00.232794 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/611bca37-b5cf-422b-82ae-0fde686493ba-config-volume\") pod \"collect-profiles-29396955-lpk2m\" (UID: \"611bca37-b5cf-422b-82ae-0fde686493ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396955-lpk2m" Nov 22 13:15:00 crc kubenswrapper[4772]: I1122 13:15:00.256492 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-27gls/crc-debug-snffq"] Nov 22 13:15:00 crc kubenswrapper[4772]: I1122 13:15:00.258872 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-27gls/crc-debug-snffq" Nov 22 13:15:00 crc kubenswrapper[4772]: I1122 13:15:00.333791 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1d77bfb9-c572-4141-9d1c-baf8b8ebff9b-host\") pod \"crc-debug-snffq\" (UID: \"1d77bfb9-c572-4141-9d1c-baf8b8ebff9b\") " pod="openshift-must-gather-27gls/crc-debug-snffq" Nov 22 13:15:00 crc kubenswrapper[4772]: I1122 13:15:00.333933 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppqgq\" (UniqueName: \"kubernetes.io/projected/1d77bfb9-c572-4141-9d1c-baf8b8ebff9b-kube-api-access-ppqgq\") pod \"crc-debug-snffq\" (UID: \"1d77bfb9-c572-4141-9d1c-baf8b8ebff9b\") " pod="openshift-must-gather-27gls/crc-debug-snffq" Nov 22 13:15:00 crc kubenswrapper[4772]: I1122 13:15:00.334327 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxbl7\" (UniqueName: \"kubernetes.io/projected/611bca37-b5cf-422b-82ae-0fde686493ba-kube-api-access-pxbl7\") pod \"collect-profiles-29396955-lpk2m\" (UID: \"611bca37-b5cf-422b-82ae-0fde686493ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396955-lpk2m" Nov 22 13:15:00 crc kubenswrapper[4772]: I1122 13:15:00.334922 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/611bca37-b5cf-422b-82ae-0fde686493ba-config-volume\") pod \"collect-profiles-29396955-lpk2m\" (UID: \"611bca37-b5cf-422b-82ae-0fde686493ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396955-lpk2m" Nov 22 13:15:00 crc kubenswrapper[4772]: I1122 13:15:00.336712 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/611bca37-b5cf-422b-82ae-0fde686493ba-config-volume\") pod \"collect-profiles-29396955-lpk2m\" (UID: \"611bca37-b5cf-422b-82ae-0fde686493ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396955-lpk2m" Nov 22 13:15:00 crc kubenswrapper[4772]: I1122 13:15:00.336919 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/611bca37-b5cf-422b-82ae-0fde686493ba-secret-volume\") pod \"collect-profiles-29396955-lpk2m\" (UID: \"611bca37-b5cf-422b-82ae-0fde686493ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396955-lpk2m" Nov 22 13:15:00 crc kubenswrapper[4772]: I1122 13:15:00.352742 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/611bca37-b5cf-422b-82ae-0fde686493ba-secret-volume\") pod \"collect-profiles-29396955-lpk2m\" (UID: \"611bca37-b5cf-422b-82ae-0fde686493ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396955-lpk2m" Nov 22 13:15:00 crc kubenswrapper[4772]: I1122 13:15:00.354870 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxbl7\" (UniqueName: \"kubernetes.io/projected/611bca37-b5cf-422b-82ae-0fde686493ba-kube-api-access-pxbl7\") pod \"collect-profiles-29396955-lpk2m\" (UID: \"611bca37-b5cf-422b-82ae-0fde686493ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396955-lpk2m" Nov 22 13:15:00 crc kubenswrapper[4772]: I1122 13:15:00.439546 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" 
(UniqueName: \"kubernetes.io/host-path/1d77bfb9-c572-4141-9d1c-baf8b8ebff9b-host\") pod \"crc-debug-snffq\" (UID: \"1d77bfb9-c572-4141-9d1c-baf8b8ebff9b\") " pod="openshift-must-gather-27gls/crc-debug-snffq" Nov 22 13:15:00 crc kubenswrapper[4772]: I1122 13:15:00.439597 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppqgq\" (UniqueName: \"kubernetes.io/projected/1d77bfb9-c572-4141-9d1c-baf8b8ebff9b-kube-api-access-ppqgq\") pod \"crc-debug-snffq\" (UID: \"1d77bfb9-c572-4141-9d1c-baf8b8ebff9b\") " pod="openshift-must-gather-27gls/crc-debug-snffq" Nov 22 13:15:00 crc kubenswrapper[4772]: I1122 13:15:00.439682 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1d77bfb9-c572-4141-9d1c-baf8b8ebff9b-host\") pod \"crc-debug-snffq\" (UID: \"1d77bfb9-c572-4141-9d1c-baf8b8ebff9b\") " pod="openshift-must-gather-27gls/crc-debug-snffq" Nov 22 13:15:00 crc kubenswrapper[4772]: I1122 13:15:00.457488 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppqgq\" (UniqueName: \"kubernetes.io/projected/1d77bfb9-c572-4141-9d1c-baf8b8ebff9b-kube-api-access-ppqgq\") pod \"crc-debug-snffq\" (UID: \"1d77bfb9-c572-4141-9d1c-baf8b8ebff9b\") " pod="openshift-must-gather-27gls/crc-debug-snffq" Nov 22 13:15:00 crc kubenswrapper[4772]: I1122 13:15:00.516730 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396955-lpk2m" Nov 22 13:15:00 crc kubenswrapper[4772]: I1122 13:15:00.580800 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-27gls/crc-debug-snffq" Nov 22 13:15:00 crc kubenswrapper[4772]: I1122 13:15:00.764733 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-27gls/crc-debug-snffq" event={"ID":"1d77bfb9-c572-4141-9d1c-baf8b8ebff9b","Type":"ContainerStarted","Data":"43fae62b67db678dadbb3f761c00f749de8e043f0c61e09ef55b483d95cb7d80"} Nov 22 13:15:01 crc kubenswrapper[4772]: I1122 13:15:01.121519 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396955-lpk2m"] Nov 22 13:15:01 crc kubenswrapper[4772]: I1122 13:15:01.791395 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396955-lpk2m" event={"ID":"611bca37-b5cf-422b-82ae-0fde686493ba","Type":"ContainerStarted","Data":"32744d72b4acd6ee646fccd3f35bc775da260a2e1b40ea859a442349f8b78e51"} Nov 22 13:15:01 crc kubenswrapper[4772]: I1122 13:15:01.794525 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-27gls/crc-debug-snffq" event={"ID":"1d77bfb9-c572-4141-9d1c-baf8b8ebff9b","Type":"ContainerDied","Data":"b03756e39805c0356c0c1d7ef01e3baa048bf2a0062df6ca4e280418f67d79da"} Nov 22 13:15:01 crc kubenswrapper[4772]: I1122 13:15:01.794392 4772 generic.go:334] "Generic (PLEG): container finished" podID="1d77bfb9-c572-4141-9d1c-baf8b8ebff9b" containerID="b03756e39805c0356c0c1d7ef01e3baa048bf2a0062df6ca4e280418f67d79da" exitCode=1 Nov 22 13:15:01 crc kubenswrapper[4772]: I1122 13:15:01.848384 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-27gls/crc-debug-snffq"] Nov 22 13:15:01 crc kubenswrapper[4772]: I1122 13:15:01.862033 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-27gls/crc-debug-snffq"] Nov 22 13:15:02 crc 
kubenswrapper[4772]: I1122 13:15:02.809461 4772 generic.go:334] "Generic (PLEG): container finished" podID="611bca37-b5cf-422b-82ae-0fde686493ba" containerID="18649a4f2c8aa7766e89496ef24fbc36f7e6114906ad2ae5df35a976afce5b51" exitCode=0 Nov 22 13:15:02 crc kubenswrapper[4772]: I1122 13:15:02.809667 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396955-lpk2m" event={"ID":"611bca37-b5cf-422b-82ae-0fde686493ba","Type":"ContainerDied","Data":"18649a4f2c8aa7766e89496ef24fbc36f7e6114906ad2ae5df35a976afce5b51"} Nov 22 13:15:02 crc kubenswrapper[4772]: I1122 13:15:02.931421 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-27gls/crc-debug-snffq" Nov 22 13:15:03 crc kubenswrapper[4772]: I1122 13:15:03.062825 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1d77bfb9-c572-4141-9d1c-baf8b8ebff9b-host\") pod \"1d77bfb9-c572-4141-9d1c-baf8b8ebff9b\" (UID: \"1d77bfb9-c572-4141-9d1c-baf8b8ebff9b\") " Nov 22 13:15:03 crc kubenswrapper[4772]: I1122 13:15:03.062951 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ppqgq\" (UniqueName: \"kubernetes.io/projected/1d77bfb9-c572-4141-9d1c-baf8b8ebff9b-kube-api-access-ppqgq\") pod \"1d77bfb9-c572-4141-9d1c-baf8b8ebff9b\" (UID: \"1d77bfb9-c572-4141-9d1c-baf8b8ebff9b\") " Nov 22 13:15:03 crc kubenswrapper[4772]: I1122 13:15:03.062974 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d77bfb9-c572-4141-9d1c-baf8b8ebff9b-host" (OuterVolumeSpecName: "host") pod "1d77bfb9-c572-4141-9d1c-baf8b8ebff9b" (UID: "1d77bfb9-c572-4141-9d1c-baf8b8ebff9b"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 13:15:03 crc kubenswrapper[4772]: I1122 13:15:03.063831 4772 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1d77bfb9-c572-4141-9d1c-baf8b8ebff9b-host\") on node \"crc\" DevicePath \"\"" Nov 22 13:15:03 crc kubenswrapper[4772]: I1122 13:15:03.082807 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d77bfb9-c572-4141-9d1c-baf8b8ebff9b-kube-api-access-ppqgq" (OuterVolumeSpecName: "kube-api-access-ppqgq") pod "1d77bfb9-c572-4141-9d1c-baf8b8ebff9b" (UID: "1d77bfb9-c572-4141-9d1c-baf8b8ebff9b"). InnerVolumeSpecName "kube-api-access-ppqgq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 13:15:03 crc kubenswrapper[4772]: I1122 13:15:03.166309 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ppqgq\" (UniqueName: \"kubernetes.io/projected/1d77bfb9-c572-4141-9d1c-baf8b8ebff9b-kube-api-access-ppqgq\") on node \"crc\" DevicePath \"\"" Nov 22 13:15:03 crc kubenswrapper[4772]: I1122 13:15:03.430779 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d77bfb9-c572-4141-9d1c-baf8b8ebff9b" path="/var/lib/kubelet/pods/1d77bfb9-c572-4141-9d1c-baf8b8ebff9b/volumes" Nov 22 13:15:03 crc kubenswrapper[4772]: I1122 13:15:03.821314 4772 scope.go:117] "RemoveContainer" containerID="b03756e39805c0356c0c1d7ef01e3baa048bf2a0062df6ca4e280418f67d79da" Nov 22 13:15:03 crc kubenswrapper[4772]: I1122 13:15:03.821356 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-27gls/crc-debug-snffq" Nov 22 13:15:04 crc kubenswrapper[4772]: I1122 13:15:04.280137 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396955-lpk2m" Nov 22 13:15:04 crc kubenswrapper[4772]: I1122 13:15:04.394942 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/611bca37-b5cf-422b-82ae-0fde686493ba-secret-volume\") pod \"611bca37-b5cf-422b-82ae-0fde686493ba\" (UID: \"611bca37-b5cf-422b-82ae-0fde686493ba\") " Nov 22 13:15:04 crc kubenswrapper[4772]: I1122 13:15:04.395084 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/611bca37-b5cf-422b-82ae-0fde686493ba-config-volume\") pod \"611bca37-b5cf-422b-82ae-0fde686493ba\" (UID: \"611bca37-b5cf-422b-82ae-0fde686493ba\") " Nov 22 13:15:04 crc kubenswrapper[4772]: I1122 13:15:04.395398 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxbl7\" (UniqueName: \"kubernetes.io/projected/611bca37-b5cf-422b-82ae-0fde686493ba-kube-api-access-pxbl7\") pod \"611bca37-b5cf-422b-82ae-0fde686493ba\" (UID: \"611bca37-b5cf-422b-82ae-0fde686493ba\") " Nov 22 13:15:04 crc kubenswrapper[4772]: I1122 13:15:04.396458 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/611bca37-b5cf-422b-82ae-0fde686493ba-config-volume" (OuterVolumeSpecName: "config-volume") pod "611bca37-b5cf-422b-82ae-0fde686493ba" (UID: "611bca37-b5cf-422b-82ae-0fde686493ba"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 13:15:04 crc kubenswrapper[4772]: I1122 13:15:04.403848 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/611bca37-b5cf-422b-82ae-0fde686493ba-kube-api-access-pxbl7" (OuterVolumeSpecName: "kube-api-access-pxbl7") pod "611bca37-b5cf-422b-82ae-0fde686493ba" (UID: "611bca37-b5cf-422b-82ae-0fde686493ba"). InnerVolumeSpecName "kube-api-access-pxbl7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 13:15:04 crc kubenswrapper[4772]: I1122 13:15:04.405277 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/611bca37-b5cf-422b-82ae-0fde686493ba-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "611bca37-b5cf-422b-82ae-0fde686493ba" (UID: "611bca37-b5cf-422b-82ae-0fde686493ba"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 13:15:04 crc kubenswrapper[4772]: I1122 13:15:04.500199 4772 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/611bca37-b5cf-422b-82ae-0fde686493ba-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 22 13:15:04 crc kubenswrapper[4772]: I1122 13:15:04.500240 4772 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/611bca37-b5cf-422b-82ae-0fde686493ba-config-volume\") on node \"crc\" DevicePath \"\"" Nov 22 13:15:04 crc kubenswrapper[4772]: I1122 13:15:04.500254 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxbl7\" (UniqueName: \"kubernetes.io/projected/611bca37-b5cf-422b-82ae-0fde686493ba-kube-api-access-pxbl7\") on node \"crc\" DevicePath \"\"" Nov 22 13:15:04 crc kubenswrapper[4772]: I1122 13:15:04.836672 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396955-lpk2m" event={"ID":"611bca37-b5cf-422b-82ae-0fde686493ba","Type":"ContainerDied","Data":"32744d72b4acd6ee646fccd3f35bc775da260a2e1b40ea859a442349f8b78e51"} Nov 22 13:15:04 crc kubenswrapper[4772]: I1122 13:15:04.838006 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32744d72b4acd6ee646fccd3f35bc775da260a2e1b40ea859a442349f8b78e51" Nov 22 13:15:04 crc kubenswrapper[4772]: I1122 13:15:04.836690 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396955-lpk2m" Nov 22 13:15:05 crc kubenswrapper[4772]: I1122 13:15:05.358081 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396910-ww9lh"] Nov 22 13:15:05 crc kubenswrapper[4772]: I1122 13:15:05.371661 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396910-ww9lh"] Nov 22 13:15:05 crc kubenswrapper[4772]: I1122 13:15:05.448701 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32bbb8cc-2577-4f70-9a10-682232f0b57b" path="/var/lib/kubelet/pods/32bbb8cc-2577-4f70-9a10-682232f0b57b/volumes" Nov 22 13:15:13 crc kubenswrapper[4772]: I1122 13:15:13.413847 4772 scope.go:117] "RemoveContainer" containerID="0f0ceb8d50c48253b7b22a7a8a271db084d9899785bb62159676dccb6db6f1ba" Nov 22 13:15:13 crc kubenswrapper[4772]: E1122 13:15:13.415096 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 13:15:25 crc kubenswrapper[4772]: I1122 13:15:25.414144 4772 scope.go:117] "RemoveContainer" containerID="0f0ceb8d50c48253b7b22a7a8a271db084d9899785bb62159676dccb6db6f1ba" Nov 22 13:15:25 crc kubenswrapper[4772]: E1122 13:15:25.415290 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 13:15:26 crc kubenswrapper[4772]: I1122 13:15:26.390510 4772 scope.go:117] "RemoveContainer" containerID="94e68aaaf2f221e0f9ef78189cd4909dd7b3d7c85c2c1fa032ac350c1b3086cd" Nov 22 13:15:38 crc kubenswrapper[4772]: I1122 13:15:38.413834 4772 scope.go:117] "RemoveContainer" containerID="0f0ceb8d50c48253b7b22a7a8a271db084d9899785bb62159676dccb6db6f1ba" Nov 22 13:15:38 crc kubenswrapper[4772]: E1122 13:15:38.414961 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 13:15:40 crc kubenswrapper[4772]: I1122 13:15:40.085363 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rsfmh"] Nov 22 13:15:40 crc kubenswrapper[4772]: E1122 13:15:40.086672 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="611bca37-b5cf-422b-82ae-0fde686493ba" containerName="collect-profiles" Nov 22 13:15:40 crc kubenswrapper[4772]: I1122 13:15:40.086693 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="611bca37-b5cf-422b-82ae-0fde686493ba" containerName="collect-profiles" Nov 22 13:15:40 crc kubenswrapper[4772]: E1122 13:15:40.086732 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d77bfb9-c572-4141-9d1c-baf8b8ebff9b" containerName="container-00" Nov 22 13:15:40 crc kubenswrapper[4772]: I1122 13:15:40.086742 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d77bfb9-c572-4141-9d1c-baf8b8ebff9b" containerName="container-00" Nov 22 13:15:40 crc kubenswrapper[4772]: I1122 13:15:40.087114 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d77bfb9-c572-4141-9d1c-baf8b8ebff9b" containerName="container-00" Nov 22 13:15:40 crc kubenswrapper[4772]: I1122 13:15:40.087169 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="611bca37-b5cf-422b-82ae-0fde686493ba" containerName="collect-profiles" Nov 22 13:15:40 crc kubenswrapper[4772]: I1122 13:15:40.090022 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rsfmh" Nov 22 13:15:40 crc kubenswrapper[4772]: I1122 13:15:40.119383 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rsfmh"] Nov 22 13:15:40 crc kubenswrapper[4772]: I1122 13:15:40.230128 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61e5aac0-e74c-4259-878b-8ad4c3521644-catalog-content\") pod \"redhat-marketplace-rsfmh\" (UID: \"61e5aac0-e74c-4259-878b-8ad4c3521644\") " pod="openshift-marketplace/redhat-marketplace-rsfmh" Nov 22 13:15:40 crc kubenswrapper[4772]: I1122 13:15:40.230534 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nncq\" (UniqueName: \"kubernetes.io/projected/61e5aac0-e74c-4259-878b-8ad4c3521644-kube-api-access-5nncq\") pod \"redhat-marketplace-rsfmh\" (UID: \"61e5aac0-e74c-4259-878b-8ad4c3521644\") " pod="openshift-marketplace/redhat-marketplace-rsfmh" Nov 22 13:15:40 crc kubenswrapper[4772]: I1122 13:15:40.230593 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61e5aac0-e74c-4259-878b-8ad4c3521644-utilities\") pod \"redhat-marketplace-rsfmh\" (UID: \"61e5aac0-e74c-4259-878b-8ad4c3521644\") " pod="openshift-marketplace/redhat-marketplace-rsfmh" Nov 22 13:15:40 crc kubenswrapper[4772]: I1122 13:15:40.333232 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nncq\" (UniqueName: \"kubernetes.io/projected/61e5aac0-e74c-4259-878b-8ad4c3521644-kube-api-access-5nncq\") pod \"redhat-marketplace-rsfmh\" (UID: \"61e5aac0-e74c-4259-878b-8ad4c3521644\") " pod="openshift-marketplace/redhat-marketplace-rsfmh" Nov 22 13:15:40 crc kubenswrapper[4772]: I1122 13:15:40.333316 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61e5aac0-e74c-4259-878b-8ad4c3521644-utilities\") pod \"redhat-marketplace-rsfmh\" (UID: \"61e5aac0-e74c-4259-878b-8ad4c3521644\") " pod="openshift-marketplace/redhat-marketplace-rsfmh" Nov 22 13:15:40 crc kubenswrapper[4772]: I1122 13:15:40.333429 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61e5aac0-e74c-4259-878b-8ad4c3521644-catalog-content\") pod \"redhat-marketplace-rsfmh\" (UID: \"61e5aac0-e74c-4259-878b-8ad4c3521644\") " pod="openshift-marketplace/redhat-marketplace-rsfmh" Nov 22 13:15:40 crc kubenswrapper[4772]: I1122 13:15:40.334179 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61e5aac0-e74c-4259-878b-8ad4c3521644-catalog-content\") pod \"redhat-marketplace-rsfmh\" (UID: \"61e5aac0-e74c-4259-878b-8ad4c3521644\") " pod="openshift-marketplace/redhat-marketplace-rsfmh" Nov 22 13:15:40 crc kubenswrapper[4772]: I1122 13:15:40.334268 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61e5aac0-e74c-4259-878b-8ad4c3521644-utilities\") pod \"redhat-marketplace-rsfmh\" (UID: \"61e5aac0-e74c-4259-878b-8ad4c3521644\") " pod="openshift-marketplace/redhat-marketplace-rsfmh" Nov 22 13:15:40 crc kubenswrapper[4772]: I1122 13:15:40.357580 4772 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-5nncq\" (UniqueName: \"kubernetes.io/projected/61e5aac0-e74c-4259-878b-8ad4c3521644-kube-api-access-5nncq\") pod \"redhat-marketplace-rsfmh\" (UID: \"61e5aac0-e74c-4259-878b-8ad4c3521644\") " pod="openshift-marketplace/redhat-marketplace-rsfmh" Nov 22 13:15:40 crc kubenswrapper[4772]: I1122 13:15:40.419740 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rsfmh" Nov 22 13:15:40 crc kubenswrapper[4772]: I1122 13:15:40.996153 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rsfmh"] Nov 22 13:15:41 crc kubenswrapper[4772]: I1122 13:15:41.342659 4772 generic.go:334] "Generic (PLEG): container finished" podID="61e5aac0-e74c-4259-878b-8ad4c3521644" containerID="df218834d7b3867c11d0a3678e58631df95dd23dae82e60f76f08d8435f0a04b" exitCode=0 Nov 22 13:15:41 crc kubenswrapper[4772]: I1122 13:15:41.342731 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rsfmh" event={"ID":"61e5aac0-e74c-4259-878b-8ad4c3521644","Type":"ContainerDied","Data":"df218834d7b3867c11d0a3678e58631df95dd23dae82e60f76f08d8435f0a04b"} Nov 22 13:15:41 crc kubenswrapper[4772]: I1122 13:15:41.342824 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rsfmh" event={"ID":"61e5aac0-e74c-4259-878b-8ad4c3521644","Type":"ContainerStarted","Data":"85a9fe97b6b707a5571a406f82f5bf3a4038cf38bfdb7f70bec3e21b4c361119"} Nov 22 13:15:42 crc kubenswrapper[4772]: I1122 13:15:42.362764 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rsfmh" event={"ID":"61e5aac0-e74c-4259-878b-8ad4c3521644","Type":"ContainerStarted","Data":"2280735ea26ed82cef004f5ec76046193f3af4d9d5dae6d5da686ade344b51ec"} Nov 22 13:15:43 crc kubenswrapper[4772]: I1122 13:15:43.387650 4772 generic.go:334] "Generic (PLEG): container finished" podID="61e5aac0-e74c-4259-878b-8ad4c3521644" containerID="2280735ea26ed82cef004f5ec76046193f3af4d9d5dae6d5da686ade344b51ec" exitCode=0 Nov 22 13:15:43 crc kubenswrapper[4772]: I1122 13:15:43.387818 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rsfmh" event={"ID":"61e5aac0-e74c-4259-878b-8ad4c3521644","Type":"ContainerDied","Data":"2280735ea26ed82cef004f5ec76046193f3af4d9d5dae6d5da686ade344b51ec"} Nov 22 13:15:43 crc kubenswrapper[4772]: I1122 13:15:43.469123 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5fdlq"] Nov 22 13:15:43 crc kubenswrapper[4772]: I1122 13:15:43.477981 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5fdlq" Nov 22 13:15:43 crc kubenswrapper[4772]: I1122 13:15:43.481588 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5fdlq"] Nov 22 13:15:43 crc kubenswrapper[4772]: I1122 13:15:43.537706 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29995f88-0b42-452f-badd-a87db615eff6-catalog-content\") pod \"certified-operators-5fdlq\" (UID: \"29995f88-0b42-452f-badd-a87db615eff6\") " pod="openshift-marketplace/certified-operators-5fdlq" Nov 22 13:15:43 crc kubenswrapper[4772]: I1122 13:15:43.538113 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4h4g8\" (UniqueName: \"kubernetes.io/projected/29995f88-0b42-452f-badd-a87db615eff6-kube-api-access-4h4g8\") pod \"certified-operators-5fdlq\" (UID: \"29995f88-0b42-452f-badd-a87db615eff6\") " pod="openshift-marketplace/certified-operators-5fdlq" Nov 22 13:15:43 crc kubenswrapper[4772]: I1122 13:15:43.539029 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29995f88-0b42-452f-badd-a87db615eff6-utilities\") pod \"certified-operators-5fdlq\" (UID: \"29995f88-0b42-452f-badd-a87db615eff6\") " pod="openshift-marketplace/certified-operators-5fdlq" Nov 22 13:15:43 crc kubenswrapper[4772]: I1122 13:15:43.641688 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29995f88-0b42-452f-badd-a87db615eff6-utilities\") pod \"certified-operators-5fdlq\" (UID: \"29995f88-0b42-452f-badd-a87db615eff6\") " pod="openshift-marketplace/certified-operators-5fdlq" Nov 22 13:15:43 crc kubenswrapper[4772]: I1122 13:15:43.641854 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29995f88-0b42-452f-badd-a87db615eff6-catalog-content\") pod \"certified-operators-5fdlq\" (UID: \"29995f88-0b42-452f-badd-a87db615eff6\") " pod="openshift-marketplace/certified-operators-5fdlq" Nov 22 13:15:43 crc kubenswrapper[4772]: I1122 13:15:43.641904 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4h4g8\" (UniqueName: \"kubernetes.io/projected/29995f88-0b42-452f-badd-a87db615eff6-kube-api-access-4h4g8\") pod \"certified-operators-5fdlq\" (UID: \"29995f88-0b42-452f-badd-a87db615eff6\") " pod="openshift-marketplace/certified-operators-5fdlq" Nov 22 13:15:43 crc kubenswrapper[4772]: I1122 13:15:43.642473 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29995f88-0b42-452f-badd-a87db615eff6-utilities\") pod \"certified-operators-5fdlq\" (UID: \"29995f88-0b42-452f-badd-a87db615eff6\") " pod="openshift-marketplace/certified-operators-5fdlq" Nov 22 13:15:43 crc kubenswrapper[4772]: I1122 13:15:43.644511 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29995f88-0b42-452f-badd-a87db615eff6-catalog-content\") pod \"certified-operators-5fdlq\" (UID: \"29995f88-0b42-452f-badd-a87db615eff6\") " pod="openshift-marketplace/certified-operators-5fdlq" Nov 22 13:15:43 crc kubenswrapper[4772]: I1122 13:15:43.667696 4772 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-4h4g8\" (UniqueName: \"kubernetes.io/projected/29995f88-0b42-452f-badd-a87db615eff6-kube-api-access-4h4g8\") pod \"certified-operators-5fdlq\" (UID: \"29995f88-0b42-452f-badd-a87db615eff6\") " pod="openshift-marketplace/certified-operators-5fdlq" Nov 22 13:15:43 crc kubenswrapper[4772]: I1122 13:15:43.805771 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5fdlq" Nov 22 13:15:44 crc kubenswrapper[4772]: I1122 13:15:44.404571 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5fdlq"] Nov 22 13:15:44 crc kubenswrapper[4772]: W1122 13:15:44.406305 4772 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29995f88_0b42_452f_badd_a87db615eff6.slice/crio-4e9f4cda6fa25b64d3d8df9c9aa9a22cc23798136eed02c6b707f86ecf7ed069 WatchSource:0}: Error finding container 4e9f4cda6fa25b64d3d8df9c9aa9a22cc23798136eed02c6b707f86ecf7ed069: Status 404 returned error can't find the container with id 4e9f4cda6fa25b64d3d8df9c9aa9a22cc23798136eed02c6b707f86ecf7ed069 Nov 22 13:15:44 crc kubenswrapper[4772]: I1122 13:15:44.410585 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rsfmh" event={"ID":"61e5aac0-e74c-4259-878b-8ad4c3521644","Type":"ContainerStarted","Data":"457de544157f48786042b2246ea7a6956ad01362353606bbc0068baaa9bfd9cd"} Nov 22 13:15:44 crc kubenswrapper[4772]: I1122 13:15:44.434782 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rsfmh" podStartSLOduration=1.919828358 podStartE2EDuration="4.434763058s" podCreationTimestamp="2025-11-22 13:15:40 +0000 UTC" firstStartedPulling="2025-11-22 13:15:41.345099164 +0000 UTC m=+9461.584543658" lastFinishedPulling="2025-11-22 13:15:43.860033864 +0000 UTC m=+9464.099478358" observedRunningTime="2025-11-22 13:15:44.430071443 +0000 UTC m=+9464.669515937" watchObservedRunningTime="2025-11-22 13:15:44.434763058 +0000 UTC m=+9464.674207542" Nov 22 13:15:45 crc kubenswrapper[4772]: I1122 13:15:45.426504 4772 generic.go:334] "Generic (PLEG): container finished" podID="29995f88-0b42-452f-badd-a87db615eff6" containerID="5004a0f50e588ca26db60030074b8b61952500124ccb9f55828878ab48cb65d5" exitCode=0 Nov 22 13:15:45 crc kubenswrapper[4772]: I1122 13:15:45.430779 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5fdlq" event={"ID":"29995f88-0b42-452f-badd-a87db615eff6","Type":"ContainerDied","Data":"5004a0f50e588ca26db60030074b8b61952500124ccb9f55828878ab48cb65d5"} Nov 22 13:15:45 crc kubenswrapper[4772]: I1122 13:15:45.430828 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5fdlq" event={"ID":"29995f88-0b42-452f-badd-a87db615eff6","Type":"ContainerStarted","Data":"4e9f4cda6fa25b64d3d8df9c9aa9a22cc23798136eed02c6b707f86ecf7ed069"} Nov 22 13:15:46 crc kubenswrapper[4772]: I1122 13:15:46.442616 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5fdlq" event={"ID":"29995f88-0b42-452f-badd-a87db615eff6","Type":"ContainerStarted","Data":"ee9ee65c22284e350e297b441e205fd754854a71cec714f6bc5da2741d1d354c"} Nov 22 13:15:48 crc kubenswrapper[4772]: E1122 13:15:48.035919 4772 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" 
err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29995f88_0b42_452f_badd_a87db615eff6.slice/crio-ee9ee65c22284e350e297b441e205fd754854a71cec714f6bc5da2741d1d354c.scope\": RecentStats: unable to find data in memory cache]" Nov 22 13:15:48 crc kubenswrapper[4772]: I1122 13:15:48.473308 4772 generic.go:334] "Generic (PLEG): container finished" podID="29995f88-0b42-452f-badd-a87db615eff6" containerID="ee9ee65c22284e350e297b441e205fd754854a71cec714f6bc5da2741d1d354c" exitCode=0 Nov 22 13:15:48 crc kubenswrapper[4772]: I1122 13:15:48.473374 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5fdlq" event={"ID":"29995f88-0b42-452f-badd-a87db615eff6","Type":"ContainerDied","Data":"ee9ee65c22284e350e297b441e205fd754854a71cec714f6bc5da2741d1d354c"} Nov 22 13:15:49 crc kubenswrapper[4772]: I1122 13:15:49.488265 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5fdlq" event={"ID":"29995f88-0b42-452f-badd-a87db615eff6","Type":"ContainerStarted","Data":"9ceee1712943cc419657869e875060b4ecb9cdf4c69fa50ef13dce46030d4d29"} Nov 22 13:15:49 crc kubenswrapper[4772]: I1122 13:15:49.516415 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5fdlq" podStartSLOduration=3.097184736 podStartE2EDuration="6.5163957s" podCreationTimestamp="2025-11-22 13:15:43 +0000 UTC" firstStartedPulling="2025-11-22 13:15:45.432149111 +0000 UTC m=+9465.671593605" lastFinishedPulling="2025-11-22 13:15:48.851360075 +0000 UTC m=+9469.090804569" observedRunningTime="2025-11-22 13:15:49.512981636 +0000 UTC m=+9469.752426130" watchObservedRunningTime="2025-11-22 13:15:49.5163957 +0000 UTC m=+9469.755840194" Nov 22 13:15:50 crc kubenswrapper[4772]: I1122 13:15:50.413869 4772 scope.go:117] "RemoveContainer" containerID="0f0ceb8d50c48253b7b22a7a8a271db084d9899785bb62159676dccb6db6f1ba" Nov 22 13:15:50 crc kubenswrapper[4772]: E1122 13:15:50.414636 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 13:15:50 crc kubenswrapper[4772]: I1122 13:15:50.420548 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rsfmh" Nov 22 13:15:50 crc kubenswrapper[4772]: I1122 13:15:50.421092 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rsfmh" Nov 22 13:15:51 crc kubenswrapper[4772]: I1122 13:15:51.270525 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rsfmh" Nov 22 13:15:51 crc kubenswrapper[4772]: I1122 13:15:51.609554 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rsfmh" Nov 22 13:15:52 crc kubenswrapper[4772]: I1122 13:15:52.655843 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rsfmh"] Nov 22 13:15:53 crc kubenswrapper[4772]: I1122 13:15:53.535283 4772 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-marketplace-rsfmh" podUID="61e5aac0-e74c-4259-878b-8ad4c3521644" containerName="registry-server" containerID="cri-o://457de544157f48786042b2246ea7a6956ad01362353606bbc0068baaa9bfd9cd" gracePeriod=2 Nov 22 13:15:53 crc kubenswrapper[4772]: I1122 13:15:53.806289 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5fdlq" Nov 22 13:15:53 crc kubenswrapper[4772]: I1122 13:15:53.808188 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5fdlq" Nov 22 13:15:53 crc kubenswrapper[4772]: I1122 13:15:53.882337 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5fdlq" Nov 22 13:15:54 crc kubenswrapper[4772]: I1122 13:15:54.122878 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rsfmh" Nov 22 13:15:54 crc kubenswrapper[4772]: I1122 13:15:54.230198 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61e5aac0-e74c-4259-878b-8ad4c3521644-catalog-content\") pod \"61e5aac0-e74c-4259-878b-8ad4c3521644\" (UID: \"61e5aac0-e74c-4259-878b-8ad4c3521644\") " Nov 22 13:15:54 crc kubenswrapper[4772]: I1122 13:15:54.230399 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61e5aac0-e74c-4259-878b-8ad4c3521644-utilities\") pod \"61e5aac0-e74c-4259-878b-8ad4c3521644\" (UID: \"61e5aac0-e74c-4259-878b-8ad4c3521644\") " Nov 22 13:15:54 crc kubenswrapper[4772]: I1122 13:15:54.230540 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5nncq\" (UniqueName: \"kubernetes.io/projected/61e5aac0-e74c-4259-878b-8ad4c3521644-kube-api-access-5nncq\") pod \"61e5aac0-e74c-4259-878b-8ad4c3521644\" (UID: \"61e5aac0-e74c-4259-878b-8ad4c3521644\") " Nov 22 13:15:54 crc kubenswrapper[4772]: I1122 13:15:54.231307 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61e5aac0-e74c-4259-878b-8ad4c3521644-utilities" (OuterVolumeSpecName: "utilities") pod "61e5aac0-e74c-4259-878b-8ad4c3521644" (UID: "61e5aac0-e74c-4259-878b-8ad4c3521644"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 13:15:54 crc kubenswrapper[4772]: I1122 13:15:54.236748 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61e5aac0-e74c-4259-878b-8ad4c3521644-kube-api-access-5nncq" (OuterVolumeSpecName: "kube-api-access-5nncq") pod "61e5aac0-e74c-4259-878b-8ad4c3521644" (UID: "61e5aac0-e74c-4259-878b-8ad4c3521644"). InnerVolumeSpecName "kube-api-access-5nncq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 13:15:54 crc kubenswrapper[4772]: I1122 13:15:54.257240 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61e5aac0-e74c-4259-878b-8ad4c3521644-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "61e5aac0-e74c-4259-878b-8ad4c3521644" (UID: "61e5aac0-e74c-4259-878b-8ad4c3521644"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 13:15:54 crc kubenswrapper[4772]: I1122 13:15:54.333471 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5nncq\" (UniqueName: \"kubernetes.io/projected/61e5aac0-e74c-4259-878b-8ad4c3521644-kube-api-access-5nncq\") on node \"crc\" DevicePath \"\"" Nov 22 13:15:54 crc kubenswrapper[4772]: I1122 13:15:54.333538 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61e5aac0-e74c-4259-878b-8ad4c3521644-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 13:15:54 crc kubenswrapper[4772]: I1122 13:15:54.333553 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61e5aac0-e74c-4259-878b-8ad4c3521644-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 13:15:54 crc kubenswrapper[4772]: I1122 13:15:54.562680 4772 generic.go:334] "Generic (PLEG): container finished" podID="61e5aac0-e74c-4259-878b-8ad4c3521644" containerID="457de544157f48786042b2246ea7a6956ad01362353606bbc0068baaa9bfd9cd" exitCode=0 Nov 22 13:15:54 crc kubenswrapper[4772]: I1122 13:15:54.562791 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rsfmh" Nov 22 13:15:54 crc kubenswrapper[4772]: I1122 13:15:54.562870 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rsfmh" event={"ID":"61e5aac0-e74c-4259-878b-8ad4c3521644","Type":"ContainerDied","Data":"457de544157f48786042b2246ea7a6956ad01362353606bbc0068baaa9bfd9cd"} Nov 22 13:15:54 crc kubenswrapper[4772]: I1122 13:15:54.563019 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rsfmh" event={"ID":"61e5aac0-e74c-4259-878b-8ad4c3521644","Type":"ContainerDied","Data":"85a9fe97b6b707a5571a406f82f5bf3a4038cf38bfdb7f70bec3e21b4c361119"} Nov 22 13:15:54 crc kubenswrapper[4772]: I1122 13:15:54.563073 4772 scope.go:117] "RemoveContainer" containerID="457de544157f48786042b2246ea7a6956ad01362353606bbc0068baaa9bfd9cd" Nov 22 13:15:54 crc kubenswrapper[4772]: I1122 13:15:54.592490 4772 scope.go:117] "RemoveContainer" containerID="2280735ea26ed82cef004f5ec76046193f3af4d9d5dae6d5da686ade344b51ec" Nov 22 13:15:54 crc kubenswrapper[4772]: I1122 13:15:54.604641 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rsfmh"] Nov 22 13:15:54 crc kubenswrapper[4772]: I1122 13:15:54.616679 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rsfmh"] Nov 22 13:15:54 crc kubenswrapper[4772]: I1122 13:15:54.619123 4772 scope.go:117] "RemoveContainer" containerID="df218834d7b3867c11d0a3678e58631df95dd23dae82e60f76f08d8435f0a04b" Nov 22 13:15:54 crc kubenswrapper[4772]: I1122 13:15:54.630187 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5fdlq" Nov 22 13:15:54 crc kubenswrapper[4772]: I1122 13:15:54.667337 4772 scope.go:117] "RemoveContainer" containerID="457de544157f48786042b2246ea7a6956ad01362353606bbc0068baaa9bfd9cd" Nov 22 13:15:54 crc kubenswrapper[4772]: E1122 13:15:54.668248 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"457de544157f48786042b2246ea7a6956ad01362353606bbc0068baaa9bfd9cd\": container with ID starting with 
457de544157f48786042b2246ea7a6956ad01362353606bbc0068baaa9bfd9cd not found: ID does not exist" containerID="457de544157f48786042b2246ea7a6956ad01362353606bbc0068baaa9bfd9cd" Nov 22 13:15:54 crc kubenswrapper[4772]: I1122 13:15:54.668351 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"457de544157f48786042b2246ea7a6956ad01362353606bbc0068baaa9bfd9cd"} err="failed to get container status \"457de544157f48786042b2246ea7a6956ad01362353606bbc0068baaa9bfd9cd\": rpc error: code = NotFound desc = could not find container \"457de544157f48786042b2246ea7a6956ad01362353606bbc0068baaa9bfd9cd\": container with ID starting with 457de544157f48786042b2246ea7a6956ad01362353606bbc0068baaa9bfd9cd not found: ID does not exist" Nov 22 13:15:54 crc kubenswrapper[4772]: I1122 13:15:54.668406 4772 scope.go:117] "RemoveContainer" containerID="2280735ea26ed82cef004f5ec76046193f3af4d9d5dae6d5da686ade344b51ec" Nov 22 13:15:54 crc kubenswrapper[4772]: E1122 13:15:54.668985 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2280735ea26ed82cef004f5ec76046193f3af4d9d5dae6d5da686ade344b51ec\": container with ID starting with 2280735ea26ed82cef004f5ec76046193f3af4d9d5dae6d5da686ade344b51ec not found: ID does not exist" containerID="2280735ea26ed82cef004f5ec76046193f3af4d9d5dae6d5da686ade344b51ec" Nov 22 13:15:54 crc kubenswrapper[4772]: I1122 13:15:54.669024 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2280735ea26ed82cef004f5ec76046193f3af4d9d5dae6d5da686ade344b51ec"} err="failed to get container status \"2280735ea26ed82cef004f5ec76046193f3af4d9d5dae6d5da686ade344b51ec\": rpc error: code = NotFound desc = could not find container \"2280735ea26ed82cef004f5ec76046193f3af4d9d5dae6d5da686ade344b51ec\": container with ID starting with 2280735ea26ed82cef004f5ec76046193f3af4d9d5dae6d5da686ade344b51ec not found: ID does not exist" Nov 22 13:15:54 crc kubenswrapper[4772]: I1122 13:15:54.669100 4772 scope.go:117] "RemoveContainer" containerID="df218834d7b3867c11d0a3678e58631df95dd23dae82e60f76f08d8435f0a04b" Nov 22 13:15:54 crc kubenswrapper[4772]: E1122 13:15:54.669501 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df218834d7b3867c11d0a3678e58631df95dd23dae82e60f76f08d8435f0a04b\": container with ID starting with df218834d7b3867c11d0a3678e58631df95dd23dae82e60f76f08d8435f0a04b not found: ID does not exist" containerID="df218834d7b3867c11d0a3678e58631df95dd23dae82e60f76f08d8435f0a04b" Nov 22 13:15:54 crc kubenswrapper[4772]: I1122 13:15:54.669524 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df218834d7b3867c11d0a3678e58631df95dd23dae82e60f76f08d8435f0a04b"} err="failed to get container status \"df218834d7b3867c11d0a3678e58631df95dd23dae82e60f76f08d8435f0a04b\": rpc error: code = NotFound desc = could not find container \"df218834d7b3867c11d0a3678e58631df95dd23dae82e60f76f08d8435f0a04b\": container with ID starting with df218834d7b3867c11d0a3678e58631df95dd23dae82e60f76f08d8435f0a04b not found: ID does not exist" Nov 22 13:15:55 crc kubenswrapper[4772]: I1122 13:15:55.429862 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61e5aac0-e74c-4259-878b-8ad4c3521644" path="/var/lib/kubelet/pods/61e5aac0-e74c-4259-878b-8ad4c3521644/volumes" Nov 22 13:15:55 crc kubenswrapper[4772]: I1122 13:15:55.856681 
4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5fdlq"] Nov 22 13:15:57 crc kubenswrapper[4772]: I1122 13:15:57.602134 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5fdlq" podUID="29995f88-0b42-452f-badd-a87db615eff6" containerName="registry-server" containerID="cri-o://9ceee1712943cc419657869e875060b4ecb9cdf4c69fa50ef13dce46030d4d29" gracePeriod=2 Nov 22 13:15:58 crc kubenswrapper[4772]: I1122 13:15:58.291307 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5fdlq" Nov 22 13:15:58 crc kubenswrapper[4772]: I1122 13:15:58.349641 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29995f88-0b42-452f-badd-a87db615eff6-utilities\") pod \"29995f88-0b42-452f-badd-a87db615eff6\" (UID: \"29995f88-0b42-452f-badd-a87db615eff6\") " Nov 22 13:15:58 crc kubenswrapper[4772]: I1122 13:15:58.350029 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4h4g8\" (UniqueName: \"kubernetes.io/projected/29995f88-0b42-452f-badd-a87db615eff6-kube-api-access-4h4g8\") pod \"29995f88-0b42-452f-badd-a87db615eff6\" (UID: \"29995f88-0b42-452f-badd-a87db615eff6\") " Nov 22 13:15:58 crc kubenswrapper[4772]: I1122 13:15:58.350321 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29995f88-0b42-452f-badd-a87db615eff6-catalog-content\") pod \"29995f88-0b42-452f-badd-a87db615eff6\" (UID: \"29995f88-0b42-452f-badd-a87db615eff6\") " Nov 22 13:15:58 crc kubenswrapper[4772]: I1122 13:15:58.351513 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29995f88-0b42-452f-badd-a87db615eff6-utilities" (OuterVolumeSpecName: "utilities") pod "29995f88-0b42-452f-badd-a87db615eff6" (UID: "29995f88-0b42-452f-badd-a87db615eff6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 13:15:58 crc kubenswrapper[4772]: I1122 13:15:58.357966 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29995f88-0b42-452f-badd-a87db615eff6-kube-api-access-4h4g8" (OuterVolumeSpecName: "kube-api-access-4h4g8") pod "29995f88-0b42-452f-badd-a87db615eff6" (UID: "29995f88-0b42-452f-badd-a87db615eff6"). InnerVolumeSpecName "kube-api-access-4h4g8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 13:15:58 crc kubenswrapper[4772]: I1122 13:15:58.418210 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29995f88-0b42-452f-badd-a87db615eff6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "29995f88-0b42-452f-badd-a87db615eff6" (UID: "29995f88-0b42-452f-badd-a87db615eff6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 13:15:58 crc kubenswrapper[4772]: I1122 13:15:58.453140 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29995f88-0b42-452f-badd-a87db615eff6-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 13:15:58 crc kubenswrapper[4772]: I1122 13:15:58.453605 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4h4g8\" (UniqueName: \"kubernetes.io/projected/29995f88-0b42-452f-badd-a87db615eff6-kube-api-access-4h4g8\") on node \"crc\" DevicePath \"\"" Nov 22 13:15:58 crc kubenswrapper[4772]: I1122 13:15:58.453616 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29995f88-0b42-452f-badd-a87db615eff6-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 13:15:58 crc kubenswrapper[4772]: I1122 13:15:58.620638 4772 generic.go:334] "Generic (PLEG): container finished" podID="29995f88-0b42-452f-badd-a87db615eff6" containerID="9ceee1712943cc419657869e875060b4ecb9cdf4c69fa50ef13dce46030d4d29" exitCode=0 Nov 22 13:15:58 crc kubenswrapper[4772]: I1122 13:15:58.620714 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5fdlq" event={"ID":"29995f88-0b42-452f-badd-a87db615eff6","Type":"ContainerDied","Data":"9ceee1712943cc419657869e875060b4ecb9cdf4c69fa50ef13dce46030d4d29"} Nov 22 13:15:58 crc kubenswrapper[4772]: I1122 13:15:58.620790 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5fdlq" event={"ID":"29995f88-0b42-452f-badd-a87db615eff6","Type":"ContainerDied","Data":"4e9f4cda6fa25b64d3d8df9c9aa9a22cc23798136eed02c6b707f86ecf7ed069"} Nov 22 13:15:58 crc kubenswrapper[4772]: I1122 13:15:58.620822 4772 scope.go:117] "RemoveContainer" containerID="9ceee1712943cc419657869e875060b4ecb9cdf4c69fa50ef13dce46030d4d29" Nov 22 13:15:58 crc kubenswrapper[4772]: I1122 13:15:58.621153 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5fdlq" Nov 22 13:15:58 crc kubenswrapper[4772]: I1122 13:15:58.660479 4772 scope.go:117] "RemoveContainer" containerID="ee9ee65c22284e350e297b441e205fd754854a71cec714f6bc5da2741d1d354c" Nov 22 13:15:58 crc kubenswrapper[4772]: I1122 13:15:58.680004 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5fdlq"] Nov 22 13:15:58 crc kubenswrapper[4772]: I1122 13:15:58.689084 4772 scope.go:117] "RemoveContainer" containerID="5004a0f50e588ca26db60030074b8b61952500124ccb9f55828878ab48cb65d5" Nov 22 13:15:58 crc kubenswrapper[4772]: I1122 13:15:58.691635 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5fdlq"] Nov 22 13:15:58 crc kubenswrapper[4772]: I1122 13:15:58.741609 4772 scope.go:117] "RemoveContainer" containerID="9ceee1712943cc419657869e875060b4ecb9cdf4c69fa50ef13dce46030d4d29" Nov 22 13:15:58 crc kubenswrapper[4772]: E1122 13:15:58.742069 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ceee1712943cc419657869e875060b4ecb9cdf4c69fa50ef13dce46030d4d29\": container with ID starting with 9ceee1712943cc419657869e875060b4ecb9cdf4c69fa50ef13dce46030d4d29 not found: ID does not exist" containerID="9ceee1712943cc419657869e875060b4ecb9cdf4c69fa50ef13dce46030d4d29" Nov 22 13:15:58 crc kubenswrapper[4772]: I1122 13:15:58.742125 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ceee1712943cc419657869e875060b4ecb9cdf4c69fa50ef13dce46030d4d29"} err="failed to get container status \"9ceee1712943cc419657869e875060b4ecb9cdf4c69fa50ef13dce46030d4d29\": rpc error: code = NotFound desc = could not find container \"9ceee1712943cc419657869e875060b4ecb9cdf4c69fa50ef13dce46030d4d29\": container with ID starting with 9ceee1712943cc419657869e875060b4ecb9cdf4c69fa50ef13dce46030d4d29 not found: ID does not exist" Nov 22 13:15:58 crc kubenswrapper[4772]: I1122 13:15:58.742160 4772 scope.go:117] "RemoveContainer" containerID="ee9ee65c22284e350e297b441e205fd754854a71cec714f6bc5da2741d1d354c" Nov 22 13:15:58 crc kubenswrapper[4772]: E1122 13:15:58.742570 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee9ee65c22284e350e297b441e205fd754854a71cec714f6bc5da2741d1d354c\": container with ID starting with ee9ee65c22284e350e297b441e205fd754854a71cec714f6bc5da2741d1d354c not found: ID does not exist" containerID="ee9ee65c22284e350e297b441e205fd754854a71cec714f6bc5da2741d1d354c" Nov 22 13:15:58 crc kubenswrapper[4772]: I1122 13:15:58.742647 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee9ee65c22284e350e297b441e205fd754854a71cec714f6bc5da2741d1d354c"} err="failed to get container status \"ee9ee65c22284e350e297b441e205fd754854a71cec714f6bc5da2741d1d354c\": rpc error: code = NotFound desc = could not find container \"ee9ee65c22284e350e297b441e205fd754854a71cec714f6bc5da2741d1d354c\": container with ID starting with ee9ee65c22284e350e297b441e205fd754854a71cec714f6bc5da2741d1d354c not found: ID does not exist" Nov 22 13:15:58 crc kubenswrapper[4772]: I1122 13:15:58.742675 4772 scope.go:117] "RemoveContainer" containerID="5004a0f50e588ca26db60030074b8b61952500124ccb9f55828878ab48cb65d5" Nov 22 13:15:58 crc kubenswrapper[4772]: E1122 13:15:58.743380 4772 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"5004a0f50e588ca26db60030074b8b61952500124ccb9f55828878ab48cb65d5\": container with ID starting with 5004a0f50e588ca26db60030074b8b61952500124ccb9f55828878ab48cb65d5 not found: ID does not exist" containerID="5004a0f50e588ca26db60030074b8b61952500124ccb9f55828878ab48cb65d5" Nov 22 13:15:58 crc kubenswrapper[4772]: I1122 13:15:58.743419 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5004a0f50e588ca26db60030074b8b61952500124ccb9f55828878ab48cb65d5"} err="failed to get container status \"5004a0f50e588ca26db60030074b8b61952500124ccb9f55828878ab48cb65d5\": rpc error: code = NotFound desc = could not find container \"5004a0f50e588ca26db60030074b8b61952500124ccb9f55828878ab48cb65d5\": container with ID starting with 5004a0f50e588ca26db60030074b8b61952500124ccb9f55828878ab48cb65d5 not found: ID does not exist" Nov 22 13:15:59 crc kubenswrapper[4772]: I1122 13:15:59.435521 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29995f88-0b42-452f-badd-a87db615eff6" path="/var/lib/kubelet/pods/29995f88-0b42-452f-badd-a87db615eff6/volumes" Nov 22 13:16:02 crc kubenswrapper[4772]: I1122 13:16:02.414221 4772 scope.go:117] "RemoveContainer" containerID="0f0ceb8d50c48253b7b22a7a8a271db084d9899785bb62159676dccb6db6f1ba" Nov 22 13:16:02 crc kubenswrapper[4772]: E1122 13:16:02.415042 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 13:16:13 crc kubenswrapper[4772]: I1122 13:16:13.414149 4772 scope.go:117] "RemoveContainer" containerID="0f0ceb8d50c48253b7b22a7a8a271db084d9899785bb62159676dccb6db6f1ba" Nov 22 13:16:13 crc kubenswrapper[4772]: E1122 13:16:13.414913 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 13:16:27 crc kubenswrapper[4772]: I1122 13:16:27.414794 4772 scope.go:117] "RemoveContainer" containerID="0f0ceb8d50c48253b7b22a7a8a271db084d9899785bb62159676dccb6db6f1ba" Nov 22 13:16:27 crc kubenswrapper[4772]: E1122 13:16:27.415992 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 13:16:40 crc kubenswrapper[4772]: I1122 13:16:40.414351 4772 scope.go:117] "RemoveContainer" containerID="0f0ceb8d50c48253b7b22a7a8a271db084d9899785bb62159676dccb6db6f1ba" Nov 22 13:16:40 crc kubenswrapper[4772]: E1122 13:16:40.415715 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 13:16:51 crc kubenswrapper[4772]: I1122 13:16:51.429408 4772 scope.go:117] "RemoveContainer" containerID="0f0ceb8d50c48253b7b22a7a8a271db084d9899785bb62159676dccb6db6f1ba" Nov 22 13:16:51 crc kubenswrapper[4772]: E1122 13:16:51.431002 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 13:17:04 crc kubenswrapper[4772]: I1122 13:17:04.413778 4772 scope.go:117] "RemoveContainer" containerID="0f0ceb8d50c48253b7b22a7a8a271db084d9899785bb62159676dccb6db6f1ba" Nov 22 13:17:04 crc kubenswrapper[4772]: E1122 13:17:04.414956 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 13:17:16 crc kubenswrapper[4772]: I1122 13:17:16.414623 4772 scope.go:117] "RemoveContainer" containerID="0f0ceb8d50c48253b7b22a7a8a271db084d9899785bb62159676dccb6db6f1ba" Nov 22 13:17:16 crc kubenswrapper[4772]: E1122 13:17:16.415556 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 13:17:31 crc kubenswrapper[4772]: I1122 13:17:31.424235 4772 scope.go:117] "RemoveContainer" containerID="0f0ceb8d50c48253b7b22a7a8a271db084d9899785bb62159676dccb6db6f1ba" Nov 22 13:17:31 crc kubenswrapper[4772]: E1122 13:17:31.425730 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 13:17:46 crc kubenswrapper[4772]: I1122 13:17:46.414240 4772 scope.go:117] "RemoveContainer" containerID="0f0ceb8d50c48253b7b22a7a8a271db084d9899785bb62159676dccb6db6f1ba" Nov 22 13:17:47 crc kubenswrapper[4772]: I1122 13:17:47.000096 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" event={"ID":"2386c238-461f-4956-940f-ac3c26eb052e","Type":"ContainerStarted","Data":"9751772cce3498087c1c979229fc021c29c99fe4322e140218c3d05d6ab30edd"} Nov 22 
13:18:36 crc kubenswrapper[4772]: I1122 13:18:36.722007 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2vntc"] Nov 22 13:18:36 crc kubenswrapper[4772]: E1122 13:18:36.723297 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29995f88-0b42-452f-badd-a87db615eff6" containerName="extract-utilities" Nov 22 13:18:36 crc kubenswrapper[4772]: I1122 13:18:36.723326 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="29995f88-0b42-452f-badd-a87db615eff6" containerName="extract-utilities" Nov 22 13:18:36 crc kubenswrapper[4772]: E1122 13:18:36.723342 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61e5aac0-e74c-4259-878b-8ad4c3521644" containerName="extract-utilities" Nov 22 13:18:36 crc kubenswrapper[4772]: I1122 13:18:36.723348 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="61e5aac0-e74c-4259-878b-8ad4c3521644" containerName="extract-utilities" Nov 22 13:18:36 crc kubenswrapper[4772]: E1122 13:18:36.723395 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29995f88-0b42-452f-badd-a87db615eff6" containerName="registry-server" Nov 22 13:18:36 crc kubenswrapper[4772]: I1122 13:18:36.723401 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="29995f88-0b42-452f-badd-a87db615eff6" containerName="registry-server" Nov 22 13:18:36 crc kubenswrapper[4772]: E1122 13:18:36.723412 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61e5aac0-e74c-4259-878b-8ad4c3521644" containerName="registry-server" Nov 22 13:18:36 crc kubenswrapper[4772]: I1122 13:18:36.723418 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="61e5aac0-e74c-4259-878b-8ad4c3521644" containerName="registry-server" Nov 22 13:18:36 crc kubenswrapper[4772]: E1122 13:18:36.723429 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29995f88-0b42-452f-badd-a87db615eff6" containerName="extract-content" Nov 22 13:18:36 crc kubenswrapper[4772]: I1122 13:18:36.723435 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="29995f88-0b42-452f-badd-a87db615eff6" containerName="extract-content" Nov 22 13:18:36 crc kubenswrapper[4772]: E1122 13:18:36.723455 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61e5aac0-e74c-4259-878b-8ad4c3521644" containerName="extract-content" Nov 22 13:18:36 crc kubenswrapper[4772]: I1122 13:18:36.723461 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="61e5aac0-e74c-4259-878b-8ad4c3521644" containerName="extract-content" Nov 22 13:18:36 crc kubenswrapper[4772]: I1122 13:18:36.723666 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="61e5aac0-e74c-4259-878b-8ad4c3521644" containerName="registry-server" Nov 22 13:18:36 crc kubenswrapper[4772]: I1122 13:18:36.723689 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="29995f88-0b42-452f-badd-a87db615eff6" containerName="registry-server" Nov 22 13:18:36 crc kubenswrapper[4772]: I1122 13:18:36.725448 4772 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2vntc" Nov 22 13:18:36 crc kubenswrapper[4772]: I1122 13:18:36.752235 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2vntc"] Nov 22 13:18:36 crc kubenswrapper[4772]: I1122 13:18:36.848562 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2a3e8b6-2537-4b2b-9f12-d864eaf1a2b8-catalog-content\") pod \"community-operators-2vntc\" (UID: \"f2a3e8b6-2537-4b2b-9f12-d864eaf1a2b8\") " pod="openshift-marketplace/community-operators-2vntc" Nov 22 13:18:36 crc kubenswrapper[4772]: I1122 13:18:36.848614 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2a3e8b6-2537-4b2b-9f12-d864eaf1a2b8-utilities\") pod \"community-operators-2vntc\" (UID: \"f2a3e8b6-2537-4b2b-9f12-d864eaf1a2b8\") " pod="openshift-marketplace/community-operators-2vntc" Nov 22 13:18:36 crc kubenswrapper[4772]: I1122 13:18:36.848640 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zq8zk\" (UniqueName: \"kubernetes.io/projected/f2a3e8b6-2537-4b2b-9f12-d864eaf1a2b8-kube-api-access-zq8zk\") pod \"community-operators-2vntc\" (UID: \"f2a3e8b6-2537-4b2b-9f12-d864eaf1a2b8\") " pod="openshift-marketplace/community-operators-2vntc" Nov 22 13:18:36 crc kubenswrapper[4772]: I1122 13:18:36.952769 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2a3e8b6-2537-4b2b-9f12-d864eaf1a2b8-catalog-content\") pod \"community-operators-2vntc\" (UID: \"f2a3e8b6-2537-4b2b-9f12-d864eaf1a2b8\") " pod="openshift-marketplace/community-operators-2vntc" Nov 22 13:18:36 crc kubenswrapper[4772]: I1122 13:18:36.952851 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2a3e8b6-2537-4b2b-9f12-d864eaf1a2b8-utilities\") pod \"community-operators-2vntc\" (UID: \"f2a3e8b6-2537-4b2b-9f12-d864eaf1a2b8\") " pod="openshift-marketplace/community-operators-2vntc" Nov 22 13:18:36 crc kubenswrapper[4772]: I1122 13:18:36.952911 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zq8zk\" (UniqueName: \"kubernetes.io/projected/f2a3e8b6-2537-4b2b-9f12-d864eaf1a2b8-kube-api-access-zq8zk\") pod \"community-operators-2vntc\" (UID: \"f2a3e8b6-2537-4b2b-9f12-d864eaf1a2b8\") " pod="openshift-marketplace/community-operators-2vntc" Nov 22 13:18:36 crc kubenswrapper[4772]: I1122 13:18:36.953374 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2a3e8b6-2537-4b2b-9f12-d864eaf1a2b8-catalog-content\") pod \"community-operators-2vntc\" (UID: \"f2a3e8b6-2537-4b2b-9f12-d864eaf1a2b8\") " pod="openshift-marketplace/community-operators-2vntc" Nov 22 13:18:36 crc kubenswrapper[4772]: I1122 13:18:36.954239 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2a3e8b6-2537-4b2b-9f12-d864eaf1a2b8-utilities\") pod \"community-operators-2vntc\" (UID: \"f2a3e8b6-2537-4b2b-9f12-d864eaf1a2b8\") " pod="openshift-marketplace/community-operators-2vntc" Nov 22 13:18:37 crc kubenswrapper[4772]: I1122 13:18:37.349000 4772 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-zq8zk\" (UniqueName: \"kubernetes.io/projected/f2a3e8b6-2537-4b2b-9f12-d864eaf1a2b8-kube-api-access-zq8zk\") pod \"community-operators-2vntc\" (UID: \"f2a3e8b6-2537-4b2b-9f12-d864eaf1a2b8\") " pod="openshift-marketplace/community-operators-2vntc" Nov 22 13:18:37 crc kubenswrapper[4772]: I1122 13:18:37.649663 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2vntc" Nov 22 13:18:38 crc kubenswrapper[4772]: I1122 13:18:38.157418 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2vntc"] Nov 22 13:18:38 crc kubenswrapper[4772]: I1122 13:18:38.616686 4772 generic.go:334] "Generic (PLEG): container finished" podID="f2a3e8b6-2537-4b2b-9f12-d864eaf1a2b8" containerID="e1eb6ccd99690660f56c36b39b4a8dc671754927dceed2aa62d9bd4082b39c6f" exitCode=0 Nov 22 13:18:38 crc kubenswrapper[4772]: I1122 13:18:38.616749 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2vntc" event={"ID":"f2a3e8b6-2537-4b2b-9f12-d864eaf1a2b8","Type":"ContainerDied","Data":"e1eb6ccd99690660f56c36b39b4a8dc671754927dceed2aa62d9bd4082b39c6f"} Nov 22 13:18:38 crc kubenswrapper[4772]: I1122 13:18:38.617096 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2vntc" event={"ID":"f2a3e8b6-2537-4b2b-9f12-d864eaf1a2b8","Type":"ContainerStarted","Data":"6455aac34af70eff211063923c1fee51a97a69f3d2d2037ccb735e935548f785"} Nov 22 13:18:38 crc kubenswrapper[4772]: I1122 13:18:38.618927 4772 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 13:18:40 crc kubenswrapper[4772]: I1122 13:18:40.642910 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2vntc" event={"ID":"f2a3e8b6-2537-4b2b-9f12-d864eaf1a2b8","Type":"ContainerStarted","Data":"db80f6463c0b83584e18618bb4081e4be9791516f3667d388c40a368030b059f"} Nov 22 13:18:42 crc kubenswrapper[4772]: I1122 13:18:42.666431 4772 generic.go:334] "Generic (PLEG): container finished" podID="f2a3e8b6-2537-4b2b-9f12-d864eaf1a2b8" containerID="db80f6463c0b83584e18618bb4081e4be9791516f3667d388c40a368030b059f" exitCode=0 Nov 22 13:18:42 crc kubenswrapper[4772]: I1122 13:18:42.666471 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2vntc" event={"ID":"f2a3e8b6-2537-4b2b-9f12-d864eaf1a2b8","Type":"ContainerDied","Data":"db80f6463c0b83584e18618bb4081e4be9791516f3667d388c40a368030b059f"} Nov 22 13:18:43 crc kubenswrapper[4772]: I1122 13:18:43.680006 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2vntc" event={"ID":"f2a3e8b6-2537-4b2b-9f12-d864eaf1a2b8","Type":"ContainerStarted","Data":"6fc8f91f999bce3b1b2b36f51b5c58b5f1c2d8acfb5ec4020201d2bd7fdd8ccf"} Nov 22 13:18:47 crc kubenswrapper[4772]: I1122 13:18:47.650365 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2vntc" Nov 22 13:18:47 crc kubenswrapper[4772]: I1122 13:18:47.651454 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2vntc" Nov 22 13:18:47 crc kubenswrapper[4772]: I1122 13:18:47.704065 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2vntc" Nov 22 13:18:47 crc 
kubenswrapper[4772]: I1122 13:18:47.728677 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2vntc" podStartSLOduration=7.269989329 podStartE2EDuration="11.728654509s" podCreationTimestamp="2025-11-22 13:18:36 +0000 UTC" firstStartedPulling="2025-11-22 13:18:38.618673444 +0000 UTC m=+9638.858117928" lastFinishedPulling="2025-11-22 13:18:43.077338614 +0000 UTC m=+9643.316783108" observedRunningTime="2025-11-22 13:18:43.699969696 +0000 UTC m=+9643.939414190" watchObservedRunningTime="2025-11-22 13:18:47.728654509 +0000 UTC m=+9647.968099003" Nov 22 13:18:48 crc kubenswrapper[4772]: I1122 13:18:48.793946 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2vntc" Nov 22 13:18:50 crc kubenswrapper[4772]: I1122 13:18:50.911541 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2vntc"] Nov 22 13:18:50 crc kubenswrapper[4772]: I1122 13:18:50.912495 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2vntc" podUID="f2a3e8b6-2537-4b2b-9f12-d864eaf1a2b8" containerName="registry-server" containerID="cri-o://6fc8f91f999bce3b1b2b36f51b5c58b5f1c2d8acfb5ec4020201d2bd7fdd8ccf" gracePeriod=2 Nov 22 13:18:51 crc kubenswrapper[4772]: I1122 13:18:51.470373 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2vntc" Nov 22 13:18:51 crc kubenswrapper[4772]: I1122 13:18:51.485060 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2a3e8b6-2537-4b2b-9f12-d864eaf1a2b8-catalog-content\") pod \"f2a3e8b6-2537-4b2b-9f12-d864eaf1a2b8\" (UID: \"f2a3e8b6-2537-4b2b-9f12-d864eaf1a2b8\") " Nov 22 13:18:51 crc kubenswrapper[4772]: I1122 13:18:51.485616 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2a3e8b6-2537-4b2b-9f12-d864eaf1a2b8-utilities\") pod \"f2a3e8b6-2537-4b2b-9f12-d864eaf1a2b8\" (UID: \"f2a3e8b6-2537-4b2b-9f12-d864eaf1a2b8\") " Nov 22 13:18:51 crc kubenswrapper[4772]: I1122 13:18:51.485702 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zq8zk\" (UniqueName: \"kubernetes.io/projected/f2a3e8b6-2537-4b2b-9f12-d864eaf1a2b8-kube-api-access-zq8zk\") pod \"f2a3e8b6-2537-4b2b-9f12-d864eaf1a2b8\" (UID: \"f2a3e8b6-2537-4b2b-9f12-d864eaf1a2b8\") " Nov 22 13:18:51 crc kubenswrapper[4772]: I1122 13:18:51.486800 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2a3e8b6-2537-4b2b-9f12-d864eaf1a2b8-utilities" (OuterVolumeSpecName: "utilities") pod "f2a3e8b6-2537-4b2b-9f12-d864eaf1a2b8" (UID: "f2a3e8b6-2537-4b2b-9f12-d864eaf1a2b8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 13:18:51 crc kubenswrapper[4772]: I1122 13:18:51.502462 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2a3e8b6-2537-4b2b-9f12-d864eaf1a2b8-kube-api-access-zq8zk" (OuterVolumeSpecName: "kube-api-access-zq8zk") pod "f2a3e8b6-2537-4b2b-9f12-d864eaf1a2b8" (UID: "f2a3e8b6-2537-4b2b-9f12-d864eaf1a2b8"). InnerVolumeSpecName "kube-api-access-zq8zk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 13:18:51 crc kubenswrapper[4772]: I1122 13:18:51.548314 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2a3e8b6-2537-4b2b-9f12-d864eaf1a2b8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f2a3e8b6-2537-4b2b-9f12-d864eaf1a2b8" (UID: "f2a3e8b6-2537-4b2b-9f12-d864eaf1a2b8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 13:18:51 crc kubenswrapper[4772]: I1122 13:18:51.587692 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2a3e8b6-2537-4b2b-9f12-d864eaf1a2b8-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 13:18:51 crc kubenswrapper[4772]: I1122 13:18:51.587741 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zq8zk\" (UniqueName: \"kubernetes.io/projected/f2a3e8b6-2537-4b2b-9f12-d864eaf1a2b8-kube-api-access-zq8zk\") on node \"crc\" DevicePath \"\"" Nov 22 13:18:51 crc kubenswrapper[4772]: I1122 13:18:51.587755 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2a3e8b6-2537-4b2b-9f12-d864eaf1a2b8-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 13:18:51 crc kubenswrapper[4772]: I1122 13:18:51.763242 4772 generic.go:334] "Generic (PLEG): container finished" podID="f2a3e8b6-2537-4b2b-9f12-d864eaf1a2b8" containerID="6fc8f91f999bce3b1b2b36f51b5c58b5f1c2d8acfb5ec4020201d2bd7fdd8ccf" exitCode=0 Nov 22 13:18:51 crc kubenswrapper[4772]: I1122 13:18:51.763312 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2vntc" event={"ID":"f2a3e8b6-2537-4b2b-9f12-d864eaf1a2b8","Type":"ContainerDied","Data":"6fc8f91f999bce3b1b2b36f51b5c58b5f1c2d8acfb5ec4020201d2bd7fdd8ccf"} Nov 22 13:18:51 crc kubenswrapper[4772]: I1122 13:18:51.763325 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2vntc" Nov 22 13:18:51 crc kubenswrapper[4772]: I1122 13:18:51.763355 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2vntc" event={"ID":"f2a3e8b6-2537-4b2b-9f12-d864eaf1a2b8","Type":"ContainerDied","Data":"6455aac34af70eff211063923c1fee51a97a69f3d2d2037ccb735e935548f785"} Nov 22 13:18:51 crc kubenswrapper[4772]: I1122 13:18:51.763386 4772 scope.go:117] "RemoveContainer" containerID="6fc8f91f999bce3b1b2b36f51b5c58b5f1c2d8acfb5ec4020201d2bd7fdd8ccf" Nov 22 13:18:51 crc kubenswrapper[4772]: I1122 13:18:51.790759 4772 scope.go:117] "RemoveContainer" containerID="db80f6463c0b83584e18618bb4081e4be9791516f3667d388c40a368030b059f" Nov 22 13:18:51 crc kubenswrapper[4772]: I1122 13:18:51.810939 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2vntc"] Nov 22 13:18:51 crc kubenswrapper[4772]: I1122 13:18:51.815077 4772 scope.go:117] "RemoveContainer" containerID="e1eb6ccd99690660f56c36b39b4a8dc671754927dceed2aa62d9bd4082b39c6f" Nov 22 13:18:51 crc kubenswrapper[4772]: I1122 13:18:51.823306 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2vntc"] Nov 22 13:18:51 crc kubenswrapper[4772]: I1122 13:18:51.879671 4772 scope.go:117] "RemoveContainer" containerID="6fc8f91f999bce3b1b2b36f51b5c58b5f1c2d8acfb5ec4020201d2bd7fdd8ccf" Nov 22 13:18:51 crc kubenswrapper[4772]: E1122 13:18:51.880126 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fc8f91f999bce3b1b2b36f51b5c58b5f1c2d8acfb5ec4020201d2bd7fdd8ccf\": container with ID starting with 6fc8f91f999bce3b1b2b36f51b5c58b5f1c2d8acfb5ec4020201d2bd7fdd8ccf not found: ID does not exist" containerID="6fc8f91f999bce3b1b2b36f51b5c58b5f1c2d8acfb5ec4020201d2bd7fdd8ccf" Nov 22 13:18:51 crc kubenswrapper[4772]: I1122 13:18:51.880168 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fc8f91f999bce3b1b2b36f51b5c58b5f1c2d8acfb5ec4020201d2bd7fdd8ccf"} err="failed to get container status \"6fc8f91f999bce3b1b2b36f51b5c58b5f1c2d8acfb5ec4020201d2bd7fdd8ccf\": rpc error: code = NotFound desc = could not find container \"6fc8f91f999bce3b1b2b36f51b5c58b5f1c2d8acfb5ec4020201d2bd7fdd8ccf\": container with ID starting with 6fc8f91f999bce3b1b2b36f51b5c58b5f1c2d8acfb5ec4020201d2bd7fdd8ccf not found: ID does not exist" Nov 22 13:18:51 crc kubenswrapper[4772]: I1122 13:18:51.880199 4772 scope.go:117] "RemoveContainer" containerID="db80f6463c0b83584e18618bb4081e4be9791516f3667d388c40a368030b059f" Nov 22 13:18:51 crc kubenswrapper[4772]: E1122 13:18:51.880480 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db80f6463c0b83584e18618bb4081e4be9791516f3667d388c40a368030b059f\": container with ID starting with db80f6463c0b83584e18618bb4081e4be9791516f3667d388c40a368030b059f not found: ID does not exist" containerID="db80f6463c0b83584e18618bb4081e4be9791516f3667d388c40a368030b059f" Nov 22 13:18:51 crc kubenswrapper[4772]: I1122 13:18:51.880543 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db80f6463c0b83584e18618bb4081e4be9791516f3667d388c40a368030b059f"} err="failed to get container status \"db80f6463c0b83584e18618bb4081e4be9791516f3667d388c40a368030b059f\": rpc error: code = NotFound desc = could not find 
container \"db80f6463c0b83584e18618bb4081e4be9791516f3667d388c40a368030b059f\": container with ID starting with db80f6463c0b83584e18618bb4081e4be9791516f3667d388c40a368030b059f not found: ID does not exist" Nov 22 13:18:51 crc kubenswrapper[4772]: I1122 13:18:51.880580 4772 scope.go:117] "RemoveContainer" containerID="e1eb6ccd99690660f56c36b39b4a8dc671754927dceed2aa62d9bd4082b39c6f" Nov 22 13:18:51 crc kubenswrapper[4772]: E1122 13:18:51.880860 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1eb6ccd99690660f56c36b39b4a8dc671754927dceed2aa62d9bd4082b39c6f\": container with ID starting with e1eb6ccd99690660f56c36b39b4a8dc671754927dceed2aa62d9bd4082b39c6f not found: ID does not exist" containerID="e1eb6ccd99690660f56c36b39b4a8dc671754927dceed2aa62d9bd4082b39c6f" Nov 22 13:18:51 crc kubenswrapper[4772]: I1122 13:18:51.880890 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1eb6ccd99690660f56c36b39b4a8dc671754927dceed2aa62d9bd4082b39c6f"} err="failed to get container status \"e1eb6ccd99690660f56c36b39b4a8dc671754927dceed2aa62d9bd4082b39c6f\": rpc error: code = NotFound desc = could not find container \"e1eb6ccd99690660f56c36b39b4a8dc671754927dceed2aa62d9bd4082b39c6f\": container with ID starting with e1eb6ccd99690660f56c36b39b4a8dc671754927dceed2aa62d9bd4082b39c6f not found: ID does not exist" Nov 22 13:18:53 crc kubenswrapper[4772]: I1122 13:18:53.430211 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2a3e8b6-2537-4b2b-9f12-d864eaf1a2b8" path="/var/lib/kubelet/pods/f2a3e8b6-2537-4b2b-9f12-d864eaf1a2b8/volumes" Nov 22 13:20:01 crc kubenswrapper[4772]: I1122 13:20:01.533428 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 13:20:01 crc kubenswrapper[4772]: I1122 13:20:01.534262 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 13:20:31 crc kubenswrapper[4772]: I1122 13:20:31.533300 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 13:20:31 crc kubenswrapper[4772]: I1122 13:20:31.534302 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 13:21:01 crc kubenswrapper[4772]: I1122 13:21:01.533191 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 
13:21:01 crc kubenswrapper[4772]: I1122 13:21:01.534257 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 13:21:01 crc kubenswrapper[4772]: I1122 13:21:01.534336 4772 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" Nov 22 13:21:01 crc kubenswrapper[4772]: I1122 13:21:01.535772 4772 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9751772cce3498087c1c979229fc021c29c99fe4322e140218c3d05d6ab30edd"} pod="openshift-machine-config-operator/machine-config-daemon-wwshd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 13:21:01 crc kubenswrapper[4772]: I1122 13:21:01.535867 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" containerID="cri-o://9751772cce3498087c1c979229fc021c29c99fe4322e140218c3d05d6ab30edd" gracePeriod=600 Nov 22 13:21:02 crc kubenswrapper[4772]: I1122 13:21:02.368232 4772 generic.go:334] "Generic (PLEG): container finished" podID="2386c238-461f-4956-940f-ac3c26eb052e" containerID="9751772cce3498087c1c979229fc021c29c99fe4322e140218c3d05d6ab30edd" exitCode=0 Nov 22 13:21:02 crc kubenswrapper[4772]: I1122 13:21:02.368284 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" event={"ID":"2386c238-461f-4956-940f-ac3c26eb052e","Type":"ContainerDied","Data":"9751772cce3498087c1c979229fc021c29c99fe4322e140218c3d05d6ab30edd"} Nov 22 13:21:02 crc kubenswrapper[4772]: I1122 13:21:02.368851 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" event={"ID":"2386c238-461f-4956-940f-ac3c26eb052e","Type":"ContainerStarted","Data":"985c9c30ae52d69eeeca49c18b0f0c38c324832a5272046e265464e7846e64d6"} Nov 22 13:21:02 crc kubenswrapper[4772]: I1122 13:21:02.368902 4772 scope.go:117] "RemoveContainer" containerID="0f0ceb8d50c48253b7b22a7a8a271db084d9899785bb62159676dccb6db6f1ba" Nov 22 13:22:37 crc kubenswrapper[4772]: I1122 13:22:37.379558 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_ee08a124-951b-4447-ad8a-7178df62bc7c/init-config-reloader/0.log" Nov 22 13:22:37 crc kubenswrapper[4772]: I1122 13:22:37.533504 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_ee08a124-951b-4447-ad8a-7178df62bc7c/init-config-reloader/0.log" Nov 22 13:22:37 crc kubenswrapper[4772]: I1122 13:22:37.629744 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_ee08a124-951b-4447-ad8a-7178df62bc7c/alertmanager/0.log" Nov 22 13:22:37 crc kubenswrapper[4772]: I1122 13:22:37.691725 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_ee08a124-951b-4447-ad8a-7178df62bc7c/config-reloader/0.log" Nov 22 13:22:37 crc kubenswrapper[4772]: I1122 13:22:37.862241 4772 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_aodh-0_baab6010-9072-4e34-8a19-afd2bf22ebce/aodh-api/0.log" Nov 22 13:22:37 crc kubenswrapper[4772]: I1122 13:22:37.900776 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_baab6010-9072-4e34-8a19-afd2bf22ebce/aodh-evaluator/0.log" Nov 22 13:22:37 crc kubenswrapper[4772]: I1122 13:22:37.962437 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_baab6010-9072-4e34-8a19-afd2bf22ebce/aodh-listener/0.log" Nov 22 13:22:38 crc kubenswrapper[4772]: I1122 13:22:38.034122 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_baab6010-9072-4e34-8a19-afd2bf22ebce/aodh-notifier/0.log" Nov 22 13:22:38 crc kubenswrapper[4772]: I1122 13:22:38.168335 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-58fcc7f846-vt29r_5d30a1bc-779c-4c29-aa6c-cc69243f7a32/barbican-api/0.log" Nov 22 13:22:38 crc kubenswrapper[4772]: I1122 13:22:38.222617 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-58fcc7f846-vt29r_5d30a1bc-779c-4c29-aa6c-cc69243f7a32/barbican-api-log/0.log" Nov 22 13:22:38 crc kubenswrapper[4772]: I1122 13:22:38.465477 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-78cbb88bb-tbns2_a5b466e0-1ece-43ad-8898-7d98ebc952e4/barbican-keystone-listener/0.log" Nov 22 13:22:38 crc kubenswrapper[4772]: I1122 13:22:38.503528 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-78cbb88bb-tbns2_a5b466e0-1ece-43ad-8898-7d98ebc952e4/barbican-keystone-listener-log/0.log" Nov 22 13:22:38 crc kubenswrapper[4772]: I1122 13:22:38.602668 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-59dc6657f-22jc6_edbde106-63a1-4a4b-91af-bb723902586e/barbican-worker/0.log" Nov 22 13:22:38 crc kubenswrapper[4772]: I1122 13:22:38.661991 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-59dc6657f-22jc6_edbde106-63a1-4a4b-91af-bb723902586e/barbican-worker-log/0.log" Nov 22 13:22:38 crc kubenswrapper[4772]: I1122 13:22:38.718420 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-openstack-openstack-cell1-pqq5v_1933b73e-e00d-490d-b8d0-26eab9d3b9a8/bootstrap-openstack-openstack-cell1/0.log" Nov 22 13:22:38 crc kubenswrapper[4772]: I1122 13:22:38.918488 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2fc535b2-2f64-4971-b836-12565953a22f/ceilometer-central-agent/0.log" Nov 22 13:22:38 crc kubenswrapper[4772]: I1122 13:22:38.953030 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2fc535b2-2f64-4971-b836-12565953a22f/ceilometer-notification-agent/0.log" Nov 22 13:22:39 crc kubenswrapper[4772]: I1122 13:22:39.013778 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2fc535b2-2f64-4971-b836-12565953a22f/proxy-httpd/0.log" Nov 22 13:22:39 crc kubenswrapper[4772]: I1122 13:22:39.165567 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2fc535b2-2f64-4971-b836-12565953a22f/sg-core/0.log" Nov 22 13:22:39 crc kubenswrapper[4772]: I1122 13:22:39.226079 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-client-openstack-openstack-cell1-c2qgd_6938b21e-c1bf-418d-a88e-f39b7a771257/ceph-client-openstack-openstack-cell1/0.log" Nov 22 13:22:39 crc kubenswrapper[4772]: I1122 13:22:39.451648 
4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_ad882bee-d513-428f-b177-0a6412268f7a/cinder-api/0.log" Nov 22 13:22:39 crc kubenswrapper[4772]: I1122 13:22:39.512500 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_ad882bee-d513-428f-b177-0a6412268f7a/cinder-api-log/0.log" Nov 22 13:22:39 crc kubenswrapper[4772]: I1122 13:22:39.737543 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_784dd71b-f2ce-4ba9-9e18-b5af04ecb90b/probe/0.log" Nov 22 13:22:39 crc kubenswrapper[4772]: I1122 13:22:39.746342 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_784dd71b-f2ce-4ba9-9e18-b5af04ecb90b/cinder-backup/0.log" Nov 22 13:22:39 crc kubenswrapper[4772]: I1122 13:22:39.823933 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_024e30d7-cf0f-4a2d-a815-ecc22c0a9769/cinder-scheduler/0.log" Nov 22 13:22:40 crc kubenswrapper[4772]: I1122 13:22:40.013029 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_024e30d7-cf0f-4a2d-a815-ecc22c0a9769/probe/0.log" Nov 22 13:22:40 crc kubenswrapper[4772]: I1122 13:22:40.177635 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_f05efc86-3c87-4c7c-8a65-03152b33376c/cinder-volume/0.log" Nov 22 13:22:40 crc kubenswrapper[4772]: I1122 13:22:40.198905 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_f05efc86-3c87-4c7c-8a65-03152b33376c/probe/0.log" Nov 22 13:22:40 crc kubenswrapper[4772]: I1122 13:22:40.294959 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-openstack-openstack-cell1-swxh9_bc459ceb-9b5a-42e9-a52d-68970c756a8e/configure-network-openstack-openstack-cell1/0.log" Nov 22 13:22:40 crc kubenswrapper[4772]: I1122 13:22:40.414795 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-openstack-openstack-cell1-t29j2_8470ca02-9bf5-4f87-80ea-55c09de031e8/configure-os-openstack-openstack-cell1/0.log" Nov 22 13:22:41 crc kubenswrapper[4772]: I1122 13:22:41.035836 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-57644db565-l5cdf_ff5945d2-041b-499f-8d6f-74dcc4d34fdb/init/0.log" Nov 22 13:22:41 crc kubenswrapper[4772]: I1122 13:22:41.197176 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-57644db565-l5cdf_ff5945d2-041b-499f-8d6f-74dcc4d34fdb/init/0.log" Nov 22 13:22:41 crc kubenswrapper[4772]: I1122 13:22:41.277509 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-57644db565-l5cdf_ff5945d2-041b-499f-8d6f-74dcc4d34fdb/dnsmasq-dns/0.log" Nov 22 13:22:41 crc kubenswrapper[4772]: I1122 13:22:41.300609 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-openstack-openstack-cell1-b9cpv_bdd3bebe-c9c8-4f42-8e6d-7f85806cdde9/download-cache-openstack-openstack-cell1/0.log" Nov 22 13:22:41 crc kubenswrapper[4772]: I1122 13:22:41.476120 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_a02af0f4-1267-436a-9c11-e7ef2444906c/glance-httpd/0.log" Nov 22 13:22:41 crc kubenswrapper[4772]: I1122 13:22:41.500736 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_a02af0f4-1267-436a-9c11-e7ef2444906c/glance-log/0.log" Nov 22 13:22:41 crc 
kubenswrapper[4772]: I1122 13:22:41.524420 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_fd10b3de-dc99-4463-ae4b-30ea9aaa642e/glance-httpd/0.log" Nov 22 13:22:41 crc kubenswrapper[4772]: I1122 13:22:41.628426 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_fd10b3de-dc99-4463-ae4b-30ea9aaa642e/glance-log/0.log" Nov 22 13:22:41 crc kubenswrapper[4772]: I1122 13:22:41.870262 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-5b6f944f87-bdzd6_e7285a22-1d2c-48d8-87cd-fb7e47904824/heat-api/0.log" Nov 22 13:22:41 crc kubenswrapper[4772]: I1122 13:22:41.947217 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-6f5dddbfd-ssd79_6be9573b-55de-4e3c-892e-246b9d85269e/heat-cfnapi/0.log" Nov 22 13:22:42 crc kubenswrapper[4772]: I1122 13:22:42.039957 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-6958597d57-z4ml8_c2bdccd8-3230-4ade-828e-baed9abe01de/heat-engine/0.log" Nov 22 13:22:42 crc kubenswrapper[4772]: I1122 13:22:42.143392 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5499597ffc-9lp9r_35e8a6fd-fc93-4f4f-b4b4-849665217dc3/horizon/0.log" Nov 22 13:22:42 crc kubenswrapper[4772]: I1122 13:22:42.249989 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5499597ffc-9lp9r_35e8a6fd-fc93-4f4f-b4b4-849665217dc3/horizon-log/0.log" Nov 22 13:22:42 crc kubenswrapper[4772]: I1122 13:22:42.340839 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-openstack-openstack-cell1-5zpzf_16477c52-8296-4d66-ad5f-78826cc5bab7/install-certs-openstack-openstack-cell1/0.log" Nov 22 13:22:42 crc kubenswrapper[4772]: I1122 13:22:42.385069 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-openstack-openstack-cell1-p42sl_5a543b1b-d97d-48fa-bc61-3ba67778aa3a/install-os-openstack-openstack-cell1/0.log" Nov 22 13:22:42 crc kubenswrapper[4772]: I1122 13:22:42.613821 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-856c54c998-qr6df_abe96e9e-94c7-43ba-b37d-bec6a589c004/keystone-api/0.log" Nov 22 13:22:42 crc kubenswrapper[4772]: I1122 13:22:42.645683 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29396941-tw2c7_7e7dfe28-05c8-4e1a-b1f3-73589d504864/keystone-cron/0.log" Nov 22 13:22:43 crc kubenswrapper[4772]: I1122 13:22:43.266780 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-openstack-openstack-cell1-vf8xp_aebecc6e-2ba7-423a-b983-f4698c836a86/libvirt-openstack-openstack-cell1/0.log" Nov 22 13:22:43 crc kubenswrapper[4772]: I1122 13:22:43.338941 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_bb9d0b8b-8de9-4563-98a5-0f494973330e/kube-state-metrics/0.log" Nov 22 13:22:43 crc kubenswrapper[4772]: I1122 13:22:43.458263 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_e98833bc-72ed-444d-a7f3-e1c886658154/manila-api/0.log" Nov 22 13:22:43 crc kubenswrapper[4772]: I1122 13:22:43.551387 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_e98833bc-72ed-444d-a7f3-e1c886658154/manila-api-log/0.log" Nov 22 13:22:43 crc kubenswrapper[4772]: I1122 13:22:43.651628 4772 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_manila-scheduler-0_3274781d-cba6-440d-a492-5d9e7cfdb23a/manila-scheduler/0.log" Nov 22 13:22:43 crc kubenswrapper[4772]: I1122 13:22:43.679797 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_3274781d-cba6-440d-a492-5d9e7cfdb23a/probe/0.log" Nov 22 13:22:43 crc kubenswrapper[4772]: I1122 13:22:43.788673 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_1a69df5d-3116-4547-b106-d11e052b96d9/manila-share/0.log" Nov 22 13:22:43 crc kubenswrapper[4772]: I1122 13:22:43.810663 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_1a69df5d-3116-4547-b106-d11e052b96d9/probe/0.log" Nov 22 13:22:43 crc kubenswrapper[4772]: I1122 13:22:43.919471 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-copy-data_79ada2a2-307a-4c67-bd8d-c2e3b351e127/adoption/0.log" Nov 22 13:22:45 crc kubenswrapper[4772]: I1122 13:22:45.194637 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6d6dc9d555-trfk6_48903cd6-6a18-4b98-979a-cab6160c1a98/neutron-httpd/0.log" Nov 22 13:22:45 crc kubenswrapper[4772]: I1122 13:22:45.276563 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6d6dc9d555-trfk6_48903cd6-6a18-4b98-979a-cab6160c1a98/neutron-api/0.log" Nov 22 13:22:45 crc kubenswrapper[4772]: I1122 13:22:45.459264 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-dhcp-openstack-openstack-cell1-jwgtt_de658c90-11c4-4861-b1df-5dfb7da0bdf0/neutron-dhcp-openstack-openstack-cell1/0.log" Nov 22 13:22:45 crc kubenswrapper[4772]: I1122 13:22:45.600371 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-openstack-openstack-cell1-n6cc8_7100bac9-b56a-4d24-9a30-528db4074857/neutron-metadata-openstack-openstack-cell1/0.log" Nov 22 13:22:45 crc kubenswrapper[4772]: I1122 13:22:45.835681 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-sriov-openstack-openstack-cell1-kmwfl_5f0aff0b-9692-4d9c-a0be-12694f7e71f8/neutron-sriov-openstack-openstack-cell1/0.log" Nov 22 13:22:45 crc kubenswrapper[4772]: I1122 13:22:45.950001 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_29bf3574-4bfe-4b57-90a0-4b76860bfc1c/nova-api-api/0.log" Nov 22 13:22:46 crc kubenswrapper[4772]: I1122 13:22:46.040408 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_29bf3574-4bfe-4b57-90a0-4b76860bfc1c/nova-api-log/0.log" Nov 22 13:22:46 crc kubenswrapper[4772]: I1122 13:22:46.209105 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_47aa8385-dcce-4adf-b113-79460b95e145/nova-cell0-conductor-conductor/0.log" Nov 22 13:22:46 crc kubenswrapper[4772]: I1122 13:22:46.311812 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_86c2a53a-f52b-49b3-9bc3-105cf5918b7f/nova-cell1-conductor-conductor/0.log" Nov 22 13:22:46 crc kubenswrapper[4772]: I1122 13:22:46.552607 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_b0338152-6fc6-4c47-9f8f-49239851a5d4/nova-cell1-novncproxy-novncproxy/0.log" Nov 22 13:22:46 crc kubenswrapper[4772]: I1122 13:22:46.718137 4772 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell69pxs_819a5030-8de8-4772-86bf-9d6fb6f8de4e/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1/0.log" Nov 22 13:22:46 crc kubenswrapper[4772]: I1122 13:22:46.881872 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-openstack-openstack-cell1-rfh4k_4ab47679-8af5-427d-bac6-71a380ae0130/nova-cell1-openstack-openstack-cell1/0.log" Nov 22 13:22:47 crc kubenswrapper[4772]: I1122 13:22:47.041900 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_a715a14b-bf58-4263-8b67-15d6e90adc77/nova-metadata-log/0.log" Nov 22 13:22:47 crc kubenswrapper[4772]: I1122 13:22:47.129303 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_a715a14b-bf58-4263-8b67-15d6e90adc77/nova-metadata-metadata/0.log" Nov 22 13:22:47 crc kubenswrapper[4772]: I1122 13:22:47.258688 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_3366e4a4-1c0a-4314-805b-72e70cd70289/nova-scheduler-scheduler/0.log" Nov 22 13:22:47 crc kubenswrapper[4772]: I1122 13:22:47.405651 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-67f985d648-m9x49_64c0135c-1c06-4137-b911-951fc37de7e1/init/0.log" Nov 22 13:22:47 crc kubenswrapper[4772]: I1122 13:22:47.628966 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-67f985d648-m9x49_64c0135c-1c06-4137-b911-951fc37de7e1/init/0.log" Nov 22 13:22:47 crc kubenswrapper[4772]: I1122 13:22:47.689466 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-67f985d648-m9x49_64c0135c-1c06-4137-b911-951fc37de7e1/octavia-api-provider-agent/0.log" Nov 22 13:22:47 crc kubenswrapper[4772]: I1122 13:22:47.871955 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-healthmanager-gdsq4_7a997166-9f14-4bb7-a94a-427e72fb64d2/init/0.log" Nov 22 13:22:47 crc kubenswrapper[4772]: I1122 13:22:47.928023 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-67f985d648-m9x49_64c0135c-1c06-4137-b911-951fc37de7e1/octavia-api/0.log" Nov 22 13:22:48 crc kubenswrapper[4772]: I1122 13:22:48.009619 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-healthmanager-gdsq4_7a997166-9f14-4bb7-a94a-427e72fb64d2/init/0.log" Nov 22 13:22:48 crc kubenswrapper[4772]: I1122 13:22:48.157490 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-healthmanager-gdsq4_7a997166-9f14-4bb7-a94a-427e72fb64d2/octavia-healthmanager/0.log" Nov 22 13:22:48 crc kubenswrapper[4772]: I1122 13:22:48.178574 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-housekeeping-mlsks_c03eae43-9823-42ce-b3f3-2f437e08fd71/init/0.log" Nov 22 13:22:48 crc kubenswrapper[4772]: I1122 13:22:48.400415 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-housekeeping-mlsks_c03eae43-9823-42ce-b3f3-2f437e08fd71/init/0.log" Nov 22 13:22:48 crc kubenswrapper[4772]: I1122 13:22:48.429628 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-housekeeping-mlsks_c03eae43-9823-42ce-b3f3-2f437e08fd71/octavia-housekeeping/0.log" Nov 22 13:22:48 crc kubenswrapper[4772]: I1122 13:22:48.556406 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-image-upload-59f8cff499-tc6tq_ccbe9325-5948-44bc-9bd6-55892b81e85e/init/0.log" Nov 22 
13:22:48 crc kubenswrapper[4772]: I1122 13:22:48.666236 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-image-upload-59f8cff499-tc6tq_ccbe9325-5948-44bc-9bd6-55892b81e85e/octavia-amphora-httpd/0.log" Nov 22 13:22:48 crc kubenswrapper[4772]: I1122 13:22:48.762026 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-image-upload-59f8cff499-tc6tq_ccbe9325-5948-44bc-9bd6-55892b81e85e/init/0.log" Nov 22 13:22:48 crc kubenswrapper[4772]: I1122 13:22:48.871041 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-rsyslog-2zvhj_623bfb17-7669-4a36-bdf9-baae1d6afbf9/init/0.log" Nov 22 13:22:49 crc kubenswrapper[4772]: I1122 13:22:49.080765 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-rsyslog-2zvhj_623bfb17-7669-4a36-bdf9-baae1d6afbf9/init/0.log" Nov 22 13:22:49 crc kubenswrapper[4772]: I1122 13:22:49.116577 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-rsyslog-2zvhj_623bfb17-7669-4a36-bdf9-baae1d6afbf9/octavia-rsyslog/0.log" Nov 22 13:22:49 crc kubenswrapper[4772]: I1122 13:22:49.197288 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-7w6cx_281d85f9-b24f-4264-a5a1-6cf0f9d24f18/init/0.log" Nov 22 13:22:49 crc kubenswrapper[4772]: I1122 13:22:49.340019 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-7w6cx_281d85f9-b24f-4264-a5a1-6cf0f9d24f18/init/0.log" Nov 22 13:22:49 crc kubenswrapper[4772]: I1122 13:22:49.480784 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-7w6cx_281d85f9-b24f-4264-a5a1-6cf0f9d24f18/octavia-worker/0.log" Nov 22 13:22:49 crc kubenswrapper[4772]: I1122 13:22:49.530064 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_c115c1ee-d75c-4d15-9c61-e3a17dec5c3a/mysql-bootstrap/0.log" Nov 22 13:22:49 crc kubenswrapper[4772]: I1122 13:22:49.709692 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_c115c1ee-d75c-4d15-9c61-e3a17dec5c3a/mysql-bootstrap/0.log" Nov 22 13:22:49 crc kubenswrapper[4772]: I1122 13:22:49.747246 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_c115c1ee-d75c-4d15-9c61-e3a17dec5c3a/galera/0.log" Nov 22 13:22:49 crc kubenswrapper[4772]: I1122 13:22:49.842578 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_9011bc5e-6c7d-4bc3-a426-1f0b0305bf2f/mysql-bootstrap/0.log" Nov 22 13:22:50 crc kubenswrapper[4772]: I1122 13:22:50.015221 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_9011bc5e-6c7d-4bc3-a426-1f0b0305bf2f/mysql-bootstrap/0.log" Nov 22 13:22:50 crc kubenswrapper[4772]: I1122 13:22:50.105501 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_9011bc5e-6c7d-4bc3-a426-1f0b0305bf2f/galera/0.log" Nov 22 13:22:50 crc kubenswrapper[4772]: I1122 13:22:50.118488 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_6a859593-08fa-42bc-9d33-defd6ad05df9/openstackclient/0.log" Nov 22 13:22:50 crc kubenswrapper[4772]: I1122 13:22:50.308182 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-6gnhd_4a456388-57e0-45cc-a2a6-80ec40f01215/openstack-network-exporter/0.log" Nov 22 13:22:50 crc kubenswrapper[4772]: I1122 13:22:50.431391 4772 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-qbssf_4221f6ce-957c-4881-8a78-a466cddc53e3/ovsdb-server-init/0.log" Nov 22 13:22:50 crc kubenswrapper[4772]: I1122 13:22:50.699784 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-qbssf_4221f6ce-957c-4881-8a78-a466cddc53e3/ovsdb-server-init/0.log" Nov 22 13:22:50 crc kubenswrapper[4772]: I1122 13:22:50.723406 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-qbssf_4221f6ce-957c-4881-8a78-a466cddc53e3/ovs-vswitchd/0.log" Nov 22 13:22:50 crc kubenswrapper[4772]: I1122 13:22:50.752946 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-qbssf_4221f6ce-957c-4881-8a78-a466cddc53e3/ovsdb-server/0.log" Nov 22 13:22:50 crc kubenswrapper[4772]: I1122 13:22:50.906234 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-s8m74_54838e90-0c76-4bd7-b959-83229a23745d/ovn-controller/0.log" Nov 22 13:22:51 crc kubenswrapper[4772]: I1122 13:22:51.467022 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-copy-data_345dc3d0-626f-4aba-a03f-57583bea5a5e/adoption/0.log" Nov 22 13:22:51 crc kubenswrapper[4772]: I1122 13:22:51.507961 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_312badf7-77eb-4792-9139-8f06dec2e2ff/openstack-network-exporter/0.log" Nov 22 13:22:51 crc kubenswrapper[4772]: I1122 13:22:51.775837 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_312badf7-77eb-4792-9139-8f06dec2e2ff/ovn-northd/0.log" Nov 22 13:22:51 crc kubenswrapper[4772]: I1122 13:22:51.931679 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-openstack-openstack-cell1-9xmlp_59226403-65f4-4187-96bc-5a0fe5da070c/ovn-openstack-openstack-cell1/0.log" Nov 22 13:22:52 crc kubenswrapper[4772]: I1122 13:22:52.005338 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_458e152d-811c-44c7-913f-054116c0523d/openstack-network-exporter/0.log" Nov 22 13:22:52 crc kubenswrapper[4772]: I1122 13:22:52.119606 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_458e152d-811c-44c7-913f-054116c0523d/ovsdbserver-nb/0.log" Nov 22 13:22:52 crc kubenswrapper[4772]: I1122 13:22:52.281038 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_06ce39ce-1767-417d-92bc-77f78e1df6a8/openstack-network-exporter/0.log" Nov 22 13:22:52 crc kubenswrapper[4772]: I1122 13:22:52.314858 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_06ce39ce-1767-417d-92bc-77f78e1df6a8/ovsdbserver-nb/0.log" Nov 22 13:22:52 crc kubenswrapper[4772]: I1122 13:22:52.543809 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_0b2e69ae-0817-4c5e-9b7d-93196d354047/ovsdbserver-nb/0.log" Nov 22 13:22:52 crc kubenswrapper[4772]: I1122 13:22:52.573197 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_0b2e69ae-0817-4c5e-9b7d-93196d354047/openstack-network-exporter/0.log" Nov 22 13:22:52 crc kubenswrapper[4772]: I1122 13:22:52.666838 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_efafb1c9-8e1e-41ae-8d18-04588d896fc4/openstack-network-exporter/0.log" Nov 22 13:22:52 crc kubenswrapper[4772]: I1122 13:22:52.770054 4772 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-sb-0_efafb1c9-8e1e-41ae-8d18-04588d896fc4/ovsdbserver-sb/0.log" Nov 22 13:22:52 crc kubenswrapper[4772]: I1122 13:22:52.889873 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_82717dcc-f9bd-40e8-8125-710e5a3f9374/openstack-network-exporter/0.log" Nov 22 13:22:52 crc kubenswrapper[4772]: I1122 13:22:52.921277 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_82717dcc-f9bd-40e8-8125-710e5a3f9374/ovsdbserver-sb/0.log" Nov 22 13:22:53 crc kubenswrapper[4772]: I1122 13:22:53.663018 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_513b4b3b-e546-414e-ac74-f7730e8db4d1/ovsdbserver-sb/0.log" Nov 22 13:22:53 crc kubenswrapper[4772]: I1122 13:22:53.704395 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_513b4b3b-e546-414e-ac74-f7730e8db4d1/openstack-network-exporter/0.log" Nov 22 13:22:53 crc kubenswrapper[4772]: I1122 13:22:53.773879 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5858b9d67b-vdgcb_43446080-f192-4ecc-8261-3338c8da8d7c/placement-api/0.log" Nov 22 13:22:53 crc kubenswrapper[4772]: I1122 13:22:53.959659 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_pre-adoption-validation-openstack-pre-adoption-openstack-cclw5p_d5d839d3-cc8e-4c44-9739-f9f76e9ed1c9/pre-adoption-validation-openstack-pre-adoption-openstack-cell1/0.log" Nov 22 13:22:54 crc kubenswrapper[4772]: I1122 13:22:54.066966 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5858b9d67b-vdgcb_43446080-f192-4ecc-8261-3338c8da8d7c/placement-log/0.log" Nov 22 13:22:54 crc kubenswrapper[4772]: I1122 13:22:54.213455 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_248a6987-edb6-4837-9d52-ee1144ad1996/init-config-reloader/0.log" Nov 22 13:22:54 crc kubenswrapper[4772]: I1122 13:22:54.398589 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_248a6987-edb6-4837-9d52-ee1144ad1996/init-config-reloader/0.log" Nov 22 13:22:54 crc kubenswrapper[4772]: I1122 13:22:54.425270 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_248a6987-edb6-4837-9d52-ee1144ad1996/prometheus/0.log" Nov 22 13:22:54 crc kubenswrapper[4772]: I1122 13:22:54.455702 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_248a6987-edb6-4837-9d52-ee1144ad1996/config-reloader/0.log" Nov 22 13:22:54 crc kubenswrapper[4772]: I1122 13:22:54.500112 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_248a6987-edb6-4837-9d52-ee1144ad1996/thanos-sidecar/0.log" Nov 22 13:22:54 crc kubenswrapper[4772]: I1122 13:22:54.698472 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_fa9a9bc0-2f06-4490-bd06-eae00af9c7d0/setup-container/0.log" Nov 22 13:22:54 crc kubenswrapper[4772]: I1122 13:22:54.855656 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_fa9a9bc0-2f06-4490-bd06-eae00af9c7d0/setup-container/0.log" Nov 22 13:22:54 crc kubenswrapper[4772]: I1122 13:22:54.949329 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_fa9a9bc0-2f06-4490-bd06-eae00af9c7d0/rabbitmq/0.log" Nov 22 13:22:54 crc 
kubenswrapper[4772]: I1122 13:22:54.982163 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_7f5b0814-3abb-4b82-a919-305b358a05d0/setup-container/0.log" Nov 22 13:22:55 crc kubenswrapper[4772]: I1122 13:22:55.152885 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_7f5b0814-3abb-4b82-a919-305b358a05d0/setup-container/0.log" Nov 22 13:22:55 crc kubenswrapper[4772]: I1122 13:22:55.238970 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_7f5b0814-3abb-4b82-a919-305b358a05d0/rabbitmq/0.log" Nov 22 13:22:55 crc kubenswrapper[4772]: I1122 13:22:55.382457 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-openstack-openstack-cell1-wnpwf_49f54305-538d-4280-a76c-1590815fb686/reboot-os-openstack-openstack-cell1/0.log" Nov 22 13:22:55 crc kubenswrapper[4772]: I1122 13:22:55.480438 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_dbe0ec0d-a61b-4782-ad96-815ff03ed7de/memcached/0.log" Nov 22 13:22:55 crc kubenswrapper[4772]: I1122 13:22:55.483657 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-openstack-openstack-cell1-ss45z_11129a3e-cef5-417b-9b7d-1542708ac3bb/run-os-openstack-openstack-cell1/0.log" Nov 22 13:22:55 crc kubenswrapper[4772]: I1122 13:22:55.623926 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-openstack-s46xs_9d9e9bcb-f195-4aa9-86d4-531cd424d3b6/ssh-known-hosts-openstack/0.log" Nov 22 13:22:55 crc kubenswrapper[4772]: I1122 13:22:55.801866 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-openstack-openstack-cell1-hbpc7_e9a56275-d25a-4d0b-9d8c-20c46f7b200e/telemetry-openstack-openstack-cell1/0.log" Nov 22 13:22:55 crc kubenswrapper[4772]: I1122 13:22:55.885620 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tripleo-cleanup-tripleo-cleanup-openstack-cell1-zlksf_28d046d2-0d1c-4187-8d76-14d0004ec8e2/tripleo-cleanup-tripleo-cleanup-openstack-cell1/0.log" Nov 22 13:22:55 crc kubenswrapper[4772]: I1122 13:22:55.966956 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-openstack-openstack-cell1-qkcpc_2cfd11cf-ca9f-44fe-90c4-372f6d436285/validate-network-openstack-openstack-cell1/0.log" Nov 22 13:23:01 crc kubenswrapper[4772]: I1122 13:23:01.533208 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 13:23:01 crc kubenswrapper[4772]: I1122 13:23:01.533795 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 13:23:18 crc kubenswrapper[4772]: I1122 13:23:18.453230 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1bf3d28a711035aae8e0af644764edd86da0d97631b5988225039dced6r58v4_66f7c9f6-e1b5-4e01-956f-568be1f1fec3/util/0.log" Nov 22 13:23:18 crc kubenswrapper[4772]: I1122 13:23:18.670724 4772 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_1bf3d28a711035aae8e0af644764edd86da0d97631b5988225039dced6r58v4_66f7c9f6-e1b5-4e01-956f-568be1f1fec3/util/0.log" Nov 22 13:23:18 crc kubenswrapper[4772]: I1122 13:23:18.677521 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1bf3d28a711035aae8e0af644764edd86da0d97631b5988225039dced6r58v4_66f7c9f6-e1b5-4e01-956f-568be1f1fec3/pull/0.log" Nov 22 13:23:18 crc kubenswrapper[4772]: I1122 13:23:18.688579 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1bf3d28a711035aae8e0af644764edd86da0d97631b5988225039dced6r58v4_66f7c9f6-e1b5-4e01-956f-568be1f1fec3/pull/0.log" Nov 22 13:23:18 crc kubenswrapper[4772]: I1122 13:23:18.884274 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1bf3d28a711035aae8e0af644764edd86da0d97631b5988225039dced6r58v4_66f7c9f6-e1b5-4e01-956f-568be1f1fec3/pull/0.log" Nov 22 13:23:18 crc kubenswrapper[4772]: I1122 13:23:18.885600 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1bf3d28a711035aae8e0af644764edd86da0d97631b5988225039dced6r58v4_66f7c9f6-e1b5-4e01-956f-568be1f1fec3/util/0.log" Nov 22 13:23:18 crc kubenswrapper[4772]: I1122 13:23:18.909803 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1bf3d28a711035aae8e0af644764edd86da0d97631b5988225039dced6r58v4_66f7c9f6-e1b5-4e01-956f-568be1f1fec3/extract/0.log" Nov 22 13:23:19 crc kubenswrapper[4772]: I1122 13:23:19.064158 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-5bfbbb859d-bjmld_19381059-85ea-461e-baca-f0f511fdb677/kube-rbac-proxy/0.log" Nov 22 13:23:19 crc kubenswrapper[4772]: I1122 13:23:19.155221 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-748967c98-kx6h5_8085093b-cd6b-4ef5-9935-82eb224499c2/kube-rbac-proxy/0.log" Nov 22 13:23:19 crc kubenswrapper[4772]: I1122 13:23:19.212250 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-5bfbbb859d-bjmld_19381059-85ea-461e-baca-f0f511fdb677/manager/0.log" Nov 22 13:23:19 crc kubenswrapper[4772]: I1122 13:23:19.337984 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-748967c98-kx6h5_8085093b-cd6b-4ef5-9935-82eb224499c2/manager/0.log" Nov 22 13:23:19 crc kubenswrapper[4772]: I1122 13:23:19.373874 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6788cc6d75-9bnwv_89c6d87d-3d72-43b9-a56e-bb94322cc856/kube-rbac-proxy/0.log" Nov 22 13:23:19 crc kubenswrapper[4772]: I1122 13:23:19.404904 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6788cc6d75-9bnwv_89c6d87d-3d72-43b9-a56e-bb94322cc856/manager/0.log" Nov 22 13:23:19 crc kubenswrapper[4772]: I1122 13:23:19.601502 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-6f95d84fd6-bwjrb_b90351ff-d9b5-4d42-b7ef-915a5bd4251d/kube-rbac-proxy/0.log" Nov 22 13:23:19 crc kubenswrapper[4772]: I1122 13:23:19.699015 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-6f95d84fd6-bwjrb_b90351ff-d9b5-4d42-b7ef-915a5bd4251d/manager/0.log" Nov 22 13:23:19 crc kubenswrapper[4772]: I1122 
13:23:19.864741 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-698d6fd7d6-x46c7_2abf13f1-7bd8-4ea6-85a3-5c5658de7f48/manager/0.log" Nov 22 13:23:19 crc kubenswrapper[4772]: I1122 13:23:19.877758 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-698d6fd7d6-x46c7_2abf13f1-7bd8-4ea6-85a3-5c5658de7f48/kube-rbac-proxy/0.log" Nov 22 13:23:19 crc kubenswrapper[4772]: I1122 13:23:19.942746 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-7d5d9fd47f-xk9c8_ede58bb8-1f1d-4637-a89a-5075266ea932/kube-rbac-proxy/0.log" Nov 22 13:23:20 crc kubenswrapper[4772]: I1122 13:23:20.124858 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-7d5d9fd47f-xk9c8_ede58bb8-1f1d-4637-a89a-5075266ea932/manager/0.log" Nov 22 13:23:20 crc kubenswrapper[4772]: I1122 13:23:20.170316 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-6c55d8d69b-w8q6b_7d08d7e3-35a3-4dd2-b1c5-50980b9c20a9/kube-rbac-proxy/0.log" Nov 22 13:23:20 crc kubenswrapper[4772]: I1122 13:23:20.404718 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-54485f899-nhqm7_66c75ee1-78ca-448b-a2f0-0946014f82ff/kube-rbac-proxy/0.log" Nov 22 13:23:20 crc kubenswrapper[4772]: I1122 13:23:20.476464 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-6c55d8d69b-w8q6b_7d08d7e3-35a3-4dd2-b1c5-50980b9c20a9/manager/0.log" Nov 22 13:23:20 crc kubenswrapper[4772]: I1122 13:23:20.493061 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-54485f899-nhqm7_66c75ee1-78ca-448b-a2f0-0946014f82ff/manager/0.log" Nov 22 13:23:20 crc kubenswrapper[4772]: I1122 13:23:20.745491 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-79cc9d59f5-rvc7z_8044f4e1-088e-4e18-a9c4-a35265e4b62a/kube-rbac-proxy/0.log" Nov 22 13:23:20 crc kubenswrapper[4772]: I1122 13:23:20.786589 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-79cc9d59f5-rvc7z_8044f4e1-088e-4e18-a9c4-a35265e4b62a/manager/0.log" Nov 22 13:23:20 crc kubenswrapper[4772]: I1122 13:23:20.968442 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-646fd589f9-jpzwd_bc5e47a3-441e-4076-b726-99bb8cd36d95/kube-rbac-proxy/0.log" Nov 22 13:23:21 crc kubenswrapper[4772]: I1122 13:23:21.005444 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-646fd589f9-jpzwd_bc5e47a3-441e-4076-b726-99bb8cd36d95/manager/0.log" Nov 22 13:23:21 crc kubenswrapper[4772]: I1122 13:23:21.029818 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-64d7c556cd-8772n_09182dc6-4ee5-4ad8-9298-f13a7037ac9b/kube-rbac-proxy/0.log" Nov 22 13:23:21 crc kubenswrapper[4772]: I1122 13:23:21.229704 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-64d7c556cd-8772n_09182dc6-4ee5-4ad8-9298-f13a7037ac9b/manager/0.log" Nov 22 13:23:21 crc 
kubenswrapper[4772]: I1122 13:23:21.256861 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-58879495c-wwt28_58546700-44a8-45ad-bbbc-1ee40a090fd7/kube-rbac-proxy/0.log" Nov 22 13:23:21 crc kubenswrapper[4772]: I1122 13:23:21.310121 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-58879495c-wwt28_58546700-44a8-45ad-bbbc-1ee40a090fd7/manager/0.log" Nov 22 13:23:21 crc kubenswrapper[4772]: I1122 13:23:21.498834 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79d658b66d-cgk6m_1d228e15-a43a-4b2a-b8a5-958a6ce484a7/kube-rbac-proxy/0.log" Nov 22 13:23:21 crc kubenswrapper[4772]: I1122 13:23:21.712452 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-d5fb87cb8-v8z2p_7f92807d-a811-4354-a9b3-4efe75db8096/kube-rbac-proxy/0.log" Nov 22 13:23:21 crc kubenswrapper[4772]: I1122 13:23:21.750526 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79d658b66d-cgk6m_1d228e15-a43a-4b2a-b8a5-958a6ce484a7/manager/0.log" Nov 22 13:23:21 crc kubenswrapper[4772]: I1122 13:23:21.789016 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-d5fb87cb8-v8z2p_7f92807d-a811-4354-a9b3-4efe75db8096/manager/0.log" Nov 22 13:23:21 crc kubenswrapper[4772]: I1122 13:23:21.959205 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-77868f484-nn7dj_ed653c9f-8aac-4989-bc46-169893057f90/kube-rbac-proxy/0.log" Nov 22 13:23:21 crc kubenswrapper[4772]: I1122 13:23:21.966355 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-77868f484-nn7dj_ed653c9f-8aac-4989-bc46-169893057f90/manager/0.log" Nov 22 13:23:22 crc kubenswrapper[4772]: I1122 13:23:22.063380 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7f4bc68b84-vxcp7_2ec76c17-6475-4349-8aaa-47c8b6caa08e/kube-rbac-proxy/0.log" Nov 22 13:23:22 crc kubenswrapper[4772]: I1122 13:23:22.262134 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-6d45d44995-lh69p_f091bcc0-d8a3-4795-b81e-21ec2a91958e/kube-rbac-proxy/0.log" Nov 22 13:23:22 crc kubenswrapper[4772]: I1122 13:23:22.461180 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-9vtrh_0f9e1f7d-d872-45c9-92dc-5617ee96ed08/registry-server/0.log" Nov 22 13:23:22 crc kubenswrapper[4772]: I1122 13:23:22.513472 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-6d45d44995-lh69p_f091bcc0-d8a3-4795-b81e-21ec2a91958e/operator/0.log" Nov 22 13:23:22 crc kubenswrapper[4772]: I1122 13:23:22.582940 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-5b67cfc8fb-7r9qs_db103267-50bd-4819-a33e-90a787ddb249/kube-rbac-proxy/0.log" Nov 22 13:23:22 crc kubenswrapper[4772]: I1122 13:23:22.782148 4772 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_placement-operator-controller-manager-867d87977b-jxm8x_1037af09-c926-409c-9732-26cf293cc210/kube-rbac-proxy/0.log" Nov 22 13:23:22 crc kubenswrapper[4772]: I1122 13:23:22.823703 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-5b67cfc8fb-7r9qs_db103267-50bd-4819-a33e-90a787ddb249/manager/0.log" Nov 22 13:23:22 crc kubenswrapper[4772]: I1122 13:23:22.862056 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-867d87977b-jxm8x_1037af09-c926-409c-9732-26cf293cc210/manager/0.log" Nov 22 13:23:23 crc kubenswrapper[4772]: I1122 13:23:23.016060 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-5f97d8c699-2s8qw_7823e2ab-1a7a-4a3f-9749-04c705f4336e/operator/0.log" Nov 22 13:23:23 crc kubenswrapper[4772]: I1122 13:23:23.140995 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-8f6687c44-zv2pz_0e43cbd9-c19e-4747-b833-39529cfa3d9d/kube-rbac-proxy/0.log" Nov 22 13:23:23 crc kubenswrapper[4772]: I1122 13:23:23.289327 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-8f6687c44-zv2pz_0e43cbd9-c19e-4747-b833-39529cfa3d9d/manager/0.log" Nov 22 13:23:23 crc kubenswrapper[4772]: I1122 13:23:23.292976 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-695797c565-wr5mj_8456f23d-dc5a-4ddf-b853-46fcc56593e8/kube-rbac-proxy/0.log" Nov 22 13:23:23 crc kubenswrapper[4772]: I1122 13:23:23.579493 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-77db6bf9c-2d4vx_bc363239-f347-4379-8fb3-e499b555a263/kube-rbac-proxy/0.log" Nov 22 13:23:23 crc kubenswrapper[4772]: I1122 13:23:23.631556 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-77db6bf9c-2d4vx_bc363239-f347-4379-8fb3-e499b555a263/manager/0.log" Nov 22 13:23:23 crc kubenswrapper[4772]: I1122 13:23:23.655927 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-695797c565-wr5mj_8456f23d-dc5a-4ddf-b853-46fcc56593e8/manager/0.log" Nov 22 13:23:23 crc kubenswrapper[4772]: I1122 13:23:23.834666 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6b56b8849f-wblft_e7a30218-add6-4170-948c-b6b9f8b960c8/kube-rbac-proxy/0.log" Nov 22 13:23:23 crc kubenswrapper[4772]: I1122 13:23:23.899541 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6b56b8849f-wblft_e7a30218-add6-4170-948c-b6b9f8b960c8/manager/0.log" Nov 22 13:23:24 crc kubenswrapper[4772]: I1122 13:23:24.276465 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7f4bc68b84-vxcp7_2ec76c17-6475-4349-8aaa-47c8b6caa08e/manager/0.log" Nov 22 13:23:31 crc kubenswrapper[4772]: I1122 13:23:31.533438 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= 
Nov 22 13:23:31 crc kubenswrapper[4772]: I1122 13:23:31.533960 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 13:23:42 crc kubenswrapper[4772]: I1122 13:23:42.460428 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-g7h5s_a26a8d34-6b49-4019-b262-7f8e6fddc433/control-plane-machine-set-operator/0.log" Nov 22 13:23:42 crc kubenswrapper[4772]: I1122 13:23:42.576394 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-sgkwc_8c2915a8-d452-4234-94a7-f1ec68c95e4a/kube-rbac-proxy/0.log" Nov 22 13:23:42 crc kubenswrapper[4772]: I1122 13:23:42.651290 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-sgkwc_8c2915a8-d452-4234-94a7-f1ec68c95e4a/machine-api-operator/0.log" Nov 22 13:23:56 crc kubenswrapper[4772]: I1122 13:23:56.044678 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-nttkb_6548e9e4-cb28-45c6-ba13-67d5abd67149/cert-manager-controller/0.log" Nov 22 13:23:56 crc kubenswrapper[4772]: I1122 13:23:56.229558 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-h2qvt_9b18c846-e06c-4518-af8b-3e279bae816c/cert-manager-cainjector/0.log" Nov 22 13:23:56 crc kubenswrapper[4772]: I1122 13:23:56.297846 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-dzcx9_feafcc29-726d-4953-ba51-85da91d3cc77/cert-manager-webhook/0.log" Nov 22 13:24:00 crc kubenswrapper[4772]: I1122 13:24:00.458402 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-r2zdc"] Nov 22 13:24:00 crc kubenswrapper[4772]: E1122 13:24:00.459935 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2a3e8b6-2537-4b2b-9f12-d864eaf1a2b8" containerName="extract-utilities" Nov 22 13:24:00 crc kubenswrapper[4772]: I1122 13:24:00.459955 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2a3e8b6-2537-4b2b-9f12-d864eaf1a2b8" containerName="extract-utilities" Nov 22 13:24:00 crc kubenswrapper[4772]: E1122 13:24:00.460002 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2a3e8b6-2537-4b2b-9f12-d864eaf1a2b8" containerName="extract-content" Nov 22 13:24:00 crc kubenswrapper[4772]: I1122 13:24:00.460009 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2a3e8b6-2537-4b2b-9f12-d864eaf1a2b8" containerName="extract-content" Nov 22 13:24:00 crc kubenswrapper[4772]: E1122 13:24:00.460054 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2a3e8b6-2537-4b2b-9f12-d864eaf1a2b8" containerName="registry-server" Nov 22 13:24:00 crc kubenswrapper[4772]: I1122 13:24:00.460106 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2a3e8b6-2537-4b2b-9f12-d864eaf1a2b8" containerName="registry-server" Nov 22 13:24:00 crc kubenswrapper[4772]: I1122 13:24:00.460350 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2a3e8b6-2537-4b2b-9f12-d864eaf1a2b8" containerName="registry-server" Nov 22 13:24:00 crc kubenswrapper[4772]: I1122 13:24:00.462522 4772 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r2zdc" Nov 22 13:24:00 crc kubenswrapper[4772]: I1122 13:24:00.484380 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r2zdc"] Nov 22 13:24:00 crc kubenswrapper[4772]: I1122 13:24:00.556288 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ce8719d-ba6d-429c-981b-5804c4f01199-utilities\") pod \"redhat-operators-r2zdc\" (UID: \"5ce8719d-ba6d-429c-981b-5804c4f01199\") " pod="openshift-marketplace/redhat-operators-r2zdc" Nov 22 13:24:00 crc kubenswrapper[4772]: I1122 13:24:00.556375 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58rcd\" (UniqueName: \"kubernetes.io/projected/5ce8719d-ba6d-429c-981b-5804c4f01199-kube-api-access-58rcd\") pod \"redhat-operators-r2zdc\" (UID: \"5ce8719d-ba6d-429c-981b-5804c4f01199\") " pod="openshift-marketplace/redhat-operators-r2zdc" Nov 22 13:24:00 crc kubenswrapper[4772]: I1122 13:24:00.556425 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ce8719d-ba6d-429c-981b-5804c4f01199-catalog-content\") pod \"redhat-operators-r2zdc\" (UID: \"5ce8719d-ba6d-429c-981b-5804c4f01199\") " pod="openshift-marketplace/redhat-operators-r2zdc" Nov 22 13:24:00 crc kubenswrapper[4772]: I1122 13:24:00.657953 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ce8719d-ba6d-429c-981b-5804c4f01199-utilities\") pod \"redhat-operators-r2zdc\" (UID: \"5ce8719d-ba6d-429c-981b-5804c4f01199\") " pod="openshift-marketplace/redhat-operators-r2zdc" Nov 22 13:24:00 crc kubenswrapper[4772]: I1122 13:24:00.658004 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58rcd\" (UniqueName: \"kubernetes.io/projected/5ce8719d-ba6d-429c-981b-5804c4f01199-kube-api-access-58rcd\") pod \"redhat-operators-r2zdc\" (UID: \"5ce8719d-ba6d-429c-981b-5804c4f01199\") " pod="openshift-marketplace/redhat-operators-r2zdc" Nov 22 13:24:00 crc kubenswrapper[4772]: I1122 13:24:00.658044 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ce8719d-ba6d-429c-981b-5804c4f01199-catalog-content\") pod \"redhat-operators-r2zdc\" (UID: \"5ce8719d-ba6d-429c-981b-5804c4f01199\") " pod="openshift-marketplace/redhat-operators-r2zdc" Nov 22 13:24:00 crc kubenswrapper[4772]: I1122 13:24:00.658493 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ce8719d-ba6d-429c-981b-5804c4f01199-utilities\") pod \"redhat-operators-r2zdc\" (UID: \"5ce8719d-ba6d-429c-981b-5804c4f01199\") " pod="openshift-marketplace/redhat-operators-r2zdc" Nov 22 13:24:00 crc kubenswrapper[4772]: I1122 13:24:00.658536 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ce8719d-ba6d-429c-981b-5804c4f01199-catalog-content\") pod \"redhat-operators-r2zdc\" (UID: \"5ce8719d-ba6d-429c-981b-5804c4f01199\") " pod="openshift-marketplace/redhat-operators-r2zdc" Nov 22 13:24:00 crc kubenswrapper[4772]: I1122 13:24:00.680621 4772 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-58rcd\" (UniqueName: \"kubernetes.io/projected/5ce8719d-ba6d-429c-981b-5804c4f01199-kube-api-access-58rcd\") pod \"redhat-operators-r2zdc\" (UID: \"5ce8719d-ba6d-429c-981b-5804c4f01199\") " pod="openshift-marketplace/redhat-operators-r2zdc" Nov 22 13:24:00 crc kubenswrapper[4772]: I1122 13:24:00.783676 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r2zdc" Nov 22 13:24:01 crc kubenswrapper[4772]: I1122 13:24:01.338775 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r2zdc"] Nov 22 13:24:01 crc kubenswrapper[4772]: I1122 13:24:01.533226 4772 patch_prober.go:28] interesting pod/machine-config-daemon-wwshd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 13:24:01 crc kubenswrapper[4772]: I1122 13:24:01.533487 4772 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 13:24:01 crc kubenswrapper[4772]: I1122 13:24:01.533531 4772 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" Nov 22 13:24:01 crc kubenswrapper[4772]: I1122 13:24:01.534317 4772 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"985c9c30ae52d69eeeca49c18b0f0c38c324832a5272046e265464e7846e64d6"} pod="openshift-machine-config-operator/machine-config-daemon-wwshd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 13:24:01 crc kubenswrapper[4772]: I1122 13:24:01.534374 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" containerName="machine-config-daemon" containerID="cri-o://985c9c30ae52d69eeeca49c18b0f0c38c324832a5272046e265464e7846e64d6" gracePeriod=600 Nov 22 13:24:01 crc kubenswrapper[4772]: I1122 13:24:01.613657 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r2zdc" event={"ID":"5ce8719d-ba6d-429c-981b-5804c4f01199","Type":"ContainerStarted","Data":"6348acde185eff7338a25f50666952e45b0ccd230e8ccf2ce0997aba485c316f"} Nov 22 13:24:01 crc kubenswrapper[4772]: I1122 13:24:01.613704 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r2zdc" event={"ID":"5ce8719d-ba6d-429c-981b-5804c4f01199","Type":"ContainerStarted","Data":"3e4a77e0ab682004abfa4f368a6a23f2f16b1d8c24740f4da1b58fc2a2841884"} Nov 22 13:24:01 crc kubenswrapper[4772]: E1122 13:24:01.667277 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" 
podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 13:24:02 crc kubenswrapper[4772]: I1122 13:24:02.624437 4772 generic.go:334] "Generic (PLEG): container finished" podID="2386c238-461f-4956-940f-ac3c26eb052e" containerID="985c9c30ae52d69eeeca49c18b0f0c38c324832a5272046e265464e7846e64d6" exitCode=0 Nov 22 13:24:02 crc kubenswrapper[4772]: I1122 13:24:02.624479 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" event={"ID":"2386c238-461f-4956-940f-ac3c26eb052e","Type":"ContainerDied","Data":"985c9c30ae52d69eeeca49c18b0f0c38c324832a5272046e265464e7846e64d6"} Nov 22 13:24:02 crc kubenswrapper[4772]: I1122 13:24:02.624890 4772 scope.go:117] "RemoveContainer" containerID="9751772cce3498087c1c979229fc021c29c99fe4322e140218c3d05d6ab30edd" Nov 22 13:24:02 crc kubenswrapper[4772]: I1122 13:24:02.625665 4772 scope.go:117] "RemoveContainer" containerID="985c9c30ae52d69eeeca49c18b0f0c38c324832a5272046e265464e7846e64d6" Nov 22 13:24:02 crc kubenswrapper[4772]: E1122 13:24:02.625960 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 13:24:02 crc kubenswrapper[4772]: I1122 13:24:02.629483 4772 generic.go:334] "Generic (PLEG): container finished" podID="5ce8719d-ba6d-429c-981b-5804c4f01199" containerID="6348acde185eff7338a25f50666952e45b0ccd230e8ccf2ce0997aba485c316f" exitCode=0 Nov 22 13:24:02 crc kubenswrapper[4772]: I1122 13:24:02.629533 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r2zdc" event={"ID":"5ce8719d-ba6d-429c-981b-5804c4f01199","Type":"ContainerDied","Data":"6348acde185eff7338a25f50666952e45b0ccd230e8ccf2ce0997aba485c316f"} Nov 22 13:24:02 crc kubenswrapper[4772]: I1122 13:24:02.631970 4772 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 13:24:03 crc kubenswrapper[4772]: I1122 13:24:03.641258 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r2zdc" event={"ID":"5ce8719d-ba6d-429c-981b-5804c4f01199","Type":"ContainerStarted","Data":"85ef8cd68b5ca0453adac9220a2ccb2ac8795724b31d8d03057725ef2914c428"} Nov 22 13:24:08 crc kubenswrapper[4772]: I1122 13:24:08.810929 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5874bd7bc5-r7k6f_afb51de5-31d1-4cb9-8590-ab7a8a74360a/nmstate-console-plugin/0.log" Nov 22 13:24:09 crc kubenswrapper[4772]: I1122 13:24:09.124878 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-9lvgv_5a949a00-9ae8-4c80-a23b-e4627333061a/nmstate-handler/0.log" Nov 22 13:24:09 crc kubenswrapper[4772]: I1122 13:24:09.159059 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-mdxp4_f7639cc0-dabf-4a96-90bc-dc4e3eda5f5b/kube-rbac-proxy/0.log" Nov 22 13:24:09 crc kubenswrapper[4772]: I1122 13:24:09.253815 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-mdxp4_f7639cc0-dabf-4a96-90bc-dc4e3eda5f5b/nmstate-metrics/0.log" Nov 22 13:24:09 crc kubenswrapper[4772]: I1122 
13:24:09.369146 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-557fdffb88-596nz_5e1f7834-0fb0-4790-836b-5bb5ca61bec2/nmstate-operator/0.log" Nov 22 13:24:09 crc kubenswrapper[4772]: I1122 13:24:09.508627 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6b89b748d8-5lsvv_b41327fc-c766-4258-8252-da74057be64a/nmstate-webhook/0.log" Nov 22 13:24:09 crc kubenswrapper[4772]: I1122 13:24:09.704831 4772 generic.go:334] "Generic (PLEG): container finished" podID="5ce8719d-ba6d-429c-981b-5804c4f01199" containerID="85ef8cd68b5ca0453adac9220a2ccb2ac8795724b31d8d03057725ef2914c428" exitCode=0 Nov 22 13:24:09 crc kubenswrapper[4772]: I1122 13:24:09.704878 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r2zdc" event={"ID":"5ce8719d-ba6d-429c-981b-5804c4f01199","Type":"ContainerDied","Data":"85ef8cd68b5ca0453adac9220a2ccb2ac8795724b31d8d03057725ef2914c428"} Nov 22 13:24:10 crc kubenswrapper[4772]: I1122 13:24:10.716838 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r2zdc" event={"ID":"5ce8719d-ba6d-429c-981b-5804c4f01199","Type":"ContainerStarted","Data":"0bad72e8798e10e5c6b4d7502109f93ba1a7cb10d62c2a8de77f2767fa3ddde1"} Nov 22 13:24:10 crc kubenswrapper[4772]: I1122 13:24:10.744533 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-r2zdc" podStartSLOduration=3.256305468 podStartE2EDuration="10.744513777s" podCreationTimestamp="2025-11-22 13:24:00 +0000 UTC" firstStartedPulling="2025-11-22 13:24:02.631729058 +0000 UTC m=+9962.871173552" lastFinishedPulling="2025-11-22 13:24:10.119937367 +0000 UTC m=+9970.359381861" observedRunningTime="2025-11-22 13:24:10.734626374 +0000 UTC m=+9970.974070868" watchObservedRunningTime="2025-11-22 13:24:10.744513777 +0000 UTC m=+9970.983958271" Nov 22 13:24:10 crc kubenswrapper[4772]: I1122 13:24:10.783866 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-r2zdc" Nov 22 13:24:10 crc kubenswrapper[4772]: I1122 13:24:10.783931 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-r2zdc" Nov 22 13:24:11 crc kubenswrapper[4772]: I1122 13:24:11.832496 4772 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-r2zdc" podUID="5ce8719d-ba6d-429c-981b-5804c4f01199" containerName="registry-server" probeResult="failure" output=< Nov 22 13:24:11 crc kubenswrapper[4772]: timeout: failed to connect service ":50051" within 1s Nov 22 13:24:11 crc kubenswrapper[4772]: > Nov 22 13:24:13 crc kubenswrapper[4772]: I1122 13:24:13.414796 4772 scope.go:117] "RemoveContainer" containerID="985c9c30ae52d69eeeca49c18b0f0c38c324832a5272046e265464e7846e64d6" Nov 22 13:24:13 crc kubenswrapper[4772]: E1122 13:24:13.415346 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 13:24:20 crc kubenswrapper[4772]: I1122 13:24:20.830013 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/redhat-operators-r2zdc" Nov 22 13:24:20 crc kubenswrapper[4772]: I1122 13:24:20.883651 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-r2zdc" Nov 22 13:24:21 crc kubenswrapper[4772]: I1122 13:24:21.064717 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r2zdc"] Nov 22 13:24:22 crc kubenswrapper[4772]: I1122 13:24:22.846809 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-r2zdc" podUID="5ce8719d-ba6d-429c-981b-5804c4f01199" containerName="registry-server" containerID="cri-o://0bad72e8798e10e5c6b4d7502109f93ba1a7cb10d62c2a8de77f2767fa3ddde1" gracePeriod=2 Nov 22 13:24:23 crc kubenswrapper[4772]: I1122 13:24:23.348192 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r2zdc" Nov 22 13:24:23 crc kubenswrapper[4772]: I1122 13:24:23.477448 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58rcd\" (UniqueName: \"kubernetes.io/projected/5ce8719d-ba6d-429c-981b-5804c4f01199-kube-api-access-58rcd\") pod \"5ce8719d-ba6d-429c-981b-5804c4f01199\" (UID: \"5ce8719d-ba6d-429c-981b-5804c4f01199\") " Nov 22 13:24:23 crc kubenswrapper[4772]: I1122 13:24:23.477657 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ce8719d-ba6d-429c-981b-5804c4f01199-catalog-content\") pod \"5ce8719d-ba6d-429c-981b-5804c4f01199\" (UID: \"5ce8719d-ba6d-429c-981b-5804c4f01199\") " Nov 22 13:24:23 crc kubenswrapper[4772]: I1122 13:24:23.478422 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ce8719d-ba6d-429c-981b-5804c4f01199-utilities\") pod \"5ce8719d-ba6d-429c-981b-5804c4f01199\" (UID: \"5ce8719d-ba6d-429c-981b-5804c4f01199\") " Nov 22 13:24:23 crc kubenswrapper[4772]: I1122 13:24:23.480609 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ce8719d-ba6d-429c-981b-5804c4f01199-utilities" (OuterVolumeSpecName: "utilities") pod "5ce8719d-ba6d-429c-981b-5804c4f01199" (UID: "5ce8719d-ba6d-429c-981b-5804c4f01199"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 13:24:23 crc kubenswrapper[4772]: I1122 13:24:23.492277 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ce8719d-ba6d-429c-981b-5804c4f01199-kube-api-access-58rcd" (OuterVolumeSpecName: "kube-api-access-58rcd") pod "5ce8719d-ba6d-429c-981b-5804c4f01199" (UID: "5ce8719d-ba6d-429c-981b-5804c4f01199"). InnerVolumeSpecName "kube-api-access-58rcd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 13:24:23 crc kubenswrapper[4772]: I1122 13:24:23.573715 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ce8719d-ba6d-429c-981b-5804c4f01199-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5ce8719d-ba6d-429c-981b-5804c4f01199" (UID: "5ce8719d-ba6d-429c-981b-5804c4f01199"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 13:24:23 crc kubenswrapper[4772]: I1122 13:24:23.581263 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58rcd\" (UniqueName: \"kubernetes.io/projected/5ce8719d-ba6d-429c-981b-5804c4f01199-kube-api-access-58rcd\") on node \"crc\" DevicePath \"\"" Nov 22 13:24:23 crc kubenswrapper[4772]: I1122 13:24:23.581294 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ce8719d-ba6d-429c-981b-5804c4f01199-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 13:24:23 crc kubenswrapper[4772]: I1122 13:24:23.581306 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ce8719d-ba6d-429c-981b-5804c4f01199-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 13:24:23 crc kubenswrapper[4772]: I1122 13:24:23.863290 4772 generic.go:334] "Generic (PLEG): container finished" podID="5ce8719d-ba6d-429c-981b-5804c4f01199" containerID="0bad72e8798e10e5c6b4d7502109f93ba1a7cb10d62c2a8de77f2767fa3ddde1" exitCode=0 Nov 22 13:24:23 crc kubenswrapper[4772]: I1122 13:24:23.863337 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r2zdc" event={"ID":"5ce8719d-ba6d-429c-981b-5804c4f01199","Type":"ContainerDied","Data":"0bad72e8798e10e5c6b4d7502109f93ba1a7cb10d62c2a8de77f2767fa3ddde1"} Nov 22 13:24:23 crc kubenswrapper[4772]: I1122 13:24:23.863366 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r2zdc" event={"ID":"5ce8719d-ba6d-429c-981b-5804c4f01199","Type":"ContainerDied","Data":"3e4a77e0ab682004abfa4f368a6a23f2f16b1d8c24740f4da1b58fc2a2841884"} Nov 22 13:24:23 crc kubenswrapper[4772]: I1122 13:24:23.863385 4772 scope.go:117] "RemoveContainer" containerID="0bad72e8798e10e5c6b4d7502109f93ba1a7cb10d62c2a8de77f2767fa3ddde1" Nov 22 13:24:23 crc kubenswrapper[4772]: I1122 13:24:23.863550 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-r2zdc" Nov 22 13:24:23 crc kubenswrapper[4772]: I1122 13:24:23.891202 4772 scope.go:117] "RemoveContainer" containerID="85ef8cd68b5ca0453adac9220a2ccb2ac8795724b31d8d03057725ef2914c428" Nov 22 13:24:23 crc kubenswrapper[4772]: I1122 13:24:23.912176 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r2zdc"] Nov 22 13:24:23 crc kubenswrapper[4772]: I1122 13:24:23.926130 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-r2zdc"] Nov 22 13:24:23 crc kubenswrapper[4772]: I1122 13:24:23.933000 4772 scope.go:117] "RemoveContainer" containerID="6348acde185eff7338a25f50666952e45b0ccd230e8ccf2ce0997aba485c316f" Nov 22 13:24:23 crc kubenswrapper[4772]: I1122 13:24:23.967308 4772 scope.go:117] "RemoveContainer" containerID="0bad72e8798e10e5c6b4d7502109f93ba1a7cb10d62c2a8de77f2767fa3ddde1" Nov 22 13:24:23 crc kubenswrapper[4772]: E1122 13:24:23.968135 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0bad72e8798e10e5c6b4d7502109f93ba1a7cb10d62c2a8de77f2767fa3ddde1\": container with ID starting with 0bad72e8798e10e5c6b4d7502109f93ba1a7cb10d62c2a8de77f2767fa3ddde1 not found: ID does not exist" containerID="0bad72e8798e10e5c6b4d7502109f93ba1a7cb10d62c2a8de77f2767fa3ddde1" Nov 22 13:24:23 crc kubenswrapper[4772]: I1122 13:24:23.968181 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0bad72e8798e10e5c6b4d7502109f93ba1a7cb10d62c2a8de77f2767fa3ddde1"} err="failed to get container status \"0bad72e8798e10e5c6b4d7502109f93ba1a7cb10d62c2a8de77f2767fa3ddde1\": rpc error: code = NotFound desc = could not find container \"0bad72e8798e10e5c6b4d7502109f93ba1a7cb10d62c2a8de77f2767fa3ddde1\": container with ID starting with 0bad72e8798e10e5c6b4d7502109f93ba1a7cb10d62c2a8de77f2767fa3ddde1 not found: ID does not exist" Nov 22 13:24:23 crc kubenswrapper[4772]: I1122 13:24:23.968210 4772 scope.go:117] "RemoveContainer" containerID="85ef8cd68b5ca0453adac9220a2ccb2ac8795724b31d8d03057725ef2914c428" Nov 22 13:24:23 crc kubenswrapper[4772]: E1122 13:24:23.968644 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85ef8cd68b5ca0453adac9220a2ccb2ac8795724b31d8d03057725ef2914c428\": container with ID starting with 85ef8cd68b5ca0453adac9220a2ccb2ac8795724b31d8d03057725ef2914c428 not found: ID does not exist" containerID="85ef8cd68b5ca0453adac9220a2ccb2ac8795724b31d8d03057725ef2914c428" Nov 22 13:24:23 crc kubenswrapper[4772]: I1122 13:24:23.968690 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85ef8cd68b5ca0453adac9220a2ccb2ac8795724b31d8d03057725ef2914c428"} err="failed to get container status \"85ef8cd68b5ca0453adac9220a2ccb2ac8795724b31d8d03057725ef2914c428\": rpc error: code = NotFound desc = could not find container \"85ef8cd68b5ca0453adac9220a2ccb2ac8795724b31d8d03057725ef2914c428\": container with ID starting with 85ef8cd68b5ca0453adac9220a2ccb2ac8795724b31d8d03057725ef2914c428 not found: ID does not exist" Nov 22 13:24:23 crc kubenswrapper[4772]: I1122 13:24:23.968720 4772 scope.go:117] "RemoveContainer" containerID="6348acde185eff7338a25f50666952e45b0ccd230e8ccf2ce0997aba485c316f" Nov 22 13:24:23 crc kubenswrapper[4772]: E1122 13:24:23.969189 4772 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"6348acde185eff7338a25f50666952e45b0ccd230e8ccf2ce0997aba485c316f\": container with ID starting with 6348acde185eff7338a25f50666952e45b0ccd230e8ccf2ce0997aba485c316f not found: ID does not exist" containerID="6348acde185eff7338a25f50666952e45b0ccd230e8ccf2ce0997aba485c316f" Nov 22 13:24:23 crc kubenswrapper[4772]: I1122 13:24:23.969218 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6348acde185eff7338a25f50666952e45b0ccd230e8ccf2ce0997aba485c316f"} err="failed to get container status \"6348acde185eff7338a25f50666952e45b0ccd230e8ccf2ce0997aba485c316f\": rpc error: code = NotFound desc = could not find container \"6348acde185eff7338a25f50666952e45b0ccd230e8ccf2ce0997aba485c316f\": container with ID starting with 6348acde185eff7338a25f50666952e45b0ccd230e8ccf2ce0997aba485c316f not found: ID does not exist" Nov 22 13:24:25 crc kubenswrapper[4772]: I1122 13:24:25.436794 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ce8719d-ba6d-429c-981b-5804c4f01199" path="/var/lib/kubelet/pods/5ce8719d-ba6d-429c-981b-5804c4f01199/volumes" Nov 22 13:24:25 crc kubenswrapper[4772]: I1122 13:24:25.723167 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-s5jrv_21aaa49e-824d-4f20-ad4d-aea95671788e/kube-rbac-proxy/0.log" Nov 22 13:24:26 crc kubenswrapper[4772]: I1122 13:24:26.010436 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hj25m_0291a442-8b6c-4406-af22-55f24572ffe3/cp-frr-files/0.log" Nov 22 13:24:26 crc kubenswrapper[4772]: I1122 13:24:26.169736 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-s5jrv_21aaa49e-824d-4f20-ad4d-aea95671788e/controller/0.log" Nov 22 13:24:26 crc kubenswrapper[4772]: I1122 13:24:26.193872 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hj25m_0291a442-8b6c-4406-af22-55f24572ffe3/cp-reloader/0.log" Nov 22 13:24:26 crc kubenswrapper[4772]: I1122 13:24:26.231489 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hj25m_0291a442-8b6c-4406-af22-55f24572ffe3/cp-frr-files/0.log" Nov 22 13:24:26 crc kubenswrapper[4772]: I1122 13:24:26.241263 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hj25m_0291a442-8b6c-4406-af22-55f24572ffe3/cp-metrics/0.log" Nov 22 13:24:26 crc kubenswrapper[4772]: I1122 13:24:26.372574 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hj25m_0291a442-8b6c-4406-af22-55f24572ffe3/cp-reloader/0.log" Nov 22 13:24:26 crc kubenswrapper[4772]: I1122 13:24:26.544392 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hj25m_0291a442-8b6c-4406-af22-55f24572ffe3/cp-reloader/0.log" Nov 22 13:24:26 crc kubenswrapper[4772]: I1122 13:24:26.571948 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hj25m_0291a442-8b6c-4406-af22-55f24572ffe3/cp-frr-files/0.log" Nov 22 13:24:26 crc kubenswrapper[4772]: I1122 13:24:26.577613 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hj25m_0291a442-8b6c-4406-af22-55f24572ffe3/cp-metrics/0.log" Nov 22 13:24:26 crc kubenswrapper[4772]: I1122 13:24:26.589452 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hj25m_0291a442-8b6c-4406-af22-55f24572ffe3/cp-metrics/0.log" Nov 22 
13:24:26 crc kubenswrapper[4772]: I1122 13:24:26.745754 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hj25m_0291a442-8b6c-4406-af22-55f24572ffe3/cp-frr-files/0.log" Nov 22 13:24:26 crc kubenswrapper[4772]: I1122 13:24:26.770259 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hj25m_0291a442-8b6c-4406-af22-55f24572ffe3/cp-reloader/0.log" Nov 22 13:24:26 crc kubenswrapper[4772]: I1122 13:24:26.783328 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hj25m_0291a442-8b6c-4406-af22-55f24572ffe3/controller/0.log" Nov 22 13:24:26 crc kubenswrapper[4772]: I1122 13:24:26.809649 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hj25m_0291a442-8b6c-4406-af22-55f24572ffe3/cp-metrics/0.log" Nov 22 13:24:27 crc kubenswrapper[4772]: I1122 13:24:27.414211 4772 scope.go:117] "RemoveContainer" containerID="985c9c30ae52d69eeeca49c18b0f0c38c324832a5272046e265464e7846e64d6" Nov 22 13:24:27 crc kubenswrapper[4772]: E1122 13:24:27.414944 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 13:24:27 crc kubenswrapper[4772]: I1122 13:24:27.497390 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hj25m_0291a442-8b6c-4406-af22-55f24572ffe3/kube-rbac-proxy-frr/0.log" Nov 22 13:24:27 crc kubenswrapper[4772]: I1122 13:24:27.517097 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hj25m_0291a442-8b6c-4406-af22-55f24572ffe3/frr-metrics/0.log" Nov 22 13:24:27 crc kubenswrapper[4772]: I1122 13:24:27.573080 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hj25m_0291a442-8b6c-4406-af22-55f24572ffe3/kube-rbac-proxy/0.log" Nov 22 13:24:27 crc kubenswrapper[4772]: I1122 13:24:27.748400 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hj25m_0291a442-8b6c-4406-af22-55f24572ffe3/reloader/0.log" Nov 22 13:24:27 crc kubenswrapper[4772]: I1122 13:24:27.831529 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-6998585d5-nx26w_baf46c6b-a916-4f81-bfd6-447533e1fa95/frr-k8s-webhook-server/0.log" Nov 22 13:24:28 crc kubenswrapper[4772]: I1122 13:24:28.050244 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6c7dcf7d55-rgxxw_b6738e49-fadc-4415-ad91-d6825e908eda/manager/0.log" Nov 22 13:24:28 crc kubenswrapper[4772]: I1122 13:24:28.269679 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6bb4f57cb7-6rncx_d3d9ca1d-f272-4a78-8da0-809487360415/webhook-server/0.log" Nov 22 13:24:28 crc kubenswrapper[4772]: I1122 13:24:28.392785 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-wf9qr_72c6d910-537c-4020-a0e0-a38fe68636ac/kube-rbac-proxy/0.log" Nov 22 13:24:29 crc kubenswrapper[4772]: I1122 13:24:29.348786 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-wf9qr_72c6d910-537c-4020-a0e0-a38fe68636ac/speaker/0.log" Nov 22 13:24:30 
crc kubenswrapper[4772]: I1122 13:24:30.863423 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hj25m_0291a442-8b6c-4406-af22-55f24572ffe3/frr/0.log" Nov 22 13:24:39 crc kubenswrapper[4772]: I1122 13:24:39.414622 4772 scope.go:117] "RemoveContainer" containerID="985c9c30ae52d69eeeca49c18b0f0c38c324832a5272046e265464e7846e64d6" Nov 22 13:24:39 crc kubenswrapper[4772]: E1122 13:24:39.415755 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 13:24:42 crc kubenswrapper[4772]: I1122 13:24:42.781972 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931amhlqk_55a36e51-acb5-44e1-9394-a1280c867770/util/0.log" Nov 22 13:24:42 crc kubenswrapper[4772]: I1122 13:24:42.978944 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931amhlqk_55a36e51-acb5-44e1-9394-a1280c867770/util/0.log" Nov 22 13:24:43 crc kubenswrapper[4772]: I1122 13:24:43.068515 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931amhlqk_55a36e51-acb5-44e1-9394-a1280c867770/pull/0.log" Nov 22 13:24:43 crc kubenswrapper[4772]: I1122 13:24:43.133549 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931amhlqk_55a36e51-acb5-44e1-9394-a1280c867770/pull/0.log" Nov 22 13:24:43 crc kubenswrapper[4772]: I1122 13:24:43.339721 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931amhlqk_55a36e51-acb5-44e1-9394-a1280c867770/extract/0.log" Nov 22 13:24:43 crc kubenswrapper[4772]: I1122 13:24:43.397368 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931amhlqk_55a36e51-acb5-44e1-9394-a1280c867770/util/0.log" Nov 22 13:24:43 crc kubenswrapper[4772]: I1122 13:24:43.406005 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931amhlqk_55a36e51-acb5-44e1-9394-a1280c867770/pull/0.log" Nov 22 13:24:43 crc kubenswrapper[4772]: I1122 13:24:43.582774 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emccrf_a7219a3f-650e-4d7a-b44e-f48aeb8e710b/util/0.log" Nov 22 13:24:43 crc kubenswrapper[4772]: I1122 13:24:43.778288 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emccrf_a7219a3f-650e-4d7a-b44e-f48aeb8e710b/pull/0.log" Nov 22 13:24:43 crc kubenswrapper[4772]: I1122 13:24:43.822723 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emccrf_a7219a3f-650e-4d7a-b44e-f48aeb8e710b/pull/0.log" Nov 22 13:24:43 crc kubenswrapper[4772]: I1122 13:24:43.839423 4772 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emccrf_a7219a3f-650e-4d7a-b44e-f48aeb8e710b/util/0.log" Nov 22 13:24:43 crc kubenswrapper[4772]: I1122 13:24:43.989113 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emccrf_a7219a3f-650e-4d7a-b44e-f48aeb8e710b/pull/0.log" Nov 22 13:24:43 crc kubenswrapper[4772]: I1122 13:24:43.999636 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emccrf_a7219a3f-650e-4d7a-b44e-f48aeb8e710b/util/0.log" Nov 22 13:24:44 crc kubenswrapper[4772]: I1122 13:24:44.680282 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qfdzh_e38b27e1-2a6a-4017-9fad-b2d172ce1662/util/0.log" Nov 22 13:24:44 crc kubenswrapper[4772]: I1122 13:24:44.682344 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emccrf_a7219a3f-650e-4d7a-b44e-f48aeb8e710b/extract/0.log" Nov 22 13:24:44 crc kubenswrapper[4772]: I1122 13:24:44.867189 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qfdzh_e38b27e1-2a6a-4017-9fad-b2d172ce1662/util/0.log" Nov 22 13:24:44 crc kubenswrapper[4772]: I1122 13:24:44.882557 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qfdzh_e38b27e1-2a6a-4017-9fad-b2d172ce1662/pull/0.log" Nov 22 13:24:44 crc kubenswrapper[4772]: I1122 13:24:44.908616 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qfdzh_e38b27e1-2a6a-4017-9fad-b2d172ce1662/pull/0.log" Nov 22 13:24:45 crc kubenswrapper[4772]: I1122 13:24:45.077156 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qfdzh_e38b27e1-2a6a-4017-9fad-b2d172ce1662/util/0.log" Nov 22 13:24:45 crc kubenswrapper[4772]: I1122 13:24:45.090617 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qfdzh_e38b27e1-2a6a-4017-9fad-b2d172ce1662/extract/0.log" Nov 22 13:24:45 crc kubenswrapper[4772]: I1122 13:24:45.133668 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qfdzh_e38b27e1-2a6a-4017-9fad-b2d172ce1662/pull/0.log" Nov 22 13:24:45 crc kubenswrapper[4772]: I1122 13:24:45.286190 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-v2wk4_fe350605-8cb5-4937-9362-5c2ad43660c3/extract-utilities/0.log" Nov 22 13:24:45 crc kubenswrapper[4772]: I1122 13:24:45.548949 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-v2wk4_fe350605-8cb5-4937-9362-5c2ad43660c3/extract-utilities/0.log" Nov 22 13:24:45 crc kubenswrapper[4772]: I1122 13:24:45.560484 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-v2wk4_fe350605-8cb5-4937-9362-5c2ad43660c3/extract-content/0.log" Nov 22 13:24:45 crc 
kubenswrapper[4772]: I1122 13:24:45.583507 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-v2wk4_fe350605-8cb5-4937-9362-5c2ad43660c3/extract-content/0.log" Nov 22 13:24:45 crc kubenswrapper[4772]: I1122 13:24:45.732308 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-v2wk4_fe350605-8cb5-4937-9362-5c2ad43660c3/extract-utilities/0.log" Nov 22 13:24:45 crc kubenswrapper[4772]: I1122 13:24:45.764551 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-v2wk4_fe350605-8cb5-4937-9362-5c2ad43660c3/extract-content/0.log" Nov 22 13:24:46 crc kubenswrapper[4772]: I1122 13:24:46.019800 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hqg5j_f6441995-690d-46c4-bd8c-56fa50f781cb/extract-utilities/0.log" Nov 22 13:24:46 crc kubenswrapper[4772]: I1122 13:24:46.160499 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hqg5j_f6441995-690d-46c4-bd8c-56fa50f781cb/extract-utilities/0.log" Nov 22 13:24:46 crc kubenswrapper[4772]: I1122 13:24:46.268330 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hqg5j_f6441995-690d-46c4-bd8c-56fa50f781cb/extract-content/0.log" Nov 22 13:24:46 crc kubenswrapper[4772]: I1122 13:24:46.273230 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hqg5j_f6441995-690d-46c4-bd8c-56fa50f781cb/extract-content/0.log" Nov 22 13:24:46 crc kubenswrapper[4772]: I1122 13:24:46.465660 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hqg5j_f6441995-690d-46c4-bd8c-56fa50f781cb/extract-utilities/0.log" Nov 22 13:24:46 crc kubenswrapper[4772]: I1122 13:24:46.625582 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hqg5j_f6441995-690d-46c4-bd8c-56fa50f781cb/extract-content/0.log" Nov 22 13:24:46 crc kubenswrapper[4772]: I1122 13:24:46.696910 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xm5p8_d34f677f-77a5-4855-a8c8-3d561fa25a34/util/0.log" Nov 22 13:24:46 crc kubenswrapper[4772]: I1122 13:24:46.960454 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xm5p8_d34f677f-77a5-4855-a8c8-3d561fa25a34/util/0.log" Nov 22 13:24:46 crc kubenswrapper[4772]: I1122 13:24:46.976225 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xm5p8_d34f677f-77a5-4855-a8c8-3d561fa25a34/pull/0.log" Nov 22 13:24:47 crc kubenswrapper[4772]: I1122 13:24:47.008293 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xm5p8_d34f677f-77a5-4855-a8c8-3d561fa25a34/pull/0.log" Nov 22 13:24:47 crc kubenswrapper[4772]: I1122 13:24:47.447726 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xm5p8_d34f677f-77a5-4855-a8c8-3d561fa25a34/util/0.log" Nov 22 13:24:47 crc kubenswrapper[4772]: I1122 13:24:47.496863 4772 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-v2wk4_fe350605-8cb5-4937-9362-5c2ad43660c3/registry-server/0.log" Nov 22 13:24:47 crc kubenswrapper[4772]: I1122 13:24:47.502003 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xm5p8_d34f677f-77a5-4855-a8c8-3d561fa25a34/pull/0.log" Nov 22 13:24:47 crc kubenswrapper[4772]: I1122 13:24:47.504091 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xm5p8_d34f677f-77a5-4855-a8c8-3d561fa25a34/extract/0.log" Nov 22 13:24:47 crc kubenswrapper[4772]: I1122 13:24:47.760948 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-54pmg_9ea14368-36cf-45d4-b161-1f4412f8d675/marketplace-operator/0.log" Nov 22 13:24:47 crc kubenswrapper[4772]: I1122 13:24:47.806810 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-bws7w_c5fe9c3a-44e0-4f93-8911-383910a8854b/extract-utilities/0.log" Nov 22 13:24:48 crc kubenswrapper[4772]: I1122 13:24:48.017869 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-bws7w_c5fe9c3a-44e0-4f93-8911-383910a8854b/extract-content/0.log" Nov 22 13:24:48 crc kubenswrapper[4772]: I1122 13:24:48.031982 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-bws7w_c5fe9c3a-44e0-4f93-8911-383910a8854b/extract-content/0.log" Nov 22 13:24:48 crc kubenswrapper[4772]: I1122 13:24:48.033377 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-bws7w_c5fe9c3a-44e0-4f93-8911-383910a8854b/extract-utilities/0.log" Nov 22 13:24:48 crc kubenswrapper[4772]: I1122 13:24:48.176479 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hqg5j_f6441995-690d-46c4-bd8c-56fa50f781cb/registry-server/0.log" Nov 22 13:24:48 crc kubenswrapper[4772]: I1122 13:24:48.320168 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-bws7w_c5fe9c3a-44e0-4f93-8911-383910a8854b/extract-utilities/0.log" Nov 22 13:24:48 crc kubenswrapper[4772]: I1122 13:24:48.395426 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-bws7w_c5fe9c3a-44e0-4f93-8911-383910a8854b/extract-content/0.log" Nov 22 13:24:48 crc kubenswrapper[4772]: I1122 13:24:48.487329 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9d7hb_bb106d85-8cbf-4ca1-ac98-496f521882c2/extract-utilities/0.log" Nov 22 13:24:48 crc kubenswrapper[4772]: I1122 13:24:48.627410 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9d7hb_bb106d85-8cbf-4ca1-ac98-496f521882c2/extract-utilities/0.log" Nov 22 13:24:48 crc kubenswrapper[4772]: I1122 13:24:48.632388 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-bws7w_c5fe9c3a-44e0-4f93-8911-383910a8854b/registry-server/0.log" Nov 22 13:24:48 crc kubenswrapper[4772]: I1122 13:24:48.636168 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9d7hb_bb106d85-8cbf-4ca1-ac98-496f521882c2/extract-content/0.log" Nov 22 13:24:48 crc kubenswrapper[4772]: I1122 13:24:48.704559 4772 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9d7hb_bb106d85-8cbf-4ca1-ac98-496f521882c2/extract-content/0.log" Nov 22 13:24:48 crc kubenswrapper[4772]: I1122 13:24:48.820076 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9d7hb_bb106d85-8cbf-4ca1-ac98-496f521882c2/extract-utilities/0.log" Nov 22 13:24:48 crc kubenswrapper[4772]: I1122 13:24:48.861184 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9d7hb_bb106d85-8cbf-4ca1-ac98-496f521882c2/extract-content/0.log" Nov 22 13:24:50 crc kubenswrapper[4772]: I1122 13:24:50.172800 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9d7hb_bb106d85-8cbf-4ca1-ac98-496f521882c2/registry-server/0.log" Nov 22 13:24:52 crc kubenswrapper[4772]: I1122 13:24:52.414450 4772 scope.go:117] "RemoveContainer" containerID="985c9c30ae52d69eeeca49c18b0f0c38c324832a5272046e265464e7846e64d6" Nov 22 13:24:52 crc kubenswrapper[4772]: E1122 13:24:52.415460 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 13:25:03 crc kubenswrapper[4772]: I1122 13:25:03.206728 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-668cf9dfbb-btm2b_3182734e-a9a5-4658-be8c-f6c32e1eef21/prometheus-operator/0.log" Nov 22 13:25:03 crc kubenswrapper[4772]: I1122 13:25:03.330892 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-68d77686b6-nzh6j_0dd53222-e946-4ee9-8d69-a2b4fd675e99/prometheus-operator-admission-webhook/0.log" Nov 22 13:25:03 crc kubenswrapper[4772]: I1122 13:25:03.414575 4772 scope.go:117] "RemoveContainer" containerID="985c9c30ae52d69eeeca49c18b0f0c38c324832a5272046e265464e7846e64d6" Nov 22 13:25:03 crc kubenswrapper[4772]: E1122 13:25:03.414907 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 13:25:03 crc kubenswrapper[4772]: I1122 13:25:03.447050 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-68d77686b6-xvpkf_02c13927-ba51-452d-a0ea-2a021cb4ce6c/prometheus-operator-admission-webhook/0.log" Nov 22 13:25:03 crc kubenswrapper[4772]: I1122 13:25:03.964494 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-d8bb48f5d-plr84_2ff2c225-e7b9-4d2b-bb81-30147198a90d/operator/0.log" Nov 22 13:25:04 crc kubenswrapper[4772]: I1122 13:25:04.040446 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5446b9c989-2pfzn_643d51b2-9282-4236-b0dd-282638c093e2/perses-operator/0.log" Nov 22 13:25:18 crc kubenswrapper[4772]: 
I1122 13:25:18.414595 4772 scope.go:117] "RemoveContainer" containerID="985c9c30ae52d69eeeca49c18b0f0c38c324832a5272046e265464e7846e64d6" Nov 22 13:25:18 crc kubenswrapper[4772]: E1122 13:25:18.415846 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 13:25:29 crc kubenswrapper[4772]: I1122 13:25:29.418722 4772 scope.go:117] "RemoveContainer" containerID="985c9c30ae52d69eeeca49c18b0f0c38c324832a5272046e265464e7846e64d6" Nov 22 13:25:29 crc kubenswrapper[4772]: E1122 13:25:29.419898 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 13:25:43 crc kubenswrapper[4772]: I1122 13:25:43.414666 4772 scope.go:117] "RemoveContainer" containerID="985c9c30ae52d69eeeca49c18b0f0c38c324832a5272046e265464e7846e64d6" Nov 22 13:25:43 crc kubenswrapper[4772]: E1122 13:25:43.416004 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 13:25:54 crc kubenswrapper[4772]: I1122 13:25:54.414465 4772 scope.go:117] "RemoveContainer" containerID="985c9c30ae52d69eeeca49c18b0f0c38c324832a5272046e265464e7846e64d6" Nov 22 13:25:54 crc kubenswrapper[4772]: E1122 13:25:54.415364 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 13:26:05 crc kubenswrapper[4772]: I1122 13:26:05.416376 4772 scope.go:117] "RemoveContainer" containerID="985c9c30ae52d69eeeca49c18b0f0c38c324832a5272046e265464e7846e64d6" Nov 22 13:26:05 crc kubenswrapper[4772]: E1122 13:26:05.418478 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 13:26:18 crc kubenswrapper[4772]: I1122 13:26:18.415121 4772 scope.go:117] "RemoveContainer" containerID="985c9c30ae52d69eeeca49c18b0f0c38c324832a5272046e265464e7846e64d6" Nov 22 13:26:18 crc kubenswrapper[4772]: E1122 
13:26:18.416364 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 13:26:30 crc kubenswrapper[4772]: I1122 13:26:30.413985 4772 scope.go:117] "RemoveContainer" containerID="985c9c30ae52d69eeeca49c18b0f0c38c324832a5272046e265464e7846e64d6" Nov 22 13:26:30 crc kubenswrapper[4772]: E1122 13:26:30.414842 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 13:26:45 crc kubenswrapper[4772]: I1122 13:26:45.414673 4772 scope.go:117] "RemoveContainer" containerID="985c9c30ae52d69eeeca49c18b0f0c38c324832a5272046e265464e7846e64d6" Nov 22 13:26:45 crc kubenswrapper[4772]: E1122 13:26:45.415520 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 13:26:57 crc kubenswrapper[4772]: I1122 13:26:57.417969 4772 scope.go:117] "RemoveContainer" containerID="985c9c30ae52d69eeeca49c18b0f0c38c324832a5272046e265464e7846e64d6" Nov 22 13:26:57 crc kubenswrapper[4772]: E1122 13:26:57.419132 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 13:27:03 crc kubenswrapper[4772]: I1122 13:27:03.815106 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-47l2c"] Nov 22 13:27:03 crc kubenswrapper[4772]: E1122 13:27:03.819286 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ce8719d-ba6d-429c-981b-5804c4f01199" containerName="registry-server" Nov 22 13:27:03 crc kubenswrapper[4772]: I1122 13:27:03.819330 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ce8719d-ba6d-429c-981b-5804c4f01199" containerName="registry-server" Nov 22 13:27:03 crc kubenswrapper[4772]: E1122 13:27:03.819365 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ce8719d-ba6d-429c-981b-5804c4f01199" containerName="extract-utilities" Nov 22 13:27:03 crc kubenswrapper[4772]: I1122 13:27:03.819374 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ce8719d-ba6d-429c-981b-5804c4f01199" containerName="extract-utilities" Nov 22 13:27:03 crc kubenswrapper[4772]: E1122 13:27:03.819422 4772 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="5ce8719d-ba6d-429c-981b-5804c4f01199" containerName="extract-content" Nov 22 13:27:03 crc kubenswrapper[4772]: I1122 13:27:03.819428 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ce8719d-ba6d-429c-981b-5804c4f01199" containerName="extract-content" Nov 22 13:27:03 crc kubenswrapper[4772]: I1122 13:27:03.819666 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ce8719d-ba6d-429c-981b-5804c4f01199" containerName="registry-server" Nov 22 13:27:03 crc kubenswrapper[4772]: I1122 13:27:03.821659 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-47l2c" Nov 22 13:27:03 crc kubenswrapper[4772]: I1122 13:27:03.853807 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-47l2c"] Nov 22 13:27:03 crc kubenswrapper[4772]: I1122 13:27:03.921067 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64pvf\" (UniqueName: \"kubernetes.io/projected/c2f3ed41-9e7d-4f8d-9734-5dd686f266bc-kube-api-access-64pvf\") pod \"certified-operators-47l2c\" (UID: \"c2f3ed41-9e7d-4f8d-9734-5dd686f266bc\") " pod="openshift-marketplace/certified-operators-47l2c" Nov 22 13:27:03 crc kubenswrapper[4772]: I1122 13:27:03.921224 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2f3ed41-9e7d-4f8d-9734-5dd686f266bc-catalog-content\") pod \"certified-operators-47l2c\" (UID: \"c2f3ed41-9e7d-4f8d-9734-5dd686f266bc\") " pod="openshift-marketplace/certified-operators-47l2c" Nov 22 13:27:03 crc kubenswrapper[4772]: I1122 13:27:03.921535 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2f3ed41-9e7d-4f8d-9734-5dd686f266bc-utilities\") pod \"certified-operators-47l2c\" (UID: \"c2f3ed41-9e7d-4f8d-9734-5dd686f266bc\") " pod="openshift-marketplace/certified-operators-47l2c" Nov 22 13:27:04 crc kubenswrapper[4772]: I1122 13:27:04.023863 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64pvf\" (UniqueName: \"kubernetes.io/projected/c2f3ed41-9e7d-4f8d-9734-5dd686f266bc-kube-api-access-64pvf\") pod \"certified-operators-47l2c\" (UID: \"c2f3ed41-9e7d-4f8d-9734-5dd686f266bc\") " pod="openshift-marketplace/certified-operators-47l2c" Nov 22 13:27:04 crc kubenswrapper[4772]: I1122 13:27:04.023971 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2f3ed41-9e7d-4f8d-9734-5dd686f266bc-catalog-content\") pod \"certified-operators-47l2c\" (UID: \"c2f3ed41-9e7d-4f8d-9734-5dd686f266bc\") " pod="openshift-marketplace/certified-operators-47l2c" Nov 22 13:27:04 crc kubenswrapper[4772]: I1122 13:27:04.024087 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2f3ed41-9e7d-4f8d-9734-5dd686f266bc-utilities\") pod \"certified-operators-47l2c\" (UID: \"c2f3ed41-9e7d-4f8d-9734-5dd686f266bc\") " pod="openshift-marketplace/certified-operators-47l2c" Nov 22 13:27:04 crc kubenswrapper[4772]: I1122 13:27:04.024660 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2f3ed41-9e7d-4f8d-9734-5dd686f266bc-catalog-content\") pod 
\"certified-operators-47l2c\" (UID: \"c2f3ed41-9e7d-4f8d-9734-5dd686f266bc\") " pod="openshift-marketplace/certified-operators-47l2c" Nov 22 13:27:04 crc kubenswrapper[4772]: I1122 13:27:04.024757 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2f3ed41-9e7d-4f8d-9734-5dd686f266bc-utilities\") pod \"certified-operators-47l2c\" (UID: \"c2f3ed41-9e7d-4f8d-9734-5dd686f266bc\") " pod="openshift-marketplace/certified-operators-47l2c" Nov 22 13:27:04 crc kubenswrapper[4772]: I1122 13:27:04.055897 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64pvf\" (UniqueName: \"kubernetes.io/projected/c2f3ed41-9e7d-4f8d-9734-5dd686f266bc-kube-api-access-64pvf\") pod \"certified-operators-47l2c\" (UID: \"c2f3ed41-9e7d-4f8d-9734-5dd686f266bc\") " pod="openshift-marketplace/certified-operators-47l2c" Nov 22 13:27:04 crc kubenswrapper[4772]: I1122 13:27:04.139829 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-47l2c" Nov 22 13:27:04 crc kubenswrapper[4772]: I1122 13:27:04.776413 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-47l2c"] Nov 22 13:27:04 crc kubenswrapper[4772]: I1122 13:27:04.796799 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-47l2c" event={"ID":"c2f3ed41-9e7d-4f8d-9734-5dd686f266bc","Type":"ContainerStarted","Data":"ef8a0bff321c52682b70a23ce7e8b0424da4d8d632c862de31111bab107a8501"} Nov 22 13:27:05 crc kubenswrapper[4772]: I1122 13:27:05.809445 4772 generic.go:334] "Generic (PLEG): container finished" podID="c2f3ed41-9e7d-4f8d-9734-5dd686f266bc" containerID="37141ed09dac68e3fbd7f0db6140ac63e319e11249a3b8d6ab0a4faec4ec9bae" exitCode=0 Nov 22 13:27:05 crc kubenswrapper[4772]: I1122 13:27:05.809703 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-47l2c" event={"ID":"c2f3ed41-9e7d-4f8d-9734-5dd686f266bc","Type":"ContainerDied","Data":"37141ed09dac68e3fbd7f0db6140ac63e319e11249a3b8d6ab0a4faec4ec9bae"} Nov 22 13:27:06 crc kubenswrapper[4772]: I1122 13:27:06.821445 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-47l2c" event={"ID":"c2f3ed41-9e7d-4f8d-9734-5dd686f266bc","Type":"ContainerStarted","Data":"179cd5b998bb974a8e2f080959105d095ca44f2089483ff7ae0aa001e53c5906"} Nov 22 13:27:08 crc kubenswrapper[4772]: I1122 13:27:08.851252 4772 generic.go:334] "Generic (PLEG): container finished" podID="c2f3ed41-9e7d-4f8d-9734-5dd686f266bc" containerID="179cd5b998bb974a8e2f080959105d095ca44f2089483ff7ae0aa001e53c5906" exitCode=0 Nov 22 13:27:08 crc kubenswrapper[4772]: I1122 13:27:08.853986 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-47l2c" event={"ID":"c2f3ed41-9e7d-4f8d-9734-5dd686f266bc","Type":"ContainerDied","Data":"179cd5b998bb974a8e2f080959105d095ca44f2089483ff7ae0aa001e53c5906"} Nov 22 13:27:09 crc kubenswrapper[4772]: I1122 13:27:09.865912 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-47l2c" event={"ID":"c2f3ed41-9e7d-4f8d-9734-5dd686f266bc","Type":"ContainerStarted","Data":"23fc46227d0bcb2eb0a13188c119f0d3bdc5d5acf4821ea76033fcdefb81134c"} Nov 22 13:27:09 crc kubenswrapper[4772]: I1122 13:27:09.887559 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-47l2c" podStartSLOduration=3.344578592 podStartE2EDuration="6.887527057s" podCreationTimestamp="2025-11-22 13:27:03 +0000 UTC" firstStartedPulling="2025-11-22 13:27:05.812332901 +0000 UTC m=+10146.051777395" lastFinishedPulling="2025-11-22 13:27:09.355281366 +0000 UTC m=+10149.594725860" observedRunningTime="2025-11-22 13:27:09.883914908 +0000 UTC m=+10150.123359422" watchObservedRunningTime="2025-11-22 13:27:09.887527057 +0000 UTC m=+10150.126971561" Nov 22 13:27:12 crc kubenswrapper[4772]: I1122 13:27:12.413547 4772 scope.go:117] "RemoveContainer" containerID="985c9c30ae52d69eeeca49c18b0f0c38c324832a5272046e265464e7846e64d6" Nov 22 13:27:12 crc kubenswrapper[4772]: E1122 13:27:12.414990 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 13:27:14 crc kubenswrapper[4772]: I1122 13:27:14.141007 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-47l2c" Nov 22 13:27:14 crc kubenswrapper[4772]: I1122 13:27:14.141642 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-47l2c" Nov 22 13:27:14 crc kubenswrapper[4772]: I1122 13:27:14.549700 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-47l2c" Nov 22 13:27:15 crc kubenswrapper[4772]: I1122 13:27:15.016195 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-47l2c" Nov 22 13:27:15 crc kubenswrapper[4772]: I1122 13:27:15.094602 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-47l2c"] Nov 22 13:27:16 crc kubenswrapper[4772]: I1122 13:27:16.966079 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-47l2c" podUID="c2f3ed41-9e7d-4f8d-9734-5dd686f266bc" containerName="registry-server" containerID="cri-o://23fc46227d0bcb2eb0a13188c119f0d3bdc5d5acf4821ea76033fcdefb81134c" gracePeriod=2 Nov 22 13:27:17 crc kubenswrapper[4772]: I1122 13:27:17.980277 4772 generic.go:334] "Generic (PLEG): container finished" podID="c2f3ed41-9e7d-4f8d-9734-5dd686f266bc" containerID="23fc46227d0bcb2eb0a13188c119f0d3bdc5d5acf4821ea76033fcdefb81134c" exitCode=0 Nov 22 13:27:17 crc kubenswrapper[4772]: I1122 13:27:17.980391 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-47l2c" event={"ID":"c2f3ed41-9e7d-4f8d-9734-5dd686f266bc","Type":"ContainerDied","Data":"23fc46227d0bcb2eb0a13188c119f0d3bdc5d5acf4821ea76033fcdefb81134c"} Nov 22 13:27:18 crc kubenswrapper[4772]: I1122 13:27:18.663434 4772 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-47l2c" Nov 22 13:27:18 crc kubenswrapper[4772]: I1122 13:27:18.734356 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64pvf\" (UniqueName: \"kubernetes.io/projected/c2f3ed41-9e7d-4f8d-9734-5dd686f266bc-kube-api-access-64pvf\") pod \"c2f3ed41-9e7d-4f8d-9734-5dd686f266bc\" (UID: \"c2f3ed41-9e7d-4f8d-9734-5dd686f266bc\") " Nov 22 13:27:18 crc kubenswrapper[4772]: I1122 13:27:18.734428 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2f3ed41-9e7d-4f8d-9734-5dd686f266bc-catalog-content\") pod \"c2f3ed41-9e7d-4f8d-9734-5dd686f266bc\" (UID: \"c2f3ed41-9e7d-4f8d-9734-5dd686f266bc\") " Nov 22 13:27:18 crc kubenswrapper[4772]: I1122 13:27:18.734894 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2f3ed41-9e7d-4f8d-9734-5dd686f266bc-utilities\") pod \"c2f3ed41-9e7d-4f8d-9734-5dd686f266bc\" (UID: \"c2f3ed41-9e7d-4f8d-9734-5dd686f266bc\") " Nov 22 13:27:18 crc kubenswrapper[4772]: I1122 13:27:18.736594 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2f3ed41-9e7d-4f8d-9734-5dd686f266bc-utilities" (OuterVolumeSpecName: "utilities") pod "c2f3ed41-9e7d-4f8d-9734-5dd686f266bc" (UID: "c2f3ed41-9e7d-4f8d-9734-5dd686f266bc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 13:27:18 crc kubenswrapper[4772]: I1122 13:27:18.742909 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2f3ed41-9e7d-4f8d-9734-5dd686f266bc-kube-api-access-64pvf" (OuterVolumeSpecName: "kube-api-access-64pvf") pod "c2f3ed41-9e7d-4f8d-9734-5dd686f266bc" (UID: "c2f3ed41-9e7d-4f8d-9734-5dd686f266bc"). InnerVolumeSpecName "kube-api-access-64pvf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 13:27:18 crc kubenswrapper[4772]: I1122 13:27:18.785404 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2f3ed41-9e7d-4f8d-9734-5dd686f266bc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c2f3ed41-9e7d-4f8d-9734-5dd686f266bc" (UID: "c2f3ed41-9e7d-4f8d-9734-5dd686f266bc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 13:27:18 crc kubenswrapper[4772]: I1122 13:27:18.838058 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2f3ed41-9e7d-4f8d-9734-5dd686f266bc-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 13:27:18 crc kubenswrapper[4772]: I1122 13:27:18.838097 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64pvf\" (UniqueName: \"kubernetes.io/projected/c2f3ed41-9e7d-4f8d-9734-5dd686f266bc-kube-api-access-64pvf\") on node \"crc\" DevicePath \"\"" Nov 22 13:27:18 crc kubenswrapper[4772]: I1122 13:27:18.838111 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2f3ed41-9e7d-4f8d-9734-5dd686f266bc-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 13:27:18 crc kubenswrapper[4772]: I1122 13:27:18.994853 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-47l2c" event={"ID":"c2f3ed41-9e7d-4f8d-9734-5dd686f266bc","Type":"ContainerDied","Data":"ef8a0bff321c52682b70a23ce7e8b0424da4d8d632c862de31111bab107a8501"} Nov 22 13:27:18 crc kubenswrapper[4772]: I1122 13:27:18.994923 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-47l2c" Nov 22 13:27:18 crc kubenswrapper[4772]: I1122 13:27:18.995322 4772 scope.go:117] "RemoveContainer" containerID="23fc46227d0bcb2eb0a13188c119f0d3bdc5d5acf4821ea76033fcdefb81134c" Nov 22 13:27:19 crc kubenswrapper[4772]: I1122 13:27:19.048184 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-47l2c"] Nov 22 13:27:19 crc kubenswrapper[4772]: I1122 13:27:19.052485 4772 scope.go:117] "RemoveContainer" containerID="179cd5b998bb974a8e2f080959105d095ca44f2089483ff7ae0aa001e53c5906" Nov 22 13:27:19 crc kubenswrapper[4772]: I1122 13:27:19.061872 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-47l2c"] Nov 22 13:27:19 crc kubenswrapper[4772]: I1122 13:27:19.092440 4772 scope.go:117] "RemoveContainer" containerID="37141ed09dac68e3fbd7f0db6140ac63e319e11249a3b8d6ab0a4faec4ec9bae" Nov 22 13:27:19 crc kubenswrapper[4772]: I1122 13:27:19.432879 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2f3ed41-9e7d-4f8d-9734-5dd686f266bc" path="/var/lib/kubelet/pods/c2f3ed41-9e7d-4f8d-9734-5dd686f266bc/volumes" Nov 22 13:27:23 crc kubenswrapper[4772]: I1122 13:27:23.040175 4772 generic.go:334] "Generic (PLEG): container finished" podID="3eaf5b5d-ff81-49f8-accc-2cce8543916a" containerID="96e1f97afdb45103510a6a1f9987f0aa0dd48f741b7c0d1a351a69343782df81" exitCode=0 Nov 22 13:27:23 crc kubenswrapper[4772]: I1122 13:27:23.040257 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-27gls/must-gather-4xhjb" event={"ID":"3eaf5b5d-ff81-49f8-accc-2cce8543916a","Type":"ContainerDied","Data":"96e1f97afdb45103510a6a1f9987f0aa0dd48f741b7c0d1a351a69343782df81"} Nov 22 13:27:23 crc kubenswrapper[4772]: I1122 13:27:23.041560 4772 scope.go:117] "RemoveContainer" containerID="96e1f97afdb45103510a6a1f9987f0aa0dd48f741b7c0d1a351a69343782df81" Nov 22 13:27:23 crc kubenswrapper[4772]: I1122 13:27:23.413854 4772 scope.go:117] "RemoveContainer" containerID="985c9c30ae52d69eeeca49c18b0f0c38c324832a5272046e265464e7846e64d6" Nov 22 13:27:23 crc kubenswrapper[4772]: E1122 13:27:23.414169 4772 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e" Nov 22 13:27:23 crc kubenswrapper[4772]: I1122 13:27:23.560587 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-27gls_must-gather-4xhjb_3eaf5b5d-ff81-49f8-accc-2cce8543916a/gather/0.log" Nov 22 13:27:32 crc kubenswrapper[4772]: I1122 13:27:32.280850 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-27gls/must-gather-4xhjb"] Nov 22 13:27:32 crc kubenswrapper[4772]: I1122 13:27:32.281806 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-27gls/must-gather-4xhjb" podUID="3eaf5b5d-ff81-49f8-accc-2cce8543916a" containerName="copy" containerID="cri-o://ef525f756473a5c187f385f77b43956dba968bd231a787f9e3b9b047b53cbe36" gracePeriod=2 Nov 22 13:27:32 crc kubenswrapper[4772]: I1122 13:27:32.291343 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-27gls/must-gather-4xhjb"] Nov 22 13:27:32 crc kubenswrapper[4772]: I1122 13:27:32.795254 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-27gls_must-gather-4xhjb_3eaf5b5d-ff81-49f8-accc-2cce8543916a/copy/0.log" Nov 22 13:27:32 crc kubenswrapper[4772]: I1122 13:27:32.796239 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-27gls/must-gather-4xhjb" Nov 22 13:27:32 crc kubenswrapper[4772]: I1122 13:27:32.922213 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3eaf5b5d-ff81-49f8-accc-2cce8543916a-must-gather-output\") pod \"3eaf5b5d-ff81-49f8-accc-2cce8543916a\" (UID: \"3eaf5b5d-ff81-49f8-accc-2cce8543916a\") " Nov 22 13:27:32 crc kubenswrapper[4772]: I1122 13:27:32.922548 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46gvl\" (UniqueName: \"kubernetes.io/projected/3eaf5b5d-ff81-49f8-accc-2cce8543916a-kube-api-access-46gvl\") pod \"3eaf5b5d-ff81-49f8-accc-2cce8543916a\" (UID: \"3eaf5b5d-ff81-49f8-accc-2cce8543916a\") " Nov 22 13:27:32 crc kubenswrapper[4772]: I1122 13:27:32.930618 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3eaf5b5d-ff81-49f8-accc-2cce8543916a-kube-api-access-46gvl" (OuterVolumeSpecName: "kube-api-access-46gvl") pod "3eaf5b5d-ff81-49f8-accc-2cce8543916a" (UID: "3eaf5b5d-ff81-49f8-accc-2cce8543916a"). InnerVolumeSpecName "kube-api-access-46gvl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 13:27:33 crc kubenswrapper[4772]: I1122 13:27:33.028280 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46gvl\" (UniqueName: \"kubernetes.io/projected/3eaf5b5d-ff81-49f8-accc-2cce8543916a-kube-api-access-46gvl\") on node \"crc\" DevicePath \"\"" Nov 22 13:27:33 crc kubenswrapper[4772]: I1122 13:27:33.152056 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3eaf5b5d-ff81-49f8-accc-2cce8543916a-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "3eaf5b5d-ff81-49f8-accc-2cce8543916a" (UID: "3eaf5b5d-ff81-49f8-accc-2cce8543916a"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 13:27:33 crc kubenswrapper[4772]: I1122 13:27:33.216197 4772 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-27gls_must-gather-4xhjb_3eaf5b5d-ff81-49f8-accc-2cce8543916a/copy/0.log" Nov 22 13:27:33 crc kubenswrapper[4772]: I1122 13:27:33.216696 4772 generic.go:334] "Generic (PLEG): container finished" podID="3eaf5b5d-ff81-49f8-accc-2cce8543916a" containerID="ef525f756473a5c187f385f77b43956dba968bd231a787f9e3b9b047b53cbe36" exitCode=143 Nov 22 13:27:33 crc kubenswrapper[4772]: I1122 13:27:33.216747 4772 scope.go:117] "RemoveContainer" containerID="ef525f756473a5c187f385f77b43956dba968bd231a787f9e3b9b047b53cbe36" Nov 22 13:27:33 crc kubenswrapper[4772]: I1122 13:27:33.216927 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-27gls/must-gather-4xhjb" Nov 22 13:27:33 crc kubenswrapper[4772]: I1122 13:27:33.233711 4772 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3eaf5b5d-ff81-49f8-accc-2cce8543916a-must-gather-output\") on node \"crc\" DevicePath \"\"" Nov 22 13:27:33 crc kubenswrapper[4772]: I1122 13:27:33.255532 4772 scope.go:117] "RemoveContainer" containerID="96e1f97afdb45103510a6a1f9987f0aa0dd48f741b7c0d1a351a69343782df81" Nov 22 13:27:33 crc kubenswrapper[4772]: I1122 13:27:33.313853 4772 scope.go:117] "RemoveContainer" containerID="ef525f756473a5c187f385f77b43956dba968bd231a787f9e3b9b047b53cbe36" Nov 22 13:27:33 crc kubenswrapper[4772]: E1122 13:27:33.314743 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef525f756473a5c187f385f77b43956dba968bd231a787f9e3b9b047b53cbe36\": container with ID starting with ef525f756473a5c187f385f77b43956dba968bd231a787f9e3b9b047b53cbe36 not found: ID does not exist" containerID="ef525f756473a5c187f385f77b43956dba968bd231a787f9e3b9b047b53cbe36" Nov 22 13:27:33 crc kubenswrapper[4772]: I1122 13:27:33.314822 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef525f756473a5c187f385f77b43956dba968bd231a787f9e3b9b047b53cbe36"} err="failed to get container status \"ef525f756473a5c187f385f77b43956dba968bd231a787f9e3b9b047b53cbe36\": rpc error: code = NotFound desc = could not find container \"ef525f756473a5c187f385f77b43956dba968bd231a787f9e3b9b047b53cbe36\": container with ID starting with ef525f756473a5c187f385f77b43956dba968bd231a787f9e3b9b047b53cbe36 not found: ID does not exist" Nov 22 13:27:33 crc kubenswrapper[4772]: I1122 13:27:33.314873 4772 scope.go:117] "RemoveContainer" containerID="96e1f97afdb45103510a6a1f9987f0aa0dd48f741b7c0d1a351a69343782df81" Nov 22 13:27:33 crc 
kubenswrapper[4772]: E1122 13:27:33.315383 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96e1f97afdb45103510a6a1f9987f0aa0dd48f741b7c0d1a351a69343782df81\": container with ID starting with 96e1f97afdb45103510a6a1f9987f0aa0dd48f741b7c0d1a351a69343782df81 not found: ID does not exist" containerID="96e1f97afdb45103510a6a1f9987f0aa0dd48f741b7c0d1a351a69343782df81"
Nov 22 13:27:33 crc kubenswrapper[4772]: I1122 13:27:33.315422 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96e1f97afdb45103510a6a1f9987f0aa0dd48f741b7c0d1a351a69343782df81"} err="failed to get container status \"96e1f97afdb45103510a6a1f9987f0aa0dd48f741b7c0d1a351a69343782df81\": rpc error: code = NotFound desc = could not find container \"96e1f97afdb45103510a6a1f9987f0aa0dd48f741b7c0d1a351a69343782df81\": container with ID starting with 96e1f97afdb45103510a6a1f9987f0aa0dd48f741b7c0d1a351a69343782df81 not found: ID does not exist"
Nov 22 13:27:33 crc kubenswrapper[4772]: I1122 13:27:33.427514 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3eaf5b5d-ff81-49f8-accc-2cce8543916a" path="/var/lib/kubelet/pods/3eaf5b5d-ff81-49f8-accc-2cce8543916a/volumes"
Nov 22 13:27:34 crc kubenswrapper[4772]: I1122 13:27:34.414174 4772 scope.go:117] "RemoveContainer" containerID="985c9c30ae52d69eeeca49c18b0f0c38c324832a5272046e265464e7846e64d6"
Nov 22 13:27:34 crc kubenswrapper[4772]: E1122 13:27:34.414874 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e"
Nov 22 13:27:46 crc kubenswrapper[4772]: I1122 13:27:46.656297 4772 scope.go:117] "RemoveContainer" containerID="985c9c30ae52d69eeeca49c18b0f0c38c324832a5272046e265464e7846e64d6"
Nov 22 13:27:46 crc kubenswrapper[4772]: E1122 13:27:46.657825 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e"
Nov 22 13:28:01 crc kubenswrapper[4772]: I1122 13:28:01.428939 4772 scope.go:117] "RemoveContainer" containerID="985c9c30ae52d69eeeca49c18b0f0c38c324832a5272046e265464e7846e64d6"
Nov 22 13:28:01 crc kubenswrapper[4772]: E1122 13:28:01.430722 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e"
Nov 22 13:28:13 crc kubenswrapper[4772]: I1122 13:28:13.414536 4772 scope.go:117] "RemoveContainer" containerID="985c9c30ae52d69eeeca49c18b0f0c38c324832a5272046e265464e7846e64d6"
Nov 22 13:28:13 crc kubenswrapper[4772]: E1122 13:28:13.415670 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e"
Nov 22 13:28:27 crc kubenswrapper[4772]: I1122 13:28:27.414749 4772 scope.go:117] "RemoveContainer" containerID="985c9c30ae52d69eeeca49c18b0f0c38c324832a5272046e265464e7846e64d6"
Nov 22 13:28:27 crc kubenswrapper[4772]: E1122 13:28:27.416189 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e"
Nov 22 13:28:38 crc kubenswrapper[4772]: I1122 13:28:38.414244 4772 scope.go:117] "RemoveContainer" containerID="985c9c30ae52d69eeeca49c18b0f0c38c324832a5272046e265464e7846e64d6"
Nov 22 13:28:38 crc kubenswrapper[4772]: E1122 13:28:38.415085 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e"
Nov 22 13:28:50 crc kubenswrapper[4772]: I1122 13:28:50.414088 4772 scope.go:117] "RemoveContainer" containerID="985c9c30ae52d69eeeca49c18b0f0c38c324832a5272046e265464e7846e64d6"
Nov 22 13:28:50 crc kubenswrapper[4772]: E1122 13:28:50.415061 4772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wwshd_openshift-machine-config-operator(2386c238-461f-4956-940f-ac3c26eb052e)\"" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" podUID="2386c238-461f-4956-940f-ac3c26eb052e"
Nov 22 13:29:04 crc kubenswrapper[4772]: I1122 13:29:04.414339 4772 scope.go:117] "RemoveContainer" containerID="985c9c30ae52d69eeeca49c18b0f0c38c324832a5272046e265464e7846e64d6"
Nov 22 13:29:05 crc kubenswrapper[4772]: I1122 13:29:05.398287 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wwshd" event={"ID":"2386c238-461f-4956-940f-ac3c26eb052e","Type":"ContainerStarted","Data":"45805eed102fc906cf199c6c93eff62445b78509b1c65c037060d596d95e895a"}
Nov 22 13:29:57 crc kubenswrapper[4772]: I1122 13:29:57.300274 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6d2kj"]
Nov 22 13:29:57 crc kubenswrapper[4772]: E1122 13:29:57.302085 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2f3ed41-9e7d-4f8d-9734-5dd686f266bc" containerName="extract-utilities"
Nov 22 13:29:57 crc kubenswrapper[4772]: I1122 13:29:57.302113 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2f3ed41-9e7d-4f8d-9734-5dd686f266bc" containerName="extract-utilities"
Nov 22 13:29:57 crc kubenswrapper[4772]: E1122 13:29:57.302163 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2f3ed41-9e7d-4f8d-9734-5dd686f266bc" containerName="registry-server"
Nov 22 13:29:57 crc kubenswrapper[4772]: I1122 13:29:57.302177 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2f3ed41-9e7d-4f8d-9734-5dd686f266bc" containerName="registry-server"
Nov 22 13:29:57 crc kubenswrapper[4772]: E1122 13:29:57.302204 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eaf5b5d-ff81-49f8-accc-2cce8543916a" containerName="copy"
Nov 22 13:29:57 crc kubenswrapper[4772]: I1122 13:29:57.302218 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eaf5b5d-ff81-49f8-accc-2cce8543916a" containerName="copy"
Nov 22 13:29:57 crc kubenswrapper[4772]: E1122 13:29:57.302250 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2f3ed41-9e7d-4f8d-9734-5dd686f266bc" containerName="extract-content"
Nov 22 13:29:57 crc kubenswrapper[4772]: I1122 13:29:57.302264 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2f3ed41-9e7d-4f8d-9734-5dd686f266bc" containerName="extract-content"
Nov 22 13:29:57 crc kubenswrapper[4772]: E1122 13:29:57.302303 4772 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eaf5b5d-ff81-49f8-accc-2cce8543916a" containerName="gather"
Nov 22 13:29:57 crc kubenswrapper[4772]: I1122 13:29:57.302316 4772 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eaf5b5d-ff81-49f8-accc-2cce8543916a" containerName="gather"
Nov 22 13:29:57 crc kubenswrapper[4772]: I1122 13:29:57.302689 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="3eaf5b5d-ff81-49f8-accc-2cce8543916a" containerName="gather"
Nov 22 13:29:57 crc kubenswrapper[4772]: I1122 13:29:57.302729 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2f3ed41-9e7d-4f8d-9734-5dd686f266bc" containerName="registry-server"
Nov 22 13:29:57 crc kubenswrapper[4772]: I1122 13:29:57.302770 4772 memory_manager.go:354] "RemoveStaleState removing state" podUID="3eaf5b5d-ff81-49f8-accc-2cce8543916a" containerName="copy"
Nov 22 13:29:57 crc kubenswrapper[4772]: I1122 13:29:57.305927 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6d2kj"
Nov 22 13:29:57 crc kubenswrapper[4772]: I1122 13:29:57.321218 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6d2kj"]
Nov 22 13:29:57 crc kubenswrapper[4772]: I1122 13:29:57.411174 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6wvd\" (UniqueName: \"kubernetes.io/projected/8924984b-5395-4414-8d3f-b20ba20d927b-kube-api-access-v6wvd\") pod \"community-operators-6d2kj\" (UID: \"8924984b-5395-4414-8d3f-b20ba20d927b\") " pod="openshift-marketplace/community-operators-6d2kj"
Nov 22 13:29:57 crc kubenswrapper[4772]: I1122 13:29:57.412072 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8924984b-5395-4414-8d3f-b20ba20d927b-utilities\") pod \"community-operators-6d2kj\" (UID: \"8924984b-5395-4414-8d3f-b20ba20d927b\") " pod="openshift-marketplace/community-operators-6d2kj"
Nov 22 13:29:57 crc kubenswrapper[4772]: I1122 13:29:57.412164 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8924984b-5395-4414-8d3f-b20ba20d927b-catalog-content\") pod \"community-operators-6d2kj\" (UID: \"8924984b-5395-4414-8d3f-b20ba20d927b\") " pod="openshift-marketplace/community-operators-6d2kj"
Nov 22 13:29:57 crc kubenswrapper[4772]: I1122 13:29:57.514714 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6wvd\" (UniqueName: \"kubernetes.io/projected/8924984b-5395-4414-8d3f-b20ba20d927b-kube-api-access-v6wvd\") pod \"community-operators-6d2kj\" (UID: \"8924984b-5395-4414-8d3f-b20ba20d927b\") " pod="openshift-marketplace/community-operators-6d2kj"
Nov 22 13:29:57 crc kubenswrapper[4772]: I1122 13:29:57.514858 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8924984b-5395-4414-8d3f-b20ba20d927b-utilities\") pod \"community-operators-6d2kj\" (UID: \"8924984b-5395-4414-8d3f-b20ba20d927b\") " pod="openshift-marketplace/community-operators-6d2kj"
Nov 22 13:29:57 crc kubenswrapper[4772]: I1122 13:29:57.514913 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8924984b-5395-4414-8d3f-b20ba20d927b-catalog-content\") pod \"community-operators-6d2kj\" (UID: \"8924984b-5395-4414-8d3f-b20ba20d927b\") " pod="openshift-marketplace/community-operators-6d2kj"
Nov 22 13:29:57 crc kubenswrapper[4772]: I1122 13:29:57.515554 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8924984b-5395-4414-8d3f-b20ba20d927b-catalog-content\") pod \"community-operators-6d2kj\" (UID: \"8924984b-5395-4414-8d3f-b20ba20d927b\") " pod="openshift-marketplace/community-operators-6d2kj"
Nov 22 13:29:57 crc kubenswrapper[4772]: I1122 13:29:57.515903 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8924984b-5395-4414-8d3f-b20ba20d927b-utilities\") pod \"community-operators-6d2kj\" (UID: \"8924984b-5395-4414-8d3f-b20ba20d927b\") " pod="openshift-marketplace/community-operators-6d2kj"
Nov 22 13:29:58 crc kubenswrapper[4772]: I1122 13:29:58.358499 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6wvd\" (UniqueName: \"kubernetes.io/projected/8924984b-5395-4414-8d3f-b20ba20d927b-kube-api-access-v6wvd\") pod \"community-operators-6d2kj\" (UID: \"8924984b-5395-4414-8d3f-b20ba20d927b\") " pod="openshift-marketplace/community-operators-6d2kj"
Nov 22 13:29:58 crc kubenswrapper[4772]: I1122 13:29:58.549088 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6d2kj"
Nov 22 13:29:59 crc kubenswrapper[4772]: I1122 13:29:59.077507 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6d2kj"]
Nov 22 13:30:00 crc kubenswrapper[4772]: I1122 13:30:00.028476 4772 generic.go:334] "Generic (PLEG): container finished" podID="8924984b-5395-4414-8d3f-b20ba20d927b" containerID="cd1a0bd29b3c609fdc08db87c1ab0610205e465420a59c88b002313ae42a31d8" exitCode=0
Nov 22 13:30:00 crc kubenswrapper[4772]: I1122 13:30:00.028550 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6d2kj" event={"ID":"8924984b-5395-4414-8d3f-b20ba20d927b","Type":"ContainerDied","Data":"cd1a0bd29b3c609fdc08db87c1ab0610205e465420a59c88b002313ae42a31d8"}
Nov 22 13:30:00 crc kubenswrapper[4772]: I1122 13:30:00.028910 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6d2kj" event={"ID":"8924984b-5395-4414-8d3f-b20ba20d927b","Type":"ContainerStarted","Data":"10d2c6fee826531a0715f1aa3d34693bfabfb7b531546657e23a85897ded03cc"}
Nov 22 13:30:00 crc kubenswrapper[4772]: I1122 13:30:00.031629 4772 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 22 13:30:00 crc kubenswrapper[4772]: I1122 13:30:00.189692 4772 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396970-52wdd"]
Nov 22 13:30:00 crc kubenswrapper[4772]: I1122 13:30:00.194391 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396970-52wdd"
Nov 22 13:30:00 crc kubenswrapper[4772]: I1122 13:30:00.198068 4772 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Nov 22 13:30:00 crc kubenswrapper[4772]: I1122 13:30:00.198343 4772 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Nov 22 13:30:00 crc kubenswrapper[4772]: I1122 13:30:00.234122 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396970-52wdd"]
Nov 22 13:30:00 crc kubenswrapper[4772]: I1122 13:30:00.297892 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d2838867-12b3-49cc-b3af-0ed597977f73-secret-volume\") pod \"collect-profiles-29396970-52wdd\" (UID: \"d2838867-12b3-49cc-b3af-0ed597977f73\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396970-52wdd"
Nov 22 13:30:00 crc kubenswrapper[4772]: I1122 13:30:00.298083 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d2838867-12b3-49cc-b3af-0ed597977f73-config-volume\") pod \"collect-profiles-29396970-52wdd\" (UID: \"d2838867-12b3-49cc-b3af-0ed597977f73\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396970-52wdd"
Nov 22 13:30:00 crc kubenswrapper[4772]: I1122 13:30:00.298314 4772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m46xk\" (UniqueName: \"kubernetes.io/projected/d2838867-12b3-49cc-b3af-0ed597977f73-kube-api-access-m46xk\") pod \"collect-profiles-29396970-52wdd\" (UID: \"d2838867-12b3-49cc-b3af-0ed597977f73\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396970-52wdd"
Nov 22 13:30:00 crc kubenswrapper[4772]: I1122 13:30:00.401753 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d2838867-12b3-49cc-b3af-0ed597977f73-config-volume\") pod \"collect-profiles-29396970-52wdd\" (UID: \"d2838867-12b3-49cc-b3af-0ed597977f73\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396970-52wdd"
Nov 22 13:30:00 crc kubenswrapper[4772]: I1122 13:30:00.402162 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m46xk\" (UniqueName: \"kubernetes.io/projected/d2838867-12b3-49cc-b3af-0ed597977f73-kube-api-access-m46xk\") pod \"collect-profiles-29396970-52wdd\" (UID: \"d2838867-12b3-49cc-b3af-0ed597977f73\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396970-52wdd"
Nov 22 13:30:00 crc kubenswrapper[4772]: I1122 13:30:00.402351 4772 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d2838867-12b3-49cc-b3af-0ed597977f73-secret-volume\") pod \"collect-profiles-29396970-52wdd\" (UID: \"d2838867-12b3-49cc-b3af-0ed597977f73\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396970-52wdd"
Nov 22 13:30:00 crc kubenswrapper[4772]: I1122 13:30:00.403112 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d2838867-12b3-49cc-b3af-0ed597977f73-config-volume\") pod \"collect-profiles-29396970-52wdd\" (UID: \"d2838867-12b3-49cc-b3af-0ed597977f73\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396970-52wdd"
Nov 22 13:30:00 crc kubenswrapper[4772]: I1122 13:30:00.751734 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d2838867-12b3-49cc-b3af-0ed597977f73-secret-volume\") pod \"collect-profiles-29396970-52wdd\" (UID: \"d2838867-12b3-49cc-b3af-0ed597977f73\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396970-52wdd"
Nov 22 13:30:00 crc kubenswrapper[4772]: I1122 13:30:00.753319 4772 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m46xk\" (UniqueName: \"kubernetes.io/projected/d2838867-12b3-49cc-b3af-0ed597977f73-kube-api-access-m46xk\") pod \"collect-profiles-29396970-52wdd\" (UID: \"d2838867-12b3-49cc-b3af-0ed597977f73\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396970-52wdd"
Nov 22 13:30:00 crc kubenswrapper[4772]: I1122 13:30:00.860234 4772 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396970-52wdd"
Nov 22 13:30:01 crc kubenswrapper[4772]: I1122 13:30:01.374632 4772 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396970-52wdd"]
Nov 22 13:30:02 crc kubenswrapper[4772]: I1122 13:30:02.066428 4772 generic.go:334] "Generic (PLEG): container finished" podID="d2838867-12b3-49cc-b3af-0ed597977f73" containerID="f03764be8ad27f2f3c5d1f8e205367596601fb55328c8ffdaf15eaba6aa5dbad" exitCode=0
Nov 22 13:30:02 crc kubenswrapper[4772]: I1122 13:30:02.068684 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396970-52wdd" event={"ID":"d2838867-12b3-49cc-b3af-0ed597977f73","Type":"ContainerDied","Data":"f03764be8ad27f2f3c5d1f8e205367596601fb55328c8ffdaf15eaba6aa5dbad"}
Nov 22 13:30:02 crc kubenswrapper[4772]: I1122 13:30:02.068850 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396970-52wdd" event={"ID":"d2838867-12b3-49cc-b3af-0ed597977f73","Type":"ContainerStarted","Data":"15b2080ec2bfc03ad4a5121e4a96d570f591d9d6b433f92737729326c53c46e5"}
Nov 22 13:30:02 crc kubenswrapper[4772]: I1122 13:30:02.073719 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6d2kj" event={"ID":"8924984b-5395-4414-8d3f-b20ba20d927b","Type":"ContainerStarted","Data":"c183bf883d7459f7ac2df799ffe34284565de6d60e539782e4e9af0b749910ca"}
Nov 22 13:30:03 crc kubenswrapper[4772]: I1122 13:30:03.086403 4772 generic.go:334] "Generic (PLEG): container finished" podID="8924984b-5395-4414-8d3f-b20ba20d927b" containerID="c183bf883d7459f7ac2df799ffe34284565de6d60e539782e4e9af0b749910ca" exitCode=0
Nov 22 13:30:03 crc kubenswrapper[4772]: I1122 13:30:03.086458 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6d2kj" event={"ID":"8924984b-5395-4414-8d3f-b20ba20d927b","Type":"ContainerDied","Data":"c183bf883d7459f7ac2df799ffe34284565de6d60e539782e4e9af0b749910ca"}
Nov 22 13:30:03 crc kubenswrapper[4772]: I1122 13:30:03.502385 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396970-52wdd"
Nov 22 13:30:03 crc kubenswrapper[4772]: I1122 13:30:03.583728 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d2838867-12b3-49cc-b3af-0ed597977f73-config-volume\") pod \"d2838867-12b3-49cc-b3af-0ed597977f73\" (UID: \"d2838867-12b3-49cc-b3af-0ed597977f73\") "
Nov 22 13:30:03 crc kubenswrapper[4772]: I1122 13:30:03.584249 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m46xk\" (UniqueName: \"kubernetes.io/projected/d2838867-12b3-49cc-b3af-0ed597977f73-kube-api-access-m46xk\") pod \"d2838867-12b3-49cc-b3af-0ed597977f73\" (UID: \"d2838867-12b3-49cc-b3af-0ed597977f73\") "
Nov 22 13:30:03 crc kubenswrapper[4772]: I1122 13:30:03.584416 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d2838867-12b3-49cc-b3af-0ed597977f73-secret-volume\") pod \"d2838867-12b3-49cc-b3af-0ed597977f73\" (UID: \"d2838867-12b3-49cc-b3af-0ed597977f73\") "
Nov 22 13:30:03 crc kubenswrapper[4772]: I1122 13:30:03.584689 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2838867-12b3-49cc-b3af-0ed597977f73-config-volume" (OuterVolumeSpecName: "config-volume") pod "d2838867-12b3-49cc-b3af-0ed597977f73" (UID: "d2838867-12b3-49cc-b3af-0ed597977f73"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 22 13:30:03 crc kubenswrapper[4772]: I1122 13:30:03.585205 4772 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d2838867-12b3-49cc-b3af-0ed597977f73-config-volume\") on node \"crc\" DevicePath \"\""
Nov 22 13:30:03 crc kubenswrapper[4772]: I1122 13:30:03.592803 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2838867-12b3-49cc-b3af-0ed597977f73-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d2838867-12b3-49cc-b3af-0ed597977f73" (UID: "d2838867-12b3-49cc-b3af-0ed597977f73"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 22 13:30:03 crc kubenswrapper[4772]: I1122 13:30:03.594335 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2838867-12b3-49cc-b3af-0ed597977f73-kube-api-access-m46xk" (OuterVolumeSpecName: "kube-api-access-m46xk") pod "d2838867-12b3-49cc-b3af-0ed597977f73" (UID: "d2838867-12b3-49cc-b3af-0ed597977f73"). InnerVolumeSpecName "kube-api-access-m46xk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 22 13:30:03 crc kubenswrapper[4772]: I1122 13:30:03.689559 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m46xk\" (UniqueName: \"kubernetes.io/projected/d2838867-12b3-49cc-b3af-0ed597977f73-kube-api-access-m46xk\") on node \"crc\" DevicePath \"\""
Nov 22 13:30:03 crc kubenswrapper[4772]: I1122 13:30:03.689852 4772 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d2838867-12b3-49cc-b3af-0ed597977f73-secret-volume\") on node \"crc\" DevicePath \"\""
Nov 22 13:30:04 crc kubenswrapper[4772]: I1122 13:30:04.098706 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396970-52wdd"
Nov 22 13:30:04 crc kubenswrapper[4772]: I1122 13:30:04.098683 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396970-52wdd" event={"ID":"d2838867-12b3-49cc-b3af-0ed597977f73","Type":"ContainerDied","Data":"15b2080ec2bfc03ad4a5121e4a96d570f591d9d6b433f92737729326c53c46e5"}
Nov 22 13:30:04 crc kubenswrapper[4772]: I1122 13:30:04.099318 4772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15b2080ec2bfc03ad4a5121e4a96d570f591d9d6b433f92737729326c53c46e5"
Nov 22 13:30:04 crc kubenswrapper[4772]: I1122 13:30:04.101032 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6d2kj" event={"ID":"8924984b-5395-4414-8d3f-b20ba20d927b","Type":"ContainerStarted","Data":"ec48a61903c78a1e3b83034bc999b07ced54bf8415cf353bfd308119b143e663"}
Nov 22 13:30:04 crc kubenswrapper[4772]: I1122 13:30:04.140392 4772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6d2kj" podStartSLOduration=3.6870583139999997 podStartE2EDuration="7.140362905s" podCreationTimestamp="2025-11-22 13:29:57 +0000 UTC" firstStartedPulling="2025-11-22 13:30:00.031393089 +0000 UTC m=+10320.270837583" lastFinishedPulling="2025-11-22 13:30:03.48469768 +0000 UTC m=+10323.724142174" observedRunningTime="2025-11-22 13:30:04.131576209 +0000 UTC m=+10324.371020723" watchObservedRunningTime="2025-11-22 13:30:04.140362905 +0000 UTC m=+10324.379807399"
Nov 22 13:30:04 crc kubenswrapper[4772]: I1122 13:30:04.583107 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396925-s7j6w"]
Nov 22 13:30:04 crc kubenswrapper[4772]: I1122 13:30:04.596804 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396925-s7j6w"]
Nov 22 13:30:05 crc kubenswrapper[4772]: I1122 13:30:05.429681 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a46bb21b-0c5a-418b-8b05-244477414c43" path="/var/lib/kubelet/pods/a46bb21b-0c5a-418b-8b05-244477414c43/volumes"
Nov 22 13:30:08 crc kubenswrapper[4772]: I1122 13:30:08.549836 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6d2kj"
Nov 22 13:30:08 crc kubenswrapper[4772]: I1122 13:30:08.550599 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6d2kj"
Nov 22 13:30:08 crc kubenswrapper[4772]: I1122 13:30:08.607630 4772 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6d2kj"
Nov 22 13:30:09 crc kubenswrapper[4772]: I1122 13:30:09.198837 4772 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6d2kj"
Nov 22 13:30:09 crc kubenswrapper[4772]: I1122 13:30:09.252594 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6d2kj"]
Nov 22 13:30:11 crc kubenswrapper[4772]: I1122 13:30:11.178538 4772 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6d2kj" podUID="8924984b-5395-4414-8d3f-b20ba20d927b" containerName="registry-server" containerID="cri-o://ec48a61903c78a1e3b83034bc999b07ced54bf8415cf353bfd308119b143e663" gracePeriod=2
Nov 22 13:30:11 crc kubenswrapper[4772]: I1122 13:30:11.809185 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6d2kj"
Nov 22 13:30:11 crc kubenswrapper[4772]: I1122 13:30:11.993125 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8924984b-5395-4414-8d3f-b20ba20d927b-catalog-content\") pod \"8924984b-5395-4414-8d3f-b20ba20d927b\" (UID: \"8924984b-5395-4414-8d3f-b20ba20d927b\") "
Nov 22 13:30:11 crc kubenswrapper[4772]: I1122 13:30:11.993326 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8924984b-5395-4414-8d3f-b20ba20d927b-utilities\") pod \"8924984b-5395-4414-8d3f-b20ba20d927b\" (UID: \"8924984b-5395-4414-8d3f-b20ba20d927b\") "
Nov 22 13:30:11 crc kubenswrapper[4772]: I1122 13:30:11.993473 4772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6wvd\" (UniqueName: \"kubernetes.io/projected/8924984b-5395-4414-8d3f-b20ba20d927b-kube-api-access-v6wvd\") pod \"8924984b-5395-4414-8d3f-b20ba20d927b\" (UID: \"8924984b-5395-4414-8d3f-b20ba20d927b\") "
Nov 22 13:30:11 crc kubenswrapper[4772]: I1122 13:30:11.994421 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8924984b-5395-4414-8d3f-b20ba20d927b-utilities" (OuterVolumeSpecName: "utilities") pod "8924984b-5395-4414-8d3f-b20ba20d927b" (UID: "8924984b-5395-4414-8d3f-b20ba20d927b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 22 13:30:12 crc kubenswrapper[4772]: I1122 13:30:12.042232 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8924984b-5395-4414-8d3f-b20ba20d927b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8924984b-5395-4414-8d3f-b20ba20d927b" (UID: "8924984b-5395-4414-8d3f-b20ba20d927b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 22 13:30:12 crc kubenswrapper[4772]: I1122 13:30:12.095793 4772 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8924984b-5395-4414-8d3f-b20ba20d927b-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 22 13:30:12 crc kubenswrapper[4772]: I1122 13:30:12.095832 4772 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8924984b-5395-4414-8d3f-b20ba20d927b-utilities\") on node \"crc\" DevicePath \"\""
Nov 22 13:30:12 crc kubenswrapper[4772]: I1122 13:30:12.199075 4772 generic.go:334] "Generic (PLEG): container finished" podID="8924984b-5395-4414-8d3f-b20ba20d927b" containerID="ec48a61903c78a1e3b83034bc999b07ced54bf8415cf353bfd308119b143e663" exitCode=0
Nov 22 13:30:12 crc kubenswrapper[4772]: I1122 13:30:12.199131 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6d2kj" event={"ID":"8924984b-5395-4414-8d3f-b20ba20d927b","Type":"ContainerDied","Data":"ec48a61903c78a1e3b83034bc999b07ced54bf8415cf353bfd308119b143e663"}
Nov 22 13:30:12 crc kubenswrapper[4772]: I1122 13:30:12.199190 4772 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6d2kj" event={"ID":"8924984b-5395-4414-8d3f-b20ba20d927b","Type":"ContainerDied","Data":"10d2c6fee826531a0715f1aa3d34693bfabfb7b531546657e23a85897ded03cc"}
Nov 22 13:30:12 crc kubenswrapper[4772]: I1122 13:30:12.199204 4772 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6d2kj"
Nov 22 13:30:12 crc kubenswrapper[4772]: I1122 13:30:12.199211 4772 scope.go:117] "RemoveContainer" containerID="ec48a61903c78a1e3b83034bc999b07ced54bf8415cf353bfd308119b143e663"
Nov 22 13:30:12 crc kubenswrapper[4772]: I1122 13:30:12.229536 4772 scope.go:117] "RemoveContainer" containerID="c183bf883d7459f7ac2df799ffe34284565de6d60e539782e4e9af0b749910ca"
Nov 22 13:30:12 crc kubenswrapper[4772]: I1122 13:30:12.542863 4772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8924984b-5395-4414-8d3f-b20ba20d927b-kube-api-access-v6wvd" (OuterVolumeSpecName: "kube-api-access-v6wvd") pod "8924984b-5395-4414-8d3f-b20ba20d927b" (UID: "8924984b-5395-4414-8d3f-b20ba20d927b"). InnerVolumeSpecName "kube-api-access-v6wvd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 22 13:30:12 crc kubenswrapper[4772]: I1122 13:30:12.557872 4772 scope.go:117] "RemoveContainer" containerID="cd1a0bd29b3c609fdc08db87c1ab0610205e465420a59c88b002313ae42a31d8"
Nov 22 13:30:12 crc kubenswrapper[4772]: I1122 13:30:12.605513 4772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6wvd\" (UniqueName: \"kubernetes.io/projected/8924984b-5395-4414-8d3f-b20ba20d927b-kube-api-access-v6wvd\") on node \"crc\" DevicePath \"\""
Nov 22 13:30:12 crc kubenswrapper[4772]: I1122 13:30:12.652410 4772 scope.go:117] "RemoveContainer" containerID="ec48a61903c78a1e3b83034bc999b07ced54bf8415cf353bfd308119b143e663"
Nov 22 13:30:12 crc kubenswrapper[4772]: E1122 13:30:12.652946 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec48a61903c78a1e3b83034bc999b07ced54bf8415cf353bfd308119b143e663\": container with ID starting with ec48a61903c78a1e3b83034bc999b07ced54bf8415cf353bfd308119b143e663 not found: ID does not exist" containerID="ec48a61903c78a1e3b83034bc999b07ced54bf8415cf353bfd308119b143e663"
Nov 22 13:30:12 crc kubenswrapper[4772]: I1122 13:30:12.652990 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec48a61903c78a1e3b83034bc999b07ced54bf8415cf353bfd308119b143e663"} err="failed to get container status \"ec48a61903c78a1e3b83034bc999b07ced54bf8415cf353bfd308119b143e663\": rpc error: code = NotFound desc = could not find container \"ec48a61903c78a1e3b83034bc999b07ced54bf8415cf353bfd308119b143e663\": container with ID starting with ec48a61903c78a1e3b83034bc999b07ced54bf8415cf353bfd308119b143e663 not found: ID does not exist"
Nov 22 13:30:12 crc kubenswrapper[4772]: I1122 13:30:12.653016 4772 scope.go:117] "RemoveContainer" containerID="c183bf883d7459f7ac2df799ffe34284565de6d60e539782e4e9af0b749910ca"
Nov 22 13:30:12 crc kubenswrapper[4772]: E1122 13:30:12.653622 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c183bf883d7459f7ac2df799ffe34284565de6d60e539782e4e9af0b749910ca\": container with ID starting with c183bf883d7459f7ac2df799ffe34284565de6d60e539782e4e9af0b749910ca not found: ID does not exist" containerID="c183bf883d7459f7ac2df799ffe34284565de6d60e539782e4e9af0b749910ca"
Nov 22 13:30:12 crc kubenswrapper[4772]: I1122 13:30:12.653674 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c183bf883d7459f7ac2df799ffe34284565de6d60e539782e4e9af0b749910ca"} err="failed to get container status \"c183bf883d7459f7ac2df799ffe34284565de6d60e539782e4e9af0b749910ca\": rpc error: code = NotFound desc = could not find container \"c183bf883d7459f7ac2df799ffe34284565de6d60e539782e4e9af0b749910ca\": container with ID starting with c183bf883d7459f7ac2df799ffe34284565de6d60e539782e4e9af0b749910ca not found: ID does not exist"
Nov 22 13:30:12 crc kubenswrapper[4772]: I1122 13:30:12.653742 4772 scope.go:117] "RemoveContainer" containerID="cd1a0bd29b3c609fdc08db87c1ab0610205e465420a59c88b002313ae42a31d8"
Nov 22 13:30:12 crc kubenswrapper[4772]: E1122 13:30:12.654406 4772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd1a0bd29b3c609fdc08db87c1ab0610205e465420a59c88b002313ae42a31d8\": container with ID starting with cd1a0bd29b3c609fdc08db87c1ab0610205e465420a59c88b002313ae42a31d8 not found: ID does not exist" containerID="cd1a0bd29b3c609fdc08db87c1ab0610205e465420a59c88b002313ae42a31d8"
Nov 22 13:30:12 crc kubenswrapper[4772]: I1122 13:30:12.654503 4772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd1a0bd29b3c609fdc08db87c1ab0610205e465420a59c88b002313ae42a31d8"} err="failed to get container status \"cd1a0bd29b3c609fdc08db87c1ab0610205e465420a59c88b002313ae42a31d8\": rpc error: code = NotFound desc = could not find container \"cd1a0bd29b3c609fdc08db87c1ab0610205e465420a59c88b002313ae42a31d8\": container with ID starting with cd1a0bd29b3c609fdc08db87c1ab0610205e465420a59c88b002313ae42a31d8 not found: ID does not exist"
Nov 22 13:30:12 crc kubenswrapper[4772]: I1122 13:30:12.845978 4772 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6d2kj"]
Nov 22 13:30:12 crc kubenswrapper[4772]: I1122 13:30:12.855334 4772 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6d2kj"]
Nov 22 13:30:13 crc kubenswrapper[4772]: I1122 13:30:13.429476 4772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8924984b-5395-4414-8d3f-b20ba20d927b" path="/var/lib/kubelet/pods/8924984b-5395-4414-8d3f-b20ba20d927b/volumes"
Nov 22 13:30:26 crc kubenswrapper[4772]: I1122 13:30:26.979643 4772 scope.go:117] "RemoveContainer" containerID="99efa35b3d173aaa59d64b4599bc9dd2574a02e59a7da98f80381f429badfe38"